Feed aggregator

NPL leading Government-backed metrology project to accelerate UK’s role in compound semiconductor innovation

Semiconductor today - Wed, 11/26/2025 - 11:07
The National Physical Laboratory (NPL) in Teddington, UK has been appointed by the Department for Science, Innovation and Technology (DSIT) to lead a £1.2m Government-funded project to establish new metrology capabilities that will strengthen the UK’s semiconductor innovation infrastructure. The strategic investment aims to accelerate the UK’s role in developing next-generation semiconductor materials and processes, helping to attract private investment and boost economic growth in the sector...

Keysight Hosts AI Thought Leadership Conclave in Bengaluru

ELE Times - Wed, 11/26/2025 - 09:31

 Keysight Technologies, Inc. announced the AI Thought Leadership Conclave, a premier forum bringing together technology leaders, researchers, and industry experts to discuss the transformative role of artificial intelligence (AI) in shaping digital infrastructure, wireless technologies, and connectivity.

Taking place on December 9, 2025, in Bengaluru, the conclave will showcase how AI is redefining the way networks, cloud, and edge systems are designed, optimized, and scaled for a hyperconnected world. Through keynote sessions, expert panels, and interactive discussions, participants will gain insights into:

  • The role of AI in shaping data center architecture, orchestration, and resource optimization
  • Emerging use cases across industries, from healthcare and manufacturing to mobility and entertainment
  • Ethical, regulatory, and security considerations in large-scale AI infrastructure
  • Collaborative innovation models and global standardization efforts

Additional sessions will focus on AI-driven debugging and optimization, data ingestion and software integration for scalable AI, and building secure digital foundations across cloud and edge environments.

“AI is rapidly becoming the backbone of digital transformation, and the ability to integrate intelligence into every layer of infrastructure will define the next decade of innovation,” said Sudhir Singh, Country Manager, Keysight India. “Through the AI Thought Leadership Conclave, Keysight is facilitating an exchange of ideas, showcasing AI-centered advancements, and shaping the connected future.”

In addition to focused discussions and technology presentations, the conclave will host an AI Technology Application Demo Fair, featuring live demonstrations of advanced solutions developed by Keysight and its technology partners. Attendees will also have ample opportunities to connect with industry leaders, participate in business and customer meetings, and engage in discussions with representatives from industry standard bodies.

The post Keysight Hosts AI Thought Leadership Conclave in Bengaluru appeared first on ELE Times.

Finally, I think I have managed to make a ±15V help rail for op amps etc.

Reddit:Electronics - Wed, 11/26/2025 - 07:49
Finally, I think I have managed to make a ±15V help rail for op amps etc.

I used the TL431 programmable-zener reference with an emitter follower for extra stability. This takes my ±38V rails and makes a ±15V help rail to power op amps etc. I think I should be able to draw 500mA–1A from the help rail! One step closer to finishing my linear dual-rail build: ±0–35V, 2.2A per rail, 4.4A total.
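For reference on how the set point comes about (generic resistor names below, not the actual parts in this build): the TL431 servos its cathode so that its reference pin sits at about 2.495 V, so with a divider R1/R2 from cathode to reference to ground the programmed voltage is roughly

Vout = 2.495 V · ( 1 + R1 / R2 )

and R1/R2 ≈ 5 lands near 15 V. Depending on whether the divider senses before or after the emitter follower, the finished rail may sit about one VBE (≈0.7 V) below that set point.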

submitted by /u/Whyjustwhydothat
[link] [comments]

A High-Voltage DC Motor Speed Modulation Control Project

Reddit:Electronics - Wed, 11/26/2025 - 05:59
A High-Voltage DC Motor Speed Modulation Control Project

A year ago, I worked at a workshop that specialized in rewinding electric motors and transformers. We frequently received motors and transformers for maintenance and rewinding, but sometimes we received DC motors that typically operated with a 400 V DC stator and a 200 V DC armature.

To run and test those motors, our power setup was quite cumbersome. We would connect 400 V AC to a large motor-generator set, and the output from that would power the DC motor's stator. For the armature, we took a single-phase 220 V AC line, passed it through a bridge rectifier, and then controlled the voltage using a Variac before finally feeding it to the armature.

This entire process was bulky. It inspired me to design a power circuit capable of electronically controlling the armature voltage, which is essential for modulating the motor's speed. Unfortunately, I never got the opportunity to implement the circuit. The owner of the shop, who was also my electrical machines professor at university, was an elderly gentleman who passed away, and the project stalled.

Recently, I've been experimenting with the circuit in simulation and found it can be used for several interesting applications:

  • High-Voltage (HV) Switch
  • Linear Regulator
  • Step-Down (Buck) Converter
  • Step-Up (Boost) Converter (A topic from a previous post)
  • I'm also confident it could be used to build an audio-driven signal modulator (weird, but possible).

My biggest worry was the power that the IGBTs would have to sustain. If we assume the voltage drop across the IGBT (VCE) is around 100 V (the point of maximum power dissipation), the IGBT would need to dissipate about 450 W of power.
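Working backward from those numbers (my arithmetic, not figures from the original post), 450 W at VCE ≈ 100 V implies an armature current of about 4.5 A at that operating point:

P = VCE · IC ≈ 100 V · 4.5 A ≈ 450 W

Paralleling N matched, well-heatsinked IGBTs would ideally reduce the per-device dissipation to P/N, provided they share current reasonably well.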

I was highly concerned about whether a single IGBT could handle this continuous load without failing. I was planning to mount the transistors onto a large aluminum heat sink block and place several IGBTs in parallel to distribute the power load among them.

Anyway, I wanted to share this project with you. Here is the diagram for circuitJS:

$ 1 0.000005 10.20027730826997 49 5 43 5e-11 t 320 192 320 144 1 -1 195.0523838167561 -0.7316787632313719 100 default f 336 240 336 192 40 1.5 0.02 w 176 144 304 144 0 R 176 144 96 144 0 0 40 200 0 0 0.5 t 256 416 176 416 0 1 0 0.47551466520158947 100 default t 256 416 336 416 0 1 -5.6784882330384585 0.47551466464471304 100 default w 256 416 256 384 0 w 176 384 256 384 0 w 176 400 176 384 0 r 336 432 336 512 0 100 w 336 512 176 512 0 g 176 512 112 512 0 0 w 336 144 352 144 0 r 432 432 432 512 0 100 w 432 512 336 512 0 w 432 432 432 336 0 w 432 144 560 144 0 r 560 144 560 512 0 22 w 560 512 432 512 0 p 688 144 688 512 3 0 0 0 w 560 144 688 144 0 w 688 512 560 512 0 r 432 144 432 240 0 1000 r 176 432 176 512 0 100 r 176 240 176 144 0 10000000 w 176 288 176 240 0 w 176 320 176 384 0 w 176 240 336 240 0 w 336 240 336 400 0 w 352 192 352 144 0 w 432 240 432 272 0 w 432 144 352 144 0 w 432 272 384 272 0 w 432 336 432 320 0 t 384 304 432 304 0 1 0 0.6259000454766762 100 default w 432 272 432 288 0 w 384 272 384 304 0 t 384 304 176 304 0 1 -5.2027187673209605 0.47576946571749795 100 default o 19 32 0 4098 320 0.1 0 1 38 22 F1 0 1000 100000 -1 Resistance

submitted by /u/Inevitable-Round9995
[link] [comments]

Here is an interesting ITS1A thyratron tube clock I made. These are very interesting display tubes that contain seven tiny thyratrons, one for each display segment. You can see the electron pathways changing inside each tube as the digits change. More...

Reddit:Electronics - Wed, 11/26/2025 - 00:48

The ITS1A display tube is a bit of a mystery, since it is poorly documented and little is known about its intended application. It was manufactured by the Soviet Union at the height of the Cold War, when LEDs and VFDs were readily available. So why was it developed? These tubes may have been developed for SW radio applications, since their internal 'multiplexing' capability yields little or no EMI to interfere with weak-signal reception. Or it may be because, unlike nearly every other neon display, which requires control signals in the hundreds-of-volts range to activate, the ITS1A can be connected directly to a microcontroller and run with TTL-level 5 V signals. This is possible because the ITS1A contains seven tiny thyratrons, one for each segment, which perform the level shifting to control the 300 V signals needed to ionize the gas inside the tube.

The ITS1A is also unique in that it is a neon tube that does not glow amber like all other cold-cathode tubes; instead, each of the tube's display segments is a phosphor-coated cup that illuminates green by electron spatter from the control thyratrons. In operation, and when viewed from the side, this beautiful little tube actually presents in three colors: pink/purple from the neon ionization, a little bit of blue from the electron paths inside the thyratrons, and, from the front, a beautiful cyan/green glow from the phosphor-coated segments.

submitted by /u/Legend_of_the_Wind
[link] [comments]

Tektronix 516a beauty

Reddit:Electronics - Tue, 11/25/2025 - 22:14
Tektronix 516a beauty

Found this beauty for 30 bucks! The owner was a Thomson engineer 30 years ago. There's a bit of dust inside, and I'm planning to restore it! It's huge: around 20 kg!

submitted by /u/tx30840
[link] [comments]

Infused concrete yields greatly improved structural supercapacitor

EDN Network - Tue, 11/25/2025 - 22:01

A few years ago, a team at MIT researched and published a paper on using concrete as an energy-storage supercapacitor (MIT engineers create an energy-storing supercapacitor from ancient materials), also called an ultracapacitor, a device that stores energy in electric fields rather than through electrochemical reactions. Now, the same group has developed a version with ten times the storage per unit volume of that earlier one, using concrete infused with various materials and electrolytes such as (but not limited to) nano-carbon black.

Concrete is the world’s most common building material and has many virtues, including basic strength, ruggedness, and longevity, and few restrictions on final shape and form. The idea of also being able to use it as an almost-free energy storage system is very attractive.

By combining cement, water, ultra-fine carbon black (with nanoscale particles), and electrolytes, their electron-conducting carbon concrete (EC3, pronounced “e-c-cubed”) creates a conductive “nanonetwork” inside concrete that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy, Figure 1.

Figure 1 As with most batteries, the schematic diagram and physical appearance are simple; it's the details that are the challenge. Source: Massachusetts Institute of Technology

This greatly improved energy density was made possible by their deeper understanding of how the nanocarbon black network inside EC3 functions and interacts with electrolytes, as determined using some sophisticated instrumentation. By using focused ion beams for the sequential removal of thin layers of the EC3 material, followed by high-resolution imaging of each slice with a scanning electron microscope (a technique called FIB-SEM tomography), the joint EC³ Hub and MIT Concrete Sustainability Hub team was able to reconstruct the conductive nanonetwork at the highest resolution yet. The analysis showed that the network is essentially a fractal-like “web” that surrounds EC3 pores, which is what allows the electrolyte to infiltrate and for current to flow through the system. 

A cubic meter of this version of EC3—about the size of a refrigerator—can store over 2 kilowatt-hours of energy, which is enough to power an actual modest-sized refrigerator for a day. Via extrapolation (always the tricky aspect of these investigations), they say that 45 cubic meters of EC3 with an energy density of 0.22 kWh/m3 (a typical house-sized foundation) would have enough capacity to store about 10 kilowatt-hours of energy, the average daily electricity usage for a household, Figure 2.
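As a quick check of the extrapolation as stated, the figures are self-consistent: 45 m3 · 0.22 kWh/m3 ≈ 9.9 kWh, or roughly the 10 kWh quoted.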

Figure 2 These are just a few of the many performance graphs that the team developed. Source: Massachusetts Institute of Technology

They achieved the highest performance with organic electrolytes, especially those that combined quaternary ammonium salts—found in everyday products like disinfectants—with acetonitrile, a clear, conductive liquid often used in industry, Figure 3.

Figure 3 They also identified needed properties for the electrolyte and investigated many possibilities for this critical component. Source: Massachusetts Institute of Technology

If this all sounds only like speculation from a small-scale benchtop lab project, it is, and it isn’t. Much of the work was done in cooperation with the American Concrete Institute, a research and promotional organization that studies all aspects of concrete, including formulation, application, standardized tests, long-term performance, and more.

While the MIT team, perhaps not surprisingly, is positioning this development as the next great thing—and it certainly gets a lot of attention in the mainstream media due to its tantalizing keywords of “concrete” and “battery”—there are genuine long-term factors to evaluate related to scaling up to a foundation-sized mass:

  • Does the final form of the concrete matter, such as a large cube versus flat walls?
  • What are the partial and large-scale failure modes?
  • What are the long-term effects of weather exposure, as this material is concrete (which is well understood) but with an additive?
  • What happens when an EC3 foundation degrades or fails—do you have to lift the house and replace the foundation?
  • What are the short- and long-term influences on performance, and how does the formulation affect that performance?

The performance and properties of the many existing concrete formulations have been tested in the lab and in the field over decades, and “improvements” are not done casually, especially in consideration of the end application.

Since demonstrating this concrete battery in structural mode lacks visual impact, the MIT team built a more attention-grabbing demonstration battery of stacked cells to provide 12 V of power. They used this to operate a 12-V computer fan and a 5-V USB output (via a buck regulator) for a handheld gaming console, Figure 4.

Figure 4 A 12-V concrete battery powering a small fan and game console provides a more dramatic, attention-grabbing visual. Source: Massachusetts Institute of Technology

The work is detailed in their paper “High energy density carbon–cement supercapacitors for architectural energy storage,” published in Proceedings of the National Academy of Sciences (PNAS). It’s behind a paywall, but there is a posted student thesis, “Scaling Carbon-Cement Supercapacitors for Energy Storage Use-Cases.” Finally, there’s also a very informative 18-slide, 21-minute PowerPoint presentation on YouTube (with audio), “Carbon-cement supercapacitors: A disruptive technology for renewable energy storage,” developed by the MIT team for the ACI.

What’s your view? Is this a truly disruptive energy-storage development? Or will the realities of scaling up in physical volume and long-term performance, as well as “replacement issues,” make this yet another interesting advance that falls short in the real world?

Check back in five to ten years to find out. If nothing else, this research reminds us that there is potential for progress in power and energy beyond the other approaches we hear so much about.

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.

Related Content

The post Infused concrete yields greatly improved structural supercapacitor appeared first on EDN.

Experimenting with a step-up converter and high voltage

Reddit:Electronics - Tue, 11/25/2025 - 19:28
Experimenting with a step-up converter and high voltage

Hey everyone!

I've been diving into some high-voltage (HV) power electronics experiments recently. I wanted to share a project I've been tinkering with: a custom step-up converter.

We all know that step-up (Boost) circuits are excellent for boosting low-voltage inputs (like 12V), but I had a different idea: what if I use the Boost topology on an already high DC voltage?

My goal is to take a 100V DC input (or ∼167V DC if I rectify and filter a 120V AC line) and significantly boost it.
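Two back-of-the-envelope checks on those input figures (my own arithmetic, not from the original post): a rectified and filtered 120 V AC line peaks near 120 V · √2 ≈ 170 V, and a couple of diode drops in the bridge bring that down close to the ~167 V quoted. For an ideal boost converter in continuous conduction the output follows

Vout = Vin / ( 1 - D )

so even modest duty cycles D push a 100-170 V input well into the HV range.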

I'm currently deep in the simulation phase and plan to build a physical prototype soon. I'm looking for feedback from anyone experienced with HV DC/DC conversion on my approach.

here is the diagram for circuitJS:

txt $ 1 0.000005 3.046768661252054 50 5 43 5e-11 w 752 0 752 32 0 w 752 -32 752 -128 0 f 928 -16 752 -16 32 1.5 0.02 w 752 32 752 48 0 w 704 32 752 32 0 w 704 64 704 32 0 w 752 192 816 192 0 w 752 80 752 144 0 r 752 144 752 192 0 100 t 704 64 752 64 0 1 0 0 100 default g 560 192 528 192 0 0 w 752 192 688 192 0 r 816 -64 816 192 0 22 w 560 80 560 96 0 w 560 48 560 32 0 t 704 64 560 64 0 1 0 0 100 default w 560 192 688 192 0 r 688 144 688 192 0 100 r 560 144 560 192 0 100 w 560 96 560 112 0 w 624 96 560 96 0 w 624 128 624 96 0 t 624 128 560 128 0 1 0 0 100 default t 624 128 688 128 0 1 0 0 100 default r 560 -64 560 32 0 10000000 w 704 -64 704 -144 0 R 560 -64 512 -64 0 0 40 100 0 0 0.5 f 688 32 688 -64 40 1.5 0.02 l 560 -64 672 -64 0 0.1 0 0 d 672 -64 672 -128 2 default c 672 -128 560 -128 4 0.000009999999999999999 0.001 0.001 0.1 g 560 -128 528 -128 0 0 w 672 -128 752 -128 0 w 816 -128 816 -64 0 w 688 32 688 112 0 w 688 32 560 32 0 g 704 -144 704 -176 0 0 w 816 -128 752 -128 0 w 1088 0 1104 0 0 w 1040 0 1088 0 0 w 1088 -160 1088 0 0 r 1280 -160 1088 -160 0 3300 w 1280 -32 1280 -160 0 w 1280 -32 1232 -32 0 w 1232 -128 1232 -64 0 w 1168 -128 1232 -128 0 165 1104 -96 1120 -96 6 0 R 1040 -128 1008 -128 0 0 40 5 0 0 0.5 w 1040 -128 1168 -128 0 r 1040 0 1040 -128 0 1000000 g 1040 96 1040 112 0 0 c 1040 32 1040 96 4 3e-7 0.001 0.001 0 w 1040 32 1104 32 0 w 1040 0 1040 32 0 w 1280 -32 1280 192 0 w 1280 192 928 192 0 w 928 192 928 -16 0 w 1040 96 1200 96 0 w 1200 96 1200 64 0

submitted by /u/Inevitable-Round9995
[link] [comments]

🚀 NAZK (National Agency on Corruption Prevention) invites students to deliver the lesson "Integrity Begins with Me" for pupils in grades 6–9

Новини - Tue, 11/25/2025 - 16:45
🚀 NAZK (National Agency on Corruption Prevention) invites students to deliver the lesson "Integrity Begins with Me" for pupils in grades 6–9

🚀 NAZK invites students to deliver the lesson "Integrity Begins with Me" for pupils in grades 6–9 as part of Integrity Week 2025.

This is a chance to show that honesty is a strength, not a weakness, and that a single lesson can set off a wave of change across the country.

During the lesson, you will:

Seeing real production first-hand

Новини - Tue, 11/25/2025 - 16:20
Seeing real production first-hand

Factory visits are an important part of training future specialists: students become familiar with how enterprises are organized, the working conditions and specifics of each site, the particulars of the production process, and innovative technologies and equipment, and they talk with skilled professionals.

A simpler circuit for characterizing JFETs

EDN Network - Tue, 11/25/2025 - 15:00

The circuit presented by Cor Van Rij for characterizing JFETs is a clever solution. Noteworthy is the use of a five-pin test socket wired to accommodate all of the possible JFET pinout arrangements.

This idea uses that socket arrangement in a simpler circuit. The only requirement is the availability of two digital multimeters (DMMs), which add the benefit of having a hold function to the measurements. In addition to accuracy, the other goals in developing this tester were:

  • It must be simple enough to allow construction without a custom printed circuit board, as only one tester was required.
  • Use components on hand as much as possible.
  • Accommodate both N- and P-channel devices while using a single voltage supply.
  • Use a wide range of supply voltages.
  • Incorporate a current limit with LED indication when the limit is reached.
The circuit

The resulting circuit is shown in Figure 1.

Figure 1 Characterizing JFETs using a socket arrangement. The fixture requires the use of two DMMs.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Q1, Q2, R1, R3, R5, D2, and TEST pushbutton S3 comprise the simple current limit circuit (R4 is a parasitic Q-killer).
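For readers unfamiliar with this style of limiter, the threshold of a typical two-transistor current limit is set by the base-emitter turn-on voltage of the sensing transistor developed across a sense resistor (which resistor plays that role here is shown in Figure 1; the expression below is a generic sketch, not a statement about specific part values):

Ilimit ≈ VBE / Rsense ≈ 0.65 V / Rsense

Under that assumption, the 52.2-mA limit seen in Figure 3 would correspond to a sense resistance of roughly 12 Ω.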

S3 supplies power to S1, the polarity reversal switch, and S2 selects the measurement. J1 and J2 are banana jacks for the DMM set to read the drain current. J3 and J4 are banana jacks for the DMM set to read Vgs(off). 

Note the polarities of the DMM jacks. They are arranged so that the drain current and Vgs(off) read correctly for the type of JFET being tested—positive IDSS and negative Vgs(off) for N-channel devices and negative IDSS and positive Vgs(off) for P-channel devices.

R2 and D1 indicate the incoming power, while R6 provides a minimum load for the current limiter. Resistor R8 isolates the DUT from the effects of DMM-lead parasitics, and R9 provides a path to earth ground for static dissipation.

Testing JFETs

Figure 2 shows the tester setup measuring Vgs(off) and IDSS for an MPF102, an N-channel device. The specified values for this device are a Vgs(off) of -8 V maximum and an IDSS of 2 to 20 mA. Note that the hold function of the meters was used to maintain the measurements for the photograph. The supply for this implementation is a nominal 12-volt “wall wart” salvaged from a defunct router.

Figure 2 The test of an MPF102 N-channel JFET using the JFET characterization circuit.

Figure 3 shows the current limit in action by setting the N-JFET/P-JFET switch to P-JFET for the N-channel MPF102. The limit is 52.2 mA, and the I-LIMIT LED is brightly lit. 

Figure 3 The current limit test that sets the N-JFET/P-JFET switch to P-JFET for the N-channel MPF102.

John L. Waugaman’s love of electronics began when he built a crystal set at age 10 with his father’s help. Earning a BSEE from Carnegie-Mellon University led to a 30-year career in industry designing product inspection equipment, along with four patents. After being RIF’d, he spent the next 20 years as a consultant specializing in analog design for industrial and military projects. Now he’s retired, sort of, but still designing. It’s in his blood.

Related Content

The post A simpler circuit for characterizing JFETs appeared first on EDN.

Gold-plated PWM-control of linear and switching regulators

EDN Network - Tue, 11/25/2025 - 15:00
“Gold-plated” without the gold plating

Alright, I admit that the title is a bit over the top. So, what do I mean by it? I mean that:

(1) The application of PWM control to a regulator does not significantly degrade the inherent DC accuracy of its output voltage,

(2) Any ability of the regulator’s output voltage to reach below that of its internal reference is supported, and

(3) This is accomplished without the addition of a new reference voltage.

Refer to Figure 1.

Figure 1 This circuit meets the requirements of “Gold-Plated PWM control” as stated above.

Wow the engineering world with your unique design: Design Ideas Submission Guide

How it works

The values of components Cin, Cout, Cf, and L1 are obtained from the regulator’s datasheet. (Note that if the regulator is linear, L1 is replaced with a short.)

The datasheet typically specifies a preferred value of Rg, a single resistor between ground and the feedback pin FB. 

Taking the DC voltage VFB of the regulator’s FB pin into account, R3 is selected so that U2a supplies a Vsup voltage greater than or equal to 3.0 V. C7 and R3 ensure that the composite is non-oscillatory, even with decoupling capacitor C6 in place. C6 is required for the proper operation of the SN74AC04 IC, U1.

The following equations govern the circuit’s performance, where Vmax is the desired maximum regulator output voltage:

R3   = ( Vsup / VFB – 1 ) · 10k
Rg1 = Rg / ( 1 – ( VFB / Vsup ) / ( 1 – VFB/Vmax ))
Rg2 = Rg · Rg1 / ( Rg1 – Rg )
Rf = Rg · ( Vmax / VFB – 1 )

They enable the regulator output to reach zero volts (if it is capable of such) when the PWM inputs are at their highest possible duty cycle. 
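As an illustration only, here is a small Python helper that evaluates the four equations above. The inputs shown (VFB = 0.6 V, Vsup = 3.0 V, Vmax = 5.0 V, and a datasheet Rg of 10 kΩ) are hypothetical example values, not the components of Figure 1:

def pwm_control_resistors(VFB=0.6, Vsup=3.0, Vmax=5.0, Rg=10e3):
    """Evaluate the design equations above; voltages in volts, resistances in ohms."""
    R3  = (Vsup / VFB - 1) * 10e3                     # sets U2a gain so that Vsup >= 3.0 V
    Rg1 = Rg / (1 - (VFB / Vsup) / (1 - VFB / Vmax))  # upper portion of the split Rg
    Rg2 = Rg * Rg1 / (Rg1 - Rg)                       # lower portion (Rg1 || Rg2 = Rg)
    Rf  = Rg * (Vmax / VFB - 1)                       # feedback resistor for Vmax
    return R3, Rg1, Rg2, Rf

print(pwm_control_resistors())   # approximately (40000, 12941, 44000, 73333)

Note that, per these equations, Rg1 and Rg2 in parallel equal the datasheet’s Rg.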

U1 is part of two separate PWMs whose composite output can provide up to 16 bits of resolution. Ra and Rb + Rc establish a factor of 256 for the relative significance of the PWMs.

If eight bits or less of resolution is required, Rb and Rc, and the least significant PWM, can be eliminated, and all six inverters can be paralleled.

The PWMs’ minimum frequency requirements shown are important because when those are met, the subsequent filter passes a peak-to-peak ripple less than 2⁻¹⁶ of the composite PWM’s full-scale range. This filter consists of Ra, Rb + Rc, R5 to R7, C3 to C5, and U2b.

Errors

The most stringent need to minimize errors comes from regulators with low and highly accurate reference voltages. Let’s consider 600 mV and 0.5% from which we arrive at a 3-mV output error maximum inherent to the regulator. (This is overly restrictive, of course, because it assumes zero-tolerance resistors to set the output voltage. If 0.1% resistors were considered, we’d add 0.2% to arrive at 0.7% and more than 4 mV.)

Broadly, errors come from imperfect resistor ratios and component tolerances, op-amp input offset voltages and bias currents, and non-linear SN74AC04 output resistances. The 0.1% resistors are reasonably cheap.

Resistor ratios

If nominally equal in value, such resistors, forming a ratio, contribute a worst-case error of ± 0.1%. For those of different values, the worst is ± 0.2%. Important ratios involve:

  • Rg1, Rg2, and Rf
  • R3 and R4
  • Ra and Rb + Rc

Various Rf, Rg ratios are inherent to regulator operation.

The Rg1, Rg2; R3, R4; and Ra, Rb + Rc pairs have been introduced as requirements for PWM control.

The Ra / (Rb + Rc) error is ± 0.2%, but since this involves a ratio of 8-bit PWMs at most, it incurs less than 1 least significant bit (LSbit) of error.
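One way to see this (my arithmetic, using the 16-bit composite scale described above): the least significant PWM spans at most 1/256 of full scale, i.e. 2^16 / 256 = 256 LSbits, so a ±0.2% error in its weighting contributes at most 0.002 · 256 ≈ 0.5 LSbit.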

The Rg1, Rg2 pair introduces an error of ±0.2 % at most.

The R3, R4 pair is responsible for a worst-case ±0.2 %. All are less than the 0.5% mentioned earlier.

Temperature drift

The OPA2376 has a worst-case input offset voltage of 25 µV over temperature. Even if U2a has a gain of 5 to convert FB’s 600 mV to 3 V, this becomes only 125 µV.

Bias current is 10-pA maximum at 25°C, but we are given a typical value only at 125°C of 250 pA.

Of the two op-amps, U2b sees the higher input resistance. But its current would have to exceed 6 nA to produce even 1-mV of offset, so these op-amps are blameless.

To determine U1’s output resistance, its spec shows that its minimum logic high voltage for a 3-V supply is 2.46 V under a 12-mA load. This means that the maximum for each inverter is 45 Ω, which gives us 9 Ω for five in parallel. (The maximum voltage drop is lower for a logic low 12 mA, resulting in a lower resistance, but we don’t know how much lower, so we are forced to worst-case it at a ridiculous 0 V!)
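Spelling out that arithmetic from the numbers above: Rout ≤ ( 3.0 V - 2.46 V ) / 12 mA = 45 Ω per inverter, and 45 Ω / 5 = 9 Ω for five inverters in parallel.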

Counting C3 as a short under dynamic conditions, the five inverters see a 35-kΩ load, leading to a less than 0.03% error.

Wrapping up

The regulator and its output range might need an even higher voltage, but the input voltage IN has been required to exceed 3.2 V. This is because U1 is spec’d to swing to no further than 80 mV from its supply rails under loads of 2 kΩ or more. (I’ve added some margin, but it’s needed only for the case of maximum output voltage.)

You should specify Vmax to be slightly higher than needed so that U2b needn’t swing all the way to ground. This means that a small negative supply for U2 is unnecessary. IN must also be less than 5.5 V to avoid exceeding U2’s spec. If a larger value of IN is required by the regulator, an inexpensive LDO can provide an appropriate U2 supply.

I grant that this design might be overkill, but I wanted to see what might be required to meet the goals I set. But who knows, someone might find it or some aspect of it useful.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content

The post Gold-plated PWM-control of linear and switching regulators appeared first on EDN.

Government approves 17 projects worth Rs. 7,172 crore under ECMS

ELE Times - Tue, 11/25/2025 - 13:16

The Ministry of Electronics and IT announced the clearance of 17 additional proposals, worth Rs. 7,172 crore, under the Electronics Components Manufacturing Scheme (ECMS). The projects are expected to generate production worth Rs 65,111 crore and 11,808 direct jobs across the country, according to the ministry.

The approved projects are spread across 9 states, from Jammu and Kashmir to Tamil Nadu, reflecting the government’s commitment to ‘balanced regional growth’ and the creation of high-skill jobs beyond metropolitan clusters.

This approval focuses on developing key technologies used in IT hardware, wearables, telecom, EVs, industrial electronics, defence, medical electronics, and renewable energy, such as oscillators, enclosures, camera modules, connectors, optical transceivers (SFP), and multi-layer PCBs.

Minister of Electronics and IT Ashwini Vaishnaw highlighted that the next phase of value-chain integration is now unfolding, from devices to components and sub-assemblies, which will help ensure that India’s electronics sector reaches $500 billion in manufacturing value by 2030–31.

Alongside the project approvals, the Minister also launched the 1st Generation Energy-Efficient Edge Silicon Chip (SoC), ARKA-GKT1, jointly developed by Cyient Semiconductors Pvt Ltd and Azimuth AI. The Platform-on-a-Chip SoC integrates advanced computing cores, hardware accelerators, power-efficient design, and secure sensing into a single chip, delivering up to 10x better performance while reducing cost and complexity. It supports smart utilities, cities, batteries, and industrial IoT, showcasing India’s shift toward a product-driven, high-performance semiconductor ecosystem.

The post Government approves 17 projects worth Rs. 7,172 crore under ECMS appeared first on ELE Times.

South Wales-based compound semiconductor cluster celebrates tenth anniversary

Semiconductor today - Tue, 11/25/2025 - 13:08
On 13 November, over 100 industry leaders, researchers, policymakers and educators gathered in Cardiff to celebrate the 10-year anniversary of CSconnected, the world’s first compound semiconductor cluster. This included reflections from founding figures, recognition of ‘Cluster Champions’, and a keynote address from Jack Sargeant MS, Wales’ Minister for Culture, Skills and Social Partnership, whose portfolio aligns with the cluster’s focus on talent pipelines and skills development...

Cornell upgrades lab with MOCVD system for next-gen nitride materials

Semiconductor today - Tue, 11/25/2025 - 11:22
A laboratory upgrade at Cornell University will help to forge new directions for nitride semiconductors by expanding their capabilities to support technologies such as quantum computers and next-generation radio-frequency and power devices...

BD Soft strengthens cybersecurity offerings for BFSI and Fintech businesses with advanced solutions

ELE Times - Tue, 11/25/2025 - 11:06

BD Software Distribution Pvt. Ltd. has expanded its Managed Detection and Response (MDR) and Data Loss Prevention (DLP) solutions for the BFSI and Fintech sectors amid rising cyber risks fuelled by digital banking growth and cloud-led transformation. The strengthened suite addresses vulnerabilities linked to sophisticated phishing and ransomware attacks, insecure third-party integrations, and increasing exposure of APIs and financial data across distributed environments.

BD Soft’s cybersecurity portfolio now includes solutions from leading global and Indian innovators: Axidian, headquartered in Dubai (UAE), for identity governance and privileged access management; FileCloud, based in Austin (USA), for hyper-secure EFSS (Enterprise File Sync & Share) capabilities; GTB Technologies, headquartered in California (USA), for advanced Data Loss Prevention (DLP); and Hunto.ai, based in Mumbai (India), for external threat intelligence and monitoring. Together, these solutions enable financial institutions to strengthen data governance, prevent fraud, meet regulatory obligations, and build resilient security frameworks that safeguard customer trust.

The surge in sector-wide threats is driven by the industry’s dependence on digital platforms and sensitive financial data. Over 60% of cyberattacks in India now target BFSI and Fintech, and cloud-related security incidents have risen by more than 45% in the last two years. With rapid mobile banking adoption expanding the attack surface, risks such as unauthorized access, data leakage, credential compromise, and insider-driven breaches continue to intensify, making continuous, intelligence-driven cyber defence essential for financial institutions today.

Commenting on the development, Mr. Zakir Hussain Rangwala, CEO, BD Software Distribution Pvt. Ltd., said, “As financial brands accelerate digital adoption, robust encryption, zero-trust architecture, and continuous monitoring are no longer optional, they are foundational to trust and financial stability. Our focus is enabling institutions to go not just digital, but safely digital.”

The post BD Soft strengthens cybersecurity offerings for BFSI and Fintech businesses with advanced solutions appeared first on ELE Times.

The role of AI processor architecture in power consumption efficiency

EDN Network - Tue, 11/25/2025 - 10:27

From 2005 to 2017—the pre-AI era—the electricity flowing into U.S. data centers remained remarkably stable. This was true despite the explosive demand for cloud-based services. Social networks such as Facebook, Netflix, real-time collaboration tools, online commerce, and the mobile-app ecosystem all grew at unprecedented rates. Yet continual improvements in server efficiency kept total energy consumption essentially flat.

In 2017, AI deeply altered this course. The escalating adoption of deep learning triggered a shift in data-center design. Facilities began filling with power-hungry accelerators, mainly GPUs, for their ability to crank through massive tensor operations at extraordinary speed. As AI training and inference workloads proliferated across industries, energy demand surged.

By 2023, U.S. data centers had doubled their electricity consumption relative to a decade earlier with an estimated 4.4% of all U.S. electricity now feeding data-center racks, cooling systems, and power-delivery infrastructure.

According to the Berkeley Lab report, data-center load growth has tripled over the past decade and is projected to double or triple again by 2028. The report estimates that AI workloads alone could by that time consume as much electricity annually as 22% of all U.S. households—a scale comparable to powering tens of millions of homes.

Total U.S. data-center electricity consumption is projected to increase roughly ten-fold from 2014 through 2028. Source: 2024 U.S. Data Center Energy Usage Report, Berkeley Lab

This trajectory raises a question: What makes modern AI processors so energy-intensive? Whether rooted in semiconductor physics, parallel-compute structures, memory-bandwidth bottlenecks, or data-movement inefficiencies, understanding the causes becomes a priority. Analyzing the architectural foundations of today’s AI hardware may lead to corrective strategies to ensure that computational progress does not come at the expense of unsustainable energy demand.

What’s driving energy consumption in AI processors

Unlike traditional software systems—where instructions execute in a largely sequential fashion, one clock cycle and one control-flow branch at a time—large language models (LLMs) demand massively parallel processing of multi-dimensional tensors. Matrices many gigabytes in size must be fetched from memory, multiplied, accumulated, and written back at staggering rates. In state-of-the-art models, this process encompasses hundreds of billions to trillions of parameters, each of which must be evaluated repeatedly during training.

Training models at this scale requires feeding enormous datasets through racks of GPU servers running continuously for weeks or even months. The computational intensity is extreme, and so is the energy footprint. For example, the training run for OpenAI’s GPT-4 is estimated to have consumed around 50 gigawatt-hours of electricity. That’s roughly equivalent to powering the entire city of San Francisco for three days.

This immense front-loaded investment in energy and capital defines the economic model of leading-edge AI. Model developers must absorb stunning training costs upfront, hoping to recover them later through the widespread use of the inferred model.

Profitability hinges on the efficiency of inference, the phase during which users interact with the model to generate answers, summaries, images, or decisions. “For any company to make money out of a model—that only happens on inference,” notes Esha Choukse, a Microsoft Azure researcher who investigates methods for improving the efficiency of large-scale AI inference systems. The quote appeared in the May 20, 2025, MIT Technology Review article “We did the math on AI’s energy footprint. Here’s the story you haven’t heard.”

Indeed, experts across the industry consistently emphasize that inference, not training, is becoming the dominant driver of AI’s total energy consumption. This shift is driven by the proliferation of real-time AI services—millions of daily chat sessions, continuous content generation pipelines, AI copilots embedded into productivity tools, and ever-expanding recommender and ranking systems. Together, these workloads operate around the clock, in every region, across thousands of data centers.

As a result, it’s now estimated that 80–90% of all compute cycles serve AI inference. As models continue to grow, user demand accelerates, and applications diversify, this imbalance will only widen. The challenge is no longer merely reducing the cost of training but fundamentally rethinking the processor architectures and memory systems that underpin inference at scale.

Deep dive into semiconductor engineering

Understanding energy consumption in modern AI processors requires examining two fundamental factors: data processing and data movement. In simple terms, this is the difference between computing data and transporting data across a chip and its surrounding memory hierarchy.

At first glance, the computational side seems conceptually straightforward. In any AI accelerator, sizeable arrays of digital logic—multipliers, adders, accumulators, activation units—are orchestrated to execute quadrillions of operations per second. Peak theoretical performance is now measured in petaFLOPS, with major vendors pushing toward exaFLOP-class systems for AI training.

However, the true engineering challenge lies elsewhere. The overwhelming contributor to energy consumption is not arithmetic—it is the movement of data. Every time a processor must fetch a tensor from cache or DRAM, shuffle activations between compute clusters, or synchronize gradients across devices, it expends orders of magnitude more energy than performing the underlying math.

A foundational 2014 analysis by Professor Mark Horowitz at Stanford University quantified this imbalance with remarkable clarity. Basic Boolean operations require only tiny amounts of energy—on the order of picojoules (pJ). A 32-bit integer addition consumes roughly 0.1 pJ, while a 32-bit multiplication uses approximately 3 pJ.

By contrast, memory operations are dramatically more energy hungry. Reading or writing a single bit in a register costs around 6 pJ, and accessing 64 bits from DRAM can require roughly 2 nJ. This represents nearly a 10,000× energy differential between simple computation and off-chip memory access.

This discrepancy grows even more pronounced at scale. The deeper a memory request must travel—from L1 cache to L2, from L2 to L3, from L3 to high-bandwidth memory (HBM), and finally out to DRAM—the higher the energy cost per bit. For AI workloads, which depend on massive, bandwidth-intensive layers of tensor multiplications, the cumulative energy consumed by memory traffic considerably outstrips the energy spent on arithmetic.
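To put rough numbers on this imbalance, here is a back-of-the-envelope estimate in Python using the per-operation energies cited above. The workload model is an assumption chosen purely for illustration (single-stream LLM token generation, in which each 32-bit weight is fetched from DRAM and used in one multiply-accumulate); it is not a model of any particular accelerator:

# Per-operation energies cited above (Horowitz 2014): ~3 pJ per 32-bit multiply,
# ~0.1 pJ per 32-bit add, ~2 nJ per 64-bit DRAM access (~31 pJ/bit).
E_MAC_PJ = 3.0 + 0.1                 # one 32-bit multiply plus one 32-bit add
E_DRAM_PJ_PER_BIT = 2000.0 / 64      # DRAM access energy per bit

def energy_per_token_joules(params, weight_bits=32):
    """Assumed workload: every weight is read from DRAM once and used in one MAC."""
    compute  = params * E_MAC_PJ                          # arithmetic energy (pJ)
    movement = params * weight_bits * E_DRAM_PJ_PER_BIT   # weight-fetch energy (pJ)
    return compute * 1e-12, movement * 1e-12              # convert pJ to J

compute_j, movement_j = energy_per_token_joules(70e9)     # hypothetical 70B-parameter model
print(f"compute: {compute_j:.2f} J/token, DRAM traffic: {movement_j:.0f} J/token "
      f"({movement_j / compute_j:.0f}x)")

Under these assumptions, moving the weights costs a few hundred times more energy than the arithmetic performed on them, which is exactly the imbalance that caches, HBM, and high-reuse batching schemes try to soften.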

In the transition from traditional, sequential instruction processing to today’s highly parallel, memory-dominated tensor operations, data movement—not computation—has emerged as the principal driver of power consumption in AI processors. This single fact shapes nearly every architectural decision in modern AI hardware, from enormous on-package HBM stacks to complex interconnect fabrics like NVLink, Infinity Fabric, and PCIe Gen5/Gen6.

Today’s computing horsepower: CPUs vs. GPUs

To gauge how these engineering principles affect real hardware, consider the two dominant processor classes in modern computing:

  • CPUs, the long-standing general-purpose engines of software execution
  • GPUs, the massively parallel accelerators that dominate AI training and inference today

A flagship CPU such as AMD’s Ryzen Threadripper PRO 9995WX (96 cores, 192 threads) consumes roughly 350 W under full load. These chips are engineered for versatility—branching logic, cache coherence, system-level control—not raw tensor throughput.

AI processors, in contrast, are in a different league. Nvidia’s latest B300 accelerator draws around 1.4 kW on its own. A full Nvidia DGX B300 rack unit, housing eight accelerators plus supporting infrastructure, can reach 14 kW. Even in the most favorable comparison, this represents a 4× increase in power consumption per chip—and when comparing full server configurations, the gap can expand to 40× or more.

Crucially, these raw power numbers are only part of the story. The dramatic increases in energy usage are multiplied by AI deployments in data centers where tens of thousands of such GPUs are running around the clock.

Yet hidden beneath these amazing numbers lies an even more consequential industry truth, rarely discussed in public and almost never disclosed by vendors.

The well-kept industry secret

To the best of my knowledge, no major GPU or AI accelerator vendor publishes the delivered compute efficiency of its processors, defined as the ratio of actual throughput achieved during AI workloads to the chip’s theoretical peak FLOPS.

Vendors justify this absence by noting that efficiency depends heavily on the software workload: memory access patterns, model architecture, batch size, parallelization strategy, and kernel implementation can all impact utilization. This is true, and LLMs in particular place extreme demands on memory bandwidth, causing utilization to drop substantially.

Even acknowledging these complexities, vendors still refrain from providing any range, estimate, or context for typical real-world efficiency. The result is a landscape where theoretical performance is touted loudly, while effective performance remains opaque.

The reality, widely understood among system architects but seldom stated plainly, is simple: “Modern GPUs deliver surprisingly low real-world utilization for AI workloads—often well below 10%.”

A processor advertised at 1 petaFLOP of peak AI compute may deliver only ~100 teraFLOPS of effective throughput when running a frontier-scale model such as GPT-4. The remaining 900 teraFLOPS are not simply unused—the energy consumed while that silicon sits stalled is dissipated as heat, requiring extensive cooling systems that further compound total energy consumption.

In effect, much of the silicon in today’s AI processors is idle most of the time, stalled on memory dependencies, synchronization barriers, or bandwidth bottlenecks rather than constrained by arithmetic capability.

This structural inefficiency is the direct consequence of the imbalance described earlier: arithmetic is cheap, but data movement is extraordinarily expensive. As models grow and memory footprints balloon, this imbalance worsens.

Without a fundamental rethinking of processor architecture—and especially of the memory hierarchy—the energy profile of AI systems will continue to scale unsustainably.

Rethinking AI processors

The implications of this analysis point to a clear conclusion: the architecture of AI processors must be fundamentally rethought. CPUs and GPUs each excel in their respective domains—CPUs in general-purpose control-heavy computation, GPUs in massively parallel numeric workloads. Neither was designed for the unprecedented data-movement demands imposed by modern large-scale AI.

Hierarchical memory caches, the cornerstone of traditional CPU design, were originally engineered as layers to mask the latency gap between fast compute units and slow external memory. They were never intended to support the terabyte-scale tensor operations that dominate today’s AI workloads.

GPUs inherited versions of these cache hierarchies and paired them with extremely wide compute arrays, but the underlying architectural mismatch remains. The compute units can generate far more demand for data than any cache hierarchy can realistically supply.

As a result, even the most advanced AI accelerators operate at embarrassingly low utilization. Their theoretical petaFLOP capabilities remain mostly unrealized—not because the math is difficult, but because the data simply cannot be delivered fast enough or close enough to the compute units.

What is required is not another incremental patch layered atop conventional designs. Instead, a new class of AI-oriented processor architecture must emerge, one that treats data movement as the primary design constraint rather than an afterthought. Such architecture must be built around the recognition that computation is cheap, but data movement is expensive by orders of magnitude.

Processors of the future will not be defined by the size of their multiplier arrays or peak FLOPS ratings, but by the efficiency of their data pathways.

Lauro Rizzatti is a business advisor at VSORA, a company offering silicon solutions for AI inference. He is a verification consultant and industry expert on hardware emulation.

Related Content

The post The role of AI processor architecture in power consumption efficiency appeared first on EDN.

Created a parallel serial adapter for a dot matrix printer

Reddit:Electronics - Tue, 11/25/2025 - 09:48
Created a parallel serial adapter for a dot matrix printer

Went to a local electronics store to buy some knobs and things, I mentioned dot matrix printers to an employee and he pulled one out of his butt (the back of the store) and gave it to me for free!

Felt like I had to make the serial connector myself to go with the retro feel, so I did!

submitted by /u/cstrlib
[link] [comments]

Advancing Quantum Computing R&D through Simulation

ELE Times - Tue, 11/25/2025 - 09:47

Courtesy: Synopsys

Even as we push forward into new frontiers of technological innovation, researchers are revisiting some of the most fundamental ideas in the history of computing.

Alan Turing began theorizing the potential capabilities of digital computers in the late 1930s, initially exploring computation and later the possibility of modeling natural processes. By the 1950s, he noted that simulating quantum phenomena, though theoretically possible, would demand resources far beyond practical limits — even with future advances.

These were the initial seeds of what we now call quantum computing. And the challenge of simulating quantum systems with classical computers eventually led to new explorations of whether it would be possible to create computers based on quantum mechanics itself.

For decades, these investigations were confined within the realms of theoretical physics and abstract mathematics — an ambitious idea explored mostly on chalkboards and in scholarly journals. But today, quantum computing R&D is rapidly shifting to a new area of focus: engineering.

Physics research continues, of course, but the questions are evolving. Rather than debating whether quantum computing can outpace classical methods — it can, in principle — scientists and engineers are now focused on making it real: What does it take to build a viable quantum supercomputer?

Theoretical and applied physics alone cannot answer that question, and many practical aspects remain unsettled. What are the optimal materials and physical technologies? What architectures and fabrication methods are needed? And which algorithms and applications will unlock the most potential?

As researchers explore and validate ways to advance quantum computing from speculative science to practical breakthroughs, highly advanced simulation tools — such as those used for chip design — are playing a pivotal role in determining the answers.

Pursuing quantum utility

In many ways, the engineering behind quantum computing presents even more complex challenges than the underlying physics. Generating a limited number of “qubits” — the basic units of information in quantum computing — in a lab is one thing. Building a large-scale, commercially viable quantum supercomputer is quite another.

A comprehensive design must be established. Resource requirements must be determined. The most valuable and feasible applications must be identified. And, ultimately, the toughest question of all must be answered: Will the value generated by the computer outweigh the immense costs of development, maintenance, and operation?

The latest insights were detailed in a recent preprint, “How to Build a Quantum Supercomputer: Scaling from Hundreds to Millions of Qubits” (Mohseni et al., 2024), which I helped co-author alongside Synopsys principal engineer John Sorebo and an extended group of research collaborators.

Increasing quantum computing scale and quality

Today’s quantum computing research is driven by fundamental challenges: scaling up the number of qubits, ensuring their reliability, and improving the accuracy of the operations that link them together. The goal is to produce consistent and useful results across not just hundreds, but thousands or even millions of qubits.

The best “modalities” for achieving this are still up for debate. Superconducting circuits, silicon spins, trapped ions, and photonic systems are all being explored (and, in some cases, combined). Each modality brings its own unique hurdles for controlling and measuring qubits effectively.

Numerical simulation tools are essential in these investigations, providing critical insights into how different modalities can withstand noise and scale to accommodate more qubits. These tools include:

  • QuantumATK for atomic-scale modeling and material simulations.
  • 3D High Frequency Simulation Software (HFSS) for simulating the planar electromagnetic crosstalk between qubits at scale.
  • RaptorQu for high-capacity electromagnetic simulation of quantum computing applications.

Advancing quantum computing R&D with numerical simulation

The design of qubit devices — along with their controls and interconnects — blends advanced engineering with quantum physics. Researchers must model phenomena ranging from electron confinement and tunnelling in nanoscale materials to electromagnetic coupling across complex multilayer structures.

Many issues that are critical for conventional integrated circuit design and atomic-scale fabrication (such as edge roughness, material inhomogeneity, and phonon effects) must also be confronted when working with quantum devices, where even subtle variations can influence device reliability. Numerical simulation plays a crucial role at every stage, helping teams:

  • Explore gate geometries.
  • Optimize Josephson junction layouts.
  • Analyze crosstalk between qubits and losses in superconducting interconnects.
  • Study material interfaces that impact performance.

By accurately capturing both quantum-mechanical behavior and classical electromagnetic effects, simulation tools allow researchers to evaluate design alternatives before fabrication, shorten iteration cycles, and gain deeper insight into how devices operate under realistic conditions.

Advanced numerical simulation tools such as QuantumATK, HFSS, and RaptorQu are transforming how research groups approach computational modeling. Instead of relying on a patchwork of academic codes, teams can now leverage unified environments — with common data models and consistent interfaces — that support a variety of computational methods. These industry-grade platforms:

  • Combine reliable yet flexible software architectures with high-performance computational cores optimized for multi-GPU systems, accessible through Python interfaces that enable programmable extensions and custom workflows.
  • Support sophisticated automated workflows in which simulations are run iteratively, and subsequent steps adapt dynamically based on intermediate results.
  • Leverage machine learning techniques to accelerate repetitive operations and efficiently handle large sets of simulations, enabling scalable, data-driven research.

Simulation tools like QuantumATK, HFSS, and RaptorQu are not just advancing individual research projects — they are accelerating the entire field, enabling researchers to test new ideas and scale quantum architectures more efficiently than ever before. With Ansys now part of Synopsys, we are uniquely positioned to provide end-to-end solutions that address both the design and simulation needs of quantum computing R&D.

Empowering quantum researchers with industry-grade solutions

Despite the progress in quantum computing research, many teams still rely on disjointed, narrowly scoped open-source simulation software. These tools often require significant customization to support specific research needs and generally lack robust support for modern GPU clusters and machine learning-based simulation speedups. As a result, researchers and companies spend substantial effort adapting and maintaining fragmented workflows, which can limit the scale and impact of their numerical simulations.

In contrast, mature, fully supported commercial simulation software that integrates seamlessly with practical workflows and has been extensively validated in semiconductor manufacturing tasks offers a clear advantage. By leveraging such platforms, researchers are freed to focus on qubit device innovation rather than spending time on infrastructure challenges. This also enables the extension of numerical simulation to more complex and larger-scale problems, supporting rapid iteration and deeper insight.

To advance quantum computing from research to commercial reality, the quantum ecosystem needs reliable, comprehensive numerical simulation software — just as the semiconductor industry relies on established solutions from Synopsys today. Robust, scalable simulation platforms are essential not only for individual projects but for the growth and maturation of the entire quantum computing field.

“Successful repeatable tiles with superconducting qubits need to minimize crosstalk between wires, and candidate designs are easier to compare by numerical simulation than in lab experiments,” said Qolab CTO John Martinis, who was recently recognized by the Royal Swedish Academy of Sciences for his groundbreaking work in quantum mechanics. “As part of our collaboration, Synopsys enhanced electromagnetic simulations to handle increasingly complex microwave circuit layouts operating near 0K temperature. Simulating future layouts optimized for quantum error-correcting codes will require scaling up performance using advanced numerical methods, machine learning, and multi-GPU clusters.”

The post Advancing Quantum Computing R&D through Simulation appeared first on ELE Times.

Pages

Subscribe to the Кафедра Електронної Інженерії (Department of Electronic Engineering) aggregator