Microelectronics world news
I don't know what I made but it's entertaining - miniSynth from an astable multivibrator circuit
Jumping the Jeep: An alternative cost-effective solar cell example app

A solar charging kit, inexpensive as-is and purchased after further promotional enticement, enables keeping a remotely located vehicle battery topped off.
One of the things I enjoy most about technology is watching a new approach (along with products based on it) hit its high-volume stride, typically driven by one or only a couple of early applications, and then just explode from there, both replacing precursor technologies and expanding into brand new applications and markets. This has certainly been the case with LEDs. See my recent teardown (where they replaced fluorescent tubes) for an example of the former, and an earlier teardown (where their low power consumption and DC voltage foundation enabled the development of a light bulb with integrated battery backup) for an example of the latter.
A solar revolution
Or take, as another technology case study, solar cells. Their combination of efficiency and cost-effectiveness, paired with equally pervasive lithium battery technology, has enabled widespread replacement of predecessor SLA-based energy storage systems, both portable and whole-home permanent installations, while dramatically expanding the accessible market for such devices. At the same time, they’re helping create entirely new categories of products. Take, as a humble example, Renogy’s 10W solar trickle charger kit, two of which I purchased back in October 2024 and one of which I recently, belatedly, and finally pressed into service:




Right now, as I write this, they’re selling on Amazon for $25.17 each, brand new. A year-plus ago, during Amazon’s Prime Days sales, I got them off the Resale (formerly Warehouse) site in used, like-new condition for $17.74. I don’t think they’d even been opened by the prior purchaser(s) before being returned. The intent at the time was to use them to keep the batteries trickle-charged in two of my vehicles, then outdoor-stored at a lot about a half-hour drive away. But I could never figure out how to securely attach the solar cells to the vehicle covers, let alone how to route their outputs to the battery compartments. That said, I eventually figured the latter part out: SAE extension cables:

One of the vehicles, my 2001 Volkswagen Eurovan Camper, is now parked in my garage for critter-protection purposes. The other, a 2006 Jeep Wrangler Unlimited Rubicon, most recently mentioned last March when I discussed its then-drained battery state, is still down there (now with a permanently disconnected battery). A few months back, when I drove down and checked on it, my preparatory suspicion was confirmed; as happens every few years, the combination of persistent sun and still-frequent precipitation (rain, snow, hail…) exposure, along with also-frequent wind, had disintegrated the cover:

While waiting for the replacement cover to arrive, I had a bright idea; this’d be the perfect time to finally try out that solar cell kit! My original idea was to mount it to the now-exposed vehicle hood. But then I realized that I had an even better option available, inside the vehicle:

in combination with the 12V auxiliary power connector built into the console:

As you can see from the above image (which I snagged from an enthusiast forum thread post to save me an hour-long round-trip drive to the storage lot to take my own shot; that’s not actually my rig), there are two of them. One, the “cigarette lighter” located within the ashtray, is ignition-switched. It obviously won’t work for my purposes. The other, while (I think) still fused, otherwise routes directly to the battery; it’s always “hot”. That’s the one I needed and used:

And it works perfectly! My perhaps-obvious concern was two-fold:
- It’d either not work sufficiently (or at all), leaving me with an eventually-drained battery once again, or
- It’d work too well, not terminating the trickle charge when it sensed a “full” state, thereby also leading to the battery’s demise (along with who-knows-what other issues).
Two weeks later, when I went back and checked (in the process of installing the new vehicle cover), I happily discovered that all my worrying was for naught; it was working exactly as planned. Now I just need to figure out how to securely attach the solar cell to the outside of the new cover, and I’ll be set! Suggestions, along with more general thoughts, are as-always welcomed in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- SLA batteries: More system form factors and lithium-based successors
- Perusing a LED-based gel nail polish UV lamp
- Dissecting a battery-backed LED light bulb
- Dead Lead-acid Batteries: Desulfation-resurrection opportunities?
The post Jumping the Jeep: An alternative cost-effective solar cell example app appeared first on EDN.
Power Tips #150: Overcoming high-voltage monitoring challenges in gigawatt-scale data centers

As AI and machine learning workloads accelerate, data center power consumption is beginning to outstrip existing infrastructure capacity. To meet this rising demand, new high-voltage DC standards support the higher-power, denser server racks now found at gigawatt-scale facilities. These high-voltage standards create engineering challenges when monitoring high-voltage power rails.
Designers need reliable, accurate, and fast-acting voltage supervision to prevent overvoltage damage to downstream components, and to help ensure a timely system response to undervoltage conditions. This article presents a supervision approach that addresses these requirements and enables the reliable deployment of next-generation high-voltage DC architectures.
The push toward high-voltage DC architectures
The power profile of modern data centers is undergoing a dramatic shift as AI becomes the dominant application. Machine learning with large graphics processing unit arrays consumes power at levels once associated with industrial equipment rather than IT hardware. It is increasingly common for a single rack to draw 60 kW to 100 kW. Next-generation AI systems are expected to push beyond 150 kW per rack.
Because traditional 48-V distribution designs cannot efficiently support these levels, designers are turning to a new class of high‑voltage DC standards centered around ±400 V or 800 V distribution. This shift, as shown in Figure 1, is not simply an incremental upgrade; it represents a fundamental change in the delivery of power across gigawatt‑scale facilities.
Figure 1 Conventional versus high-voltage data center power distribution. (Source: Texas Instruments)
Efficiency continues to drive the transition to higher voltages. Higher voltages reduce current, and with it the I²R losses that dominate high-power distribution, substantially cutting conduction losses in cables, busbars, and connectors. At large AI campuses, higher efficiency means lower cooling requirements, improved energy performance, and increased computing density.
Higher voltages also unlock greater power-delivery capability. Delivering 150 kW to 300 kW per rack at 48 V requires heavy conductors, parallel cabling, and complex routing. Higher voltages deliver the same power at manageable current levels, enabling simpler infrastructure and longer distribution distances without excessive copper mass.
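To put numbers on that current reduction, here is a quick back-of-the-envelope sketch. The 150 kW rack power comes from the figures above; the 5 mΩ busbar resistance is an illustrative assumption, not a spec:

```python
# Conduction loss for a 150 kW rack at 48 V versus 800 V distribution.
# The 5 mOhm busbar resistance is an assumed, illustrative value.
P_RACK = 150e3   # W, per-rack power cited in the article
R_BUS = 0.005    # ohm, assumed distribution-path resistance

for v in (48, 800):
    i = P_RACK / v        # current drawn at this distribution voltage
    loss = i**2 * R_BUS   # I^2*R conduction loss in that resistance
    print(f"{v:>4} V: I = {i:7.1f} A, loss = {loss:8.1f} W")

# For the same power and resistance, the loss ratio is (800/48)^2,
# roughly 278x lower at the higher voltage.
```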
Cost provides yet another compelling factor. Smaller conductors, lighter busbars, and reduced copper usage lower material and installation expenses. At modern hyperscale data center campuses, these reductions are substantial.
Challenges in monitoring high-voltage power rails
As data-center power architectures migrate toward higher-voltage DC distribution, the demands on monitoring and protection circuitry increase significantly. Operating at ±400 V or 800 V means that every disturbance or transient condition carries more stored energy, with components operating closer to their absolute limits. These conditions reduce the margin for error and make precise power-rail supervision essential.
Designers must contend with higher fault energy levels, faster electrical dynamics, increased electromagnetic noise, and tighter system‑level coordination requirements. In this environment, monitoring circuits must distinguish between harmless fluctuations and true fault conditions, with far greater accuracy and speed than lower‑voltage systems.
With these broader challenges in mind, let’s look more closely at two specific issues surrounding under- and overvoltage events:
- Response time. The voltage monitor must respond to faults fast enough to prevent damage to downstream components, but should not trigger erroneously from a noisy environment or short transient voltage fluctuations. For example, imagine a large current spike causing the supply voltage to drop while the power supply responds. If the voltage drops for only a very short time, it may not be considered a fault condition, thus requiring no action. As soon as the voltage is low enough to be considered a fault, however, the voltage monitor should take action as soon as possible to prevent damage.
- Size. High-voltage data-center power supplies have extremely limited space, requiring the smallest possible monitoring solution. But compactness cannot come at the expense of reliability: ensuring that the voltage monitor can be trusted to respond to faults is imperative to a dependable power supply and distribution system.
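The response-time requirement above, distinguishing a brief sag from a genuine fault, amounts to time-qualifying the comparison. A minimal sketch of that logic; the threshold and rejection-window values are illustrative assumptions, not any device's specifications:

```python
# Time-qualified undervoltage detection: a fault asserts only after the
# rail stays below threshold longer than a glitch-rejection window.
# Both constants below are assumed, illustrative values.
UV_THRESHOLD_V = 720.0    # assumed UV trip point for an 800 V rail (-10%)
GLITCH_REJECT_S = 5e-6    # assumed rejection window

def uv_fault(samples, dt):
    """samples: rail-voltage samples spaced dt seconds apart.
    Returns True once the rail is below threshold for > GLITCH_REJECT_S."""
    below = 0.0
    for v in samples:
        below = below + dt if v < UV_THRESHOLD_V else 0.0
        if below > GLITCH_REJECT_S:
            return True
    return False

dt = 1e-6
print(uv_fault([800] * 5 + [700] * 2 + [800] * 5, dt))  # 2 us dip: False
print(uv_fault([800] * 5 + [700] * 10, dt))             # sustained: True
```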
Figure 2 shows a minimal high-voltage monitoring circuit implementation using:
- A high-voltage resistor ladder to step down the power rail for sensing comparators.
- Two comparators to signal under- and overvoltage faults.
- A voltage reference for comparators.
- Filtering components.
- An amplifier to provide a scaled-down voltage for the analog-to-digital converter (ADC) for analog monitoring and telemetry of the power rail.

Figure 2 High-voltage monitoring circuit building blocks. (Source: Texas Instruments)
Implementing this circuit with discrete components may present significant drawbacks. Individual component tolerances will add together, resulting in significant errors requiring costly, high-accuracy, low-temperature-drift components. Resistors are especially problematic, as each resistor’s uncorrelated error will sum to create a significant cumulative error in the resistor-divider. Discrete components consume significant board space, which is typically at a premium in data-center applications.
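To see how quickly uncorrelated resistor errors stack up, consider a worst-case calculation for a hypothetical two-resistor divider built from 1% parts; the nominal values here are assumptions chosen only for illustration:

```python
# Worst-case ratio error of a discrete divider built from 1% resistors,
# stepping a high-voltage rail down for a comparator. Nominal values
# are assumed for illustration (roughly a divide-by-400).
R_TOP, R_BOT, TOL = 1_000_000.0, 2_506.3, 0.01

def ratio(rt, rb):
    # Fraction of the rail voltage appearing across the bottom resistor.
    return rb / (rt + rb)

nom = ratio(R_TOP, R_BOT)
# Worst cases: the two resistor errors pull the ratio in opposite directions.
lo = ratio(R_TOP * (1 + TOL), R_BOT * (1 - TOL))
hi = ratio(R_TOP * (1 - TOL), R_BOT * (1 + TOL))
err_pct = max(nom - lo, hi - nom) / nom * 100
print(f"nominal ratio 1/{1/nom:.0f}, worst-case error +/-{err_pct:.2f}%")
```

Two 1% resistors can thus produce roughly a 2% worst-case divider error, which is why trimmed, integrated dividers are attractive here.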
Figure 3 shows a reference layout with space requirements for high-voltage monitoring with discrete components.

Figure 3 A discrete high-voltage monitoring implementation. (Source: Texas Instruments)
An integrated solution
An integrated device for high-voltage supervision addresses these challenges by fully integrating the high-voltage resistor-divider, comparators, buffer, and additional features. The functional diagram in Figure 4 illustrates this approach, helping reduce total solution size while maintaining high performance.
By integrating the resistors, reference, and comparators, TI’s TPS371K-Q1 achieves an accuracy of 1% across the –40°C to 125°C temperature range, with a fast fault detection time of <5 µs, programmable glitch rejection and release delay time, as well as a 1% accurate high-bandwidth buffer that can directly drive 16-bit ADCs or downstream control circuits.

Figure 4 TPS371K-Q1 functional block diagram. (Source: Texas Instruments)
An integrated monitoring solution also provides significant board space savings in a compact package (Figure 5), requiring minimal external components.

Figure 5 Integrated high-voltage monitoring solution. (Source: Texas Instruments)
Application example
The implementation of a voltage monitoring system using the TPS371K-Q1 is straightforward. Figure 6 shows a basic schematic for monitoring the ±400 V or 800 V input to a DC/DC converter.

Figure 6 Voltage monitoring for a high-voltage DC/DC converter. (Source: Texas Instruments)
Using resistors on the ADJ OV and ADJ UV pins, designers can select under- and overvoltage thresholds to fit their system. The CTR and CTS pins allow the use of a capacitor to program a delay before assertion of a fault and a delay before deassertion once the voltage returns to normal. Open-drain outputs enable easy interface with logic levels other than the device’s own supply voltage. The VSENSE output pin provides a scaled representation of the SENSE input voltage for direct connection to an ADC. Designers can select voltage sense output factors with options ranging from 200 to 900.
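As a sketch of how a sense-output factor in that 200 to 900 range might be chosen: the factor range comes from the text above, but the 3.3 V ADC full scale and the selection logic are my own illustrative assumptions, not from the datasheet:

```python
# Choosing a sense-attenuation (divide-by) factor so the scaled rail
# voltage fits an ADC input range. The 200-900 factor range is cited
# in the text; the 3.3 V ADC full scale is an assumed value.
ADC_FULL_SCALE_V = 3.3
FACTORS = range(200, 901)

def pick_factor(v_rail_max):
    """Smallest divide-by factor keeping v_rail_max inside the ADC
    range, preserving as much measurement resolution as possible."""
    for k in FACTORS:
        if v_rail_max / k <= ADC_FULL_SCALE_V:
            return k
    return None

for rail in (400, 800):
    k = pick_factor(rail * 1.2)  # assume 20% measurement headroom
    print(f"{rail} V rail -> divide-by-{k}, sense = {rail / k:.2f} V")
```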
Integrated monitoring solutions
The transition to high-voltage DC architectures is reshaping design requirements for next-generation data-center power systems, especially as AI workloads continue to push rack-level power far beyond the limits of today’s distribution schemes. Reliable voltage supervision becomes foundational, helping ensure high-energy power-rail monitoring with the speed, accuracy, and reliability required to protect downstream converters and maintain system stability.
Integrated monitoring solutions such as the TPS371K-Q1 address these challenges by combining precise threshold detection, fast fault response, programmable filtering, and compact implementation into a single device optimized for the electrical and space constraints of modern data centers. By adopting advanced monitoring approaches, designers can confidently deploy ±400 V and 800 V architectures that deliver the efficiency, power density, and reliability needed to support the continued growth of AI‑driven computing at the gigawatt scale.

Henry Naguski is an applications engineer for Linear Power at Texas Instruments, working with voltage references and supervisors. He specializes in shunt voltage references and high-voltage supervisors. Henry holds a bachelor’s degree in computer engineering from Montana State University.
Masoud Beheshti leads application engineering and marketing for Linear Power at Texas Instruments. He brings extensive experience in power management, having held roles in system engineering, product line management, and marketing and applications leadership. Masoud holds a bachelor’s degree in electrical engineering from Ryerson University and an MBA with concentrations in marketing and finance from Southern Methodist University.
Related Content
- The transition from 54-V to 800-V power in AI data centers
- The shift to 800-VDC power architectures in AI factories
- TI launches power management devices for AI computing
- TI’s power devices focus on higher power density
- Power Tips
The post Power Tips #150: Overcoming high-voltage monitoring challenges in gigawatt-scale data centers appeared first on EDN.
Variable‑reluctance sensors: From fundamentals to speed sensing

Variable reluctance (VR) sensors transform mechanical motion into electrical signals by exploiting changes in magnetic flux. As a ferromagnetic target moves past the sensor’s pole piece, the reluctance of the magnetic circuit varies, inducing a voltage in the coil.
This simple yet robust principle has made VR sensors indispensable in applications ranging from automotive crankshaft speed detection to industrial position monitoring. Their ability to deliver precise motion feedback without requiring external excitation makes them a cost-effective choice for engineers designing systems that demand reliable speed and position sensing.
Magnetic reluctance and VR sensors
Reluctance is a physical quantity that describes the opposition a magnetic circuit offers to the flow of magnetic flux. For instance, in the air gap of a permanent magnet—an essential part of a magnetic circuit—the reluctance is high because air has very low magnetic permeability.
This reluctance drops significantly when a piece of soft iron is placed in direct contact with the magnet’s poles, while it assumes an intermediate value if the same iron piece is positioned within the air gap without touching the poles. In each case, the magnetic field is altered accordingly.
VR sensors exploit this property by combining a permanent magnet with a coil to detect changes in magnetic flux. As ferromagnetic targets—such as gear teeth—modulate the magnetic circuit’s reluctance, an alternating voltage is induced in the coil. These passive magnetic transducers are widely applied in engine speed sensing and crankshaft/camshaft timing, valued for their ruggedness in high‑temperature and high‑performance environments.
The diagram below illustrates the operation of a VR sensor. The coil’s core is positioned close to a rotating gear, and each time a tooth passes near the sensor, the reluctance of the magnetic circuit formed by the permanent magnet changes. This variation alters the magnetic field, inducing a current in the coil and producing a voltage signal.
The frequency and amplitude of this signal are directly proportional to the gear’s rotational speed, while the direction of rotation has no effect. The signal amplitude, however, decreases as the air gap between the sensor and the gear teeth increases. Consequently, the primary limitation of VR sensors is their inability to reliably detect very slow or distant movements.

Figure 1 Schematic depicts the core arrangement of a variable reluctance sensor near a gear tooth. Source: Author
In essence, a permanent magnet forms the core of a VR sensor, establishing a fixed magnetic field. When a ferrous metal target—such as a gear tooth—approaches and passes the pole piece, the field strength changes. The alternating presence and absence of the ferrous material modulates the reluctance, or “resistance to the flow” of the magnetic field. This dynamic variation alters the field strength, inducing a current in the coil winding connected to the output terminals.
This has led to the widespread use of VR sensors across many industries. Consequently, they are also known by a range of application-specific names, including magnetic pickups, passive speed sensors, motion sensors, pulse generators, frequency generators, variable reluctance speed sensors, transducers, magnetic probes, and timing probes.
From this point onward, the discussion turns to the principal theme of the post—variable‑reluctance speed (VRS) sensors. Let us take a quick look at VRS sensors in action and the practical factors that matter most.
Note on terminology: To prevent confusion between VR and VRS, VR designates the broader class of magnetic transducers that convert motion into electrical signals, while VRS identifies the specialized subset engineered for rotational speed measurement.
Understanding VRS industrial magnetic speed sensors
A variable reluctance speed (VRS) sensor—often marketed by manufacturers as an industrial magnetic speed sensor—is a rugged, self-powered device that requires no external voltage source. It’s widely used to deliver speed, timing, and synchronization data to control circuits or displays as a pulse train, and is valued for its reliability in high temperature, high-performance environments.
In basic terms, a VRS industrial magnetic speed sensor employs a permanent magnet, pole piece, and coil to convert the motion of a ferrous target—such as a gear tooth—into an electrical signal.
The most common target is a metal gear, though other examples include bolt heads, disc perforations, and turbine blades. In every case, the target must be ferrous—preferably unhardened steel—to ensure reliable signal generation.
The output of a VRS sensor is an AC voltage whose amplitude and waveform vary with the speed of the monitored device. This signal is typically specified in terms of peak-to-peak voltage (Vp-p). Each complete waveform (cycle) is generated as a target passes the sensor’s pole piece (sensing area). When a standard gear is used, the resulting output signal closely resembles a sine wave when observed on an oscilloscope.

Figure 2 Diagram illustrates an application example of an industrial variable reluctance speed sensor. Source: Phoenix America
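Since each gear tooth passing the pole piece generates one output cycle, shaft speed follows directly from the signal frequency and the tooth count: f = N_teeth × rpm / 60. A minimal sketch of the conversion (the function name is mine):

```python
# Shaft speed from VRS output frequency: one output cycle per tooth,
# so rpm = f * 60 / N_teeth.
def rpm_from_frequency(freq_hz, n_teeth):
    """Rotational speed in rpm from signal frequency and gear teeth."""
    return freq_hz * 60.0 / n_teeth

# e.g. a 60-tooth gear producing a 3 kHz signal is turning at 3000 rpm
print(rpm_from_frequency(3000, 60))  # 3000.0
```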
Signal conditioning for VRS sensors
Conditioning the output signal from a VRS sensor is crucial before it’s processed by downstream electronics such as a microcontroller. Proper conditioning ensures that the analog signal is efficiently and reliably converted into a clean, usable form—free from interference and with an amplitude compatible with the rest of the circuitry.
Make no mistake: converting a possibly noisy analog signal with variable amplitude and frequency into a TTL/CMOS-compatible signal is a challenging task that demands careful design and robust signal-conditioning techniques.
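The core trick in such conditioning is hysteresis: switching the logic output only at well-separated thresholds, so noise near a single threshold cannot cause chatter. A conceptual sketch on synthetic data; the threshold values are illustrative assumptions, not taken from any particular IC:

```python
# Hysteresis-based conditioning: a noisy analog waveform becomes a
# clean 0/1 logic stream by switching only at well-separated thresholds.
import math
import random

HIGH_TH, LOW_TH = 0.2, -0.2  # volts, assumed hysteresis band

def condition(samples):
    """Convert an analog sample stream into a 0/1 logic stream."""
    state, out = 0, []
    for v in samples:
        if state == 0 and v > HIGH_TH:
            state = 1       # rising crossing -> logic high
        elif state == 1 and v < LOW_TH:
            state = 0       # falling crossing -> logic low
        out.append(state)
    return out

# Four cycles of a noisy sine: the noise (+/-0.1 V) is smaller than the
# hysteresis band (+/-0.2 V), so it cannot cause spurious transitions.
random.seed(0)
sig = [math.sin(2 * math.pi * i / 50) + random.uniform(-0.1, 0.1)
       for i in range(200)]
logic = condition(sig)
n_trans = sum(1 for a, b in zip(logic, logic[1:]) if a != b)
print("transitions:", n_trans)  # exactly 2 per cycle -> 8
```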
Although signal conditioning can be implemented with discrete electronics, several semiconductor manufacturers now offer ICs specifically designed to handle this demanding task. Notably, onsemi provides the NCV1124, while Maxim Integrated, now part of ADI, offers the MAX992x family, both tailored for reliable conversion of variable-reluctance sensor outputs into clean, logic-level signals.
As a related note, this recalls some of my earlier experiments with classic interface and frequency-to-voltage converter ICs such as LM1815, LM2907, and LM2917. These devices, though older in design, provided valuable insight into the challenges of conditioning variable-reluctance sensor outputs and converting them into usable forms for measurement and control applications.

Figure 3 Simplified block diagram of MAX9924 highlights the IC’s role in transforming noisy variable-reluctance sensor inputs into clean, microcontroller-compatible signals. Source: Analog Devices
Just a quick tip: STMicroelectronics’ L9788 is a multifunction IC for automotive engine management systems. Among its many integrated features, it includes a dedicated VRS interface. This block processes crankshaft and camshaft sensor signals, offering both normal operation (conversion of differential voltages) and diagnostic mode (detecting shorts or open conditions). With adaptive hysteresis and built-in filtering, the VRS interface ensures reliable engine synchronization while reducing the need for external conditioning circuits.
Application considerations for VRS sensors
VRS sensors are not intended for sensing extremely low rotational speeds. The target passing the pole piece must travel at a minimum velocity or surface speed to generate an adequate output voltage. Proper sensor selection requires ensuring that the device delivers the necessary Vp-p at the lowest speed of interest, while still operating reliably at the maximum frequency of the application.
In most cases, the polarity of the output signal is inconsequential; when polarity matters, simply reversing the output leads resolves the issue. Furthermore, for every gear-tooth configuration, there exists an optimum pole-piece size and shape that maximize sensor output voltage, a relationship clearly documented in manufacturer datasheets. In addition, correct load resistance and precise air-gap setting are critical to achieving stable performance and consistent signal quality across the operating range.
That is all for now. Simplifying a complex topic down to its fundamentals always leaves more detail waiting in the wings. This time, the essentials have been chalked out; deeper layers can follow in future installments. So if you found this technical take useful, share it with colleagues or add your thoughts in the comments to help shape the next deep dive.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Magnetic Sensors for Motion Control
- Ring Magnet Speed Sensing for EPS Systems
- Hall vs. VR: Which speed sensor is right for you?
- Method offers fail-safe variable-reluctance sensors
- Modeling and simulation of magnetoresistive sensor systems
The post Variable‑reluctance sensors: From fundamentals to speed sensing appeared first on EDN.
ROHM Strengthens Supply Capability for GaN Power Devices
Combining TSMC’s Process Technology to Build an End-to-End, In-Group Production System
ROHM has decided to integrate its own development and manufacturing technologies for GaN power devices with the process technology of TSMC, with which ROHM has an ongoing partnership, to establish an end-to-end production system within the ROHM Group. By licensing TSMC GaN technology, ROHM will strengthen its supply capability to meet growing demand for GaN in applications such as AI servers and electric vehicles.
GaN power devices offer excellent high-voltage and high-frequency performance, helping to improve efficiency and reduce size in a wide range of applications, and are already used in consumer products such as AC adapters. Adoption is also expanding in high-voltage applications such as power units for AI servers and on-board chargers for electric vehicles (EVs), and demand is expected to continue growing.
ROHM began developing GaN power devices at an early stage and established a mass-production system for 150V GaN at ROHM Hamamatsu in March 2022. In the mid-power range, ROHM has built its supply structure while advancing external collaborations. One of the key partners in this effort has been TSMC: ROHM has adopted a 650V GaN process since 2023, and in December 2024, the two companies entered into a partnership related to automotive GaN, further deepening their collaboration.
This latest integration represents an evolution of that partnership. Under a newly concluded license agreement, TSMC’s process technology will be transferred to ROHM Hamamatsu. ROHM aims to establish the production system in 2027 to meet expanding demand in applications such as AI servers.
Upon completion of the technology transfer, ROHM and TSMC will amicably conclude their automotive GaN partnership. At the same time, the two companies will continue to strengthen collaboration for higher efficiency and more compact power supply systems.
The post ROHM Strengthens Supply Capability for GaN Power Devices appeared first on ELE Times.
element14 Community launches smart security and surveillance design challenge
element14, an Avnet Community, in collaboration with ADI, has launched a new design challenge inviting engineers and makers to develop advanced security and surveillance prototypes.
Participants are tasked with designing a prototype or test rig utilising ADI’s MAX32630FTHR, a versatile development platform, and Würth Elektronik’s SMD LEDs with an integrated WL-ICLED controller. The challenge encourages creative applications of these components to deliver innovative security features.
Selected challengers will receive a free kit of components, with ADI’s MAX32630FTHR as the core element, to assist in building their prototypes. Each participant will document the build process and final outcome through blogs on the element14 Community platform.
Examples of potential applications include facial recognition door entry systems, voice and face detection, environmental monitoring, crowd sentiment analysis, break-in detection and remote security sentry solutions.
“Through this challenge, we’re inviting our global community to showcase creativity and problem-solving in the field of security and surveillance,” said Andreea Teodorescu, Global Director of Product Marketing & element14 Community. “It’s an opportunity for participants to learn, share ideas, and demonstrate how innovative thinking can address real-world safety challenges.”
“We’re excited to collaborate with the element14 Community on a challenge that inspires creativity and problem-solving,” said Stephane Di Vito, ADI Distinguished Engineer, Product Security. “This initiative brings together passionate designers and engineers to explore new ideas and develop solutions that can make security smarter and more effective.”
The post element14 Community launches smart security and surveillance design challenge appeared first on ELE Times.
Testing my new thermal cam on my dead phone’s remains😆
First Solar licenses Oxford PV’s patents for US markets
Ascent Solar’s PV blankets to power NOVI AI Pathfinder spacecraft
An LM317-based 0-20mA to 4-20mA 2-wire converter

My recent DI contribution (Another silly simple precision 0/20mA to 4/20mA converter) used the LM337 regulator in a novel circuit arrangement to translate an input 0-20 mA current source into a 4-20 mA two-wire transmitter current loop (a standard two-terminal industrial current source).
The circuit can also be flipped over to perform the same operation using the LM317, which is more widely used, hence easily available, and lower priced. As before, it relies on tapering off an initial 4 mA current to zero in proportion to the input 0-20 mA, and adding the input and the tapered off 4 mA signal to create a 2-wire 4-20 mA output loop. The operation is identical, only with the current directions reversed.
Refer to Figure 1.

Figure 1 An input external boost transistor is used to limit U1 dissipation.
Wow the engineering world with your unique design: Design Ideas Submission Guide
At 0-mA input, the series combination of Rs and Rz sets the 4 mA “zero” value in the 4-20 mA output loop, set using Pz. This is pulled from the 24-V supply, going from IN to OUT, and into the output loop via Rs and Rz.
A 20-mA input current creates a 1.25 V drop in Rs, opposing the internal reference and reducing the current through Rz to 0 mA. The 20-mA input current is pushed into point X and the – (negative terminal) of the output loop, and pulled from the 24-V supply via OUT (through U1), to create a current of 20 mA + 0 mA in the output loop. Span setting is done by Ps.
Accurate current setting requires two span/zero (S/Z) trim passes to set the output current to within 0.05% transfer accuracy, or (often much) better. Pots should be multi-turn 3296 types or similar, but single-turn trimmers may also work fairly well, as both pots have a small trim range by design.
The output-current stability (di/dV) and ease of trimming are excellent. Input-to-output linearity of the basic circuit is 0.02%. The heat sink has been replaced by an external boost transistor that “takes over” above an output loop current of about 6 mA, limiting U1 dissipation to give better di/dV performance of 0.02% over a voltage range from 5 V to 32 V.
For intermediate input currents, as before, a “zero” 4 mA current is tapered off to 0 mA proportional to the input 0-20 mA to give corresponding matching input and output loop currents.
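The tapering arithmetic above reduces to a simple linear transfer, I_out = 4 mA + 0.8 × I_in, which matches the endpoints in the text: 4 mA out at 0 mA in, 20 mA out at 20 mA in. A quick sketch (the function name is mine):

```python
# Transfer of the converter: a 4 mA "zero" current tapers to 0 mA as
# the input rises from 0 to 20 mA, and the two currents are summed,
# giving I_out = 4 mA + 0.8 * I_in.
def loop_current_ma(i_in_ma):
    zero = 4.0 * (1.0 - i_in_ma / 20.0)  # tapered 4 mA component
    return i_in_ma + zero                 # summed into the output loop

for i_in in (0.0, 10.0, 20.0):
    print(f"{i_in:4.1f} mA in -> {loop_current_ma(i_in):4.1f} mA out")
# 0 -> 4 mA, 10 -> 12 mA, 20 -> 20 mA
```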
A reverse-protection diode is recommended in the 4-20 mA loop, and current limiting should be applied to keep fault currents at safe levels; the latter can easily be inserted at the Q1 emitter in this case.
The 0-20 mA input sees a positive drop here, of 1 to 0 V.
The comments regarding current stability and potentiometers in “Another silly simple precision 0/20mA to 4/20mA converter” are applicable here too.
In conclusion, the operation of an inexpensive, novel, and precise circuit to convert 0-20mA current signals to 4-20mA 2-wire current signals is described, using the LM317 regulator and an external boost transistor.
It easily holds a short-term stability of 0.02% of span, and has a linearity of 0.02%.
Only two stable resistors are needed for good overall temperature stability.
Pot tempco, type, and contact resistance are less critical due to the configuration used.
A 3 V minimum operating voltage allows as much as 1000 Ω of loop resistance with a 24-V supply.
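That loop-resistance budget follows from simple headroom arithmetic; a quick check (assuming a 20-mA full-scale loop current):

```python
supply_v = 24.0         # loop supply voltage
min_operating_v = 3.0   # minimum voltage the converter itself needs
full_scale_a = 0.020    # 20-mA full-scale loop current

# Whatever headroom remains after the converter's minimum drop
# can be spent across the loop wiring and load:
max_loop_ohms = (supply_v - min_operating_v) / full_scale_a
print(f"max loop resistance ~ {max_loop_ohms:.0f} ohms")
```

The result, about 1050 Ω, supports the "as much as 1000 Ω" figure with a little margin.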
Ashutosh Sapre lives and works in a large city in western India. Drifting uninspired through an EE degree way back in the late nineteen-eighties, he was lucky enough to stumble across and be electrified by the Art of Electronics 1 and 2. Cut to now, he is a confirmed circuit addict, running a business designing, manufacturing, and selling industrial signal processing modules. He is proud of his many dozens of design pads consisting mostly of crossed-out design ideas.
Related Content
- Another silly simple precision 0/20mA to 4/20mA converter
- Silly simple precision 0/20mA to 4/20mA converter
- A 0-20mA source current to 4-20mA loop current converter
- PWM-programmed LM317 constant current source
The post An LM317-based 0-20mA to 4-20mA 2-wire converter appeared first on EDN.
R & S and LITEON demonstrate high‑throughput 5G femtocell testing with the PVT360A
Rohde & Schwarz and LITEON collaborate to showcase a production-optimised test setup for high-throughput multi-device testing at MWC Barcelona 2026. The demonstration will feature the high-performance PVT360A vector performance tester from Rohde & Schwarz characterising in parallel four new LITEON FlexFi 5G femtocells as devices under test (DUT). The setup highlights the adaptability of the test platform to various production and validation environments, all within a compact form factor.
Rohde & Schwarz has designed the PVT360A performance vector tester with a minimal footprint for maximum performance. It is a comprehensive solution for non-signalling 5G NR FR1 and LTE small cell testing in the design verification stage and in production. LITEON has selected the test platform for the manufacturing lines of their new FlexFi 5G femtocell, boosting the overall testing speed by 50%. At MWC 2026, the two companies will showcase a femtocell production testing setup characterising four DUTs using a single PVT360A.
The single‑box vector signal generator (VSG) and vector signal analyser (VSA) solution delivers efficient, high‑performance RF testing and pairs seamlessly with the R&S VSE Vector Signal Explorer software for reliable timing verification as well as comprehensive 5G NR downlink and uplink signal analysis. Engineered to significantly accelerate 5G production testing and streamline design validation workflows, the PVT360A features an innovative 2×8 port architecture, coupled with a unique Smart Channel feature that dynamically optimises resource allocation. This dramatically increases test throughput and enables manufacturers to test more devices in less time.
Beyond core testing efficiency, the PVT360A supports advanced 5G scenarios, including multi-component carrier testing and highly accurate MIMO measurements with optional dual signal generators and analysers. This combination of speed, versatility and support for complex 5G technologies makes the PVT360A a critical tool for manufacturers looking to rapidly scale 5G device production and deliver cutting-edge performance.
To enhance both the production efficiency and quality of its 5G femtocell products, LITEON has successfully integrated the PVT360A performance vector tester into its manufacturing lines, enabling fully automated calibration and verification processes. Leveraging its proprietary Smart Channel technology, a single unit can now simultaneously test four 5G femtocells. This enhancement has delivered a 50% increase in overall testing speed, significantly boosting production throughput while maintaining superior product consistency.
Richard Chiang, General Manager of LITEON Smart Life Application SBU, said: “To enhance our manufacturing excellence, we are embarking on a long-term partnership with Rohde & Schwarz. By adopting their PVT360A platform, we aim to achieve higher levels of automation and precision in our testing processes, ensuring that our products consistently meet the highest market standards.”
Goce Talaganov, Vice President Mobile Radio Testers at Rohde & Schwarz, said: “We are proud to support LITEON in advancing its smart manufacturing strategy with our PVT360A platform. Their ability to achieve higher throughput and consistent quality demonstrates how our scalable multiport architecture and smart channel technology can transform production efficiency. We look forward to deepening our collaboration and enabling even greater innovation in 5G small cell manufacturing.”
The post R & S and LITEON demonstrate high‑throughput 5G femtocell testing with the PVT360A appeared first on ELE Times.
Update: My Z80 homebuild PC is finally working!
Thank you for everyone's encouragement earlier; it was cool to read about other people's mistakes! I finally got my Z80 build going: a 4 MHz beast underclocked to 1 MHz for stability, and it's running the best OS out there, BASIC. I'd really love to add proper video and audio output next, but for the moment, serial it is.
EPC adds 3-phase BLDC motor drive inverter evaluation board for humanoid robot joint applications
Infineon presents MCU and sensor solutions for the future of AI, IoT, mobility, and robotics
Next-generation embedded systems are essential for applications in the rapidly evolving connected world. They range from high-performance sensors for capturing critical data to advanced microcontrollers (MCUs) that process and analyse this data. At Embedded World 2026, taking place from March 10 to 12, 2026, in Nuremberg, Germany, Infineon Technologies AG will demonstrate how its innovative semiconductor solutions enable green and efficient energy, clean and safe mobility, and an intelligent and secure IoT. True to the motto “Driving decarbonization and digitalisation. Together,” the Infineon booth in Hall 4A, booth 138, will present highlights for applications ranging from AI and IoT to automotive and robotics that contribute to a more sustainable future.
Infineon’s highlight topics at Embedded World 2026
Microcontrollers – the core of embedded intelligence: MCUs are the central processing units of modern embedded systems, coordinating control, computation, and connectivity in countless applications. In Nuremberg, Infineon will demonstrate its comprehensive MCU portfolio through live demos that illustrate real-world use cases, such as:
- Edge AI and robotics demonstrations, where Infineon PSOC and AURIX MCUs enable deterministic real-time processing, adaptive control, advanced safety, and secured connectivity
- Demos targeting software-defined vehicles, including the TRAVEO SDV Zonal Demo, highlighting how automotive MCUs support zonal E/E architectures, OTA updates, and software-driven innovation
- Industrial and IoT applications, showing how energy-efficient MCUs combine performance, safety, and cybersecurity to enable smart devices and help manufacturers comply with the upcoming European Cyber Resilience Act (CRA)
XENSIV sensors – bridging the physical and digital worlds:
Sensors act as the interface between the real world and digital processing, enabling precise data acquisition for control, monitoring, and decision-making processes. At Embedded World 2026, Infineon will present its XENSIV sensor portfolio, demonstrating how sensor data powers advanced systems across automotive, industrial, and consumer electronics. The demos include:
- Robotics and Edge AI demos in which Infineon XENSIV sensors enable robots to see, hear, and feel, providing the environmental and contextual awareness required for safe interaction and autonomous behaviour
- Automotive and SDV-related use cases, showcasing how radar, magnetic, and current sensors support perception, monitoring, and zonal architectures in modern vehicles
- IoT and industrial demonstrations, including the next-generation XENSIV CMOS 60 GHz radar for IoT. These illustrate how MEMS microphones and other XENSIV sensors deliver reliable, high-fidelity data for connected and energy-efficient devices
In addition, Infineon experts will be giving in-depth presentations demonstrating how the company’s MCU and sensor solutions enable efficient, secure, and rapid innovations in areas such as AI, robotics, IoT, and software-defined vehicles.
The post Infineon presents MCU and sensor solutions for the future of AI, IoT, mobility, and robotics appeared first on ELE Times.
UV LED prices rising by 5% in Q1 due to increased material and labor costs
Proud to say I finally got my hands on a RIGOL DHO804
submitted by /u/ieatgrass0
Compound semiconductor materials market growing at 14% CAGR to almost $5.2bn by 2031
Simple shorts sniffer

Recently, frequent and favorite contributor Nick Cornford gave us a cool and novel acoustic-interface design for a super sub-ohmmeter capable of audibly sniffing out defects in PWBs: “Tuneful track-tracing.”
Figure 1’s design shamelessly nicks Cornford’s concept. It stretches the resistance sensing range by a few decades, thus spanning single-digit milliohms to double-digit ohms. This adds extra versatility for locating spurious connections in both loaded boards and boards with shorts in ground planes. Here’s how it works.

Figure 1 Audible milliohmmeter output frequency is linear versus resistance over several orders of magnitude.
Wow the engineering world with your unique design: Design Ideas Submission Guide
A 50-mA excitation current is provided to the PWB under test by R5 via connections A (source) and B (half-Kelvin sense and current return). D1 limits the maximum developed voltage to ~700 mV. This prevents (potentially damaging) forward bias of components on loaded boards in case the short being sniffed unexpectedly disappears.
The current return side of B consists of the (approximately) known resistance (44 mΩ) of a 41-inch length of 24 AWG copper wire. The resulting 44 mΩ × 0.05 A ≈ 2.2 mV drop provides a null reference for the A1a voltage-to-current amplifier. We’ll discuss that more shortly (no pun?).
The probe’s voltage-mode signal is converted to current mode by transconductance amplifier Q1/A1a, the associated resistor network, and range selection switch S1. R6 provides static-discharge protection for A1’s input pin while developing only µV of offset from A1’s pA-level bias current. S1 provides two frequency/resistance ranges: 100 Hz/Ω and 10 kHz/Ω.
The shorts-sniffing process consists of sliding probe C along the problematic path on the PWB while listening to the resulting audio output. Its frequency rises or falls as the resistance between the probe contact and Kelvin connection B rises or falls. Maximum resolution results if a quick initial nulling of offset voltage is done via Null pot R1, which provides up to ±2 mV of input offset adjustment to cancel the op-amp offset for a zero (or near-zero) Hz output when probe C is held at the point of excitation-current entry to the PWB under test. Of course, you won’t hear the actual fundamental frequency when oscillation is that slow, only the (annoying) buzz of the square wave’s rising and falling edges.
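The resulting frequency is just a proportionality on whichever range S1 selects; a rough numeric illustration (function and names are mine, assuming ideal nulling):

```python
RANGES_HZ_PER_OHM = {"low": 100.0, "high": 10_000.0}  # S1: 100 Hz/ohm or 10 kHz/ohm

def output_freq_hz(r_ohms, rng="low"):
    """Ideal (offset-nulled) output frequency for a sensed resistance."""
    return RANGES_HZ_PER_OHM[rng] * r_ohms

# A 5-milliohm stretch of defective trace, heard on the high range:
print(f"{output_freq_hz(0.005, 'high'):.0f} Hz")  # 50 Hz
```

On the low range the same 5 mΩ would give only 0.5 Hz, which is why the high range exists for milliohm-level work.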
The A1b (more or less symmetrical) triwave/squarewave oscillator itself is built around the two-way current mirror comprising Q2, Q3, and D2, as described in this earlier DI: “A two-way mirror—current mirror that is.”
The mirror sources current into timing cap C1, linearly ramping it up, when A1b’s pin 7 is high, and sinks current when pin 7 is low, ramping it down. The resulting 1-Vpp triwave on C1 and the squarewave on pin 7 are approximately symmetrical.
Its actual frequency can be over the range from the subsonic to the ultrasonic, but of course (by definition), little information will be relayed to your ear by either. Thence cometh the utility of range switch S1.
Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974. They have included best Design Idea of the year in 1974 and 2001.
Related Content
- Tuneful track-tracing
- A two-way mirror—current mirror that is
- A current mirror reduces Early effect
- Active two-way current mirror
The post Simple shorts sniffer appeared first on EDN.
R&S advances AI-RAN testing using digital twins in collaboration with NVIDIA
Rohde & Schwarz will showcase a new milestone in AI-driven wireless system testing at MWC Barcelona. The testbed, developed in collaboration with NVIDIA, integrates hardware-in-the-loop site-specific channel emulation using the NVIDIA Sionna Research Kit, enabling testing of AI-RAN applications under realistic channel conditions. The demonstration highlights the long-term collaboration of Rohde & Schwarz and NVIDIA, focusing on prototyping and validation of AI-RAN innovation with cutting-edge test and measurement solutions.
Evolving from prior proofs-of-concept in advanced neural receiver design – including custom constellations for pilotless communication – the new testbed advances from link-level validation to system-level verification using the full 5G NR protocol stack.
Powered by a single NVIDIA DGX Spark, the NVIDIA Sionna Research Kit runs a software-defined 5G RAN based on OpenAirInterface, while supporting AI inference workloads that comply with the strict real-time constraints of wireless systems. To showcase the flexibility of the research platform, a novel AI/ML-enhanced link adaptation algorithm has been integrated into the end-to-end system. It dynamically adjusts the downlink modulation and coding scheme (MCS) to optimise spectral efficiency and link reliability. The AI-driven link adaptation can learn not only site-specific propagation characteristics but also user equipment-specific behaviour on the fly, emphasising the need for end-to-end testbeds that capture these effects.
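Setting the AI enhancement aside, conventional outer-loop link adaptation already works roughly this way: pick the highest MCS whose effective SNR clears a threshold, and nudge an offset from ACK/NACK feedback. A toy sketch (thresholds and step sizes invented, no relation to the actual algorithm in the testbed):

```python
# Ordered (MCS index, minimum effective SNR in dB) pairs -- values invented.
MCS_SNR_THRESH_DB = [(0, -2.0), (7, 5.0), (15, 12.0), (23, 19.0), (28, 24.0)]

def select_mcs(snr_db, offset_db=0.0):
    """Pick the highest MCS whose SNR threshold the (offset-adjusted) SNR clears."""
    eff = snr_db + offset_db
    best = MCS_SNR_THRESH_DB[0][0]
    for mcs, thresh in MCS_SNR_THRESH_DB:
        if eff >= thresh:
            best = mcs
    return best

def update_offset(offset_db, ack, target_bler=0.1, step_db=0.1):
    """Classic outer-loop step: small credit on ACK, larger penalty on NACK."""
    return offset_db + (step_db * target_bler if ack else -step_db * (1 - target_bler))

print(select_mcs(13.0))  # picks MCS 15 with these toy thresholds
```

An AI/ML-enhanced scheme like the one demonstrated would, in effect, learn site- and UE-specific versions of those thresholds on the fly rather than using a fixed table.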
The testbed integrates the R&S SMW200A vector signal generator featuring dynamic channel emulation capabilities and the FSW signal and spectrum analyser. Jointly, these instruments enable the emulation of complex site-specific radio channels, seamlessly interfacing with the NVIDIA Sionna RT differentiable ray-tracing software. This closed-loop setup enables researchers and developers to evaluate the performance of novel AI-driven RAN features under dynamic, site-specific RF conditions – all without leaving the lab.
Gerald Tietscher, Vice President Signal Generators, Power Supplies and Meters at Rohde & Schwarz, said: “We’re excited to continue our ongoing collaboration with NVIDIA with this latest proof-of-concept for testing AI-enhanced base stations for both 5G-Advanced and 6G under realistic propagation conditions. Leveraging digital twin technology and ray tracing, this approach aims to bridge the gap between AI-driven wireless simulations and real-world deployment, facilitating more efficient and accurate testing of next-generation receiver architectures.”
Soma Velayutham, global industry business development lead for telecommunications at NVIDIA, said: “Synthetic data generation is transforming the way we train and validate AI-RAN systems by ensuring accuracy, scalability, and privacy, especially in settings of sparse data. Rohde & Schwarz, leveraging the NVIDIA Sionna Research Kit, exemplifies how industry-leading expertise and innovative technology can come together to accelerate progress in this critical field.”
The post R&S advances AI-RAN testing using digital twins in collaboration with NVIDIA appeared first on ELE Times.
A real-world approach for AI-driven semiconductor manufacturing

The semiconductor manufacturing industry faces an unprecedented data challenge. For the newest devices, test programs can contain over a million test items, generating gigabytes of data per chip across probe, assembly, and test operations. The largest deployments have reached the multi-petabyte range, creating a fundamental problem: traditional business intelligence tools simply cannot handle semiconductor-scale data with millions of columns and rows.
Public comments from three semiconductor executives sum up the challenge. “As a result of the increased complexity of advanced packaging, the amount of manufacturing and test data that semiconductor companies need to analyze has increased sixfold since 2022,” recently commented Mike Campbell, Qualcomm’s chief supply chain officer.
At the same event, Aziz Safa, corporate VP and GM of Intel Foundry Automation, had this to say: “We have 600 petabytes of data across Intel. The challenge that we have is to be able to run algorithms in the areas where we need that data to solve problems.”
And John Kibarian, CEO of PDF Solutions, mirrored those remarks. In many cases, he said, no more than 5% of the collected semiconductor manufacturing data is used in analytics. Yet more than ever, access to timely analytics is critical to quickly ramp the yield of new advanced process nodes or ensure the quality of complex packages. In this context, it’s critical to find new innovative ways to scale the ability to analyze semiconductor data.
One comprehensive strategy includes a plan to enhance the capability of a data platform, already widely used across the semiconductor industry, to address this challenge by combining scalable analytics infrastructure with advanced AI capabilities, including large language models (LLMs) and autonomous agents.
This approach represents a fundamental rethinking of how semiconductor manufacturers can extract actionable insights from massive, complex datasets.
The scalability problem
Traditional business intelligence (BI) tools face critical limitations in semiconductor manufacturing environments. They rely on local memory, which severely restricts analysis and machine learning capabilities. They also lack computational and organizational scalability, a shortfall often tied to the specific characteristics of semiconductor data, which can have hundreds of thousands or even millions of parameters to analyze.
Think of a table with a million columns and hundreds of thousands of rows. Visualizing this type of dataset in a traditional data analytics or BI tool has reached its limit, and this approach will not address the future needs of an industry where data size and complexity keep increasing.
Typically, engineers develop bespoke scripts based on summary statistics disconnected from the original data sources, and these scripts are usually deployed without the infrastructure for robust sharing across the organization.
One answer is a new parallel and distributed data architecture with dynamic partitioning. Rather than bringing raw data to the client for analysis, the system keeps data in the server layer and delivers only the visualizations needed by users. This thin-client approach enables the system to scale dynamically based on current needs by caching in the data layer for faster access and pre-configured analytics running continuously across all available data.
The results are striking. Benchmark testing shows approximately 25-fold performance improvements on typical large test programs with the ability to work with one million test items and beyond, a scale of analysis previously impossible.
The system achieves this through parallelizable performance across both rows (individual die) and columns (test parameters), combining static compute nodes with burst cloud computing for cost-effective scaling to extremely large datasets.
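One way to picture the row/column-parallel, thin-client idea: each row partition (a batch of die) reduces its columns (test parameters) to small mergeable summaries, and only those summaries, never the raw data, travel onward. A toy sketch under those assumptions (names invented, not the vendor's API):

```python
from concurrent.futures import ThreadPoolExecutor

def summarize(partition):
    """Per-column (count, sum) summary for one row partition of test results."""
    return {col: (len(vals), sum(vals)) for col, vals in partition.items()}

def merge(a, b):
    """Combine two partial summaries without ever touching raw rows."""
    out = dict(a)
    for col, (n, s) in b.items():
        n0, s0 = out.get(col, (0, 0.0))
        out[col] = (n0 + n, s0 + s)
    return out

# Two row partitions (say, two wafers) sharing the same test columns:
parts = [
    {"vth": [0.42, 0.44], "idd": [1.1, 1.2]},
    {"vth": [0.43], "idd": [1.3]},
]
with ThreadPoolExecutor() as ex:           # partitions summarized in parallel
    partials = list(ex.map(summarize, parts))
total = merge(*partials)
means = {col: s / n for col, (n, s) in total.items()}
print(means)  # per-column means: vth ~ 0.43, idd ~ 1.2
```

Because the summaries are associative, partitions can be processed on static nodes or burst cloud workers and merged in any order.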
Deploying AI models at scale across enterprise
Deploying AI in semiconductor manufacturing requires more than just training models; it demands a complete operational infrastructure. The infrastructure’s architectural strategy addresses three major operational challenges: deployment bottlenecks caused by manual handoffs and brittle integrations; data friction from building custom pipelines instead of leveraging existing systems; and governance risks from poor lineage between production models and training parameters.
One tool gaining market traction, used by data scientists to take semiconductor models from code to production, focuses on deploying models at the edge. Add-on capabilities include the ability for engineers to add their own models.
An enterprise-grade model registry will enable model lifecycle governance, tracking, and sharing, with full data traceability ensuring that any model’s training inputs are always known.
Breaking down data silos
One of the most significant challenges in semiconductor manufacturing is the fragmentation of critical data across isolated systems. Yield data sits in one place, design diagnosis information in another, and equipment telemetry in yet another. This fragmentation blocks the correlation of volume yield data with physical layout features and prevents engineers from connecting specific process excursions with final yield outcomes.
One solution is extensive data integration efforts via a platform extending beyond traditional manufacturing analytics supported by a semiconductor-specific end-to-end data model.
Central to this effort is the development of a semiconductor-specific semantic data layer that maps the complex relationships between yield, design, process, and tool data. This allows alignment and linking data across domains and sources in the data platform. It also allows LLMs to interpret disparate data types as a unified whole rather than struggling with disconnected information sources.
Workflows as the foundation
A key architectural decision in the platform is to treat workflows as the internal language of the system. Every analytic operation—whether rules, machine learning pipelines, or batch analytics—is expressed as a workflow.
This provides several critical benefits. Workflows serve as the long-term memory of the system, capturing not just results, but the complete methodology used to achieve them. They can be created from learn mode, through LLMs, manually, or programmatically, and can be embedded within larger workflows for maximum reusability. Engineers may never need to directly interact with a workflow, but the capability is there when needed.
Critically, workflows act as semiconductor-specific content and context, encoding best practices as reusable playbooks. They provide transparency into how results are achieved and serve as guardrails for AI reasoning, helping prevent the hallucinations that can occur when LLMs operate without domain constraints.
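Treating workflows as plain, inspectable data is easy to sketch: each step is a named record, a workflow is a list of steps, and one workflow can embed another. A toy illustration (all names invented):

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Step:
    name: str
    fn: Callable[[Any], Any]

@dataclass
class Workflow:
    name: str
    steps: list = field(default_factory=list)

    def run(self, data):
        for step in self.steps:  # the full methodology is recorded, not just the result
            data = step.fn(data)
        return data

# A reusable "yield trend" playbook, embedded inside a larger report workflow:
trend = Workflow("yield_trend", [Step("last_week", lambda xs: xs[-7:])])
report = Workflow("weekly_report", [Step("trend", trend.run),
                                    Step("mean", lambda xs: sum(xs) / len(xs))])
print(report.run([0.91, 0.95, 0.93, 0.96, 0.94, 0.97, 0.92, 0.98]))
```

Because the steps are named data rather than opaque code, they can be stored, replayed, and audited, which is exactly what makes them usable as guardrails for LLM-generated analyses.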
The agentic LLM platform
The goal is to enable engineers to interact with manufacturing data at a higher level of abstraction. Rather than requiring deep technical knowledge of query languages and data structures, the result is a system where engineers can ask natural language questions and receive actionable insights.
Achieving this vision requires a “Semantic, Agentic, and Secure” infrastructure. The semantic layer is built on domain expertise, creating semiconductor-native knowledge graphs that encode the fundamental data hierarchy of manufacturing. This anchors LLM reasoning in the structural reality of manufacturing data, eliminating ambiguity and providing the ground truth context needed to prevent hallucinations.
For example, the system understands that CV refers to Characterization Vehicle, that yield represents the results of die binning, and that the data hierarchy flows from lot to wafer to die to package. It knows that common analytical tasks include yield trending, bin Pareto analysis, and univariate screening. This enables engineers to ask questions like “Show me the yield trend over the last week” or “What is the root cause of low yield in lot XX?” and receive meaningful, accurate responses.
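Such a semantic layer can be pictured as a small domain glossary plus a fixed containment hierarchy that a query planner consults before any LLM reasoning; a toy sketch (terms taken from the article, structure invented):

```python
HIERARCHY = ["lot", "wafer", "die", "package"]  # fixed data-hierarchy order

GLOSSARY = {
    "CV": "Characterization Vehicle",
    "yield": "results of die binning",
}

def contains(outer, inner):
    """True if `outer` sits above `inner` in the manufacturing hierarchy."""
    return HIERARCHY.index(outer) < HIERARCHY.index(inner)

def ground(term):
    """Resolve a domain term to its ground-truth meaning, if known."""
    return GLOSSARY.get(term, term)

print(ground("CV"))            # Characterization Vehicle
print(contains("lot", "die"))  # True
```

Anchoring every query against fixed structures like these, rather than letting the model guess, is what keeps answers to questions such as "show me the yield trend" unambiguous.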
The platform integrates a model context protocol for a truly agentic system. Rather than just summarizing text or answering questions, the system can autonomously plan and execute complete workflows from raw data ingestion through complex plot generation.
To ensure reliability and transparency, all agentic tasks are executed using scalable analytics workflows. These can be viewed, saved, and modified by engineers at any time, making the LLM’s actions fully transparent.
To protect sensitive semiconductor manufacturing data, a fully air-gapped, on-premises LLM infrastructure option, designed for intellectual property sovereignty, can be added. This ensures that sensitive yield data and proprietary models never leave secure firewalls, eliminating reliance on third-party cloud providers.
The path forward
A platform like this requires thorough research and development on technology selection, validation and tuning, engaging a large group of architects, developers, quality assurance specialists, designers, and product managers.
This type of platform addresses the critical industry challenge: de-risking AI adoption by securely scaling execution and maximizing return on investment from legacy data, while simultaneously future-proofing infrastructure for the rapidly emerging age of LLMs and autonomous agents.
By combining massive-scale data processing, enterprise-grade operational infrastructure, intelligent data integration, and agentic LLM capabilities, all grounded in deep semiconductor domain expertise, the industry can be transformed. The platform can identify how value is extracted from the exponentially growing volumes of manufacturing data.
The approach suggests a future where engineers spend less time wrestling with data infrastructure and more time solving the complex yield and quality challenges that define success in semiconductor manufacturing.
Peter L. Kostka is a Vancouver-based technology entrepreneur with a track record of scaling complex deep-tech concepts into successful commercial outcomes. Currently, he serves as the director of product management for AI at PDF Solutions, where he spearheads the AI technology roadmap and leads rapid prototyping for semiconductor and battery manufacturing sectors.
Editor’s Note
Presentations by Qualcomm’s Mike Campbell (“AI-Driven Innovation in the Semiconductor Industry”) and Intel’s Aziz Safa (“Enabling AI/ML strategy using the PDF Suite”) were given at the 2025 PDF Solutions Users Conference.
John Kibarian’s “Revolutionizing Semiconductor Collaboration: The Emergence of AI-Driven Industry Platforms” keynote was presented at SEMICON West 2025.
Related Content
- Coming Soon: Low-Cost Mini Fabs
- 5 ways manufacturers benefit from AI in chip design
- Tapping AI for Leaner, Greener Semiconductor Fab Operations
- Addressing the Biggest Bottleneck in the AI Semiconductor Ecosystem
- Unlocking compound semiconductor manufacturing’s potential requires yield management
The post A real-world approach for AI-driven semiconductor manufacturing appeared first on EDN.



