Feed aggregator
Tune 555 frequency over 4 decades

The versatility of the venerable LMC555 CMOS analog timer is so well known it’s virtually a cliché, but sometimes it can still surprise us. The circuit in Figure 1 is an example. In it, a single linear pot in a simple RC network sets the frequency of 555 square-wave oscillation over a range from less than 10 Hz to more than 100 kHz, exceeding a 10,000:1 (four-decade, thirteen-octave) ratio. Here’s how it works.
Figure 1 R1 sets U1 frequency from <10 Hz to >100 kHz.
Potentiometer R1 provides variable attenuation of U1’s 0-to-V+ peak-to-peak square-wave output to the R4/R5/C1 divider/integrator. The result is a sum of an abbreviated timing-ramp component developed by C1 sitting on top of an attenuated square-wave component developed by R5. This composite waveshape is fed to the Trigger and Threshold pins of U1, resulting in the frequency-vs-R1-position function plotted on Figure 2’s semi-log graph.

Figure 2 U1 oscillation range vs R1 setting is so wide it needs a log scale to accommodate it.
Curvature of the function does get pretty radical as R1 approaches its limits of travel. Nevertheless, log conformity is fairly decent over the middle 10% to 90% of the pot’s travel and the resulting 2 decades of frequency range. This is sketched in red in Figure 3.

Figure 3 Reasonably good log conformity is seen over mid-80% of R1’s travel.
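The overall shape of Figures 2 and 3 can be reproduced with a highly simplified behavioral model: each output transition delivers an instantaneous step (via R5) plus a ramp (via C1) to the Trigger/Threshold node, and that node must cross the 555’s V+/3-wide hysteresis window every half cycle. The Python sketch below only illustrates the idea; the constants are arbitrary assumptions and do not correspond to the actual Figure 1 component values.

```python
# Toy first-order model of the mechanism described above, not the exact
# Figure 1 network: each output transition contributes an instantaneous step
# (via R5) plus a ramp (via C1) at the Trigger/Threshold node, which must
# traverse the 555's V+/3 hysteresis window every half cycle.  All constants
# are illustrative assumptions, not values from the schematic.
VPLUS = 5.0        # assumed supply (V)
TAU = 3.0e-4       # assumed ramp time scale (s)
STEP_GAIN = 0.32   # fraction of V+ the step covers at full pot setting (< 1/3)

def frequency(alpha):
    """Approximate oscillation frequency vs. normalized pot setting alpha (0..1]."""
    window = VPLUS / 3.0              # hysteresis window between comparator thresholds
    step = STEP_GAIN * alpha * VPLUS  # portion of the window crossed instantly
    slope = alpha * VPLUS / TAU       # ramp rate, also scaled by the pot setting
    t_half = (window - step) / slope  # time for the ramp to cover the remainder
    return 1.0 / (2.0 * t_half)

for alpha in (0.01, 0.1, 0.5, 0.9, 1.0):
    print(f"pot at {alpha:4.2f} -> {frequency(alpha):9.0f} Hz")
```

In this toy model, frequency falls roughly in proportion to the pot setting at the low end, while at the high end the step component approaches the full hysteresis window and the half-period collapses, which is where the multi-decade range and the steep curvature near the travel limits come from.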
Of course, as R1 is dialed near its limits, frequency precision (or the lack of it) becomes very sensitive to production tolerances in U1’s internal voltage-divider network and in the circuit’s external resistors.
This is why U1’s frequency output is taken from pin 7 (Discharge) instead of pin 3 (Output) to at least minimize the effects of loading from making further contributions to instability.
Nevertheless, the strong suit of this design is definitely its dynamic range. Precision? Not so much.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Another weird 555 ADC
- Gated 555 astable hits the ground running
- More gated 555 astable multivibrators hit the ground running
- Inverted MOSFET helps 555 oscillator ignore power supply and temp variations
The post Tune 555 frequency over 4 decades appeared first on EDN.
Emerging trends in battery energy storage systems

Battery energy storage systems (BESSes) are increasingly being adopted to improve efficiency and stability in power distribution networks. By storing energy from both renewable sources, such as solar and wind, and the conventional power grid, BESSes balance supply and demand, stabilizing power grids and optimizing energy use.
This article examines emerging trends in BESS applications, including advances in battery technologies, the development of hybrid energy storage systems (HESSes), and the introduction of AI-based solutions for optimization.
Battery technologies
Lithium-ion (Li-ion) is currently the main battery technology used in BESSes. Despite the use of expensive raw materials, such as lithium, cobalt, and nickel, the global average price of Li-ion battery packs has declined in 2025.
BloombergNEF reports that Li-ion battery pack prices have fallen to a new low this year, reaching $108/kWh, an 8% decrease from the previous year. The research firm attributes this decline to excess cell manufacturing capacity, economies of scale, the increasing use of lower-cost lithium-iron-phosphate (LFP) chemistries, and a deceleration in the growth of electric-vehicle sales.
Using iron phosphate as the cathode material, LFP batteries achieve long cycle life and good performance at high temperatures. They are often used in applications in which durability and reliable operation under adverse conditions are important, such as grid energy storage systems. However, their energy density is lower than that of traditional Li-ion batteries.
Although Li-ion batteries will continue to lead the BESS market due to their higher efficiency, longer lifespan, and deeper depth of discharge compared with alternative battery technologies, other chemistries are making progress.
Flow batteries
Long-life storage systems, capable of storing energy for eight to 10 hours or more, are suited for managing electricity demand, reducing peaks, and stabilizing power grids. In this context, “reduction-oxidation [redox] flow batteries” show great promise.
Unlike conventional Li-ion batteries, the liquid electrolytes in flow batteries are stored separately and then flow (hence the name) into the central cell, where they react in the charging and discharging phases.
Flow batteries offer several key advantages, particularly for grid applications with high shares of renewables. They enable long-duration energy storage, covering many hours, such as nighttime, when solar generation is not present. Their raw materials, such as vanadium, are generally abundant and face limited supply constraints. Material concerns are further mitigated by high recyclability and are even less significant for emerging iron-, zinc-, or organic-electrolyte technologies.
Flow batteries are also modular and compact, inherently safe due to the absence of fire risk, and highly durable, with service lifetimes of at least 20 years with minimal performance degradation.
The BESSt Company, a U.S.-based startup founded by a former Tesla engineer, has unveiled a redox flow battery technology that is claimed to achieve an energy density up to 20× higher than that of traditional, vanadium-based flow storage systems.
The novel technology relies on a zinc-polyiodide (ZnI2) electrolyte, originally developed by the U.S. Department of Energy’s Pacific Northwest National Laboratory, as well as a proprietary cell stack architecture that relies on undisclosed, Earth-abundant alloy materials sourced domestically in the U.S.
The company’s residential offering is designed with a nominal power output of 20 kW, paired with an energy storage capacity of 25 kWh, corresponding to an average operational duration of approximately five hours. For commercial and industrial applications, the proposed system is designed to scale to a power rating of 40 kW and an energy capacity of 100 kWh, enabling an average usage time of approximately 6.5 hours.
This technology (Figure 1) is well-suited for integration with solar generation and other renewable energy installations, where it can deliver long-duration energy storage without performance degradation.
Figure 1: The BESSt Company’s ZnI2 redox flow battery system (Source: The BESSt Company)
Sodium-ion batteries
Sodium-ion batteries are a promising alternative to Li-ion batteries, primarily because they rely on more abundant raw materials. Sodium is widely available in nature, whereas lithium is relatively scarce and subject to supply chains that are vulnerable to price volatility and geopolitical constraints. In addition, sodium-ion batteries use aluminum as a current collector instead of copper, further reducing their overall cost.
Blue Current, a California-based company specializing in solid-state batteries, has received an $80 million Series D investment from Amazon to advance the commercialization of its silicon solid-state battery technology for stationary storage and mobility applications. The company aims to establish a pilot line for sodium-ion battery cells by 2026.
Its approach leverages Earth-abundant silicon and elastic polymer anodes, paired with fully dry electrolytes across multiple formulations optimized for both stationary energy storage and mobility. Blue Current said its fully dry chemistry can be manufactured using the same high-volume equipment employed in the production of Li-ion pouch cells.
Sodium-ion batteries can be used in stationary energy storage, solar-powered battery systems, and consumer electronics. They can be transported in a fully discharged state, making them inherently safer than Li-ion batteries, which can suffer degradation when fully discharged.
Aluminum-ion batteries
Project INNOBATT, coordinated by the Fraunhofer Institute for Integrated Systems and Device Technology (IISB), has completed a functional battery system demonstrator based on aluminum-graphite dual-ion batteries (AGDIB).
Rechargeable aluminum-ion batteries represent a low-cost and inherently non-flammable energy storage approach, relying on widely available materials such as aluminum and graphite. When natural graphite is used as the cathode, AGDIB cells reach gravimetric energy densities of up to 160 Wh/kg while delivering power densities above 9 kW/kg. The electrochemical system is optimized for high-power operation, enabling rapid charge and discharge at elevated C rates and making it suitable for applications requiring a fast dynamic response.
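As a quick sanity check on what those figures imply, dividing the quoted power density by the quoted energy density gives the sustained C-rate and the corresponding full-power discharge time:

```python
# Back-of-the-envelope check on the quoted AGDIB figures: the ratio of
# specific power to specific energy gives the sustained C-rate and the
# corresponding full-power discharge time.
specific_energy_wh_per_kg = 160.0   # quoted gravimetric energy density
specific_power_w_per_kg = 9000.0    # quoted power density (9 kW/kg)

c_rate = specific_power_w_per_kg / specific_energy_wh_per_kg   # in 1/h
discharge_time_s = 3600.0 / c_rate

print(f"sustained C-rate ~{c_rate:.0f}C, full-power discharge in ~{discharge_time_s:.0f} s")
```

That works out to roughly a 56C rate, or about a minute from full to empty at maximum power, consistent with the fast-dynamic-response positioning.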
In the representative system-level test (Figure 2), the demonstrator combines eight AGDIB pouch cells with a wireless battery management system (BMS) derived from the open-source foxBMS platform. Secure RF communication is employed in conjunction with a high-resolution current sensor based on nitrogen-vacancy centers in diamond, enabling precise current measurement under dynamic operating conditions.
Figure 2: A detailed block diagram of the INNOBATT battery system components (Source: Elisabeth Iglhaut/Fraunhofer IISB)
Li-ion battery recycling
Second-life Li-ion batteries retired from applications such as EVs often maintain a residual storage capacity and can therefore be repurposed for BESSes, supporting circular economy standards. In Europe, the EU Battery Passport—mandatory beginning in 2027 for EV, industrial, BESS (over 2 kWh), and light transport batteries—will digitally track batteries by providing a QR code with verified data on their composition, state of health, performance (efficiency, capacity), and carbon footprint.
This initiative aims to create a circular economy, improving product sustainability, transparency, and recyclability through digital records that detail information about product composition, origin, environmental impact, repair, and recycling.
HESSes
A growing area of innovation is represented by the HESS, which integrates batteries with alternative energy storage technologies, such as supercapacitors or flywheels. Batteries offer high energy density but relatively low power density, whereas flywheels and supercapacitors provide high power density for rapid energy delivery but store less energy overall.
By combining these technologies, HESSes can better balance both energy and power requirements. Such systems are well-suited for applications such as grid and microgrid stabilization, as well as renewable energy installations, particularly solar and wind power systems.
Utility provider Rocky Mountain Power (RMP) and Torus Inc., an energy storage solutions company, are collaborating on a major flywheel and BESS project in Utah. The project integrates Torus’s mechanical flywheel technology with battery systems to support grid stability, demand response, and virtual power plant applications.
Torus will deploy its Nova Spin flywheel-based energy storage system (Figure 3) as part of the project. Flywheels operate using a large, rapidly spinning cylinder enclosed within a vacuum-sealed structure. During charging, electrical energy powers a motor that accelerates the flywheel, while during discharge, the same motor operates as a generator, converting the rotational energy back into electricity. Flywheel systems offer advantages such as longer lifespans compared with most chemical batteries and reduced sensitivity to extreme temperatures.
This collaboration is part of Utah’s Operation Gigawatt initiative, which aims to expand the state’s power generation capacity over the next decade. By combining the rapid response of flywheels with the longer-duration storage of batteries, the project delivers a robust hybrid solution designed for a service life of more than 25 years while leveraging RMP’s Wattsmart Battery program to enhance grid resilience.
Figure 3: Torus Nova Spin flywheel-based energy storage (Source: Torus Inc.)
AI adoption in BESSes
By utilizing its simulation and testing solution Simcenter, Siemens Digital Industries Software demonstrates how AI reinforcement learning (RL) can help develop more efficient, faster, and smarter BESSes.
The primary challenge of managing renewable energy sources, such as wind power, is determining the optimal charge and discharge timing based on dynamic variables such as real-time electricity pricing, grid load conditions, weather forecasts, and historical generation patterns.
Traditional control systems rely on simple, manually entered rules, such as storing energy when prices fall below weekly averages and discharging when prices rise. On the other hand, RL is an AI approach that trains intelligent agents through trial and error in simulated environments using historical data. For BESS applications, the RL agent learns from two years of weather patterns to develop sophisticated control strategies that provide better results than manual programming capabilities.
The RL-powered smart controller continuously processes wind speed forecasts, grid demand levels, and market prices to make informed, real-time decisions. It learns to charge batteries during periods of abundant wind generation and low prices, then discharge during demand spikes and price peaks.
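For readers who want a feel for the technique, the toy Python sketch below trains a tabular Q-learning agent to charge and discharge a battery against a synthetic price signal. Everything in it (the price model, state discretization, reward, and battery parameters) is an illustrative assumption, not the Siemens Simcenter workflow described above.

```python
import numpy as np

# Toy illustration of reinforcement learning for battery dispatch: a tabular
# Q-learning agent learns when to charge or discharge against a synthetic
# price signal.  All numbers are illustrative assumptions.
rng = np.random.default_rng(0)
CAPACITY_KWH = 100.0
STEP_KWH = 25.0                      # energy moved per time step
ACTIONS = (-1, 0, +1)                # discharge, hold, charge
N_PRICE_BINS, N_SOC_BINS = 4, 5

# Synthetic day/night electricity price in $/MWh with noise.
prices = 50 + 30 * np.sin(np.linspace(0, 40 * np.pi, 4000)) + rng.normal(0, 5, 4000)

def state_of(price, soc):
    p_bin = int(np.digitize(price, [30, 50, 70]))                 # 0..3
    s_bin = min(int(soc / CAPACITY_KWH * N_SOC_BINS), N_SOC_BINS - 1)
    return p_bin, s_bin

Q = np.zeros((N_PRICE_BINS, N_SOC_BINS, len(ACTIONS)))
lr, gamma, eps = 0.1, 0.95, 0.1
soc = CAPACITY_KWH / 2

for t in range(len(prices) - 1):
    s = state_of(prices[t], soc)
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
    delta = float(np.clip(ACTIONS[a] * STEP_KWH, -soc, CAPACITY_KWH - soc))
    soc += delta
    reward = -delta * prices[t] / 1000.0        # buying costs money, selling earns it
    s_next = state_of(prices[t + 1], soc)
    Q[s][a] += lr * (reward + gamma * Q[s_next].max() - Q[s][a])

# Learned greedy action per (price bin, SoC bin): 0 = discharge, 1 = hold, 2 = charge.
print(np.argmax(Q, axis=2))
```

A production system would replace this toy environment with a physics-based digital twin and far richer state inputs, such as the wind forecasts and grid-demand signals mentioned above.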
The practical implementation of Siemens’s proposed approach combines system simulation tools to create digital twins of BESS infrastructure with RL training environments. The resulting controller can be deployed directly to hardware systems.
The post Emerging trends in battery energy storage systems appeared first on EDN.
Veeco and imec develop 300mm-compatible process to enable integration of barium titanate on silicon photonics
India–EU FTA to Empower India’s Domestic Electronics Manufacturing Industry to Reach USD 100 Billion in the Following Decade
The India–European Union Free Trade Agreement (FTA) is poised to significantly reshape India’s electronics landscape, with industry estimates indicating it could scale exports to nearly $50 billion by 2031 across mobile phones, IT hardware, consumer electronics, and emerging technology segments—up from the current bilateral electronics trade of about $18 billion.
A Global Supplier
“The agreement aligns directly with India’s shift from scale-led domestic manufacturing to export-oriented integration with global value chains, while promoting inclusive growth across regions and skill levels,” says Pankaj Mohindroo, Chairman, ICEA.
Emphasising the significance of the FTA, he adds that in electronics, the FTA creates a credible pathway to build exports of nearly USD 50 billion by 2031 across electronic goods, including mobile phones, consumer electronics, and IT hardware. He further adds that the FTA carries the potential to exceed USD 100 billion in the following decade, anchored in manufacturing depth, job creation, innovation, and India’s emergence as a trusted global supplier.
Capitalising on a standards-driven market
At a time when global trade and supply chains are being reshaped by uncertainty and fragmentation, the India–EU FTA underscores a shared commitment to stability, predictability, and a trusted economic partnership. As the world’s fourth- and second-largest economies respectively, India and the European Union together account for nearly 25 percent of global GDP and close to one-third of global trade.
For India, the agreement goes beyond expanding trade volumes; it represents deeper engagement with one of the world’s most standards-driven markets, anchored in demonstrated capability, regulatory maturity, and institutional strength.
Preferential Access
The agreement gains added significance as global value chains increasingly prioritise resilience, diversification, and trusted partnerships. Under the FTA, over 99 percent of Indian exports by value are expected to receive preferential access to the EU market, sharply improving export competitiveness. With its scale, policy predictability, and expanding industrial base, India is well-positioned as a credible manufacturing partner for European lead firms seeking long-term stability beyond traditional supply centres.
Entry of Swedish Company ‘KonveGas’ into India
Amidst this positive environment, KonveGas, a Swedish company specializing in gas storage technology, has officially announced its entry into the Indian market. The fact that European Small and Medium Enterprises (SMEs) are now directly engaging with Indian industries is seen as a direct impact of the new trade policy.
The company has selected Delhi, Pune, and Gujarat for its initial phase of operations. These regions are India’s primary automotive and industrial hubs. Following the FTA, business opportunities in these sectors are expected to grow. The company aims to begin direct operations within the next six months.
The post India- EU FTA to Empower India’s Domestic Electronics Manufacturing Industry to reach USD 100 Billion in the Following Decade appeared first on ELE Times.
Designing edge AI for industrial applications

Industrial manufacturing systems demand real-time decision-making, adaptive control, and autonomous operation. However, many cloud-dependent architectures can’t deliver the millisecond response required for safety-critical functions such as robotic collision avoidance, in-line quality inspection, and emergency shutdown.
Network latency (typically 50–200 ms round-trip) and bandwidth constraints prevent cloud processing from achieving sub-10 ms response requirements, shifting intelligence to the industrial edge for real-time control.
Edge AI addresses these high-performance, low-latency requirements by embedding intelligence directly into industrial devices and enabling local processing without reliance on the cloud. This edge-based approach supports machine-vision workloads for real-time defect detection, adaptive process control, and responsive human–machine interfaces that react instantly to dynamic conditions.
This article outlines a comprehensive approach to designing edge AI systems for industrial applications, covering everything from requirements analysis to deployment and maintenance. It highlights practical design methodologies and proven hardware platforms needed to bring AI from prototyping to production in demanding environments.
Defining industrial requirements
Designing scalable industrial edge AI systems begins with clearly defining hardware, software, and performance requirements. Manufacturing environments necessitate wide temperature ranges from –40°C to +85°C, resistance to vibration and electromagnetic interference (EMI), and zero tolerance for failure.
Edge AI hardware installed on machinery and production lines must tolerate these conditions in place, unlike cloud servers operating in climate-controlled environments.
Latency constraints are equally demanding: robotic assembly lines require inference times under 10 milliseconds for collision avoidance and motion control, in-line inspection systems must detect and reject defective parts in real time, and safety interlocks depend on millisecond-level response to protect operators and equipment.

Figure 1 Robotic assembly lines require inference times under 10 milliseconds for collision avoidance and motion control. Source: Infineon
Accuracy is also critical, with quality control often targeting greater than 99% defect detection, and predictive maintenance typically aiming for high-90s accuracy while minimizing false alarm rates.
Data collection and preprocessing
Meeting these performance standards requires systematic data collection and preprocessing, especially when defect rates fall below 5% of samples. Industrial sensors generate diverse signals such as vibration, thermal images, acoustic traces, and process parameters. These signals demand application-specific workflows to handle missing values, reduce dimensionality, rebalance classes, and normalize inputs for model development.
Continuous streaming of raw high-resolution sensor data can exceed 100 Mbps per device, which is unrealistic for most factory networks. As a result, preprocessing must occur at the industrial edge, where compute resources are located directly on or near the equipment.
Class-balancing techniques such as SMOTE or ADASYN address class imbalance in training data, with the latter adapting to local density variations. Many applications also benefit from domain-specific augmentation, such as rotating thermal images to simulate multiple views or injecting controlled noise into vibration traces to reflect sensor variability.
Outlier detection is equally important, with clustering-based methods flagging and correcting anomalous readings before they distort model training. Synthetic data generation can introduce rare events such as thermal hotspots or sudden vibration spikes, improving anomaly detection when real-world samples are limited.
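As a concrete illustration of the rebalancing and augmentation steps, the sketch below applies SMOTE from the open-source imbalanced-learn package and simple noise injection to made-up vibration features. The article does not prescribe any particular library, so treat this only as one possible workflow.

```python
import numpy as np
from imblearn.over_sampling import SMOTE   # pip install imbalanced-learn

rng = np.random.default_rng(42)

# Made-up feature vectors: 950 "healthy" and 50 "faulty" vibration samples,
# mimicking the <5% defect rates common in industrial datasets.
X_healthy = rng.normal(0.0, 1.0, size=(950, 16))
X_faulty = rng.normal(1.5, 1.2, size=(50, 16))
X = np.vstack([X_healthy, X_faulty])
y = np.array([0] * 950 + [1] * 50)

# Class rebalancing: SMOTE synthesizes new minority-class samples by
# interpolating between existing faulty examples.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
print("class counts before:", np.bincount(y), "after:", np.bincount(y_bal))

# Domain-specific augmentation: inject controlled noise into the features
# to reflect sensor-to-sensor variability.
def augment(batch, noise_std=0.05):
    return batch + rng.normal(0.0, noise_std, size=batch.shape)

X_aug = augment(X_bal)
print("augmented training set:", X_aug.shape)
```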
With cleaner inputs established, focus shifts to model design. Convolutional neural networks (CNNs) handle visual inspection, while recurrent neural networks (RNNs) process time-series data. Transformers, though still resource-intensive, increasingly perform industrial time-series analysis. Efficient execution of these architectures necessitates careful optimization and specialized hardware support.
Hardware-accelerated processing
Efficient edge inference requires optimized machine learning models supported by hardware that accelerates computation within strict power and memory budgets. These local computations must stay within typical power envelopes below 5 W and operate without network dependency, which cloud-connected systems can’t guarantee in production environments.
Training neural networks for industrial applications can be challenging, especially when processing vibration signals, acoustic traces, or thermal images. Traditional workflows require data science expertise to select model architectures, tune hyperparameters, and manage preprocessing steps.
Even with specialized hardware, deploying deep learning models at the industrial edge demands additional optimization. Compression techniques shrink models by 80–95% while retaining over 95% accuracy, reducing size and accelerating inference to meet edge constraints. These include:
- Quantization converts 32-bit floating-point models into 8- or 16-bit integer formats, reducing memory use and accelerating inference. Post-training quantization meets most industrial needs, while quantization-aware training maintains accuracy in safety-critical cases; a brief post-training sketch follows this list.
- Pruning removes redundant neural connections, typically reducing parameters by 70–90% with minimal accuracy loss. Overparameterized models, especially those trained on smaller industrial datasets, benefit significantly from pruning.
- Knowledge distillation trains a smaller student model to replicate the behavior of a larger teacher model, retaining accuracy while achieving the efficiency required for edge deployment.
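As an example of the first technique, post-training integer quantization with the TensorFlow Lite converter looks roughly like the sketch below. The tiny Keras model and the random representative-dataset generator are placeholders; in practice, the representative samples come from real sensor data, and the exact converter options depend on the target device.

```python
import numpy as np
import tensorflow as tf

# Placeholder model standing in for a trained inspection network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),
])

def representative_data():
    # A few hundred calibration samples drawn from real sensor data in
    # practice; random tensors are used here only as placeholders.
    for _ in range(100):
        yield [np.random.rand(1, 64, 64, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]        # enable quantization
converter.representative_dataset = representative_data      # calibrates int8 ranges
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8                     # full-integer I/O
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

open("inspection_int8.tflite", "wb").write(tflite_model)
print(f"quantized model: {len(tflite_model) / 1024:.1f} KiB")
```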
Deployment frameworks and tools
After compression and optimization, engineers deploy machine learning models using inference frameworks, such as TensorFlow Lite Micro and ExecuTorch, which are the industry standards. TensorFlow Lite Micro offers hardware acceleration through its delegate system, which is especially useful on platforms with supported specialized processors.
While these frameworks handle model execution, scaling from prototype to production also requires integration with development environments, control interfaces, and connectivity options. Beyond toolchains, dedicated development platforms further streamline edge AI workflows.
Once engineers develop and deploy models, they test them under real-world industrial conditions. Validation must account for environmental variation, EMI, and long-term stability under continuous operation. Stress testing should replicate production factors such as varying line speeds, material types, and ambient conditions to confirm consistent performance and response times across operational states.
Industrial applications also require metrics beyond accuracy. Quality inspection systems must balance false positives against false negatives, where the geometric mean (GM) provides a balanced measure on imbalanced datasets common in manufacturing. Predictive maintenance workloads rely on indicators such as mean time between false positives (MTBFP) and detection latency.

Figure 2 Quality inspection systems must balance false positives against false negatives. Source: Infineon
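For reference, the geometric mean combines the defect-detection rate (sensitivity) and the good-part pass rate (specificity) into a single figure that is not inflated by the majority class. A minimal sketch with made-up confusion-matrix counts:

```python
import math

# Hypothetical confusion-matrix counts from a defect-detection run.
tp, fn = 48, 2      # defective parts: caught vs. missed
tn, fp = 930, 20    # good parts: passed vs. falsely rejected

sensitivity = tp / (tp + fn)   # true-positive rate (defect detection rate)
specificity = tn / (tn + fp)   # true-negative rate (good parts passed)
g_mean = math.sqrt(sensitivity * specificity)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} G-mean={g_mean:.3f}")
```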
Validated MCU-based deployments demonstrate that optimized inference—even under resource constraints—can maintain near-baseline accuracy with minimal loss.
Monitoring and maintenance strategies
Validation confirms performance before deployment, yet real-world operation requires continuous monitoring and proactive maintenance. Edge deployments demand distributed monitoring architectures that continue functioning offline, while hybrid edge-to-cloud models provide centralized telemetry and management without compromising local autonomy.
A key focus of monitoring is data drift detection, as input distributions can shift with tool wear, process changes, or seasonal variation. Monitoring drift at both device and fleet levels enables early alerts without requiring constant cloud connectivity. Secure over-the-air (OTA) updates extend this framework, supporting safe model improvements, updates, and bug fixes.
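One lightweight way to implement such a drift check on a gateway-class device is a two-sample statistical test of a recent feature window against a reference window captured at commissioning. The sketch below uses SciPy’s Kolmogorov-Smirnov test on synthetic data; the alert threshold is an assumption and would be tuned per feature in practice.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(7)

# Reference window captured at commissioning vs. a recent window in which
# tool wear has shifted the vibration-feature distribution (synthetic data).
reference = rng.normal(loc=0.00, scale=1.0, size=2000)
recent = rng.normal(loc=0.35, scale=1.1, size=500)

stat, p_value = ks_2samp(reference, recent)

# Alert threshold is an assumption; in practice it is tuned per feature.
if p_value < 0.01:
    print(f"drift alert: KS statistic {stat:.3f}, p = {p_value:.2e}")
else:
    print("no significant drift detected")
```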
Features such as secure boot, signed updates, isolated execution, and secure storage ensure only authenticated models run in production, helping manufacturers comply with regulatory frameworks such as the EU Cyber Resilience Act.
Take, for instance, an industrial edge AI case study about predictive maintenance. A logistics operator piloted edge AI silicon on a fleet of forklifts, enabling real-time navigation assistance and collision avoidance in busy warehouse environments.
The deployment reduced safety incidents and improved route efficiency, achieving better ROI. The system proved scalable across multiple facilities, highlighting how edge AI delivers measurable performance, reliability, and efficiency gains in demanding industrial settings.
The upgraded forklifts highlighted key lessons for AI at the edge: systematic data preprocessing, balanced model training, and early stress testing were essential for reliability, while underestimating data drift remained a common pitfall.
Best practices included integrating navigation AI with existing fleet management systems, leveraging multimodal sensing to improve accuracy, and optimizing inference for low latency in real-time safety applications.
Sam Al-Attiyah is head of machine learning at Infineon Technologies.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
- AI’s insatiable appetite for memory
- The AI-tuned DRAM solutions for edge AI workloads
The post Designing edge AI for industrial applications appeared first on EDN.
💥 Checklist of corruption-risk factors: what it is and why
Igor Sikorsky Kyiv Polytechnic Institute has introduced an important tool that helps foster a transparent environment at the university: a checklist of corruption-risk factors.
Ascent Solar closes up to $25m private placement
Atomera’s GaN-on-Si concept advances to PowerAmerica proposal stage
KPI women researchers complete internship at Bio-H2 Umwelt GmbH
Under agreements with the Ukraine Energy Autark association, whose goal is to promote innovative research projects and their implementation for Ukraine’s energy self-sufficiency, Olha Vlasenko, senior lecturer at the Department of Thermal and Alternative Power Engineering of the Institute of Atomic and Thermal Energy (NN IATE), and Dina Koltysheva, assistant at the Department of Bioenergy, Bioinformatics and Ecobiotechnology of the Faculty of Biotechnology and Biotechnics (FBT), successfully completed an internship in Germany on the topic "Biogas and utilization of energy biomass" (pictured).
La Luce Cristallina launches CMOS-compatible oxide pseudo-substrate
Anritsu Launches TestDeck Web Solution to enhance Test & Measurement
ANRITSU CORPORATION has launched TestDeck, a web-based solution designed to promote digital transformation (DX) of mobile device testing. TestDeck integrates test planning, configuration, execution, and results management by connecting multiple communication test and measurement systems to a web server and aggregating test data. This centralized approach streamlines test operations and supports new perspectives in test analysis.
The TestDeck web-based solution enhances the efficiency of test operations for communication test and measurement systems. Users can centrally manage test results and progress, using collected historical data to rapidly identify performance trends and issues by device version. Furthermore, by visualizing and sharing the centralized test and measurement systems, TestDeck optimizes testing across multiple domestic and international sites, helping to cut test costs and shorten mobile device development cycles.
Anritsu is continuing to expand TestDeck functions to further advance test operations in the Beyond 5G and 6G eras.
Development Background
The number of required mobile device test items continues growing as communication standards and device functions evolve, increasing the test burden for vendors. Additionally, fragmented test data from different global test environments makes cross-functional analysis and results sharing difficult. TestDeck addresses these challenges by aggregating and visualizing equipment and test data for efficient testing.
Product Overview
TestDeck web solution promotes the digital transformation of testing. It supports efficient use of communication test and measurement systems, streamlines workflows, and optimizes testing on a global scale for both efficiency and new analytical perspectives.
Key Features:
• Test Vision: Centralized management of test results for failure cause and device trend analyses
• Test Hub: Aggregated management of test environments, plans, reservations, execution, and results
• Test Utilization: Centralized management of test equipment and licenses
• Comprehensive Test Automation (for PCT*1/RFCT*2): Automated GCF/PTCRB-based test planning for efficient measurement system operation
Supported Products:
• 5G NR Mobile Device Test Platform ME7834NR
• New Radio RF Conformance Test System ME7873NR
• Rapid Test Designer Platform (RTD) MX800050A
• SmartStudio NR MX800070A
Contact Anritsu to learn more about TestDeck MX710000A
Technical Terms
*1 PCT
Abbreviation for Protocol Conformance Test—key ME7834NR function for evaluating whether device adheres to various 3GPP communication protocol procedures following GCF/PTCRB certification requirements
*2 RFCT
Abbreviation for RF Conformance Test—key ME7873NR function for evaluating whether device TRx characteristics meet 3GPP radio parameter specifications following GCF/PTCRB certification requirements
The post Anritsu Launches TestDeck Web Solution to enhance Test & Measurement appeared first on ELE Times.
KPI women among the winners of the SHE DEFENDS: CYBER & OSINT CTF competition
Several Kyiv Polytechnic teams placed among the leaders at the all-Ukrainian women’s CTF competition SHE DEFENDS: CYBER & OSINT. The tournament aims to develop women’s practical skills in cybersecurity and OSINT and to strengthen the cyber resilience of the national security and defense sector.
Igor Sikorsky Kyiv Polytechnic Institute begins a partnership with Dwarf Engineering
A cooperation agreement signed between Igor Sikorsky Kyiv Polytechnic Institute and Dwarf Engineering, a Ukrainian DefTech engineering and product company, opens new educational opportunities for the university, such as the launch of a new interdisciplinary educational program, "Robot Computer Vision".
⭐ Student initiative competition "Energy Efficiency: From Idea to Action"
Dear higher education students!
Another silly simple precision 0/20mA to 4/20mA converter

A recent Design Idea (DI), “Silly simple precision 0/20mA to 4/20mA converter,” by prolific DI contributor Stephen Woodward uses the venerable LM337 regulator in a creative configuration, along with a few passive components, to translate a 0-20 mA current-source input (say, from a sensor with its own power source that outputs a 0-20 mA signal current) into a 4-20 mA two-wire transmitter current loop (a standard two-terminal industrial current source).
Below is another novel, ‘silly simple’ way of implementing the same function using the LM337. It relies on tapering an initial 4-mA current off to zero in proportion to the 0-20 mA input, then adding the input and the tapered-off 4-mA signal to create a two-wire 4-20 mA output loop. It is loosely based on another Woodward gem [3]. Refer to Figure 1.

Figure 1 An input 0-20 mA is added to a tapered-off 4-0 mA at OUT to give an output 4-20 mA.
First, imagine a 0-mA input (input loop open). The series arrangement of R1 in parallel with ‘R2 + Pz’ (‘Rz’, ~250 Ω) and R3 in parallel with ‘R4 + Ps’ (‘Rs’, ~62.5 Ω), with a nominal total of 312.5 Ω, sets the output loop current into OUT at 0 mA + 4 mA (1.25 V/312.5 Ω), trimmed with Pz.
Now feed a 20-mA input current and imagine it pulled from junction X and pushed into the OUT terminal. This current is sourced from the output loop ‘+’, dropping 62.5 Ω × 20 mA = 1.25 V across Rs in a direction opposing the internal reference voltage. With proper calibration, this reduces the drop across Rz to zero and, in doing so, also reduces the original 4-mA contribution through Rz into OUT to zero.
The output loop current is now equal to the input current, 20 mA + 0 mA (added at OUT), transferred from the input loop to the output loop from OUT to IN of U1. We have converted a 0-20 mA current-source input into a two-wire loop current of 4-20 mA. The 20-mA point is set by Ps.
Accurate current setting requires two span/zero (S/Z) calibration passes to set the output current to within 0.05% or (much) better. The pots should be multi-turn 3296 types or similar, but single-turn trimmers will also work fairly well because both pots have a small trim range by design.
The performance is excellent. The input to output linearity of the basic circuit is 0.02%. With a small heat sink, short term stability is within 0.02%, and change in loop current is 0.05% over a voltage from 5 V to 32 V. Transfer accuracy and stability are high because we aren’t transforming the input signal, only transferring it into the output loop. Reference drift affects only the basic 4 mA current and thus has a smaller effect on overall drift. The heat sink improves drift and di/dv by a factor of 3 to 4.
For intermediate input currents, the basic 4-mA current delivered via Rz into OUT is tapered off in proportion to the 0-20 mA input current. Thus, at 10 mA (half-scale) input, the voltage at X changes to maintain roughly 500 mV across Rz, supporting a contribution of 2 mA into OUT, down from the original 4 mA set at 0-mA input. The output loop current into OUT is now the input 10 mA + 2 mA = 12 mA, which is also the halfway point of the 4-20 mA loop. Similar reasoning applies to other input/output loop-current relationships.
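Put algebraically, the behavior described above amounts to I_out = I_in + 4 mA × (1 − I_in/20 mA) = 4 mA + 0.8 × I_in, which is exactly the ideal 0-20 mA to 4-20 mA mapping. A quick numeric check of that ideal relationship (tolerances and calibration ignored):

```python
# Ideal transfer implied by the description: the 4-mA term tapers to zero in
# proportion to the input while the input itself is added at OUT.
def loop_current_ma(i_in_ma):
    tapered_zero_term = 4.0 * (1.0 - i_in_ma / 20.0)   # 4 mA at 0 mA in, 0 mA at 20 mA in
    return i_in_ma + tapered_zero_term                  # = 4 + 0.8 * i_in

for i_in in (0.0, 5.0, 10.0, 15.0, 20.0):
    print(f"input {i_in:4.1f} mA -> loop {loop_current_ma(i_in):4.1f} mA")
```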
A reverse-protection diode is recommended in the 4-20 mA loop. Current limiting should be applied to hold fault currents to safe levels; a series two-transistor current limiter with appropriate resistor values is an excellent candidate, being low-drop, low-cost, fast-acting, and free from oscillation. A 40-mA PTC ‘polyfuse’ in the loop will protect the load from a complete short across both circuits (an unlikely event).
The basic drop seen by the 0-20 mA signal is -1 V to 0 V. Two diodes or an LED in series with the + of the 0-20-mA input allow the source to always see a positive drop.
Regarding stability: only the 68-Ω R3 and the 270-Ω R1 need to be 25-ppm, 1% types to give low overall temperature drift, which is a significant plus. Pot drift, typically larger than that of fixed resistors, has less effect in the configuration used, wherein the relatively high-valued pots Ps and Pz control only a small part of the main current. Larger pot values also help minimize the effect of varying pot contact resistance.
A 3-V minimum operating voltage allows as much as 1,000 Ω of loop resistance with a 24-V supply, for the basic circuit.
It is a given that one of the loops will (need to) be floating. This is usually the source loop, as the instrument generating the 0-20 mA is powered from a separate supply.
Ashutosh Sapre lives and works in a large city in western India. Drifting uninspired through an EE degree way back in the late nineteen eighties, he was lucky enough to stumble across and be electrified by the Art of Electronics 1 and 2. Cut to now, he is a confirmed circuit addict, running a business designing, manufacturing and selling industrial signal processing modules. He is proud of his many dozens of design pads consisting mostly of crossed out design ideas.
Related Content/References
- Silly simple precision 0/20mA to 4/20mA converter
- A 0-20mA source current to 4-20mA loop current converter
- PWM-programmed LM317 constant current source
- https://www.radiolocman.com/shem/schematics.html?di=150983
The post Another silly simple precision 0/20mA to 4/20mA converter appeared first on EDN.
Choosing power supply components for New Space

Satellites in geostationary orbit (GEO) face a harsher environment of plasma, trapped electrons, solar particles, and cosmic rays, with these effects higher in magnitude than in low Earth orbit (LEO)-low-inclination, LEO-polar, and International Space Station orbits. This is the primary reason why power supplies used in these satellites need to comply with stringent MIL standards for design, manufacturability, and quality.
GEO satellites circle the earth in approximately 24 hours at about 3 km/s, at an altitude of about 35,786 km. Because they are so far from Earth, only three such satellites are needed to cover nearly the full globe.
In comparison, LEO satellites travel around the earth at about 7.8 km/s, at an altitude of less than 1,000 km, and they can be as low as 160 km above Earth. This is far lower than GEO but still more than 10× higher than a commercial aircraft’s cruising altitude of about 14 km.
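These speeds and periods follow directly from circular-orbit mechanics; the short sketch below reproduces them from nothing more than Earth’s gravitational parameter and radius.

```python
import math

MU_EARTH = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3         # mean Earth radius, m

def circular_orbit(altitude_km):
    r = R_EARTH + altitude_km * 1e3
    v = math.sqrt(MU_EARTH / r)           # circular orbital speed
    period_h = 2 * math.pi * r / v / 3600  # orbital period in hours
    return v / 1e3, period_h

for name, alt in (("GEO", 35786), ("LEO (500 km)", 500), ("LEO (160 km)", 160)):
    v_kms, t_h = circular_orbit(alt)
    print(f"{name:14s}: {v_kms:4.1f} km/s, period {t_h:5.1f} h")
```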
Total ionizing dose (TID) and single-event effects (SEEs) are two of the key radiation effects that need to be addressed by power supplies in space. Satellites placed in GEO face harsher conditions due to radiation compared with those in LEO.
Because GEO is farther from Earth and more exposed to radiation, the components used in GEO satellite power supplies need to be radiation-hardened (rad-hard) by design, meaning all of the components must withstand TID and SEE levels as high as 100 krad and 82 MeV·cm²/mg, respectively.
In comparison, LEO satellite components need to be radiation-tolerant, with relatively lower TID and SEE requirements. However, omitting shielding against these harsh conditions can still result in failure.
While individual satellites can be used for higher-resolution imaging, constellations of many identical or similar, relatively small satellites typically form a web or net around the earth to provide uninterrupted coverage. By working in tandem, these constellations provide simultaneous coverage for applications such as internet services and telecommunication.
The emergence of New Space has enabled the launch of multiple smaller satellites with lighter payloads for commercial purposes. Satellite internet services are slowly and steadily competing with traditional broadband and are providing more reliable connectivity for remote areas, passenger vehicles, and even aerospace.
Microchip offers a scalable approach to space solutions based on the mission. (Source: Microchip Technology Inc.)
Configurability for customization
The configurability of power supplies is an important factor for meeting a variety of space mission specifications. Voltage levels in the electrical power bus are generally standardized to certain values; however, the voltage of the solar array is not always standardized. This calls for a redesign of all the converters in the power subsystems, depending on the nature of the mission.
This redesign increases costs and development time. Thus, it is inherently important to provide DC/DC converters and low-dropout regulators (LDOs) across the power architecture that have standard specifications while providing the flexibility for customization depending on the system and load voltages. Functions such as paralleling, synchronization, and series connection are of paramount importance for power supplies when considering the specifications of different space missions.
Size, weight, power, and cost
Due to the limited volume available and the resource-intensive task of sending the objects into space against the pull of gravity, it is imperative to have smaller footprints, smaller size (volume), and lower weight while packing more power (kilowatts) in the given volume. This calls for higher power density for space optimization and higher efficiency (>80%) to get the maximum performance out of the resources available in the power system.
The load regulations need to be optimal to make sure that the output of the DC/DC converter feeds the next stage (LDOs and direct loads), matching the regulation requirements. Additionally, the tolerances of regulation against temperature variations are key in providing ruggedness and durability.
Space satellites use solar energy as the main source to power their loads. Some of the commonly used bus voltages are 28 V, 50 V, 72 V, 100 V, and 120 V. A DC/DC converter converts these voltages to secondary voltages such as 3.3 V, 5 V, 12 V, 15 V, and 28 V. Secondary bus voltages are further converted into usable voltages such as 0.8 V, 1.2 V, and 1.5 V with the help of point-of-load regulators such as LDOs to feed the microcontrollers (MCUs) and field-programmable gate arrays (FPGAs) that drive the spacecraft loads.
A simplified power architecture for satellite applications, using Microchip’s standard rad-hard SA50-120 series of 50-W DC/DC power converters (Source: Microchip Technology Inc.)
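To see how such a cascade translates into a power budget, the toy calculation below walks a few hypothetical digital loads back through LDO post-regulators and an isolated DC/DC stage to the bus. All load currents and the 85% converter efficiency are illustrative assumptions, not Microchip specifications; the input-power term for each LDO simply reflects that an ideal linear regulator draws its full load current from the higher rail and dissipates the excess voltage.

```python
# Toy power-budget walk through the chain described above:
# 100-V bus -> isolated DC/DC (assumed 85% efficient, 3.3-V output) ->
# LDOs feeding the digital loads.  All numbers are illustrative assumptions.
DCDC_EFF = 0.85
V_SECONDARY = 3.3                        # DC/DC output feeding the LDOs
loads = {                                # rail name: (voltage V, current A)
    "FPGA core 1.2 V": (1.2, 2.0),
    "MCU 1.5 V": (1.5, 1.0),
    "I/O 3.3 V": (3.3, 0.5),             # fed directly from the secondary rail
}

secondary_power = 0.0
for name, (v, i) in loads.items():
    load_w = v * i
    # An ideal LDO passes the load current from the 3.3-V rail and drops the rest.
    input_w = V_SECONDARY * i if v < V_SECONDARY else load_w
    secondary_power += input_w
    print(f"{name:16s}: {load_w:5.2f} W at the load, {input_w:5.2f} W drawn from {V_SECONDARY} V")

bus_power = secondary_power / DCDC_EFF
print(f"Secondary rail total : {secondary_power:5.2f} W")
print(f"100-V bus draw       : {bus_power:5.2f} W")
```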
Environmental effects in space
The space environment consists of effects such as solar plasma, protons, electrons, galactic cosmic rays, and solar flare ions. This harsh environment causes environmental effects such as displacement damage, TID, and SEEs that result in device-level effects.
The power converter considerations should be in line with the orbits in which the satellite operates, as well as the mission time. For example, GEO has more stringent radiation requirements than LEO.
The volume requirement for LEO tends to be higher due to the number of smaller satellites launched to form the constellations. The satellites’ power management faces stringent requirements and needs to comply with various MIL standards to withstand the harsh environment. The power supplies used in these satellites also need to minimize size, weight, power, and cost (SWaP-C).
Microchip provides DC/DC space converters that are suitable for these applications with the standard rad-hard SA50 series for deep space or traditional space satellites in GEO/MEO and the standard radiation-tolerant LE50 series for LEO/New Space applications. Using standard components in a non-hybrid structure (die and wire bond with hermetically sealed construction) can prevent lot jeopardy and mission schedule risk to ensure reliable and rugged solutions with faster time to market at the desired cost.
In addition to the ruggedness and SWaP-C requirements, power supply solutions also need to be scalable to cover a wide range of quality levels within the same product series. This also includes offering a range of packaging materials and qualification options to meet mission goals.
For example, Microchip’s LE50-28 isolated DC/DC power converters are available in nine variants, with single and triple outputs for optimal design configurability. The power converters have a companion EMI filter and enable engineers to design to scale and customize by choosing one to three outputs based on the voltage range needed for the end application. This series provides flexibility with up to four power converters to reach 200 W. It offers space-grade radiation tolerance with 50-Krad TID and SEE latch-up immunity of 37-MeV·cm2/mg linear energy transfer.
The space-grade LE50-28 series is based on a forward topology that offers higher efficiency and <1% output ripple. It is housed in a compact package, measuring 3.055 × 2.055 × 0.55 inches with a low weight of just 120 grams. These standard non-hybrid, radiation-tolerant devices in a surface-mount package comply with MIL-STD-461, MIL-STD-883, and MIL-STD-202.
In addition, the LE50-28 DC/DC power converters, designed for 28-V bus systems, can be integrated with Microchip’s PolarFire FPGAs, MCUs, and LX7720-RT motor control sensors for a complete electrical system solution. This enables customers to use cost-effective, standard LE50 converters to customize and configure solutions using paralleling and synchronization features to form more intricate power systems that can meet the requirements of LEO power management.
For New Space’s low- to mid-volume satellite constellations with stringent cost and schedule requirements, sub-Qualified Manufacturers List (QML) versions in plastic packages are the optimal solutions that provide the radiation tolerance of QML (space-grade) components to enable lower screening requirements for lower cost and shorter lead times. LE50 companions in this category are RTG4 FPGA plastic versions and the PIC64 high-performance spaceflight computing (PIC64-HPSC) LEO variant.
The post Choosing power supply components for New Space appeared first on EDN.
Vertical gallium nitride could transform high-voltage power electronics and support UK net-zero ambitions, says CSA Catapult
First FMF master’s student conference on thesis research results
In late autumn, the Faculty of Physics and Mathematics (FMF) of Igor Sikorsky Kyiv Polytechnic Institute held the first scientific and practical conference of FMF master’s students, dedicated to the results of the dissertation research of master’s-level students.
Nuvoton releases high-power 1W 379nm UV laser diode
STMicroelectronics recognised as a Top 100 Global Innovator 2026
- Clarivate’s list ranks the organisations leading the way in innovation worldwide
- ST earns the distinction for the eighth time overall, including five consecutive years since 2022
STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications, has been named in the Top 100 Global Innovators 2026. In its 15th edition, the annual benchmark from Clarivate, a leading global provider of transformative intelligence, identifies and ranks organisations that consistently deliver high-impact inventions, shaping the future of innovation across industries. The Top 100 Global Innovators navigate complexity with clarity and set the pace for invention quality, originality and global reach.
“We are honoured to be recognised as a Top 100 Innovator by Clarivate for 2026, marking our fifth consecutive year and eighth time overall receiving this distinction. This achievement underscores STMicroelectronics’ unwavering commitment to sustained, large-scale innovation in products and technologies, driven by the creativity and dedication of our global teams,” said Alessandro Cremonesi, Executive Vice President, Chief Innovation Officer, and General Manager, System Research and Applications. “As the pace of technological change accelerates, we work in open collaboration with customers and partners to develop disruptive semiconductor technologies and solutions in sensing, power and energy, connectivity, data communications, compute and edge AI, helping them turn ambitious ideas into market-defining solutions.”
ST invests significantly in R&D, and about 20% of company employees work on product design, development, and technology in extensive collaboration with leading research labs and corporate partners around the world. The company’s Innovation Office focuses on connecting emerging market trends with internal technology expertise to identify opportunities, stay ahead of the competition, and lead in new or existing technology domains. ST is recognised as a leading semiconductor technology innovator in several areas, including smart power technologies, wide-bandgap semiconductors, edge AI solutions, MEMS sensors and actuators, optical sensing, digital and mixed-signal technologies, and silicon photonics.
Maroun S. Mourad, President, Intellectual Property, Clarivate, said: “Recognition as a Top 100 Global Innovator is a remarkable achievement given the pace of change. Multi-year winners and new entrants are investing in AI innovation as it redefines the boundaries between research, engineering and commercial execution. The leaders we celebrate today are not just responding to this shift, they are designing for it.”
The Top 100 Global Innovators analysis is underpinned by the Clarivate Centre for IP and Innovation Research. Their analyses are founded in rigorous research leveraging the proprietary Derwent Strength Index, derived from the Derwent World Patents Index (DWPI) and its global invention data to measure the influence of ideas, their success and rarity, and the investment in inventions.
Detailed Methodology
The Top 100 Global Innovators uses a complete comparative analysis of global invention data to assess the strength of every patented idea, using measures tied directly to their innovative power. To move from the individual strength of inventions to identifying the organisations that create them more consistently and frequently, Clarivate sets two threshold criteria that potential candidates must meet and then adds a measure of their internationally patented innovation output over the past five years.
About STMicroelectronics
At ST, we are 50,000 creators and makers of semiconductor technologies, mastering the semiconductor supply chain with state-of-the-art manufacturing facilities. An integrated device manufacturer, we work with more than 200,000 customers and thousands of partners to design and build products, solutions, and ecosystems that address their challenges and opportunities, and the need to support a more sustainable world. Our technologies enable smarter mobility, more efficient power and energy management, and the wide-scale deployment of cloud-connected autonomous things. We are on track to be carbon neutral in all direct and indirect emissions (scopes 1 and 2), product transportation, business travel, and employee commuting emissions (our scope 3 focus), and to achieve our 100% renewable electricity sourcing goal by the end of 2027.
The post STMicroelectronics recognised as a Top 100 Global Innovator 2026 appeared first on ELE Times.



