SoC delivers dual-mode Bluetooth for edge devices

Ambiq’s Apollo510D Lite SoC provides both Bluetooth Classic and BLE 5.4 connectivity, enabling always-on intelligence at the edge. It is powered by a 32-bit Arm Cortex-M55 processor running at up to 250 MHz with Helium vector processing and Ambiq’s turboSPOT dynamic scaling. A dedicated Cortex-M4F network coprocessor operating at up to 96 MHz handles wireless and sensor-fusion tasks.

According to Ambiq, its Subthreshold Power Optimized Technology (SPOT) delivers 16× faster performance and up to 30× better AI energy efficiency than comparable M4- or M33-based devices. The SoC’s BLE 5.4 radio subsystem provides +14 dBm transmit power, while dual-mode capability supports low-power audio streaming and backward compatibility with Classic Bluetooth.
The Apollo510D Lite integrates 2 MB of RAM and 2 MB of nonvolatile memory with dedicated instruction/data caches for faster execution. It also includes secureSPOT 3.0 and Arm TrustZone to enable secure boot, firmware updates, and data protection across connected devices.
Along with the Apollo510D Lite (dual-mode Bluetooth), Ambiq’s lineup includes the Apollo510 Lite (no BLE radio) and the Apollo510B Lite (BLE-only). The Apollo510 Lite series is sampling now, with volume production expected in Q1 2026.
Dual-range motion sensor simplifies IIoT system designs

STMicroelectronics debuts the tiny ISM6HG256X three-in-one motion sensor in a 2.5 × 3-mm package for data-hungry industrial IoT (IIoT) systems, while also supporting edge AI applications. The IMU combines simultaneous low-g (±16 g) and high-g (±256 g) acceleration detection with a high-performance precision gyroscope for angular rate measurement, ensuring detection of everything from subtle motion and vibration to severe shocks.
“By integrating an accelerometer with dual full-scale ranges, it eliminates the need for multiple sensors, simplifying system design and reducing overall complexity,” ST said.
The ISM6HG256X is suited for IIoT applications such as asset tracking, worker safety wearables, condition monitoring, robotics, factory automation, and black box event recording.
In addition, the embedded edge processing and self-configurability support real-time event detection and context-adaptive sensing, which are needed for asset tracking sensor nodes, wearable safety devices, continuous industrial equipment monitoring, and automated factory systems.
(Source: STMicroelectronics)
Key features of the MEMS motion sensor are the unique machine-learning core and finite state machine, together with adaptive self-configuration and sensor fusion low power (SFLP). In addition, thanks to the SFLP algorithm, 3D orientation tracking is also possible with just a few µA of current consumption, according to ST.
These features are designed to bring edge AI directly into the sensor to autonomously classify detected events, supporting real-time, low-latency performance and ultra-low system power consumption.
The ISM6HG256X is available now in a surface-mount package that can withstand harsh industrial environments from -40°C to 105°C. Pricing starts at $4.27 for orders of 1,000 pieces from the eSTore and through distributors. It is part of ST’s longevity program, ensuring long-term availability of critical components for at least 10 years.
Also available to help with development are the new X-NUCLEO-IKS5A1 industrial expansion board, the MEMS Studio design environment, and the X-CUBE-MEMS1 software libraries. These tools help implement functions such as high-g and low-g fusion, sensor fusion, context awareness, asset tracking, and calibration.
The ISM6HG256X will be showcased in a dedicated STM32 Summit Tech Dive, “From data to insight: build intelligent, low-power IoT solutions with ST smart sensors and STM32,” on November 20.
LIN motor driver improves EV AC applications

As precise control of cabin airflow and temperature becomes more critical in vehicles to enhance passenger comfort as well as to support advanced thermal management systems, Melexis introduces the MLX81350 LIN motor driver for air conditioning (AC) flaps and automated air vents in electric vehicles (EVs). The MLX81350 delivers a balanced combination of performance, system integration, and cost efficiency to meet these requirements.
The fourth-generation automotive LIN motor driver, built on high-voltage silicon-on-insulator technology, delivers up to 5 W (0.5 A) per motor and provides quiet and efficient motor operation for air conditioning flap motors and electronic air vents.
(Source: Melexis)
In addition to flash programmability, Melexis said the MLX81350 offers high robustness and function density while reducing bill-of-materials complexity. It integrates both analog and digital circuitry, providing a single-chip solution that is fully compliant with industry-standard LIN 2.x/SAE J2602 and ISO 17987-4 specifications for LIN slave nodes.
The MLX81350 features a new software architecture that enhances performance and efficiency over the previous generation. This enhancement includes improved stall detection and the addition of sensorless, closed-loop field-oriented control. This enables smoother motor operation, lower current consumption, and reduced acoustic noise to better support automotive HVAC and thermal management applications, Melexis said.
However, the MLX81350 still maintains pin-to-pin compatibility with its predecessors for easier migration of existing designs.
The LIN motor driver offers a broad set of peripherals to support advanced motor control and system integration, including a configurable RC clock (24 to 40 MHz), four general-purpose I/Os (digital and analog), one high-voltage input, 5× 16-bit motor PWM timers, two 16-bit general timers, and a 13-bit ADC with <1.2-µs conversion time across multiple channels, as well as UART, SPI, and I²C master or slave interfaces. The LIN interface enables seamless communication within vehicle networks and provides built-in protection and diagnostic features, including over-current, over-voltage, and temperature shutdown, to ensure safe and reliable operation in demanding automotive environments.
The MLX81350 is designed according to ASIL B (ISO 26262) and offers flexible wake-up options via LIN, external pins, or an internal wake-up timer. Other features include a low standby current consumption (25 µA typ.; 50 µA max.) and internal voltage regulators that allow direct powering from the 12-V battery, supporting an operating voltage range of 5.5 V to 28 V.
The MLX81350 is available now. The automotive LIN motor driver is offered in SO-8 EP and QFN-24 packages.
OKW’s plastic enclosures add new custom features

OKW can now supply its plastic enclosures with bespoke internal metal brackets and mounting plates for displays and other large components. The company’s METCASE metal enclosures division designs and manufactures the custom aluminum parts in-house.
(Source: OKW Enclosures Inc.)
One recent project of this type involved OKW’s CARRYTEC handheld enclosures. Two brackets fitted to the lid allowed a display to be flush mounted; a self-adhesive label covered the join between screen and case. Another mounting plate, fitted in the base, was designed to support a power supply.
Custom brackets and supports can be configured to fit existing PCB pillars in OKW’s standard plastic enclosures. Electronic components can then be installed on the brackets’ standoffs.
CARRYTEC (IP 54 optional) is ideal for medical and laboratory electronics, test/measurement, communications, mobile terminals, data collection, energy management, sensors, Industry 4.0, machine building, construction, agriculture and forestry.
The enclosures feature a robust integrated handle with a soft padded insert. They can accommodate screens from 8.4″ to 13.4″. Interfaces are protected by inset areas on the underside. A 5 × AA battery compartment can also be fitted (machining is required).
These housings can be specified in off-white (RAL 9002) ABS (UL 94 HB) or UV-stable lava ASA+PC (UL 94 V-0) in sizes S (8.74″ × 8.07″ × 3.15″), M (10.63″ × 9.72″ × 1.65/3.58″), and L (13.70″ × 11.93″ × 4.61″).
In addition to the custom metal brackets and mounting plates, other customizing services include machining, lacquering, printing, laser marking, decor foils, RFI/EMI shielding, and installation and assembly of accessories.
For more information, view the OKW website: https://www.okwenclosures.com/en/news/blog/BLG2510-metal-brackets-for-plastic-enclosures.htm
A current mirror reduces Early effect

It’s just a fact of life. A BJT wired in common emitter, even after compensating for the effects of device and temperature variations, still isn’t a perfect current source.
One of the flaws in the ointment is the Early effect of collector voltage on collector current. It can sometimes be estimated from datasheet parameters if output admittance (hoe) is specified (Ee ~ hoe / test current). A representative value is 1% per volt. Figure 1 shows its mischief in action in the behavior of a simple current mirror, where:
I2 = I1(1 + Vcb/Va)
Va ~ 100 V
Ierr = Vcb/Va ~ 1%/V
Figure 1 Current mirror without emitter degeneration.
If the two transistors are matched, I2 should equal I1. But instead, Q2’s collector current may increase by 1% per Vcb volt. A double-digit Vcb may create a double-digit percentage error. That would make for a rather foggy “mirror”!
Fortunately, a simple trick for mitigating Early is well known to skilled practitioners of our art. (Please see the footnote). Emitter degeneration is based on an effect that’s 4000 times stronger than the effect of Vcb on Ic.
That’s the effect of Vbe on collector current, and it can easily reduce Ee by two orders of magnitude. Figure 2 shows how it works:
I2 ~ I1((1 + Vcb/Va) – (R/0.026)(I2 – I1))
Ierr ~ (Vcb/Va)/(Vr/26 mV + 1)

Figure 2 Current mirror with emitter degeneration
Equal resistors R added in series with both emitters will develop voltages Vr = I1*R and I2*R that will be equal if the currents are equal. But if the currents differ (e.g., because of Early), then a Vbe differential will appear…duh…
This is useful because the Vbe differential will oppose the initial current differential, and the effect is large, even if Vr is small. Figure 3 shows how dramatically this reduces Ierr.

Figure 3 A normalized Early effect (y-axis) versus emitter degeneration voltage Ve = Ia*R (x-axis). Note that just 50 mV reduces Early by 3:1. That’s indeed a “long way”!
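To put numbers on that claim, here is a minimal Python sketch (not from the original DI) that evaluates the normalized error factor 1/(Vr/26 mV + 1) from the equations above, using the representative 1%/V Early coefficient quoted earlier:

```python
# Normalized Early-effect error vs. emitter degeneration voltage Vr,
# per Ierr ~ (Vcb/Va) / (Vr/26 mV + 1) from the text.
VT_MV = 26.0           # thermal voltage at room temperature, mV
EARLY_PCT_PER_V = 1.0  # representative Early coefficient, %/V

def ierr_percent(vcb_volts, vr_mv):
    """Mirror current error in percent for a given Vcb and degeneration voltage Vr."""
    return EARLY_PCT_PER_V * vcb_volts / (vr_mv / VT_MV + 1.0)

for vr in (0.0, 50.0, 100.0, 250.0):
    print(f"Vr = {vr:5.0f} mV -> Ierr = {ierr_percent(10.0, vr):4.2f}% at Vcb = 10 V")
# Vr = 50 mV gives a factor of 1/(50/26 + 1) ~ 0.34, i.e., roughly the 3:1 reduction noted above.
```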
Footnote
An earlier DI, “A two-way mirror—current mirror that is,” started a conversation about current mirrors and the Early effect. In the grand tradition of editor Aalyia’s DI kitchen, frequent and expert commentator Ashutosh suggested how emitter degeneration could improve performance:
asa70
May 27, 2025
Regarding degen, i’ve found that a half volt at say 1mA FS helps match the in and out currents much better even at a tenth of the current, even for totally randomly selected transistors. I suppose it is because the curves will be closer at smaller currents, so that even a 50 mV drop goes a long way
Ashutosh certainly nailed it! 50mV does go a long (3:1) way!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- A two-way mirror—current mirror that is
- A two-way Wilson current mirror
- Use a current mirror to control a power supply
- A comparison between mirrors and Hall-effect current sensing
Designing a thermometer with a 3-digit 7-segment indicator

Transforming a simple 10-kΩ NTC thermistor into a precise digital thermometer is a great example of mixed-signal design in action. Using a mixed-signal IC—AnalogPAK SLG47011—this design measures temperatures from 0.1°C to 99.9°C with impressive accuracy and efficiency.
SLG47011’s analog-to-digital converter (ADC) with programmable gain amplifier (PGA) captures precise voltage readings, while its memory table and width converter drive a 3-digit dynamic 7-segment display. Each digit lights up in rapid sequence, creating a stable indication for a user, a neat demonstration of efficient multiplexing.
Compact, flexible, and self-contained, this design shows how one device can seamlessly handle sensing, computation, and display control—no microcontroller required.
Operating principle
The circuit schematic of the thermometer with a 3-digit 7-segment indicator is shown in Figure 1.

Figure 1 The circuit schematic displays a thermometer with 3-digit 7-segment indicator. Source: Renesas
The supply voltage VDIV = 1.8 V is applied to the resistive divider RT / (R + RT), where R = 5.6 kΩ, and the divided voltage appears on PIN 7. PIN 8 activates the first digit, while PIN 6 activates the second digit and decimal point. PIN 4 activates the third digit.
The signal from PIN 7 goes to the single-ended input of the PGA (buffer mode, mode #6) and then to ADC CH0 for further sampling. The allowable temperature range measured by the thermometer is 0.1°C to 99.9°C (or 273.25 K to 373.05 K).
The voltage (VIN) after the resistive divider is equal to:
VIN = VDIV · RT / (R + RT)
The ADC converts this voltage to a 10-bit code using the formula:
VINdec = VIN (mV) · 1024 / 1620
Where:
- RT is the resistance of the NTC thermistor: RT = R0 · exp(B · (1/T – 1/T0))
- R0 = 10,000 Ω is the resistance at ambient temperature T0 (25°C or 298.15 K)
- B = 4050 K is a constant of the thermistor
- VIN is the voltage on PIN 7
- 1024 represents the 10-bit resolution of the ADC (2^10)
- 1620 represents the internal Vref in mV
- VINdec is VIN in 10-bit decimal format
The maximum value of VINdec is 1023.
The NTC thermistor resistances for the minimum and maximum values of the temperature follow from the equation above: RT(0.1°C) ≈ 34.5 kΩ and RT(99.9°C) ≈ 654 Ω.
The maximum voltage after the resistive divider is VIN(max) = VDIV · RT(0.1°C) / (R + RT(0.1°C)) ≈ 1.55 V.
The minimum voltage after the resistive divider is VIN(min) = VDIV · RT(99.9°C) / (R + RT(99.9°C)) ≈ 0.19 V.
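As a cross-check of the endpoint values above, the short Python sketch below (a reference calculation only, not part of the GreenPAK design file) evaluates the Beta equation and the divider for the stated parameters:

```python
import math

R0, B, T0 = 10_000.0, 4050.0, 298.15     # NTC: 10 kOhm at 25 C, Beta = 4050 K
R, VDIV, VREF_MV = 5_600.0, 1.8, 1620.0  # divider resistor, divider supply, ADC Vref (mV)

def rt(t_celsius):
    """NTC resistance from the Beta equation."""
    T = t_celsius + 273.15
    return R0 * math.exp(B * (1.0 / T - 1.0 / T0))

def vin(t_celsius):
    """Voltage on PIN 7 after the RT/(R + RT) divider, in volts."""
    r_t = rt(t_celsius)
    return VDIV * r_t / (R + r_t)

for t in (0.1, 99.9):
    v = vin(t)
    vindec = round(v * 1000.0 * 1024.0 / VREF_MV)   # 10-bit ADC code
    print(f"t = {t:4.1f} C  RT = {rt(t):7.0f} ohm  VIN = {v:.3f} V  VINdec = {vindec}")
```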
The relationship between the measured temperature and VIN for the applied parameters of the circuit is shown in Figure 2.

Figure 2 Graph shows the relationship between temperature and VIN.
Thermometer design
The GreenPAK IC-based thermometer design is shown in Figure 3. Download the free Go Configure Software Hub to open the design file and see how the functionality is carried out.

Figure 3 Thermometer with 3-digit 7-segment indicator design is built around a mixed-signal IC. Source: Renesas
The SLG47011 mixed-signal IC contains a memory table macrocell that can hold 4096 12-bit words. This space is enough to store the values of each of the three indicator digits for each VINdec (1024 * 3 = 3072 values in total). In other words, word 3n of the memory table corresponds to the first digit, word 3n + 1 to the second digit, and word 3n + 2 to the third digit of each corresponding temperature, where n = VINdec.
The ADC output value is sent to the MathCore macrocell, where it’s multiplied by three. This value is then used as a memory table address. Assuming that the ADC output is 1000, the MathCore output is 3000. This means that the memory table values at 3000, 3001, and 3002 addresses will be used and will correspond to the indicator’s first, second, and third digits accordingly.
Data from the MathCore output goes to the IN+ CH0 input of the multichannel DCMP macrocell. This data is compared with the data on the IN- CH0 input, which is taken from the Data Buffer0 output. Data Buffer0 stores the data from the CNT11/DLY11/FSM0 macrocell, which operates in Counter/FSM mode.
The Counter/FSM is reset to “1” when a HIGH signal from the ADC data-ready output arrives and starts counting upward. The multichannel DCMP OUT0 output is connected to the Keep input of CNT11/DLY11/FSM0. This means that when the CNT11/DLY11/FSM0 current value is equal to the MathCore output value, the DCMP OUT0 output is HIGH, and the Keep input of CNT11/DLY11/FSM0 is also HIGH, keeping the counted value for further addressing to the memory table.
At the same time, together with CNT11/DLY11/FSM0, the Memory Control Counter is counting upward from 0 and sets the memory table address.
Thus, when the ADC measures a certain voltage value, the previously described comparison operation will point to the corresponding voltage value stored in the memory table—three consecutively recorded digits, which are then dynamically displayed on the 7-segment display.
The memory table’s stored data then goes to the width converter macrocell, which converts the serial 12-bit input into a parallel 12-bit output (Table 1).

Table 1 The above data highlights width converter connections. Source: Renesas
The inverter enables the decimal point (DP) through PIN 16 based on the state of 3-bit LUT0 (second digit).
To dynamically display the temperature, the digits will be ON sequentially with a period of 300 μs. The period is set by the CNT2/DLY2 macrocell (in Reset Counter Mode). The 3-bit LUT4 sets the clock of the width converter based on its synchronization with the CNT11/DLY11/FSM0 clock and the state of DCMP OUT0.
The P DLY, DFF8, and 3-bit LUT12 macrocells form a state counter for the Up/Down input of the Memory Control Counter macrocell based on the state of the second digit (falling edge on OUT2 of the width converter).
When the first digit is ON, the Memory Control Counter counts upward by 1; when the second digit is first set ON, the state counter is set to LOW, forcing the Memory Control Counter to count down, while it has already activated the third digit. Therefore, the second digit is activated again, and the state counter goes HIGH, forcing the Memory Control Counter to count upward, while it has already activated the first digit. Thus, all three digits will be sequentially activated until there is a new measured value from the ADC macrocell.
CNT8/DLY8, CNT12/DLY12/FSM1, and 3-bit LUT7 are used to properly turn on the ADC after the first turn-on when POR arrives, as well as during further operation when the ADC is turned on and off. CNT12/DLY12/FSM1 provides a period of 1.68 s, which results in the thermometer value being updated every 1.68 s.
Memory table filling algorithm
The algorithm below is shown for a VDIV voltage of 1.8 V and a resistive divider of 5.6 kΩ and RT.
First, the resistance value of RT (Ω) corresponding to each VIN value is calculated using the formula:
RT = R · VIN / (VDIV – VIN)
Second, the value of the temperature t (°C) for the determined RT value is calculated by:
t = B / (ln(RT / R0) + B / T0) – 273.15
Then, the calculated t (°C) values are rounded to one decimal place.
For each VINdec value, three values are assigned in the memory table as follows: each VINdec corresponds to three consecutive values in the memory table 3n, 3n + 1, and 3n + 2, where n = VINdec.
Three separate columns for each of the values of 3n, 3n + 1, and 3n + 2 should be created. They each correspond to the first, second, and third digits of the indicator, respectively. The first column is assigned to the first digit of the rounded t value. The second column is assigned to the second digit, and the third column is assigned to the third digit.
For each digit of each column, a 7-bit binary value is found (m11 – m5), corresponding to the activation of the corresponding digit of the 7-segment display (Table 2).

Table 2 The above data highlights the 7-segment code. Source: Renesas
When the measured temperature tmeas falls outside the 0.1°C to 99.9°C range (tmeas < 0.1°C or tmeas > 99.9°C), the 0 – L symbols should be displayed on the indicator. The third digit is not activated in this case.
The next step is to add 5 more bits (m4 – m0) to the right of this value to get a 12-bit number.
The ninth bit (m3) is responsible for turning on the first digit, the tenth bit (m2) is responsible for turning on the second digit, and the eleventh bit (m1) for the third digit. Since a 7-segment indicator with a common cathode is used, turning on the digit is done with a LOW level (0). Therefore, for the first column (with words of type 3n), the ninth bit (m3) will equal 0, while the tenth (m2) and the eleventh (m1) bits will equal 1.
For the second column (with words of type 3n + 1), the tenth bit (m2) will be equal to 0, while the ninth (m3) and eleventh (m1) bits will be equal to 1. For the third column (with words of type 3n + 2), the eleventh (m1) bit will be equal to 0, while the ninth (m3) and tenth (m2) bits will be equal to 1.
The twelfth bit (m0) is not used, so its value does not affect the design. The resulting 3072 binary 12-bit values must then be converted to hex.
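The procedure above lends itself to automation. The Python sketch below generates the 3072 memory-table words from the circuit constants; the 7-segment patterns (bits m11 to m5) and the out-of-range symbols are placeholders that must be matched against Table 2 and the actual design file rather than taken as the exact Renesas values.

```python
import math

R0, B, T0 = 10_000.0, 4050.0, 298.15      # NTC: 10 kOhm at 25 C, Beta = 4050 K
R, VDIV, VREF_MV = 5_600.0, 1.8, 1620.0   # divider resistor, divider supply, ADC Vref (mV)

# Hypothetical common-cathode segment patterns for bits m11..m5 (a..g);
# the real mapping must be taken from Table 2.
SEG = {
    "0": 0b1111110, "1": 0b0110000, "2": 0b1101101, "3": 0b1111001,
    "4": 0b0110011, "5": 0b1011011, "6": 0b1011111, "7": 0b1110000,
    "8": 0b1111111, "9": 0b1111011, "L": 0b0001110, " ": 0b0000000,
}

def temperature_from_code(vindec):
    """Temperature (deg C) for a 10-bit ADC code, inverting the divider and Beta equation."""
    vin_mv = vindec * VREF_MV / 1024.0
    if vin_mv <= 0 or vin_mv >= VDIV * 1000.0:
        return None                             # code outside the usable divider range
    rt = R * vin_mv / (VDIV * 1000.0 - vin_mv)  # RT = R * VIN / (VDIV - VIN)
    t_kelvin = 1.0 / (1.0 / T0 + math.log(rt / R0) / B)
    return round(t_kelvin - 273.15, 1)

def word(segments, enable_bit):
    """Pack a 12-bit word: segments into m11..m5, active-LOW digit enables into m3..m1."""
    enables = 0b1110 & ~(1 << enable_bit)       # all three enables HIGH except the active digit
    return (segments << 5) | enables

table = []
for n in range(1024):
    t = temperature_from_code(n)
    if t is None or not (0.1 <= t <= 99.9):
        chars = ("0", "L", " ")                 # placeholder out-of-range indication
    else:
        text = f"{t:04.1f}"                     # e.g., 17.3 -> "17.3"
        chars = (text[0], text[1], text[3])     # tens, units, tenths (DP handled by PIN 16)
    for ch, en_bit in zip(chars, (3, 2, 1)):
        table.append(word(SEG[ch], en_bit))     # words 3n, 3n + 1, 3n + 2

print(len(table), [f"{w:03X}" for w in table[:6]])
```

Each pass through the ADC codes emits words 3n, 3n + 1, and 3n + 2 with the active-LOW enable in m3, m2, or m1, respectively, ready to be converted to hex and pasted into the software.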
The required values for the memory table are now determined; they need to be sorted in ascending order of the Word index and inserted into the appropriate location in the software. For a better understanding of the connections between the memory table and the width converter, view Figure 4.

Figure 4 The above diagram highlights connections between the memory table and the width converter. Source: Renesas
Test results
Figure 5 shows the result of measuring a temperature of around 17°C with respect to data obtained by a multimeter thermocouple.

Figure 5 Temperature range is set up to room temperature of around 17°C. Source: Renesas
Figure 6 shows the result of measuring a temperature of around 59°C with respect to data obtained by a multimeter thermocouple.

Figure 6 The measurement results show a temperature of around 59°C. Source: Renesas
Figure 7 shows the result of measuring a temperature of around 70°C with respect to data obtained by multimeter thermocouple.

Figure 7 The measurement results show a temperature of around 70°C. Source: Renesas
The mixed-signal integration
This design illustrates a practical approach to implementing a compact digital thermometer using the SLG47011 mixed-signal chip. Its ADC with PGA enables precise indirect temperature measurement, while the memory table and width converter manage dynamic control of the 3-digit 7-segment indicator.
By adjusting the resistive divider and updating the memory table, engineers can easily redefine the measurement range to suit different applications. The result is a straightforward and flexible thermometer design that effectively demonstrates mixed-signal integration in practice.
Myron Rudysh is an application engineer at Renesas Electronics.
Nazar Ftomyn is an application engineer at Renesas Electronics.
Yaroslav Chornodolskyi is an application engineer at Renesas Electronics.
Bohdan Kholod is a senior product development engineer at Renesas Electronics.
Related Content
- Thermometers perform calibration checks
- Thermal Imaging Sensors for Fever Detection
- Electronic Thermometer Project by LM35 and LM3914
- Measure temperature precisely with an infrared thermometer
- Electronic Thermometer with CrowPanel Pico 2.8 Inch 320×240 TFT LCD
The shift from Industry 4.0 to 5.0

The future of the global industry will be defined by the integration of AI with robotics and IoT technologies. AI-enabled industrial automation will transform manufacturing and logistics across automotive, semiconductors, batteries, and beyond. IDTechEx predicts that the global sensor market will reach $255 billion by 2036, with sensors for robotics, automation, and IoT poised as key growth markets.
From edge AI and IoT sensors for connected devices and equipment (Industry 4.0) to collaborative robots, or cobots (Industry 5.0), technology innovations are central to future industrial automation solutions. As industry megatrends and enabling technologies increasingly overlap, it’s worth evaluating the distinct value propositions of Industry 4.0 and Industry 5.0, as well as the roadmap for key product adoption in each.
Sensor and robotics technology roadmap for Industry 4.0 and Industry 5.0 (Source: IDTechEx)
What are Industry 4.0 and Industry 5.0?
Industry 4.0 emerged in the 2010s with IoT and cloud computing, transforming traditionally logic-controlled automated production systems into smart factories. Miniaturized sensors and industrial robotics enable repetitive tasks to be automated in a controlled and predictable manner. IoT networking, cloud processing, and real-time data management unlock productivity gains in smart factories through efficiency improvements, downtime reductions, and optimized supply chain integration.
Industry 4.0 technologies have gained significant traction in many high-volume, low-mix product markets, including consumer electronics, automotive, logistics, and food and beverage. Industrial robots have been key to automation in many sectors, excelling at tasks such as material handling, palletizing, and quality inspection in manufacturing and assembly applications.
If Industry 4.0 is characterized by cyber-physical systems, then Industry 5.0 is all about human-robot collaboration. Collaborative and humanoid robots better accommodate changing tasks and facilitate safer, more natural interaction with human operators—areas where traditional robots struggle.
Cobots are designed to work closely with humans without the need for direct control. AI models trained on tailored, application-specific datasets are employed to make cobots fully autonomous, with self-learning and intelligent behaviors.
The distinction between Industry 4.0 and Industry 5.0 technologies is ambiguous, particularly as products in both categories increasingly integrate AI. Nevertheless, technology innovations continue to enable the next generation of Industry 4.0 and Industry 5.0 products.
Intelligent sensors for Industry 4.0
In 2025, the big trend within Industry 4.0 is moving from connected to intelligent industrial systems using AI. AI models built and trained on real operational data are being embedded in sensors and IoT solutions to automate decision-making and offer predictive functionality. Edge AI sensors, digital twinning, and smart wearable devices are all key enabling technologies promising to boost productivity.
Edge-AI-enabled sensors are hitting the market, employing on-board neural processor units with AI models to carry out data inference and prediction on endpoint devices. Edge AI cameras capable of image classification, segmentation, and object detection are being commercialized for machine vision applications. Sony’s IMX500 edge AI camera module has seen early adoption in retail, factory, and logistics markets, while Cognex’s AI-powered 3D vision system gains traction for in-line quality inspection in EV battery and PCB manufacturing.
With over 15% of production costs arising from equipment failure in many industries, edge AI sensors monitoring equipment performance and automating maintenance can mitigate risks. Analog Devices, STMicroelectronics, TDK, and Siemens all now offer in-sensor or co-packaged machine-learning vibration and temperature sensors for industrial predictive maintenance. Predictive maintenance has been slow to take off, however, with industrial equipment suppliers and infrastructure service providers (rail, wind, and marine assets) being early adopters.
Simulating and modeling industrial operational environments is becoming more feasible and valuable as sensor data volume grows. Digital twins can be built using camera and position sensor data collected on endpoint devices. Digital twins enable performance simulation and maintenance forecasting to maximize productivity and minimize operational downtime. Proof-of-concept use cases include remote equipment operation, digital staff training, and custom AI model development.
Beyond robotics and automation, industrial worker safety is still a challenge. The National Safety Council estimates that the total cost of U.S. work injuries was $177 billion in 2023, with high incident rates in construction, logistics, agriculture, and manufacturing industries.
Smart personal protection equipment with temperature, motion, and gas sensors can monitor worker activity and environmental conditions, giving managers oversight to ensure safety. Wearable IoT skin patches offering hydration and sweat analysis are also emerging in the mining and oil and gas industries, reducing risk by proactively addressing the physiological and cognitive effects of dehydration.
Human-robot collaboration for Industry 5.0
Industry 4.0 relies heavily on automation, making it ideal for high-volume, low-mix manufacturing. As the transition to Industry 5.0 takes place, warehouse operators are seeking greater flexibility in their supply chains to support low-volume, high-mix production.
A defining aspect of Industry 5.0 is human-robot collaboration, with cobots being a core component of this concept. Humanoid robots are also designed to work alongside humans, aligning them with Industry 5.0 principles. However, as of late 2025, their technology and safety standards are still developing, so in most factory settings, they are deployed with physical separation from human workers.
Ten-year humanoid robot hardware market forecast (2025–2035) (Source: IDTechEx)
Humanoid robots, widely perceived as embodied AI, are projected to grow rapidly over the next 10 years. IDTechEx forecasts that the humanoid robot hardware market is set to take off in 2026, growing to reach $25 billion by 2035. This surge is fueled by major players like Tesla and BYD, who plan a more than tenfold expansion in humanoid deployment in their factories between 2025 and 2026.
As of 2025, despite significant hype around humanoid robots, there are still limited real-world applications where they fit. Among industrial applications, the automotive and logistics sectors have attracted the most interest. In the short- to mid-term, the automotive industry is expected to lead humanoid adoption, driven by the historic success of automation, large-scale production demands, and stronger cost-negotiation power.
Lightweight and slow-moving cobots, designed to work next to human operators without physical separation, have also gained significant momentum in recent years. Cobots are ideal options for small and mid-sized enterprises due to their low cost, small footprint, ease of programming, flexibility, and low power consumption.
Cobots could tackle a key industry pain point: the risk of shutdown to entire production lines when a single industrial robot malfunctions, due to the need to ensure human operators can safely enter robot working zones for inspection. Cobots could be an ideal solution to mitigate this, as they can work closely and flexibly with human operators.
The most compelling application of cobots is in the automotive industry for assembly, welding, surface polishing, and screwing. Cobots are also attractive in high-mix, low-volume production industries such as food and beverage.
Limited technical capabilities and high costs currently restrict wider cobot adoption. However, alternative business models are emerging to address these challenges, including cobot-as-a-service and try-first-and-buy-later models.
Outlook for Industry X.0
AI, IoT, and robotics are mutually enabling technologies, with industrial automation applications positioned firmly within this nexus and poised to capitalize on advancements.
Key challenges for Industry X.0 technologies are long return-on-investment (ROI) timelines and bespoke application requirements. Industrial IoT sensor networks take an average of two years to generate returns, while humanoid robots in warehouses require 18 months of pilot testing before broader use. However, economies-of-scale cost reductions and supporting infrastructure can ease ROI concerns, while long-term productivity gains will also offset high upfront costs.
The next generation of industrial IoT technology will leverage AI to deliver productivity improvements through greater device intelligence and automated decision-making. With IDTechEx forecasting that humanoid and cobot adoption will take off by the end of the decade, the 2030s are set to be defined by Industry 5.0.
A precision, voltage-compliant current source
A simple current source
It has long been known that the simple combination of a depletion-mode MOSFET (and before these were available, a JFET) and a resistor made a simple, serviceable current source such as that seen on the right side of Figure 1.
Figure 1 Current versus voltage characteristics of a DN2540 depletion mode MOSFET and the circuit of a simple current source made with one, both courtesy of Microchip.
This is evident from the figure’s left side, which shows the drain current versus drain voltage characteristics for various gate-source voltages of a DN2540 MOSFET. Once the drain voltage rises above a certain point, further increases cause only very slight rises in drain current (not visible on this scale). This simple circuit might suffice for many applications, except for the fact that the VGS required for a specific drain current will vary over temperature and production lots. Something else is needed to produce a drain current with any degree of precision.
Alternative current source circuits
And so, we might turn to something like the circuits of Figure 2.

Figure 2 A current source with a more predictable current, left (IXYS) and a voltage regulator which could be employed as a current source with a more predictable current, right (TI). Source: IXYS and Texas Instruments
In these circuits, we see members of the ‘431 family regulating MOSFET source and BJT emitter voltages. The Texas Instruments circuit on the right demonstrates the need for an oscillation-prevention capacitor, and my experience has been that this is also needed with the IXYS circuit on the left.
Although RL1, RS, and R1 pass precise, well-regulated currents to the transistors in their respective circuits, resistors RB and R do not. RB’s current is subject to a not well-controlled VGS, and R’s is affected by whatever variations there might be in VBATT.
The MOSFET circuit is a true two-terminal current source, so a load can be connected in series with the current source at its positive or negative terminal. But then the load is always subjected to the poorly-controlled RB current.
The BJT is part of a three-terminal circuit, and for a load to avoid the VBATT-influenced current through R, it could only be connected between VBATT and the BJT collectors. Even so, variations in VBATT could produce currents, which lead to voltages that are not entirely rejected at the TLA431 cathode, and so would produce uncontrolled currents in the BJTs and therefore in the load.
A true two-terminal current source
Figure 3 addresses these limitations in circuit performance. In analyzing it, as always, I rely on datasheet maximum and minimum values whenever they are available, but resort to and state that I’m employing typical values when they are not.

Figure 3 This circuit delivers predictable currents to U1 and M1 and therefore to a load. It’s a true two-terminal current source which accommodates load connection to both low and high side.
U1 establishes 1.24 · ( 1 + R4 / R3 ) volts at VS and adds a current of VS / (R4 + R3) to the MOSFET drain.
An additional drain current comes from:
2 · ( VS – VBE(Q2) ) / ( R2 + R5 )
The “2” is due to the fact that R2 and R1 currents are identical (discounting the Early effect on Q1). The current through R1 is nearly constant regardless of the value of VGS. This current provides what U1 needs to operate.
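As a quick illustration of how the two terms combine, the Python sketch below simply evaluates VS and the load current from the expressions above; every component value in it is an arbitrary placeholder chosen for the example and is not taken from the Figure 3 schematic.

```python
# Evaluate I_load ~ VS/(R3 + R4) + 2*(VS - VBE(Q2))/(R2 + R5) with placeholder values.
VREF = 1.24   # '431 reference voltage, V
VBE  = 0.65   # assumed Q2 base-emitter drop, V

R3, R4 = 1_000.0, 3_000.0    # placeholder divider: VS = 1.24 * (1 + R4/R3) = 4.96 V
R2, R5 = 27_000.0, 27_000.0  # placeholder emitter-side resistors

VS = VREF * (1.0 + R4 / R3)
i_divider = VS / (R3 + R4)                  # current added by the R3/R4 string
i_emitters = 2.0 * (VS - VBE) / (R2 + R5)   # current from the matched Q1/Q2 legs
print(f"VS = {VS:.2f} V, load current ~ {(i_divider + i_emitters) * 1e3:.2f} mA")
```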
The precision of the total DC current through the load is limited by the tolerances of R1 through R5, the U1 reference’s accuracy, and the value of the BJT’s temperature-dependent VBE drop. (U1’s maximum feedback reference current over its operating temperature is a negligible 1 µA.)
U1 requires a minimum of 100 µA to operate, so R5 is chosen to provide it with 150 µA. Per its On Semi datasheet, at this current and over Q1’s operating temperature range, the 2N3906’s typical VCE saturation voltage is 50 mV. Add that to the 15 mV drop across R1 for a total of 65 mV, which is the smallest achievable VSG value.
Accordingly, we are some small but indeterminant amount shy of the maximum drain current guaranteed for the part (at 25°C, 25 V VDS, and 0 V VGS only) by its datasheet. At the other extreme, under otherwise identical conditions, a VGS of -3.5 V will guarantee a drain current of less than 10 µA. For such, U1 and the circuit as a whole will operate properly at a VS of 5 VDC.
Higher temperatures might require a more negative VGS by a maximum of -4.5 mV/°C and, therefore, possibly larger values of VS and, accordingly, of R5. This would be to ensure that U1’s cathode voltage remains above 1.24 V under all conditions.
D2 is selected for a Zener voltage which, when added to D1’s voltage drop, is greater than VS, but is less than the lesser of the maximum allowed cathode-anode voltage of U1 (18 V) and the maximum allowed VGS of M1 (20 V). D1‘s small capacitance shields the rest of the circuit from the Zener capacitance, which might otherwise induce oscillations. The diodes are probably not needed, but they provide cheap protection. Neither passes current or affects circuit performance during normal operation. C1 ensures stable operation.
U1 strives to establish a constant voltage at VS regardless of the DC and AC voltage variations of the unregulated supply V1. Working against it in descending order of impact are the magnitude of the conductance of the R3 + R4 resistor string, U1‘s falling loop gain with frequency, and M1’s large Rds and small Cds. Still, the circuit built around the 400-V VDS-capable M1 achieves some surprisingly good results in the test circuit of Figure 4.

Figure 4 Circuit used to test the impedance of the Figure 3 current source.
Table 1 and Figure 5 list and display some measurements. Impedances in megohms are calculated using the formula RLOAD · 10^(–dB(VLOAD/VGEN)/20) / 1E6.

Table 1 Impedances of the current source of Figure 3 at various frequencies, evaluated using the circuit of Figure 4.
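For reference, the impedance formula can be applied directly to any measured attenuation; the values in the Python snippet below are made-up examples rather than entries from Table 1.

```python
# Z (megohms) ~ R_LOAD * 10^(-dB/20) / 1e6, where dB is the measured level
# of V_LOAD relative to V_GEN in the Figure 4 test circuit (a negative number).
def z_megohm(r_load_ohm, db_vload_vs_vgen):
    return r_load_ohm * 10.0 ** (-db_vload_vs_vgen / 20.0) / 1e6

for r_load, db in ((1_000.0, -60.0), (1_000.0, -40.0)):
    print(f"R_LOAD = {r_load:.0f} ohm at {db:.0f} dB -> Z ~ {z_megohm(r_load, db):.2f} Mohm")
```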

Figure 5 Plotted curves of Figure 3 current source impedance from the data in Table 1.
Observations
There are several conclusions that can be drawn from the curves in Figure 5. The major one is that at low frequencies, the AC impedance Z is roughly inversely proportional to current. A more insightful way to express this is that Z is proportional to R3 + R4, which sets the current. With larger resistance, current variations produce larger voltages for the ‘431 IC to use for regulation; that is, there’s more gain available in the circuit’s feedback loop to increase impedance.
Another phenomenon is that in the 1 and 10-mA current curves, the impedance rises much more quickly as frequency increases above 1 kHz. This is consistent with the fact that the TLVH431B gain is more or less flat from DC to 1 kHz and falls thereafter. The following phenomenon masks this effect somewhat at the higher 100 mA current.
Finally, at all currents, there is an advantage to operating at higher values of VDS. This is especially apparent at the highest current, 100 mA. And this is consistent with the fact that for the characteristic curves of the DN2540 MOSFET seen in Figure 1, higher VDS voltages are required at higher currents before the curves become horizontal.
Precision current source
A precision, high-impedance, moderate- to high-voltage-compliant current source has been introduced. Its two-terminal nature means that a load in series with it can be connected to the source’s positive or negative end. Unlike earlier designs, the ‘431 regulator IC’s operating current is independent of both the source’s supply voltage and of its MOSFET’s VGS voltage. The result is a more predictable DC current as well as higher AC impedances than would otherwise be obtainable.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- A high-performance current source
- PWM-programmed LM317 constant current source
- Simple, precise, bi-directional current source
- A negative current source with PWM input and LM337 output
- Programmable current source requires no power supply
Back EMF and electric motors: From fundamentals to real-world applications

Let us begin this session by revisiting a nostalgic motor control IC—the AN6651—designed for rotating speed control of compact DC motors used in tape recorders, record players, and similar devices.
The figure below shows the AN6651’s block diagram and a typical application circuit, both sourced from a 1997 Panasonic datasheet. These retouched visuals offer a glimpse into the IC’s internal architecture and its practical role in analog motor control.

Figure 1 Here is the block diagram and application circuit of the AN6651 motor control IC. Source: Panasonic
Luckily, for those still curious to give it a try, the UTC AN6651—today’s counterpart to the legacy AN6651—is readily available from several sources.
Before we dive deeper, here is a quick question—why did I choose to begin with the AN6651? It’s simply because this legacy chip elegantly controls motor speed using back electromotive force (EMF) feedback—a clever analog technique that keeps rotation stable without relying on external sensors.
In analog systems, this approach is especially elegant: the IC monitors the voltage generated by the motor itself (its back EMF), which is proportional to speed. By adjusting the drive current to maintain a target EMF, the chip effectively regulates motor speed under varying loads and supply conditions.
And yes, this post dives into back EMF (BEMF) and electric motors. Let’s get started.
Understanding back EMF in everyday motors
A spinning motor also acts like a generator, as its coils moving through magnetic fields induce an opposing voltage called back EMF. This back EMF reduces the current flowing through the motor once it’s up to speed.
At that point, only enough current flows to overcome friction and do useful work—far less than the surge needed to get it spinning. Actually, it takes very little time for the motor to reach operating speed—and for the current to drop from its high initial value.
This self-regulating behaviour of back EMF is central to motor efficiency and protection. As the mechanical load rises and the motor begins to slow, back EMF decreases, allowing more current to flow and generate the required torque. Under light or no-load conditions, the motor speeds up, increasing back EMF and limiting current draw.
This dynamic ensures that the motor adjusts its power consumption based on demand, preventing excessive current that could overheat the windings or damage components. In essence, back EMF reflects motor speed and actively stabilizes performance, a principle rooted in classical DC motor theory.
It’s worth noting that back EMF plays a critical role as a natural current limiter during normal motor operation. When motor speed drops—whether due to a brownout or excessive mechanical loading—the resulting reduction in back EMF allows more current to flow through the windings.
However, if left unchecked, this surge can lead to overheating and permanent damage. Maintaining adequate speed and load conditions helps preserve the protective function of back EMF, ensuring safe and efficient motor performance.
Armature feedback method in motion control
Armature feedback is a form of self-regulating (passive) speed control that uses back EMF and has been employed for decades in audio tape transport mechanisms, luxury toys, and other purpose-built devices. It remains widely used in low-cost motor control systems where precision sensors or encoders are impractical.
This approach leverages the motor’s ability to act as a generator: as the motor rotates, it produces a voltage proportional to its speed. Like any generator, the output also depends on the strength of the magnetic field flux.
Now let’s take a quick look at how to measure back EMF using a minimalist hardware setup.

Figure 2 The above blueprint presents a minimalist hardware setup for measuring the back EMF of a DC motor. Source: Author
Just to elaborate, when the MOSFET is ON, current flows from the power supply through the motor to ground, during which back EMF cannot be measured. When the MOSFET is OFF, the motor’s negative terminal floats, allowing back EMF to be measured. A microcontroller can generate the required PWM signal to drive the MOSFET.
Likewise, its onboard analog-to-digital converter (ADC) can measure the back EMF voltage relative to ground for further processing. Note that since the ADC measures voltage relative to ground, a lower input value corresponds to a higher back EMF.
That is, measuring the motor’s speed using back EMF involves two alternating steps: first, run the motor for a brief period; then, remove the drive signal. Due to inertia in the motor and mechanical system, the rotor continues to spin momentarily, and this coasting phase provides a window to sample the back EMF voltage and estimate the motor’s rotational speed.
The reference signal can then be routed to the PWM section, where the drive power is fine-tuned to maintain steady motor operation.
Still, in most cases, since the PWM driver outputs armature voltage as pulses, back EMF can also be measured during the intervals between those pulses. Note that when the transistor switches off, a strong inductive spike is generated and the recirculation current flows through the antiparallel flyback diode. Therefore, a brief delay is required to allow the back EMF voltage to settle before measurement.
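A minimal sketch of that drive, coast, and sample sequence is shown below in Python-style pseudocode; the hardware helpers and timing constants are hypothetical placeholders for whichever MCU peripherals are actually used, not a specific vendor API.

```python
import time

# Hypothetical hardware helpers: replace the bodies with the target MCU's
# PWM and ADC calls. The stubs below only keep the sketch self-contained.
def pwm_set_duty(percent):
    pass                               # drive the low-side MOSFET gate at `percent` duty

def pwm_disable():
    pass                               # stop the PWM so the motor terminal can float

def adc_read_millivolts():
    return 9_000                       # stub: voltage at the motor's floating terminal, mV

SUPPLY_MV      = 12_000   # nominal supply, mV
SETTLE_DELAY_S = 0.002    # let the flyback spike decay before sampling
DRIVE_PERIOD_S = 0.020    # drive time between back-EMF measurements
KP             = 0.01     # gain of a crude proportional speed loop

def read_back_emf_mv():
    """Coast briefly and estimate back EMF. The ADC reads relative to ground,
    so a lower reading means a higher back EMF in this low-side topology."""
    pwm_disable()
    time.sleep(SETTLE_DELAY_S)
    return SUPPLY_MV - adc_read_millivolts()

def regulate(target_bemf_mv, duty=50.0, cycles=100):
    for _ in range(cycles):
        pwm_set_duty(duty)
        time.sleep(DRIVE_PERIOD_S)                        # drive phase
        error = target_bemf_mv - read_back_emf_mv()       # coast + measure phase
        duty = min(100.0, max(0.0, duty + KP * error))    # nudge the drive power
    return duty
```

In real firmware the sleep calls would typically be replaced by timer interrupts synchronized to the PWM period.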
Notably, a high-side P-channel MOSFET can be used as a motor driver transistor instead of a low-side N-channel MOSFET. Likewise, discrete op-amps—rather than dedicated ICs—can also govern motor speed, but that is a topic for another day.
And while this is merely a blueprint, its flexibility allows it to be readily adapted for measuring the back EMF—and thus the RPM—of nearly any DC motor. With just a few tweaks, this low-cost approach can be extended to support a wide range of motor control applications—sensorless, scalable, and easy to implement. Naturally, it takes time, technical skill, and a bit of patience—but you can master it.
Back EMF and the BLDC motor
Back EMF in BLDC motors acts like a built-in feedback system, helping the motor regulate its speed, boost efficiency, and support smooth sensorless control. The shape of this feedback signal depends on how the motor is designed, with trapezoidal and sinusoidal waveforms being the most common.
While challenges like low-speed control and waveform distortion can arise, understanding and managing back EMF effectively opens the door to unlocking the full potential of BLDC motors in everything from fans to drones to electric vehicles.
So, what are the key effects of back EMF in BLDC motors? Let us take a closer look:
- Design influence: The shape of the back EMF waveform—trapezoidal or sinusoidal—directly affects control strategy, acoustic noise, and how smoothly the motor runs. Trapezoidal designs suit simpler, cost-effective controllers, while sinusoidal profiles offer quieter, more refined motion.
- Position estimation: Back EMF is widely used in sensorless control algorithms to estimate rotor position.
- Speed control: Back EMF is directly tied to rotor speed, making it a reliable signal for regulating motor speed without external sensors.
- Speed limitation: Back EMF eventually balances the supply voltage, limiting further acceleration unless voltage is increased.
- Current modulation: As the motor spins faster, back EMF increases, reducing the effective voltage across the windings and limiting current flow.
- Torque impact: Since back EMF opposes the applied voltage, it affects torque production. At high speeds, the stronger back EMF reduces the current drawn, resulting in lower torque.
- Efficiency optimization: Aligning commutation with back EMF waveform improves performance and reduces losses.
- Regenerative braking: In some systems, back EMF is harnessed during braking to feed energy back into the power supply or battery, a valuable feature in electric vehicles and battery-powered devices where efficiency matters.
Oh, I nearly skipped over a few clever tricks that make BLDC motor control even more efficient. One of them is back EMF zero crossing—a sensorless technique where the controller detects when the voltage of an unpowered phase crosses zero, using that instant to time commutation events without physical sensors. To avoid false triggers from electrical noise or switching artifacts, this signal often needs debouncing, either through filtering or timing thresholds.
But this method does not work at startup, when the rotor is not spinning fast enough to generate usable back EMF. That is where open-loop acceleration comes in: the motor is driven with fixed timing until it reaches a speed where back EMF becomes detectable and closed-loop control can take over.
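For the zero-crossing detection itself, a simple debounce counter is often enough. The Python sketch below (an illustrative fragment, not production firmware) accepts a crossing only after several consecutive samples stay on the far side of the threshold, typically half the DC bus voltage, after which commutation can be scheduled roughly 30 electrical degrees later.

```python
DEBOUNCE_SAMPLES = 3   # consecutive samples required before a crossing is accepted

def detect_zero_crossing(samples, threshold):
    """Return the index where the floating-phase voltage crosses `threshold`
    and stays across it for DEBOUNCE_SAMPLES samples, or None if no debounced
    crossing is found."""
    rising = samples[0] < threshold      # decide which direction we are waiting for
    count = 0
    for i, v in enumerate(samples):
        crossed = (v >= threshold) if rising else (v <= threshold)
        count = count + 1 if crossed else 0
        if count >= DEBOUNCE_SAMPLES:
            return i - DEBOUNCE_SAMPLES + 1   # first sample of the debounced run
    return None

# Example with made-up phase samples (mV) and a 6 V bus (threshold = 3000 mV):
print(detect_zero_crossing([500, 900, 3100, 2800, 3200, 3400, 3600], 3000))
```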
For smoother and more precise performance, field-oriented control (FOC) goes a step further. It transforms motor currents into a rotating reference frame, enabling accurate torque and flux control. Though traditionally used in permanent magnet synchronous motors (PMSMs), FOC is increasingly applied to sinusoidal BLDC motors for quieter, more refined motion.
A vast number of ICs nowadays make sensorless motor control feel like a walk in the park. As an example, below you will find the application schematic of the DRV10983 motor IC, which elegantly integrates power MOSFETs for driving a three-phase sensorless BLDC motor.

Figure 3 Application schematic of the DRV10983 chip, illustrating its function as a three-phase sensorless motor driver with integrated power MOSFETs. Source: Texas Instruments
That wraps things up for now. Talked too much, but there is plenty more to uncover. If this did not quench your thirst, stay tuned—more insights are brewing.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Driver Precision, Efficiency Get a Boost from Microelectronics
- Brushless DC Motors – Part I: Construction and Operating Principles
- Brushless DC Motors–Part II: Control Principles
- Back EMF method detects stepper motor stall: Pt. 1–The basics
- Back EMF method detects stepper motor stall: Pt. 2–Torque effects and detection circuitry
Beyond the current smart grid management systems

Modernizing the electric grid involves more than upgrading control systems with sophisticated software—it requires embedding sensors and automated controls across the entire system. It’s not only the digital brains that manage the network but also the physical devices, like the motors that automate switch operations, which serve as the system’s hands.
Only by integrating sensors and robust controls throughout the entire grid can we fully realize the vision of a smart, flexible, high-capacity, efficient, and reliable power infrastructure.

Source: Bison
The drive to modernize the power grid
The need for increased capacity and greater flexibility is driving the modernization of the power grid. The rapid electrification of transportation and HVAC systems, combined with the rise of artificial intelligence (AI) technologies, is placing unprecedented demands on the energy network.
To meet these challenges, the grid must become more dynamic, capable of supporting new technologies while optimizing efficiency and ensuring reliability.
Integrating distributed energy resources (DERs), such as rooftop solar panels, battery storage, and wind farms, adds further complexity. So, advanced fault detection, self-healing capabilities, and more intelligent controls are essential to managing these resources effectively. Grid-level energy storage solutions, like battery buffers, are also critical for balancing supply and demand as the energy landscape evolves.
At the same time, the grid must address the growing need for resilience. Aging infrastructure, much of it built decades ago, struggles to meet today’s energy demands. Upgrading these outdated systems is vital to ensuring reliability and avoiding costly outages that disrupt businesses and communities.
The increasing frequency of climate-related disasters, including hurricanes, wildfires, and heat waves, highlights the urgency of a resilient grid. Therefore, modernizing the grid to withstand and recover from extreme weather events is no longer optional; it’s essential for the stability of our energy future.
The challenges posed by outdated infrastructure and climate-related disasters are accelerating the adoption of advanced technologies like Supervisory Control and Data Acquisition (SCADA) systems and Advanced Distribution Management Systems (ADMS). These innovations enhance grid visibility, allowing operators to monitor and manage energy flow in real time. This level of control is crucial for quickly addressing disruptions and preventing widespread outages.
Additionally, ADMS makes the grid smarter and more efficient by leveraging predictive analytics. ADMS can forecast energy demand, identify potential issues before they occur, and optimize the flow of electricity across the grid. It also supports conditional predictive maintenance, allowing utilities to address equipment issues proactively based on real-time data and usage patterns.
The key to successful digitization: Fully integrated systems
Smart grids follow the dynamics of the overall global shift toward digitization, aligning with advancements in Industry 4.0, where smart factories go beyond advanced software and analytics. It’s a complete system that integrates IoT sensors, robotics, and distributed controls throughout the production line, creating a setup that’s more productive, flexible, and transparent.
By offering real-time visibility into the production process and component conditions, these automated systems streamline operations, minimize downtime, boost productivity, lower labor costs, and enhance preventive maintenance.
Similarly, smart grids operate as fully integrated systems that rely heavily on a network of advanced sensors, controls, and communication technologies.
Devices such as phasor measurement units (PMUs) provide real-time monitoring of electrical grid stability. Other essential sensors include voltage and current transducers, power quality transducers, and temperature sensors, which monitor key parameters to detect and prevent potential issues. Smart meters also enable two-way communication between utilities and consumers, enabling real-time energy usage tracking, dynamic pricing, and demand response capabilities.
The role of motorized switch operators in grid automation
Among the various distributed components in today’s modern grid infrastructure, motorized switch operators are among the most critical. These devices automate switchgear functions, eliminating the need for manual operation of equipment such as circuit breakers, load break switches, air and SF6 insulated disconnects, and medium- or high-voltage sectionalizers.
By automating these processes, motorized switch operators enhance precision, speed, and safety. They reduce the risk of human error and ensure smoother grid operations. Moreover, these devices integrate seamlessly with SCADA and ADMS, enabling real-time monitoring and control for improved efficiency and reliability across the grid.
Motorized switch operators aren’t just valuable for supporting the smart grid; they also offer practical business benefits on their own, even without smart grid integration. Automating switch operations eliminates the need to send out trucks and personnel every time a switch needs to be operated. This saves significant time, reduces service disruptions, and lowers fleet operation and labor costs.
Motorized switch operators also improve safety. During storms or emergencies, sending crews to remote or hazardous locations can be dangerous. Underground vaults, for example, can flood, turning them into high-voltage safety hazards. Automating these tasks ensures that switches can be operated without putting workers at risk.
The importance of a reliable motor and gear system
When automating switchgear operation, the reliability of the motor and gear system is crucial. These components must perform flawlessly every time, ensuring consistent operation in all conditions, from routine use to extreme situations like storms or grid emergencies.
Given that the switchgear in power grids is designed to operate reliably for decades, motor operators must be engineered with exceptional durability and dependability to ensure they surpass these long-term performance requirements.
Standard off-the-shelf motors often fail to meet the specific demands of medium- and high-voltage switchgear systems. General-purpose motors are typically not engineered to withstand extreme environmental conditions or the high number of operational cycles required in the power grid.
At the same time, utilities need to modernize infrastructure without expanding vault sizes, and switchgear OEMs want to enhance functionality without altering layouts. A “drop-in” solution offers a seamless and straightforward way to integrate advanced automation into existing systems, saving time, reducing costs, and minimizing downtime.
To meet the unique challenges of medium- and high-voltage switchgear, motor and gear systems must balance two critical constraints—compact size and limited amperage—while still delivering exceptional performance in speed and torque.
Here’s why these attributes matter:
- Compact size: Space is at a premium in power grid applications, especially for retrofits where manual switchgear is being converted to automated systems. So, motors must fit within the existing contours and confined spaces of switchgear installations. Even for new equipment, utilities demand compact designs to avoid costly expansions of service vaults or installation areas.
- Limited amperage draw: Motors often need to operate on as little as 5 amps, far less than what’s typical for other applications. Developing a motor and gear system that performs reliably within such constraints is essential to ensuring compatibility with power grid environments.
- High speed: Fast operation is critical for the safe and effective functioning of switchgear. The ability to open and close switches rapidly minimizes the risk of dangerous electrical arcs, which can cause severe equipment damage, pose safety hazards, and lead to cascading power grid failures.
- High torque: Overcoming the significant spring force of switchgear components requires motors with high torque. This ensures smooth and consistent operation, even under demanding conditions.
The challenge lies in meeting all four of these requirements. Compact size and low amperage requirements often compromise the speed and torque needed for reliable performance. That’s why motor and gear systems must be specifically engineered and rigorously tested to meet the stringent demands of medium- and high-voltage switchgear applications. Only purpose-built solutions can provide the durability, efficiency, and reliability required to support the long-term stability of the power grid.
Meeting environmental and installation demands
Beyond size, power, and performance considerations, motor and gear systems for medium- and high-voltage switchgear must also meet stringent environmental and installation requirements.
For example, these systems are often exposed to extreme weather conditions, requiring watertight designs to ensure durability in harsh environments. This is especially critical for applications where switchgear is housed in underground vaults that may be prone to flooding or moisture intrusion. Additionally, using specialized lubrication that performs well in both high and low temperature extremes is essential to maintain reliability and efficiency.
Equally important is the ease of installation. Rotary motors provide a significant advantage over linear actuators in this regard. Unlike linear actuators, which require precise calibration (a time-consuming, labor-intensive, and potentially error-prone process), rotary motors eliminate this complexity. Their straightforward setup not only reduces installation time but also enhances reliability by eliminating the need for manual adjustments.
To address the diversity of designs in switchgear systems produced by various OEMs, it is essential to work with a motor and gear manufacturer capable of delivering customized solutions. Retrofits often demand a tailored approach due to the unique configurations and requirements of different equipment. Partnering with a company that not only offers bespoke solutions but also has deep expertise in power grid applications is critical.
Future-proofing systems with reliable automation
Automating switchgear operation is a vital step in advancing the modernization of power grids, forming a critical component of smart grid development. Reliable, high-performance motor operators enhance operational efficiency and ensure longevity, providing a solid foundation for evolving power systems.
No matter where a utility is in its modernization journey, investing in durable and efficient motorized switch operators delivers lasting value. This forward-thinking approach not only enhances current operations but also ensures systems are ready to adapt and evolve as modernization advances.
Gary Dorough has advanced from sales representative to sales director for the Western United States and Canada during his 25-year stint at Bison, an AMETEK business. His experience includes 30 years of utility industry collaboration on harmonics mitigation and 15 years developing automated DC motor operators for medium-voltage switchgear systems.
Related Content
- Building a smarter grid
- Energy Generation and Storage in the Smart Grid
- Smart-grid standards: Progress through integration
- Spanish Startup Secures Smart Grids from Cyberattacks
- Smart Grids and AI: The Future of Efficient Energy Distribution
The post Beyond the current smart grid management systems appeared first on EDN.
A tutorial on instrumentation amplifier boundary plots—Part 1

In today’s information-driven society, there’s an ever-increasing preference to measure phenomena such as temperature, pressure, light, force, voltage and current. These measurements can be used in a plethora of products and systems, including medical diagnostic equipment, home heating, ventilation and air-conditioning systems, vehicle safety and charging systems, industrial automation, and test and measurement systems.
Many of these measurements require highly accurate signal-conditioning circuitry, which often includes an instrumentation amplifier (IA), whose purpose is to amplify differential signals while rejecting signals common to the inputs.
The most common issue when designing a circuit containing an IA is the misinterpretation of the boundary plot, also known as the common mode vs. output voltage, or VCM vs. VOUT plot. Misinterpreting the boundary plot can cause issues, including (but not limited to) signal distortion, clipping, and non-linearity.
Figure 1 depicts an example where the output of an IA such as the INA333 from Texas Instruments has distortion because the input signal violates the boundary plot (Figure 2).

Figure 1 Instrumentation amplifier output distortion is caused by VCM vs. VOUT violation. Source: Texas Instruments

Figure 2 This is how VOUT is limited by VCM. Source: Texas Instruments
This series about IAs will explain common- versus differential-mode signaling, basic operation of the traditional three-operational-amplifier (op amp) topology, and how to interpret and calculate the boundary plot.
This first installment will cover the common- versus differential-mode voltage and IA topologies, and show you how to derive the internal node equations and transfer function of a three-op-amp IA.
The IA topologies
While there are a variety of IA topologies, the traditional three-op-amp topology shown in Figure 3 is the most common and therefore will be the focus of this series. This topology has two stages: input and output. The input stage is made of two non-inverting amplifiers. The non-inverting amplifiers have high input impedance, which minimizes loading of the signal source.

Figure 3 This is what a traditional three-op-amp IA looks like. Source: Texas Instruments
The gain-setting resistor, RG, allows you to select any gain within the operating region of the device (typically 1 V/V to 1,000 V/V). The output stage is a traditional difference amplifier. The ratio of R2 to R1 sets the gain of the difference amplifier. The balanced signal paths from the inputs to the output yield an excellent common-mode rejection ratio (CMRR). Finally, the output voltage, VOUT, is referenced to the voltage applied to the reference pin, VREF.
Even though three-op-amp IAs are the most popular topology, other topologies such as the two-op-amp IA offer unique benefits (Figure 4). This topology has high input impedance and single-resistor-programmable gain. But since the signal path to the output for each input (V+IN and V-IN) is slightly different, this topology degrades CMRR performance, especially over frequency. Therefore, this type of IA is typically less expensive than the traditional three-op-amp topology.

Figure 4 The schematic shows a two-op-amp IA. Source: Texas Instruments
The IA shown in Figure 5 has a two-op-amp IA input stage. The third op amp, A3, is the output stage, which applies gain to the signal. Two external resistors set the gain. Because of the imbalanced signal paths, this topology also has degraded CMRR performance (<90 dB). Therefore, devices with this topology are typically less expensive than traditional three-op-amp IAs.

Figure 5 A two-op-amp IA is shown with output gain stage. Source: Texas Instruments
While the aforementioned topologies are the most prevalent, there are several unique IAs, including current mirror, current feedback, and indirect current feedback.
Figure 6 depicts the current mirror topology. This type of IA is preferable because it enables an input common-mode range that extends to both supply voltage rails, also known as rail-to-rail input. However, this benefit comes at the expense of bandwidth. Compared to two-op-amp IAs, this topology yields better CMRR performance (100 dB or greater). Finally, this topology requires two external resistors to set the gain.

Figure 6 This is what the current mirror topology looks like. Source: Texas Instruments
Figure 7 shows a simplified schematic of the current feedback topology. This topology leverages super-beta transistors (Q1 and Q2) to buffer the input signal and force it across the gain-setting resistor, RG. The resulting current flows through R1 and R2, which create voltages at the outputs of A1 and A2. The difference amplifier, A3, then rejects the common-mode signal.

Figure 7 Simplified schematic displays the current feedback topology. Source: Texas Instruments
This topology is advantageous because super-beta transistors yield a low input offset voltage, offset voltage drift, input bias current, and input noise (current and voltage).
Figure 8 depicts the simplified schematic of an indirect current feedback IA. This topology has two transconductance amplifiers (gm1 and gm2) and an integrator amplifier (gm3). The differential input voltage is converted to a current (IIN) by gm1. The gm2 stage converts the feedback voltage (VFB-VREF) into a current (IFB). The integrator amplifier matches IIN and IFB by changing VOUT, thereby adjusting VFB.

Figure 8 This schematic highlights the indirect current feedback topology. Source: Texas Instruments
One significant difference when compared to the previous topology is the rejection of the common-mode signal. In current feedback IAs (and similar architectures), the common-mode signal is rejected by the output-stage difference amplifier, A3. Indirect current feedback IAs, however, reject the common-mode signal immediately at the input (gm1). This provides excellent CMRR performance at DC and over frequency, independent of gain.
CMRR performance does not degrade if there is impedance on the reference pin (unlike other traditional IAs). Finally, this topology requires two resistors to set the gain, which may deliver excellent performance across temperature if the resistors have well-matched drift behavior.
Common- and differential-mode voltage
The common-mode voltage is the average voltage at the inputs of a differential amplifier. A differential amplifier is any amplifier (including op amps, difference amplifiers and IAs) that amplifies a differential signal while rejecting the common-mode voltage.
In the simplest representation, the differential voltage, VD, drives the non-inverting terminal while the inverting terminal connects to a constant voltage, VCM. Figure 9 depicts a more realistic definition of the input signal, where two voltage sources represent VD; each source has half the magnitude of VD. Performing Kirchhoff’s voltage law around the input loop proves that the two representations are equivalent, as the relations below make explicit.

Figure 9 The above schematic shows an alternate definition of common- and differential-mode voltages. Source: Texas Instruments
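Written out (using the standard definitions rather than the exact labels in TI’s figures), the common-mode and differential-mode quantities are related by:

\[ V_{CM} = \frac{V_{+IN} + V_{-IN}}{2}, \qquad V_D = V_{+IN} - V_{-IN} \]

\[ V_{+IN} = V_{CM} + \frac{V_D}{2}, \qquad V_{-IN} = V_{CM} - \frac{V_D}{2} \]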
Three-op-amp IA analysis
Understanding the boundary plot requires an understanding of three-op-amp IA fundamentals. Figure 10 depicts a traditional three-op-amp IA with an input signal applied and with the input and output nodes of A1, A2 and A3 labeled.

Figure 10 A three-op-amp IA is shown with input signal and node labels. Source: Texas Instruments
Equation 1 depicts the overall transfer function of the circuit in Figure 10 and defines the gain of the input stage, GIS, and the gain of the output stage, GOS. Notice that the common-mode voltage, VCM, does not appear in the output-voltage equation, because an ideal IA completely rejects common-mode input signals.

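For reference, the widely published textbook form of this relationship for an ideal three-op-amp IA, with the two stage gains broken out, is:

\[ V_{OUT} = G_{IS} \cdot G_{OS} \cdot V_D + V_{REF}, \qquad G_{IS} = 1 + \frac{2R_F}{R_G}, \qquad G_{OS} = \frac{R_2}{R_1} \]

This matches the behavior described above: for an ideal IA, the common-mode voltage does not appear in the output.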
Noninverting amplifier input stage
Figure 11 depicts a simplified circuit that enables the derivation of node voltages VIA1 and VOA1.

Figure 11 The schematic shows a simplified circuit for VIA1 and VOA1. Source: Texas Instruments
Equation 2 calculates VIA1:

The analysis for VOA1 simplifies by applying the input-virtual-short property of ideal op amps. The voltage that appears at the RG pin connected to the inverting terminal of A2 is the same as the voltage at V+IN. Superposition results are shown in Equation 3, which simplifies to Equation 4.


Applying a similar analysis to A2 (Figure 12) yields Equation 5, Equation 6 and Equation 7.

Figure 12 This is a simplified circuit for VIA2 and VOA2. Source: Texas Instruments



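Under the usual assumptions (ideal op amps, with V-IN driving A1 and V+IN driving A2, consistent with the virtual-short argument above), the standard node results are VIA1 = V-IN, VIA2 = V+IN, and:

\[ V_{OA1} = V_{-IN}\left(1 + \frac{R_F}{R_G}\right) - V_{+IN}\,\frac{R_F}{R_G} = V_{CM} - \frac{V_D}{2}\left(1 + \frac{2R_F}{R_G}\right) \]

\[ V_{OA2} = V_{+IN}\left(1 + \frac{R_F}{R_G}\right) - V_{-IN}\,\frac{R_F}{R_G} = V_{CM} + \frac{V_D}{2}\left(1 + \frac{2R_F}{R_G}\right) \]

Note that VCM passes through the input stage at unity gain, while VD is amplified by the full input-stage gain; this is the behavior the boundary plot ultimately captures.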
Difference amplifier output stage
Figure 13 shows that A3, R1 and R2 make up the difference amplifier output stage, whose transfer function is defined in Equation 8.

Figure 13 The above schematic displays difference amplifier input (VDIFF). Source: Texas Instruments

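Assuming the conventional difference-amplifier connection, with VOA2 driving the non-inverting side through R1, the customary form of this transfer function is:

\[ V_{OUT} = \frac{R_2}{R_1}\left(V_{OA2} - V_{OA1}\right) + V_{REF} = G_{OS} \cdot V_{DIFF} + V_{REF} \]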
Equation 9, Equation 10 and Equation 11 use the equations for VOA1 and VOA2 to derive VDIFF in terms of the differential input signal, VD, as well as RF and the gain-setting resistor, RG.



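Using the input-stage node results given earlier, the difference reduces to:

\[ V_{DIFF} = V_{OA2} - V_{OA1} = V_D\left(1 + \frac{2R_F}{R_G}\right) = G_{IS} \cdot V_D \]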
Substituting Equation 11 for VDIFF in Equation 8 yields Equation 12, which is the same as Equation 1.

In most IAs, the gain of the output stage is 1 V/V, in which case Equation 12 simplifies to Equation 13:

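With GOS = 1 V/V, the commonly quoted single-gain-resistor form follows:

\[ V_{OUT} = \left(1 + \frac{2R_F}{R_G}\right) V_D + V_{REF} \]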
Figure 14 is used to determine the equations for nodes VOA3 and VIA3.

Figure 14 This diagram highlights difference amplifier internal nodes. Source: Texas Instruments
The equation for VOA3 is the same as VOUT, as shown in Equation 14:

Applying superposition, as shown in Equation 15, yields the equation for VIA3. The voltage at the non-inverting node of A3 sets the amplifier’s common-mode voltage. Therefore, only VOA2 and VREF affect VIA3.

Since GOS=R2/R1, Equation 15 can be rewritten as Equation 16:

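In the conventional difference-amplifier connection, VIA3 is simply the R1/R2 divider between VOA2 and VREF; assuming that labeling, the result can be written as:

\[ V_{IA3} = V_{OA2}\,\frac{R_2}{R_1 + R_2} + V_{REF}\,\frac{R_1}{R_1 + R_2} = \frac{G_{OS} \cdot V_{OA2} + V_{REF}}{1 + G_{OS}} \]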
Part 2 highlights
The second part of this series will use the equations from the first part to plot each internal amplifier’s input common-mode and output-swing limitation as a function of the IA’s common-mode voltage.
Peter Semig is an applications manager in the Precision Signal Conditioning group at Texas Instruments (TI). He received his bachelor’s and master’s degrees in electrical engineering from Michigan State University in East Lansing, Michigan.
Related Content
- Instrumentation amplifier input-circuit strategies
- Discrete vs. integrated instrumentation amplifiers
- New Instrumentation Amplifier Makes Sensing Easy
- Instrumentation amplifier VCM vs VOUT plots: part 1
- Instrumentation amplifier VCM vs. VOUT plots: part 2
The post A tutorial on instrumentation amplifier boundary plots—Part 1 appeared first on EDN.
ADI upgrades its embedded development platform for AI

Analog Devices, Inc. simplifies embedded AI development with its latest CodeFusion Studio release, offering a new bring-your-own-model capability, unified configuration tools, and a Zephyr-based modular framework for runtime profiling. The upgraded open-source embedded development platform delivers advanced abstraction, AI integration, and automation tools to streamline the development and deployment of ADI’s processors and microcontrollers (MCUs).
CodeFusion Studio 2.0 is now the single entry point for development across all ADI hardware, supporting 27 products today, up from five when the platform was first introduced in 2024.
Jason Griffin, ADI’s managing director, software and AI strategy, said the release of CodeFusion Studio 2.0 is a major leap forward in ADI’s developer-first journey, bringing an open extensible architecture across the company’s embedded ecosystem with innovation focused on simplicity, performance, and speed.
CodeFusion Studio 2.0 streamlines embedded AI development. (Source: Analog Devices Inc.)
A major goal of CodeFusion Studio 2.0 is to help teams move faster from evaluation to deployment, Griffin said. “Everything from SDK [software development kit] setup and board configuration to example code deployment is automated or simplified.”
Griffin calls it a “complete evolution of how developers build on ADI technology,” by unifying embedded development, simplifying AI deployment, and providing performance visibility in one cohesive environment. “For developers and customers, this means faster design cycles, fewer barriers, and a shorter path from idea to production.”
A unified platform and streamlined workflow
CodeFusion Studio 2.0, based on Microsoft’s Visual Studio Code, features a built-in model compatibility checker, performance profiling tools, and optimization capabilities. The unified configuration tools reduce complexity across ADI’s hardware ecosystem.
The new Zephyr-based modular framework enables runtime AI/ML workload profiling, offering layer-by-layer analysis and integration with ADI’s heterogeneous platforms. This eliminates toolchain fragmentation, which simplifies ML deployment and reduces complexity, Griffin noted.
“One of the biggest challenges that developers face with multicore SoCs [system on chips] is juggling multiple IDEs [integrated development environments], toolchains, and debuggers,” Griffin explained. “Each core, whether Arm, DSP [digital signal processor], or MPU [microprocessor], comes with its own setup, and that fragmentation slows teams down.”
“In CodeFusion Studio 2.0, that changes completely,” he added. “Everything now lives in a single unified workspace. You can configure, build, and debug every core from one environment, with shared memory maps, peripheral management, and consistent build dependencies. The result is a streamlined workflow that minimizes context switching and maximizes focus, so developers spend less time on setup and more time on system design and optimization.”
CodeFusion Studio System Planner is also updated to support multicore applications and expanded device compatibility. It now includes interactive memory allocation, improved peripheral setup, and streamlined pin assignment.
CodeFusion Studio 2.0 adds interactive memory allocation (Source: Analog Devices Inc.)
The growing complexity in managing cores, memory, and peripherals in embedded systems is becoming overwhelming, Griffin said. The system planner gives “developers a clear graphical view of the entire SoC, letting them visualize cores, assign peripherals, and define inter-core communication all in one workspace.”
In addition, with cross-core awareness, the environment validates shared resources automatically.
Another challenge is system optimization, which is addressed with multicore profiling tools, including the Zephyr AI profiler, system event viewer, and ELF file explorer.
“Understanding how the system behaves in real time, and finding where your performance can improve is where the Zephyr AI profiler comes in,” Griffin said. “It measures and optimizes AI workflows across ADI hardware from ultra-low-power edge devices to high-performance multicore systems. It supports frameworks like TensorFlow Lite Micro and TVM, profiling latency, memory and throughput in a consistent and streamlined way.”
Griffin said the system event viewer acts like a built-in logic analyzer, letting developers monitor events, set triggers, and stream data to see exactly how the system behaves. It’s invaluable for analyzing synchronization and timing across cores, he said.
The ELF file explorer provides a graphical map of memory and flash usage, helping teams make smarter optimization decisions.
CodeFusion Studio 2.0 also gives developers the ability to download SDKs, toolchains, and plugins on demand, with optional telemetry for diagnostic and multicore support.
Doubling down on AI
CodeFusion Studio 2.0 simplifies the development of AI-enabled embedded systems with support for complete end-to-end AI workflows. This enables developers to bring their own models and deploy them across ADI’s range of processors, from low-power edge devices to high-performance DSPs.
“We’ve made the workflow dramatically easier,” Griffin said. “Developers can now import, convert, and deploy AI models directly to ADI hardware. No more stitching together separate tools. With the AI deployment tools, you can assign models to specific cores, verify compatibility, and profile performance before runtime, ensuring every model runs efficiently on the silicon right from the start.”
Manage AI models with CodeFusion Studio 2.0 from import to deployment (Source: Analog Devices Inc.)
Easier debugging
CodeFusion Studio 2.0 also adds new integrated debugging features that bring real-time visibility across multicore and heterogeneous systems, enabling faster issue resolution, shorter debug cycles, and more intuitive troubleshooting in a unified debug experience.
One of the toughest parts of embedded development is debugging multicore systems, Griffin noted. “Each core runs its own firmware on its own schedule, often with its own toolchain, making full visibility a challenge.”
CodeFusion Studio 2.0 solves this problem, he said. “Our new unified debug experience gives developers real-time visibility across all cores—CPUs, DSPs, and MPUs—in one environment. You can trace interactions, inspect shared resources, and resolve issues faster without switching between tools.”
Developers spend more than 60% of their time debugging, Griffin said, and ADI wanted to address this challenge and reduce that time sink.
CodeFusion Studio 2.0 now includes core dump analysis and advanced GDB integration, which includes custom JSON and Python scripts for both Windows and Linux with multicore support.
A big advance is debugging with multicore GDB, core dump analysis, and RTOS awareness working together in one intelligent, uniform experience, Griffin said.
“We’ve added core dump analysis, built around Zephyr RTOS, to automatically extract and visualize crash data; it helps pinpoint root causes quickly and confidently,” he continued. “And the new GDB toolbox provides advanced scripting performance, tracing and automation, making it the most capable debugging suite ADI has ever offered.”
The ultimate goal is to accelerate development and reduce risk for customers, which is what the unified workflows and automation provide, he added.
Future releases are expected to focus on deeper hardware-software integration, expanded runtime environments, and new capabilities, targeting growing developer requirements in physical AI.
CodeFusion Studio 2.0 is now available for download. Other resources include documentation and community support.
The post ADI upgrades its embedded development platform for AI appeared first on EDN.
32-bit MCUs deliver industrial-grade performance

GigaDevice Semiconductor Inc. launches a new family of high-performance GD32 32-bit general-purpose microcontrollers (MCUs) for a range of industrial applications. The GD32F503/505 32-bit MCUs expand the company’s portfolio based on the Arm Cortex-M33 core. Applications include digital power supplies, industrial automation, motor control, robotic vacuum cleaners, battery management systems, and humanoid robots.
(Source: GigaDevice Semiconductor Inc.)
Built on the Arm v8-M architecture, the GD32F503/505 series offers flexible memory configurations, high integration, and built-in security functions, and features an advanced digital signal processor, hardware accelerator and a single-precision floating-point unit. The GD32F505 operates at a frequency of 280 MHz, while the GD32F503 runs at 252 MHz. Both devices achieve up to 4.10 CoreMark/MHz and 1.51 DMIPS/MHz.
The series offers up to 1024 KB of Flash and 192 KB of SRAM. Users can allocate code-flash, data-flash, and SRAM locations through scatter loading, allowing them to tailor memory resources to their specific application, GigaDevice said.
The GD32F503/505 series also integrates a set of peripheral resources, including three analog-to-digital converters with a sampling rate of up to 3 Ms/s (supporting up to 25 channels), one fast comparator, and one digital-to-analog converter. For connectivity, it supports up to three SPIs, two I2Ss, two I2Cs, three USARTs, two UARTs, two CAN-FDs, and one USBFS interface.
The timing system features one 32-bit general-purpose timer, five 16-bit general-purpose timers, two 16-bit basic timers, and two 16-bit PWM advanced timers. This translates into precise and flexible waveform control and robust protection mechanisms for applications such as digital power supplies and motor control.
The operating voltage range of the GD32F503/505 series is 2.6 V to 3.6 V, and it operates over the industrial-grade temperature range of -40°C to 105°C. It also offers three power-saving modes for maximizing power efficiency.
These MCUs also provide high-level ESD protection with contact discharge up to 8 kV and air discharge up to 15 kV. Their HBM/CDM immunity is stable at 4,000 V/1,000 V even after three Zap tests, demonstrating reliability margins that exceed conventional standards for industries such as industrial and home appliances, GigaDevice said.
In addition, the MCUs provide multi-level protection of code and data, supporting firmware upgrades, integrity and authenticity verification, and anti-rollback checks. Device security includes a secure boot and secure firmware update platform, along with hardware security features such as user secure storage areas. Other features include a built-in hardware security engine integrating SHA-256 hash algorithms, AES-128/256 encryption algorithms, and a true random number generator. Each device has a unique independent UID for device authentication and lifecycle management.
A multi-layered hardware security mechanism is centered around multi-channel watchdogs, power and clock monitoring, and hardware CRC. In addition, the GD32F5xx series’ software test library is certified to the German IEC 61508 SC3 (SIL 2/SIL 3) for functional safety. The series provides a complete safety package, including key documents such as a safety manual, FMEDA report, and safety self-test library.
The GD32 MCUs feature a full-chain development ecosystem. This includes the free GD32 Embedded Builder IDE, GD-LINK debugging, and the GD32 all-in-one programmer. Tool providers such as Arm, KEIL, IAR, and SEGGER also support this series, including compilation development and trace debugging.
The GD32F503/505 series is available in several package types, including LQFP100/64/48, QFN64/48/32, and BGA64. Samples are available, along with datasheets, software libraries, ecosystem guides, and supporting tools. Development boards are available on request. Mass production is scheduled to start in December. The series will be available through authorized distributors.
The post 32-bit MCUs deliver industrial-grade performance appeared first on EDN.
Board-to-board connectors reduce EMI

Molex LLC develops a quad-row shield connector, claiming the industry’s first space-saving, four-row signal pin layout with a metal electromagnetic interference (EMI) shield. The quad-row board-to-board connector achieves up to a 25-dB reduction in EMI compared to a non-shielded quad-row solution. These connectors are suited for space-constrained applications such as smart watches and other wearables, mobile devices, AR/VR applications, laptops, and gaming devices.
Shielding protects connectors from external electromagnetic noise, such as that from nearby components and far-field devices, which can cause signal degradation, data errors, and system faults. The quad-row connectors eliminate the need for external shielding parts and grounding by incorporating the EMI shields, which saves space and simplifies assembly. This also improves reliability and signal integrity.
(Source: Molex LLC)
The quad-row board-to-board connectors, first released in 2020, offer a 30% size reduction over dual-row connector designs, Molex said. The space-saving staggered-circuit layout positions pins across four rows at a signal contact pitch of 0.175 mm.
Targeting EMI challenges at 2.4-6 GHz and higher, the quad-row layout with the addition of an EMI shield mitigates both electromagnetic and radio frequency (RF) interference, as well as signal integrity issues that create noise.
The quad-row shield meets stringent EMI/EMC standards to lower regulatory testing burdens and speed product approvals, Molex said.
The new design also addresses the most significant requirements related to signal interference and incremental power. These include how best to achieve 80 times the signal connections and four times the power delivery of a single-pin connector, Molex said.
The quad-row connectors offer a 3-A current rating to meet customer requirements for high power in a compact design. Other specifications include a voltage rating of 50 V, a dielectric withstanding voltage of 250 V, and a rated insulation resistance of 100 megohms.
Samples of the quad-row shield connectors are available now to support custom inquiries. Commercial availability will follow in the second quarter of 2026.
The post Board-to-board connectors reduce EMI appeared first on EDN.
5-V ovens (some assembly required)—part 2

In the first part of this Design Idea (DI), we looked at simple ways of keeping critical components at a constant temperature using a linear approach. In this second part, we’ll investigate something PWM-based, which should be more controllable and hence give better results.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Adding PWM to the oven
As before, this starts with a module based on a TO-220 package, the tab of which makes a decent hotplate on which our target component(s) can be mounted. Figure 1 shows this new circuit, which compares the voltage from a thermistor/resistor pair with a tri-wave and uses the result to vary the duty cycle of the heating current. Varying the amplitude and level of that tri-wave lets us tune the circuit’s performance.
This looks too simple and perhaps obvious to be completely original, but a quick search found nothing very similar. At least this was designed from scratch.
Figure 1 A tri-wave oscillator, a thermistor, and a comparator work together to pulse-width modulate the current through R7, the main heating element. Q1 switches that current and also helps with the heating.
U1a forms a conventional oscillator running at around 1 kHz. Neither the frequency nor the exact wave-shape on C1 is critical. R1 and R2+R3 determine the tri-wave’s offset, and R4 its amplitude. U1b compares the voltage across the thermistor with the tri-wave, as shown in Figure 2. When the temperature is low so that voltage is higher than any part of the tri-wave, U1b’s output will be solidly low, turning on Q1 to heat up R7 as fast as possible.
As the temperature rises, the voltages start to overlap and proportional control kicks in, progressively reducing the on-time so that the heat input is proportional to the difference between the actual and target temperatures. By the time the set-point has been reached, the on-time is down to ~18%. This scheme minimizes or even eliminates overshoot. (Thermal time-constants—ignored for the moment—can upset this a little.)

Figure 2 Oscilloscope captures showing the operation of Figure 1’s circuit.
Once the circuit is stable, Th1 will have the same resistance as R6, or 3.36 kΩ at our nominal target of 50°C (or 50.03007…°C, assuming perfect components), so Figure 1’s point B will be at half-rail. To keep that balance, the tri-wave must be offset upwards so that slicing gives our 18% figure at the set-point. Setting R3 to 1k0 achieved that. The performance after starting can be seen in Figure 3. (The first 40 seconds or so is omitted because it’s boring.)

Figure 3 From cold, Figure 1’s circuit stabilizes in two to three minutes. The upper trace is U1b’s output, heavily filtered. Also shown are Th1’s temperature (magenta) and that of the hotplate as measured by an external thermistor probe (cyan).
The use of Q1 as an over-driven emitter follower needs some explanation. First thoughts were to use an NPN Darlington or an n-MOSFET as a switch (with U1b’s inputs swapped), but that meant that the collector or drain—which we want to use as a hotplate—would be flapping up and down at the switching frequency.
While the edges are slowish, they could still couple capacitively to a target device: potentially bad news. With a PNP Darlington, the collector can be at ground, give or take a handful of millivolts. (The fine copper wire used to connect the module to the outside world has a resistance of about 1 Ω per meter.) Q1 drops ~1.3 V and so provides about a third of the heating, rather like the corresponding device in Part 1. This is a good reason to stay with the idea of using a TO-220’s tab as that hotplate—at least for the moment. Q1 could be a p-MOSFET, but R7 would then need to be adjusted to suit its (highly variable) VGS(on): fiddly and unrealistic.
LED1 starts to turn on once the set-point is near and becomes brighter as the duty cycle falls. This worked as well in practice as the long-tailed pair approach used in Part 1’s Figure 4.
The duty cycle is given as 18%, but where does that figure come from? It’s the proportion of the input heat that leaks out once the circuit has stabilized, and that depends on how well the module is thermally insulated and how thin the lead-out wires are. With a maximum heating current of 120 mA (600 mW in), practical tests gave that 18% figure, implying that ~108 mW is being lost. With a temperature differential of ~30°C, that corresponds to an overall thermal resistance of ~280°C/W. (Many DIL ICs are quoted as around 100°C/W.)
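Spelling out that arithmetic from the figures quoted above:

\[ P_{loss} \approx 0.18 \times 600\ \text{mW} \approx 108\ \text{mW}, \qquad \theta \approx \frac{\Delta T}{P_{loss}} \approx \frac{30\,^{\circ}\text{C}}{0.108\ \text{W}} \approx 280\,^{\circ}\text{C/W} \]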
Some more assembly required
The final build is mechanically quite different and uses a custom-built hotplate instead of a TO-220’s tab. It’s shown in Figure 4.

Figure 4 Our new hotplate is a scrap of copper sheet with the heater resistors glued to it symmetrically, with Th1 on one side and room for the target component(s) on the other. The third picture shows it fixed to the lower block of insulating foam, with fine wires meandered and ready for terminating. Not shown: an extra wire to ground the copper. Please excuse the blobby epoxy. I’d never get a job on a production line.
R7 now comprises four 33-Ω resistors in series/parallel, which are epoxied towards the ends of a piece of copper, two on each side, with Th1 centered on one side. The other side becomes our hotplate area, with a sweet spot directly above the thermistor. Thermally, it is symmetrical, so that—all other things being equal, which they rarely are—our target component will be heated exactly like Th1.
The drive circuit is a variant on Figure 1, the main difference being Q1, which can now be a small but low-RON n-MOSFET as it’s no longer intended to dissipate any power. R3 and R4 are changed to give a tri-wave amplitude of ~500 mV pk–pk at a frequency of ~500 Hz to optimize the proportional control. Figure 5 and Figure 6 show the schematic and its performance. It now stabilizes within a degree after one minute and perhaps a tenth after two, with decent tracking between the internal (Th1) and hotplate temperatures. The duty cycle is higher, largely owing to the different construction; more (and bulkier) insulation would have reduced it, improving efficiency.

Figure 5 The driving circuit for the new hotplate.

Figure 6 How Figure 5’s circuit performs.
The intro to Part 1 touched on my original oven, which needed to stabilize the operation of a logarithmically tuned oscillator. It used a circuit similar to Part 1’s Figure 5 but had a separate power transistor, whose dissipation was wasted. The logging diode was surrounded by a thermally-insulated cradle of heating resistors and the control thermistor.
It worked well and still does, but these circuits improve on it. Time for a rebuild? If so, I’ll probably go for the simplest, Part 1/Figure 1 approach. For higher-power use, Figure 5 (above) could probably be scaled to use different heating resistors fed from a separate and larger voltage. Time for some more experimental fun, anyway.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- 5-V ovens (some assembly required)—part 1
- Fixing a fundamental flaw of self-sensing transistor thermostats
- Self-heated ∆Vbe transistor thermostat needs no calibration
- Take-back-half thermostat uses ∆Vbe transistor sensor
- Dropping a PRTD into a thermistor slot—impossible?
The post 5-V ovens (some assembly required)—part 2 appeared first on EDN.
Achieving analog precision via components and design, or just trim and go

I finally managed to get to a book that has been on my “to read” list for quite a while: “Beyond Measure: The Hidden History of Measurement from Cubits to Quantum Constants” (2022) by James Vincent (Figure 1). It traces the evolution of measurement, its uses, and its motivations, and how measurement has shaped our world, from ancient civilizations to the modern day. On a personal note, I found the first two-thirds of the book a great read, but in my opinion, the last third wandered and meandered off topic. Regardless, it was a worthwhile book overall.

Figure 1 This book provides a fascinating journey through the history of measurement and how different advances combined over the centuries to get us to our present levels. Source: W. W. Norton & Company Inc.
Several chapters deal with the way measurement and the nascent science of metrology were used in two leading manufacturing entities of the early 20th century: Rolls-Royce and Ford Motor Company, and the manufacturing differences between them.
Before looking at the differences, you have to “reset” your frame of reference and recognize that even low-to-moderate volume production in those days involved a lot of manual fabrication of component parts, as the “mass production” machinery we now take as a given often didn’t exist or could only do rough work.
Rolls-Royce made (and still makes) fine motor cars, of course. Much of their quality was not just in the finish and accessories; the car was entirely mechanical except for the ignition system. They featured a finely crafted and tuned powertrain. It’s not a myth that you could balance a filled wine glass on the hood (bonnet) while the engine was running and not see any ripples in the liquid’s surface. Furthermore, you could barely hear the engine at all at a time when cars were fairly noisy.
They achieved this level of performance using careful and laborious manual adjustments, trimming, filing, and balancing of the appropriate components to achieve that near-perfect balance and operation. Clearly, this was a time-consuming process requiring skilled and experienced craftspeople. It was “mass production” only in terms of volume, but not in terms of production flow as we understand it today.
In contrast, Henry Ford focused on mass production with interchangeable parts that would meet their design objectives immediately when assembled. Doing so required advances in measurement of the components at Ford’s factory to weed out incoming substandard parts and statistical analysis of quality, conformance, and deviations. Ford also sent specialists to suppliers’ factories to improve both their production processes and their own metrology.
Those were the days
Of course, those were different times in terms of industrial production. When the Wright brothers needed a gasoline engine for the 1903 Flyer, few “standard” engine choices were available, and none came close to their size, weight, and output-power needs.
So, their in-house mechanic Charlie Taylor machined an aluminum engine block, fabricated most parts, and assembled an engine (1903 Wright Engine) in just six weeks using a drill press and a lathe; it produced 12 horsepower, 50% above the 8 horsepower that their calculations indicated they needed (Figure 2).

Figure 2 Perhaps in an ultimate do-it-yourself project, Charlie Taylor, mechanic for the Wright brothers, machined the aluminum engine block, fabricated most of the parts, and then assembled the complete engine in six weeks (reportedly working only from rough sketches). Source: Wright Brothers Aeroplane Company
Which approach is better—fine adjusting and trims, or use of a better design and superior components? There’s little doubt that the “quality by components” approach is the better tactic in today’s world where even customized cars make use of many off-the-shelf parts.
Moreover, the required volume for a successful car-production line mandates avoiding hand-tuning of individual vehicles to make their components plug-and-play properly. Even Rolls-Royce now uses the Ford approach, of course; the alternative is impractical for modern vehicles except for special trim and accessories.
Single unit “perfection” uses both approaches
In some cases, both calibration and the use of a better topology and superior components combine for a unique design. Not surprisingly, a classic example is one of the first EDN articles by the late analog-design genius Jim Williams, “This 30-ppm scale proves that analog designs aren’t dead yet”. Yes, I have cited it in previous blogs, and that’s no accident (Figure 3).

Figure 3 This 1976 EDN article by Jim Williams set a standard for analog signal-chain technical expertise and insight that has rarely been equaled. Source: EDN
In the article, he describes his step-by-step design concept and fabrication process for a portable weigh scale that would offer 0.02% absolute accuracy (0.01 lb over a 300-pound range). Yet, it would never need adjustment to be put into use. Even though this article is nearly 50 years old, it still has relevant lessons for our very different world.
I believe that Jim did a follow-up article about 20 years later, where he revisited and upgraded that design using newer components, but I can’t find it online.
Today’s requirements were unimaginable—until recently
Use of in-process calibration is advancing due to techniques such as laser-based interferometry. For example, in wafer-processing equipment, the positional accuracy of the carriage, which moves over the wafer, needs to be in the sub-micrometer range.
While this level of performance can be achieved with friction-free air bearings, they cannot be used in extreme-ultraviolet (EUV) systems since those operate in an ultravacuum environment. Instead, high-performance mechanical bearings must be used, even though they are inferior to air bearings.
There are micrometer-level errors in the x-axis and y-axis, and the two axes are also not perfectly orthogonal, resulting in a system-level error typically greater than several micrometers across the 300 × 300-mm plane. To compensate, manufacturers add interferometry-based calibration of the mechanical positioning systems to determine the error topography of a mechanical platform.
For example, with a 300-mm wafer, the grid is scanned in 10-mm steps, and the interferometer determines the actual position. This value is compared against the motion-encoder value to determine a corrective offset. After this mapping, the system accuracy is improved by a factor of 10 and can achieve an absolute accuracy of better than 0.5 µm in the x-y plane.
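To make the mapping idea concrete, here is a minimal illustrative sketch (not any vendor’s actual algorithm) of how such a correction table might be stored and applied. The 31 × 31 offset arrays stand in for interferometer-minus-encoder errors measured at 10-mm steps; in a real system they would come from the calibration scan, not from random numbers.

import numpy as np

# Hypothetical calibration data: position error (interferometer minus encoder), in micrometers,
# measured on a 31 x 31 grid covering 0-300 mm in 10-mm steps along each axis.
grid = np.arange(0, 301, 10)                   # calibration points along each axis, mm
x_err = np.random.uniform(-3, 3, (31, 31))     # stand-in for the measured x-axis error map
y_err = np.random.uniform(-3, 3, (31, 31))     # stand-in for the measured y-axis error map

def corrected_target(x_mm, y_mm):
    """Return an encoder target adjusted by the bilinearly interpolated error map."""
    i = int(np.clip(np.searchsorted(grid, x_mm) - 1, 0, len(grid) - 2))
    j = int(np.clip(np.searchsorted(grid, y_mm) - 1, 0, len(grid) - 2))
    tx = (x_mm - grid[i]) / 10.0
    ty = (y_mm - grid[j]) / 10.0
    def interp(err):
        # Bilinear interpolation of an error map at (x_mm, y_mm)
        return ((1 - tx) * (1 - ty) * err[i, j] + tx * (1 - ty) * err[i + 1, j]
                + (1 - tx) * ty * err[i, j + 1] + tx * ty * err[i + 1, j + 1])
    # Command the stage to the nominal target minus the predicted error (µm converted to mm)
    return x_mm - interp(x_err) / 1000.0, y_mm - interp(y_err) / 1000.0

print(corrected_target(123.4, 210.0))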
Maybe too smart?
Of course, there are times when you can be a little too clever in the selection of components when working to improve system performance. Many years ago, I worked for a company making controllers for large machines, and there was one circuit function that needed coarse adjustment for changes in ambient temperature. The obvious way would have been to use a thermistor or a similar component in the circuit.
But our lead designer—a circuit genius by any measure—had a “better” idea. Since the design used a lot of cheap, 100-kΩ pullup resistors with poor temperature coefficient, he decided to use one of those instead of the thermistor in the stabilization loop, as they were already on the bill of materials. The bench prototype and pilot-run units worked as expected, but the regular production units had poor performance.
Long story short: our brilliant designer had based circuit stabilization on the deliberate poor tempco of these re-purposed pull-up resistors and the associated loop dynamic range. However, our in-house purchasing agent got a good deal on some resistors of the same value and size, but with a much tighter tempco. After all, getting a better component that was functionally and physically identical for less money seemed like a win-win.
That was fine for the pull-up role, but it meant that the transfer function of temperature to resistance was severely compressed. Identifying that problem took a lot of aggravation and time.
What’s your preferred approach to achieving a high-precision, accurate, stable analog-signal chain and front-end? Have you used both methods, or are you inherently partial to one over the other? Why?
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- JPL & NASA’s latest clock: almost perfect
- The Wright Brothers: Test Engineers as Well as Inventors
- Precision metrology redefines analog calibration strategy
- Inter-satellite link demonstrates metrology’s reach, capabilities
The post Achieving analog precision via components and design, or just trim and go appeared first on EDN.
LED illumination addresses ventilation (at the bulb, at least)

The bulk of the technologies and products based on them that I encounter in my everyday interaction with the consumer electronics industry are evolutionary (and barely so in some cases) versus revolutionary in nature. A laptop computer, a tablet, or a smartphone might get a periodic CPU-upgrade transplant, for example, enabling it to complete tasks a bit faster and/or a bit more energy-efficiently than before. But the task list is essentially the same as was the case with the prior product generation…and the generation before that…and…not to mention that the generational-cadence physical appearance also usually remains essentially the same.
Such cadence commonality is also the case with many LED light bulbs I’ve taken apart in recent years, in no small part because they’re intended to visually mimic incandescent precursors. But SANSI has taken a more revolutionary tack, in the process tackling an issue—heat—with which I’ve repeatedly struggled. Say what you (rightly) will about incandescent bulbs’ inherent energy inefficiency, along with the corresponding high temperature output that they radiate—there’s a fundamental reason why they were the core heat source for the Easy-Bake Oven, after all:

But consider, too, that they didn’t integrate any electronics; the sole failure points were the glass globe and filament inside it. Conversely, my installation of both CFL and LED light bulbs within airflow-deficient sconces in my wife’s office likely hastened both their failure and preparatory flickering, due to degradation of the capacitors, voltage converters and regulators, control ICs and other circuitry within the bulbs as well as their core illumination sources.
That’s why SANSI’s comparatively fresh approach to LED light bulb design, which I alluded to in the comments of my prior teardown, has intrigued me ever since I first saw and immediately bought both 2700K “warm white” and 5000K “daylight” color-temperature multiple-bulb sets on sale at Amazon two years ago:

They’re smaller A15, not standard A19, in overall dimensions, although the E26 base is common between the two formats, so they can generally still be used in place of incandescent bulbs (although, unlike incandescents, these particular LED light bulbs are not dimmable):
Note, too, their claimed 20% brighter illumination (900 vs 750 lumens) and 5x estimated longer usable lifetime (25,000 hours vs 5,000 hours). Key to that latter estimation, however, is not only the bulb’s inherent improved ventilation:

Versus metal-swathed and otherwise enclosed-circuitry conventional LED bulb alternatives:

But it is also the ventilation potential (or not) of wherever the bulb is installed, as the “no closed luminaires” warning included on the sticker on the left side of the SANSI packaging makes clear:

That said, even if your installation situation involves plenty of airflow around the bulb, don’t forget that the orientation of the bulb is important, too. Specifically, since heat rises, if the bulb is upside-down with the LEDs underneath the circuitry, the latter will still tend to get “cooked”.
Perusing our patient
Enough of the promo pictures. Let’s now look at the actual device I’ll be tearing down today, starting with the remainder of the box-side shots, each accompanied, as usual, by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:






Open ‘er up:
lift off the retaining cardboard layer, and here’s our 2700K four-pack, which (believe it or not) had set me back only $4.99 ($1.25/bulb) two years back:

The 5000K ones I also bought at that same time came as a two-pack, also promo-priced, this time at $4.29 ($2.15/bulb). Since they ended up being more expensive per bulb, and because I have only two of them, I’m not currently planning on also taking one of them apart. But I did temporarily remove one of them and replace it in the two-pack box with today’s victim, so you could see the LED phosphor-tint difference between them. 5000K on left, 2700K on right; I doubt there’s any other design difference between the two bulbs, but you never know…

Aside from the aforementioned cardboard flap for position retention above the bulbs and a chunk of Styrofoam below them (complete with holes for holding the bases’ end caps in place):

There’s no other padding inside, which might have proven tragic if we were dealing with glass-globe bulbs or flimsy filaments. In this case, conversely, it likely suffices. Also note the cleverly designed sliver of literature at the back of the box’s insides:

Now, for our patient, with initial overview perspectives of the top:

Bottom:

And side:

Check out all those ventilation slots! Also note the clips that keep the globe in place:

Before tackling those clips, here are six sequential clockwise-rotation shots of the side markings. I’ll leave it to you to mentally “glue” the verbiage snippets together into phrases and sentences:
Diving in for illuminated understanding
Now for those clips. Downside: they’re (understandably, given the high voltage running around inside) stubborn. Upside: no even-more-stubborn glue!
Voila:




Note the glimpses of additional “stuff” within the base, thanks to the revealing vents. Full disclosure and identification of the contents is our next (and last) aspiration:


As usual, twist the end cap off with a tongue-and-groove slip-joint (“Channellock”) pliers:


and the ceramic substrate (along with its still-connected wires and circuitry, of course) dutifully detaches from the plastic base straightaway:



Not much to see on the ceramic “plate” backside this time, aside from the 22µF 200V electrolytic capacitor poking through:

The frontside is where most of the “action” is:

At the bottom is a mini-PCB that mates the capacitor and wires’ soldered leads to the ceramic substrate-embedded traces. Around the perimeter, of course, is the series-connected chain of 17 (if I’ve counted correctly) LEDs with their orange-tinted phosphor coatings, spectrum-tuned to generate the 2700K “warm white” light. And the three SMD resistors scattered around the substrate, two next to an IC in the upper right quadrant (33Ω “33R0” and 20Ω “33R0”) and another (33Ω “334”) alongside a device at left, are also obvious.
Those two chips ended up generating the bulk of the design intrigue, in the latter case still an unresolved mystery (at least to me). The one at upper right is marked, alongside a company logo that I’d not encountered before, as follows:
JWB1981
1PC031A
The package also looks odd; the leads on both sides are asymmetrically spaced, and there’s an additional (fourth) lead on one side. But thanks to one of the results from my Google search on the first-line term, in the form of a Hackaday post that then pointed at an informative video:
This particular mystery has, at least I believe, been solved. Quoting from the Hackaday summary (with hyperlinks and other augmentations added by yours truly):
The chip in question is a Joulewatt JWB1981, for which no datasheet is available on the internet [BD note: actually, here it is!]. However, there is a datasheet for the JW1981, which is a linear LED driver. After reverse-engineering the PCB, bigclivedotcom concluded that the JWB1981 must [BD note: also] include an onboard bridge rectifier. The only other components on the board are three resistors, a capacitor, and LEDs.
The first resistor limits the inrush current to the large smoothing capacitor. The second resistor is to discharge the capacitor, while the final resistor sets the current output of the regulator. It is possible to eliminate the smoothing capacitor and discharge resistor, as other LED circuits have done, which also allows the light to be dimmable. However, this results in a very annoying flicker of the LEDs at the AC frequency, especially at low brightness settings.
Compare the resultant schematic shown in the video with one created by EDN’s Martin Rowe, done while reverse-engineering an A19 LED light bulb at the beginning of 2018, and you’ll see just how cost-effective a modern design approach like this can be.
That only leaves the chip at left, with two visible soldered contacts (one on each end), and bare on top save for a cryptic rectangular mark (which leaves Google Lens thinking it’s the guts of a light switch, believe it or not). It’s not referenced in “Big Clive’s” deciphered design, and I can’t find an image of anything like it anywhere else. Diode? Varistor to protect against voltage surges? Resettable fuse to handle current surges? Multiple of these? Something(s) else? Post your [educated, preferably] guesses, along with any other thoughts, in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Disassembling a LED-based light that’s not acting quite right…right?
- Teardown: Bluetooth-enhanced LED bulb
- Teardown: Zigbee-controlled LED light bulb
- Freeing a three-way LED light bulb’s insides from their captivity
- Teardown: What killed this LED bulb?
- Slideshow: LED Lighting Teardowns
The post LED illumination addresses ventilation (at the bulb, at least) appeared first on EDN.
Makefile vs. YAML: Modernizing verification simulation flows

Automation has become the backbone of modern SystemVerilog/UVM verification environments. As designs scale from block-level modules to full system-on-chips (SoCs), engineers rely heavily on scripts to orchestrate compilation, simulation, and regression. The effectiveness of these automation flows directly impacts verification quality, turnaround time, and team productivity.
For many years, the Makefile has been the tool of choice for managing these tasks. With its rule-based structure and wide availability, Makefile offered a straightforward way to compile RTL, run simulations, and execute regressions. This approach served well when testbenches were relatively small and configurations were simple.
However, as verification complexity exploded, the limitations of Makefile have become increasingly apparent. Mixing execution rules with hardcoded test configurations leads to fragile scripts that are difficult to scale or reuse across projects. Debugging syntax-heavy Makefiles often takes more effort than writing new tests, diverting attention from coverage and functional goals.
These challenges point toward the need for a more modular and human-readable alternative. YAML, a structured configuration language, addresses many of these shortcomings when paired with Python for execution. Before diving into this solution, it’s important to first examine how today’s flows operate and where they struggle.
Current scenario and challenges
In most verification environments today, Makefile remains the default choice for controlling compilation, simulation, and regression. A single Makefile often governs the entire flow—compiling RTL and testbench sources, invoking the simulator with tool-specific options, and managing regressions across multiple testcases. While this approach has been serviceable for smaller projects, it shows clear limitations as complexity increases.
Below is an outline of key challenges.
- Configuration management: Test lists are commonly hardcoded in text or CSV files, with seeds, defines, and tool flags scattered across multiple scripts. Updating or reusing these settings across projects is cumbersome.
- Readability and debugging: Makefile syntax is compact but cryptic, which makes debugging errors non-trivial. Even small changes can cascade into build failures, demanding significant engineer time.
- Scalability: As testbenches grow, adding new testcases or regression suites quickly bloats the Makefile. Managing hundreds of tests or regression campaigns becomes unwieldy.
- Tool dependence: Each Makefile is typically tied to a specific simulator, such as VCS, Questa, or Xcelium. Porting the flow to a different tool requires major rewrites.
- Limited reusability: Teams often reinvent similar flows for different projects, with little opportunity to share or reuse scripts.
These challenges shift the engineer’s focus away from verification quality and coverage goals toward the mechanics of scripting and tool debugging. Therefore, the industry needs a cleaner, modular, and more portable way to manage verification flows.
Makefile-based flow
A traditional Makefile-based verification flow centers around a single file containing multiple targets that handle compilation, simulation, and regression tasks. See the representative structure below.
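The sketch below is a minimal, illustrative version of such a flow rather than any team’s actual scripts; the file lists, options, and test names are assumptions, and the commands are deliberately VCS-oriented, which is exactly the tool coupling discussed next.

```make
# Illustrative Makefile sketch only; file lists, flags, and test names are assumptions.
SIM_TOOL ?= vcs
TEST     ?= sanity_test
SEED     ?= 1

COMP_OPTS = -sverilog -full64 -timescale=1ns/1ps -debug_access+all
SRC_FILES = -f rtl.f -f tb.f

compile:
	$(SIM_TOOL) $(COMP_OPTS) $(SRC_FILES) -o simv -l compile.log

sim: compile
	./simv +UVM_TESTNAME=$(TEST) +ntb_random_seed=$(SEED) -l $(TEST)_$(SEED).log

regress:
	for t in sanity_test smoke_test stress_test; do \
		$(MAKE) sim TEST=$$t; \
	done

clean:
	rm -rf simv simv.daidir csrc *.log
```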

This approach offers clear strengths: immediate familiarity with software engineers, no additional tool requirements, and straightforward dependency management. For small teams with stable tool chains, this simplicity remains compelling.
However, significant challenges emerge with scale. The cryptic syntax becomes problematic: escaped backslashes, shell expansions, and dependency chains turn the flow into arcane scripting rather than readable configuration. Debug cycles lengthen because of opaque error messages, and even small modifications require deep Make expertise.
Tool coupling is evident in the above structure—compilation flags, executable names, and runtime arguments are VCS-specific. Supporting Questa requires duplicating rules with different syntax, creating synchronization challenges.
As a result, maintenance overhead grows quickly. Adding tests requires multiple modifications, parameter changes demand careful shell escaping, and regression management soon outgrows Make’s capabilities, forcing hybrid scripting solutions.
These drawbacks motivate the search for a more human-readable, reusable configuration approach, which is where YAML’s structured, declarative format offers compelling advantages for modern verification flows.
YAML-based flow
YAML (YAML Ain’t Markup Language) provides a human-readable data serialization format that transforms verification flow management through structured configuration files. Unlike Makefile’s imperative commands, YAML uses declarative key-value pairs with intuitive indentation-based hierarchy.
The YAML configuration structure shown below replaces complex Makefile logic:
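The following is a representative sketch; the key names and values are assumptions chosen to mirror the Makefile variables above, not a definitive schema.

```yaml
project: uart_ip_verif
tool: vcs                 # vcs | questa | xcelium

compile:
  filelists: [rtl.f, tb.f]
  defines:   [ASSERTIONS_ON]
  timescale: 1ns/1ps

simulate:
  logdir: logs
  waves:  false

tests:
  - name: sanity_test
    seed: 1
  - name: stress_test
    seed: random
    plusargs: [UVM_VERBOSITY=UVM_HIGH]

regression:
  nightly: [sanity_test, stress_test]
  parallel_jobs: 4
```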


The modular structure becomes immediately apparent through organized directory hierarchies. As shown in Figure 1, a well-structured YAML-based verification environment separates configurations by function and scope, enabling different team members to modify their respective domains without conflicts.

Figure 1 The block diagram highlights the YAML-based verification directory structure. Source: ASICraft Technologies
Block-level engineers manage component-specific test configurations (for example, IP1 and IP2), while integration teams focus on pipeline and regression management. Instead of monolithic Makefiles, teams can organize configurations across focused files: build.yml for compilation settings, sim.yml for simulation parameters, and various test-specific YAML files grouped by functionality.
Advanced YAML features like anchors and aliases eliminate configuration duplication using the DRY (Don’t Repeat Yourself) principle.
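For instance, a shared block of test defaults can be defined once with an anchor (&) and pulled into individual tests with an alias (*) and merge key (<<); the field names below are illustrative assumptions.

```yaml
test_defaults: &test_defaults
  seed: random
  verbosity: UVM_MEDIUM
  timeout_ns: 1000000

smoke_test:
  <<: *test_defaults        # inherit the shared defaults
  description: quick bring-up check

stress_test:
  <<: *test_defaults
  verbosity: UVM_HIGH       # override only what differs
  iterations: 100
```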

Tool independence emerges naturally since YAML contains only configuration data, not tool-specific commands. The same YAML files can drive VCS, Questa, or XSIM simulations through appropriate Python parsing scripts, eliminating the need for multiple Makefiles per tool.
Of course, YAML alone doesn’t execute simulations; it needs a bridge to EDA tools. This is achieved by pairing YAML with lightweight Python scripts that parse configurations and generate appropriate tool commands.
Implementation of YAML-based flow
The transition from YAML configuration to actual EDA tool execution follows a systematic four-stage process, as illustrated in Figure 2. This implementation addresses the traditional verification challenge where engineers spend excessive time writing complex Makefiles and managing tool commands instead of focusing on verification quality.

Figure 2 The four-stage flow that bridges the YAML configuration and EDA tool execution. Source: ASICraft Technologies
YAML files serve as comprehensive configuration containers supporting diverse verification needs.
- Project metadata: Project name, descriptions, and version control
- Tool configuration: EDA tool selection, licenses, and version specifications
- Compilation settings: Source files, include directories, definitions, timescale, and tool-specific flags
- Simulation parameters: Tool flags, snapshot paths, and log directory structures
- Test specifications: Test names, seeds, plusargs, and coverage options
- Regression management: Test lists, reporting formats, and parallel execution settings

Figure 3 The phases of the Python YAML parsing workflow. Source: ASICraft Technologies
The Python implementation demonstrates the complete flow pipeline. Starting with a simple YAML configuration:
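A minimal configuration of this kind might look like the sketch below; the file name sim_cfg.yml and the keys are assumptions for illustration only.

```yaml
# sim_cfg.yml -- minimal illustrative configuration
tool: vcs                  # or xcelium
filelists: [rtl.f, tb.f]
test:
  name: sanity_test
  seed: 1
```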

The Python script shown below loads and processes this configuration:
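The following is a sketch of such a parser, assuming the sim_cfg.yml keys above and the PyYAML package; it illustrates the four phases rather than reproducing any production script.

```python
#!/usr/bin/env python3
"""Illustrative YAML-to-EDA command builder (assumes the sim_cfg.yml sketch above)."""
import subprocess  # used by the optional execute step at the end
import yaml        # PyYAML

# Phase 1: load/parse -- YAML text becomes native Python dicts and lists
with open("sim_cfg.yml") as f:
    cfg = yaml.safe_load(f)

# Phase 2: extract -- pull values out of the parsed structure
tool = cfg["tool"]
files = " ".join(f"-f {fl}" for fl in cfg["filelists"])
test = cfg["test"]["name"]
seed = cfg["test"]["seed"]

# Phase 3: build commands -- tool-specific syntax is confined to this step
if tool == "vcs":
    compile_cmd = f"vcs -sverilog -full64 {files} -o simv"
    run_cmd = f"./simv +UVM_TESTNAME={test} +ntb_random_seed={seed}"
elif tool == "xcelium":
    compile_cmd = f"xrun -elaborate {files}"
    run_cmd = f"xrun -R +UVM_TESTNAME={test} -svseed {seed}"
else:
    raise ValueError(f"Unsupported tool: {tool}")

# Phase 4: display/execute -- show the translation, then optionally run it
print("Compile:", compile_cmd)
print("Run:    ", run_cmd)
# subprocess.run(compile_cmd, shell=True, check=True)
# subprocess.run(run_cmd, shell=True, check=True)
```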

When executed, the Python script produces clear output, showing the command translation, as illustrated below:
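Run against the sketch above, the output would look something like this (illustrative, not a captured tool log):

```
Compile: vcs -sverilog -full64 -f rtl.f -f tb.f -o simv
Run:     ./simv +UVM_TESTNAME=sanity_test +ntb_random_seed=1
```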

The complete processing workflow operates in four systematic phases, as detailed in Figure 3.
- Load/parse: The PyYAML library converts YAML file content into native Python dictionaries and lists, making configuration data accessible through standard Python operations.
- Extract: The script accesses configuration values using dictionary keys, retrieving tool names, file lists, compilation flags, and simulation parameters from the structured data.
- Build commands: The parser intelligently constructs tool-specific shell commands by combining extracted values with appropriate syntax for the target simulator (VCS or Xcelium).
- Display/execute: Generated commands are shown for verification or directly executed through subprocess calls, launching the actual EDA tool operations.
This implementation creates true tool-agnostic operation. The same YAML configuration generates VCS, Questa, or XSIM commands by simply updating the tool specification. The Python translation layer handles all syntax differences, making flows portable across EDA environments without configuration changes.
The complete pipeline—from human-readable YAML to executable simulation commands—demonstrates how modern verification flows can prioritize engineering productivity over infrastructure complexity, enabling teams to focus on test quality rather than tool mechanics.
Comparison: Makefile vs. YAML
Both approaches have clear strengths and weaknesses that teams should evaluate based on their specific needs and constraints. Table 1 provides a systematic comparison across key evaluation criteria.

Table 1 A comparison of Makefile-based and YAML-based verification flows. Source: ASICraft Technologies
Where Makefiles work better
- Simple projects with stable, unchanging requirements
- Small teams already familiar with Make syntax
- Legacy environments where changing infrastructure is risky
- Direct execution needs required for quick debugging without intermediate layers
- Incremental builds where dependency tracking is crucial
Where YAML excels
- Growing complexity with multiple test configurations
- Multi-tool environments supporting different simulators
- Team collaboration where readability matters
- Frequent modifications to test parameters and configurations
- Long-term maintenance across multiple projects
The reality is that most teams start with Makefiles for simplicity but eventually hit scalability walls. YAML approaches require more extensive initial setup but pay dividends as projects grow. The decision often comes down to whether you’re optimizing for immediate simplicity or long-term scalability.
For established teams managing complex verification environments, YAML-based flows typically provide better return on investment (ROI). However, teams should consider practical factors like migration effort and existing tool integration before making the transition.
Choosing between Makefile and YAML
The challenges with traditional Makefile flows are clear: cryptic syntax that’s hard to read and modify, tool-specific configurations that don’t port between projects, and maintenance overhead that grows with complexity. As verification environments become more sophisticated, these limitations consume valuable engineering time that should focus on actual test development and coverage goals.
YAML-based flows address these fundamental issues through human-readable configurations, tool-independent designs, and modular structures that scale naturally. Teams can simply describe verification intent—run 100 iterations with coverage, for example—while the flow engine handles all tool complexity automatically. The same approach works from block-level testing to full-chip regression suites.
Key benefits realized with YAML
- Faster onboarding: New team members understand YAML configurations immediately.
- Reduced maintenance: Configuration changes require simple text edits, not scripting.
- Better collaboration: Clear syntax eliminates the “Makefile expert” bottleneck.
- Tool flexibility: Switch between VCS, Questa, or XSIM without rewriting flows.
- Project portability: YAML configurations move cleanly between different projects.
The choice between Makefile and YAML approaches ultimately depends on project complexity and team goals. Simple, stable projects may continue benefiting from Makefile simplicity. However, teams managing growing test suites, multiple tools, or frequent configuration changes will find YAML-based flows providing better long-term returns on their infrastructure investment.
Meet Sangani is an ASIC verification engineer at ASICraft Technologies.
Hitesh Manani is a senior ASIC verification engineer at ASICraft Technologies.
Shailesh Kavar is an ASIC verification technical manager at ASICraft Technologies.
Related Content
- Addressing the Verification Bottleneck
- Making Verification Methodology and Tool Decisions
- Gate level simulations: verification flow and challenges
- Specifications: The hidden bargain for formal verification
- Shift-Left Verification: Why Early Reliability Checks Matter
The post Makefile vs. YAML: Modernizing verification simulation flows appeared first on EDN.
Computer-on-module architectures drive sustainability

Sustainability has moved from corporate marketing to a board‑level mandate. For technology companies, this shift is more than meeting environmental, social, and governance frameworks; it reflects the need to align innovation with environmental and social responsibility among all key stakeholders.
Regulators are tightening reporting requirements while investors respond favorably to sustainable strategies. Customers also want tangible progress toward these goals. The debate is no longer about whether sustainability belongs in technology roadmaps but how it should be implemented.
The hidden burden of embedded and edge systems
Electronic systems power a multitude of devices in our daily lives. From industrial control systems and vital medical technology to household appliances, these systems usually run around the clock for years on end. Consequently, operating them requires a lot of energy.
Usually, electronic systems are part of a larger ecosystem and are difficult to replace in the event of failure. When this happens, complete systems are often discarded, resulting in a surplus of electronic waste.
Rapid advances in technology make this issue more pronounced. Processor architectures, network interfaces, and security protocols become obsolete in shorter cycles than they did just a few years ago. As a result, organizations often retire complete systems after a brief service life, even though the hardware still meets its original requirements. The continual need to update to newer standards drives up costs and can undermine sustainability goals.
Embedded and edge systems are foundational technologies driving critical infrastructure in industrial automation, healthcare, and energy applications. As such, the same issues with short product lifecycles and limited upgradeability put them in the same unfortunate bucket of electronic waste and resource consumption.
Bridging the gap between performance demands and sustainability targets requires rethinking system architectures. This is where off-the-shelf computer-on-module (COM) designs come in, offering a path to extended lifecycles and reduced waste while simultaneously future-proofing technology investments.
How COMs extend product lifecycles
Open embedded computing standards such as COM Express, COM-HPC, and Smart Mobility Architecture (SMARC) separate computing components—including processors, memory, network interfaces, and graphics—from the rest of the system. By separating the parts from the whole, they allow updates by swapping modules instead of requiring a complete system redesign.
This approach reduces electronic waste, conserves resources, and lowers long‑term costs, especially in industries where certifications and mechanical integration make complete redesigns prohibitively expensive. These sustainability benefits go beyond waste reduction: A modular system is easier to maintain, repair, and upgrade, meaning fewer devices end up prematurely as electronic waste.
Open standards that enable longevity
To simplify the development and manufacturing of COMs and to ensure interchangeability across manufacturers, consortia such as the PCI Industrial Computer Manufacturers Group (PICMG) promote and ratify open standards.
One of the most central standards in the embedded sector is COM Express. This standard defines various COM sizes, such as Type 6 or Type 10, to address different application areas; it also offers a seamless transition from legacy interfaces to modern differential interfaces, including DisplayPort, PCI Express, USB 3.0, or SATA. COM Express, therefore, serves a wide range of use cases from low-power handheld medical equipment to server-grade industrial automation infrastructure.
Expanding on these efforts, COM-HPC is the latest PICMG standard. Addressing high-performance embedded edge and server applications, COM-HPC arose from the need to meet increasing performance and bandwidth requirements that previous standards couldn’t achieve. COM-HPC COMs are available with three pinout types and six sizes for simplified application development. Target use cases range from powerful small-form-factor devices to graphics-oriented multi-purpose designs and robust multi-core edge servers.
COM-HPC, including congatec’s credit-card-sized COM-HPC Mini, provides high performance and bandwidth for all AI-powered edge computing and embedded server applications. (Source: congatec)
Alongside COM Express and COM-HPC, the Standardization Group for Embedded Technologies developed the SMARC standard to meet the demands of power-saving, energy-efficient designs requiring a small footprint. Similar in size to a credit card, SMARC modules are ideal for mobile and portable embedded devices, as well as for any industrial application that requires a combination of small footprint, low power consumption, and established multimedia interfaces.
As credit-card-sized COMs, SMARC modules are designed for size-, weight-, power-, and cost-optimized AI applications at the rugged edge. (Source: congatec)
As a company with close involvement in developing COM Express, COM-HPC, and SMARC, congatec is invested in the long-term success of more sustainable architectures. Offering designs for common carrier boards that can be used for different standards and/or modules, congatec’s approach allows product designers to use a single carrier board across many applications, as they simply swap the module when upgrading performance, removing the need for complex redesigns.
Virtualization as a path to greener systems
On top of modular design, extending hardware lifecycles requires intelligent software management. Hypervisors, software tools that create and manage virtual machines, add an important software layer to the sustainability benefits of COM architectures.
Virtualization allows multiple workloads to coexist securely on a single module, meaning that separate boards aren’t required to run essential tasks such as safety, real-time control, and analytics. This consolidation simultaneously lowers energy consumption while decreasing the demand for the raw materials, manufacturing, and logistics associated with more complex hardware.
Hypervisors such as congatec aReady.VT are real-time virtualization software tools that consolidate, on a single hardware platform, functionality that previously required multiple dedicated systems. (Source: congatec)
Enhancing sustainability through COM-based designs
The rapid adoption of technologies such as edge AI, real‑time analytics, and advanced connectivity has inspired industries to strive for scalable platforms that also meet sustainability goals. COM architectures are a great example, demonstrating that high performance and environmental responsibility are compatible. They show technology and business leaders that designing sustainability into product architectures and technology roadmaps, rather than treating it as an afterthought, makes good practical and financial sense.
With COM-based modules already providing a flexible and field-proven foundation, the embedded sector is off to a good start in shrinking environmental impact while preserving long-term innovation capability.
The post Computer-on-module architectures drive sustainability appeared first on EDN.
Solar-powered cars: is it “déjà vu” all over again?

I recently came across a September 18 article by the “future technology” editor at The Wall Street Journal, “Solar-Powered Cars and Trucks Are Almost Here” (sorry, behind paywall, but your local library may have free access). The author was positively gushing about companies such as Aptera Motors (California), which will “soon” be selling all-solar-powered cars. On a full daylight charge, they can do a few tens of miles; then it’s time to park in the Sun for that totally guilt-free “fill up.”
Figure 1 The Aptera solar-powered three-wheel “car” can go between 15 and 40 miles on a full all-solar charge. Source: Aptera Motors
The article focused on the benefits and innovations, such as how Aptera claims to have developed solar panels that withstand road hazards, including rocks kicked up at high speed, and similar advances.
The solar exposure-versus-distance numbers are very modest, to be polite. While people living in a sunny environment could add up to 40 miles (64 km) of range a day in summer months, from panels alone, that drops to around 15 miles (24 km) a day in northern climates in winter. Aptera says its front-wheel-drive version goes from 0 to 60 mph (96 km/hour) in 6 seconds, and has a top speed of 101 mph (163 km/hr).
The article also mentions that Aptera is planning to sell its ruggedized panels to Telo Trucks, a San Carlos, Calif.-based maker of a 500-horsepower mini electric truck estimated to ship next year, which uses solar panels to extend its range by a supplemental 15 to 30 miles per day.
Then I closed my eyes and thought, “Wait, haven’t I heard this story before?” Sure enough, I looked through my notes and saw that I had commented on Aptera’s efforts and those of others back in a 2021 blog, “Are solar-powered cars the ultimate electric vehicles?” Perhaps it’s no surprise, but the timeline then was also “coming soon.”
The laws of physics conspire to make this a very tough undertaking. An ambitious project like this requires advances across multiple disciplines: materials for the vehicle itself, batteries, rugged solar panels, battery-management electronics — it’s a long list. All of these are closely tied to key ratios, beginning with power-to-weight and energy-to-weight.
Did I mention that it’s a three-wheel vehicle (with all the stability issues that brings), seats two people, and is technically classified as a motorcycle despite its fully enclosed cabin? Or that it has to meet vehicle safety mandates and regulations? And won’t drivers likely need power-draining air conditioning (unless they drive open-air), especially as the vehicle, by definition, needs to be parked in the sun?
I don’t intend to disparage the technological work, innovation, and hard work (and money) they have put into the project. Nonetheless, no matter how you look at it, it’s a lot of effort and retail price (estimated to be around $40,000) for a modest 15 to 40 miles of range. That’s a lot of dollar pain for very modest environmental gain, if any.
Is the all-solar vehicle analogous to the flying car?
Given today’s technology and that of the foreseeable future, I think the path of a truly viable all-solar car (at any price) is similar to that other recurrent dream: the flying car. Many social observers say that the hybrid vehicle (different meaning of “hybrid” here, of course) was brought into popular culture in 1962 by the TV show The Jetsons – but there had been articles in magazines such as Popular Science even before that date.

Figure 2 The flying car that is often discussed was likely inspired by the 1962 animated series “The Jetsons.” Source: Thejetsons.fandom.com
Roughly every ten years since then, the dream resurfaces and there’s a wave of articles in the general media about all the new flying cars under development and in road/air testing, and how actual showroom models are “just around the corner.” However, it seems like we are always approaching but never quite making the turn around that corner; Terrafugia’s massive publicity wave, followed by subsequent bankruptcy, is just one example.
The problem for flying cars, however attractive the concept may be, is that the priority needs and constraints for a ground vehicle, such as a car, are not aligned with those of an aircraft; in fact, they often contradict each other.
It’s difficult enough in any vehicle-engineering design to find a suitable balance among tradeoffs and constraints – after all, that’s what engineering is about. For the flying car, however, it is not so much about finding the balance point as it is about reconciling dramatically opposing issues. In addition, both classes of vehicles are subject to many regulatory mandates related to safety, and those add significant complexity.
Sometimes, it’s nearly impossible to “square the circle” and come up with a viable and acceptable solution to opposing requirements. Literally, “to square the circle” refers to the geometry challenge of constructing a square with the same area as a given circle but using only a compass and straightedge, a problem posed by the ancient Greeks and which was proven impossible in 1882. Metaphorically, the phrase means to attempt or solve something that seems impossible, such as combining two fundamentally different or incompatible things.
What’s the future for these all-solar “cars”? Unlike talking heads, pundits, and journalists, I’ll admit that I have no idea. They may never happen, they may become an expensive “toy” for some, or they may capture a small but measurable market share. Once prototypes are out on the street getting some serious road mileage, further innovations and updates may make them more attractive and perhaps less costly—again, I don’t know (nor does anyone).
Given the uncertainties associated with solar-powered and flying cars, why do they get so much attention? That’s an easy question to answer: they are fun and fairly easy to write about, and the coverage gets attention. After all, they are more exciting to present, and more likely to attract readers, than silicon-carbide MOSFETs.
What’s your sense of the reality of solar-powered cars? Are they a dream with too many real-world limitations? Will they be a meaningful contribution to environmental issues, or an expensive virtue-signaling project—assuming they make it out of the garage and become highway-rated, street-legal vehicles?
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related Content
- Are solar-powered cars the ultimate electric vehicles?
- Keep solar panels clean from dust, fungus
- Home solar-supply topologies illustrate tradeoff realities
- Solar-Driven TEG Advances via Fabrication, Not Materials
References
- Smithsonian Magazine, “Recapping ‘The Jetsons’: Episode 03 – The Space Car”
- Popular Science, “The Flying Car Gets Real” (2008)
- Aircraft Owners and Pilots Association, “AOPA Terrafugia pulls US plug on Transition flying car” (2021)
The post Solar-powered cars: is it “déjà vu” all over again? appeared first on EDN.



