Feed aggregator
A tutorial on instrumentation amplifier boundary plots—Part 1

In today’s information-driven society, there’s an ever-increasing need to measure phenomena such as temperature, pressure, light, force, voltage, and current. These measurements are used in a plethora of products and systems, including medical diagnostic equipment; home heating, ventilation and air-conditioning systems; vehicle safety and charging systems; industrial automation; and test and measurement systems.
Many of these measurements require highly accurate signal-conditioning circuitry, which often includes an instrumentation amplifier (IA), whose purpose is to amplify differential signals while rejecting signals common to the inputs.
The most common issue when designing a circuit containing an IA is the misinterpretation of the boundary plot, also known as the common mode vs. output voltage, or VCM vs. VOUT plot. Misinterpreting the boundary plot can cause issues, including (but not limited to) signal distortion, clipping, and non-linearity.
Figure 1 depicts an example where the output of an IA such as the INA333 from Texas Instruments has distortion because the input signal violates the boundary plot (Figure 2).

Figure 1 Instrumentation amplifier output distortion is caused by VCM vs. VOUT violation. Source: Texas Instruments

Figure 2 This is how VOUT is limited by VCM. Source: Texas Instruments
This series about IAs will explain common- versus differential-mode signaling, basic operation of the traditional three-operational-amplifier (op amp) topology, and how to interpret and calculate the boundary plot.
This first installment will cover the common- versus differential-mode voltage and IA topologies, and show you how to derive the internal node equations and transfer function of a three-op-amp IA.
The IA topologies
While there are a variety of IA topologies, the traditional three-op-amp topology shown in Figure 3 is the most common and therefore will be the focus of this series. This topology has two stages: input and output. The input stage is made of two non-inverting amplifiers. The non-inverting amplifiers have high input impedance, which minimizes loading of the signal source.

Figure 3 This is what a traditional three-op-amp IA looks like. Source: Texas Instruments
The gain-setting resistor, RG, allows you to select any gain within the operating region of the device (typically 1 V/V to 1,000 V/V). The output stage is a traditional difference amplifier. The ratio of R2 to R1 sets the gain of the difference amplifier. The balanced signal paths from the inputs to the output yield an excellent common-mode rejection ratio (CMRR). Finally, the output voltage, VOUT, is referenced to the voltage applied to the reference pin, VREF.
Even though three-op-amp IAs are the most popular topology, other topologies such as the two-op-amp IA offer unique benefits (Figure 4). This topology has high input impedance and single-resistor-programmable gain. But since the signal path to the output for each input (V+IN and V-IN) is slightly different, this topology has degraded CMRR performance, especially over frequency. As a result, this type of IA is typically less expensive than the traditional three-op-amp topology.

Figure 4 The schematic shows a two-op-amp IA. Source: Texas Instruments
The IA shown in Figure 5 has a two-op-amp IA input stage. The third op amp, A3, is the output stage, which applies gain to the signal. Two external resistors set the gain. Because of the imbalanced signal paths, this topology also has degraded CMRR performance (<90dB). Therefore, devices with this topology are typically less expensive than traditional three-op-amp IAs.

Figure 5 A two-op-amp IA is shown with an output gain stage. Source: Texas Instruments
While the aforementioned topologies are the most prevalent, there are several unique IAs, including current mirror, current feedback, and indirect current feedback.
Figure 6 depicts the current mirror topology. This type of IA is preferable because it enables an input common-mode range that extends to both supply voltage rails, also known as the rail-to-rail input. However, this benefit comes at the expense of bandwidth. Compared to two-op-amp IAs, this topology yields better CMRR performance (100dB or greater). Finally, this topology requires two external resistors to set the gain.

Figure 6 This is what the current mirror topology looks like. Source: Texas Instruments
Figure 7 shows a simplified schematic of the current feedback topology. This topology leverages super-beta transistors (Q1 and Q2) to buffer the input signal and force it across the gain-setting resistor, RG. The resulting current flows through R1 and R2, creating voltages at the outputs of A1 and A2. The difference amplifier, A3, then rejects the common-mode signal.

Figure 7 Simplified schematic displays the current feedback topology. Source: Texas Instruments
This topology is advantageous because super-beta transistors yield a low input offset voltage, offset voltage drift, input bias current, and input noise (current and voltage).
Figure 8 depicts the simplified schematic of an indirect current feedback IA. This topology has two transconductance amplifiers (gm1 and gm2) and an integrator amplifier (gm3). The differential input voltage is converted to a current (IIN) by gm1. The gm2 stage converts the feedback voltage (VFB-VREF) into a current (IFB). The integrator amplifier matches IIN and IFB by changing VOUT, thereby adjusting VFB.

Figure 8 This schematic highlights the indirect current feedback topology. Source: Texas Instruments
One significant difference when compared to the previous topology is the rejection of the common-mode signal. In current feedback IAs (and similar architectures), the common-mode signal is rejected by the output stage difference amplifier, A3. Indirect current feedback IAs, however, reject the common-mode signal immediately at the input (gm1). This provides excellent CMRR performance at DC, over frequency, and independent of gain.
CMRR performance does not degrade if there is impedance on the reference pin (unlike other traditional IAs). Finally, this topology requires two resistors to set the gain, which may deliver excellent performance across temperature if the resistors have well-matched drift behavior.
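As a rough behavioral sketch of that current-matching action, the snippet below solves the equilibrium condition IIN = IFB for VOUT. The gm values and the resistive feedback divider are illustrative assumptions, not taken from any particular device:

```python
# Behavioral model of an indirect current feedback IA at equilibrium.
# gm1 converts the differential input to IIN; gm2 converts (VFB - VREF)
# to IFB; the integrator adjusts VOUT until the two currents match.
# Assumed (illustrative): VFB is tapped from VOUT by an r1/r2 divider
# referred to VREF, so VFB - VREF = (VOUT - VREF) * r1 / (r1 + r2).

def icf_vout(v_d, v_ref, gm1, gm2, r1, r2):
    # Equilibrium: gm1 * VD = gm2 * (VFB - VREF)
    v_fb = v_ref + (gm1 / gm2) * v_d
    # Undo the feedback divider to find the output voltage
    return v_ref + (v_fb - v_ref) * (r1 + r2) / r1

# With matched transconductances and r1 = r2, the gain is 2 V/V:
# icf_vout(0.010, 2.5, 1e-3, 1e-3, 10e3, 10e3) is approximately 2.52 V
```

Note that the closed-loop gain depends only on the divider ratio, consistent with the two gain-setting resistors mentioned above.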
Common- and differential-mode voltage
The common-mode voltage is the average voltage at the inputs of a differential amplifier. A differential amplifier is any amplifier (including op amps, difference amplifiers and IAs) that amplifies a differential signal while rejecting the common-mode voltage.
In the classic definition of the input signal, the differential voltage source, VD, connects to the non-inverting terminal, while the inverting terminal connects to a constant voltage, VCM. Figure 9 depicts a more realistic definition of the input signal, where two voltage sources represent VD. Each source has half the magnitude of VD. Applying Kirchhoff’s voltage law around the input loop proves that the two representations are equivalent.

Figure 9 The above schematic shows an alternate definition of common- and differential-mode voltages. Source: Texas Instruments
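The equivalence of the two representations is easy to verify numerically. This short Python sketch (the voltages are illustrative) converts a pair of input voltages into their common- and differential-mode components and back:

```python
# Decompose two amplifier input voltages into common-mode (average) and
# differential-mode (difference) components, then rebuild the originals.

def to_cm_dm(v_pos, v_neg):
    """Return (VCM, VD) for input voltages V+IN and V-IN."""
    return (v_pos + v_neg) / 2.0, v_pos - v_neg

def from_cm_dm(v_cm, v_d):
    """Rebuild (V+IN, V-IN) from VCM plus two half-magnitude VD sources."""
    return v_cm + v_d / 2.0, v_cm - v_d / 2.0

# Example: V+IN = 2.505 V and V-IN = 2.495 V give VCM = 2.5 V and
# VD = 10 mV; recombining recovers the original pair.
v_cm, v_d = to_cm_dm(2.505, 2.495)
```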
Three-op-amp IA analysis
Understanding the boundary plot requires an understanding of three-op-amp IA fundamentals. Figure 10 depicts a traditional three-op-amp IA with an input signal—with the input and output nodes of A1, A2 and A3 labeled.

Figure 10 A three-op-amp IA is shown with input signal and node labels. Source: Texas Instruments
Equation 1 depicts the overall transfer function of the circuit in Figure 10 and defines the gain of the input stage, GIS, and the gain of the output stage, GOS. Notice that the common-mode voltage, VCM, does not appear in the output-voltage equation, because an ideal IA completely rejects common-mode input signals.

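For the ideal three-op-amp topology, Equation 1 takes the familiar textbook form VOUT = GIS × GOS × VD + VREF, with GIS = 1 + 2RF/RG and GOS = R2/R1. A short numeric sketch (component values are illustrative) shows the common-mode term dropping out:

```python
# Ideal three-op-amp IA transfer function. VCM never appears in it:
# an ideal IA rejects the common-mode input entirely.

def ia_vout(v_d, v_ref, r_f, r_g, r1, r2):
    g_is = 1.0 + 2.0 * r_f / r_g   # input-stage gain
    g_os = r2 / r1                 # output-stage gain
    return g_is * g_os * v_d + v_ref

# RF = 50 kOhm, RG = 1 kOhm -> GIS = 101 V/V; R1 = R2 -> GOS = 1 V/V.
# A 10-mV differential input riding on any VCM yields the same output:
# ia_vout(0.010, 2.5, 50e3, 1e3, 10e3, 10e3) is approximately 3.51 V
```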
Noninverting amplifier input stage
Figure 11 depicts a simplified circuit that enables the derivation of node voltages VIA1 and VOA1.

Figure 11 The schematic shows a simplified circuit for VIA1 and VOA1. Source: Texas Instruments
Equation 2 calculates VIA1:

The analysis for VOA1 simplifies by applying the input-virtual-short property of ideal op amps. The voltage that appears at the RG pin connected to the inverting terminal of A2 is the same as the voltage at V+IN. Superposition results are shown in Equation 3, which simplifies to Equation 4.


Applying a similar analysis to A2 (Figure 12) yields Equation 5, Equation 6 and Equation 7.

Figure 12 This is a simplified circuit for VIA2 and VOA2. Source: Texas Instruments



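Taken together, Equations 2 through 7 give the input-stage node voltages. Following the superposition argument above (A1 buffers V-IN, A2 buffers V+IN, and the virtual shorts hold both ends of RG at the input voltages), they can be checked numerically with illustrative values:

```python
# Input-stage outputs of the ideal three-op-amp IA:
#   VOA1 = VCM - (VD/2) * (1 + 2*RF/RG)
#   VOA2 = VCM + (VD/2) * (1 + 2*RF/RG)
# The differential signal is amplified by GIS, while VCM rides through
# the input stage at unity gain.

def input_stage_nodes(v_cm, v_d, r_f, r_g):
    g_is = 1.0 + 2.0 * r_f / r_g
    return v_cm - (v_d / 2.0) * g_is, v_cm + (v_d / 2.0) * g_is

# Example: VCM = 2.5 V, VD = 10 mV, GIS = 101 V/V gives
# VOA1 of about 1.995 V and VOA2 of about 3.005 V (VOA2 - VOA1 = GIS * VD)
```

The unity-gain ride-through of VCM is why the internal nodes, not just the output, can run into their swing limits.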
Difference amplifier output stage
Figure 13 shows that A3, R1 and R2 make up the difference amplifier output stage, whose transfer function is defined in Equation 8.

Figure 13 The above schematic displays difference amplifier input (VDIFF). Source: Texas Instruments

Equation 9, Equation 10 and Equation 11 use the equations for VOA1 and VOA2 to derive VDIFF in terms of the differential input signal, VD, as well as RF and the gain-setting resistor, RG.



Substituting Equation 11 for VDIFF in Equation 8 yields Equation 12, which is the same as Equation 1.

In most IAs, the gain of the output stage is 1 V/V, in which case Equation 12 simplifies to Equation 13:

Figure 14 is used to derive the equations for nodes VOA3 and VIA3.

Figure 14 This diagram highlights difference amplifier internal nodes. Source: Texas Instruments
The equation for VOA3 is the same as VOUT, as shown in Equation 14:

Applying superposition, as shown in Equation 15, yields the equation for VIA3. The voltage at the non-inverting node of A3 sets the amplifier’s common-mode voltage. Therefore, only VOA2 and VREF affect VIA3.

Since GOS=R2/R1, Equation 15 can be rewritten as Equation 16:

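The superposition result can be checked numerically. This sketch (illustrative values) computes VIA3 both directly from the R1/R2 divider and from the equivalent GOS form, and the two agree:

```python
# A3's non-inverting input sits on the R1/R2 divider between VOA2 and
# VREF, which sets the difference amplifier's common-mode operating point.

def v_ia3_divider(v_oa2, v_ref, r1, r2):
    # Superposition across the divider
    return v_oa2 * r2 / (r1 + r2) + v_ref * r1 / (r1 + r2)

def v_ia3_gain(v_oa2, v_ref, g_os):
    # The same node voltage written in terms of GOS = R2/R1
    return (g_os * v_oa2 + v_ref) / (1.0 + g_os)

# With R1 = R2 (GOS = 1 V/V), VOA2 = 3.0 V and VREF = 2.5 V,
# both forms give VIA3 = 2.75 V.
```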
Part 2 highlights
The second part of this series will use the equations from the first part to plot each internal amplifier’s input common-mode and output-swing limitation as a function of the IA’s common-mode voltage.
Peter Semig is an applications manager in the Precision Signal Conditioning group at Texas Instruments (TI). He received his bachelor’s and master’s degrees in electrical engineering from Michigan State University in East Lansing, Michigan.
Related Content
- Instrumentation amplifier input-circuit strategies
- Discrete vs. integrated instrumentation amplifiers
- New Instrumentation Amplifier Makes Sensing Easy
- Instrumentation amplifier VCM vs VOUT plots: part 1
- Instrumentation amplifier VCM vs. VOUT plots: part 2
The post A tutorial on instrumentation amplifier boundary plots—Part 1 appeared first on EDN.
Please, don't hurt me!
Tonight I've sawn a TO-220 insulated MOSFET so it can fit where I want. This is a stereo audio amplifier for my car, and that MOSFET will switch the whole module on with the electric antenna signal.
MACOM agrees exclusive license to manufacture products based on HRL’s 40nm T3L GaN-on-SiC process
ADI upgrades its embedded development platform for AI

Analog Devices, Inc. simplifies embedded AI development with its latest CodeFusion Studio release, offering a new bring-your-own-model capability, unified configuration tools, and a Zephyr-based modular framework for runtime profiling. The upgraded open-source embedded development platform delivers advanced abstraction, AI integration, and automation tools to streamline development and deployment on ADI’s processors and microcontrollers (MCUs).
CodeFusion Studio 2.0 is now the single entry point for development across all ADI hardware, supporting 27 products today, up from five when the platform was first introduced in 2024.
Jason Griffin, ADI’s managing director, software and AI strategy, said the release of CodeFusion Studio 2.0 is a major leap forward in ADI’s developer-first journey, bringing an open extensible architecture across the company’s embedded ecosystem with innovation focused on simplicity, performance, and speed.
CodeFusion Studio 2.0 streamlines embedded AI development. (Source: Analog Devices Inc.)
A major goal of CodeFusion Studio 2.0 is to help teams move faster from evaluation to deployment, Griffin said. “Everything from SDK [software development kit] setup and board configuration to example code deployment is automated or simplified.”
Griffin calls it a “complete evolution of how developers build on ADI technology,” by unifying embedded development, simplifying AI deployment, and providing performance visibility in one cohesive environment. “For developers and customers, this means faster design cycles, fewer barriers, and a shorter path from idea to production.”
A unified platform and streamlined workflow
CodeFusion Studio 2.0, based on Microsoft’s Visual Studio Code, features a built-in model compatibility checker, performance profiling tools, and optimization capabilities. The unified configuration tools reduce complexity across ADI’s hardware ecosystem.
The new Zephyr-based modular framework enables runtime AI/ML workload profiling, offering layer-by-layer analysis and integration with ADI’s heterogeneous platforms. This eliminates toolchain fragmentation, which simplifies ML deployment and reduces complexity, Griffin noted.
“One of the biggest challenges that developers face with multicore SoCs [system on chips] is juggling multiple IDEs [integrated development environments], toolchains, and debuggers,” Griffin explained. “Each core, whether Arm, DSP [digital signal processor], or MPU [microprocessor], comes with its own setup, and that fragmentation slows teams down.
“In CodeFusion Studio 2.0, that changes completely,” he added. “Everything now lives in a single unified workspace. You can configure, build, and debug every core from one environment, with shared memory maps, peripheral management, and consistent build dependencies. The result is a streamlined workflow that minimizes context switching and maximizes focus, so developers spend less time on setup and more time on system design and optimization.”
CodeFusion Studio System Planner also is updated to support multicore applications and expanded device compatibility. It now includes interactive memory allocation, improved peripherals setup, and streamlined pin assignment.
CodeFusion Studio 2.0 adds interactive memory allocation (Source: Analog Devices Inc.)
The growing complexity in managing cores, memory, and peripherals in embedded systems is becoming overwhelming, Griffin said. The system planner gives “developers a clear graphical view of the entire SoC, letting them visualize cores, assign peripherals, and define inter-core communication all in one workspace.”
In addition, with cross-core awareness, the environment validates shared resources automatically.
Another challenge is system optimization, which is addressed with multicore profiling tools, including the Zephyr AI profiler, system event viewer, and ELF file explorer.
“Understanding how the system behaves in real time, and finding where your performance can improve is where the Zephyr AI profiler comes in,” Griffin said. “It measures and optimizes AI workflows across ADI hardware from ultra-low-power edge devices to high-performance multicore systems. It supports frameworks like TensorFlow Lite Micro and TVM, profiling latency, memory and throughput in a consistent and streamlined way.”
Griffin said the system event viewer acts like a built-in logic analyzer, letting developers monitor events, set triggers, and stream data to see exactly how the system behaves. It’s invaluable for analyzing synchronization and timing across cores, he said.
The ELF file explorer provides a graphical map of memory and flash usage, helping teams make smarter optimization decisions.
CodeFusion Studio 2.0 also gives developers the ability to download SDKs, toolchains, and plugins on demand, with optional telemetry for diagnostic and multicore support.
Doubling down on AI
CodeFusion Studio 2.0 simplifies the development of AI-enabled embedded systems with support for complete end-to-end AI workflows. This enables developers to bring their own models and deploy them across ADI’s range of processors, from low-power edge devices to high-performance DSPs.
“We’ve made the workflow dramatically easier,” Griffin said. “Developers can now import, convert, and deploy AI models directly to ADI hardware. No more stitching together separate tools. With the AI deployment tools, you can assign models to specific cores, verify compatibility, and profile performance before runtime, ensuring every model runs efficiently on the silicon right from the start.”
Manage AI models with CodeFusion Studio 2.0 from import to deployment (Source: Analog Devices Inc.)
Easier debugging
CodeFusion Studio 2.0 also adds new integrated debugging features that bring real-time visibility across multicore and heterogeneous systems, enabling faster issue resolution, shorter debug cycles, and more intuitive troubleshooting in a unified debug experience.
One of the toughest parts of embedded development is debugging multicore systems, Griffin noted. “Each core runs its own firmware on its own schedule, often with its own toolchain, making full visibility a challenge.”
CodeFusion Studio 2.0 solves this problem, he said. “Our new unified debug experience gives developers real-time visibility across all cores—CPUs, DSPs, and MPUs—in one environment. You can trace interactions, inspect shared resources, and resolve issues faster without switching between tools.”
Developers spend more than 60% of their time debugging, Griffin said, and ADI wanted to address this challenge and reduce that time sink.
CodeFusion Studio 2.0 now includes core dump analysis and advanced GDB integration, which includes custom JSON and Python scripts for both Windows and Linux with multicore support.
A big advance is debugging with multicore GDB core dump analysis and RTOS awareness working together in one intelligent, uniform experience, Griffin said.
“We’ve added core dump analysis, built around Zephyr RTOS, to automatically extract and visualize crash data; it helps pinpoint root causes quickly and confidently,” he continued. “And the new GDB toolbox provides advanced scripting performance, tracing and automation, making it the most capable debugging suite ADI has ever offered.”
The ultimate goal is to accelerate development and reduce risk for customers, which is what the unified workflows and automation provide, he added.
Future releases are expected to focus on deeper hardware-software integration, expanded runtime environments, and new capabilities, targeting growing developer requirements in physical AI.
CodeFusion Studio 2.0 is now available for download. Other resources include documentation and community support.
The post ADI upgrades its embedded development platform for AI appeared first on EDN.
32-bit MCUs deliver industrial-grade performance

GigaDevice Semiconductor Inc. launches a new family of high-performance GD32 32-bit general-purpose microcontrollers (MCUs) for a range of industrial applications. The GD32F503/505 32-bit MCUs expand the company’s portfolio based on the Arm Cortex-M33 core. Applications include digital power supplies, industrial automation, motor control, robotic vacuum cleaners, battery management systems, and humanoid robots.
(Source: GigaDevice Semiconductor Inc.)
Built on the Arm v8-M architecture, the GD32F503/505 series offers flexible memory configurations, high integration, and built-in security functions, and features an advanced digital signal processor, hardware accelerator and a single-precision floating-point unit. The GD32F505 operates at a frequency of 280 MHz, while the GD32F503 runs at 252 MHz. Both devices achieve up to 4.10 CoreMark/MHz and 1.51 DMIPS/MHz.
The series offers up to 1024 KB of flash and 192 KB of SRAM. Users can allocate code-flash, data-flash, and SRAM locations through scatter loading based on their specific application, allowing them to tailor memory resources to their requirements, GigaDevice said.
The GD32F503/505 series also integrates a set of peripheral resources, including three analog-to-digital converters with a sampling rate of up to 3 MSPS (supporting up to 25 channels), one fast comparator, and one digital-to-analog converter. For connectivity, it supports up to three SPIs, two I2Ss, two I2Cs, three USARTs, two UARTs, two CAN-FDs, and one USBFS interface.
The timing system features one 32-bit general-purpose timer, five 16-bit general-purpose timers, two 16-bit basic timers, and two 16-bit PWM advanced timers. This translates into precise and flexible waveform control and robust protection mechanisms for applications such as digital power supplies and motor control.
The operating voltage range of the GD32F503/505 series is 2.6 V to 3.6 V, and it operates over the industrial-grade temperature range of -40°C to 105°C. It also offers three power-saving modes for maximizing power efficiency.
These MCUs also provide high-level ESD protection with contact discharge up to 8 kV and air discharge up to 15 kV. Their HBM/CDM immunity is stable at 4,000 V/1,000 V even after three Zap tests, demonstrating reliability margins that exceed conventional standards for industries such as industrial and home appliances, GigaDevice said.
In addition, the MCUs provide multi-level protection of code and data, supporting firmware upgrades, integrity and authenticity verification, and anti-rollback checks. Device security includes a secure boot and secure firmware update platform, along with hardware security features such as user secure storage areas. Other features include a built-in hardware security engine integrating SHA-256 hash algorithms, AES-128/256 encryption algorithms, and a true random number generator. Each device has a unique independent UID for device authentication and lifecycle management.
A multi-layered hardware security mechanism is centered around multi-channel watchdogs, power and clock monitoring, and hardware CRC. In addition, the GD32F5xx series’ software test library is certified to the German IEC 61508 SC3 (SIL 2/SIL 3) for functional safety. The series provides a complete safety package, including key documents such as a safety manual, FMEDA report, and safety self-test library.
The GD32 MCUs feature a full-chain development ecosystem. This includes the free GD32 Embedded Builder IDE, GD-LINK debugging, and the GD32 all-in-one programmer. Tool providers such as Arm, KEIL, IAR, and SEGGER also support this series, including compilation development and trace debugging.
The GD32F503/505 series is available in several package types, including LQFP100/64/48, QFN64/48/32, and BGA64. Samples are available, along with datasheets, software libraries, ecosystem guides, and supporting tools. Development boards are available on request. Mass production is scheduled to start in December. The series will be available through authorized distributors.
The post 32-bit MCUs deliver industrial-grade performance appeared first on EDN.
I accidentally made a teardown museum
Found out that the FCC basically lets you peek inside almost any device that emits RF energy. Looked into a few cool products, then spent a bit too much time combing through filings that ended up becoming a huge photo set. Here are a few examples!
Board-to-board connectors reduce EMI

Molex LLC develops a quad-row shield connector, claiming the industry’s first space-saving, four-row signal pin layout with a metal electromagnetic interference (EMI) shield. The quad-row board-to-board connector achieves up to a 25-dB reduction in EMI compared to a non-shielded quad-row solution. These connectors are suited for space-constrained applications such as smart watches and other wearables, mobile devices, AR/VR applications, laptops, and gaming devices.
Shielding protects connectors from external electromagnetic noise such as nearby components and far-field devices that can cause signal degradation, data errors, and system faults. The quad-row connectors eliminate the need for external shielding parts and grounding by incorporating the EMI shields, which saves space and simplifies assembly. It also improves reliability and signal integrity.
(Source: Molex LLC)
The quad-row board-to-board connectors, first released in 2020, offer a 30% size reduction over dual-row connector designs, Molex said. The space-saving staggered-circuit layout positions pins across four rows at a signal contact pitch of 0.175 mm.
Targeting EMI challenges at 2.4-6 GHz and higher, the quad-row layout with the addition of an EMI shield mitigates both electromagnetic and radio frequency (RF) interference, as well as signal integrity issues that create noise.
The quad-row shield meets stringent EMI/EMC standards to lower regulatory testing burdens and speed product approvals, Molex said.
The new design also addresses the most significant requirements related to signal interference and incremental power. These include how best to achieve 80 times the signal connections and four times the power delivery compared to a single-pin connector, Molex said.
The quad-row connectors offer a 3-A current rating to meet customer requirements for high power in a compact design. Other specifications include a voltage rating of 50 V, a dielectric withstanding voltage of 250 V, and a rated insulation resistance of 100 megohms.
Samples of the quad-row shield connectors are available now to support custom inquiries. Commercial availability will follow in the second quarter of 2026.
The post Board-to-board connectors reduce EMI appeared first on EDN.
I just got the connector book! Wow!
Holy cow, it's huge, it covers everything! These are just a few random pages. I already learned a lot.
5-V ovens (some assembly required)—part 2

In the first part of this Design Idea (DI), we looked at simple ways of keeping critical components at a constant temperature using a linear approach. In this second part, we’ll investigate something PWM-based, which should be more controllable and hence give better results.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Adding PWM to the oven
As before, this starts with a module based on a TO-220 package, the tab of which makes a decent hotplate on which our target component(s) can be mounted. Figure 1 shows this new circuit, which compares the voltage from a thermistor/resistor pair with a tri-wave and uses the result to vary the duty cycle of the heating current. Varying the amplitude and level of that tri-wave lets us tune the circuit’s performance.
This looks too simple and perhaps obvious to be completely original, but a quick search found nothing very similar. At least this was designed from scratch.
Figure 1 A tri-wave oscillator, a thermistor, and a comparator work together to pulse-width modulate the current through R7, the main heating element. Q1 switches that current and also helps with the heating.
U1a forms a conventional oscillator running at around 1 kHz. Neither the frequency nor the exact wave-shape on C1 is critical. R1 and R2+R3 determine the tri-wave’s offset, and R4 its amplitude. U1b compares the voltage across the thermistor with the tri-wave, as shown in Figure 2. When the temperature is low so that voltage is higher than any part of the tri-wave, U1b’s output will be solidly low, turning on Q1 to heat up R7 as fast as possible.
As the temperature rises, the voltages start to overlap and proportional control kicks in, progressively reducing the on-time so that the heat input is proportional to the difference between the actual and target temperatures. By the time the set-point has been reached, the on-time is down to ~18%. This scheme minimizes or even eliminates overshoot. (Thermal time-constants—ignored for the moment—can upset this a little.)

Figure 2 Oscilloscope captures showing the operation of Figure 1’s circuit.
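The slicing action can be modeled in a few lines of Python (idealized triangle wave; the thresholds and control voltage below are illustrative): the heater's on-time is simply the fraction of each period the tri-wave spends below the thermistor voltage.

```python
# Duty cycle produced by comparing the thermistor voltage against an
# ideal triangle wave spanning tri_lo..tri_hi. For a triangle, the time
# spent below a given level is linear in that level, so slicing yields a
# duty cycle proportional to the temperature error within the window.

def duty_cycle(v_ctrl, tri_lo, tri_hi):
    if v_ctrl >= tri_hi:
        return 1.0   # cold: control voltage above the whole tri-wave, full heat
    if v_ctrl <= tri_lo:
        return 0.0   # hot: heater fully off
    return (v_ctrl - tri_lo) / (tri_hi - tri_lo)

# duty_cycle(2.5, 2.0, 3.0) -> 0.5 (control voltage mid-window)
```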
Once the circuit is stable, Th1 will have the same resistance as R6, or 3.36 kΩ at our nominal target of 50°C (or 50.03007…°C, assuming perfect components), so Figure 1’s point B will be at half-rail. To keep that balance, the tri-wave must be offset upwards so that slicing gives our 18% figure at the set-point. Setting R3 to 1k0 achieved that. The performance after starting can be seen in Figure 3. (The first 40 seconds or so is omitted because it’s boring.)

Figure 3 From cold, Figure 1’s circuit stabilizes in two to three minutes. The upper trace is U1b’s output, heavily filtered. Also shown are Th1’s temperature (magenta) and that of the hotplate as measured by an external thermistor probe (cyan).
The use of Q1 as an over-driven emitter follower needs some explanation. First thoughts were to use an NPN Darlington or an n-MOSFET as a switch (with U1b’s inputs swapped), but that meant that the collector or drain—which we want to use as a hotplate—would be flapping up and down at the switching frequency.
While the edges are slowish, they could still couple capacitively to a target device: potentially bad news. With a PNP Darlington, the collector can be at ground, give or take a handful of millivolts. (The fine copper wire used to connect the module to the outside world has a resistance of about 1 Ω per meter.) Q1 drops ~1.3 V and so provides about a third of the heating, rather like the corresponding device in Part 1. This is a good reason to stay with the idea of using a TO-220’s tab as that hotplate—at least for the moment. Q1 could be a p-MOSFET, but R7 would then need to be adjusted to suit its (highly variable) VGS(on): fiddly and unrealistic.
LED1 starts to turn on once the set-point is near and becomes brighter as the duty cycle falls. This worked as well in practice as the long-tailed pair approach used in Part 1’s Figure 4.
The duty cycle is given as 18%, but where does that figure come from? It’s the proportion of the input heat that leaks out once the circuit has stabilized, and that depends on how well the module is thermally insulated and how thin the lead-out wires are. With a maximum heating current of 120 mA (600 mW in), practical tests gave that 18% figure, implying that ~108 mW is being lost. With a temperature differential of ~30°C, that corresponds to an overall thermal resistance of ~280°C/W. (Many DIL ICs are quoted as around 100°C/W.)
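That thermal-resistance estimate is easy to reproduce from the figures quoted above:

```python
# Back-of-envelope check of the oven's losses and thermal resistance,
# using the figures quoted in the text (5-V supply, 120-mA heater).
p_max = 5.0 * 0.120        # 0.6 W with the heater fully on
duty = 0.18                # steady-state duty cycle
p_loss = p_max * duty      # heat leaking out at equilibrium (~108 mW)
delta_t = 30.0             # degrees C above ambient
r_theta = delta_t / p_loss # overall thermal resistance (~278 C/W)
```

The result of roughly 278°C/W matches the ~280°C/W quoted in the text.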
Some more assembly required
The final build is mechanically quite different and uses a custom-built hotplate instead of a TO-220’s tab. It’s shown in Figure 4.

Figure 4 Our new hotplate is a scrap of copper sheet with the heater resistors glued to it symmetrically, with Th1 on one side and room for the target component(s) on the other. The third picture shows it fixed to the lower block of insulating foam, with fine wires meandered and ready for terminating. Not shown: an extra wire to ground the copper. Please excuse the blobby epoxy. I’d never get a job on a production line.
R7 now comprises four 33-Ω resistors in series/parallel, which are epoxied towards the ends of a piece of copper, two on each side, with Th1 centered on one side. The other side becomes our hotplate area, with a sweet spot directly above the thermistor. Thermally, it is symmetrical, so that—all other things being equal, which they rarely are—our target component will be heated exactly like Th1.
The drive circuit is a variant on Figure 1, the main difference being Q1, which can now be a small but low-RON n-MOSFET as it’s no longer intended to dissipate any power. R3 and R4 are changed to give a tri-wave amplitude of ~500 mV pk–pk at a frequency of ~500 Hz to optimize the proportional control. Figure 5 and Figure 6 show the schematic and its performance. It now stabilizes within a degree after one minute and perhaps a tenth after two, with decent tracking between the internal (Th1) and hotplate temperatures. The duty cycle is higher, largely owing to the different construction; more (and bulkier) insulation would have reduced it, improving efficiency.

Figure 5 The driving circuit for the new hotplate.

Figure 6 How Figure 5’s circuit performs.
The intro to Part 1 touched on my original oven, which needed to stabilize the operation of a logarithmically tuned oscillator. It used a circuit similar to Part 1’s Figure 5 but had a separate power transistor, whose dissipation was wasted. The logging diode was surrounded by a thermally-insulated cradle of heating resistors and the control thermistor.
It worked well and still does, but these circuits improve on it. Time for a rebuild? If so, I’ll probably go for the simplest, Part 1/Figure 1 approach. For higher-power use, Figure 5 (above) could probably be scaled to use different heating resistors fed from a separate and larger voltage. Time for some more experimental fun, anyway.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- 5-V ovens (some assembly required)—part 1
- Fixing a fundamental flaw of self-sensing transistor thermostats
- Self-heated ∆Vbe transistor thermostat needs no calibration
- Take-back-half thermostat uses ∆Vbe transistor sensor
- Dropping a PRTD into a thermistor slot—impossible?
The post 5-V ovens (some assembly required)—part 2 appeared first on EDN.
SuperLight launches SLP-2000 full-spectrum SWIR supercontinuum laser
Voyant appoints former Valeo exec Clément Nouvel as CEO
Nuvoton Launches Industrial-Grade, High-Security MA35 Microprocessor with Smallest Package and Largest Stacked DRAM Capacity
Nuvoton Technology announced the MA35D16AJ87C, a new member of the MA35 series microprocessors. Featuring an industry-leading 15 x 15 mm BGA312 package with 512 MB of DDR SDRAM stacked inside, the MA35D16AJ87C streamlines PCB design, reduces product footprint, and lowers EMI, making it an excellent fit for space-constrained industrial applications.
Key Highlights:
- Dual 64-bit Arm Cortex-A35 cores plus a Cortex-M4 real-time core
- Integrated, independent TSI (Trusted Secure Island) security hardware
- 512 MB DDR SDRAM stacked inside a 15 x 15 mm BGA312 package
- Supports Linux and RTOS, along with Qt, emWin, and LVGL graphics libraries
- Industrial temperature range: -40°C to +105°C
- Ideal for factory automation, industrial IoT, new energy, smart buildings, and smart cities
The MA35D16AJ87C is built on dual Arm Cortex-A35 cores (Armv8-A architecture, up to 800 MHz) paired with a Cortex-M4 real-time core. It supports 1080p display output with graphics acceleration and integrates a comprehensive set of peripherals, including 17 sets of UARTs, 4 sets of CAN-FD interfaces, 2 sets of Gigabit Ethernet ports, 2 sets of SDIO 3.0 interfaces, and 2 sets of USB 2.0 ports, among others, to meet diverse industrial application needs.
To address escalating IoT security challenges, the MA35D16AJ87C incorporates Nuvoton’s independently designed TSI (Trusted Secure Island) hardware security module. It supports Arm TrustZone technology, Secure Boot, and Tamper Detection, and integrates a complete hardware cryptographic engine suite (AES, SHA, ECC, RSA, SM2/3/4), a true random number generator (TRNG), and a key store. These capabilities help customers meet international cybersecurity requirements such as the Cyber Resilience Act (CRA) and IEC 62443.
The MA35D16AJ87C is supported by Nuvoton’s Linux and RTOS platforms and is compatible with leading graphics libraries including Qt, emWin, and LVGL, helping customers shorten development cycles and reduce overall development costs. The Nuvoton MA35 Series is designed for industrial-grade applications and is backed by a 10-year product supply commitment.
The post Nuvoton Launches Industrial-Grade, High-Security MA35 Microprocessor with Smallest Package and Largest Stacked DRAM Capacity appeared first on ELE Times.
Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver to cut costs, accelerate time to market
Nokia and Rohde & Schwarz have created and successfully tested a 6G radio receiver that uses AI technologies to overcome one of the biggest anticipated challenges of 6G network rollouts: the coverage limitations inherent in 6G’s higher-frequency spectrum.
The machine learning capabilities in the receiver greatly boost uplink distance, enhancing coverage for future 6G networks. This will help operators roll out 6G over their existing 5G footprints, reducing deployment costs and accelerating time to market.
Nokia Bell Labs developed the receiver and validated it using 6G test equipment and methodologies from Rohde & Schwarz. The two companies will unveil a proof-of-concept receiver at the Brooklyn 6G Summit on November 6, 2025.
Peter Vetter, President of Bell Labs Core Research at Nokia, said: “One of the key issues facing future 6G deployments is the coverage limitations inherent in 6G’s higher-frequency spectrum. Typically, we would need to build denser networks with more cell sites to overcome this problem. By boosting the coverage of 6G receivers, however, AI technology will help us build 6G infrastructure over current 5G footprints.”
Nokia Bell Labs and Rohde & Schwarz have tested this new AI receiver under real-world conditions, achieving uplink distance improvements over today’s receiver technologies ranging from 10% to 25%. The testbed comprises an R&S SMW200A vector signal generator, used for uplink signal generation and channel emulation. On the receive side, the newly launched FSWX signal and spectrum analyzer from Rohde & Schwarz performs the AI inference for Nokia’s AI receiver. In addition to enhancing coverage, the AI technology also demonstrates improved throughput and power efficiency, multiplying the benefits it will provide in the 6G era.
Michael Fischlein, VP Spectrum & Network Analyzers, EMC and Antenna Test at Rohde & Schwarz, said: “Rohde & Schwarz is excited to collaborate with Nokia in pioneering AI-driven 6G receiver technology. Leveraging more than 90 years of experience in test and measurement, we’re uniquely positioned to support the development of next-generation wireless, allowing us to evaluate and refine AI algorithms at this crucial pre-standardization stage. This partnership builds on our long history of innovation and demonstrates our commitment to shaping the future of 6G.”
The post Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver to cut costs, accelerate time to market appeared first on ELE Times.
ESA, MediaTek, Eutelsat, Airbus, Sharp, ITRI, and R&S announce world’s first Rel-19 5G-Advanced NR-NTN connection over OneWeb LEO satellites
European Space Agency (ESA), MediaTek Inc., Eutelsat, Airbus Defence and Space, Sharp, the Industrial Technology Research Institute (ITRI), and Rohde & Schwarz (R&S) have conducted the world’s first successful trial of 5G-Advanced Non-Terrestrial Network (NTN) technology over Eutelsat’s OneWeb low Earth orbit (LEO) satellites, compliant with 3GPP Rel-19 NR-NTN configurations. The tests pave the way for deployment of the 5G-Advanced NR-NTN standard, which will lead to future satellite and terrestrial interoperability within a large ecosystem, lowering the cost of access and enabling the use of satellite broadband for NTN devices around the world.
The trial used OneWeb satellites communicating with the MediaTek NR-NTN chipset and ITRI’s NR-NTN gNB, implementing 3GPP Release 19 specifications including Ku-band operation, 50 MHz channel bandwidth, and conditional handover (CHO). The OneWeb satellites, built by Airbus, carry transparent transponders with a Ku-band service link and Ka-band feeder link, and adopt the “Earth-moving beams” concept. During the trial, the NTN user terminal, fitted with a flat-panel antenna developed by Sharp, successfully connected over satellite to the on-ground 5G core using the gateway antenna located at ESA’s European Space Research and Technology Centre (ESTEC) in The Netherlands.
David Phillips, Head of the Systems, Strategic Programme Lines and Technology Department within ESA’s Connectivity and Secure Communications directorate, said: “By partnering with Airbus Defence and Space, Eutelsat and partners, this innovative step in the integration of terrestrial and non-terrestrial networks proves why collaboration is an essential ingredient in boosting competitiveness and growth of Europe’s satellite communications sector.”
Mingxi Fan, Head of Wireless System and ASIC Engineering at MediaTek, said: “As a global leader in terrestrial and non-terrestrial connectivity, we continue in our mission to improve lives by enabling technology that connects the world around us, including areas with little to no cellular coverage. By making real-world connections with Eutelsat LEO satellites in orbit, together with our ecosystem partners, we are now another step closer to bringing the next generation of 3GPP-based NR-NTN satellite wideband connectivity to commercial use.”
Daniele Finocchiaro, Head of Telecom R&D and Projects at Eutelsat, said: “We are proud to be among the leading companies working on NTN specifications, and to be the first satellite operator to test NTN broadband over Ku-band LEO satellites. Collaboration with important partners is a key element when working on a new technology, and we especially appreciate the support of the European Space Agency.”
Elodie Viau, Head of Telecom and Navigation Systems at Airbus, said: “This connectivity demonstration performed with Airbus-built LEO Eutelsat satellites confirms our product adaptability. The successful showcase of Advanced New Radio NTN handover capability marks a major step towards enabling seamless, global broadband connectivity for 5G devices. These results reflect the strong collaboration between all partners involved, whose combined expertise and commitment have been key to achieving this milestone.”
Masahiro Okitsu, President & CEO, Sharp Corporation, said: “We are proud to announce that we have successfully demonstrated Conditional Handover over 5G-Advanced NR-NTN connection using OneWeb constellation and our newly developed user terminals. This achievement marks a significant step toward the practical implementation of non-terrestrial networks. Leveraging the expertise we have cultivated over many years in terrestrial communications, we are honored to bring innovation to the field of satellite communications as well. Moving forward, we will continue to contribute to the evolution of global communication infrastructure and strive to realize a society where everyone is seamlessly connected.”
Dr. Pang-An Ting, Vice President and General Director of Information and Communications Research Laboratories at ITRI, said: “In this trial, ITRI showcased its advanced NR-NTN gNB technology as an integral part of the NR-NTN communication system, enabling conditional handover on the Rel-19 system. We see great potential in 3GPP NTN communication to deliver ubiquitous coverage and seamless connectivity in full integration with terrestrial networks.”
Goce Talaganov, Vice President of Mobile Radio Testers at Rohde & Schwarz, said: “We at Rohde & Schwarz are excited to have contributed to this industry milestone with our test and measurement expertise. For real-time NR-NTN channel characterization, we used our high-end signal generation and analysis instruments R&S SMW200A and FSW. Our CMX500-based NTN test suite replicated the Ku-band conditional handover scenarios in the lab. This rigorous testing, which addresses the challenges of satellite-based communications, paved the way for further performance optimization of MediaTek’s and Sharp’s 5G-Advanced NTN devices.”
The post ESA, MediaTek, Eutelsat, Airbus, Sharp, ITRI, and R&S announce world’s first Rel-19 5G-Advanced NR-NTN connection over OneWeb LEO satellites appeared first on ELE Times.
Decoding the AI Age for Engineers: What all Engineers need to thrive in this?
As AI tools increasingly take on real-world tasks, the roles of professionals, from copywriters to engineers, are undergoing a rapid and profound redefinition. This swift transformation, characteristic of the AI era, is shifting core fundamentals and operational practices. Sometimes AI complements people; other times it replaces them; most often, it fundamentally redefines their role in the workplace.
In this story, we look further into the emerging roles and responsibilities of an engineer as AI tools gain greater traction, while also tracking the industry’s shifting expectations through the eyes of prominent names from the electronics and semiconductor industry. The resounding message? Engineers must anchor themselves in foundational principles and embrace systems-level thinking to thrive.
The Siren Song of AI/ML
There’s no doubt that AI and Machine Learning (ML) are the current darlings of the tech world, attracting a huge talent pool. Raghu Panicker, CEO of Kaynes Semicon, notes this trend: “Engineers today at large are seeing that there are more and more people going after AI, ML, data science.” While this pursuit is beneficial, he issues a crucial caution. He urges engineers to “start to re-look at the hardcore electronics,” pointing out the massive advancements happening across the semiconductor and systems space that are being overlooked.
The engineering landscape is broadening beyond just circuit design. Panicker highlights that a semiconductor package today involves less pure semiconductor work and more physics, chemistry, materials science, and mechanical engineering. This points to a diverse, multi-faceted engineering future.
The Bright Future in Foundations and Manufacturing
The industry’s optimism about the future of electronics, especially in manufacturing, is palpable. With multiple large-scale projects, including silicon and display fabs, being approved, Panicker sees a “very, very bright” future for Electronics and Manufacturing in India.
He stresses that manufacturing is a career path engineers should take “very seriously,” noting that while design attracts the larger paychecks, manufacturing is catching up and has significant, long-term promise. He also brings up the practical aspect of efficiency, stating that minimizing test time is critical for cost-effective customer solutions, requiring a deep understanding of the trade, often gained through specialized programs.
Innovate, Systematize, Tinker: The Engineer’s New Mandate
Building on this theme, Shitendra Bhattacharya, Country Head of Emerson’s Test and Measurement group, emphasizes the need for a community of innovators. He challenges the new generation of engineers to “think innovation, think systems,” which requires them to “get down to dirtying their hands.”
Bhattacharya is vocal about the danger of focusing solely on the “cooler or sexier looking fields like AI and ML.” He asserts that the future growth of the industry, particularly in India, hinges on local innovation and the creation of homegrown products and OEMs. To achieve this, he calls for a shift toward integrated coursework at the university level.
“System design requires you to understand engineering fundamentals. Today, that is missing at many levels… knowing only one domain is not good enough for it. It will not cut it.” – Shitendra Bhattacharya, Emerson
This call for system design thinking —the ability to bring different fields of engineering together—is a key takeaway for thriving in the AI age.
The Return of the ‘Tinkerer’
This focus on fundamental, hands-on knowledge is echoed strongly by Raja Manickam, CEO of iVP Semicon. He reflects on how the education system’s pivot toward coding and computer science led to the loss of skills like tinkering and a foundational understanding of “basics of physics, basics of electricity.”
Manickam argues that AI’s initial impact will be felt most acutely by IT engineers, and the core electronics sector needs engineers who are “more fundamentally strong.” The emphasis is on the joy and necessity of building things from scratch. To future-proof their careers, engineers must actively cultivate this foundational, tangible skill set.
The AI Enabler: Transforming the Value Chain
While the focus must return to engineering basics, it’s vital to recognize that AI is not a threat to be avoided but a tool to be mastered. Amit Agnihotri, Chief Operating Officer at RS Components & Control, provides a clear picture of how AI is already transforming the semiconductor value chain end-to-end.
AI is embedded in:
- Design: Driving simulation and optimization to improve power/performance trade-offs.
- Manufacturing: Assisting testing, yield analytics, and smarter process control.
- Supply Chain: Enhancing forecasting, allocation, and inventory strategies with predictive analytics.
- Customer Engagement: Providing personalized guidance and virtual technical support to accelerate time-to-market.
Agnihotri explains that companies like RS Components leverage AI to improve component discovery, localize inventory, and provide data-backed design-in support, accelerating prototyping and scaling with confidence.
Conclusion: Engineering for Longevity
The AI age presents an exciting paradox for engineers. To successfully leverage the most advanced tools, they must first become profoundly proficient in the most fundamental aspects of their discipline. The future belongs not to those who chase the shiniest new technology in isolation, but to those who view AI as an incredible enabler layered upon an unshakeable foundation of physics, materials science, system-level design, and hands-on tinkering.
Engineers who embrace this philosophy—being both an advanced AI user and a foundational master—will be the true architects of the next wave of innovation in the core electronics and semiconductor industry. The message from the industry is clear: Get back to the basics, think in systems, and start innovating locally. That is the wholesome recipe for a thriving engineering career in the AI era.
The post Decoding the AI Age for Engineers: What all Engineers need to thrive in this? appeared first on ELE Times.
Achieving analog precision via components and design, or just trim and go

I finally managed to get to a book that has been on my “to read” list for quite a while: “Beyond Measure: The Hidden History of Measurement from Cubits to Quantum Constants” (2022) by James Vincent (Figure 1). It traces the evolution of measurement, its uses, and its motivations, and how measurement has shaped our world, from ancient civilizations to the modern day. On a personal note, I found the first two-thirds of the book a great read, but in my opinion, the last third wandered and meandered off topic. Regardless, it was a worthwhile book overall.

Figure 1 This book provides a fascinating journey through the history of measurement and how different advances combined over the centuries to get us to our present levels. Source: W. W. Norton & Company Inc.
Several chapters deal with the way measurement and the nascent science of metrology were used in two leading manufacturing entities of the early 20th century: Rolls-Royce and Ford Motor Company, and the manufacturing differences between them.
Before looking at the differences, you have to “reset” your frame of reference and recognize that even low-to-moderate volume production in those days involved a lot of manual fabrication of component parts, as the “mass production” machinery we now take as a given often didn’t exist or could only do rough work.
Rolls-Royce made (and still makes) fine motor cars, of course. Much of their quality was not just in the finish and accessories; the car was entirely mechanical except for the ignition system. They featured a finely crafted and tuned powertrain. It’s not a myth that you could balance a filled wine glass on the hood (bonnet) while the engine was running and not see any ripples in the liquid’s surface. Furthermore, you could barely hear the engine at all at a time when cars were fairly noisy.
They achieved this level of performance using careful and laborious manual adjustments, trimming, filing, and balancing of the appropriate components to achieve that near-perfect balance and operation. Clearly, this was a time-consuming process requiring skilled and experienced craftspeople. It was “mass production” only in terms of volume, but not in terms of production flow as we understand it today.
In contrast, Henry Ford focused on mass production with interchangeable parts that would work to their design objectives immediately when assembled. Doing so required advances in the measurement of components at Ford’s factory to weed out incoming substandard parts, along with statistical analysis of quality, conformance, and deviations. Ford also sent specialists to suppliers’ factories to improve both their production processes and their metrology.
Those were the days
Of course, those were different times in terms of industrial production. When the Wright brothers needed a gasoline engine for the 1903 Flyer, few “standard” engine choices were available, and none came close to their size, weight, and output-power needs.
So, their in-house mechanic Charlie Taylor machined an aluminum engine block, fabricated most parts, and assembled an engine (1903 Wright Engine) in just six weeks using a drill press and a lathe; it produced 12 horsepower, 50% above the 8 horsepower that their calculations indicated they needed (Figure 2).

Figure 2 Perhaps the ultimate do-it-yourself project: Charlie Taylor, mechanic for the Wright brothers, machined the aluminum engine block, fabricated most of the parts, and then assembled the complete engine in six weeks (reportedly working only from rough sketches). Source: Wright Brothers Aeroplane Company
Which approach is better—fine adjusting and trims, or use of a better design and superior components? There’s little doubt that the “quality by components” approach is the better tactic in today’s world where even customized cars make use of many off-the-shelf parts.
Moreover, the required volume for a successful car-production line mandates avoiding hand-tuning of individual vehicles to make their components plug-and-play properly. Even Rolls-Royce now uses the Ford approach, of course; the alternative is impractical for modern vehicles except for special trim and accessories.
Single unit “perfection” uses both approaches
In some cases, both calibration and the use of a better topology and superior components combine for a unique design. Not surprisingly, a classic example is one of the first EDN articles by the late analog-design genius Jim Williams, “This 30-ppm scale proves that analog designs aren’t dead yet”. Yes, I have cited it in previous blogs, and that’s no accident (Figure 3).

Figure 3 This 1976 EDN article by Jim Williams set a standard for analog signal-chain technical expertise and insight that has rarely been equaled. Source: EDN
In the article, he describes his step-by-step design concept and fabrication process for a portable weigh scale that would offer 0.02% absolute accuracy (0.01 lb over a 300-pound range). Yet, it would never need adjustment to be put into use. Even though this article is nearly 50 years old, it still has relevant lessons for our very different world.
I believe that Jim did a follow-up article about 20 years later, where he revisited and upgraded that design using newer components, but I can’t find it online.
Today’s requirements were unimaginable—until recently
Use of in-process calibration is advancing thanks to techniques such as laser-based interferometry. In semiconductor manufacturing equipment, for example, the positional accuracy of the carriage that moves over the wafer needs to be in the sub-micrometer range.
While this level of performance can be achieved with friction-free air bearings, they cannot be used in extreme-ultraviolet (EUV) systems since those operate in an ultravacuum environment. Instead, high-performance mechanical bearings must be used, even though they are inferior to air bearings.
There are micrometer-level errors in the x-axis and y-axis, and the two axes are also not perfectly orthogonal, resulting in a system-level error typically greater than several micrometers across the 300 × 300-mm plane. To compensate, manufacturers add interferometry-based calibration of the mechanical positioning systems to determine the error topography of a mechanical platform.
For example, with a 300-mm wafer, the grid is scanned in 10-mm steps, and the interferometer determines the actual position. This value is compared against the motion-encoder value to determine a corrective offset. After this mapping, the system accuracy is improved by a factor of 10 and can achieve an absolute accuracy of better than 0.5 µm in the x-y plane.
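The mapping procedure described above can be sketched in a few lines. The grid pitch and axis length follow the text; the encoder-error model and all names are illustrative assumptions, not real machine data:

```python
import math

STEP_MM = 10.0    # calibration grid pitch from the text
AXIS_MM = 300.0   # stage travel across a 300-mm wafer
grid = [i * STEP_MM for i in range(int(AXIS_MM / STEP_MM) + 1)]

# Illustrative encoder-error model (an assumption): a few micrometers
# of slowly varying error along one axis. Positions are in mm.
def encoder_reading(true_mm):
    return true_mm + 3e-3 * math.sin(2 * math.pi * true_mm / AXIS_MM)

# Calibration pass: at each grid point the interferometer supplies the
# true position; store the corrective offset (truth minus encoder).
offsets = [x - encoder_reading(x) for x in grid]

def corrected(enc_mm):
    """Correct a raw encoder value by linearly interpolating the stored
    offset map (valid because the error is tiny vs. the 10-mm pitch)."""
    i = min(int(enc_mm / STEP_MM), len(grid) - 2)
    frac = (enc_mm - grid[i]) / STEP_MM
    return enc_mm + offsets[i] + frac * (offsets[i + 1] - offsets[i])

# Residual error at an off-grid point shrinks from micrometers to
# well under the raw error after applying the map.
true_mm = 123.4
raw = encoder_reading(true_mm)
print(f"raw error:     {abs(raw - true_mm) * 1e3:.3f} um")
print(f"after mapping: {abs(corrected(raw) - true_mm) * 1e3:.3f} um")
```

A real system builds a two-dimensional map (including the axis non-orthogonality the text mentions), but the principle is the same: measure truth on a grid once, then interpolate the correction at run time.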
Maybe too smart?
Of course, there are times when you can be a little too clever in the selection of components when working to improve system performance. Many years ago, I worked for a company making controllers for large machines, and there was one circuit function that needed coarse adjustment for changes in ambient temperature. The obvious way would have been to use a thermistor or a similar component in the circuit.
But our lead designer—a circuit genius by any measure—had a “better” idea. Since the design used a lot of cheap, 100-kΩ pullup resistors with poor temperature coefficient, he decided to use one of those instead of the thermistor in the stabilization loop, as they were already on the bill of materials. The bench prototype and pilot-run units worked as expected, but the regular production units had poor performance.
Long story short: our brilliant designer had based the circuit stabilization on the deliberately poor tempco of these re-purposed pull-up resistors and the associated loop dynamic range. However, our in-house purchasing agent got a good deal on some resistors of the same value and size, but with a much tighter tempco. To the purchasing agent, getting a better component that was functionally and physically identical for less money seemed like a win-win.
That was fine for the pull-up role, but it meant that the transfer function of temperature to resistance was severely compressed. Identifying that problem took a lot of aggravation and time.
What’s your preferred approach to achieving a high-precision, accurate, stable analog-signal chain and front-end? Have you used both methods, or are you inherently partial to one over the other? Why?
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- JPL & NASA’s latest clock: almost perfect
- The Wright Brothers: Test Engineers as Well as Inventors
- Precision metrology redefines analog calibration strategy
- Inter-satellite link demonstrates metrology’s reach, capabilities
The post Achieving analog precision via components and design, or just trim and go appeared first on EDN.
KPI teams among the winners of Sikorsky Challenge 2025!
The XIV Sikorsky Challenge International Festival of Innovative Projects brought together 123 projects from 12 countries 🇺🇦🇺🇸🇮🇹🇫🇷🇰🇷🇪🇸🇮🇪🇱🇺🇮🇸🇮🇱🇦🇿🇵🇱



