Kyiv Polytechnic, together with the Ukrainian-Japanese Center of KPI, begins cooperation with Digital Knowledge Co., Ltd
🇯🇵🇺🇦📃 A memorandum of cooperation with the Japanese company, which develops platforms for online courses and provides the development, implementation, and support of online-learning solutions, opens up new opportunities in the digital-education sector.
Wearables for health analysis: A gratefulness-inducing personal experience

What should you do if your wearable device tells you something’s amiss health-wise, but you feel fine? With this engineer’s experience as a guide, believe the tech and get yourself checked.
Mid-November was…umm…interesting. After nearly two days with an elevated heart rate, which I later realized was “enhanced” by cardiac arrhythmia, I ended up overnighting at a local hospital for testing, medication, procedures, and observation. But if not for my wearable devices, I never would have known I was having problems, to my potentially severe detriment.
I felt fine the entire time; the repeated alerts coming from my smart watch and smart ring were my sole indication to seek medical attention. I’ve conceptually discussed the topic of wearables for health monitoring plenty of times in the past. Now, however, it’s become deeply personal.
Late-night, all-night alerts
Sunday evening, November 16, 2025, my Pixel Watch smartwatch began periodically alerting me to an abnormally high heart rate. As you can see from the archived reports from Fitbit (the few-hour data gaps each day reflect when the Pixel Watch is on the charger instead of my wrist):
and my Oura Ring 4:



for the prior two days, my normal sleeping heart rate is in the low-to-mid 40s bpm (beats per minute) range. However, during the November 16-to-17 overnight cycle, both wearable devices reported that I was spiking into the mid-140s, along with a more general bpm elevation-vs-norm:

By Monday evening, I was sufficiently concerned that I shared with my wife what was going on. She recommended that in addition to continued monitoring of my pulse rate and trend, I should also use the ECG (i.e., EKG, for electrocardiogram) app that was built into her Apple Watch Ultra. I first checked to see whether there was a similar app on my Pixel Watch. And indeed, there was: Fitbit ECG. A good overview video is embedded within some additional product documentation:
Here’s an example displayed results screenshot directly from my watch, post-hospital visit, when my heart was once again thankfully beating normally:

I didn’t think to capture screenshots that Monday night—my thoughts were admittedly on other, more serious matters—but here’s a link to the Fitbit-generated November 17 evening report as a PDF, and here’s the captured graphic:

The average bpm was 110. And the report summary? “Atrial Fibrillation: Your heart rhythm shows signs of atrial fibrillation (AFib), an irregular heart rhythm.”
The next morning (PDF, again), when I re-did the test:


my average bpm was now 140. And the conclusion? “Inconclusive high heart rate: If your heart rate is over 120 beats per minute, the ECG app can’t assess your heart rhythm.”
The data was even more disconcerting this time, and the overall trend was in a discouraging direction. I promptly made an emergency appointment for that same afternoon with my doctor. She ran an ECG on the office equipment, whose results closely (and impressively so) mirrored those from my Pixel Watch. Then she told me to head directly to the closest hospital; had my wife not been there to drive me, I probably would have been transported in an ambulance.
Thankfully, as you may have already noticed from the above graphs, after bouts of both atrial flutter and fibrillation, my heart rate began to return to its natural rhythm by late that same evening. Although the Pixel Watch battery had died by ~6 am on Wednesday morning, my recovery was already well under way:
and the Oura Ring kept chugging along to document the normal heartbeat restoration process:

I was discharged on Wednesday afternoon with medication in-hand, along with instructions to make a follow-up appointment with the cardiologist I’d first met at the hospital emergency room. But the “excitement” wasn’t yet complete. The next morning, my Pixel Watch started yelling at me again, this time because my heart rate was too low:

My normal resting heart rate when awake is in the low-to-mid 50s. But now it was ~10 points below that. I had an inkling that the root cause might be a too-high medication dose, and a quick call to the doctor confirmed my suspicion. Splitting each tablet in two got things back to normal:


As I write this, I’m nearing the end of a 30-day period wearing a cardiac monitor, a quite cool device to which I’ll devote an upcoming blog post. My next (and ideally last) cardiologist appointment is a month away; I’m hopeful that this arrhythmia event was a one-time fluke.
Regardless, my unplanned hospital visit, specifically the circumstances that prompted it, was more than a bit of a wakeup call for this former ultramarathoner and broader fitness-activity aficionado (admittedly a few years and a few pounds ago). That said, I’m now a lifelong devotee and advocate of smart watches, smart rings, and other health-monitoring wearables as effective adjuncts to traditional symptoms that, as my case study exemplifies, might not even manifest in response to an emerging condition…assuming you’re paying sufficient ongoing attention to your body to notice them if they were present.
Thoughts on what I’ve shared today? As always, please post ‘em in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Wearable trends: a personal perspective
- The Pixel Watch: An Apple alternative with Google’s (and Fitbit’s) personal touch
- The Smart Ring: Passing fad, or the next big health-monitoring thing?
- The Oura Ring 4: Does “one more” deliver much (if any) more?
The post Wearables for health analysis: A gratefulness-inducing personal experience appeared first on EDN.
How to design a digital-controlled PFC, Part 2

In Part 1 of this article series, I explained the system block diagram and each of the modules of digital control. In this second installment, I’ll talk about how to write firmware to implement average current-mode control.
Average current-mode control
Average current-mode control, as shown in Figure 1, is common in continuous-conduction-mode (CCM) power factor correction (PFC). It has two loops: a voltage loop that works as an outer loop and a current loop that works as an inner loop. The voltage loop regulates the PFC output voltage (VOUT) and provides current commands to the current loop. The current loop forces the inductor current to follow its reference, which is modulated by the AC input voltage.
Figure 1 Average current-mode control is common in CCM PFC, where a voltage loop regulates the PFC output voltage and provides current commands to the current loop. Source: Texas Instruments
Normalization
Normalizing all of the signals in Figure 1 makes it possible to handle different signal scales and prevents calculations from overflowing.
For VOUT, VAC, and IL, multiply their analog-to-digital converter (ADC) readings by a factor of 1/2^12, i.e., 1/4096 (assuming a 12-bit ADC).
For VREF, multiply its setpoint by a factor of R2/((R1 + R2) × 3.3), where R1 and R2 are the resistors used in Figure 4 from Part 1 of this article series.
After normalization, all of the signals are in the range of (–1, +1). The compensator GI output d is in the range of (0, +1), where 0 means 0% duty and 1 means 100% duty.
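As a concrete illustration, here is a minimal C sketch of the ADC-scaling step, assuming a 12-bit converter; the function name is illustrative, not from the TI reference code:

```c
#include <stdint.h>
#include <assert.h>

/* Normalize a raw 12-bit ADC reading by dividing by 2^12 = 4096.
 * A full-scale reading maps to just under 1.0; differential signals
 * such as VAC (line minus neutral) then land in the (-1, +1) range. */
static inline float adc_normalize(uint16_t raw)
{
    return (float)raw / 4096.0f;
}
```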
Digital voltage-loop implementation
As shown in Figure 1, an ADC senses VOUT for comparison to VREF. Compensator GV processes the error signal, which is usually a proportional integral (PI) compensator, as I mentioned in Part 1. The output of this PI compensator will become part of the current reference calculations.
VOUT has a double-line frequency, which couples to the current reference and affects total harmonic distortion (THD). To reduce this ripple effect, set the PFC voltage-loop bandwidth much lower than the AC frequency; for example, around 10Hz. This low voltage-loop bandwidth will cause VOUT to dip too much when a heavy load is applied, however.
Meeting the load transient response requirement will require a nonlinear voltage loop. When the voltage error is small, use a small Kp, Ki gain. When the error exceeds a threshold, using a larger Kp, Ki gain will rapidly bring VOUT back to normal. Figure 2 shows a C code example for this nonlinear voltage loop.

Figure 2 C code example for this nonlinear voltage-loop gain. Source: Texas Instruments
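As a hedged C sketch of this nonlinear-gain scheme, consider the following; the threshold and gain values are illustrative placeholders, not TI's published numbers:

```c
#include <math.h>
#include <assert.h>

typedef struct {
    float integral;  /* integrator state */
} pi_state_t;

/* Nonlinear voltage-loop PI: small gains near the setpoint (to keep the
 * double-line-frequency ripple out of the current reference), larger gains
 * on big transients (to pull VOUT back quickly after a load step). */
float vloop_update(pi_state_t *s, float verr)
{
    float kp, ki;
    if (fabsf(verr) < 0.05f) {   /* small error: slow, low-THD gains   */
        kp = 0.5f;  ki = 0.01f;
    } else {                     /* large error: fast recovery gains   */
        kp = 2.0f;  ki = 0.05f;
    }
    s->integral += ki * verr;
    /* Clamp the integrator to the (0, +1) output range to avoid windup. */
    if (s->integral > 1.0f) s->integral = 1.0f;
    if (s->integral < 0.0f) s->integral = 0.0f;
    return kp * verr + s->integral;
}
```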
Digital current-loop implementation takes 3 steps:
Step 1: Calculating the current reference
As shown in Figure 1, Equation 3 calculates the current-loop reference, IREF:
IREF = (A × C) / B
where A is the voltage-loop output, C is the AC input voltage, and B is the square of the AC root-mean-square (RMS) voltage.
Using the AC line-measured voltage subtracted by the AC neutral-measured voltage will obtain the AC input voltage (Equation 4 and Figure 3):
VAC = VLINE − VNEUTRAL

Figure 3 VAC calculated by subtracting AC neutral-measured voltage from AC line-measured voltage. Source: Texas Instruments
Equation 5 defines the RMS value as:
VRMS = sqrt( (1/T) × ∫ v(t)² dt )
With Equation 6 in discrete format:
VRMS² = (1/N) × Σ V(n)², n = 1…N
where V(n) represents each ADC sample, and N is the total number of samples in one AC cycle.
After sampling VAC at a fixed rate, each sample is squared and accumulated over the AC cycle. Dividing the accumulated sum by the number of samples in one AC cycle yields the square of the RMS value.
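A minimal C sketch of this accumulate-and-divide computation (cycle detection is simplified to a flag, and the names are illustrative):

```c
#include <assert.h>
#include <math.h>

static float vac_sq_acc = 0.0f;   /* running sum of squared samples  */
static int   vac_n      = 0;      /* samples seen this AC cycle      */
static float vac_rms_sq = 0.0f;   /* B: square of VAC RMS (Eq. 6)    */

/* Call once per fixed-rate VAC sample; pass end_of_cycle = 1 on the
 * last sample of the AC cycle (e.g., detected at the zero crossing). */
void vac_rms_sample(float vac, int end_of_cycle)
{
    vac_sq_acc += vac * vac;
    vac_n++;
    if (end_of_cycle) {
        vac_rms_sq = vac_sq_acc / (float)vac_n;  /* (1/N) * sum(V(n)^2) */
        vac_sq_acc = 0.0f;
        vac_n = 0;
    }
}
```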
In steady state, you can treat both voltage-loop output A and the square of VAC RMS value B as constant; thus, only C (VAC) modulates IREF. Since VAC is sinusoidal, IREF is also sinusoidal (Figure 4).

Figure 4 Sinusoidal current reference IREF due to sinusoidal VAC. Source: Texas Instruments
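Equation 3 then reduces to a one-line computation. Here is a hedged C sketch; the divide-by-zero guard at startup is my addition, not part of the TI reference:

```c
#include <assert.h>
#include <math.h>

/* IREF = A * C / B (Equation 3): A is the voltage-loop output, C the
 * instantaneous normalized VAC, and B the square of the VAC RMS value. */
float calc_iref(float a_vloop, float c_vac, float b_vrms_sq)
{
    if (b_vrms_sq < 1e-6f)   /* guard against divide-by-zero at startup */
        return 0.0f;
    return a_vloop * c_vac / b_vrms_sq;
}
```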
Step 2: Calculating the current feedback signal
Compare the shape of the Hall-effect sensor output in Figure 5 from Part 1 with that of IREF in Figure 4 from this installment: they are identical. The only difference is that the Hall-effect sensor output has a DC offset; therefore, it cannot be used directly as the feedback signal. You must remove this DC offset before closing the loop.
Figure 5 Calculating the current feedback signal. Source: Texas Instruments
Also, the normalized Hall-effect sensor output is between (0, +1); after subtracting the DC offset, its magnitude becomes (–0.5, +0.5). To maintain the (–1, +1) normalization range, multiply it by 2, as shown in Equation 7 and Figure 5:
IFB = 2 × (IHALL − 0.5)
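In C, this offset removal and rescaling is a one-liner (the names are illustrative):

```c
#include <assert.h>

/* Equation 7: subtract the Hall sensor's mid-scale DC offset (0.5 after
 * normalization) and multiply by 2 to restore the (-1, +1) range. */
static inline float hall_to_ifb(float hall_norm)
{
    return 2.0f * (hall_norm - 0.5f);
}
```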
Step 3: Closing the current loop
Now that you have both the current reference and feedback signal, let’s close the loop. During the positive AC cycle, the control loop has standard negative feedback control. Use Equation 8 to calculate the error going to the control loop:
E = IREF − IFB
During the negative AC cycle, the higher the inductor current, the lower the value of the Hall-effect sensor output; thus, the control loop needs to change from negative feedback to positive feedback. Use Equation 9 to calculate the error going to the control loop:
E = IFB − IREF
Compensator GI processes the error signal, which is usually a PI compensator, as mentioned in Part 1. Sending the output of this PI compensator to the pulse-width modulation (PWM) module will generate the corresponding PWM signals. During a positive cycle, Q2 is the boost switch, controlled by D, while Q1 is the synchronous switch, controlled by 1-D; Q4 remains on and Q3 remains off for the whole positive AC half cycle. During a negative cycle, the functions of Q1 and Q2 swap: Q1 becomes the boost switch controlled by D, while Q2 works as the synchronous switch controlled by 1-D; Q3 remains on and Q4 remains off for the whole negative AC half cycle.
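Putting the half-cycle error calculation together with the PI compensator, here is a hedged C sketch of one current-loop update. The gains, names, and output clamping are illustrative, and mapping the duty d onto the individual switch PWMs is left to the device-specific PWM driver:

```c
#include <assert.h>
#include <math.h>

typedef struct {
    float kp, ki;
    float integral;
} pi_ctrl_t;

/* One current-loop update. On the positive AC half cycle the loop uses
 * normal negative feedback; on the negative half cycle the Hall output
 * falls as inductor current rises, so the error sign flips. */
float iloop_duty(pi_ctrl_t *pi, float iref, float ifb, int positive_half)
{
    float err = positive_half ? (iref - ifb) : (ifb - iref);

    pi->integral += pi->ki * err;
    float d = pi->kp * err + pi->integral;

    /* Clamp to the valid (0, +1) duty range: d drives the boost switch
     * (Q2 positive half, Q1 negative half); the synchronous switch of
     * the pair runs at 1 - d. */
    if (d > 1.0f) d = 1.0f;
    if (d < 0.0f) d = 0.0f;
    return d;
}
```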
Loop tuning
Tuning a PFC control loop is similar to doing so in an analog PFC design, with the exception that here you need to tune Kp, Ki instead of playing pole-zero. In general, Kp determines how fast the system responds. A higher Kp makes the system more sensitive, but a Kp value that’s too high can cause oscillations.
Ki removes steady-state errors. A higher Ki removes steady-state errors more quickly, but can lead to instability.
It is possible to tune PI manually through trial and error – here is one such tuning procedure:
- Set Kp, Ki to zero.
- Gradually increase Kp until the system’s output starts to oscillate around the setpoint.
- Set Kp to approximately half the value that caused the oscillations.
- Slowly increase Ki to eliminate any remaining steady-state errors, but be careful not to reintroduce oscillations.
- Make small, incremental adjustments to each parameter to achieve the intended system performance.
Knowing the PFC Bode plot makes loop tuning much easier; see reference [1] for a PFC tuning example. One advantage of a digital controller is that it can measure the Bode plot by itself. For example, the Texas Instruments Software Frequency Response Analyzer (SFRA) enables you to quickly measure the frequency response of your digital power converter [2]. The SFRA library contains software functions that inject a frequency into the control loop and measure the response of the system. This process provides the plant frequency response characteristics and the open-loop gain frequency response of the closed-loop system. You can then view the plant and open-loop gain frequency response on a PC-based graphical user interface, as shown in Figure 6. All of the frequency response data is exportable to a CSV file or Microsoft Excel spreadsheet, which you can then use to design the compensation loop.

Figure 6 The Texas Instruments SFRA tool allows for the quick frequency response measurement of your power converter. Source: Texas Instruments
System protection
You can implement system protection through firmware. For example, to implement overvoltage protection (OVP), compare the ADC-measured VOUT with the OVP threshold and shut down PFC if VOUT exceeds this threshold. Since most microcontrollers also have integrated analog comparators with a programmable threshold, using the analog comparator for protection can achieve a faster response than firmware-based protection. Using an analog comparator for protection requires programming its digital-to-analog converter (DAC) value. For an analog comparator with a 12-bit DAC and 3.3V reference, Equation 10 calculates the DAC value as:
DAC value = VTHRESHOLD × R2/(R1 + R2) × 4096/3.3
where VTHRESHOLD is the protection threshold, and R1 and R2 are the resistors used in Figure 4 from Part 1.
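A hedged C sketch of Equation 10, assuming the 12-bit DAC and 3.3-V reference mentioned above; the resistor values used in the check are arbitrary examples, not design values:

```c
#include <assert.h>
#include <stdint.h>

/* DAC code for the analog comparator's OVP threshold (Equation 10):
 * divide VTHRESHOLD down through the R1/R2 sense divider from Part 1,
 * then scale the result to 12-bit counts against the 3.3-V reference. */
uint16_t ovp_dac_code(float v_threshold, float r1, float r2)
{
    float v_sense = v_threshold * r2 / (r1 + r2);
    return (uint16_t)(v_sense / 3.3f * 4096.0f);
}
```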
State machine
From power-on to turn-off, the PFC operates in different states under different conditions; together, these states and the transitions between them form a state machine. The PFC state machine transitions from one state to another in response to external inputs or events. Figure 7 shows a simplified PFC state machine.

Figure 7 Simplified PFC state machine that transitions from one state to another in response to external inputs or events. Source: Texas Instruments
Upon power up, the PFC enters an idle state, where it measures VAC and checks for faults. If no faults exist and the VAC RMS value is greater than 90V, the relay closes and the PFC starts up, entering a ramp-up state. In this state, the PFC gradually ramps up VOUT by setting the initial voltage-loop setpoint equal to the measured actual VOUT, then gradually increasing the setpoint. Once VOUT reaches its setpoint, the PFC enters a regulate state and stays there until an abnormal condition occurs, such as overvoltage, overcurrent, or overtemperature. If any of these faults occur, the PFC shuts down and enters a fault state. If the VAC RMS value drops below 85V, triggering VAC brownout protection, the PFC also shuts down and enters an idle state to wait until VAC returns to normal.
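These transitions map naturally onto an enum and a switch statement. This C sketch uses the 90-V start and 85-V brownout thresholds from the text; everything else (names, fault latching, call rate) is illustrative:

```c
#include <assert.h>

typedef enum { PFC_IDLE, PFC_RAMP_UP, PFC_REGULATE, PFC_FAULT } pfc_state_t;

/* One state-machine step, called periodically (e.g., from a slow ISR). */
pfc_state_t pfc_step(pfc_state_t s, float vac_rms, float vout, float vref,
                     int fault)
{
    if (fault)
        return PFC_FAULT;              /* OVP/OCP/OTP: shut down          */
    switch (s) {
    case PFC_IDLE:                     /* relay open, monitoring VAC      */
        return (vac_rms > 90.0f) ? PFC_RAMP_UP : PFC_IDLE;
    case PFC_RAMP_UP:                  /* setpoint ramps from measured VOUT */
        return (vout >= vref) ? PFC_REGULATE : PFC_RAMP_UP;
    case PFC_REGULATE:                 /* normal operation                */
        return (vac_rms < 85.0f) ? PFC_IDLE : PFC_REGULATE;  /* brownout */
    case PFC_FAULT:
    default:
        return PFC_FAULT;              /* latched until serviced          */
    }
}
```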
Interrupts
A PFC has many tasks to perform during normal operation. Some tasks are urgent and need immediate processing, some are less urgent and can be processed later, and some need regular processing. These different task priorities are handled by interrupts. An interrupt is an event detected by the digital controller that preempts the normal program flow, pausing the current program and transferring control to a user-written firmware routine called the interrupt service routine (ISR). The ISR processes the interrupt event, then resumes normal program flow.
Firmware structure
Figure 8 shows a typical PFC firmware structure. There are three major parts: the background loop, ISR1, and ISR2.

Figure 8 PFC firmware structure with three major parts: the background loop, ISR1, and ISR2. Source: Texas Instruments
The firmware starts from the function main(). In this function, the controller initializes its peripherals (configuring the ADC, PWM, general-purpose input/output, and universal asynchronous receiver/transmitter (UART)), sets up protection thresholds, configures interrupts, and initializes global variables. The controller then enters a background loop that runs indefinitely. This background loop contains non-time-critical tasks and tasks that do not need regular processing.
ISR2 is an interrupt service routine that runs at 10 kHz. The triggering of ISR2 suspends the background loop. The CPU jumps to ISR2 and starts executing its code. Once ISR2 finishes, the CPU returns to where it was upon suspension and resumes normal program flow.
The tasks in ISR2 that are time-critical or processed regularly include:
- Voltage-loop calculations.
- PFC state machine.
- VAC RMS calculations.
- E-metering.
- UART communication.
- Data logging.
ISR1 is an interrupt service routine that runs every PWM cycle: for example, if the PWM frequency is 65 kHz, then ISR1 runs at 65 kHz. ISR1 has a higher priority than ISR2, which means that if ISR1 triggers while the CPU is in ISR2, ISR2 suspends, and the CPU jumps to ISR1 and starts executing its code. Once ISR1 finishes, the CPU goes back to where it was upon suspension and resumes normal program flow.
The tasks in ISR1 are more critical than those in ISR2 and need to be processed more quickly. These include:
- ADC measurement readings.
- Current reference calculations.
- Current-loop calculations.
- Adaptive dead-time adjustments.
- AC voltage-drop detection.
- Firmware-based system protection.
The current loop is an inner loop of average current-mode control. Because its bandwidth must be higher than that of the voltage loop, put the current loop in faster ISR1, and put the voltage loop in slower ISR2.
AC voltage-drop detection
In a server application, when an AC voltage drop occurs, the PFC controller must detect it rapidly and report the voltage drop to the host. Rapid AC voltage-drop detection becomes more important when using a totem-pole bridgeless PFC.
As shown in Figure 9, assuming a positive AC cycle where Q4 is on, the turn-on of synchronous switch Q1 discharges the bulk capacitor, which means that it is no longer possible to guarantee the holdup time.

Figure 9 The bulk capacitor discharging after the AC voltage drops. Source: Texas Instruments
To rapidly detect an AC voltage drop, you can use a firmware phase-locked loop (PLL) [3] to generate an internal sine-wave signal that is in phase with AC input voltage, as shown in Figure 10. Comparing the measured VAC with this PLL sine wave will determine the AC voltage drop, at which point all switches should turn off.

Figure 10 Rapid AC voltage-drop detection by using a firmware PLL to generate an internal sine-wave signal that is in phase with AC input voltage. Source: Texas Instruments
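The comparison step can be sketched in a few lines of C. The margin and debounce count below are illustrative values, not from the TI reference design:

```c
#include <assert.h>
#include <math.h>

/* Compare each measured VAC sample against the firmware PLL's in-phase
 * reference sine. If the measurement sags below the reference by more
 * than a margin for several consecutive samples, declare an AC drop
 * (at which point all switches should turn off). */
int ac_drop_detected(float vac_meas, float vac_pll)
{
    static int below = 0;
    if (fabsf(vac_pll) - fabsf(vac_meas) > 0.15f)  /* hypothetical margin */
        below++;
    else
        below = 0;
    return below >= 3;   /* debounce over three consecutive samples */
}
```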
Design your own digital control
Now that you have learned how to use firmware to implement an average current-mode controller, how to tune the control loop, and how to construct the firmware structure, you should be able to design your own digitally controlled PFC. Digital control can do much more. In the third installment of this article series, I will introduce advanced digital control algorithms to reduce THD and improve the power factor.

Bosheng Sun is a system engineer and Senior Member Technical Staff at Texas Instruments, focused on developing digitally controlled high-performance AC/DC solutions for server and industry applications. Bosheng received a Master of Science degree from Cleveland State University, Ohio, USA, in 2003 and a Bachelor of Science degree from Tsinghua University in Beijing in 1995, both in electrical engineering. He has published over 30 papers and holds six U.S. patents.
Related Content
- How to design a digital-controlled PFC, Part 1
- Digital control for power factor correction
- Digital control unveils a new epoch in PFC design
- Power Tips #124: How to improve the power factor of a PFC
- Power Tips #115: How GaN switch integration enables low THD and high efficiency in PFC
- Power Tips #116: How to reduce THD of a PFC
References
- Sun, Bosheng, and Zhong Ye. “UCD3138 PFC Tuning.” Texas Instruments application report, literature No. SLUA709, March 2014.
- Texas Instruments. n.d. “SFRA powerSUITE Digital Power Supply Software Frequency Response Analyzer Tool for C2000 MCUs.” Accessed Dec. 9, 2025.
- Bhardwaj, Manish. “Software Phase Locked Loop Design Using C2000 Microcontrollers for Single Phase Grid Connected Inverter.” Texas Instruments application report, literature No. SPRABT3A, July 2017.
The post How to design a digital-controlled PFC, Part 2 appeared first on EDN.
From Insight to Impact: Architecting AI Infrastructure for Agentic Systems
Courtesy: AMD
The next frontier of AI is not just intelligent – it’s agentic. As enterprises move toward systems capable of autonomous action and real-time decision-making, the demands on infrastructure are intensifying.
In this IDC-authored blog, Madhumitha Sathish, Research Manager, High Performance Computing, examines how organisations can prepare for this shift with flexible, secure, and cost-effective AI infrastructure strategies. Drawing on IDC’s latest research, the piece highlights where enterprises stand today and what it will take to turn agentic AI potential into measurable business impact.
Agentic AI Is Reshaping Enterprise Strategy
Artificial intelligence has become foundational to enterprise transformation. In 2025, the rise of agentic AI, systems capable of autonomous decision-making and dynamic task execution, is redefining how organisations approach infrastructure, governance, and business value. These intelligent systems don’t just analyse data; they act on it, adapting in real time across datacenter, cloud, and edge environments.
Agentic AI can reallocate compute resources to meet SLAs, orchestrate cloud deployments based on latency and compliance, and respond instantly to sensor failures in smart manufacturing or logistics. But as IDC’s July 2025 survey of 410 IT and AI infrastructure decision-makers reveals, most enterprises are still figuring out how to harness this potential.
IDC Insight: 75% Lack Clarity on Agentic AI Use Cases
According to IDC, more than 75% of enterprises report uncertainty around agentic AI use cases. This lack of clarity poses real risks where initiatives may stall, misalign with business goals, or introduce compliance challenges. Autonomous systems require robust oversight, and without well-defined use cases, organisations risk deploying models that behave unpredictably or violate internal policies.
Scaling AI: Fewer Than 10 Use Cases at a Time
IDC found that 83% of enterprises launch fewer than 10 AI use cases simultaneously. This cautious approach reflects fragmented strategies and limited scalability. Only 21.7% of organisations conduct full ROI analyses for proposed AI initiatives, and just 22.2% ensure alignment with strategic objectives. The rest rely on assumptions or basic assessments, which can lead to inefficiencies and missed opportunities.
Governance and Security: A Growing Priority
As generative and agentic AI models gain traction, governance and security are becoming central to enterprise readiness. IDC’s data shows that organisations are adopting multilayered data governance strategies, including:
- Restricting access to sensitive data
- Anonymising personally identifiable information
- Applying lifecycle management policies
- Minimising data collection for model development
Security testing is also evolving. Enterprises are simulating adversarial attacks, testing for data pollution, and manipulating prompts to expose vulnerabilities. Input sanitisation and access control checks are now standard practice, reflecting a growing awareness that AI security must be embedded throughout the development pipeline.
Cost Clarity: Infrastructure Tops the List
AI initiatives often falter due to unclear cost structures. IDC reports that nearly two-thirds of GenAI projects begin with comprehensive cost assessments covering infrastructure, licensing, labor, and scalability. Among the most critical cost factors:
- Specialised infrastructure for training (60.7%)
- Infrastructure for inferencing (54.5%)
- Licensing fees for LLMs and proprietary tools
- Cloud compute and storage pricing
- Salaries and overhead for AI engineers and DevOps teams
- Compliance safeguards and governance frameworks
Strategic planning must account for scalability, integration, and long-term feasibility.
Infrastructure Choices: Flexibility Is Essential
IDC’s survey shows that enterprises are split between building in-house systems, purchasing turnkey solutions, and working with systems integrators. For training, GPUs, high-speed interconnects, and cluster-level orchestration are top priorities. For inferencing, low-latency performance across datacenter, cloud, and edge environments is essential.
Notably, 77% of respondents say it’s very important that servers, laptops, and edge devices operate on consistent hardware and software platforms. This standardisation simplifies deployment, ensures performance predictability, and supports model portability.
Strategic Deployment: Datacenter, Cloud, and Edge
Inferencing workloads are increasingly distributed. IDC found that 63.9% of organisations deploy AI inference workloads in public cloud environments, while 50.7% continue to leverage their own datacenters. Edge servers are gaining traction for latency-sensitive applications, especially in sectors like manufacturing and logistics. Inferencing on end-user devices remains limited, reflecting a strategic focus on reliability and infrastructure consistency.
Looking Ahead: Agility, Resilience, and Cost-Efficient Infrastructure
As enterprises prepare for the next wave of AI innovation, infrastructure agility and governance sophistication will be paramount. Agentic AI will demand real-time responsiveness, energy-efficient compute, and resilient supply chains. IDC anticipates that strategic infrastructure planning can help in lowering operational costs while improving performance density by optimizing power and cooling demands. Enterprises can also avoid unnecessary spending through workload-aware provisioning and early ROI modelling across AI environments. Sustainability will become central to infrastructure planning, and semiconductor availability will be a strategic priority.
The future of AI isn’t just about smarter models; it’s about smarter infrastructure. Enterprises that align strategy with business value, governance, and operational flexibility will be best positioned to lead in the age of agentic intelligence.
The post From Insight to Impact: Architecting AI Infrastructure for Agentic Systems appeared first on ELE Times.
IIIT Hyderabad’s customised chip design and millimetre-wave circuits for privacy-preserving sensing and intelligent healthcare systems
In an age where governance, healthcare and mobility increasingly rely on data, how that data is sensed, processed and protected matters deeply. Visual dashboards, spatial maps and intelligent systems have become essential tools for decision-making, but behind every such system lies something less visible and far more fundamental: electronics.
Silicon-To-System Philosophy
At IIIT Hyderabad, the Integrated Circuits Inspired by Wireless and Biomedical Systems (IC-WiBES) research group, led by Prof. Abhishek Srivastava, is rethinking how electronic systems are designed: not as isolated chips, but as end-to-end technologies that move seamlessly from silicon to real-world deployment. The group follows a simple but powerful philosophy: vertical integration from chip design to system-level applications.
Rather than treating integrated circuits, signal processing and applications as separate silos, the group works across all three layers simultaneously. This “dual-track” approach allows researchers to design custom chips while also building complete systems around them, ensuring that electronics are shaped by real-world needs rather than abstract specifications.
Why Custom Chips Still Matter
In many modern systems, off-the-shelf electronics are sufficient. But for strategic applications such as healthcare monitoring, privacy-preserving sensing, space missions, or national infrastructure, generic hardware often becomes a bottleneck. The IIIT-H team focuses on designing application-specific integrated circuits (ASICs) that offer greater flexibility, precision and energy efficiency than commercial alternatives. These chips are not built in isolation; they evolve continuously based on feedback from real deployments, ensuring that circuit-level decisions directly improve system performance.
Millimetre Wave Electronics
One of the lab’s most impactful research areas is millimetre-wave (mmWave) radar sensing, a technology increasingly used in automotive safety but still underexplored for civic and healthcare applications. Unlike cameras, mmWave radar can operate in low light, fog, rain and dust – all while preserving privacy. By transmitting and receiving high-frequency signals, these systems can detect motion, distance and even minute vibrations, such as the movement of a human chest during breathing.
Contactless Healthcare Monitoring
This capability has opened up new possibilities in non-contact health monitoring. The team has developed systems that can measure heart rate and respiration without wearables or cameras, which is particularly useful in infectious disease wards, elderly care, and post-operative monitoring. These systems combine custom electronics, signal processing and edge AI to extract vital signs from extremely subtle radar reflections. Clinical trials are already underway, with deployments planned in hospital settings to evaluate real-world performance.
Privacy-First Sensing For Roads
The same radar technology is being applied to road safety and urban monitoring. In poor visibility conditions, such as heavy rain or fog, traditional camera-based systems struggle. Radar-based sensing, however, continues to function reliably. The researchers have demonstrated systems that can detect and classify vehicles, pedestrians and cyclists with high accuracy and low latency, even in challenging environments. Such systems could inform traffic planning, accident analysis and smart city governance, without raising surveillance concerns.
Systems Shaping Chips
A defining feature of the lab’s work is the feedback loop between systems and circuits. When limitations emerge during field testing, such as signal interference or noise, the insights directly inform the next generation of chip designs. This has led to innovations such as programmable frequency-modulated radar generators, low-noise oscillators and high-linearity receiver circuits, all tailored to the demands of real applications rather than textbook benchmarks.
Building Rare Electronics Infrastructure
Supporting this research is a rare, high-frequency electronics setup at IIIT Hyderabad, capable of measurements up to 44 GHz – facilities available at only a handful of institutions nationwide. The lab has also led landmark milestones, including the institute’s first fully in-house chip tape-out and participation in international semiconductor design programs that provide broad access to advanced electronic design automation tools.
Training Full Stack Engineers
Beyond research outputs, the group is shaping a new generation of engineers fluent across the entire electronics stack, from transistor-level design to algorithms and applications. “Our students learn how circuit-level constraints shape system intelligence – a rare but increasingly critical skill,” remarks Prof. Srivastava. This cross-disciplinary training equips students for roles in national missions, deep-tech startups, academia and the advanced semiconductor industry.
Academic Research to National Relevance
With sustained funding from multiple agencies, dozens of top-tier publications, patents in progress and early-stage technology transfers underway, the lab’s work reflects a broader shift in Indian research, one moving towards application-driven electronics innovation.
Emphasising that progress in deep-tech research isn’t linear, Prof. Srivastava remarks that at IC-WIBES, circuits, systems, and algorithms mature together. “Sometimes hardware leads. Sometimes applications expose flaws. The key is patience, persistence, and constant feedback. The lab isn’t trying to replace every component with custom silicon. Instead, we are focused on strategic intervention – designing custom chips where they matter most.”
The post IIIT Hyderabad’s customised chip design and millimetre-wave circuits for privacy-preserving sensing and intelligent healthcare systems appeared first on ELE Times.
Can the SDV Revolution Happen Without SoC Standardization?
Speaking at the Auto EV Tech Vision Summit 2025, Yogesh Devangere, who heads the Technical Center at Marelli India, turned attention to a layer of the Software-Defined Vehicle (SDV) revolution that often escapes the spotlight: the silicon itself. The transition from distributed electronic control units (ECUs) to centralized computing is not just a software story—it is a System-on-Chip (SoC) story.
While much of the industry conversation revolves around features, over-the-air updates, AI assistants, and digital cockpits, Devangere argued that none of it is possible without a fundamental architectural shift inside the vehicle. If SDVs represent the future of mobility, then SoCs are the engines quietly driving that future.
From 16-Bit Controllers to Heterogeneous Superchips
Automotive electronics have evolved dramatically over the past two decades. What began as simple 16-bit microcontrollers has now transformed into complex, heterogeneous SoCs integrating multiple CPU cores, GPUs, neural processing units, digital signal processors, hardware security modules, and high-speed connectivity interfaces—all within a single chipset.
“These SoCs are what enable the SDV journey,” Devangere explained, describing them as high-performance computing platforms that can consolidate multiple vehicle domains into centralized architectures. Unlike traditional ECUs designed for single-purpose tasks, modern SoCs are built to manage diverse functions simultaneously—from ADAS image processing and AI model deployment to infotainment rendering, telematics, powertrain control, and network management. This marks a structural shift in the automotive industry.
Centralized Computing Is the Real Transformation
The move toward SDVs, in a way, is a move toward centralized computing. Simply stated, instead of dozens of independent ECUs scattered across the vehicle, OEMs are increasingly experimenting with domain controller architectures or centralized controllers combined with zonal controllers. In both cases, the SoC becomes the computational heart of the system, and this consolidation enables:
- Higher processing power
- Cross-domain feature integration
- Over-the-air (OTA) updates
- AI-driven functionality
- Flexible software deployment across operating systems such as Linux, Android, and QNX
A key enabler in this architecture is the hypervisor layer, which abstracts hardware from software and allows multiple operating systems to run independently on shared silicon. This flexibility is essential in a transition era where AUTOSAR (AUTomotive Open System ARchitecture) and non-AUTOSAR stacks coexist. AUTOSAR is a global software standard for automotive electronic control units (ECUs). It defines how automotive software should be structured, organized, and communicated, so that different suppliers and OEMs can build compatible systems.
But while the architectural promise is compelling, Devangere made it clear that implementation is far from straightforward.
The Architecture Is Not Standardized
One of the most critical challenges he highlighted is the absence of hardware-level standardization. “Every OEM is implementing SDV architecture in their own way,” he noted. Some opt for multiple domain controllers; others experiment with centralized controllers and zonal approaches. The result is a fragmented ecosystem.
Unlike the smartphone world—where Android runs on broadly standardized hardware platforms—automotive SoCs lack a unified framework. There is currently no hardware consortium defining a common architecture. While open-source software efforts such as Eclipse aim to harmonize parts of the software stack, the hardware layer remains highly individualized. The consequence is complexity. Tier-1 suppliers cannot rely on long lifecycle platforms, as SoCs evolve rapidly. What might be viable today could become obsolete within a few years.
In an industry accustomed to decade-long product cycles, that volatility is disruptive.
Complexity vs. Time-to-Market
If architectural fragmentation were not enough, development timelines are simultaneously shrinking. Designing with SoCs is inherently complex. A single SoC program often involves coordination among five to nine suppliers. Hardware validation must account for electromagnetic compatibility, thermal performance, and interface stability across multiple cores and peripherals. Software integration spans multi-core configurations, multiple operating systems, and intricate stack dependencies.
Yet market expectations continue to demand faster launches. “You cannot go back to longer development cycles,” Devangere observed. The pressure to innovate collides with the technical realities of high-complexity chip integration.
Power, Heat, and the Hidden Engineering Burden
Beyond software flexibility and AI capability lies a more fundamental engineering constraint: energy. High-performance SoCs generate significant heat and demand careful power management—critical in electric vehicles where battery efficiency is paramount. Many current architectures still rely on companion microcontrollers for power and network management, while the SoC handles high-compute workloads.
Balancing performance with energy efficiency, ensuring timing determinism across multiple simultaneous functions, and maintaining safety compliance remain non-trivial challenges. As vehicles consolidate ADAS, infotainment, telematics, and control systems onto shared silicon, resource management becomes as important as raw processing capability.
Partnerships Over Isolation
Given the scale of complexity, Devangere emphasized collaboration as the only viable path forward. SoC development and integration are rarely the work of a single organization. Semiconductor suppliers, Tier-1 system integrators, software stack providers, and OEMs must align early in the architecture phase.
Some level of standardization—particularly at the hardware architecture level—could significantly accelerate development cycles. Without it, the industry risks “multiple horses running in different directions,” as one audience member aptly put it during the discussion.
For now, that standardization remains aspirational.
The Real Work of the SDV Era
The excitement surrounding software-defined vehicles often focuses on user-facing features—AI assistants, personalized interfaces, downloadable upgrades. Devangere’s message was more grounded. Behind every seamless update, every AI-enabled feature, and every connected service lies a dense web of silicon complexity. Multi-core processing, heterogeneous architectures, thermal constraints, validation cycles, and fragmented standards form the invisible scaffolding of the SDV transformation.
The car may be becoming a computer on wheels. But building that computer—robust, safe, efficient, and scalable—remains one of the most demanding engineering challenges the automotive industry has ever faced.
And at the center of it all is the SoC.
The post Can the SDV Revolution Happen Without SoC Standardization? appeared first on ELE Times.
ElevateX 2026, Marking a New Chapter in Human Centric and Intelligent Automation
Teradyne Robotics today hosted ElevateX 2026 in Bengaluru – its flagship industry forum bringing together Universal Robots (UR) and Mobile Industrial Robots (MiR) to spotlight the next phase of human‑centric, collaborative, and intelligent automation shaping India’s manufacturing and intralogistics landscape.
Designed as a high‑impact platform for industry leadership and ecosystem engagement, ElevateX 2026 convened 25+ CEO/CXO leaders, technology experts, startups, and media, reinforcing how Indian enterprises are progressing from isolated automation pilots to scalable, business‑critical deployments.
Teradyne Robotics emphasized the rapidly expanding role of flexible and intelligent automation in enabling enterprises to scale confidently and safely. With industrial collaborative robots (cobots) and autonomous mobile robots (AMRs) becoming mainstream across sectors, the company underlined its commitment to driving advanced automation, skill development, and stronger industry‑partner ecosystems in India.
The event showcased several real‑world automation applications featuring cobots and AMRs across key sectors, including Automotive, F&B, FMCG, Education, and Logistics. These demos highlighted the ability of Universal Robots and MiR to help organizations scale quickly, redeploy easily, and improve throughput and workforce efficiency.
Showcasing high‑demand applications from palletizing and welding to material transport, machine tending, and training, the demonstrations reflected how Teradyne Robotics enables faster ROI, simpler deployment, and safe automation across high‑mix and high‑volume operations.
Speaking at the event, James Davidson, Chief Artificial Intelligence Officer, Teradyne Robotics, said, “Automation is entering a defining era – one where intelligence, flexibility, and human-centric design are no longer optional, but fundamental to how businesses innovate, scale, and compete. AI is transforming robots from tools that simply execute tasks into intelligent collaborators that can perceive, learn, and adapt in dynamic environments. In India, we are witnessing a decisive shift from experimentation to enterprise-wide adoption, and ElevateX 2026 reflects this momentum – bringing the ecosystem together to explore how collaborative and intelligent automation can become a strategic growth engine for both established enterprises and the next generation of startups.”
Poi Toong Tang, Vice President of Sales, Asia Pacific, Teradyne Robotics, added, “India is rapidly emerging as one of the most important and dynamic automation markets in Asia Pacific. Organizations today are not just looking to automate – they are looking to build operations that are flexible, resilient, and future-ready. The demand is for modular automation that delivers faster ROI and can evolve alongside business needs. Through Universal Robots and MiR, we are enabling end-to-end automation across production and intralogistics, helping Indian companies scale with confidence and compete on a global stage.”
Sougandh K.M., Business Director – South Asia, Teradyne Robotics, said, “India’s automation journey will be defined by collaboration across its ecosystem — by partners, system integrators, startups, and skilled talent working together to turn technology into real impact. At Teradyne Robotics, our belief is simple: automation should be for anyone and anywhere, and robots should enable people to do better work, not work like robots. Our focus is on automating tasks that are dull, dirty, and dangerous, while helping organizations improve productivity, safety, and quality. ElevateX 2026 is about lowering barriers to adoption and building long-term capability in India, making automation practical, scalable, and accessible, and positioning Teradyne Robotics as a trusted partner in every stage of that growth journey.”
Customer Case Story Testimonial/Teaser
A key highlight of ElevateX 2026 was the spotlight on customer success, and Origin stood out. The fast‑growing U.S. construction‑tech startup shared how partnering with Universal Robots is driving measurable impact through improved productivity, stronger safety, and consistently high‑quality project outcomes powered by collaborative automation.
Yogesh Ghaturle, the Co-founder and CEO of Origin, said, “Our goal is to bring true autonomy to the construction site, transforming how the world builds. Executing this at scale requires a technology stack where every component operates with absolute predictability. Universal Robots provides the robust, operational backbone we need. With their cobots handling the mechanical precision, we are free to focus on deploying our intelligent systems in the real world.”
The post ElevateX 2026, Marking a New Chapter in Human Centric and Intelligent Automation appeared first on ELE Times.
The Architecture of Edge Computing Hardware: Why Latency, Power and Data Movement Decide Everything
Courtesy: Ambient Scientific
Most explanations of edge computing hardware talk about devices instead of architecture. They list sensors, gateways, servers and maybe a chipset or two. That’s useful for beginners, but it does nothing for someone trying to understand how edge systems actually work or why certain designs succeed while others bottleneck instantly.
If you want the real story, you have to treat edge hardware as a layered system shaped by constraints: latency, power, operating environment and data movement. Once you look at it through that lens, the category stops feeling abstract and starts behaving like a real engineering discipline.
Let’s break it down properly.
What edge hardware really is when you strip away the buzzwords
Edge computing hardware is the set of physical computing components that execute workloads near the source of data. This includes sensors, microcontrollers, SoCs, accelerators, memory subsystems, communication interfaces and local storage. It is fundamentally different from cloud hardware because it is built around constraints rather than abundance.
Edge hardware is designed to do three things well:
- Ingest data from sensors with minimal delay
- Process that data locally to make fast decisions
- Operate within tight limits for power, bandwidth, thermal capacity and physical space
If those constraints do not matter, you are not doing edge computing. You are doing distributed cloud.
This is the part most explanations skip. They treat hardware as a list of devices rather than a system shaped by physics and environment.
The layers that actually exist inside edge machines
The edge stack has four practical layers. Ignore any description that does not acknowledge these.
- Sensor layer: Where raw signals are produced. This layer cares about sampling rate, noise, precision, analogue front ends and environmental conditions.
- Local compute layer: Usually MCUs, DSP blocks, NPUs, embedded SoCs or low-power accelerators. This is where signal processing, feature extraction and machine learning inference happen.
- Edge aggregation layer: Gateways or industrial nodes that handle larger workloads, integrate multiple endpoints or coordinate local networks.
- Backhaul layer: Not cloud. Just whatever communication fabric moves selective data upward when needed.
These layers exist because edge workloads follow a predictable flow: sense, process, decide, transmit. The architecture of the hardware reflects that flow, not the other way around.
Why latency is the first thing that breaks and the hardest thing to fix
Cloud hardware optimises for throughput. Edge hardware optimises for reaction time.
Latency in an edge system comes from:
- Sensor sampling delays
- Front-end processing
- Memory fetches
- Compute execution
- Writeback steps
- Communication overhead
- Any DRAM round-trip
- Any operating system scheduling jitter
If you want low latency, you design hardware that avoids round-trip to slow memory, minimises driver overhead, keeps compute close to the sensor path and treats the model as a streaming operator rather than a batch job.
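To make the arithmetic concrete, here is a hypothetical per-inference latency budget in Python. All figures are illustrative assumptions, not measurements from any particular device; the point is how a handful of DRAM round-trips can dominate everything else on the list.

```python
# Hypothetical latency contributions for one inference on a streaming
# edge node, in microseconds. These values are assumptions for
# illustration only.
budget_us = {
    "sensor_sampling": 50.0,
    "front_end_processing": 20.0,
    "sram_fetches": 5.0,
    "compute": 40.0,
    "writeback": 5.0,
}
dram_round_trip_us = 30.0  # assumed cost of one trip to external DRAM

def total_latency_us(dram_trips: int) -> float:
    """Total latency for one inference given a number of DRAM round-trips."""
    return sum(budget_us.values()) + dram_trips * dram_round_trip_us

print(total_latency_us(0))   # model resident entirely in SRAM
print(total_latency_us(20))  # model spilling to DRAM layer by layer
```

With these assumed numbers, twenty DRAM round-trips multiply the total latency sixfold, which is why the design rules above all amount to keeping data off the slow-memory path.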
This is why general-purpose CPUs almost always fail at the edge. Their strengths do not map to the constraints that matter.
Power budgets at the edge are not suggestions; they are physics
Cloud hardware runs at hundreds of watts. Edge hardware often gets a few milliwatts, sometimes even microwatts.
Power is consumed by:
- Sensor activation
- Memory access
- Data movement
- Compute operations
- Radio transmissions
Here is a simple table with the numbers that actually matter.
| Operation | Approx Energy Cost |
| --- | --- |
| One 32-bit memory access from DRAM | High tens to hundreds of pJ |
| One 32-bit memory access from SRAM | Low single-digit pJ |
| One analogue in-memory MAC | Under 1 pJ effective |
| One radio transmission | Orders of magnitude higher than compute |
These numbers already explain why hardware design for the edge is more about architecture than brute force performance. If most of your power budget disappears into memory fetches, no accelerator can save you.
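Plugging representative values into a quick calculation shows the same thing numerically. The per-operation costs below are assumptions chosen within the ranges quoted in the table; real figures vary with process node and memory configuration.

```python
# Back-of-envelope energy comparison using assumed per-operation costs
# in picojoules, within the ranges quoted in the table above.
ENERGY_PJ = {
    "dram_access_32b": 100.0,  # high tens to hundreds of pJ
    "sram_access_32b": 2.0,    # low single-digit pJ
    "analog_mac": 0.5,         # under 1 pJ effective
}

def inference_energy_uj(macs: int, fetches: int, from_dram: bool) -> float:
    """Energy in microjoules for one inference pass."""
    fetch_cost = ENERGY_PJ["dram_access_32b" if from_dram else "sram_access_32b"]
    total_pj = macs * ENERGY_PJ["analog_mac"] + fetches * fetch_cost
    return total_pj * 1e-6  # pJ -> uJ

# Same hypothetical 1M-MAC model with 2M operand fetches per inference:
print(inference_energy_uj(1_000_000, 2_000_000, from_dram=True))   # DRAM-resident
print(inference_energy_uj(1_000_000, 2_000_000, from_dram=False))  # SRAM-resident
```

Under these assumptions, keeping operands in SRAM cuts per-inference energy by over 40x, and the MAC energy itself is almost irrelevant next to the fetches.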
Data movement: the quiet bottleneck that ruins most designs
Everyone talks about computing. Almost no one talks about the cost of moving data through a system.
In an edge device, the actual compute is cheap. Moving data to the compute is expensive.
Data movement kills performance in three ways:
- It introduces latency
- It drains power
- It reduces compute utilisation
Many AI accelerators underperform at the edge because they rely heavily on DRAM. Every trip to external memory cancels out the efficiency gains of parallel compute units. When edge deployments fail, this is usually the root cause.
This is why edge hardware architecture must prioritise:
- Locality of reference
- Memory hierarchy tuning
- Low-latency paths
- SRAM-centric design
- Streaming operation
- Compute in memory or near memory
You cannot hide a bad memory architecture under a large TOPS number.
Architectural illustration: why locality changes everything
To make this less abstract, it helps to look at a concrete architectural pattern that is already being applied in real edge-focused silicon. This is not a universal blueprint for edge hardware, and it is not meant to suggest a single “right” way to build edge systems. Rather, it illustrates how some architectures, including those developed by companies like Ambient Scientific, reorganise computation around locality by keeping operands and weights close to where processing happens. The common goal across these designs is to reduce repeated memory transfers, which directly improves latency, power efficiency, and determinism under edge constraints.
Figure: Example of a memory-centric compute architecture, similar to approaches used in modern edge-focused AI processors, where operands and weights are kept local to reduce data movement and meet tight latency and power constraints.
How real edge pipelines behave, instead of how diagrams pretend they behave
Edge hardware architecture exists to serve the data pipeline, not the other way around. Most workloads at the edge look like this:
- The sensor produces raw data
- Front end converts signals (ADC, filters, transforms)
- Feature extraction or lightweight DSP
- Neural inference or rule-based decision
- Local output or higher-level aggregation
If your hardware does not align with this flow, you will fight the system forever. Cloud hardware is optimised for batch inputs. Edge hardware is optimised for streaming signals. Those are different worlds.
This is why classification, detection and anomaly models behave differently on edge systems compared to cloud accelerators.
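As a minimal sketch, the streaming flow above can be modeled as a chain of Python generators, one stage per step. The sample values, filter coefficient, and decision threshold are all stand-ins rather than anything from a real sensor or model.

```python
# Sketch of the sense -> front end -> decide pipeline as streaming
# generators. Each stage consumes samples one at a time, mirroring how
# edge hardware treats the model as a streaming operator, not a batch job.
from typing import Iterator

def sensor() -> Iterator[float]:
    """Stand-in for an ADC producing raw samples."""
    yield from [0.1, 0.9, 0.2, 1.5, 0.3]

def front_end(samples: Iterator[float]) -> Iterator[float]:
    """Exponential smoothing: a first-order low-pass stand-in filter."""
    prev = 0.0
    for s in samples:
        prev = 0.5 * prev + 0.5 * s
        yield prev

def decide(features: Iterator[float], threshold: float = 0.6) -> Iterator[str]:
    """Rule-based decision stage, standing in for inference."""
    for f in features:
        yield "alert" if f > threshold else "ok"

decisions = list(decide(front_end(sensor())))
print(decisions)
```

Because every stage is a generator, no batch buffer ever forms: each sample flows through sense, process, and decide before the next arrives, which is the alignment the paragraph above describes.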
The trade-offs nobody escapes, no matter how good the hardware looks on paper
Every edge system must balance four things:
- Compute throughput
- Memory bandwidth and locality
- I/O latency
- Power envelope
There is no perfect hardware. Only hardware that is tuned to the workload.
Examples:
- A vibration monitoring node needs sustained streaming performance and sub-millisecond reaction windows
- A smart camera needs ISP pipelines, dedicated vision blocks and sustained processing under thermal pressure
- A bio-signal monitor needs always-on operation with strict microamp budgets
- A smart city air node needs moderate computing but high reliability in unpredictable conditions
None of these requirements match the hardware philosophy of cloud chips.
Where modern edge architectures are headed, whether vendors like it or not
Modern edge workloads increasingly depend on local intelligence rather than cloud inference. That shifts the architecture of edge hardware toward designs that bring compute closer to the sensor and reduce memory movement.
Compute-in-memory approaches, mixed-signal compute blocks, and tightly integrated SoCs are emerging because they solve edge constraints more effectively than scaled-down cloud accelerators.
You don’t have to name products to make the point. The architecture speaks for itself.
How to evaluate edge hardware like an engineer, not like a brochure reader
Forget the marketing lines. Focus on these questions:
- How many memory copies does a single inference require?
- Does the model fit entirely in local memory?
- What is the worst-case latency under continuous load?
- How deterministic is the timing under real sensor input?
- How often does the device need to activate the radio?
- How much of the power budget goes to moving data?
- Can the hardware operate at environmental extremes?
- Does the hardware pipeline align with the sensor topology?
These questions filter out 90 per cent of devices that call themselves edge capable.
The bottom line: if you don’t understand latency, power and data movement, you don’t understand edge hardware
Edge computing hardware is built under pressure. It does not have the luxury of unlimited power, infinite memory or cool air. It has to deliver real-time computation in the physical world where timing, reliability and efficiency matter more than large compute numbers.
If you understand latency, power and data movement, you understand edge hardware. Everything else is an implementation detail.
The post The Architecture of Edge Computing Hardware: Why Latency, Power and Data Movement Decide Everything appeared first on ELE Times.
Edge AI in a DRAM shortage: Doing more with less

Memory is having a difficult year. As manufacturers prioritize DDR5 and high-bandwidth memory (HBM) for data centers and large-scale AI workloads, availability has tightened and costs have risen sharply, up to 3–4x compared to Q3 2025 levels, and market signals suggest the peak has not yet been reached.
Even hyperscalers—typically at the frontline—are reportedly receiving only about 70% of their allocated volumes, and analysts expect tight conditions to persist well into 2026 and possibly even 2027.
The strain isn’t evenly distributed, with the steepest price hikes and longest lead times concentrated in higher-capacity modules. Those components sit directly in the path of cloud infrastructure demand, and their pricing reflects it. On the other hand, lower-capacity modules (1-2 GB) have remained accessible and far more stable.
This trend is now influencing how teams think about system design. AI workloads built around large memory footprints now run into procurement challenges; systems engineered to operate within modest memory baselines avoid both the price spikes and the uncertainty. The outcome is important: in a shortage, architecture built for efficiency gives teams more strategic freedom compared to architectures built for abundance.
The most effective solution: DRAM-less AI accelerator
In a constrained memory market, the most robust solution is also the simplest: remove the dependency on external DRAM entirely. Take the case of Hailo-8 and Hailo-8L AI accelerators. By keeping the full inference pipeline on-chip, Hailo-8/8L eliminate the most expensive and supply-constrained component in the system.
In practical terms, avoiding DRAM can reduce the bill of materials by up to $100 per device, while also improving power efficiency, latency, and system reliability. Not every AI application can avoid DRAM, though.
Generative AI workloads inherently require more memory, and systems that run them will continue to rely on external DRAM. But even in this case, memory constraints strongly favor moving inference closer to the edge.
Running generative AI on the edge allows teams to work with smaller, domain-specific models rather than large, general-purpose ones designed for the cloud. Smaller models translate directly into smaller DRAM requirements, reducing cost, easing procurement, and improving power efficiency. This is where edge-focused accelerators come into play, enabling efficient generative AI inference while keeping memory footprints as lean as possible.
Privacy and latency have long shaped the case for running intelligence on the device. In 2025, another factor cemented it: the expectation that generative AI simply be there. Users now rely on transcription, summarization, audio cleanup, translation, and basic reasoning, often with no tolerance for startup delays or network dependency.
Recent cloud outages from AWS, Azure and Cloudflare underscored how fragile cloud-only assumptions can be. When the networks faced disruptions, everyday features across consumer apps and enterprise workflows failed. Even brief interruptions highlighted how a single infrastructure dependency can take down tools that users now rely on dozens of times a day.
As AI moves deeper into everyday workflows and users expect agentic AI capabilities to be available instantly, a hybrid approach proves far more resilient. Keep frequently used intelligence local, either on the device or in a nearby gateway, while using the cloud for heavier or less frequent tasks. And crucially, when models are small enough to operate within 1-2 GB of memory, that hybrid approach becomes far easier to implement using memory configurations that are still readily sourced.
Small models change the equation
Until recently, generative AI required the memory and compute scale of the cloud. A new class of small language models (SLMs) and compact vision language models (VLMs) now deliver strong instruction following, reliable tool use, and competitive benchmark performance at a fraction of the parameters.
Releases like IBM’s Granite 4.0 Nano line demonstrate how far efficient architectures have come. These models show that some generative AI tasks and applications no longer need massive, expensive system memory—they need well-defined domains, optimized inference paths, and efficient pre- and post-processing.
For hardware teams, this evolution has many practical benefits. Smaller models reduce the “memory tax” that has been baked into AI design for years. When an entire intelligence pipeline can operate in 1-2 GB of DRAM, several constraints loosen simultaneously:
- Costs fall as systems avoid the inflated pricing of high-capacity DRAM.
- Supply-chain risk drops as lower-capacity memory chips remain easier to procure.
- Power consumption improves because smaller models with hardware-assisted offload (NPU or AI accelerator) run cooler and more efficiently.
- System reliability increases as local inference keeps essential features online even during network outages.
An AI architecture designed for efficiency rather than abundance fits squarely within the ethos of edge computing. Many high-value agentic AI tasks—summarizing a conversation, describing an image, or translating speech—do not require massive models. In narrow domains, compact models can deliver faster, more private and consistent results because they operate with fewer unknowns.
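A back-of-envelope sizing in Python illustrates why compact models fit the 1-2 GB modules that remain easy to source. The parameter count, quantization widths, and the 30% runtime overhead factor below are illustrative assumptions, not figures for any specific model.

```python
# Rough DRAM footprint estimate for a language model: weights plus an
# assumed ~30% overhead for activations and KV cache. All inputs here
# are hypothetical values for illustration.
def model_footprint_gb(params_millions: float, bytes_per_weight: float,
                       overhead: float = 1.3) -> float:
    """Approximate DRAM footprint in gigabytes."""
    weight_bytes = params_millions * 1e6 * bytes_per_weight
    return weight_bytes * overhead / 1e9

# A ~1B-parameter model quantized to 4-bit weights (0.5 bytes each)
print(round(model_footprint_gb(1000, 0.5), 2))  # comfortably under 1 GB
# The same model with fp16 weights (2 bytes each)
print(round(model_footprint_gb(1000, 2.0), 2))  # already ~2.6 GB
```

Under these assumptions, quantizing a 1B-parameter model to 4 bits keeps the whole pipeline inside a 1 GB module, while the fp16 version already outgrows the readily sourced 1-2 GB class.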
The path forward
If the DRAM shortage proves anything, it’s that the most resilient AI systems are the ones designed around constraints, not excess. Teams are re-evaluating assumptions about model size, memory baselines, and what “good enough” looks like for common tasks. They’re recognizing that domain-specific intelligence often performs better than brute-force scale—especially in environments that demand consistency, privacy, and low power draw.
Edge AI fits naturally within this moment. Its memory profile lines up with the DRAM capacities that remain accessible, and its deployment model brings stability to the tasks users rely on most. As supply tightness continues, organizations that invest in leaner model design and hybrid deployment strategies will be better positioned to deliver stable, responsive AI without absorbing high memory costs.
Avi Baum is chief technology officer (CTO) and co-founder of Hailo.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
- AI’s insatiable appetite for memory
- The AI-tuned DRAM solutions for edge AI workloads
- Designing edge AI for industrial applications
- Round pegs, square holes: Why GPGPUs are an architectural mismatch for modern LLMs
- Bridging the gap: Being an AI developer in a firmware world
- Why power delivery is becoming the limiting factor for AI
- Silicon coupled with open development platforms drives context-aware edge AI
- Designing energy-efficient AI chips: Why power must be an early design consideration
The post Edge AI in a DRAM shortage: Doing more with less appeared first on EDN.
Govt Bets Big on Chips: India Semiconductor Mission 2.0 Gets ₹1,000 Crore Funding
In a significant push for the nation’s tech ambitions, the Government of India has earmarked Rs. 1,000 crores for the India Semiconductor Mission (ISM) 2.0 in the Union Budget 2026-27.
The new funding aims to supercharge domestic production, with investments slated for semiconductor manufacturing equipment, local IP development, and supply chain fortification both within India and on the international stage.
This upgraded version of the ISM will focus on industry-driven research and the refinement of training centres to enhance technology advancement, thereby fostering a skilled workforce for the future growth of the industry.
With India aiming for self-reliance through boosting domestic manufacturing in multiple sectors, the need for semiconductor manufacturing has exponentially increased.
Recently, Qualcomm taped out its most advanced 2nm chips, an effort led by Indian engineering teams. This is a major boost to Indian semiconductor aspirations.
The first phase of the ISM was supported by a Rs. 76,000 crores incentive scheme, with ten projects worth Rs. 1.60 lakh crores approved by December 2025, covering the entire manufacturing spectrum, from fabrication units and packaging to assembly and testing infrastructure.
By: Shreya Bansal, Sub-editor
The post Govt Bets Big on Chips: India Semiconductor Mission 2.0 Gets ₹1,000 Crore Funding appeared first on ELE Times.
UK–Bulgaria collaboration developing Green Silicon Carbide wafer factory
7 Segment Display Decoder
Here’s a decoder I made in my class! It takes the binary inputs from the four switches and uses a seven-segment display to turn them into decimal numbers. Made with a 7447 IC. I know it’s very disorganized and I could certainly get better at saving space. I’m still new to building circuits, but I still think it’s really cool!
Microchip and Hyundai Collaborate, Exploring 10BASE-T1S SPE for Future Automotive Connectivity
The post Microchip and Hyundai Collaborate, Exploring 10BASE-T1S SPE for Future Automotive Connectivity appeared first on ELE Times.
Wolfspeed accelerates AI-powered manufacturing and operations with Snowflake
My first proper inverter bridge with CM200 IGBT bricks
Thinking of using it for either an induction heater or a dual-resonant solid-state Tesla coil, but first I’ll have to deal with the annoying gate-drive stuff.
Kyiv Polytechnic strengthens its partnership with École Centrale de Lyon
A delegation from Igor Sikorsky Kyiv Polytechnic Institute, led by rector Anatolii Melnychenko, visited École Centrale de Lyon (ÉCL), one of the leading engineering schools in 🇫🇷 France and a partner of the university since 2006.
Self-oscillating sawtooth generator spans 5 decades of frequencies

There are many ways of generating analog sawtooth waveforms with oscillating circuits. Here’s a method that employs a single supply voltage rail to produce a buffered signal whose frequency can be varied over a range from 10Hz to 1MHz (Figure 1).

Figure 1 The sawtooth output waveform is the signal “saw” available at the output of op amp U1a. Its frequency is set by the value of resistor R6, which can vary from 120 Ω to 12 MΩ.
Wow the engineering world with your unique design: Design Ideas Submission Guide
U3, powered through R5, uses Q2 and R6 to form a constant current source. U3 enforces a constant voltage Vref of 1.2 V between its V+ and FB pins. Q2 is a high-beta NPN transistor that passes virtually all of R6’s current, Vref/R6, through its collector to charge C3 with a constant current, producing the linear ramp portion of this ground-referenced sawtooth.
Op amp U1a buffers this signal and applies it to one input of comparator U2a. The voltage at the comparator’s other input causes its output to transition low when the sawtooth rises to 1 V. U2a, R1, Q1, R8, C1, and U2b then produce a 100-ns one-shot pulse at the output of U2b, which drives the gate of M1 high to rapidly discharge C3 to ground.
The frequency of the waveform is 1.2 / (R6 × C3) Hz. With U3’s Vref available in tolerances as low as 0.2% and R6 in 0.1%, the circuit’s overall accuracy is generally limited by C3 (1% at best) combined with the parasitic capacitances of M1.
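As a quick sanity check of that relation, each decade of R6 pairs with a decade of frequency. Back-solving the 120 Ω / 1 MHz point implies C3 ≈ 10 nF (an inference; the text does not state C3's value):

```python
# f = 1.2 / (R6 * C3). The C3 value of 10 nF is inferred from the
# article's 120-ohm -> 1 MHz pairing, not stated explicitly.
C3 = 10e-9  # farads (inferred)

def sawtooth_freq(r6_ohms, c3_farads=C3):
    """Sawtooth frequency in Hz for a given R6."""
    return 1.2 / (r6_ohms * c3_farads)

# Sweep the six R6 values used in Figures 2 through 7.
for r6 in (12e6, 1.2e6, 120e3, 12e3, 1.2e3, 120):
    print(f"R6 = {r6:>10.0f} ohm -> {sawtooth_freq(r6):>12.1f} Hz")
```

With these values the sweep reproduces the 10 Hz-to-1 MHz span of the figures.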
Waveforms at several different frequencies are seen in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6, and Figure 7.
Figure 2 10 Hz sawtooth for an R6 of 12 MΩ.

Figure 3 100 Hz sawtooth for an R6 of 1.2 MΩ.

Figure 4 1 kHz sawtooth for an R6 of 120 kΩ.

Figure 5 10 kHz sawtooth for an R6 of 12 kΩ.

Figure 6 100 kHz sawtooth for an R6 of 1.2 kΩ.

Figure 7 1 MHz sawtooth for an R6 of 120 Ω.
Figures 3 and 4 show near-ideal sawtooth waveforms. But Figure 2, with its 12-MΩ R6, shows that even when “off,” M1 has a non-infinite drain-source resistance, which contributes to the non-linearity of the ramp. It’s also worth noting that although U3’s FB pin typically draws less than 100 nA, that is comparable to the 100 nA (1.2 V / 12 MΩ) that R6 is intended to source, so waveform frequency accuracy for this value of resistor is problematic.
Figures 5, 6, and 7 show progressively increasing effects of the 100-ns discharge time of C3 and of the finite recovery time of the op amp when its output saturates near the ground rail.
These circuits do not require any matched-value components. Accuracies are improved by the use of precision versions of R4, R6, R7, and U3, but the circuit’s operation does not necessitate these.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- Simple sawtooth generator operates at high frequency
- Adjustable triangle/sawtooth wave generator using 555 timer
- DAC (PWM) Controlled Triangle/Sawtooth Generator
- Another PWM controls a switching voltage regulator
The post Self-oscillating sawtooth generator spans 5 decades of frequencies appeared first on EDN.
Full circle current loops: 4mA-20mA to 0mA-20mA

A topic that has recently drawn a lot of interest (!) and no fewer than four separate design articles (!!) here in Design Ideas is the conversion of 0-to-20mA current sources into industrial-standard 4mA-to-20mA current loop signals. Here’s the list—so far—in reverse chronological order. Apologies if (as is quite possible) I’ve missed one—or N.
- Another silly simple precision 0/20mA to 4/20mA converter
- Silly simple precision 0/20mA to 4/20mA converter
- Combine two TL431 regulators to make versatile current mirror
- A 0-20mA source current to 4-20mA loop current converter
With so much energy already devoted to that one side of this well-tossed coin, it seemed only fair to pay a little attention to the flip side of the conversion function coin. Figure 1 shows the result. Its (fairly) simple circuit performs a precision conversion from 4-20mA to 0-20mA. Here’s how it works.
Figure 1 The flip side of the current conversion coin: Iout = (IinR1 – 1.24v)/R2 = 1.25(Iin – 4mA).
Wow the engineering world with your unique design: Design Ideas Submission Guide
The core of the circuit is the voltage Vin = IinR1 = 1.24 V to 6.20 V developed by the 4-20mA input working into R1 and sensed by the Vref input of Z1. The principle in play is discussed in Figure 1 of “Precision programmable current sink.”
The resulting Z1 cathode current is (IinR1 – Vref)/R2 = 0 to 20 mA as Iin increases from 4 mA to 20 mA. Or it would be, if not for the phenomenon of Vref modulation by Z1 cathode voltage. The D1, Q2 cascode pair greatly attenuates this effect by holding Z1’s cathode voltage near zero and constant. It also extends Z1’s cathode voltage limit from an inadequate 7 V to the 30 V capability of Q2. Of course, a different choice for Q2 could extend it further. But if 30 V will do, the >1000 typical beta of the 5089 is good for accuracy.
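That transfer function can be checked numerically. The resistor values below are not given in the text; they are inferred by equating the caption's two forms, R1/R2 = 1.25 and Vref/R2 = 5 mA, which yields R1 = 310 Ω and R2 = 248 Ω:

```python
VREF = 1.24              # TLV431 reference voltage, volts
R1, R2 = 310.0, 248.0    # inferred from the Figure 1 caption, not stated

def i_out(i_in_amps):
    """Iout = (Iin*R1 - Vref)/R2, floored at zero below the 4 mA threshold."""
    return max(0.0, (i_in_amps * R1 - VREF) / R2)
```

With these values the endpoints land where the caption says they should: a 4 mA input gives 0 mA out, and 20 mA in gives 20 mA out.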
Current booster Q1 extends Z1’s 15 mA max current limit while also reducing thermal effects. The net result holds Z1’s maximum power dissipation to single-digit milliwatts.
With 0.1% precision R1 and R2 and the ±0.5% tolerance TLV431B, better than 1% accuracy can be achieved with the untrimmed Figure 1 circuit. If this level of precision is still inadequate, manual post-assembly trim can be added with just two extra parts, as shown in Figure 2. Calibration is achieved with one pass.
- Set input current to 4.00 mA
- Adjust R4 for output current of ~50 µA. Note this is only 0.25% of full-scale, so don’t worry about hitting it exactly. You probably won’t.
- Set input current to 20 mA
- Adjust R5 for an output current of 20 mA

Figure 2 R4 and R5 trims allow post-assembly precision optimization.
Input max overhead voltage is 8 V, output overhead is 9 V. Worst case (resistor limited) fault current with 24 V supply = 80 mA.
Readers may notice a capacitor labeled “Ca” in Figures 1 and 2. This is the “Ashu capacitance” that Design Idea (DI) contributor and current source circuitry expert Ashutosh Sapre discovered to be essential for frequency stability of the cascode topology. Thanks, Ashu!
And a closing note. Since the output scale factor is set by and inversely proportional to R2, if any full-scale other than 20 mA is desired, it’s easily achieved by an appropriate choice for R2.
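That rescaling can be sketched numerically, again using an R1 of 310 Ω inferred from the Figure 1 caption (not stated in the text):

```python
VREF, R1 = 1.24, 310.0   # R1 inferred from the Figure 1 caption

def r2_for_full_scale(i_fs_amps):
    """Pick R2 so that a 20 mA input maps to the desired full-scale output."""
    return (0.020 * R1 - VREF) / i_fs_amps
```

For instance, a 0-10 mA output span calls for R2 = 496 Ω, twice the 248 Ω implied for the 0-20 mA case, consistent with the inverse proportionality.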
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Another silly simple precision 0/20mA to 4/20mA converter
- Silly simple precision 0/20mA to 4/20mA converter
- Combine two TL431 regulators to make versatile current mirror
- A 0-20mA source current to 4-20mA loop current converter
The post Full circle current loops: 4mA-20mA to 0mA-20mA appeared first on EDN.
HexaTech launches 3”-diameter aluminium nitride substrate
Are non-magnetic connectors in your future?

Many years ago, I overheard an engineer, with whom I had some project contact, make a casual remark about an RF connector situation, asking “what’s the big deal, it’s just a connector?” That statement was enough to make me wonder about his overall professional judgment.
Connectors may look simple but they are not, of course, as they must combine electrical requirements with mechanical issues and incorporate suitable materials for both body and contact. The materials and platings of their contacts are especially intricate as they blend metallurgical chemistry with other factors such as manufacturability, flexibility, resilience, and resistance objectives.
In recent years, there’s been an added demand on connectors: the need to be non-magnetic. Technically, this means the connector’s materials exhibit extremely low magnetic susceptibility, so that they neither generate magnetic fields nor interact with external ones in any significant way.
Note that the term “magnetic connector” is also used for a connector/cable that relies on magnetic force to both make and maintain a connection. In this arrangement, the plug and the socket have corresponding magnets or magnetic faces to make a self-aligning connection. They are designed for quick, easy, and often “break-away” disconnection to protect ports from wear and damage. But those are not the magnetic/non-magnetic connectors discussed here.
Is it easy to visually distinguish a magnetic connector from a non-magnetic one? Maybe, maybe not. Some non-magnetic connectors have a different surface sheen compared to conventional connectors, while others have a different color (Figure 1). Of course, some magnetic ones also vary in color depending on the finish, so color is not a certainty. Fortunately, magnetism is easy enough to test for.


Figure 1 These two RF connectors are non-magnetic; other than their color, they look like magnetic connectors. So, color alone is not a definitive indicator. Source: Rosenberger Group
Even minute amounts of magnetic “interference” can have significant consequences in high-frequency or magnetically sensitive systems. Therefore, the objective of non-magnetic component design is to make these parts “magnetically invisible,” so that they don’t distort the surrounding field or interfere with nearby sensors or measurement instruments.
This is especially crucial in environments where magnetic fields play an active role, such as MRI systems, particle accelerators, and quantum computers:
- In MRI systems, magnetic components can distort the magnetic field lines, leading to degraded system performance, measurement inaccuracies, and artifacts in imaging results. In contrast, non-magnetic components minimize these disturbances by maintaining field uniformity.
- In precision RF and microwave metrology, magnetic components can bias sensor readings or create unpredictable phase errors. For example, a magnetic connector near a current probe could influence the magnetic coupling, altering the measured waveform.
- In systems such as scanning electron microscopes, where magnetic fields steer and focus the electron beam, and supercolliders, where superconducting magnets keep particles centered as they are accelerated, the magnetic field must be precisely shaped and controlled.
- In the “hot” field of quantum computing, the qubits—the quantum bits that carry computational information—are extremely sensitive to external magnetic fields. Even minor magnetic impurities in nearby materials can cause decoherence, leading to computational errors or reduced qubit lifetime.
Non-magnetic connectors provide low-loss signal transmission and maintain stable performance across temperature cycles—without contributing to unwanted magnetic noise. In these cryogenic systems, even small amounts of magnetic interaction could invalidate experimental results.
A non-magnetic connector will typically have a low magnetic susceptibility of less than 10⁻⁵ (think back to Electromagnetics 101: susceptibility is a dimensionless ratio) and a magnetic field strength of less than 0.1 milligauss. That’s at least one to two orders of magnitude lower than standard connectors.
Making the non-magnetic connector
It may seem that all that’s required to make a non-magnetic connector is to use non-magnetic material such as copper. If only it were that easy, as non-magnetic materials have very different mechanical and electrical attributes, which affect connector performance and consistency.
A connector has three elements: the body, usually made of nylon or an engineered plastic and not a magnetic consideration; the contact or terminal pin, usually phosphor bronze, beryllium copper, or brass; and the surface plating(s), which can be copper, nickel, gold, tin, silver, palladium, or other metal.
The plating is the largest challenge, as it’s critical to long-term performance of the contact surfaces. The magnetic metals that are the concern here are iron, cobalt, and nickel, notes the Samtec video “Exploring Non-Magnetic Interconnects” (Figure 2).

Figure 2 Trouble zone in the periodic table: these three elements are the source of most of the magnetic problems. Solid-state physics analysis explains why this is so. Source: Samtec Inc.
The simple solution would be to avoid using these metals and instead use brass or aluminum for connector bodies with silver or gold plating. However, that’s often undesirable for performance reasons.
There are other options. For example, Samtec uses a nickel-phosphorus electrodeposited coating that works as a barrier layer between the copper-alloy base metal and subsequent outer layers. This barrier is needed to prevent migration of the copper to the surface-layer gold or tin of the connector pins, which would degrade the performance of that layer.
But wait—isn’t nickel one of the troublesome metals? Yes, but that’s where metallurgists bring some technical “magic” to the story. Adding phosphorus to the nickel reduces the ferromagnetism associated with high-purity nickel: the phosphorus disrupts the alignment of the nickel’s atomic magnetic dipoles, rendering the deposit non-magnetic.
This is not the only option for going non-magnetic. Palladium provides a non-magnetic layer but is a costly alternative to nickel. Associated fasteners can be made of austenitic stainless steel (grades 304 or 316), which is non-magnetic due to its unique crystalline structure.
Other possibilities include eliminating the nickel completely, though this requires thicker copper and gold layers to slow the migration; using a copper/tin/zinc alloy (Cu/Sn/Zn) called Tri-M3 as a barrier layer; or using nickel-tungsten (Ni/W, tradename Xtalics). The goal of the latter is to reduce the grain size to the nanometer scale and so disrupt the alignment of the magnetic domains.
There are several ways to devise and fabricate non-magnetic connectors. All require pure materials, deep physics insight, metallurgical expertise, and precise control of the production process. Assessing the non-magnetic characteristics requires sophisticated instrumentation to measure the magnetic permeability of the materials and finished connectors.
Each vendor has its own approach and a set of trade-offs regarding connector performance. Designers have many connector parameters to consider with respect to performance, solderability, number of mating cycles, supply-chain risk, and more.
The good news is that the increasing need for such connectors means they are no longer available from only one or two specialty suppliers. Nearly every manufacturer of RF connectors also offers non-magnetic versions, so users have many options for their connector needs and bill of materials.
What’s the price difference between magnetic and non-magnetic connectors? A quick, unscientific sampling showed that the non-magnetic ones were two to three times the price of their magnetic counterparts. It may be trite to say that cost is a secondary concern in the applications where they are needed, but it is likely true.
Have you ever used non-magnetic connectors? Was the need for them identified in advance, or was it recognized only after regular connectors were used and the resulting problems were traced back to their magnetism?
Certainly, the next time someone says, “it’s just a connector,” you can offer them firm evidence that’s not the case at all.
Related Content
- Consumer connectors get ruggedized
- Be aware of connector mating-cycle limits
- Giving Connector Contacts Adequate Consideration
- Through-hole connector resolves surface-mount dilemma
- Give Me Back My External Wi-Fi Antenna Connector, Please
The post Are non-magnetic connectors in your future? appeared first on EDN.



