Feed aggregator
Found 2 Raytheon 7489s (mfd 1973) to repair a Pacman board.
Silicon Austria Labs and TU Graz launch joint Power Electronics Research Laboratory
USPTO gives ruling on EPC patent disputed by Innoscience
Breaking Boundaries with Photonic Chips and Optical Computing
As traditional semiconductor-based computing approaches its physical and energy efficiency limits, photonic chips and optical computing have emerged as transformative solutions. By harnessing the speed and parallelism of light, these technologies offer significant advantages over conventional electronics in high-performance computing (HPC), artificial intelligence (AI), and data centers. Optical computing has the potential to revolutionize the way information is processed, enabling faster, more energy-efficient computation with lower latency.
The Fundamentals of Photonic Chips
Photonic chips leverage integrated photonics to manipulate light for computing, communication, and sensing applications. Unlike traditional chips that use electrons as the primary carriers of information, photonic chips use photons, which can travel at the speed of light with minimal energy loss. Key components of photonic chips include:
- Waveguides: Optical channels that guide light through a photonic circuit, analogous to electrical traces in traditional chips.
- Modulators: Convert electrical signals into optical signals by modulating light properties such as intensity or phase.
- Detectors: Convert optical signals back into electrical signals for further processing.
- Resonators and Interferometers: Facilitate advanced signal processing functions such as filtering, multiplexing, and logic operations.
- Photonic Crystals: Control the flow of light by creating periodic dielectric structures, enhancing optical confinement and manipulation.
Optical computing aims to replace or supplement electronic computation with light-based logic operations. This transition offers several key advantages:
- Unparalleled Speed: Photons travel at the speed of light, reducing signal delay and increasing processing throughput.
- Low Energy Consumption: Unlike electrical circuits that suffer from resistive heating, photonic systems dissipate minimal heat, enhancing energy efficiency.
- Massive Parallelism: Optical systems can process multiple data streams simultaneously, significantly improving computational throughput.
- Reduced Signal Crosstalk: Optical signals do not experience the same interference as electrical signals, reducing errors and noise in computation.
Silicon photonics integrates optical components onto a silicon platform, enabling compatibility with existing semiconductor fabrication techniques. Key innovations in silicon photonics include:
- On-chip Optical Interconnects: Replace traditional copper interconnects with optical waveguides to reduce power consumption and signal delay.
- Optical RAM and Memory: Photonic memory elements store and retrieve data using light, enhancing data transfer speeds.
- Electro-Optical Modulators: Convert electronic signals to optical signals efficiently, allowing seamless integration into existing computing architectures.
Optical computing relies on photonic logic gates to perform fundamental computations. These gates operate using:
- Nonlinear Optical Effects: Enable all-optical switching without electronic intermediaries.
- Mach-Zehnder Interferometers (MZI): Implement XOR, AND, and OR logic functions using light phase interference.
- Optical Bistability: Maintains state information in optical latches, paving the way for optical flip-flops and memory elements.
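As a loose illustration of phase-interference logic, the XOR behavior listed above can be sketched numerically with an idealized two-arm interferometer. This is a toy model of interference, not a device simulation; the function names and the phase encoding (bit 0 → 0 rad, bit 1 → π rad) are illustrative assumptions:

```python
import numpy as np

def mzi_difference_port(phi1, phi2):
    """Intensity at the 'difference' port of an idealized two-arm
    interferometer: destructive interference when the arm phases match."""
    # Superpose two unit-amplitude waves carrying phases phi1 and phi2.
    amplitude = (np.exp(1j * phi1) - np.exp(1j * phi2)) / 2
    return abs(amplitude) ** 2

def optical_xor(a, b):
    # Encode bits as phases: 0 -> 0 rad, 1 -> pi rad.
    intensity = mzi_difference_port(a * np.pi, b * np.pi)
    return int(round(intensity))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", optical_xor(a, b))
```

Equal phases cancel at this port (output 0); opposite phases add constructively (output 1), reproducing the XOR truth table purely through interference.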
With the growing demand for AI processing, photonic neural networks offer an alternative to traditional GPUs and TPUs. Optical deep learning accelerators employ:
- Matrix Multiplication with Light: Perform multiply-accumulate operations at light speed using photonic interference.
- Optical Tensor Processing Units (TPUs): Enhance AI inference by leveraging photonic components for ultra-fast computation.
- Wavelength-Division Multiplexing (WDM): Enables parallel processing by encoding multiple data streams onto different wavelengths of light.
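The matrix-multiplication and WDM ideas above can be sketched as linear algebra on complex field amplitudes: a photonic mesh implements one fixed linear transform, and each wavelength carries an independent input vector through it simultaneously. The matrix and vectors here are random placeholders, not a real mesh calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

# A photonic mesh implements a fixed linear transform; model it as a
# complex matrix acting on optical field amplitudes.
mesh = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# WDM: each of the 8 wavelengths carries its own 4-element input vector,
# so a single pass of light performs 8 matrix-vector products at once.
inputs = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
outputs = inputs @ mesh.T  # outputs[k] == mesh @ inputs[k] for every wavelength k

# A photodetector measures the squared magnitude of the output field.
power = np.abs(outputs) ** 2
print(power.shape)
```

The key point is that the mesh is "programmed" once, and throughput then scales with the number of wavelengths rather than with clocked multiply-accumulate steps.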
Quantum computing benefits immensely from photonics due to the inherent properties of quantum light. Advancements in quantum photonic processors include:
- Single-Photon Sources and Detectors: Essential for quantum information processing and cryptographic applications.
- Quantum Key Distribution (QKD): Enables ultra-secure communication leveraging the principles of quantum entanglement.
- Optical Quantum Logic Gates: Facilitate complex quantum computations with minimal decoherence.
Applications of Photonic Chips and Optical Computing
1. Data Centers and High-Performance Computing
Modern data centers face thermal constraints and power limitations due to electronic interconnects. Photonic interconnects dramatically reduce power consumption and increase bandwidth, making them an ideal solution for high-speed data transmission between servers and storage units.
2. Artificial Intelligence and Machine Learning Acceleration
AI workloads rely on extensive matrix operations, which photonic computing executes at orders of magnitude faster speeds than traditional electronics. Companies like Lightmatter and Lightelligence are pioneering photonic AI accelerators to enhance deep learning performance while reducing energy costs.
3. Telecommunications and Optical Networks
Fiber-optic networks already leverage photonics for data transmission, but photonic computing extends these advantages to real-time processing. Photonic switches enable ultra-fast data routing, improving the efficiency of 5G and future 6G networks.
4. Healthcare and Biophotonics
Optical computing is revolutionizing biomedical imaging and diagnostics. Photonic chips enable high-resolution imaging techniques such as optical coherence tomography (OCT) and bio-sensing applications, enhancing early disease detection.
5. Defense and Aerospace
The military and aerospace industries require ultra-fast, secure processing for signal intelligence, radar systems, and cryptographic applications. Optical computing’s speed and resistance to electromagnetic interference make it a critical enabler for next-generation defense systems.
Challenges and Future Roadmap
1. Fabrication Complexity and Scalability
While photonic chips leverage semiconductor manufacturing techniques, integrating large-scale optical circuits remains a challenge. Standardizing fabrication methods and developing CMOS-compatible photonic components are essential for commercial scalability.
2. Hybrid Photonic-Electronic Architectures
Despite the advantages of photonic computing, hybrid architectures that integrate both electronic and optical components are likely to dominate in the near term. Developing efficient electro-optic interfaces remains a key research focus.
3. Software and Algorithm Development
Current software is optimized for electronic computation, requiring a shift in programming paradigms for photonic systems. Developing photonic-aware compilers and simulation tools will accelerate adoption.
4. Energy Efficiency and Power Consumption
While photonic computing reduces heat dissipation, the challenge lies in optimizing light generation and detection components to minimize power consumption further.
Conclusion: The Dawn of the Photonic Computing Era
Photonic chips and optical computing represent a paradigm shift in computation, offering unparalleled speed, efficiency, and scalability. As silicon photonics, quantum optics, and neuromorphic photonic computing continue to advance, the technology is poised to revolutionize AI, data centers, telecommunications, and beyond. Overcoming fabrication, software, and integration challenges will be crucial for realizing the full potential of photonic computing, marking the beginning of a new era in information processing.
The post Breaking Boundaries with Photonic Chips and Optical Computing appeared first on ELE Times.
Design a feedback loop compensator for a flyback converter in four steps

Due to their versatility, ease of design, and low cost, flyback converters have become one of the most widely used topologies in power electronics. The flyback structure derives from one of the three basic topologies, specifically the buck-boost topology. However, unlike buck-boost converters, the flyback topology allows the output voltage to be electrically isolated from the input power supply. This feature is vital for industrial and consumer applications.
Among the different control methods used to stabilize power converters, the most widely used is peak current mode, which continuously senses the primary current to provide important protection for the power supply.
Additionally, to obtain a higher design performance, it’s common to regulate the converter with the output that has the highest load using a technique called cross-regulation.
This article aims to show engineers how to correctly design the control loop that stabilizes the flyback converter in order to provide optimal functionality. This process includes minimizing the steady-state error, increasing or decreasing the bandwidth as required, and maximizing the phase and gain margins.
Closed-loop flyback converter block diagram
Before making the necessary calculations for the controller to stabilize the peak current control mode flyback, it’s important to understand the components of the entire closed-loop system: the converter averaged model and the control loop (Figure 1).
Figure 1 Here is how the components look in the entire closed-loop system. Source: Monolithic Power Systems
The design engineer’s main interest is to study the behavior of the converter under load changes. Considering a fixed input voltage (VIN), the open-loop transfer function can be modeled under small perturbations produced in the duty cycle to study the power supply’s dynamic response.
The summarized open-loop system can be modeled with Equation 1: (1)
Where G is the current-sense gain transformed to voltage, and GC(s) and GCI(s) are the transfer functions of the flyback converter in terms of output voltage and magnetizing current response (respectively) under small perturbations in the duty cycle. GαTS models the ramp compensation that avoids the double-pole oscillation at half of the switching frequency.
Flyback converter control design
There are many decisions and tradeoffs involved in designing the flyback converter’s control loop. The following sections of the article will explain the design process step by step. Figure 2 shows the design flow.
Figure 2 The design flow highlights control loop creation step by step. Source: Monolithic Power Systems
Control loop design process and calculations
Step 1: Design inputs
Once the converter’s main parameters have been designed according to the relevant specifications, it’s time to define the parameters as inputs for the control loop design. These parameters include the input and output voltages (VIN and VOUT, respectively), operation mode, switching frequency (fSW), duty cycle, magnetizing inductance (LM), turns ratio (NP:NS), shunt resistor (RSHUNT), and output capacitance (COUT). Table 1 shows a summary of the design inputs for the circuit discussed in this article.
Table 1 Here is a summary of the design inputs required for creating the control loop. Source: Monolithic Power Systems
To design a flyback converter compensator, it’s necessary to first obtain all the main components that make up the converter. Here, the HF500-40 flyback regulator is used to demonstrate the design of a compensator using optocoupler feedback. This device is a fixed-frequency, current-mode regulator with built-in slope compensation. Because the converter works in continuous conduction mode (CCM) at low line input, a double-pole oscillation at half of the switching frequency is produced; the built-in slope compensation dampens this oscillation, making its effect almost null.
Step 2: Calculate parameters of the open-loop transfer function
It’s vital to calculate the parameters of the open-loop transfer function and calculate the values for all of the compensator’s parameters that can optimize the converter at the dynamic behavior level.
The open-loop transfer function of the peak current control flyback converter (also including the compensation ramp factor) can be estimated with Equation 2:
(2)
Where D’ is the fraction of each switching cycle during which the secondary diode (or synchronous FET) conducts.
The basic canonical model can be defined with Equation 3: (3)
Note that the equivalent series resistance (ESR) effect on the output capacitors has been included in the transfer function, as it’s the most significant parasitic effect.
By using Equation 2 and Equation 3, it’s possible to calculate the vital parameters.
The resonant frequency (fO) can be calculated with Equation 4:
(4)
After inputting the relevant values, fO can be calculated with Equation 5: (5)
The right-half-plane zero (fRHP) can be estimated with Equation 6: (6)
The q-factor (Q) can be calculated with Equation 7: (7)
After inputting the relevant values, Q can be estimated with Equation 8: (8)
The DC gain (K) can be calculated with Equation 9: (9)
After inputting the relevant values, K can be estimated with Equation 10: (10)
The high-frequency zero (fHF) can be calculated with Equation 11:
(11)
It’s important to note that with current mode control, it’s common to obtain values well below 0.5 for Q. With this in mind, the result of the second-degree polynomial in the denominator of the transfer function ends up giving two real and negative poles. This is different from voltage-control mode or when there is a very large compensation ramp, which results in two complex conjugate poles.
The two real and negative poles can be estimated with Equation 12: (12)
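Since Equation 12 is shown only as an image, the pole-splitting idea can be sketched numerically from the canonical denominator 1 + s/(Q·ωO) + s²/ωO² of Equation 3: for Q well below 0.5 the quadratic has two real, negative, well-separated roots near Q·ωO and ωO/Q. The fO and Q values below are illustrative, not the article's Equations 5 and 8:

```python
import numpy as np

f0 = 5.0e3  # Hz, resonant frequency (illustrative)
Q = 0.1     # well below 0.5, typical for peak current mode
w0 = 2 * np.pi * f0

# Roots of the canonical denominator 1 + s/(Q*w0) + s**2/w0**2 = 0
# (numpy.roots takes coefficients from highest degree down).
roots = np.roots([1 / w0**2, 1 / (Q * w0), 1.0])

# For Q << 0.5 the poles are real, negative, and well separated:
# wp1 ~ Q*w0 (low frequency), wp2 ~ w0/Q (high frequency).
wp1, wp2 = sorted(abs(roots))
print(f"fp1 ~ {wp1 / (2*np.pi):.0f} Hz, fp2 ~ {wp2 / (2*np.pi):.0f} Hz")
```

At Q = 0.1 the exact roots land within about 1% of the Q·ωO and ωO/Q approximations, which is why the two-real-pole simplification in the text holds.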
The new open-loop transfer function can be calculated with Equation 13: (13)
The cutoff frequency (fC) can be estimated with Equation 14: (14)
The following sections will explain how the frequency compensator design achieves power supply stability and excellent performance.
Step 3: Frequency compensator design
Once the open-loop transfer function is modeled, it’s necessary to design the frequency compensator such that it achieves the best performance possible. Because the frequency response of the above transfer function has two separate poles—one at a low frequency and one at a high frequency—a simple Type II compensator can be designed. This compensator does not need an additional zero, which is not the case in voltage-control mode because there is a double pole that produces a resonance.
To minimize the steady-state error, it’s necessary to design an inverted-zero (or a pole at the origin) because it produces higher gains at low frequencies. To ensure that the system’s stability is not impacted, the frequency must be at least 10 times lower than the first pole, calculated with Equation 15:
(15)
Due to the ESR parasitic effect at high frequencies, it’s necessary to design a high-frequency pole to compensate for and remove this effect. The pole can be estimated with Equation 16:
(16)
On the other hand, it’s common to modify the cutoff frequency to achieve a higher or lower bandwidth and produce faster or slower dynamic responses, respectively. Once the cutoff frequency is selected (in this case, fC is increased up to 6.5 kHz, or 10% of fSW), the compensator’s middle-frequency gain can be calculated with Equation 17: (17)
Once the compensator has been designed within the frequency range, calculate the values of the passive components.
Step 4: Design the compensator’s passive components
The most common Type II compensator used for stabilization in current control mode flyback converters with cross-regulation is made up of an optocoupler feedback (Figure 3).
Figure 3 The Type II compensator is built around optocoupler feedback. Source: Monolithic Power Systems
The compensator transfer function based on optocoupler feedback can be estimated with Equation 18: (18)
The middle-frequency gain is formed in two stages: the optocoupler gain and the adjustable voltage reference compensator gain, calculated with Equation 19:
(19)
It’s important to calculate the maximum resistance to correctly bias the optocoupler. This resistance can be estimated with Equation 20: (20)
The parameters necessary to calculate RD can be found in the optocoupler and the adjustable voltage reference datasheets. Table 2 shows the typical values for these parameters from the optocoupler.
Table 2 Here are the main optocoupler parameters. Source: Monolithic Power Systems
Table 3 shows the typical values for these parameters from the adjustable voltage reference.
Table 3 The above data shows adjustable voltage reference parameters. Source: Monolithic Power Systems
Once the above parameters have been obtained, RD can be calculated with Equation 21: (21)
Once the value of R3 is obtained (in this case, R3 is internal to the HF500-40 controller, with a minimum value of 12 kΩ), as well as the values for R1, R2, and RD (where RD = 2 kΩ), RF can be estimated with Equation 22: (22)
Where GCOMP is the compensator’s middle frequency gain, calculated with Equation (17). GCOMP is used to adjust the power supply’s bandwidth.
Because the inverted zero and high-frequency pole were already calculated, CF and CFB can be calculated with Equation 23 and Equation 24, respectively. (23)
(24)
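Since Equations 23 and 24 are shown only as images, here is a hedged sketch of the capacitor calculations, assuming the textbook Type II relations fz = 1/(2π·RF·CF) for the inverted zero and fp = 1/(2π·R3·CFB) for the high-frequency pole (the exact network in Figure 3 may differ). All numeric values are illustrative:

```python
import numpy as np

# Illustrative targets and resistor values (in a real design, RF comes
# from Equation 22 and the frequencies from Equations 15 and 16):
f_inverted_zero = 100.0  # Hz
f_hf_pole = 3.2e3        # Hz
R_F = 47e3               # ohm, compensator feedback resistor
R3 = 12e3                # ohm, internal to the controller per the article

# Assumed textbook relations for this optocoupler Type II network:
#   f_zero = 1 / (2*pi*R_F*C_F)   ->  C_F  = 1 / (2*pi*R_F*f_zero)
#   f_pole = 1 / (2*pi*R3*C_FB)   ->  C_FB = 1 / (2*pi*R3*f_pole)
C_F = 1 / (2 * np.pi * R_F * f_inverted_zero)
C_FB = 1 / (2 * np.pi * R3 * f_hf_pole)

print(f"C_F = {C_F*1e9:.0f} nF, C_FB = {C_FB*1e9:.2f} nF")
```

The resulting values would then be rounded to the nearest standard capacitor values, shifting the zero and pole slightly; the stability margins in the next step absorb that tolerance.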
Once the open-loop system and compensator have been designed, the loop gain transfer function can be estimated with Equation 25: (25)
Equation 25 is based on Equation 13 and Equation 18.
It’s important to calculate the phase and gain margins to ensure the stability of the power supply.
The phase margin can be calculated with Equation 26: (26)
After inputting the relevant values, the phase margin can be calculated with Equation 27: (27)
A phase margin exceeding 50° is an important requirement for complying with certain standards.
At the same time, the gain margin can be approximated with Equation 28: (28)
Equation 29 is derived from Equation 25 at the specified frequency: (29)
In this scenario, the gain margin is below -10 dB, which is another important parameter to consider, particularly regarding compliance with regulation specifications. If the result is close to 0 dB, some iteration is necessary to decrease it further; otherwise, the performance is suboptimal. This iteration should start by decreasing the cutoff frequency.
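The margin check in Equations 26 through 29 amounts to evaluating the loop gain over frequency, finding the 0 dB crossover, and reading the phase there. The sketch below applies that procedure to a generic two-pole loop gain standing in for Equation 25 (the gain and pole values are illustrative, not the article's):

```python
import numpy as np

# Illustrative two-pole loop gain T(s) = K / ((1 + s/wp1)(1 + s/wp2)),
# a stand-in for the article's Equation 25.
K = 100.0
wp1 = 2 * np.pi * 100.0   # rad/s, low-frequency pole
wp2 = 2 * np.pi * 20e3    # rad/s, high-frequency pole

f = np.logspace(0, 6, 200_000)  # 1 Hz to 1 MHz sweep
s = 2j * np.pi * f
T = K / ((1 + s / wp1) * (1 + s / wp2))

# Crossover: the frequency where |T| falls to 1 (0 dB).
i_c = np.argmin(np.abs(np.abs(T) - 1.0))
f_c = f[i_c]
phase_margin = 180.0 + np.degrees(np.angle(T[i_c]))
print(f"f_c = {f_c:.0f} Hz, phase margin = {phase_margin:.1f} deg")
```

For this placeholder loop the crossover lands near 9 kHz with roughly 66° of phase margin, comfortably above the 50° target mentioned earlier; the same sweep evaluated where the phase crosses -180° yields the gain margin.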
This complete transfer function provides stability to the power supply and the best performance made possible by:
- Minimizing steady-state error
- Minimizing ESR parasitic effect
- Increasing bandwidth of power supply up to 6.5 kHz
Final design
After calculating all the passive component values for the feedback loop compensator and determining the converter’s main parameters, the entire flyback can be designed using the flyback regulator. Figure 4 shows the circuit’s final design using all calculated parameters.
Figure 4 Here is how the final design circuit schematic looks. Source: Monolithic Power Systems
Figure 5 shows the bode plot of the complete loop gain frequency response.
Figure 5 Bode plot is shown for the complete loop gain frequency response. Source: Monolithic Power Systems
Obtaining the flyback averaged model via small-signal analysis is a complex process, but it provides the most accurate approximation of the converter’s transfer functions. In addition, the cross-regulation technique involves secondary-side regulation through optocoupler feedback and an adjustable voltage reference, which complicates the calculations.
However, by following the four steps explained in this article, a good approximation can be obtained to improve the power supply’s performance, as the output with the heaviest load is directly regulated. This means that the output can react quickly to load changes.
Joan Mampel is an application engineer at Monolithic Power Systems (MPS).
Related Content
- Power Tips: Compensating Isolated Power Supplies
- Details on compensating voltage mode buck regulators
- Power Tips #139: How to simplify AC/DC flyback design with a self-biased converter
- Modeling and Loop Compensation Design of Switching Mode Power Supplies, Part 1
- Modeling and Loop Compensation Design of Switching Mode Power Supplies, Part 2
The post Design a feedback loop compensator for a flyback converter in four steps appeared first on EDN.
Hot-swap controller protects AI servers

The XDP711-001 48-V digital hot-swap controller from Infineon offers programmable SOA current control for high-power AI servers. It provides I/O voltage monitoring with an accuracy of ≤0.4% and system input current monitoring with an accuracy of ≤0.75% across the full ADC range, enhancing fault detection and reporting.
Built on a three-block architecture, the XDP711-001 integrates high-precision telemetry, digital SOA control, and high-current gate drivers capable of driving up to eight N-channel power MOSFETs. It is designed to drive multiple MOSFETs in parallel, supporting the development of power delivery boards for 4-kW, 6-kW, and 8-kW applications.
The controller operates within an input voltage range of 7 V to 80 V and can withstand transients up to 100 V for 500 ms. It provides input power monitoring with reporting accuracy of ≤1.15% and features a high-speed PMBus interface for active monitoring.
Programmable gate shutdown for severe overcurrent protection ensures shutdown within 1 µs. With options for external FET selection, one-time programming, and customizable fault detection, warning programming, and de-glitch timers, the XDP711-001 offers flexibility for various use cases. Additionally, its analog-assisted digital mode maintains backward compatibility with legacy analog hot swap controllers.
The XDP711-001 will be available for order in mid-2025. For more information on the XDP series of protection and monitoring ICs, click here.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Hot-swap controller protects AI servers appeared first on EDN.
Snapdragon G chips drive next-gen handheld gaming

Qualcomm unveiled the Snapdragon G series, a lineup of three chips for advanced handheld, dedicated gaming devices. The G3 Gen 3, G2 Gen 2, and G1 Gen 2 SoCs support various play styles and form factors, enabling gamers to play cloud, console, Android, or PC games.
Snapdragon G3 Gen 3 is the first in the G series to support Lumen, Unreal Engine 5’s dynamic global illumination and reflections technology, for Android handheld gaming. The G3 Gen 3 offers 30% faster CPU performance, 28% faster graphics, and improved power efficiency over the previous generation. Wi-Fi 7 support reduces latency and boosts bandwidth.
Snapdragon G2 Gen 2 is optimized for gaming and cloud gaming at 144 frames/s, delivering 2.3x faster CPU performance and 3.8x faster GPU capabilities compared to G2 Gen 1. It also supports Wi-Fi 7 for faster, more reliable connections.
Snapdragon G1 Gen 2 targets a wider audience, supporting 1080p at 120 frames/s over Wi-Fi. Designed for cloud gaming on handheld Android devices, it boosts CPU performance by 80% and GPU performance by 25% for smooth gameplay.
Starting this quarter, OEMs like AYANEO, ONEXSUGAR, and Retroid Pocket will release devices powered by the Snapdragon G series. For more details on all three platforms, click here.
The post Snapdragon G chips drive next-gen handheld gaming appeared first on EDN.
MCUs support ASIL C/SIL 2 safety

Microchip’s AVR SD entry-level MCUs feature built-in functional safety mechanisms and a dedicated safety software framework. Intended for applications requiring rigorous safety assurance, they meet ASIL C and SIL 2 requirements and are developed under a TÜV Rheinland-certified functional safety management system.
Hardware safety features include a dual-core lockstep CPU, dual ADCs, ECC on all memory, an error controller, error injection, and voltage and clock monitors. These features reduce fault detection time and software complexity. The AVR SD family detects internal faults quickly and deterministically, meeting Fault Detection Time Interval (FDTI) targets as low as 1 ms to enhance reliability and prevent hazards.
Microchip’s safety framework software integrates with MCU hardware features to manage diagnostics, enabling the devices to detect errors and initiate a safe state autonomously. The AVR SD microcontrollers serve as main processors for critical tasks such as thermal runaway detection and sensor monitoring while consuming minimal power. They can also function as coprocessors, mirroring or offloading safety functions in systems with safety integrity levels up to ASIL D/SIL 3.
Prices for the AVR SD microcontrollers start at $0.93 each in lots of 5000 units, with lower pricing for higher volumes.
The post MCUs support ASIL C/SIL 2 safety appeared first on EDN.
Broad GaN FET lineup eases design headaches

Nexperia has expanded its GaN FET portfolio with 12 new E-mode devices, available in both low- and high-voltage options. The additions address the demand for higher efficiency and compact designs across consumer, industrial, server/computing, and telecommunications markets. Nexperia’s portfolio includes both cascode and E-mode GaN FETs, available in a wide variety of packages, providing flexibility for diverse design needs.
The new offerings include 40-V bidirectional devices (RDS(on) <12 mΩ), designed for overvoltage protection, load switching, and low-voltage applications such as battery management systems in mobile devices and laptop computers. These devices provide critical support for applications requiring efficient and reliable switching.
Also featured are 100-V and 150-V devices (RDS(on) <7 mΩ), useful for synchronous rectification in power supplies for consumer devices, DC/DC converters in datacom and telecom equipment, photovoltaic micro-inverters, Class-D audio amplifiers, and motor control systems in e-bikes, forklifts, and light electric vehicles. The release also includes 700-V devices (RDS(on) >140 mΩ) for LED drivers and power factor correction (PFC) applications, along with 650-V devices (RDS(on) >350 mΩ) suitable for AC/DC converters, where slightly higher on-resistance is acceptable for the specific application.
To learn more about Nexperia’s E-mode GaN FETs, click here.
The post Broad GaN FET lineup eases design headaches appeared first on EDN.
NVIDIA switches scale AI with silicon photonics

NVIDIA’s Spectrum-X and Quantum-X silicon photonics-based network switches connect millions of GPUs, scaling AI compute. They achieve up to 1.6 Tbps per port and up to 400 Tbps aggregate bandwidth. NVIDIA reports the switch platforms use 4x fewer lasers for 3.5x better power efficiency, 63x greater signal integrity, 10x higher network resiliency at scale, and 1.3x faster deployment than conventional networks.
Spectrum-X Photonics Ethernet switches support 128 ports of 800 Gbps or 512 ports of 200 Gbps, delivering 100 Tbps of total bandwidth. A high-capacity variant offers 512 ports of 800 Gbps or 2048 ports of 200 Gbps, for a total throughput of 400 Tbps.
Quantum-X Photonics InfiniBand switches provide 144 ports of 800 Gbps, achieved using 200 Gbps SerDes per port. Built-in liquid cooling keeps the onboard silicon photonics from overheating. According to NVIDIA, Quantum-X Photonics switches are 2x faster and offer 5x higher scalability for AI compute fabrics compared to the previous generation.
NVIDIA’s silicon photonics ecosystem includes collaborations with TSMC, Coherent, Corning, Foxconn, Lumentum, and SENKO to develop an integrated silicon-optics process and robust supply chain.
Quantum-X Photonics InfiniBand switches are expected to be available later this year. Spectrum-X Photonics Ethernet switches will be coming in 2026 from leading infrastructure and system vendors. Learn more about NVIDIA’s silicon photonics here.
The post NVIDIA switches scale AI with silicon photonics appeared first on EDN.
Quantum Critical Metals stakes Prophecy Germanium-Gallium-Zinc Project in northern British Columbia