Збирач потоків
Skyworks and Qorvo to merge into $7.7bn-revenue RF, analog & mixed-signal semiconductor firm
5-V ovens (some assembly required)—part 1

The ovens in this two-part Design Idea (DI) can’t even warm that leftover half-slice of pizza, let alone cook dinner, but they can keep critical components at a constant temperature. In the first part, we’ll look at a purely analog approach, saving something PWM-based for the second.
Perhaps you want to build a really wide-range LF oscillator with a logarithmic sweep, using no more than a resistor, an op-amp, and a diode for the log element. That diode needs to be held at a constant temperature for accuracy and stability: it needs ovening (if there is such a verb).
I made such a device some years ago, and was reminded of it when spotting how a bead thermistor fitted rather nicely into the hole in a TO-220’s tab. (Cluttered workbenches can sometimes trigger interesting cross-fertilizations.) Now, can we turn that tab into a useful temperature-stabilized hotplate, suitable for mounting heat-sensitive components on? Ground rules: aim at a rather arbitrary 50°C, make the circuitry as simple as possible, use a 5-V supply, and keep the consumption low.
This is a practical exploration of how to use a transistor, a thermistor, and as little else as possible to get the job done. It lacks the elegance and sophistication of designs that use a transistor as both a sensor and a source of heat, but it is simpler.
Figure 1 shows the schematic of a simple version needing only a 2-wire connection, along with two photos indicating its construction. It was slimmed down from a more complex but less successful initial idea, which we’ll look at later.
Figure 1 A simple oven circuit, heated by both R2 and Q2. The NTC thermistor Th1 provides feedback, the set point being determined by R1. Note how critical components are thermally tied together as they are all built onto the TO-220 package, as shown in the photos. Also note the fine lead wires to reduce heat loss once the assembly is heat-insulated.
Both R2 and Q2 can contribute to heating. On a cold start (literally) Th1’s resistance is high so that the Darlington pair Q1 and Q2 has enough base voltage to saturate it, with (most of) the rail voltage across R2. As the assembly heats up, Th1’s resistance drops, reducing the drive to Q1/2. The rail now appears across both R2 and Q2, with the latter taking over as the main, though now reduced, source of heat. This gives a degree of proportional control, reducing the drive as the set-point is approached. That base drive depends not only on the ratio of R2 to Th1 but also on Q1/2’s effective VBE, which needs to be temperature-stabilized—as indeed it is. Consumption varies from ~90 mA when cold to ~30 mA when stable.
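The balance point can be sketched numerically. Assuming an illustrative 10-k, β = 3950 bead thermistor (the article doesn’t give Th1’s values) and the ~1.18-V combined VBE measured later, the R1/Th1 divider settles where the base sits at 2×VBE; solving for the corresponding temperature lands close to the 50°C target:

```python
import math

# Illustrative NTC parameters (assumed, not from the article): 10 k at 25 C, beta = 3950
R0, T0, BETA = 10_000.0, 298.15, 3950.0
VRAIL, VBE2, R1 = 5.0, 1.18, 12_000.0   # 2x VBE from the article; R1 = 12k

# At equilibrium the R1/Th1 divider holds the Darlington base at ~2*VBE:
#   VRAIL * Th1 / (R1 + Th1) = VBE2  ->  solve for Th1, then invert the beta model
r_th = R1 * VBE2 / (VRAIL - VBE2)                     # thermistor resistance at balance
t_k = 1.0 / (1.0 / T0 + math.log(r_th / R0) / BETA)   # beta-model inversion
t_c = t_k - 273.15
print(f"Th1 at balance: {r_th:.0f} ohm -> set point ~{t_c:.1f} C")
```

With these assumed thermistor constants the sketch lands within a degree of the 50°C target, which is why small VBE shifts (as with random MOSFET thresholds later) move the set point noticeably.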
Setting and measuring the temperature
R1 sets the stabilization temperature, the target being 50°C. Experimentally, 12k worked best, giving a stable hotplate temperature of 49.6°C for an ambient of 19.5°C. Cooling the surroundings to -0.5°C left the hotplate at 48.8°C, so that the hotplate temperature falls by 0.04°C for each degree drop outside. Better thermal insulation would have reduced that.
The measuring probe was a 10k thermistor equipped with fine wires and stuck to the hotplate with thermal paste, the module being wrapped in ~12 mm of foam—and we’ll come back to that. Thermal paste and heat shrink could have been used for the main assembly but dabs of epoxy worked well and kept the hotplate surface flat. Metal-loaded, high-temperature epoxy conducts heat several times better than the plain-vanilla variety while still being an electrical insulator, though that may make little difference given reasonable physical contact.
Other resistors and transistors
R2 is fairly critical. A value higher than 47R heats the assembly more slowly than necessary, while a lower one heats it too fast, causing the temperature to overshoot because the proportional control is limited. Experiments showed that 47R was close to optimal, with minimal overshoot and thus the fastest stabilization time. The hotplate temperature settles to within a degree in around two minutes and is almost spot-on after three minutes.
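A crude simulation shows why more heating power means more overshoot when the sensor lags the plate. Every thermal constant below is a guess for illustration, not a measurement from this build:

```python
# Crude two-pole thermal model: hotplate mass plus a lagging sensor.
T_AMB, T_SET = 20.0, 50.0
C_PLATE = 2.0       # J/K, hotplate heat capacity (assumed)
R_LOSS = 60.0       # K/W, thermal resistance to ambient (assumed)
TAU_SENSOR = 15.0   # s, thermistor lag behind the plate (assumed)
BAND = 3.0          # K, proportional band of the Th1/VBE divider (assumed)

def run(p_max, t_end=300.0, dt=0.05):
    """Euler-integrate the plate temperature; return the peak reached."""
    t_plate = t_sense = T_AMB
    peak, t = T_AMB, 0.0
    while t < t_end:
        # Quasi-proportional drive: full power far below set point, tapering near it
        drive = min(max((T_SET - t_sense) / BAND, 0.0), 1.0)
        t_plate += dt * (p_max * drive - (t_plate - T_AMB) / R_LOSS) / C_PLATE
        t_sense += dt * (t_plate - t_sense) / TAU_SENSOR
        peak = max(peak, t_plate)
        t += dt
    return peak

# ~0.53 W from 5 V into 47R vs ~1.0 W from a lower R2: more power, more overshoot
for p_max in (0.53, 1.0):
    print(f"P_max = {p_max:.2f} W -> peak plate temp {run(p_max):.1f} C")
```

The qualitative result, a higher-power heater overshooting further past its balance point, matches the behavior described for low R2 values.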
Neither Q1 nor Q2 is critical, but the E-line package of a ZTX300 (for example) fits better than a TO-92 would. But why not use an integrated Darlington like the TIP122? Alas, such devices incorporate base–emitter resistors, nominally 10k and 150R, which load Th1 unpredictably. Trying one picked at random showed that R1 needed to be ~7k8 for a set-point of 50°C.
Similarly, this also works with Q1/2 replaced by a MOSFET, with R1’s value now depending on the gate threshold; 3k9 was close for a BUK553. BJTs are far more predictable: build this as drawn, and it should be within a degree, with Q1/2’s VBE settling at ~1.18 V; use a random MOSFET, and it could be anywhere.
Access all areas
The next variant, shown in Figure 2, is electrically similar but provides access to useful circuit nodes to help monitor its performance. It was also easier to experiment with.

Figure 2 While electrically the same as Figure 1, this brings out most circuit nodes to help with experimentation and monitoring, including the LEDs on “pin 3”.
Now we can see what we’re doing! The LEDs give a simple status indication, the green one lighting when it’s close to the set-point rather than fully stable. Figure 3 shows the effect, along with traces for Q1/2’s Vcc—allowing us to read the current in the transistors and R2—and the hotplate temperature. The latter is accurate, but the voltage and current scales are less so because they assume a precise 5-V supply and a 50-Ω load rather than the measured 4.94 V and 47Ω plus stray resistance. This module stabilized at ~50.6°C.

Figure 3 Measurements taken from Figure 2’s circuit for about three minutes after a cold start.
So much for the basic circuit. Now, it needs thermal insulation to keep the heat in, a block of foam being the obvious choice. But foams have widely differing thermal conductivities. Expanded polystyrene or polyethylene will work, but the foamed polyisocyanurate or similar used for wall insulation panels is around twice as good—and offcuts are often freely available from builders’ skips/dumpsters! Figure 4 shows the module from Figure 2 mounted on/in a block of it, with at least 10 mm of foam around any part of the circuit module.
Wikipedia has an illuminating plot of the thermal conductivities of many materials, including our foams and epoxies. The article of which it is a part has a lot of useful background, too.

Figure 4 The module from Figure 2 mounted on a block of foam. The intermediate connecting wires are meandered across its surface to minimize heat loss. Note the diode, typical of a component needing stabilization, stuck to the hotplate, ready for its new connections to be treated similarly.
The fine lead wires—0.15 mm diameter, as used with wiring pencils—are meandered over the surface to lengthen the thermal paths. Copper has a thermal conductivity some 19,000 times greater than the foam: 384 W/m·K vs ~0.02 W/m·K. In very crude terms, for a given thermal path length and temperature gradient, a single, short 0.11-mm-diameter copper wire will leak heat at about the same rate as the entire surface area of our foam block (~6000 mm2). Ironically, perfect insulation would be bad, as the innards could never cool to recover from an overshoot. This build took 620 seconds to cool by 63% of the way to ambient.
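That 63%-in-620-seconds figure is simply the first-order thermal time constant, so the cool-down can be sketched as a plain exponential (the start and ambient temperatures below are illustrative):

```python
import math

# 63% of the way to ambient in one time constant: tau = 620 s (from the build)
TAU = 620.0
T0, T_AMB = 50.0, 20.0   # example start and ambient temperatures (assumed)

def temp(t):
    """First-order cool-down: exponential decay toward ambient."""
    return T_AMB + (T0 - T_AMB) * math.exp(-t / TAU)

print(f"after one tau: {temp(TAU):.1f} C")      # ~63% of the 30 K gap gone
t_1c = TAU * math.log((T0 - T_AMB) / 1.0)       # time to within 1 C of ambient
print(f"within 1 C of ambient after ~{t_1c/60:.0f} min")
```

The slow tail (over half an hour to fully relax, with these assumed temperatures) is exactly the recovery-from-overshoot mechanism the insulation must not eliminate.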
Hot stuff
Disconnecting Th1 in Figure 2’s circuit let the module heat up to the max while still allowing monitoring—or would have done, had I not chickened out when its resistance dropped to 720 Ω, for just over 100°C. (The epoxy was rated to 110°C.) That was with the full insulation; in free air, it struggled to reach 70°C—the rating for other components.
One subtle problem is the inevitable mismatch between the sensing thermistor and the target device, as analyzed in a Stephen Woodward DI, which also implies that the position of the target on the hotplate will affect its actual temperature. We’ll ignore that for the moment, because we’re more interested in constancy than precision, but will return to it in Part 2.
Finishing at the starting point
The foregoing circuits were actually simplifications of my starting point, which is shown in Figure 5. When the temperature is stable at ~50°C, point A is at half-rail. R3 is chosen so that U1’s output will turn Q1/2 on just enough to maintain that. However, while the extra gain improves the temperature regulation, it also causes some overshoot. R3 or R2 must be trimmed to set the temperature: fiddly, and not really designable. R3 was calculated at 4k12 but needed ~5k6 in reality. That’s why I gave up on this approach.

Figure 5 The original circuit that suffered from overshoot. The LEDs give a too-high/too-low temperature indication.
The long-tailed pair of Darlingtons (Q3, Q4) sense the difference between the thermistor voltage—half the rail when stable, as noted—and a half-rail reference, so that the red LED will be on when the temperature is low, the green one lighting while it’s high, with both on at the stable point. Full-red to full-green takes ~300 mV differential, or ~±3°C. This works but gives no better indication than the LEDs in Figure 2. (The low-power Darlingtons used seem to omit those extra, internal resistors. Q1/2 could now be replaced by that TIP122, as it’s driven by a low-impedance source. R4 is purely to protect against current surges.)
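The ~300 mV ≈ ±3°C conversion can be sanity-checked with an assumed 10-k, β = 3950 NTC in a half-rail divider (the actual part values aren’t given, so treat this as a plausibility check only):

```python
import math

# Illustrative NTC (assumed, not from the article): 10 k at 25 C, beta = 3950
R0, T0K, BETA, VRAIL = 10_000.0, 298.15, 3950.0, 5.0

def r_ntc(t_c):
    """Beta-model NTC resistance at t_c degrees C."""
    t_k = t_c + 273.15
    return R0 * math.exp(BETA * (1.0 / t_k - 1.0 / T0K))

R_TOP = r_ntc(50.0)   # top resistor chosen so point A sits at half-rail at 50 C

def v_div(t_c):
    r = r_ntc(t_c)
    return VRAIL * r / (R_TOP + r)

# Numeric sensitivity around the set point
dv_dt = (v_div(50.5) - v_div(49.5)) / 1.0
print(f"divider sensitivity ~{dv_dt*1000:.0f} mV/C")
print(f"+/-3 C swing ~{abs(dv_dt)*6*1000:.0f} mV")   # compare with the ~300 mV quoted
```

With these assumed constants the divider moves a little under 50 mV/°C, so a 300-mV full-red-to-full-green window does indeed correspond to roughly ±3°C.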
Figure 6 plots its performance when starting from cold, showing the overshoot and recovery. Compare this with Figure 3.

Figure 6 The start-up performance of Figure 5’s circuit.
If I were building something similar in any quantity, I wouldn’t do it like this: SMDs and a flexible circuit would be much cleaner. For example, a 2512 power resistor for R2 (or R5 in Figure 5), pressed flat, with some insulation, against the power transistor’s tab would probably be ideal.
In Part 2, we’ll see how even a simple PWM-based circuit can give better proportional control and hence generally better performance. The bad news: we may eventually abandon the TO-220 tab in favor of another way of assembling our hotplate.
Related Content
- Fixing a fundamental flaw of self-sensing transistor thermostats
- Self-heated ∆Vbe transistor thermostat needs no calibration
- Take-back-half thermostat uses ∆Vbe transistor sensor
- Dropping a PRTD into a thermistor slot—impossible?
The post 5-V ovens (some assembly required)—part 1 appeared first on EDN.
NTX Embedded Launches Human Interface Platform With Design Support Program
Centre Clears ₹5,532 Crore Investment for Seven Electronics Manufacturing Projects
In a significant effort to enhance India’s electronics ecosystem, the Union Government has cleared investment of ₹5,532 crore for seven projects under the Electronics Components Manufacturing Scheme (ECMS). The initiative aims to further India’s transition from assembling imported components to manufacturing core electronic materials and parts within the country.
Union Electronics and IT Minister Ashwini Vaishnaw announced that the projects are a “transformational step” towards developing a self-reliant and innovation-led electronics manufacturing ecosystem.
The recently cleared projects, distributed across Tamil Nadu (5 units), Andhra Pradesh (1 unit), and Madhya Pradesh (1 unit), will create over ₹36,000 crore worth of component production and generate over 5,000 direct employment opportunities.
The ECMS will facilitate local manufacturing of key components like Multi-Layer and HDI PCBs, Camera Modules, Copper Clad Laminates (CCL), and Polypropylene Films. These components form the backbone of thousands of diverse products, ranging from smartphones and electric vehicles to medical devices and defence technology.
The cleared projects will satisfy some 20% of India’s domestic demand for PCBs and 15% of its camera module needs, while the production of CCLs will be entirely localized, with 60% of the output being export-focused.
The ECMS initiative has drawn a strong response from industry players, with 249 applications already submitted, signaling robust interest in the program. Combined, they amount to potential investments of ₹1.15 lakh crore, production value of ₹10.34 lakh crore, and 1.42 lakh job opportunities, the largest-ever pledge in India’s electronics industry.
The program is likely to sharply reduce import dependence, improve supply chain resilience, and attract high-skill employment in manufacturing and R&D. The components produced under ECMS will help feed key industries like defence, telecommunication, renewable energy, and electric vehicles.
Vaishnaw highlighted that ECMS synergizes with flagships such as the Production Linked Incentive (PLI) scheme and the India Semiconductor Mission (ISM).
“India is transforming from being an assembling country to a product country, designing, producing, and exporting sophisticated electronic gear. ECMS fills the critical gap between devices and components, and between manufacturing and innovation,” he added.
With this approval, India takes another decisive step towards becoming a global electronics manufacturing hub, driven by indigenous innovation, large-scale investment, and increasing self-reliance.
The post Centre Clears ₹5,532 Crore Investment for Seven Electronics Manufacturing Projects appeared first on ELE Times.
Nuvoton’s M55M1 AI MCU Debuts with Built-in NPU for Entry-Level AI Performance
Nuvoton Technology has launched its latest generation AI microcontroller, the NuMicro M55M1, specifically designed for edge applications such as AI data recognition and intelligent audio. Positioned as a rare entry-level AI solution in the market, the M55M1 integrates an NPU delivering up to 110 GOPS of AI computing power, providing over 100 times the inference performance compared to traditional 1GHz MCUs. Paired with Nuvoton’s self-developed NuML Tool Kit, it enables developers to quickly get started with AI applications in a familiar MCU development environment. A variety of AI models are also available for trial, including face recognition, object detection, audio command recognition, and anomaly detection, effectively lowering the technical barrier and accelerating product deployment.
To meet diverse AI application scenarios, the M55M1 is a 32-bit microcontroller based on the Arm Cortex-M55 core, equipped with Arm Ethos-U55, offering up to 110 GOPS of computing power and a built-in Helium vector processor. Compared to Arm’s existing DSPs, it delivers up to 15 times higher performance. To address AI model requirements, it provides up to 1.5 MB of RAM, 2 MB of Flash, and supports external HyperRAM/OctoSPI expansion. The M55M1 not only features powerful computing capabilities and a flexible architecture but also offers a highly integrated development environment. Through the NuML Tool Kit, developers can easily port AI models to the M55M1 platform using familiar MCU firmware development methods. This architecture is suitable for a wide range of edge AI applications, such as predictive maintenance analysis for factory equipment, analysis for various home appliances and medical sensing devices, as well as endpoint AI applications like keyword spotting, echo cancellation, and image recognition.
On the general MCU operation side, the M55M1 is equipped with a Cortex-M55 core running at up to 220 MHz and offers five low-power modes. It also supports a wide range of peripherals, including CCAP, DMIC, I2C, SPI, Timer, UART, ADC, and GPIO, all of which can operate in low-power modes. In addition, the M55M1 features multi-level security mechanisms, including secure boot, Arm TrustZone, a hardware crypto engine, and Arm PSA Certified Level 2 compliance, providing reliable protection for IoT and embedded applications.
The post Nuvoton’s M55M1 AI MCU Debuts with Built-in NPU for Entry-Level AI Performance appeared first on ELE Times.
🏆 KPI Rector’s Cup 2025 in Dota 2 and Counter Strike 2
Try your hand at the disciplines most popular among KPI students, in a new format: register for our university’s big tournament in Dota 2 and Counter Strike 2
✍️ Qualification format: 5x5 | Online | Captains Mode - Dota 2 / FaceIT - CS2
📅 When (for both disciplines):
European Integration and Intellectual Property: Experience of Implementing an Erasmus+ Jean Monnet Project at FSP
In mid-October, as part of Erasmus+ Days 2025 at Igor Sikorsky Kyiv Polytechnic Institute, a series of events took place, including sessions to inform and advise prospective participants in the programme.
Anritsu Supports EU Market Expansion by Ensuring Safety and Compliance of 5G Wireless Devices
Anritsu Corporation has enhanced the functions of its New Radio RF Conformance Test System ME7873NR to support 5G wireless device conformance tests and compliance with the ETSI EN 301 908-25 standard under the European Radio Equipment Directive (RED).
By using these enhanced functions, manufacturers can ensure regulatory compliance for 5G wireless devices sold in the EU and guarantee product quality and reliability. Anritsu is dedicated to supporting smooth market entry for products into the EU.
RED is an EU legal framework defining the safety, electromagnetic compatibility (EMC), radio-spectrum efficiency, and cybersecurity requirements of wireless devices in the EU. With the spread of wireless technologies, such as 5G, the ETSI EN 301 908-25 standard for 5G NR devices has been established based on 3GPP Release 15 regulating 5G specifications, and wireless products now sold in the EU must comply with this standard.
Through this latest enhancement, Anritsu continues to play a key role in deployment of commercial 5G services, helping create a 5G-empowered society.
The New Radio RF Conformance Test System ME7873NR is a 5G test platform compliant with 3GPP standards and is certified by both the Global Certification Forum (GCF) and PCS Type Certification Review Board (PTCRB).
In addition to supporting Frequency Range 1 (FR1, Sub-6 GHz), combining the system with an OTA (CATR) chamber adds support for Frequency Range 2 (FR2, mmWave). The flexible configuration and customizable design provide an upgrade path from current ME7873LA systems, offering enhanced 5G compatibility at a lower capital cost.
The post Anritsu Supports EU Market Expansion by Ensuring Safety and Compliance of 5G Wireless Devices appeared first on ELE Times.
High-Accuracy Time Transfer Solution Delivers Sub-Nanosecond Timing Up to 800 km via Long-Haul Optical Networks
Governments across the globe are asking critical infrastructure operators to adopt additional time sources alongside GNSS to enhance resilience and reliability, ensuring uninterrupted operations in the face of potential disruptions or service limitations. Microchip Technology announced the release of the TimeProvider 4500 v3 grandmaster clock (TP4500) designed to deliver sub-nanosecond accuracy for time distribution across 800 km long-haul optical transmission.
This innovative solution gives critical infrastructure operators the missing link the industry has been waiting for in complementary Positioning, Navigation and Timing (PNT). The TP4500 provides a resilient, terrestrial source of precise timing in the absence of Global Navigation Satellite Systems (GNSS), avoiding the physical-obstruction, security, and signal-interference problems associated with GNSS-dependent deployments.
Most current deployments require GNSS at grandmaster sites, but the TP4500 enables highly resilient synchronization without relying on GNSS. The TP4500 supports a time reference traceable to UTC(k), the UTC realizations maintained by national laboratories, and is the first grandmaster to offer a premium capability delivering High Accuracy Time Transfer (HA-TT) as defined by ITU-T G.8271.1/Y.1366.1 (01/2024): 5 nanoseconds (ns) of time delay over 800 km (an average of 500 picoseconds (ps) per node, assuming 10 nodes), setting a new industry benchmark for accuracy.
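The per-node budget quoted follows directly from the headline numbers; a quick check, assuming the 10-node chain stated above:

```python
# Budget from the cited standard: 5 ns end-to-end over ~800 km, assumed 10 nodes
total_ns, nodes, span_km = 5.0, 10, 800
per_node_ps = total_ns * 1000 / nodes    # 5 ns shared across the chain
per_km_ps = total_ns * 1000 / span_km    # same budget expressed per kilometer
print(f"{per_node_ps:.0f} ps per node, {per_km_ps:.2f} ps per km")
```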
The TP4500 system can be configured with multiple operation modes to form an end-to-end architecture known as virtual PRTC (vPRTC), capable of delivering PRTC accuracy over a long-distance optical network. vPRTC is a carrier-grade architecture for terrestrial distribution of HA-TT, which has been widely deployed in operator networks throughout the world. HA-TT is a proven and cost-effective approach, as opposed to other alternative PNT solutions that have no wide adoption into critical infrastructure networks to date, have low Technology Readiness Levels (TRL) and are still dependent on GNSS as the ultimate source of time.
“The TimeProvider 4500 v3 grandmaster is a breakthrough solution that empowers operators to deploy a terrestrial, standards-based timing network with unprecedented accuracy and resilience,” said Randy Brudzinski, corporate vice president of Microchip’s frequency and time systems business unit at Microchip. “This innovation reflects Microchip’s commitment to delivering the most advanced and reliable timing solutions for the world’s most essential services.”
TimeProvider 4500 v3 is a key stepping stone towards support of the ITU-T G.8272.2 standard, which defines a coherent network reference time clock (cnPRTC) in amendment 2 (2024). A cnPRTC architecture ensures highly accurate, resilient, and robust timekeeping throughout a telecom network. This allows stable, network-wide ePRTC time accuracy, even during periods of regional or network-wide GNSS unavailability or other failures and interruptions.
Key features of the TimeProvider 4500 v3 series:
- Sub-nanosecond accuracy: Delivers 5 ns time delay over long distances up to 800 km
- Terrestrial alternative to GNSS: Enables critical infrastructure to operate with resilient synchronization mechanisms independent of GNSS
- Seamless integration: Standards-based terrestrial network for time transfer, easily integrated with off-the-shelf small form-factor pluggable and existing Ethernet and optical deployments
- Exclusive capability: Premium software features available only on the TP4500 v3, integrating Microchip’s PolarFire FPGA and Azurite synthesizer for unmatched precision
Optimized for telecom, utilities, transportation, government, and defense, the TP4500 grandmaster ensures precise and resilient timing where it matters most. This latest version provides operators with a scalable solution for secure and reliable time distribution over long distances.
The post High-Accuracy Time Transfer Solution Delivers Sub-Nanosecond Timing Up to 800 km via Long-Haul Optical Networks appeared first on ELE Times.
Behind the curve: A practical look at trailing-edge dimmers

Trailing-edge dimmers offer smoother, quieter control for modern lighting systems—but their inner workings often remain overlooked. This post sheds light on the circuitry behind the silence. Sometimes, the most elegant engineering hides in the fade, where silence is not a flaw but a feature.
Let’s get started.
Dimmers serve as an effective interface for controlling energy-efficient lighting systems. And dimming methodologies are broadly categorized into forward-phase dimming (leading-edge), reverse-phase dimming (trailing-edge), and four-wire dimming, commonly referred to as 0–10 V analog dimming.
This post specifically examines reverse-phase dimming, also known as trailing-edge dimming, which is particularly well-suited for electronic low-voltage (ELV) transformers and modern LED drivers. Its smoother voltage waveform and inherently lower electromagnetic interference (EMI) make it ideal for applications requiring silent operation and compatibility with capacitive loads.
Leading and trailing edge dimming
In a leading-edge dimmer—also known as a triac dimmer or incandescent dimmer—the electrical current (sinusoidal signal) is interrupted at the beginning of the AC input waveform, immediately after the zero crossing. This dimming method is traditionally used with incandescent lamps or magnetic low-voltage transformers.
On the other hand, a trailing-edge dimmer interrupts the current at the end of the AC input waveform, just before the zero crossing (Figure 1). This technique is better suited for electronic drivers or low-voltage transformers with capacitive loads.

Figure 1 In a trailing-edge dimming waveform, conduction begins at the zero crossing and current is interrupted partway through the half-cycle, before the next zero crossing, to suit capacitive loads. Source: Author
In a nutshell, a trailing-edge dimmer is an electrical device used to adjust the brightness of lights in a room or space. It operates by reducing the voltage supplied to the light source, resulting in a softer, dimmer glow.
Unlike leading-edge dimmers—which cut the voltage at the beginning of each AC waveform—trailing-edge dimmers reduce the voltage at the end of the waveform. This “trailing edge” approach enables smoother, more precise dimming, especially at lower brightness levels.
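The leading/trailing distinction can be sketched by synthesizing both phase-cut waveforms; the values below (230-V mains, 90° cut) are illustrative only:

```python
import math

# Phase-cut waveform sketch: 50 Hz mains, firing angle ALPHA (radians into each
# half-cycle). Leading edge blanks before ALPHA; trailing edge blanks after it.
F, V_PK, ALPHA = 50.0, 325.0, math.pi / 2   # 230 V mains peak, 90-degree cut

def phase_cut(t, trailing=True):
    theta = (2 * math.pi * F * t) % math.pi          # position within the half-cycle
    v = V_PK * math.sin(2 * math.pi * F * t)
    if trailing:
        return v if theta < ALPHA else 0.0           # conduct from zero cross, cut late
    return v if theta >= ALPHA else 0.0              # blank early, conduct to zero cross

# Sample one half-cycle (10 ms) and confirm the two schemes are complementary
ts = [i * 1e-4 for i in range(100)]
lead = [phase_cut(t, trailing=False) for t in ts]
trail = [phase_cut(t, trailing=True) for t in ts]
print("trailing conducts first sample:", trail[1] != 0.0, "| leading:", lead[1] != 0.0)
```

The trailing-edge trace starts conducting with the gentle rise of the sine and chops partway through, which is exactly why it suits capacitive loads better than the abrupt leading-edge turn-on.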
Trailing-edge dimmers are particularly well-suited for LED lighting. They tend to be more efficient, generate less heat, and offer better compatibility with modern electronic drivers. The result is a quieter, flicker-free dimming experience that feels more natural to the eye.

Figure 2 The popular DimEzy brand for trailing-edge rotary dimmers embodies compact engineering optimized for retrofit installations. Source: LiquidLEDs
It’s important to note that most mains-powered LED bulbs are not dimmable. Even among those labeled as dimmable, compatibility with dimmer types can vary. Many require dedicated trailing-edge dimmers to function correctly; using the wrong dimmer may lead to flickering, limited dimming range, or even premature failure. Always check the bulb’s specifications and pair it with a suitable dimmer for reliable, smooth performance.
Moreover, since LED bulbs and dimmers are mains-operated, even minor mishandling can lead to electric shock or fire hazards. Always choose compatible components and follow safety guidelines.
Trailing-edge dimmer design: The starting point
Building a trailing-edge dimmer is not trivial, but it’s far from overcomplicated. Below is a conceptual block diagram for those poised at the starting line.

Figure 3 A conceptual block diagram highlights the key functional units coordinating trailing-edge dimming. Source: Author
From the block diagram above, several distinct functional stages interact with each other to perform the overall dimming functionality. In a trailing-edge dimmer circuit, the power supply delivers a stable low-voltage DC source to power control and switching stages. The zero-crossing (ZC) detector pinpoints the exact moment the AC waveform crosses zero volts, providing a timing reference for phase control.
Based on this, the timing control block calculates a delay to determine when to switch off the load during each half-cycle, shaping the trailing edge of the waveform. This delayed signal is then fed to the gate driver, which conditions it to reliably switch the power MOSFETs, the primary switching elements that interrupt current partway through each cycle, enabling smooth dimming with minimal noise and flicker.
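The ZC-to-gate timing chain boils down to this: the gate goes high at each zero cross and low after a brightness-dependent on-time. A behavioral sketch (all constants illustrative, not from any particular design):

```python
# Behavioral sketch of the timing chain: ZC detector -> on-time delay -> gate off.
# Times in ms; a 50 Hz half-cycle lasts 10 ms.
HALF_CYCLE_MS = 10.0

def gate_state(t_ms, on_time_ms):
    """Trailing edge: gate goes high at each zero cross, low after on_time."""
    phase_ms = t_ms % HALF_CYCLE_MS
    return phase_ms < on_time_ms

def duty(on_time_ms):
    return on_time_ms / HALF_CYCLE_MS

# 30% brightness request -> gate on for the first 3 ms of every half-cycle
on_time = 3.0
states = [gate_state(t / 10, on_time) for t in range(100)]  # 0..10 ms in 0.1 ms steps
print(f"gate high {sum(states)} of {len(states)} samples, duty {duty(on_time):.0%}")
```

In a real dimmer this logic lives in the timing-control block (discrete logic or a microcontroller), clocked by the ZC detector rather than by an absolute time base.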
So, for your trailing-edge dimmer, the selection of components involves careful consideration of their roles in the dimming process.
- Power supply (DC): This supply will power the control circuitry, including the digital logic and gate drivers. Its voltage and current rating must be sufficient to reliably operate these components, especially under varying load conditions.
- Zero-crossing (ZC) detector: This detector is fundamental for timing the dimming cycle. It senses when the AC waveform crosses zero, providing a synchronization point. The ZC detector should be fast and accurate to ensure precise dimming.
- Timing control: This element, often integrated with digital logic, dictates the duration for which the power MOSFET remains on during each AC half-cycle. For trailing-edge dimming, the gate pulse is enabled at the ZC signal and disabled after a specific ON-time pulse width.
- Digital logic: This is the brain of the dimmer, interpreting user input—for instance, from a potentiometer or button—and controlling the timing logic. It might involve simple logic gates or a microcontroller. The STMicroelectronics reference design discussed later, for instance, uses a triple 3-input NOR gate for control, showing that basic digital logic can suffice.
- Gate drivers: Gate drivers are essential for efficiently switching power. They provide the necessary current and voltage levels to turn the MOSFETs on and off quickly, minimizing switching losses and heat generation. Proper selection ensures a clean gate drive signal.
- Power MOSFETs: The power MOSFET acts as the main switching element, controlling the power delivered to the load. It must be chosen based on the load’s voltage and current requirements, with low on-state resistance (Rdson) for efficiency and adequate heat dissipation capabilities. For AC dimming, devices capable of handling the AC voltage and current, such as specific MOSFETs or IGBTs designed for phase control, are necessary.
Recall that a trailing-edge dimmer operates using transistor switches that begin conducting at the start of each half sine wave. These switches remain active for a defined conduction angle, after which they turn off, effectively truncating the AC waveform delivered to the load.
This approach results in smoother current transitions. The electronic load benefits from the gentle rise of the sine wave, and once the switch turns off, any residual energy stored in inductive or capacitive components naturally dissipates to zero. This behavior contributes to quieter operation and improved compatibility with sensitive electronic loads.
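Truncating the half-sine at a conduction angle α sets the delivered RMS voltage; a short numeric check of the standard phase-cut integral:

```python
import math

# Delivered RMS voltage vs conduction angle for a trailing-edge cut: conduction
# runs from the zero cross up to angle alpha within each half-cycle.
def vrms_fraction(alpha):
    """RMS as a fraction of full-waveform RMS, integrating sin^2 over [0, alpha]."""
    # integral of sin^2 from 0 to alpha = alpha/2 - sin(2*alpha)/4; a full
    # half-cycle integrates to pi/2, which normalizes the ratio
    return math.sqrt((alpha / 2 - math.sin(2 * alpha) / 4) / (math.pi / 2))

for deg in (45, 90, 135, 180):
    a = math.radians(deg)
    print(f"cut at {deg:3d} deg -> {vrms_fraction(a)*100:5.1f}% of full RMS")
```

Note the nonlinearity: a 90° cut still delivers about 71% of full RMS, which is one reason dimming curves are usually shaped in the control logic rather than mapped linearly to conduction angle.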
Up next is the practical schematic of a trailing-edge, phase-control rotary wall dimmer designed without a microcontroller and originally introduced by STMicroelectronics over a decade ago.
Although this elegant concept now calls for a few updates—mainly due to the unavailability of certain key components (fortunately, drop-in replacements exist)—it remains an invaluable design reference, at least to me. I could not have expressed it better myself, so here is the link to its full documentation.

Figure 4 Rotary wall dimmer circuit employs reverse-phase control to regulate mixed lighting loads. Source: STMicroelectronics
Happy dimming
In summary, there is not much more to add regarding trailing-edge dimmers for now. However, it’s worth noting that these dimmers can also be built using a microcontroller, which is especially useful for smart lighting systems. Compared to specialized dimmer ICs, microcontrollers provide more freedom to create custom dimming profiles, incorporate user interfaces, and connect with smart home technologies like Wi-Fi or Bluetooth.
That is all for now. But don’t let the dimming stop here.
Dive deeper into the fascinating world of trailing-edge dimmers. Experiment with different component combinations, explore their impact on dimming performance, and share your discoveries with us.
What will you create next? Let us know your thoughts or any challenges you encounter as you build your own dimming solutions. Your insights could light the way for others.
Happy dimming!
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Dimmer With A MOSFET
- Secrets of Analog Dimming
- A matter of light — PWM dimming
- How to design a dimming fluorescent electronic ballast
- DC/DC Converter Considerations for Smart Lighting Designs
The post Behind the curve: A practical look at trailing-edge dimmers appeared first on EDN.
Microchip’s New PCIe Switches Bring AI Hardware Up to Speed
Homemade EMG sensor that can be used to control a video game without a physical game controller - Detailed explanations, calculations, and schematics provided - See video's description for the links.
Rad-hard buck controller integrates gate drive

Infineon Technologies AG claims the industry’s first radiation-hardened (rad-hard) buck controller with an integrated gate drive. The RIC70847 buck controller targets point-of-load power rails in commercial space systems and other extreme environments. Applications include distributed satellite power systems and digital processing payloads, including FPGA and ASIC systems.
(Source: Infineon Technologies AG)
The RIC70847 comprises a 17.1-V buck controller with a 5-V (output) half-bridge gate drive, suited for applications with a power input range of 4.75 V to 15 V and a power output range of 0.6 V to 5.25 V. The device meets the MIL-spec temperature range of -55°C to 125°C and supports applications that require a total ionizing dose rating of up to 100 krad (Si), with single-event effects characterized up to a linear energy transfer of 81.9 MeV·cm²/mg.
The rad-hard buck controller incorporates load line regulation and fixed-frequency peak current mode control, which is reported to deliver exceptional transient response while reducing the number of output capacitors required. In addition, the high step-down voltage ratios, combined with the 5-V half-bridge gate driver, simplify the design process and minimize component count for a more compact and efficient power management design.
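As a back-of-envelope illustration of the load-line regulation mentioned above (my own numbers, not from the RIC70847 datasheet): the output set-point droops linearly with load current, which reduces the transient excursion the output capacitors must absorb.

```python
# Illustrative load-line (droop) regulation sketch. V_SET and
# R_DROOP are assumed example values, not device parameters.

V_SET = 1.0        # no-load output set-point, volts (assumed)
R_DROOP = 0.5e-3   # load-line resistance, ohms (assumed)

def vout_target(i_load_a: float) -> float:
    """Regulated output target under load-line control."""
    return V_SET - R_DROOP * i_load_a

print(vout_target(0))    # 1.0 at no load
print(vout_target(100))  # ~0.95 V at 100-A load
```

Because the output is intentionally a little high at light load and a little low at heavy load, a load step lands inside the allowed window with less help from bulk capacitance — which is why droop control reduces output capacitor count.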
The high level of integration also improves system reliability and reduces the risk of component failure, Infineon said.
The RIC70847 buck controller is housed in a hermetically-sealed 24-lead flatpack or die form, and works seamlessly with logic-level transistors, such as Infineon’s rad-hard R8 power FET. It is available now, along with the RIC70847EVAL1 DC/DC buck controller evaluation board. The eval board features an integrated dynamic load step circuit for transient testing and supports a range of output capacitance and inductor configurations.
The post Rad-hard buck controller integrates gate drive appeared first on EDN.
PTC thermistors save space

Vishay Intertechnology, Inc. launches a new series of insulated, surface-mount inrush current limiting positive temperature coefficient (PTC) thermistors. The Vishay BCcomponents PTCES series devices offer maximum energy handling up to 340 J with high maximum voltages of 1,200 VDC in a compact package, providing increased board-level efficiency and lower costs in automotive and industrial applications.
Vishay said the new PTCES PTC thermistors offer up to 260% higher energy-handling capabilities compared to competing devices, which helps to reduce component count to save board space and lower overall costs. These devices also offer 20% higher maximum voltages than competing devices.
(Source: Vishay Intertechnology, Inc.)
The PTC thermistors provide current limitation and overload protection in AC/DC and DC/DC converters; DC-Link, energy dump, and emergency discharge circuits; on-board chargers and battery charging equipment; and motor drives. They withstand >100,000 inrush power cycles and are AEC-Q200 qualified for shock and vibration, eliminating the need for reinforced mounting adhesives, Vishay said.
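For the energy-dump and pre-charge duties listed above, a quick sanity check (my own arithmetic, not from Vishay's datasheet) is to compare the DC-link capacitor's stored energy, E = ½CV², against the device's energy-handling rating.

```python
# Back-of-envelope check: energy a PTC must absorb when charging
# or discharging a DC-link capacitor is roughly the capacitor's
# stored energy, E = 0.5 * C * V^2. Example values are assumed.

def dclink_energy_j(c_farads: float, v_volts: float) -> float:
    """Energy stored in a capacitor charged to v_volts, in joules."""
    return 0.5 * c_farads * v_volts ** 2

# A 1000-uF DC link charged to 800 V stores ~320 J, just inside
# the series' 340-J maximum energy-handling rating.
print(dclink_energy_j(1000e-6, 800))
```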
The series comprises solder-connected homogeneous ceramic PTCs encapsulated in a UL 94 V-0 compliant, self-extinguishing, washable plastic housing with insulation up to 3 kVAC. The devices feature a low profile of 9.6 mm and can be automatically mounted by pick-and-place equipment to reduce placement costs.
The PTC thermistors are RoHS-compliant and halogen-free. Samples and production quantities are available now, with lead times of 10 weeks. Pricing for U.S. delivery starts at $0.90 each in quantities of 1,000. Click here for the datasheet.
The post PTC thermistors save space appeared first on EDN.
Broadcom Bolsters Wireless Fronts With AI Ethernet NIC & Wi-Fi 8 Silicon
The transition from 54-V to 800-V power in AI data centers

While compute devices such as CPUs, GPUs, and XPUs are stealing the limelight in the artificial intelligence (AI) era, there is an increasing realization that powering AI at scale demands new power systems and architectures. In other words, data center operators are investing heavily in high-performance computing for AI, but there is no AI without power.
The exponential growth of AI is rapidly outstripping the capacity of the current 54-V data center power infrastructure, driving a transformation toward high-density, reliable, and safe 800-V powered data centers. Here, at this technology crossroads, the new power delivery architecture requires new power conversion solutions and safety mechanisms to prevent potential hazards and costly server downtimes.

Figure 1 AI data center power was a prominent theme at Infineon OctoberTech Silicon Valley 2025. Source: Infineon
At Infineon’s OctoberTech Silicon Valley event held on 16 October 2025 in Mountain View, California, this tectonic shift in data center power infrastructure was a major highlight. The company demonstrated 800-V AI data center power architectures built around silicon, silicon carbide (SiC), and gallium nitride (GaN) technologies.
Infineon has also joined hands with Nvidia to maximize the value of every watt in AI server racks through modular and scalable power architectures. The two companies will work together on data center power aspects, such as hot-swap controller functionality, which enables future server boards to operate in 800-V power architectures. It will facilitate the exchange of server boards on an 800-VDC bus while the entire rack continues operating, through controlled pre-charging and discharging of the boards.
At Infineon OctoberTech Silicon Valley, Peter Wawer, division president of green industrial power at Infineon Technologies, spoke with EDN to explain the transition of AI data centers to 800-VDC architectures. He also walked through the demo to show how 800-V power is delivered to AI server racks.
The advent of solid-state circuit breakers
“We are seeing a switch to an 800-VDC architecture in AI data centers, which is a major step forward to establishing powerful AI gigafactories of the future,” Wawer said. “The power consumption of an AI server rack is estimated to increase from around 120 kilowatts to 500 kilowatts, and to 1 megawatt by the end of the decade.”
Inevitably, it calls for higher efficiency and reduced losses as computing power continues to scale at an unprecedented rate. “This evolution brings new challenges,” Wawer acknowledged. “When you want to exchange server boards on an 800-V bus while the entire rack continues operating, you are dealing with substantial power levels.”
For instance, engineers need controlled pre-charging and discharging to avoid dangerous inrush currents and ensure safe maintenance without downtime. While traditional protective devices like fuses and mechanical breakers have served reliably for decades, they were not designed for the ultra-fast fault response required in today’s high-voltage, high-speed environments, where microseconds matter.
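A rough illustration of why controlled pre-charge matters (assumed example values, not figures from the article): hot-plugging a board's discharged input capacitance onto an 800-V bus draws a peak current limited only by parasitic resistance, while a pre-charge resistor tames it by orders of magnitude.

```python
# Worst-case initial inrush into a discharged input capacitor is
# V_bus / R_total at t = 0 (the capacitor looks like a short).
# All component values below are illustrative assumptions.

V_BUS = 800.0        # bus voltage, volts
R_PARASITIC = 0.05   # cabling/connector resistance, ohms (assumed)
R_PRECHARGE = 100.0  # pre-charge resistor, ohms (assumed)

def peak_inrush_a(r_ohms: float) -> float:
    """Worst-case initial current into a discharged capacitor."""
    return V_BUS / r_ohms

print(peak_inrush_a(R_PARASITIC))                # ~16 kA without pre-charge
print(peak_inrush_a(R_PARASITIC + R_PRECHARGE))  # ~8 A with it
```

A hot-swap controller generalizes this idea, actively ramping the current instead of relying on a fixed resistor, so the board can be exchanged without disturbing the bus.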
That’s where the next generation of solid-state circuit breakers (SSCBs) comes in. The new data center architectural shift is leading to the emergence of SSCBs, which will modernize AI data centers while replacing electromagnetic transformers. SSCBs respond to faults in microseconds with very high precision, which makes power distribution in AI data centers safer, faster, and more efficient.

Figure 2 SSCBs will replace electromagnetic transformers that currently connect the grid to power infrastructure in data centers. Source: Infineon
“To enable these next-generation SSCBs, Infineon introduced the CoolSiC JFET family earlier this year,” Wawer told EDN. “These JFETs offer the ability to combine ultra-low on-resistance—1.5 mΩ at 750 V and 2.3 mΩ at 1200 V—to ensure robust performance even under tough conditions.”
Reliability is another key advantage, he added. “These JFETs are designed to handle sudden voltage spikes and current surges, responding quickly to faults and helping prevent equipment damage or downtime.” Their packaging—aided by top-side cooling and Infineon’s .XT interconnect technology—helps AI data center power systems stay cool and reliable even in the most demanding environments.
These JFETs also reduce the need for external clamping circuits, simplifying system design and enabling more compact and cost-effective solutions. Besides AI data centers, this SSCB technology can help protect electric vehicles (EVs), industrial automation and smart grids, making power distribution safer, more efficient, and ready for the future.
Solid-state transformers, hot-swap controllers, and power modules
At OctoberTech Silicon Valley, Infineon also demonstrated a power system built around high-voltage CoolSiC components for high-voltage DC power distribution to IT racks powered by a solid-state transformer (SST). “The SSTs will be crucial in gigawatt-scale AI datacenters,” Wawer said.
An SST is a power-electronics stack that connects the grid to data center power distribution. It replaces conventional systems based on a low-frequency copper-and-steel transformer plus an AC-DC converter, enabling a dramatic reduction in size and weight, higher end-to-end efficiency, and a smaller CO2 footprint.
Next, Infineon unveiled a reference board for hot-swap controllers for 400-V and 800-V power architectures in AI data centers. The hot-swap controller functionality is vital to providing the highest levels of protection, maximizing server uptime, and ensuring optimal performance. The REF_XDP701_4800 hot-swap controller reference design is optimized for future 400-V/800-V rack architectures.

Figure 3 Hot-swapping controller designs demonstrated at OctoberTech in Silicon Valley are optimized for 400-V/800-V data center rack architectures. Source: Infineon
Then there were trans-inductance voltage regulator (TLVR) modules specifically designed for high-performance AI data centers. Infineon’s TDM22545T modules combine OptiMOS technology power stages with TLVR inductors to bolster power density, improve electrical and thermal efficiency, and enhance signal quality with reduced transients.
The proprietary inductor design delivers ultra-fast transient response to dynamic load changes from AI workloads without compromising electrical or thermal efficiency. Moreover, the inductance architecture minimizes the number of output capacitors, reducing the overall size of the voltage regulator (VR) and lowering bill-of-materials (BOM) costs.

Figure 4 The TLVR modules deliver benchmark power density and transient response crucial in AI data centers. Source: Infineon
Transition to new power architectures
Jim McGregor, principal analyst at Tirias Research, acknowledges that it’s becoming increasingly challenging to power AI data centers from the grid to the chip level. “It’s critical that power design engineers continuously improve efficiency, power density, and signal integrity of power conversion from the grid to the core.”
That is especially true when an AI server costs 30 times as much as a traditional server. Furthermore, there is an increasing need to simplify system design, enabling more compact, cost-effective solutions for powering AI data centers.
The imminent shift from the current 54-V data center power infrastructure to a centralized 800-V architecture is part of this design journey in the rapidly evolving world of AI data centers. That inevitably calls for new building blocks—hot-swap controllers, SSCBs, and SSTs—to successfully migrate to new power architectures.
These power-electronics building blocks are now available, which means the transition to 400-V/800-V AI data centers isn’t far off.
Related Content
- Solving power challenges in AI data centers
- AI Data Centers Need Huge Power-Backup Systems
- EDN Talks to Infineon About the AI Data Center Evolution
- Data center power meets rising energy demands amid AI boom
- As Data Center Growth Soars, Startup Uses AI to Cut Power Binge
The post The transition from 54-V to 800-V power in AI data centers appeared first on EDN.
Recapped an old NOS Heathkit PS-4 today, here is the result
I recapped an old but brand-new-looking 50s-60s Heathkit tube power supply. These were made back in the day for the hobbyist workbench, as a power supply specialized for building tube amps or tube radio equipment. They are like your regular linear PSU, but with filament voltages (typically low, 1.2-24 V / 6.3 V) and 0-400 V of high voltage for the anode/grid/cathode supply. It went up in smoke last time I fired it up, and I found the old paper caps to be dry, so I've just rewired the whole thing. I haven't fired it up yet, but thought I'd show it to you guys before I blow it up. /s
My first HDMI swap.
I’ve been watching YouTube videos lately of people repairing PS4, PS5, and other consoles, and I thought I’d give it a try. Bought all of the necessary stuff to get me started, and this is my first swap on a PS4. Everything works fine, and I sold it the same day.
A fresh gander at a mesh router

In one of my recent teardowns, commenting on the variety of piece parts included with the manufacturer’s various products in its streaming media box line, I noted:
I would not want to be the person in charge of managing onn. product contents inventory…
Seeming diversity, but under-the-hood commonality
Multiply that sentiment by 100x or so and you’ve got a sense of my feelings about the poor folks who manage the inventories of (and forecast the future sales of) router manufacturers’ product lines. Today’s teardown victim is from Linksys, but the situation’s very much the same at ASUS, (Amazon) eero, Netgear, TP-Link or any of the other hardware providers.
There are now only a few foundation silicon suppliers, and (unlike the relatively recent past), the pace of technology evolution has notably slowed of late, particularly in the wireless realm. The most significant innovation of the past decade has been mesh networking, which only indirectly deals with the Wi-Fi signals being broadcast to and from any particular network node, mostly focusing instead on the node-to-node handoffs as LAN clients move through the network.
The results? Supplier-to-supplier and product-to-product enclosure and other cosmetics differences, but based on essentially the same underlying hardware, differentiated by software (along with, for example, antenna type and quantity and DRAM capacity variations), as each company strives to differentiate in any (preferably low-cost) way possible to squeeze whatever profit is left from an increasingly mature market. Sometimes, product line diversification (as we’ll see today) involves little more than new stickers on the outside of the device and packaging and an altered product name embedded in the firmware. And all this tweaking ends up causing ongoing stress headaches for each company’s pitiable product line managers.
Prepping for a sooner-or-later home office LAN transition
Today’s analysis is a prescient example of what I’m conceptually talking about…two examples, although, at least for the foreseeable future, you’ll only be seeing the insides of one of them. At the tail end of one of my writeups from late last year, wherein I unsuccessfully (to date, at least) strove to figure out how to eliminate my LAN’s ongoing dependence on the lightning-sensitive spans of wired Ethernet running around the outside of my house, I mentioned that:
I also plan to eventually try out newer Wi-Fi technology, to further test the hypothesis that “wires beat wireless every time”. Nearing 3,000 words, I’ll save more details on that for another post to come.
That “newer Wi-Fi technology” isn’t the primary focus of this post, either, but for now I’ll at least provide an entrée. Right now, I’m running a multi-node LAN mesh based on Google Nest Wifi routers, which implement Wi-Fi 5 (802.11ac) technology, specifically AC2200 4×4:4 albeit absent MU-MIMO. One other important “twist” here is that the backhaul connection between the network nodes is wired Ethernet, not Wi-Fi. The setup’s been operational for three years now, thankfully running quite stably, actually.
But, as with its OnHub predecessors (one of which, from TP-Link, I tore down back in mid-2020) I’d run in a mesh configuration for the prior five years, Google will eventually end support for Google Nest Wifi in favor of the newer Nest Wifi Pro and its potential successors. Indicative of my forecast, Google already pulled both the Nest Wifi and prior-gen Google Wifi (one of which I dissected back in early 2022) from its online store effective the beginning of 2024 (I plan to dissect both a Nest Wifi router and access point post-support cessation).
At that point, I’ll need to upgrade my LAN once again. Fortunately, I’ve already got the successors in hand…a bunch of them, actually, counting spares. Last September (as well as several times prior, which I hadn’t noticed at the time), Amazon subsidiary Woot sold factory-refurbished Linksys LN1301 routers for $14.99 each (plus $5 off one via a coupon code):

Also known as the MX4300, it’s a beefy Wi-Fi 6 AX4200 unit with one WAN and three LAN wired Ethernet ports, along with a USB 3.0 port, based on a 1.4 GHz quad-core CPU (identity to be revealed shortly) and with 2 GBytes of RAM and 1 GByte of flash memory. It supports both MU-MIMO and OFDMA and claims to deliver up to 4.2 Gbps of aggregate wireless bandwidth.
Linksys also refers to it as a “Tri-band” router, although given that it’s not a Wi-Fi 6E device, this doesn’t mean that it supports the newest 6 GHz Wi-Fi band. Instead, it concurrently supports two different 5 GHz band ranges, one predominantly intended for optional node-to-node wireless mesh backhaul interconnect (with wired Ethernet being the other backhaul option).
Speaking of mesh, here’s the kicker…well, one of the two. Although not advertised as being mesh-compatible, it turns out that if, after you set up the primary router, you then direct-connect other secondary “child” units to it, an undocumented setup menu screen enables activating mesh connectivity between them. And (here’s the other kicker), the LN1301/MX4300 is also supported by both the DD-WRT and OpenWRT open-source communities, providing ongoing-maintained options to Linksys’ closed-source and (likely) end-of-life’d firmware.
To that “end-of-life” note, the fundamental reason why Linksys was selling the LN1301/MX4300 so inexpensively, it turns out, was as an inventory purge; the company then dropped the device (originally intended for use by small businesses, not consumers) from its product line. Upfront suspecting that this was the case, I went ahead and purchased the maximum quantity of ten units per Woot account, and then also asked my wife to pick up another one (using the same $5-off quantity-one coupon) from her Woot account. That’ll give me plenty of units for both my current four-node mesh topology and as-needed spares…and eventually I may decide to throw caution to the wind and redirect one of the spares to a (presumed destructive) teardown, too.

For now, I’ll focus my teardown attention on an alternative, more humbly equipped Linksys router I subsequently acquired. A month after my LN1301/MX4300 binge, Woot sold a two-pack of factory-refurbished Velop (Linksys’ brand name for its mesh-compatible devices) VLP01 AC1200 routers for $19.99, minus another $5-off coupon, therefore $14.99 plus tax. VLP0102, by the way, is Linksys’ naming scheme for the two-pack…VLP0101 is the single-unit kit, while VLP0103 refers to the three-device mesh bundled variant. Stock images to start:

Walmart’s website indicates that the VLP01 was (it’s now out of stock and presumably EOL’d as well) a Walmart-exclusive product, which explains why you can’t find a dedicated product page for it on Linksys’ own website. Instead, there’s the WHW01 series, spec’d as AC1300 devices. Anyhoo, three main motivations prompted my acquisition:
- They were inexpensive, and I already had plenty of LN1301/MX4300s, so I could rationalize devoting one of them to a teardown
- Since I planned on doing wired backhaul anyway, I didn’t need super-robust wireless capabilities, particularly at the mesh node in my wife’s office, and
- This (grammatically-tweaked-by-me) thread at the Woot Forum page caught my eye:
- Can these be meshed with the previous $15 Linksys router deal (Linksys LN1301 WiFi 6 Router)?
- Couldn’t find a direct answer on the Linksys site, but someone asked this same question on Reddit, and Linksys answered: “All of our intelligent mesh systems are compatible with each other. Just ensure that you designate the one with superior specifications as the parent or main node.”
- Yes, you can. I did this. You will need [to set up] the LN1301 as the parent and then set these up as the [child] nodes.
This support page on the Linksys website documents and supports the Woot forum claim.
Packaging and contents preliminaries
Now for some images of our patient, beginning with an outer box shot of what I got…which, I’ve just noticed, claims that it’s an AC2400 configuration (I’m guessing this is because Linksys is mesh-adding the two devices’ theoretical peak bandwidths together? Lame, Linksys, lame…):
Speaking of which, here are those two devices:

Along with what’s underneath ‘em:

Wall wart first, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Now for the router itself:


“Only” one LAN port this time, along with the WAN port and power input connector:


Onward:

Status LED up top, along with an abundance of (passive; no fan in this design) ventilation holes:

And at the bottom, power and reset switches along with verbiage including the all-important FCC ID, Q87-03331, which interestingly (and unsurprisingly) documents this product as being the WHW01, not the Walmart-relabeled and (slightly) de-spec’d VLP01:

Ordinarily, I would have begun my search for a pathway to the interior by focusing on that bottom panel, but an iFixit teardown of the WHW01 that I’d stumbled across during my research (which, truth be told, I didn’t realize was of the same hardware until my teardown was complete and I’d begun this writeup, due to the product name variance and “AC2400” silliness) instead advised me to start at the top:

Top off and to the side, complete with flips and focus shifts:

Now standalone:


Next, let’s ditch those two screws:



And now we can (re)turn our attention to the bottom. As usual, the rubber feet are first to go, revealing screw heads underneath ‘em:



Buh-bye:



And we have liftoff:

Another set of flips and focus shifts:

Followed by more standalone shots:



And now, free of its upper and lower encumbrances, the inner assembly lifts right out:

Gotta love those focus shifts! The enclosure’s just so tall, don’cha know:

The inner assembly exhibits some pretty nifty engineering. There’s a metal plate on top of one side of the PCB, a finned heat sink on the other side surrounded by a plastic shroud (to which the Bluetooth antenna is attached), and a plastic grill (that you sorta already saw from those previous inside-from-top still-assembled shots) on the top end with the 2.4 and 5 GHz antennae stuck to it and the LED mini-PCB inserted within it. Side shots first:

Top end:

And bottom end:

Let’s ditch the plastic piece around the Ethernet ports and power connector first. It unclipped and pulled right off with absolutely no fuss:


Removing three screws enables the extrication of the metal plate on one side of the PCB:




Don’t worry; I’ll be getting to those two Faraday cages shortly:

But first, I want to get the topside plastic grill and the other-side plastic shroud off:


The two Wi-Fi antennas’ connections are begging for unclipping:

There’s the LED mini-PCB, still in place:

And there we are:

Some standalone shots of the top-end grill piece, topside first:

Then the underside:

Now the four…err…side sides:


I’m guessing that “P2” references the 2.4 GHz antenna structure, while “P5” is for…err, again…5 GHz. Agree or disagree, readers?

Next up, the side shroud. Outer portion first, revealing (among other things) the aforementioned Bluetooth antenna:

And now the inside:

Next, the LED mini-PCB.

The largest chip on this side is labeled as follows:
9633
11 02
D819
My guess is that it’s an LED driver, like this PCA9633 from NXP Semiconductors. And on the other side is, of course, the multicolor LED itself:

From the online documentation for the WHW01 (which, I’m guessing, works the same as the VLP01):
- Blue (blinking): Node is starting up
- Blue (solid): Node is working properly
- Purple (blinking): Node is paired with phone for setup
- Purple (solid): Node is ready for setup
- Red (blinking): Node lost connection to the primary node
- If this is your primary node, ensure it’s securely connected to your modem
- Red (solid): Node lost internet connection
- Yellow (solid): Node is too far from another Velop node
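As a toy illustration (my own mapping, not Linksys code), the status table above collapses to a simple state-to-LED lookup:

```python
# Hypothetical lookup capturing the Velop status-LED table:
# node state -> (color, blinking). State names are my own labels.

VELOP_LED = {
    "booting":          ("blue",   True),
    "ok":               ("blue",   False),
    "pairing":          ("purple", True),
    "ready_for_setup":  ("purple", False),
    "lost_parent":      ("red",    True),
    "lost_internet":    ("red",    False),
    "weak_mesh_signal": ("yellow", False),
}

color, blinking = VELOP_LED["lost_internet"]
print(color, blinking)  # red False
```

Which, presumably, is more or less what the PCA9633-class LED driver on the mini-PCB is being told to render.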
And speaking of which, here’s a link to the PDF of the WHW01 user guide, which also references the VLP01 on the cover page!
Next up, let’s get that big finned heatsink off:

Fortunately, with all the retaining screws now removed, it lifted right off straightaway:


Oh, goodie, two more Faraday cages underneath!

Let’s deal with these first, before returning to the two on the other side that we saw before:

Remove the thermal tape from the inside of one, bend back the other…

And surprisingly, at least to me, the system SoC is not on this (formerly finned heatsink-augmented) side of the PCB. On the left is a Winbond W632GU6MB-12 2 Gbit DDR3 SDRAM. And on the right is a CSR (now Qualcomm) 8811 Bluetooth 4.2 controller, unsurprising given the antenna connector’s proximity to it.
There’s one more chip I want to point out on this side of the PCB, at the bottom:

It’s a Macronix MX25L1606E 16 Mbit serial NOR flash memory. (Briefly) hold that thought.
Multiple nonvolatile memories
Wrapping up, let’s revisit the PCB’s other side, this time post-removal of the black plastic pieces:

At the top is another Winbond device, this time a serial NAND flash memory chip, the 2 Gbit 25M02GV. It’s based on high-reliability SLC (single-level cell) technology, and given comparative capacity, I’m guessing it contains the bulk of system software, with the Macronix chip on the other side relegated to boot and recovery code (or something like that…mebbe it holds updatable configuration data instead, although EEPROM would seem to be a superior choice?).
Cage tops off…

Along the left:

are (top-to-bottom) two Skyworks SKY85330-11 2.4 GHz 256-QAM RF front-end modules (FEMs), followed by two chips labeled:
SKY
748
2K01D
WikiDevi (or if you prefer, DeviWiki) says that they’re Skyworks SKY7482I001 5 GHz FEMs, although I can’t find such a chip on Skyworks’ website, so once again…
I’m pretty sure they’re right about the 5 GHz FEM part, but I’m questioning the specific part number…then again, I can’t find an online reference to the SKY7482K01D, either. My working theory is that we’re actually looking at the SKY85748-11, and Skyworks just didn’t have room to print the “85” portion of the part number on the package.
To their right, and formerly under two pads of thermal tape, one connecting the cage to the metal plate and the other between the cage and IC, is the dominant heat generator of the design, Qualcomm’s IPQ4018 dual-band 802.11ac controller, which also handles wired Ethernet MAC duties. To its right is the companion Qualcomm Atheros QCA8072 dual-port Ethernet PHY. So basically what we’ve got here is a Linksys-branded and software-customized Qualcomm reference design. And above the QCA8072 (and below the two wired Ethernet ports) is the Link-PP HN36201CG dual-port transformer module. There’s nothing notable under the sheet metal square in between the IPQ4018 and QCA8072, by the way, in case you were wondering.
More than 2,500 words in, that’s “all” I’ve got for you today.
There’s another surprise waiting in the wings, but I’ll save that for another teardown another (near-future, I promise) day. Until then, please share your thoughts with me (and your fellow readers) in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Is it time to upgrade to mesh networking?
- The whole-house LAN: Achilles-heel alternatives, tradeoffs, and plans
- Lightning strikes…thrice???!!!
- Teardown: The router that took down my wireless network
- Teardown: Prying open Google Wifi
- Perusing Walmart’s onn. 4K Pro Streaming Device with Google TV: Storage aplenty
- Inside Walmart’s onn. 4K Plus: A streaming device with a hidden bonus
The post A fresh gander at a mesh router appeared first on EDN.
Infineon adds SPICE-based model generation to IPOSIM platform for more accurate system-level simulation
The Infineon Power Simulation Platform (IPOSIM) from Infineon Technologies AG is widely used to calculate losses and thermal behavior of power modules, discrete devices, and disc devices. The platform now integrates a SPICE-based model generation tool that incorporates external circuitry and gate driver selection into system-level simulations. The tool delivers more accurate results for static, dynamic, and thermal performance, taking into consideration non-linear semiconductor physics of the devices. This enables advanced device comparison under a wide range of operating conditions and faster design decisions. Developers can also customize their application environment to reflect real-world operating conditions directly within the workflow. As a result, they can optimize the application performance, shorten time-to-market, and reduce costly design iterations. IPOSIM integrates SPICE to support a wide range of applications where switching power and thermal performance are critical, including electric vehicle (EV) charging, solar, motor drives, energy storage systems (ESS), and industrial power supplies.
In the global transition to a decarbonized future, power electronics are essential for enabling cleaner energy systems, sustainable transportation, and more efficient industrial processes. This transformation increases the demand for advanced simulation and validation tools that allow designers to innovate early in the development cycle. At the same time, they must deliver highly efficient, high-power-density designs such as EV chargers, solar inverters, motor drives, and industrial power supplies, while minimizing design iterations and reducing development costs. Switching losses and thermal performance are decisive factors in this process, yet traditional hardware testing remains time-consuming, costly, and limited in capturing real-world conditions.
With the integration of SPICE, IPOSIM brings the simulation of real switching behavior fully online and helps users optimize their designs at an early stage of the development process. By extending system simulation to real-world conditions, the models make it possible to factor in critical parameters such as stray inductance, gate voltage and dead time. The device characterization reflects the switching behavior under more realistic operating scenarios, taking the selected gate driver into account. The capability is fully integrated into IPOSIM’s multi-device comparison workflow, enabling users to select devices marked with the SPICE icon, configure application environments, and follow a guided simulation process. With its system-level accuracy and intuitive workflow, IPOSIM’s new SPICE-based models enable faster device selection and more reliable design decisions.
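For contrast with the SPICE-level accuracy described above, here is the first-order textbook estimate of hard-switching loss (my own illustration with assumed numbers) that such models refine: overlapping V·I triangles during the rise and fall times.

```python
# First-order hard-switching loss estimate: P = 0.5*V*I*(tr+tf)*fsw.
# This ignores the nonlinear device physics, stray inductance, and
# gate-drive effects that SPICE-based models capture. All values
# below are assumed examples.

def switching_loss_w(v_ds: float, i_d: float,
                     t_rise_s: float, t_fall_s: float,
                     f_sw_hz: float) -> float:
    """Approximate hard-switching power loss in watts."""
    return 0.5 * v_ds * i_d * (t_rise_s + t_fall_s) * f_sw_hz

# A 400-V / 20-A switch with 20-ns + 30-ns edges at 100 kHz
# dissipates roughly 20 W in switching loss alone.
print(switching_loss_w(400, 20, 20e-9, 30e-9, 100e3))
```

The gap between this hand estimate and measured loss — driven by gate resistance, dead time, and stray inductance — is precisely the gap a circuit-level SPICE simulation is meant to close.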
The post Infineon adds SPICE-based model generation to IPOSIM platform for more accurate system-level simulation appeared first on ELE Times.



