Feed aggregator

Low-power Wi-Fi 6 MCUs preserve IoT battery life

EDN Network - 3 hours 27 min ago

Renesas has announced the RA6W1 dual-band Wi-Fi 6 wireless MCU, to be followed by the RA6W2 Wi-Fi 6 and BLE combo MCU. Based on an Arm Cortex-M33 CPU running at 160 MHz, these low-power microcontrollers dynamically switch between 2.4-GHz and 5-GHz bands in real time, ensuring a stable, high-speed connection.

The RA6W1 and RA6W2 MCUs use Target Wake Time (TWT) to let IoT devices sleep longer, extending battery life and reducing network congestion. They consume as little as 200 nA to 4 µA in deep sleep and under 50 µA while checking for data, enabling devices to stay connected for a year or more on a single battery. This makes them well-suited for applications requiring real-time control, remote diagnostics, and over-the-air updates—for example, environmental sensors, smart home devices, and medical monitors.

Alongside the RA6W1 and RA6W2 MCUs, Renesas launched two fully integrated modules designed to reduce development time and accelerate time to market. The Wi-Fi 6 (RRQ61001) and Wi-Fi 6/BLE combo (RRQ61051) modules feature built-in antennas, certified RF components, and wireless protocol stacks that comply with global network standards.

The RA6W1 MCU in WLCSP and FCQFN packages, as well as the RRQ61001 and RRQ61051 modules, are available now. The RA6W2 MCU in a BGA package is scheduled for release in Q1 2026.

Renesas Electronics 

The post Low-power Wi-Fi 6 MCUs preserve IoT battery life appeared first on EDN.

Automotive buck converter is I2C-tuned

EDN Network - 3 hours 27 min ago

Optimized for automotive point-of-load (POL) applications, Diodes’ AP61406Q 5.5-V, 4-A synchronous buck converter provides a versatile I2C programming interface. The I2C 3.0-compatible serial interface supports SCL clock rates up to 3.4 MHz and allows configuration of PFM/PWM modes, switching frequencies (1 MHz, 1.5 MHz, 2 MHz, or 2.5 MHz), and output-current limits of 1 A, 2 A, 3 A, and 4 A. The output voltage is adjustable in 20-mV increments.
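
To make the programming flow concrete, here is a minimal host-side sketch in Python using the smbus2 library. It only illustrates the general write pattern: the device address, register numbers, bit fields, and 0.6-V baseline are hypothetical placeholders and are not taken from the AP61406Q register map, which should be consulted for real values.

from smbus2 import SMBus

I2C_BUS = 1
DEV_ADDR = 0x60        # hypothetical 7-bit device address
REG_VOUT = 0x00        # hypothetical output-voltage register (20 mV per LSB)
REG_MODE = 0x01        # hypothetical mode register (PFM/PWM, switching frequency)
REG_ILIM = 0x02        # hypothetical output-current-limit register

def vout_code(volts, step=0.020, v_min=0.6):
    # Convert a target output voltage to a register code, assuming 20-mV steps
    # above an assumed 0.6-V baseline.
    return round((volts - v_min) / step)

with SMBus(I2C_BUS) as bus:
    bus.write_byte_data(DEV_ADDR, REG_VOUT, vout_code(3.3))  # request ~3.3 V out
    bus.write_byte_data(DEV_ADDR, REG_MODE, 0x02)            # e.g., forced PWM at 2 MHz
    bus.write_byte_data(DEV_ADDR, REG_ILIM, 0x03)            # e.g., 4-A current limit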

The AP61406Q uses a proprietary gate-driver scheme to suppress switching-node ringing without slowing MOSFET transitions, helping reduce high-frequency radiated EMI. It operates from an input of 2.3 V to 5.5 V and integrates 75-mΩ high-side and 33-mΩ low-side MOSFETs for efficient step-down conversion. Constant on-time (COT) control further minimizes external components, eases loop stabilization, and delivers low output-voltage ripple.

Offered in a W-QFN1520-8/SWP (Type UX) package, the converter is AEC-Q100 qualified for operation from –40°C to +125°C. Its protection suite—including high-side and low-side current-sense protection, UVLO, VIN OVP, peak and valley current limiting, and thermal shutdown—enhances reliability.

AP61406Q product page 

Diodes

The post Automotive buck converter is I2C-tuned appeared first on EDN.

SiC power modules deliver up to 608 A

EDN Network - 3 hours 27 min ago

SemiQ continues to expand its Gen3 QSiC MOSFET portfolio with 1200-V power modules offering high current density and low thermal resistance. The new seven-device lineup includes high-current S3 half-bridge, B2T1 six-pack, and B3 full-bridge modules designed to meet the needs of EV chargers, energy storage systems, and industrial motor drives.

Two of the devices handle currents up to 608 A with a junction-to-case thermal resistance of just 0.07 °C/W in a 62‑mm S3 half-bridge format. The three six-pack modules integrate a three-phase power stage into a compact housing, offering on-resistance from 19.5 mΩ to 82 mΩ, an optimized layout, and minimal parasitic effects. The two full-bridge modules combine current handling up to 120 A with on-resistance as low as 8.6 mΩ and a thermal resistance of 0.28 °C/W.

All parts undergo wafer-level gate-oxide burn-in and are breakdown-tested above 1350 V. Gen3 modules operate at lower gate voltages (18 V/-4.5 V) and reduce both on-resistance and turn-off energy losses by up to 30% versus previous generations.

The power modules are available immediately. Explore SemiQ’s entire line of Gen3 MOSFET power modules here.

SemiQ

The post SiC power modules deliver up to 608 A appeared first on EDN.

Handheld analyzers cut through dense RF traffic

EDN Network - 3 hours 28 min ago

With 120-MHz gap-free IQ streaming, Keysight’s N99xxD-Series FieldFox analyzers ensure every signal event is captured. This capability lets users stream and replay complex RF activity to quickly pinpoint issues and verify system performance. The result is deeper analysis and greater confidence that key signal details are not overlooked in the field.

The N99xxD-Series includes 14 handheld models—combo or spectrum analyzers—covering frequencies from 14 GHz to 54 GHz. Each model supports more than 25 software-defined FieldFox applications, including vector network analysis, spectrum and real-time spectrum analysis, noise figure measurement, EMI analysis, pulse signal generation, and direction-finding.

Key capabilities of the N99xxD-Series include:

  • 120-MHz IQ streaming with SFP+ 10-GbE interfaces for uninterrupted data capture
  • Wideband signal analysis and playback for troubleshooting, spectrum monitoring, and interference detection
  • Field-to-lab workflow to recreate real-world signals for lab analysis
  • High RF performance with ±0.1 dB amplitude accuracy without warm-up

A technical overview of Keysight’s FieldFox handheld analyzers and D-Series information can be found here.

Keysight Technologies 

The post Handheld analyzers cut through dense RF traffic appeared first on EDN.

MOSFETs bring 750-V capability to TOLL package

EDN Network - 3 hours 28 min ago

Now in mass production, Rohm’s SCT40xxDLL series of SiC MOSFETs in TOLL (TO-Leadless) packages delivers high power-handling capability in a compact, low-profile form factor. According to Rohm, the TOLL package provides roughly 39% better thermal performance than conventional TO-263-7L packages.

The SCT40xxDLL lineup consists of six devices, each rated for a 750-V maximum drain-source voltage, compared to the 650-V limit typical of standard TOLL packages. This higher voltage rating enables lower gate resistance and a larger safety margin for surge voltages, helping to further reduce switching losses.

In AI servers and compact PV inverters, rising power requirements coincide with pressure to reduce system size, increasing the need for higher-density MOSFETs. In slim totem-pole PFC designs with thickness limits near 4 mm, Rohm’s new devices cut footprint to 11.68×9.9 mm (about 26% smaller) and reduce package height to 2.3 mm, about half that of typical devices.

The 750-V SiC MOSFETs are available from distributors such as DigiKey, Mouser, and Farnell. For details and datasheets, click here.

Rohm Semiconductor 

The post MOSFETs bring 750-V capability to TOLL package appeared first on EDN.

Wise Integration, Powernet and KEC sign MoU to co-develop SMPS solutions for AI server power supplies in Korea

Semiconductor today - 5 hours 44 min ago
Fabless company Wise Integration of Hyeres, France, together with switched-mode power supply (SMPS) manufacturer Powernet Technologies Corp and power semiconductor firm KEC Corp (both of Seoul, South Korea), have signed a strategic memorandum of understanding (MoU) to co-develop next-generation SMPS solutions designed specifically for AI server applications in South Korea. The partnership aligns with South Korea’s push to expand AI infrastructure and build out the next generation of high-density data centers...

Splitting voltage with purpose: A guide to precision voltage dividers

EDN Network - 6 hours 39 min ago

Voltage division is not just about ratios; it’s about control, clarity, and purpose. This little guide explores precision voltage dividers with quiet confidence, and sheds light on how they shape signal levels, reference points, and measurement accuracy.

A precision voltage divider produces a specific fraction of its input voltage using carefully matched resistive components. It’s designed for accurate, stable voltage scaling—often used to shape signal levels, generate reference voltages, or condition inputs for measurement. Built with low-tolerance resistors, these dividers ensure consistent performance across temperature and time, making them essential in analog design, instrumentation, and sensor interfacing (Figure 1).

Figure 1 Representation of an SOT23 precision resistor-divider illustrates two tightly matched resistors with accessible terminals at both ends and the midpoint. Source: Author

A side note: While the term precision voltage divider broadly refers to any resistor-based circuit that scales voltage, precision resistor-divider typically denotes a tightly matched resistor pair in a single package, for example, SOT23. These integrated devices offer superior ratio accuracy and thermal tracking, making them ideal for reference scaling and threshold setting in precision analog designs.
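
Before moving to specific parts, it helps to remember that everything here rests on the basic divider relationship VOUT = VIN × R2 / (R1 + R2). The short Python sketch below (with arbitrary example values, not tied to any part mentioned here) computes that ratio for a quick sanity check.

def divider_output(v_in, r_top, r_bottom):
    # Basic resistive divider: V_out = V_in * R2 / (R1 + R2),
    # where r_top is R1 (input to tap) and r_bottom is R2 (tap to ground).
    return v_in * r_bottom / (r_top + r_bottom)

# Example: a 4:1 top/bottom ratio scales 10 V down to 2 V.
print(divider_output(10.0, 40_000, 10_000))   # -> 2.0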

As an unbiased real-world example, the HVDP08 series from KOA is a thin-film resistor network designed for high-precision, high-voltage divider applications. It supports resistance values up to 51 MΩ, working voltages up to 1,000 V, and resistance ratios as high as 1000:1.

Figure 2 The HVDP08 high-precision, high-voltage divider achieves higher integration while reducing board space requirements and overall assembly overhead. Source: KOA

Similarly, precision decade voltage dividers—specifically engineered for use as input voltage dividers in multimeters and other range-switching instruments—are now widely available. Simply put, precision decade voltage dividers are resistor networks that provide accurate, selectable voltage ratios in powers of ten. One notable example is the EBG Series 1776-X, widely recognized for its precision and reliability.

Figure 3 EBG Series 1776-X precision decade resistors incorporate ceramic protection and laser-trimmed thin films to achieve ultra-tight tolerances. Source: Miba

Moreover, digitally programmable precision voltage dividers—such as the MAX5420 and MAX5421—are optimized for use in digitally controlled gain amplifier configurations. Programmable gain amplifiers (PGAs) allow precise, software-driven control of signal amplification, making them ideal for applications that require dynamic range adjustment, calibration, or sensor interfacing.

Poorman’s precision practice

Precision does not have to be pricey. In this section, we explore how resourceful design choices—clever resistor selection, thoughtful layout, and a dash of calibration—can yield surprisingly accurate voltage dividers without premium components. Whether you are prototyping on a budget or refining a DIY instrument, this hands-on approach proves that precision is within reach.

Achieving precision on a budget starts with clever resistor selection: Choosing resistors with tight tolerances, low temperature coefficients, and stable long-term behavior, even if they are not top-shelf brands. A thoughtful layout ensures minimal parasitic effects; short traces, good grounding, and avoiding thermal gradients all help preserve accuracy. Finally, a dash of calibration—whether through trimming, software correction, or referencing known voltages—can compensate for small mismatches and elevate a humble design into a reliable performer.

While selecting resistors, it’s important to distinguish between absolute and relative tolerance. Absolute tolerance refers to how closely each resistor matches its nominal value, say ±1% of 10 kΩ. Relative tolerance, on the other hand, describes how well matched a pair or group of resistors is, regardless of their deviation from nominal. In voltage dividers, especially precision ones, relative tolerance often matters more. Even if both resistors drift slightly, as long as they drift together, the ratio—and thus the output voltage—remains stable.

As an aside, ratio tolerance refers to how closely a resistor pair maintains its intended resistance ratio, independent of their absolute values. In precision voltage dividers, this metric is key; even if both resistors drift slightly, a tight ratio tolerance ensures the output voltage remains stable. It’s a subtle but critical factor when accuracy depends more on matching than on nominal values.
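
To see why ratio matching dominates, the following back-of-the-envelope Python sketch (values assumed for illustration) compares the worst-case output error of a 2:1 divider built from two independent ±1% resistors against a pair whose ratio is held to ±0.05%.

from itertools import product

V_IN = 5.0
R_NOM = 10_000                      # nominal 10 kΩ / 10 kΩ divider
IDEAL = V_IN * 0.5

# Two independent resistors, each with ±1% absolute tolerance
worst_absolute = max(
    abs(V_IN * (R_NOM * (1 + b)) / (R_NOM * (1 + a) + R_NOM * (1 + b)) - IDEAL)
    for a, b in product((-0.01, 0.01), repeat=2)
)

# Matched pair whose ratio R1/R2 is held to ±0.05%, regardless of absolute drift
worst_ratio = max(abs(V_IN / (2 + e) - IDEAL) for e in (-0.0005, 0.0005))

print(f"independent ±1% parts : ±{worst_absolute * 1000:.1f} mV")   # about ±25 mV
print(f"±0.05% ratio-matched  : ±{worst_ratio * 1000:.2f} mV")      # about ±0.62 mV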

Having covered the essentials, we now turn to a hands-on example, one that puts theory into practice with accessible components and practical constraints.

Operational amplifier (op-amp) circuits are commonly used to scale the output voltage of digital-to-analog converters (DACs). Two popular configurations—the non-inverting amplifier and the inverting amplifier—can both amplify the signal and adjust its DC offset.

For applications requiring output scaling without offset, the goal is to expand the voltage range of the DAC’s output while maintaining its original polarity. This setup requires the op-amp’s positive supply rail to exceed the desired maximum output voltage.

Figure 4 This output-scaling circuit extends the DAC’s voltage range without altering its polarity. Source: Author

Output voltage formula: VOUT = VIN (1 + RF/RG)

Scaling in action

To scale a DAC output from 0–5 V to 0–10 V, a gain of 2.0 is required.

Using a 10-kΩ feedback resistor (RF) and a 10-kΩ gain resistor (RG), the gain becomes 2. This configuration doubles the DAC’s output voltage while preserving its zero-based reference.
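
A quick numeric check of the formula above, using the same assumed 10-kΩ/10-kΩ values, confirms the 2x scaling across the DAC range:

def noninverting_vout(v_in, r_f, r_g):
    # Non-inverting amplifier: V_out = V_in * (1 + RF/RG)
    return v_in * (1 + r_f / r_g)

for v_dac in (0.0, 2.5, 5.0):
    v_out = noninverting_vout(v_dac, r_f=10_000, r_g=10_000)
    print(f"DAC {v_dac:4.1f} V -> {v_out:5.1f} V")   # 0 -> 0, 2.5 -> 5, 5 -> 10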

You can also design op-amp circuits to scale and shift the DAC output by a specific DC offset. This is especially useful when converting a unipolar output, for example, 0 V to 2.5 V, into a bipolar range, for instance, –5 V to +5 V. But that’s a story for another day.

Precision voltage dividers may seem straightforward, but their influence on signal integrity and measurement accuracy runs deep. Whether you are working on analog front-ends, reference rails, or sensor inputs, careful resistor selection and layout choices can make or break performance.

Have a go-to divider trick or layout insight? Drop it in the comments and join the conversation.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

The post Splitting voltage with purpose: A guide to precision voltage dividers appeared first on EDN.

How LLCs unlock innovation in automotive electronics

EDN Network - 8 hours 38 min ago

A recent McKinsey mobility survey shows that automobile owners are prioritizing battery range, charging speeds, and reliability when considering an electric vehicle (EV). Automakers are responding to these consumer preferences by developing more resilient power systems with higher power density and advanced battery management systems (BMS) that maximize space while improving performance.

Regardless of manufacturer or vehicle type, EV architecture development prompts digital technology innovations. Yet tried-and-true analog technologies such as integrated magnetics offer measurable benefits, with inductor-inductor-capacitor (LLC) converters providing the stable voltage regulation and consistent response to load changes needed for EV charging.

LLC resonant circuits operating within switched-mode DC/DC power converters deliver wide output voltage control, soft switching in the primary, low voltage in the secondary, and slight changes in switching frequency—all requirements for EVs.

Because resonant converters have soft switching capabilities and can handle high voltages with nearly 98% efficiency, these devices can rapidly charge EVs while minimizing energy losses. Compact LLC resonant converter modules enable easy scalability and adaptability for different voltage requirements.

LLC resonant circuit 

As Figure 1 shows, LLC resonant converters include MOSFET power switches (S1 and S2), a resonant tank circuit, a high-frequency transformer, and a rectifier. S1 and S2 convert an input DC voltage into a high-frequency square wave.

Figure 1 An LLC resonant half-bridge converter with power switches S1 and S2, a resonant tank circuit, a high-frequency transformer, and a rectifier. Source: Texas Instruments

The resonant tank circuit consists of a resonant capacitor (Cr), a resonant inductor (Lr) in series with the capacitor and transformer (T1), and a magnetizing inductor (LM) in parallel with the capacitor and transformer. Using two inductors allows the tank circuit to respond to a broad range of loads and to establish stable control over the entire load range.

Oscillating at the resonant frequency (fR), the resonant tank circuit eliminates square-wave harmonics and outputs a sine wave of the fundamental switching frequency to the input of T1. Operating the circuit at a switching frequency at or near fR causes the resonant current to discharge or charge the capacitance just before the power switch changes state.
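
As a quick illustration of the series resonance described above, the Python sketch below evaluates fR = 1 / (2π√(Lr·Cr)) for assumed tank values; the component values are examples, not taken from any specific design in this article.

from math import pi, sqrt

Lr = 60e-6   # resonant inductance, 60 µH (assumed)
Cr = 47e-9   # resonant capacitance, 47 nF (assumed)

f_r = 1 / (2 * pi * sqrt(Lr * Cr))   # series resonant frequency of the Lr-Cr tank
print(f"f_R = {f_r / 1e3:.0f} kHz")  # roughly 95 kHz for these values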

By shaping the current waveform, the resonant tank circuit causes S1 and S2 to turn on at 0 V (zero voltage switching) and turn off at 0 A (zero current switching). The resultant soft switching increases efficiency, decreases energy losses, reduces stress on power systems, and eliminates voltage and current spikes that cause electromagnetic interference (EMI). Soft switching also enables LLC resonant converters to handle a wide range of input and output voltages.

T1 provides input/output isolation. Electrically isolating the input and output circuits prevents ground loops and minimizes interference. Isolation also prevents voltage fluctuations or transients from propagating between the input and output circuits. After T1 scales the voltage up or down, the rectifier (D1, D2, and CO) converts the sine wave into a stable DC output.

How LLC solutions support high power density 

LLC resonant converters support the growing demand for higher-power-density solutions. Since these converters operate at high switching frequencies while maintaining high efficiency, designers can integrate smaller and lighter transformers and inductors into the LLC package.

Integrating Lr and T1 into a single magnetic unit increases the converter’s power density and circuit efficiency. For EV designers, the size, weight, and cost savings gained make it possible to incorporate more functionality into limited spaces. Optimizing the T1 winding and core structure allows the converter to operate within thermal limits.

Strategically and selectively integrating protection features and intelligent control capabilities into analog controllers reduces system complexity while maintaining performance. Using LLC converters allows manufacturers to move beyond the basics toward adaptive power systems and advanced control methods.

Input power proportional control (IPPC) represents a growing focus on the intelligent power management available through LLC resonant circuits.

As shown in Figure 2, IPPC widens the control range of an LLC converter by modulating the switching frequency and comparing the input power to a control signal. By regulating the output voltage and current, the feedback loop directly controls the converter’s input power.

Figure 2 Simplified application schematic for the Texas Instruments UCC25661-Q1 LLC controller implementing the IPPC scheme. Source: Texas Instruments

Because the control signal is proportional to the input power, limiting the signal’s range also limits the converter’s power output. As a result, the control signal works as a load monitor regardless of any variations in the resonant converter’s output voltage, preventing unwanted system shutdowns while also protecting valuable system components.

LLC applications for LEVs

Light electric vehicles (LEVs) include mopeds, scooters, bikes, and golf carts. Adopting the LLC topology for onboard and external DC/DC converters in an LEV improves the charger efficiency within the battery power and voltage ranges, regardless of the charging architecture. Using an LLC resonant converter also supports the high-power density and efficiency requirements of an LEV while reducing EMI and noise.

When compared to traditional flyback converters and parallel-resonant converters, LLC converters offer specific advantages for LEVs.

One advantage is that LLC converters operate across the wide input and output voltage ranges that match LEV charging requirements. Wide-output LLC converters with IPPC work well for LEVs by supporting constant-current and constant-voltage charging.

Instead of going into burst mode with a low battery voltage, the converter maintains the operating mode and minimizes ripple into the battery. The stable operating mode shortens the time needed to charge the battery and extends battery life.

LLC applications for PHEVs and EVs

Plug-in hybrid electric vehicle (PHEV) and EV architectures can use LLC resonant circuits for the DC/DC converter, BMS, onboard charger (OBC), and traction inverter subsystems. Along with the high efficiency established through zero-voltage switching, resonant converters provide high power density and decreased switching losses.

Figure 3 is a block diagram of a DC/DC converter combined with an active power factor correction (PFC) circuit. The PFC brings the input current and voltage waveforms in phase and increases the system efficiency.

After applying AC power to the input of the PFC stage, the boost voltage from the PFC combines with the filtered voltage at the DC-link capacitor and becomes the input for the DC/DC converter.

Figure 3 DC/DC converter block diagram. Source: Texas Instruments

EV battery management systems monitor and control state of charge, state of health, and residual capacity to maintain the safe operating range of the battery cells.

Within these broad functions, the BMS monitors the voltage, current, and temperature of the batteries and protects against deep discharge, overcharging, overheating, and overcurrent conditions. The cell balancing function of a BMS ensures that each cell in a battery pack has a uniform charging and discharging rate.

Resonant converters provide precise energy management, scalability, and the isolated power needed for a BMS as represented in Figure 4. As EVs incorporate more loads, the power requirements for high- to low-voltage conversion increase and require a higher power density.

Figure 4 An isolated DC/DC converter isolates the high-voltage battery from the low-voltage battery. Source: Texas Instruments

The LLC topology is a good fit for OBC applications because it addresses the need to adapt the output voltage according to the battery’s charging voltage range. LLC resonant converters simply adjust the voltage with the switching frequency.

While traction inverters convert energy stored in the battery into instantaneous multiphase AC power to drive traction motors, LLC resonant circuits operate within the subsystems that support inverters. These subsystems provide input power protection, signal isolation, isolated and non-isolated DC/DC power supplies, and current and voltage sensing.

LLC innovation for LEVs, PHEVs, and EVs

The high efficiency and compact size of LLC resonant circuit modules maximize vehicle range while cutting costs. LLC converters do have limitations, however, when meeting the output capacity requirements of evolving technologies. Limited power capacity and performance degradation under dynamic conditions require a different approach.

As next-generation zone EV architectures become standard, newer PHEVs and EVs will rely on multiple LLC converters distributed throughout the vehicle within the zone control module to optimize distribution, preserve output power stability, and deliver higher power capacity.

New EVs will also use bidirectional DC/DC LLC converters to connect the high-voltage battery with a low-voltage supply, improve charger efficiency, facilitate charging from and discharging to the grid, and reduce space and costs.

Other improvements include producing LLC resonant converters with two transformers and integrating lightweight planar transformers into the tank circuits.

Dual-transformer converters may provide a wider output voltage range while maintaining high efficiency during charging. Using planar transformers in converters reduces the weight and size of converter modules.

EV consumer acceptance

Each innovation represents another step toward widespread consumer acceptance of EVs. In turn, EV adoption reduces greenhouse gas emissions, improves local air quality, and reduces the impact on human health.

Andrew Plummer is a product marketing engineer in Texas Instruments’ high-voltage power business. He focuses on growing the automotive, energy infrastructure and aerospace and defense sectors. He graduated with a bachelor’s degree in electrical engineering from the University of Florida.

The post How LLCs unlock innovation in automotive electronics appeared first on EDN.

Cornell develops HEMTs on single-crystal AlN substrate for RF power amplifiers

Semiconductor today - 11 hours 50 min ago
Cornell University has developed a new transistor architecture for high-power wireless electronics that addresses supply chain vulnerabilities for gallium (‘XHEMTs on Ultrawide Bandgap Single-Crystal AlN Substrates’, Advanced Electronic Materials, 29 November 2025...

Nuvoton Emphasises Need to Strengthen Taiwan-Israel R&D Collaboration

ELE Times - 11 hours 52 min ago

Nuvoton Technology showcased its leadership in international expansion by participating in the “Israeli-Taiwanese Business Seminar,” hosted by the Economic Division of the Taipei Economic and Cultural Office in Tel Aviv from November 15 to 23. Drawing attention to the practical advantages of its established R&D center in Israel, Nuvoton played a key role in the seminar, sharing on-the-ground insights from its cross-border expansion.

Nuvoton stated that investments in global hubs such as Israel are not only an expansion of its business footprint but also a critical part of its long-term development strategy. By connecting innovation talent and technology networks worldwide, Nuvoton aims to address emerging market challenges and opportunities while helping build a more resilient and competitive Taiwan-Israel technology ecosystem.

During the seminar, Nuvoton highlighted opportunities for collaboration in R&D, technology, and market development among global enterprises. The company’s practical experience offered the delegation concrete guidance for establishing operations in Israel and accessing its innovation resources, reflecting the collaborative spirit within Taiwan’s technology community.

Nuvoton emphasized the complementary strengths of Taiwan and Israel in the high-tech sector: Taiwan excels in IC design, while Israel leads in software innovation. Building on these advantages, Nuvoton has accelerated its global R&D strategy by establishing an R&D center in Israel, enabling the company to strengthen its international competitiveness.

The post Nuvoton Emphasises Need to Strengthen Taiwan-Israel R&D Collaboration appeared first on ELE Times.

SemiQ launches Gen3 1200V S3 modules for high-power industrial and EV applications

Semiconductor today - 12 hours 1 min ago
SemiQ Inc of Lake Forest, CA, USA — which designs, develops and manufactures silicon carbide (SiC) power semiconductors and 150mm SiC epitaxial wafers for high-voltage applications — has expanded its third-generation QSiC MOSFET product line, including devices with what is claimed to be an industry-leading current density and thermal resistance...

element14’s DevKit HQ: A One-Stop Development Kit Solution

ELE Times - 12 hours 15 min ago

Engineering is all about trying and testing. According to a survey conducted by element14, most engineering professionals feel that finding the right development kit is a major challenge. Identifying a holistic development kit is essential for most engineers before starting a project. They value standard interfaces and extensibility, often combining or modifying multiple kits to build prototypes and proof-of-concept designs.

element14 has come forward with DevKit HQ, a new online resource that brings evaluation boards, development kits, single-board computers (SBCs), tools, and technical documents together in one place. DevKit HQ gathers resources from key supplier product families, like Analog Devices, NXP, AMD, STMicroelectronics, Microchip, Infineon, Renesas, Raspberry Pi, BeagleBoard, Arduino, and more, for multiple purposes, such as AI/ML, IoT, sensors, wireless, motor control, and power management. This makes it easy for developers to discover and compare kits and accelerate their embedded design and innovation.

The site enables engineers to quickly find the latest development kits and modular solutions by application, along with available demo and application software. Engineers can also easily locate evaluation boards that match a supplier’s product family or series.

Additionally, the site features each kit’s datasheets, application notes, training videos, reference designs and more. Together, these resources help engineers accelerate design decisions and drive innovation across various applications, including AI, IoT, sensors, wireless, motor control and power management.

“Our mission is to make life easier for design engineers,” said Daniel Ford, Vice President of Sales at element14. “With the DevKit HQ, we’ve created the leading destination where they can search development kits by application as well as explore new technologies, experiment with the latest kits, and move from idea to prototype faster, freeing up more time to focus on innovation.”

The post element14’s DevKit HQ: A One-Stop Development Kit Solution appeared first on ELE Times.

USB-IF Hardware Certification to Anritsu for USB4 2.0 Test Solution

ELE Times - 13 hours 40 min ago

Anritsu Corporation has been certified by the USB Implementers Forum (USB-IF) for its test solution for the latest USB4 Version 2.0 (USB4 v2) communication standard.

The solution is based on the Signal Quality Analyzer-R MP1900A and provides advanced USB device evaluation capabilities. It helps improve the quality and reliability of products implementing the USB4 Version 2.0 standard, supporting widespread deployment of next-generation high-speed interfaces.

As of December 2025, USB4 v2 is the most advanced USB standard, delivering data transfer speeds of up to 80 Gbit/s — twice that of USB4 v1 (40 Gbit/s). This supports next-generation applications, such as high-resolution video transmission, external GPUs, high-speed storage, and VR/AR devices.

Furthermore, the specification significantly improves communication performance and reliability by introducing innovations including Pulse Amplitude Modulation 3-level (PAM3) signaling to improve bandwidth efficiency, the Frequency Variation Profile to enhance the stability of link training — a signal quality and initialization procedure — and a new TS2.CLKSW training sequence incorporating clock switching.

Current demand for evaluation and certification testing is driven primarily by semiconductor manufacturers producing USB4 v2 control ICs. Looking ahead, adoption is expected to expand to test houses for test equipment deployment and, in the long term, to consumer product manufacturers of USB4 v2 hubs, docking stations, and cables.

Product Overview: Signal Quality Analyzer-R MP1900A

The MP1900A is a high-performance Bit Error Rate Tester (BERT) supporting receiver tests for multiple high-speed interfaces, including PCIe, USB, Thunderbolt, DisplayPort, and 400 GbE/800 GbE. It combines industry-leading PPG technology for high-quality waveforms with a high-sensitivity error detector, precision jitter sources (SJ, RJ, SSC, BUJ), and noise sources (CM-I, DM-I). The MP1900A also supports link training and LTSSM analysis for comprehensive high-speed device evaluation.

The post USB-IF Hardware Certification to Anritsu for USB4 2.0 Test Solution appeared first on ELE Times.

As Energy-Efficient Chips Are Rising — HCLTech × Dolphin’s New Partnership Gives the Trend a Heavy Push

ELE Times - 14 hours 13 min ago

HCLTech and Dolphin Semiconductor have announced a strategic partnership to develop energy-efficient chips for IoT and data centre applications. As the world moves towards energy-efficient silicon, it is worth tracing the lines that are likely to define the industry’s future trends. For chips, energy efficiency is a natural concern because it determines a device’s longevity and reliability. According to HCLTech and Dolphin Semiconductor, the partnership aims to support enterprises seeking to improve energy efficiency and performance as computing workloads increase.

What are Energy-Efficient Chips? 

Energy-efficient chips are integrated circuits designed to perform computations while minimizing power consumption, extending battery life, reducing heat generation, and lowering operational costs. Their architectures include specialized cores, such as neural processing units (NPUs) and graphics processing units (GPUs), or, more broadly, AI accelerators rather than conventional CPUs alone, ensuring that tasks are performed on the most efficient hardware possible.

Why is it important? 

The development of energy-efficient chips matters because power consumption rises as usage grows. Cutting power requirements while keeping hardware at optimum performance is therefore essential, both for sustainability and for operating costs, since the power drawn affects the environment and the bottom line at the same time.

HCLTech X Dolphin Semiconductors Partnership 

HCLTech will integrate Dolphin Semiconductor’s low-power IP directly into its SoC design workflow, creating scalable, energy-efficient chips that handle a wide range of compute needs while keeping power use in check.

At its core, energy efficiency requires a holistic, full-stack design effort — from initial architecture to the software that ultimately runs on the chip.

The post As Energy-Efficient Chips Are Rising — HCLTech × Dolphin’s New Partnership Gives the Trend a Heavy Push appeared first on ELE Times.

Advanced GAA Chips: Minimizing Voltage Loss and Improving Yield

ELE Times - 15 hours 34 min ago

Courtesy: Lam Research

  • As advanced logic chips decrease in size, voltage loss can increase
  • An emerging solution is backside power delivery networks that use SABC architecture

The problem: As metal pitch scaling shrinks to support the next generation of logic devices, the IR (or voltage) drop from conventional frontside connections has become a major challenge.

As electricity travels through a chip’s metal wiring, some voltage gets lost because wires have resistance.

  • If the voltage drops too much, the chip’s transistors can’t get enough power and can slow down or fail.
  • In addition, the resistance of back-end-of-line (BEOL) metal lines and vias is dramatically increasing.

The solution: Backside power delivery networks (BSPDN) can address these challenges and are currently widely studied as an alternative to front-side power delivery and contact schemes.

Virtual Study Compares DBC and SABC on a GAA Device

The Semiverse Solutions team conducted a virtual study using SEMulator3D to analyze gate-all-around (GAA) devices that use BSPDN.

In the Design of Experiments (DOE), the team focused on a process window for a GAA device that uses a direct backside contact (DBC) architecture and compared it to a GAA device process window using self-aligned backside contact (SABC) architecture.

DBC architecture, used to connect contacts with source/drain structures, requires a deep silicon etch, a small edge placement error (EPE), and precise alignment when used in an advanced GAA transistor.

The Semiverse Solutions team conducted the virtual experiment to see if an SABC scheme could address these precise alignment challenges.

Analyzing the process window of a device helps engineers and researchers understand the range of manufacturing conditions under which a device can be reliably produced while meeting its performance and quality requirements.

By comparing the process windows of different architectures, researchers can identify which design offers greater tolerance to manufacturing variations, fewer defects, and better overall performance.

Figure 1 displays the major integration (process) steps for a proposed SABC scheme. The process steps are like those used during a typical GAA logic process manufacturing flow.

Figure 1. The manufacturing process flow of a proposed self-aligned backside contact (SABC) scheme

Study Methodology

The team ran multiple virtual fabrication experiments that varied the smallest critical dimensions (CD), overlay, and over-etch amount of the through-silicon via (TSV).

Virtual measurements were taken of the number of opens and shorts generated (number of nets in the structure), high-k damage (high-k material volume change), and the backside contact area of the typical structure.

The manufacturing success criteria were specified as follows:

  • Backside contact area (CT to epitaxy): ≥150 nm²
  • High-k damage: <20 nm³
  • No contact-to-metal-gate shorts

Using these criteria, the results of each virtual experiment in the DOE were classified as a “pass” or “failure” event.
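
A minimal sketch of that classification step is shown below. The field names and sample values are placeholders for illustration, not SEMulator3D output; only the stated pass/fail criteria come from the study description above.

def classify(run):
    # Apply the stated success criteria to one virtual DOE run.
    passed = (
        run["contact_area_nm2"] >= 150      # backside contact area, CT to epitaxy
        and run["high_k_damage_nm3"] < 20   # high-k material volume change
        and not run["gate_short"]           # no contact-to-metal-gate short
    )
    return "pass" if passed else "fail"

doe_runs = [
    {"contact_area_nm2": 180, "high_k_damage_nm3": 5, "gate_short": False},
    {"contact_area_nm2": 120, "high_k_damage_nm3": 2, "gate_short": False},   # under-etched TSV
    {"contact_area_nm2": 200, "high_k_damage_nm3": 30, "gate_short": True},   # over-etched TSV
]
print([classify(r) for r in doe_runs])   # ['pass', 'fail', 'fail']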

SABC Indicates Higher Yield for Advanced Logic Nodes

The DOE results are shown in Figure 2 as a set of process window contour diagrams at various CD, overlay, and over-etch amounts for both the SABC and DBC contact schemes. The green areas in Figure 2 represent “pass” results, while the red areas represent “fail” events.

Figure 2. Comparison of SABC and DBC process windows

Due to its self-aligned capabilities, the SABC approach exhibits a much larger process window (larger green area) than the DBC architecture.

The DBC process window is very narrow, especially when the TSV is 10 nm over- or under-etched. TSV failure shows up as high-k damage and source/drain-to-metal-gate shorts caused by excessive over-etching, small contact areas created by TSV under-etch, and increased EPE caused by a larger TSV CD and additional overlay errors.

The virtual study demonstrated that the SABC approach to backside power minimizes EPE and over-etch variations in the TSV process and provides a much larger and more stable process window than a DBC approach. SABC is promising for use at advanced logic nodes and may support further logic device scaling.

The post Advanced GAA Chips: Minimizing Voltage Loss and Improving Yield appeared first on ELE Times.

The Leading Five Essential Context Window Concepts In LLMs

ELE Times - 15 hours 44 min ago

Courtesy: Micron

This story outlines five essential concepts that explain how large language models process input within a context window. Using clear examples and practical insights, it covers foundational ideas like tokenization, sequence length, and attention. The goal is to help readers better understand how context affects model behavior in AI applications. We also present results from an analytical model used to estimate system behavior, to show how scaling input and output sequence lengths impacts response time. The results highlight how decoding longer outputs takes significantly more time, pointing to the importance of fast memory systems like HBM in supporting efficient inference at scale. These concepts are useful for anyone working with or designing prompts for generative AI systems.

Context window versus length

When working with large language models, it’s important to understand the difference between concepts like context window, context length, and sequence length. These terms are often used interchangeably, which can lead to confusion. In this blog, we will define and refer to them as distinct concepts.

The context window is the model’s maximum capacity: the total number of tokens it can process at once, including both your input and the model’s output. As a simple example, picture a rectangle whose area represents a 100,000-token context window.

The context length, on the other hand, is how much you’ve put into that space, which is the actual number of tokens—input tokens (blue) and output tokens (green)—currently in use during a conversation. For example, if a model has a 100,000-token context window and your input uses 75,000 tokens, only 25,000 tokens remain for the model’s response before it reaches the upper limit of the window.

Sequence length typically refers to the length of a single input or output sequence within that window. It’s a more granular measure used in model training and inference to track the length of each segment of text.

The context window sets the limit for how much information a model can process, but it does not directly reflect intelligence. A larger window allows more input, yet the quality of the output often depends on how well that input is structured and used. Once the window is full, the model may lose coherence, leading to unwanted outcomes (for example, hallucinations).

Tokens aren’t words

If the context window is defined by an upper limit (say 100,000), tokens are the units that measure what fits inside, and it’s important to understand that tokens are not words. The words you type into a prompt are fed to a “tokenizer,” which breaks down text into tokens. A single word may be split into several tokens. For example, “strawberry” becomes three tokens and “trifle” becomes two. In other cases, a word may consist of just one token, like “cake”.

St | raw | berry

We can test this with a quote from the novel “Emma” by Jane Austen.

“Seldom, very seldom, does complete truth belong to any human disclosure; seldom can it happen that something is not a little disguised or a little mistaken.”

This text contains 26 words, and when run through the tokenizer of the Mistral language model provided by lunary.ai, it produces 36 tokens. That’s about 0.72 words per token or roughly three-fourths of a word.

The ratio varies, but for English words, you might average around 0.75 words per token. That’s why a model with a 100,000-token context window (per user) does not necessarily fit 100,000 words. In practice, you might fit closer to 75,000 English words or fewer, depending on the text.

estimated tokens ≈ words × 1.33

To further check the token-to-word ratio at scale, we ran a quick analysis using eight well-known literary works from Project Gutenberg, a library of more than 75,000 free e-books. First, we counted the words in each book, then ran the texts through a tokenizer to get the token counts. After comparing the numbers, we found that the average ratio was about 0.75 words per token.

Knowing this ratio can help everyday users get more out of their interactions with AI. Most AI platforms, like ChatGPT or Claude, operate with token-based constraints. That is, they process text in tokens, not words, so it’s easy to misjudge how much content you can actually fit into a prompt or response. Because usage is often measured in tokens rather than words, knowing the ratio makes you aware of any limits so you can plan your inputs more strategically. For example, if a model has a 4,000-token input limit, that’s roughly 3,000 words. This is good to know when feeding a model a long document or dataset for tasks like finding key insights or answering questions.
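
A small Python helper based on that ~0.75 words-per-token rule of thumb (English prose only; real tokenizers vary by model and content) makes the planning concrete:

def estimate_tokens(word_count, words_per_token=0.75):
    # Rough token count for English text, using the ~0.75 words/token average above.
    return round(word_count / words_per_token)

def estimate_words(token_budget, words_per_token=0.75):
    # Rough word budget that fits in a given token limit.
    return round(token_budget * words_per_token)

print(estimate_tokens(3_000))    # ~4,000 tokens for a 3,000-word document
print(estimate_words(100_000))   # ~75,000 words fit a 100,000-token window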

Attention is not equally distributed within the context window

AI hallucinations are often misunderstood as quirky behavior or signs that a language model is buggy and unreliable. But hallucinations are not random; they often stem from how a model might process and prioritize information, which is determined by things like how well a model is trained and how it distributes attention. In transformer-based models like GPT or Claude, attention is the mechanism that helps the model decide which parts of the context are most relevant when generating a response. To better understand the concept of attention, imagine being at a noisy cocktail party. If someone calls your name, you instinctively tune in.

“Frodo! Over here!”

But what if four people call your name at once from different corners of the room?

“Frodo! It’s me, Sam!”

“Frodo! Come quick!”

“Frodo! Look this way.”

“Frodo … yesss, precious Frodo …”

You hear them all, but your focus is now split. You might even pay more attention to the voice you recognize or the one closest to you. Each sound gets a fraction of your attention, but not all equally. It’s not a perfect analogy, but this is one way you can conceive of how attention works in large language models. The model pays attention to all tokens in the context window, but it gives more weight to some than to others. And that’s why attention in large language models is often described as “weighted”, meaning that not all tokens are treated equally. This uneven distribution is key to understanding how models might prioritize information and why they sometimes appear to lose focus.
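
A toy softmax, not a real model, shows what “weighted” means in practice: raw relevance scores (made up here) are converted into attention weights that sum to 1, so every token gets some attention but a few dominate.

import math

scores = {"Sam": 2.0, "quick": 1.0, "look": 0.5, "precious": 3.0}   # assumed relevance scores
exp_scores = {tok: math.exp(s) for tok, s in scores.items()}
total = sum(exp_scores.values())
weights = {tok: e / total for tok, e in exp_scores.items()}         # softmax normalization

for tok, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{tok:<9s} {w:.2f}")   # 'precious' gets ~0.63, 'Sam' ~0.23, the rest share the remainder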

More context may or may not mean better answers

A model can scan all tokens within the context window, but it doesn’t consider each token with equal interest. As the window fills (say, up to 100,000 tokens), the model’s attention becomes more diffuse. In its attempt to keep track of everything, clarity may diminish.

When this happens, the model’s grip on the conversation loosens, and a user might experience slower, less coherent responses or confusion between earlier and later parts of the conversation thread. Hallucinations, from the Latin hallucinatus, or “gone astray in thought,” often appear at this edge. It’s important to understand that these occurrences are not signs that the model is malfunctioning. It is actually an indication that the model is reaching its threshold, where it is operating at capacity. And here is where the model may struggle to maintain coherence or relevance across long spans of input.

From the model’s perspective, earlier tokens are still visible. But as the window fills up and its attention becomes more distributed, the precision of response may degrade. The model might misattribute facts from previous prompts or fuse unrelated ideas into something that sounds coherent but isn’t. In the case of hallucinations, the model isn’t lying. It’s reaching for a reasonable response from fragments it can no longer fully distinguish, making a guess under the strain of limited attention. And to be fair, the model is working with what it has, trying to make sense of a conversation that’s grown too big to reliably focus on. Understanding attention in this way helps explain why more context doesn’t always lead to better answers.

That said, long context windows (greater than 200,000 and now reaching 1 million or more tokens) can be genuinely useful, especially for complex reasoning and emerging applications like video processing. Newer models are being trained to handle longer contexts more effectively. With better architecture and training, models can more effectively manage attention across inputs, reducing hallucinations and improving responses. So, while more context doesn’t always lead to better answers, newer models are getting better at staying focused, even when the conversation gets really long.

Sequence length affects response time

Following the explanation of attention, it’s useful to understand how sequence length affects inference. We can now ask a practical question: What happens when we vary the sequence length?

The input sequence length affects time to first token (TTFT), the time from entering the request to receiving the first output token. TTFT matters most for GPU performance, as it reflects how quickly the GPU can process the input and then compute it to output the first token. In contrast, varying the output sequence length affects inter-token latency (ITL) or the time between each generated token. This latency is more relevant to memory usage.

To explore this further, we used a first-order analytical model to estimate end-to-end latency during LLM inference. We ran the model using Llama 3 70B on a single GPU with high-bandwidth memory (HBM3E 12H, 36GB across 8 placements), and a context window of 128,000 tokens.

The chart below shows the impact of increasing input sequence length (ISL) and output sequence length (OSL) on the entire end-to-end latency. Each measurement was taken with a batch size of 1 (i.e., a single request).

Figure. End-to-end latency per user (seconds), for both output and input sequence lengths

Key takeaways

One important takeaway when measuring latency is that it takes much more time for the model to generate a long response than to process a long prompt. The model can read and understand the input all at once, which is relatively fast even for lengthy prompts. But generating a response happens token by token, with each new token depending on everything generated so far. This takes more time because the model follows an autoregressive process, meaning each token is built on the ones before it. For example, increasing the input sequence length (ISL) from 2,000 to 125,000 tokens results in only a roughly two times increase in latency. In contrast, scaling the output sequence length (OSL) across the same range leads to a roughly 68 times increase. This difference arises because longer input sequences drive more prefill computation, which can process multiple tokens in parallel. Meanwhile, decoding is inherently sequential, generating one token at a time, and that takes more time and demands much more memory bandwidth.

The implication is that longer output sequences result in longer decode times, and that means the GPU and memory subsystem remain active longer. In this context, power efficiency at the hardware level becomes especially valuable. A memory device like Micron HBM3E that runs using much less power than comparable high-bandwidth memory devices can complete identical inference tasks while using less energy.

For a user, this insight underscores the importance of optimizing prompts and managing input length (trimming any unnecessary content, for example). And if you’re building real-time apps, you can usually handle longer inputs without much trouble. But keeping the output concise may help your system stay fast and responsive.

The important role of memory for context length

Inference latency depends not only on sequence length but also on how the system manages the demands on compute and memory as it processes inputs and generates outputs. Many recently released language models now advertise context windows that exceed one million tokens. These larger context windows (when fully utilized) place greater stress on the memory subsystem, which may appear to the user as slower execution and increased runtimes. Newer memory technologies will offer higher bandwidth and larger capacity to support these larger context windows, improving response times and overall throughput (tokens per second).

But these performance gains raise questions about energy use. As inference workloads scale to millions of tokens, designing systems that use power efficiently becomes increasingly important. Systems that remain active for longer periods require more power, and memory devices designed to use less power without sacrificing bandwidth can help address this challenge. For example, Micron HBM3E consumes much less power than competing high-bandwidth memory devices. And this lower power can help reduce the amount of energy AI consumes during inference workloads involving millions of tokens.

Looking ahead, next-generation memory technologies, like HBM4 and HBM4E, are being designed to deliver even higher memory bandwidth and capacity while improving power efficiency. These improvements, which stem from advances in process technology (Micron’s use of 1-gamma DRAM), are expected to enable faster data movement with lower energy cost. Moreover, as these technologies mature, they may further reduce latency and improve throughput and energy use in large-scale AI deployments.

The post The Leading Five Essential Context Window Concepts In LLMs appeared first on ELE Times.

Revamping a Solid-State Battery Cell

ELE Times - 15 hours 57 min ago

Courtesy: Comsol

Ever experience these common annoyances? You’re about to leave for the day and realize you forgot to charge your phone. Or, you’re on the road and remember your EV needs a charge. The integration of solid-state batteries into electric vehicles, electronics, and energy storage systems — once realized — will leave problems like these in the past. Solid-state batteries have the potential to charge faster and last longer, all while being a safer option. Simulation can help battery designers investigate solid-state batteries to better predict their performance for future uses.

The Solid-State Battery: A Fervently Anticipated Development

Solid-state batteries (SSBs) use a solid electrolyte to conduct ions between both electrodes, whereas conventional batteries use a liquid electrolyte or gel polymer. This difference gives SSBs many advantages over lithium-ion batteries, such as a longer lifecycle. Batteries in current EVs typically last 5–8 years, while EVs with solid-state batteries could increase this to 15–20 years. In addition, while the average Li-ion battery experiences degradation at 1000 lifecycles, an SSB could remain at 90% original capacity after 5000 cycles.

Incorporating solid-state batteries into electric vehicles means less time waiting for them to charge.

SSBs can complete a charge cycle much faster than other battery types, too. While the typical Li-ion battery takes about 45 minutes to reach 80% charge, an SSB could reach the same charge in 12 minutes, or in as little as 3 minutes. SSBs are also safer for consumer use. Without a liquid electrolyte, they are much less flammable and volatile than other options. Plus, by avoiding liquid electrolytes and carbon anodes, they offer more energy storage density (Ref. 1).

A Design Challenge Spanning Decades

The solid electrolyte was first discovered by physicist Michael Faraday in the early 1830s, and its mechanisms and potential uses have been a subject of research ever since. Fast-forward to the 2020s, when a wide variety of automakers, electronics companies, and research institutions are investing a large portion of their R&D in SSBs. However, battery research and design are expensive and resource-intensive processes. Simulation can help battery developers investigate design challenges under different operating conditions and use cases.

SSBs are subject to a phenomenon called lithiation, in which the electrodes within the solid components of the battery grow and shrink, causing mechanical stress. In addition, the movement of ions in the battery during charge–discharge cycles causes stress and volume changes. These issues can lead to reduced lifespan and energy storage in the battery and even mechanical failure.

Multiphysics modeling can be used to analyze an SSB design. In the Heterogeneous Model of a Solid-State Battery Unit Cell tutorial model, we take you through the modeling process in the COMSOL Multiphysics software.

Modeling a Solid-State Battery in COMSOL Multiphysics

The Heterogeneous Model of a Solid-State Battery Unit Cell tutorial model simulates the charge–discharge cycle in an SSB, particularly how charge and mass transport interact with solid mechanics. The model geometry is made up of a composite positive electrode, a lithium metal negative electrode, and a solid electrolyte separator, located between both electrodes.

The geometry of the solid-state battery model.

Specialized physics interfaces and features make the setup of the model straightforward. The conservation of charge, mass, and momentum can be modeled with the Lithium-Ion Battery, Transport in Solids, and Solid Mechanics interfaces, respectively. There are also specialized features for modeling:

  • Plating at the negative electrode
  • Growth and shrinkage of the positive electrode
  • Redox reaction at the electrode–solid electrolyte interfaces
The SSB model and physics settings in COMSOL Multiphysics.

The simulation of the heterogeneous SSB evaluates certain quantities at the end of charge, including the electric and ionic potentials and von Mises stress in the solid electrolyte.

The results also include the evaluation of global quantities, including the cell voltage, state of charge, and stress in the z direction of the battery.

Paving the Way for SSBs

Looking into the mechanics of solid-state batteries with simulation can help researchers, automakers, and electronics companies incorporate SSBs into components and devices in the coming years — not decades.

The post Revamping a Solid-State Battery Cell appeared first on ELE Times.

The Rise of Smart Audio: From Sound to Intelligence

ELE Times - 16 hours 13 min ago

Courtesy: Infineon

What if your fridge warned you before it broke? Well, now it’s possible.

Imagine a refrigerator that hears a subtle vibration and predicts a failure before it happens. An oven that guides you by the sound of your food sizzling. A health patch that silently monitors your breathing patterns and alerts you to irregularities in real time.

This is not science fiction. It marks a new era, in which audio drives intuitive, human-centric interactions between people and devices.

Audio as the interface: Beyond buttons and screens

Voice is the most natural interface we have ever known. Unlike traditional inputs, speech allows us to interact hands-free and eyes-free. This makes the technology ideal for everyday tasks like cooking and driving, and for assisting people with disabilities.

But to make this seamless and personal, technology needs more than a microphone. The solution demands:

  • Persistent, ultra-efficient audio processing
  • Robust AI engineered to work alongside that processing at the edge

Power and Performance — The Technology Behind Always-On Audio

Always-on audio relies on ultra-low-power architectures designed to listen continuously without draining energy. Modern microcontroller platforms now integrate autonomous analog subsystems capable of monitoring and pre-processing sound while operating in deep-sleep modes, enabling persistent listening with minimal power consumption.
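
One way to picture this behavior in software is a cheap, always-running energy check that decides whether the heavier processing stages are worth waking at all. The Python sketch below is a generic illustration under assumed frame-size and threshold values; it is not Infineon firmware or any specific product's algorithm.

    import numpy as np

    FRAME_LEN = 256          # samples per audio frame (assumed)
    WAKE_THRESHOLD = 0.01    # RMS threshold for waking up (arbitrary, tuned per device)

    def frame_rms(frame: np.ndarray) -> float:
        """Root-mean-square energy of one audio frame."""
        return float(np.sqrt(np.mean(frame ** 2)))

    def should_wake(frame: np.ndarray) -> bool:
        """Cheap always-on check: wake the heavier pipeline only for loud frames."""
        return frame_rms(frame) > WAKE_THRESHOLD

    # Simulated input: ten near-silent frames followed by one louder burst.
    rng = np.random.default_rng(0)
    quiet = 0.001 * rng.standard_normal((10, FRAME_LEN))
    loud = 0.05 * rng.standard_normal((1, FRAME_LEN))
    frames = np.vstack([quiet, loud])

    wake_events = [idx for idx, f in enumerate(frames) if should_wake(f)]
    print(f"Woke the main pipeline on frames: {wake_events}")   # expected: [10]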

Arm Cortex-M55

At the processing core, the Arm Cortex-M55 with Helium DSP extensions delivers significantly higher performance for embedded audio tasks, providing up to three times higher efficiency for real-time signal processing, noise suppression, and on-device inference.

Neural Network Acceleration

Dedicated neural network accelerators further enhance these systems by offloading compute-intensive functions such as wake-word detection and voice activity recognition. This makes continuous listening feasible even in compact, battery-powered devices—from wearables to distributed IoT sensors—while maintaining fast response times and efficient power usage.
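
To see why this matters for battery-powered devices, the following back-of-the-envelope estimate combines assumed sleep and active currents with an assumed battery capacity; every number is a hypothetical placeholder rather than a specification of any particular MCU or accelerator.

    # Hypothetical duty-cycle power budget for an always-listening device.
    # All currents and the battery capacity are assumed values for illustration.
    sleep_current_ma = 0.02       # deep sleep with the analog wake circuit active
    active_current_ma = 5.0       # CPU plus accelerator running inference
    active_fraction = 0.01        # fraction of time spent awake (1%)
    battery_mah = 220.0           # small coin-cell-class battery

    avg_current_ma = (active_fraction * active_current_ma
                      + (1.0 - active_fraction) * sleep_current_ma)
    life_days = battery_mah / avg_current_ma / 24.0
    print(f"Average current: {avg_current_ma:.3f} mA")
    print(f"Estimated battery life: {life_days:.0f} days")    # roughly 130 days here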

Intelligence – Software brings audio to life

The DEEPCRAFT AI Suite is the engine behind transformative audio experiences. The DEEPCRAFT Voice Assistant solution brings the following features, optimized for low-power Infineon MCUs:

  • Accurate voice commands
  • Custom wake words
  • Keyword recognition

Speech-to-intent AI offers best-in-class performance with high accuracy and minimal false detections, and supports both native and non-native English speakers.

DEEPCRAFT Audio Enhancement cleans up noisy environments with robust AI techniques:

  • Noise suppression
  • Acoustic echo cancellation
  • Dynamic beamforming

These features are calibrated with easy-to-use tools for rapid integration. Developers can build, test, and deploy voice models via a no-code graphical UI, ensuring faster product development.
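
For a flavor of what an audio-enhancement building block such as noise suppression does, here is a minimal textbook spectral-subtraction sketch in Python; it is not DEEPCRAFT code, and the frame length, overlap, and noise-estimation window are arbitrary assumptions.

    import numpy as np

    def spectral_subtraction(x, frame_len=512, hop=256, noise_frames=10):
        """Textbook magnitude spectral subtraction with overlap-add reconstruction."""
        window = np.hanning(frame_len)
        n_frames = 1 + (len(x) - frame_len) // hop
        out = np.zeros(len(x))

        # Estimate the noise magnitude spectrum from the first few frames,
        # assuming they contain background noise only.
        noise_mag = np.zeros(frame_len // 2 + 1)
        for k in range(min(noise_frames, n_frames)):
            seg = x[k * hop: k * hop + frame_len] * window
            noise_mag += np.abs(np.fft.rfft(seg))
        noise_mag /= max(1, min(noise_frames, n_frames))

        for k in range(n_frames):
            seg = x[k * hop: k * hop + frame_len] * window
            spec = np.fft.rfft(seg)
            mag = np.maximum(np.abs(spec) - noise_mag, 0.0)       # subtract the noise floor
            cleaned = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n=frame_len)
            out[k * hop: k * hop + frame_len] += cleaned          # Hann at 50% overlap sums to ~1
        return out

    # Demo: half a second of background noise only, then a 440-Hz "voice" on top of it.
    rng = np.random.default_rng(1)
    fs = 16_000
    noise = 0.05 * rng.standard_normal(fs)
    tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
    noisy = noise.copy()
    noisy[fs // 2:] += tone[: fs - fs // 2]

    denoised = spectral_subtraction(noisy)
    rms = lambda s: float(np.sqrt(np.mean(s ** 2)))
    print(f"Noise-only RMS before/after: {rms(noisy[:fs // 4]):.4f} / {rms(denoised[:fs // 4]):.4f}")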

Integrated innovation – Affordability meets sophistication

Where once only expensive devices could offer clarity and smart voice recognition, PSOC Edge brings advanced edge audio to everyone.

Its built-in DSP capabilities and audio front-end middleware mean that complex processing, from multi-mic beamforming to acoustic event detection, all happens within a single, efficient platform.

Furthermore, DEEPCRAFT’s AI-driven enhancements extend these advantages to cost-sensitive products like entry-level earbuds and low-cost smart sensors, bringing premium experiences to everyone.

Your next device, powered by Infineon audio innovation

With traditional solutions, advanced features like real-time voice and sound recognition have required expensive hardware or heavy dependence on cloud-based computing, putting them out of reach for lower-end devices.

The synergy of PSOC Edge hardware and DEEPCRAFT software enables even resource-constrained devices to truly hear, understand, and act—all in real time, on ultra-low power, and with human-like intuition.

This breakthrough makes previously exclusive, cloud-dependent features available across a broader range of devices, democratizing intelligent functionality. Infineon is committed to building interfaces that empower people through natural, voice-driven interaction.

Takeaway

From predictive maintenance to voice-guided cooking, smart audio is reshaping how we live. Infineon’s integrated hardware and software make this transformation accessible to all, enabling high-quality, always-on audio experiences.

The post The Rise of Smart Audio: From Sound to Intelligence appeared first on ELE Times.

Serrated Edges: For Less Noise and Improved Fan Performance

ELE Times - 16 hours 28 min ago

Courtesy: Cadence

Understanding Noise Reduction in Industrial Fans

Industrial fans are widely utilized across various sectors, including manufacturing, automotive, and energy production, playing a vital role in ventilation and cooling. However, a notable drawback of these powerful machines is the significant noise they produce, which can range from 70 to 120 decibels. A primary contributor to this noise is the aerodynamic turbulence created by the fan blades, and reducing it remains a continuing focus of research.

One promising avenue for reducing this noise involves passive noise mitigation methods, such as modifying the trailing edges of the fan blades. By incorporating designs with features such as sawtooth or serrated edges, we can effectively reduce noise levels without compromising performance. Computational fluid dynamics (CFD) studies of industrial fan designs can help pinpoint the optimal configuration that enhances performance and minimizes operational noise.

Sawtooth and combed-sawtooth trailing-edge serrations (Avallone et al., 2018)

In the webinar on CFD for Turbomachinery: Boost Performance & Control Noise, Antonis Karasavvidis, principal customer service engineer, and Domenico Mendicino, senior product engineering manager, examine a case study on the CFD analysis of industrial fan blades with serrated edges to understand how these modifications can effectively reduce the noise and enhance performance. This blog provides an overview of the case study presented in the webinar.

Overview: CFD Simulation of Industrial Fan with Serrated Edges

This case study examines the aerodynamic and acoustic performance of a ventilation fan, focusing on modifications to the blade design and their impact on airflow and noise characteristics under turbulent flow conditions. The baseline ventilation fan was initially created using mean-line design tools, achieving a blade-tip Mach number of about 0.2. The design features a bell mouth at the inlet and blades constructed in three sections using NACA 65 profiles. This foundational design serves as a benchmark for subsequent modifications and performance evaluations.
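
For a sense of scale, a blade-tip Mach number of about 0.2 translates into a tip speed and, for an assumed fan diameter, a rotational speed; the 0.8-m diameter below is purely illustrative, since the exact geometry is not given here.

    import math

    mach_tip = 0.2                    # blade-tip Mach number from the case study
    speed_of_sound = 343.0            # m/s in air at roughly 20 degrees C
    tip_speed = mach_tip * speed_of_sound            # about 69 m/s

    diameter_m = 0.8                  # assumed fan diameter, purely for illustration
    rpm = tip_speed / (math.pi * diameter_m) * 60.0
    print(f"Tip speed ~ {tip_speed:.0f} m/s, ~ {rpm:.0f} rpm for a {diameter_m} m fan")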

Blade Variations and Design Enhancements

The study examines two types of serrated trailing edges added to the baseline design to achieve noise reduction and potential performance enhancements. These include:

  • Variable Serration: A serration pattern applied with varying geometry along the blade’s trailing edge
  • Uniform Serration: A consistent pattern cut along the trailing edge

Further enhancements include mechanical features such as embossing, pivots, and fillets, which are standard in this type of turbomachinery. Assessing these blade variations allows for comprehensive insight into their aerodynamic and acoustic effects.

Mesh Generation Workflow for Accurate Simulation

In this case study, Fidelity AutoGrid generates a high-quality, low-Reynolds-number mesh of approximately 2 million cells in about 20 seconds for the baseline design. This mesh is a structured multi-block grid with matching nodes on the periodic boundaries.

Given the complex geometries associated with the serrated trailing edges, an advanced mesh generation workflow was implemented, utilizing an unstructured mesh to capture the complex blade geometry while keeping the high-quality structured multi-block grid for most of the flow path. Utilizing Fidelity AutoGrid and ANSA, structured and unstructured grid strategies were combined to capture the intricate details efficiently.

Results of CFD Simulations

Using the GPU-enabled Fidelity Flow Solver, the simulations investigated the aerodynamic performance of the baseline design, uniform, and variable serrated blades. The solver provided rapid convergence within 200 iterations for the steady-state simulation and 3,600 time steps for an unsteady run with 10 inner iterations. Leveraging GPU acceleration on the Cadence Millennium platform provided high-fidelity results within minutes, even for the mixed-grid simulations.

The results indicated:

  • Trailing Edge Effects: Serrations alter the pressure field near the trailing edges, particularly influencing the mixed-out flow downstream and the wake width
  • Geometric Influence: Longer serration teeth facilitated enhanced energy exchange, correlating with improved aerodynamic performance

Additionally, the hub’s pivot and other mechanical features induced secondary flows, disrupting velocity profiles at the outlet and creating vortices, especially in the serrated configurations.

Turbulent viscosity ratio distribution downstream of the blade for the baseline and uniform serration design

Noise Prediction and Analysis

The study evaluated noise characteristics through pressure fluctuations downstream of the trailing edge using both stationary and moving probes at different span heights. Key findings include:

  • Stationary Probes: Minor differences in noise levels at various heights, dominated by blade-passing frequencies
  • Moving Probes: Significant noise reduction effects at higher spans with serrated blades, while lower spans were governed by turbulence from the pivot and other design complexities

Pressure fluctuations from the three probes located at span heights of 25%, 50%, and 75% on three different designs.
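
The probe analysis described above can be sketched as a short post-processing script: compute the expected blade-passing frequency and look for it in the spectrum of a pressure-fluctuation signal. The signal below is synthetic, and the blade count and shaft speed are assumed values rather than the webinar's operating point.

    import numpy as np

    # Assumed operating point, purely for illustration (not the webinar's values).
    n_blades = 9
    shaft_rpm = 1_500
    bpf_hz = n_blades * shaft_rpm / 60.0              # blade-passing frequency = 225 Hz

    # Synthetic probe signal: a BPF tone plus broadband "turbulence" noise.
    fs = 10_000                                       # sampling rate [Hz]
    t = np.arange(2 * fs) / fs                        # 2 s of data
    rng = np.random.default_rng(2)
    p = 5.0 * np.sin(2 * np.pi * bpf_hz * t) + rng.standard_normal(t.size)

    # Amplitude spectrum of the pressure fluctuations.
    spectrum = np.abs(np.fft.rfft(p - p.mean())) / t.size
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
    peak_hz = freqs[np.argmax(spectrum)]
    print(f"Expected BPF: {bpf_hz:.0f} Hz, spectral peak found at {peak_hz:.1f} Hz")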

This case study highlights the aerodynamic and acoustic advantages of serrated trailing edges in ventilation fan design. By leveraging advanced mesh generation and GPU-based CFD solvers, the study achieved efficient simulations and precise results. The findings emphasize the importance of optimizing serrated geometries and conducting far-field noise analyses to refine fan performance, reduce noise emissions, and enhance design efficiency.

The post Serrated Edges: For Less Noise and Improved Fan Performance appeared first on ELE Times.

Happy Workbench Wednesdays! A bunch of folks advised that I should clean up my space. Not done yet, but it’s a start

Reddit:Electronics - 21 hours 40 min ago

It’s still a mess; I just reappropriated the mess to my desk for sorting later. But yeah, this environment wasn’t fit for doing anything, and it showed in the quality of my work-work (permanent work-from-home employee) as well as the projects I had lined up on this desk. Now at least my bench is somewhat tidy, and I actually figured out the issue with this HP frequency counter.

submitted by /u/Prijent_Smogonk
