Feed aggregator
EPC Space adds EPC7C010 and EPC7C011 half-bridge buck platforms for high-rel and rad-hard applications
NUBURU wins counter-drone directed-energy order from government defense electronics organization in Asia–Pacific
Silvaco expands partnership with APEC on silicon carbide power device development
💥 Competition: "Advanced Fundamental Science in Ukraine 2027–2029"
📢 The National Research Foundation of Ukraine has announced a call for research project proposals, "Advanced Fundamental Science in Ukraine 2027–2029"
📢 The National Research Foundation of Ukraine has announced the "Individual Research Projects 2027–2028" competition
🔹 The "Individual Research Projects 2027–2028" competition is aimed at supporting timely individual projects by Ukrainian scientists carrying out advanced scientific research and development.
The system architect’s sketchbook: The football has main character energy


Deepak Shankar, founder of Mirabilis Design and developer of the VisualSim Architect platform for chip and system design, has created this cartoon for electronics design engineers.
The post The system architect’s sketchbook: The football has main character energy appeared first on EDN.
Aehr gains initial order from new silicon photonics transceiver customer
What is the EDA problem worth solving with AI?

AI has become EDA’s favorite buzzword, but behind the keynotes and product names the reality is far messier. Cadence, Synopsys, and Siemens EDA are racing to brand incremental heuristics as “platform AI,” while agentic startups promise copilots that mostly smooth over the pain of using legacy tools.
At the same time, giants in the chip design industry—the users of EDA—like Samsung and Nvidia are quietly assembling their own internal AI stacks, universities are sidelined from real industrial data, and foundation model labs like OpenAI and DeepMind are treated as commodity providers of sophisticated pattern-matching systems rather than as creators of true intelligence.
This article argues that all four camps are, in different ways, missing the real opportunity: using AI to change what kinds of hardware–software systems we can verify at all, rather than just speeding up what we already do.
It traces how business incentives, closed ecosystems, and data hoarding are holding the field back—and outlines what a genuinely transformative, open, and collaborative AI-for-chips ecosystem would need to look like.
The current AI content in EDA
For the first time in decades, chip design feels like it’s on the verge of a genuine reset. AI isn’t just a new knob on a timing engine or another heuristic in the regression farm; it’s a chance to rethink how we understand, verify, and evolve insanely complex hardware–software systems.
The question is no longer whether AI will touch chip development, but how deep it will go—and whether we’ll use it merely to polish old workflows or to expand what’s possible to design and prove correct at all.
But are we currently heading in a direction worthy of the problem? It's not difficult to imagine what such a future would look like: specialized LLMs, APIs to connect EDA tools, serious research, and the exchange of representative user data to optimize flows.
But is the industry currently set up this way?
The big 3 vendor perspective(s)
The EDA industry is loudly declaring that AI has arrived. Cadence, Siemens EDA, and Synopsys (the big three) all showcase “AI-driven” platforms, “agentic” workflows, and “generative” capabilities in their keynotes. Agentic startups promise AI copilots for chip design.
Samsung, Nvidia, and other mega-customers are quietly building their own internal AI stacks. And in the background, universities and foundation model labs like OpenAI and DeepMind are doing their own thing, mostly disconnected from this industrial theater.
Look past the branding and you see something much less coherent: four camps, each optimizing for its own incentives, and none addressing the hardest verification and design problems in a serious, integrated way.
The first camp is the big three. One has a narrative that is aggressively polished: AI as a unifying fabric across architecture, implementation, verification, and signoff. On paper, it’s exactly the right idea. In practice, most of what’s publicly visible is a scattering of ML and LLM features bolted onto existing products, wrapped in a platform story that is much stronger in marketing than in reproducible methodology.
There are claims about AI-guided coverage closure and scenario generation, but far fewer detailed case studies that a skeptical verification lead could take apart and rely on. Technically, the narrative shows the company is doing useful work; strategically, it's primarily about defending revenue and establishing itself as the "AI platform" customers must buy into.
A second narrative takes a different tone: more pragmatic, less breathless. Their AI pitch is 10–30% improvements in regression time, PPA closure, and debug efficiency. They emphasize that ML is built into the solvers and optimizers rather than exposed as a gimmicky chatbot layer.
For organizations taping out serious silicon, this is credible and attractive: keep existing flows and get incremental wins. But that’s also the problem. It’s AI as advanced heuristics, not AI as a rethinking of verification for trillion-cycle, software-heavy, multi-die systems. The message is “do the same thing, just a bit faster,” which is business-rational and intellectually timid.
And the third narrative, for its part, grounds its AI story in hardware-assisted verification and DFT. They are at least honest about where the real pain is: emulation farms straining under 40‑billion‑gate chiplet designs; massive software stacks; and DFT and power analysis workflows that choke traditional environments. Their use of AI is mostly about better resource utilization, faster compiles, accelerated DFT workloads on emulators, and automated generation of reports and transactors.
This is important, and some of it is genuinely innovative on the infrastructure side. However, it mostly skirts the core question of correctness. There is very little about AI for deep semantic understanding of designs, for test synthesis, for inferring invariants, or for blending learning with formal reasoning at scale. This narrative is focusing on shoveling the verification mountain more efficiently, not on changing the shape of the mountain.
Across all three incumbents, the pattern is consistent. They are not leading on foundational AI for verification. They are inserting ML/LLM features into their products in ways that strengthen their moats and justify platform lock-in. Their AI is largely proprietary, closed, and bound to a single vendor ecosystem. It’s technically competent and strategically defensive.
AI startups—new “Tabula Rasa” approaches
The second camp—agentic AI vendors like ChipAgents, Moore’s Lab, and Bronco AI—looks more disruptive at a glance. They don’t try to build the solvers; instead, they target the workflow of the engineer. These systems ingest RTL, testbenches, logs, coverage reports, specifications, bug trackers, and wikis.
They use large language models plus tool APIs to answer questions like “Why did this regression fail?” or “What should I do next?” They can orchestrate multi-step flows: launch regressions, analyze results, file tickets, update documentation, and propose follow-up tests.
This is a genuine improvement over the current state of affairs where engineers burn countless hours on log archaeology and context switching between silos of information. But being critical, agentic AI today is far better at smoothing human pain points than at addressing the core technical difficulty of verification. These systems sit on top of the incumbents’ tools and rely on whatever APIs those tools expose.
If those APIs are thin, unstable, or intentionally limiting, the "agent" degrades into a clever log parser. And because current LLMs are still brittle on precise semantics, concurrency, and strict correctness, most agentic systems are pattern matchers and orchestrators, not genuine reasoning engines about hardware behavior. They can triage, guide, and accelerate, but they rarely change what you can prove about a design.
The giant users
The third camp consists of the giant end users like Samsung and Nvidia, who look at all of this and decide to build their own AI ecosystems. They have reasons the vendors can only envy: vast proprietary design portfolios, massive software workloads, custom verification flows, and decades of institutional memory about failures and workarounds. They do something closer to what should have existed from the beginning.
They build internal copilots and agents that understand their architectures, coding styles, constraints, safety regimes, and business priorities. They integrate across the big three vendors’ tools, and a forest of in-house tools. They treat the vendors’ products as engines behind the scenes and construct a domain-specific AI layer on top.
From their point of view, this is the only rational approach. For the ecosystem, it has a downside. Each large customer ends up recreating similar internal stacks in private: similar integrations, similar prompt engineering, similar hacks to get around tool limitations. None of this is published or generalized. The most advanced “AI for chips” work is happening inside the firewalls of a few giants, and the lessons do not propagate.
It is effective and myopic at the same time.
The academic perspective
Meanwhile, the fourth camp, university research, occupies an awkward and increasingly marginal position. Historically, academia has been where the big conceptual leaps in verification and synthesis occurred: SAT/SMT-based reasoning, CEGAR, IC3/PDR, and many other ideas that quietly underpin modern tools.
Today, universities explore promising combinations of learning and formal reasoning, program synthesis, and new abstractions for system behavior. But they generally lack access to full-scale industrial designs, closed commercial tools, and realistic data. Tool vendors are hesitant to open their ecosystems; customers are understandably cautious about sharing real designs. Funding pressures drive many projects toward small, benchmark-driven demonstrations rather than risky, large-scale collaborations.
The result is that some of the most interesting ideas—how to fuse symbolic reasoning with learned models, how to automatically infer specifications, and how to reason about software and hardware jointly—are explored on toy problems with no clear path into mainstream flows.
The industry, for its part, is busy shipping incremental ML wrappers, and hardly anyone is building serious bridges between the two worlds. It’s not that universities lack relevance; it is that the industry has structured itself such that the most radical research is almost guaranteed to remain peripheral.
The model foundations
Overlaying all of this are the foundation model labs: OpenAI, Anthropic, Google DeepMind, Meta, and others. These organizations are building the most capable general reasoning systems currently available, and they are rapidly evolving techniques for program synthesis, tool use, and formal-ish reasoning in natural language environments. Yet, in the EDA world, they are mostly treated as commodity model providers: grab GPT or Claude, fine-tune a narrow layer, wire up a chat interface to data logs, and call it an AI feature.
What is largely missing is serious, domain-driven co-design: injecting the structure of hardware, formal semantics, type systems, property languages, and symbolic engines into the models themselves, and conversely exposing the models’ strengths back into the verification stack.
Foundation models will never be optimal for RTL and concurrency out of the box, but the EDA incumbents have done very little to create the conditions under which such specialization could happen in a principled way. If and when one of the big model labs decides that “programs that compile to silicon” is a strategic domain, the current generation of vendor platforms will likely look quaint.
Outlook: Is the industry solving the right problem(s) and what’s the problem worth solving?
Taken together, these four camps are all underperforming relative to what is technically possible. The big three are shipping incremental heuristics and calling them platforms. Agentic vendors are improving workflows but are constrained to shallow semantics.
Samsung, Nvidia, and their peers are building powerful but private stacks that do not lift the state of the art for anyone else. Universities are generating genuinely new ideas without real channels for impact. Foundation model labs are shaping the AI substrate, but the interface with hardware design is thin and unimaginative.
The future that would move the needle is not mysterious. It would involve foundation models explicitly specialized and constrained by rich formal and domain structures; EDA tools exposing deep, stable APIs so that both research systems and agentic orchestrators can drive real flows; serious industrial–academic collaborations around real designs, software workloads, and verification obligations; and end users like Samsung and Nvidia contributing abstractions, interfaces, and benchmark problems instead of quietly hoarding bespoke solutions.
Instead, the industry is drifting toward a patchwork of proprietary “AI experiences” bound to each vendor, plus a small number of sophisticated but opaque internal efforts at a handful of giants. The risk is that we declare victory far too early—that “AI in EDA” hardens into a set of shallow, walled-garden add-ons while the central challenge of scalable correctness for software-heavy, multi-die systems remains largely unsolved.
The real question is not who can generate the flashiest AI marketing or the neatest chatbot demo inside an integrated design environment (IDE). It’s who is willing to open enough of their stack, share enough structure and data, and collaborate deeply enough that AI can change what we are capable of verifying at all, not just shave a few percent off the run time of regressions we already know how to run. Right now, no one in this ecosystem can honestly claim that mantle.
“A new hope”
Despite the current mess of walled gardens, shallow copilots, and private AI stacks, the ingredients for something far better are finally on the table. We have foundation models that can reason over code, decades of formal methods waiting to be supercharged rather than sidelined, and a new generation of engineers who are comfortable treating tools as collaborators, not black boxes.
If vendors open real APIs, if giants like Samsung and Nvidia share abstractions instead of just artifacts, and if universities and model labs are invited into serious, data-rich collaborations, AI can do more than accelerate today’s flows—it can change what we dare to design.
The hopeful view is simple: the next great leap in chips won’t come from any one camp winning the landgrab, but from all of them finally deciding that solving the hard problems together is more valuable than owning the buzzword alone.
Will we get there? Only time will tell.
Simon Davidmann is an EDA industry pioneer and serial technology entrepreneur with over 40 years of experience in simulation and verification. His career has been instrumental in shaping the foundational languages and methodologies used in modern chip design, particularly those now critical for AI/ML hardware. Davidmann was the co-creator of Superlog, which became SystemVerilog. After selling Imperas to Synopsys in 2023 and serving as Synopsys VP for Processor Modeling & Simulation, he left the company and is now an AI + EDA researcher at the University of Southampton, UK.
Related Content
- AI features in EDA tools: Facts and fiction
- EDA’s big three compare AI notes with TSMC
- DAC 2025: Towards Multi-Agent Systems In EDA
- How AI-based EDA will enable, not replace the engineer
- Next Gen AI EDA Startups Have Potential to Disrupt Design Automation
The post What is the EDA problem worth solving with AI? appeared first on EDN.
UK Semiconductor Centre appoints two new directors
Metallium announces off-take agreement with Indium Corp for critical & precious metals including gallium and germanium
Latest issue of Semiconductor Today now available
BugBuster – Open-source, open-hardware all-in-one debug & programming tool built on ESP32-S3
Hey everyone, I’ve been working on BugBuster, an open-source/open-hardware debug and programming instrument designed to replace a pile of bench equipment with a single USB-C connection. The goal: give you a device that can program, debug, and manage power and peripherals remotely, so multiple users can share access to physical hardware over the network.
Repo: https://github.com/lollokara/bugbuster
What it is
At its core it’s a software-configurable I/O tool built around the Analog Devices AD74416H and an ESP32-S3. All 12 smart I/O pins are dynamically programmable — you assign their function in software at runtime.
I/O specs:
∙ Logic I/O: 1.8 V to 5 V compatible
∙ Analog input: -12 V to +12 V, 24-bit ADC
∙ Analog output: 0-12 V or 0-25 mA (source and sink)
∙ 4 channels can be connected to the high-voltage ADC/DAC simultaneously
∙ The ESP32-S3 exposes a second USB CDC port that can map a serial bridge to any of the 12 I/O pins directly from the desktop app
Measurement modes per channel: voltage input/output, current input/output (4-20 mA loop), RTD (2/3/4-wire), digital I/O, waveform generation (sine, square, triangle, sawtooth up to 100 Hz), and real-time scope streaming. A 32-switch MUX matrix (4× ADGS2414D) lets you route signals flexibly between channels.
All onboard supplies are fully programmable:
∙ USB-C PD negotiation via HUSB238 (5-20 V input, up to 20 V @ 3 A = 60 W)
∙ Two adjustable voltage domains (3-15 V each, DS4424 IDAC on LTM8063 feedback)
∙ One programmable logic voltage domain
∙ Each output port is e-fuse protected (TPS1641x); current limits and enables are set in software
∙ All calibrated with NVS-persisted curves
This means you can power your DUT, set its logic level, and adjust supply voltages programmatically, all remotely.
OpenOCD HAT (coming)
An expansion HAT based on the RP2040 and Renesas HVPAK will add:
∙ OpenOCD - JTAG/SWD programming and debugging of targets
∙ Additional high-voltage functions from the HVPAK
∙ More I/O expansion
I’m ordering PCBs next week. Everything is open hardware and software; on the software side, the structure is:
∙ Firmware: ESP-IDF + PlatformIO, FreeRTOS dual-core (ADC polling, DAC, fault monitor, waveform gen, WiFi all concurrent)
∙ Desktop app: Tauri v2 backend (Rust) + Leptos 0.7 frontend (WASM), 17 tabs covering every hardware function
∙ Protocol: Custom binary BBP over USB CDC with COBS framing, CRC-16, and < 1 ms round-trip
∙ Hardware: Altium Designer, schematics and layout in the repo
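The protocol bullet (COBS framing plus CRC-16 over USB CDC) is a common pattern for binary links over byte streams. Below is a minimal host-side sketch in Python of how such framing typically works; the byte order, CRC variant (CRC-16/CCITT-FALSE), and the `frame` helper are illustrative assumptions, not details taken from the BugBuster repo, whose firmware is C (ESP-IDF) with a Rust desktop app.

```python
# Illustrative sketch of COBS + CRC-16 framing for a binary protocol over a
# byte stream (e.g., USB CDC). Specifics are assumptions, not the actual BBP
# definition from the BugBuster repo.

def crc16_ccitt_false(data: bytes) -> int:
    """Bitwise CRC-16/CCITT-FALSE (poly 0x1021, init 0xFFFF) -- an assumed variant."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def cobs_encode(data: bytes) -> bytes:
    """Consistent Overhead Byte Stuffing: removes 0x00 bytes from the payload
    so a raw 0x00 can unambiguously delimit frames on the wire."""
    out = bytearray([0])          # placeholder for the first code byte
    code_idx, code = 0, 1
    for byte in data:
        if byte == 0:
            out[code_idx] = code  # close the current run at the zero
            code_idx, code = len(out), 1
            out.append(0)
        else:
            out.append(byte)
            code += 1
            if code == 0xFF:      # maximum run: 254 data bytes per code byte
                out[code_idx] = code
                code_idx, code = len(out), 1
                out.append(0)
    out[code_idx] = code
    return bytes(out)

def frame(payload: bytes) -> bytes:
    """Append CRC, COBS-encode, and terminate with the 0x00 delimiter."""
    crc = crc16_ccitt_false(payload)
    return cobs_encode(payload + crc.to_bytes(2, "big")) + b"\x00"

# Example: a hypothetical "set channel 3 to voltage-output mode" packet.
print(frame(bytes([0x01, 0x03, 0x10])).hex(" "))
```

Because the encoded stream contains no zero bytes except the delimiter, a receiver can resynchronize after any corrupted frame by simply scanning for the next 0x00, which is the usual reason this combination is chosen for sub-millisecond request/response links.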
AOI showcases 25dBm ultra-high-power ELSFP for next-gen AI infrastructure
APEC 2026 showcases advances in power electronics

The annual Applied Power Electronics Conference & Exposition (APEC 2026) showcases hundreds of companies that exhibit their latest component and technology advances for system power designers across a wide range of industries. Many of these devices deliver on growing requirements for higher efficiency and higher power density, along with simplifying design to reduce complexity and accelerate time to market.
Power device manufacturers claim major technology advances, including topologies and packaging, for applications ranging from AI data centers and humanoid robotics to fast-charging mobile devices. Still a big area of development is wide-bandgap (WBG) semiconductors, including gallium nitride (GaN) and silicon carbide (SiC) power devices, addressing the need for simpler designs and more flexibility.
Here is a selection of power devices featured at APEC 2026 that target improvements in efficiency and power density, along with simplifying design and saving board space. These are used in a wide range of applications, including AI data centers, appliances, automotive, e-mobility, industrial automation, and robotics.
Breakthroughs and advances
Offering an alternative to resonant power designs, Power Integrations (PI) announced a topology that it calls a breakthrough for flyback power supply design by extending the power range of flyback converters to 440 W. The TOPSwitchGaN flyback IC family combines the company’s PowiGaN technology with its TOPSwitch IC architecture, reducing complexity and improving manufacturability. It can also eliminate heat sinks in many cases, according to PI, and shorten design time and lower total system cost.
TOPSwitchGaN ICs feature 92% efficiency across the load range—from 10% to 100% load—and exceed European Energy-related Products (ErP) regulations at less than 50-mW power consumption for standby and off modes, all without the need for synchronous rectification, PI said. They are suited for high-end appliances, e-bike chargers, and industrial applications.
PowiGaN switches deliver a much lower on-state resistance (RDS(on)) than silicon, which reduces conduction losses, dramatically increasing the power capability of flyback converters, PI said. Thanks to the integration of the 800-V PowiGaN switches, the devices can operate at switching frequencies of up to 150 kHz to minimize transformer size. Other specs include no-load consumption at below 50 mW at 230 VAC, including line sense, and up to 210 mW of output power for 300-mW input at 230 VAC to run housekeeping functions when units are in standby mode.
For ultra-slim designs, TOPSwitchGaN ICs are available in low-profile eSOP-12 surface-mount packaging that enables 135 W (85–265 VAC) without a heat sink for applications such as appliances. These devices are also available in an eSIP-7 package, whose vertical orientation minimizes the printed-circuit-board (PCB) footprint; it has a thermal impedance equivalent to a TO-220-packaged part. Mounting a metal heat sink extends the power range for applications including power tools, e-bikes, and garage openers.
Reference designs include the DER-1079 (60-W wide-range isolated flyback power supply unit [PSU] for appliances), the DER-1019 (356-W high-line [89 V/4 A] isolated flyback industrial PSU), and the RDR-1018 e-bike charger kit (168-W wide-range isolated flyback design).
Power Integrations’ TOPSwitchGaN flyback ICs (Source: Power Integrations)
pSemi, a Murata company, also claimed groundbreaking power products, targeting high-energy-density applications. At APEC 2026, pSemi unveiled the PE26100 multilevel buck converter for fast-charging mobile devices and the PE25304 advanced integrated charge pump switching-capacitor power module to enable high-efficiency power conversion in humanoid robotic, dexterous-hand power applications.
The PE26100 represents an expanded application focus for pSemi’s high‑performance multilevel buck converter, now optimized for main, direct battery charging in next‑generation smartphones, tablets, and other compact mobile devices. It delivers fast‑charging capability, high output current of up to 6 A, and high thermal performance in an ultra‑thin form factor for space‑constrained consumer electronics.
pSemi said the architecture and performance characteristics make it uniquely suited for today’s transition toward high‑power USB Power Delivery (USB‑PD) and programmable power supply (PPS) fast‑charging ecosystems. Supporting 4.5-V to 18.5-V input, the device enables four‑level buck mode for higher USB‑PD voltage inputs and three‑level buck mode for mid‑ to low input voltages. For USB PPS applications, the PE26100 can also operate as a fixed‑ratio, capacitor‑divider charge pump, offering divider ratios of 2:1 and 3:1 depending on programmed input voltage.
The PE25304 is an advanced integrated charge pump switching‑capacitor power module for high efficiency and performance in space‑constrained, high‑power applications. Designed to divide input voltage by four, the PE25304 is purpose‑built for 48-V input architectures, with a wide operating range from 20 V to 60 V, making it suited for dexterous-hand robotics and mechatronic systems. It can also be used in drones, medical devices, embedded AI modules, and industrial automation systems.
The module is housed in an ultra-low-profile package (2 mm) and can deliver up to 72 W of output power. It also features a 97% conversion efficiency, reducing power loss and thermal buildup.
Texas Instruments (TI) unveiled several isolated power modules for applications from data centers to electric vehicles that require improvements in power density, efficiency, and safety. In particular, the UCC34141-Q1 and UCC33420 isolated power modules leverage TI’s IsoShield technology. This is a proprietary multichip packaging solution that delivers up to 3× higher power density than discrete solutions in isolated power designs and shrinks solution size by as much as 70% by packing more power into smaller spaces while reducing area, cost, and weight.
Traditionally, power designers use power modules to save board space and simplify design. Advancements in packaging technology such as the IsoShield enable higher performance and efficiency gains. The IsoShield copackages a high-performance planar transformer and an isolated power stage, offering functional, basic, and reinforced isolation capabilities.
It enables a distributed power architecture, helping manufacturers meet functional safety requirements by avoiding single-point failures, TI said. In addition to shrinking the solution size, it delivers up to 2 W of power for automotive, industrial, and data center applications that require reinforced isolation. For example, the increased power density helps deliver lighter and more efficient EVs that extend range and improve performance.
TI also announced other advancements in data centers, automotive, humanoid robots, sustainable energy, and USB Type-C applications, including an 800-V to 6-V DC/DC power distribution board. Pre-production and production quantities of the isolated power modules, along with evaluation modules, reference designs, and simulation models, are available now on TI.com.
TI’s UCC34141-Q1 and UCC33420 isolated power modules (Source: Texas Instruments Inc.)
MaxLinear Inc. unveiled its modular intelligent power management solution for next-generation broadband system-on-chip (SoC) designs. The platform includes the MxL7080 power management controller, MxL76500 smart regulating stage (SRS) modules, and high-efficiency MxL76125 22-V/15-A synchronous buck regulator. It delivers a thermally optimized power architecture for high‑bandwidth, multi-service access platforms, including cable, fiber, and fixed wireless access gateways; Ethernet routers; and customer premise equipment.
The platform addresses the need for scalable, multi-rail power management architectures capable of supporting higher power density, tighter voltage tolerances, and improved thermal performance as SoC designs get more complex.
The MxL7080 power management controller, paired with four MxL76500 SRS modules, provides a reference‑based, multiphase power architecture for high‑performance SoCs. This architecture provides improved thermal distribution to reduce localized hotspots, a simplified layout and routing flexibility, and precise multi‑rail sequencing with dynamic voltage scaling support.
The MxL76125 buck regulator, housed in a 4 × 5-mm QFN package, enhances point‑of‑load (PoL) flexibility for complex broadband and access platforms. It offers a wide 5-V to 22-V input voltage range supporting 5-V, 12-V, and 20-V system rails and high efficiency up to 96%, with light‑load PFM mode to reduce idle power. Other features include a fast transient response using COT‑based control with ceramic output capacitors and integrated protection including OCP, OVP, OTP, UVLO, and short‑circuit protection.
The complete (MxL7080 + MxL76500 + MxL76125) power solution is optimized for multi-access gateway platforms. These devices are available now in RoHS-compliant, green/halogen-free, industry-standard packages. Evaluation boards and samples are available at the MxL7080, MxL76500, and MxL76125 product pages.
MaxLinear’s intelligent power management solution (Source: MaxLinear Inc.)
SiC and GaN power solutions
Microchip Technology Inc. has launched its BZPACK mSiC power modules, offering high flexibility with a range of topologies, which include half-bridge, full-bridge, three-phase, and PIM/CIB configurations. This flexibility allows power designers to optimize performance, cost, and system architecture.
Targeting demanding power-conversion environments, the BZPACK mSiC power modules exceed high voltage-high humidity-high temperature reverse bias (HV‑H3TRB) testing requirements, surpassing the industry standard of 1,000 hours, making them suited for industrial and renewable energy applications. The modules provide a case material with a Comparative Tracking Index (CTI) of 600 V, stable RDS(on) across temperature ranges, and substrate options in aluminum oxide or aluminum nitride.
The BZPACK power modules are also designed to reduce system complexity and enable faster assembly by offering a baseplate-less design with press-fit, solderless terminals and an optional pre-applied thermal interface material.
The power modules leverage Microchip’s advanced mSiC technology and performance of its MB and MC mSiC MOSFET families for industrial and automotive applications, with AEC-Q101-qualified options available. These devices support common gate-source voltages (VGS ≥ 15 V) and are available in industry-standard packages.
The MC family integrates a gate resistor, which offers benefits in improved switching control, low switching energy, and improved stability in multi-die module configurations. Package options include TO-247-4 Notch and die form (waffle pack).
Microchip offers a range of SiC diodes, MOSFETs, and gate drivers. The BZPACK mSiC power modules are available in production quantities.
Microchip’s BZPACK mSiC modules (Source: Microchip Technology Inc.)
SemiQ Inc. launched its QSiC Dual3 family of 1,200-V half-bridge MOSFET modules for motor drives in data center cooling systems, grid converters in energy storage systems, and industrial drives. These are designed to replace IGBT modules with minimal redesign, with all MOSFET die screened using wafer-level gate-oxide burn-in tests exceeding 1,450 V.
Enabling power converters with high conversion efficiency and power density, the series of six devices includes an optional parallel Schottky barrier diode (SBD) to further reduce switching losses in high-temperature environments. Two of the family’s six devices have an RDS(on) of 1 mΩ and a power density of 240 W/in.3 in a 62 × 152-mm package. The modules also feature a low junction-to-case thermal resistance and enable a simplified system design with smaller, lighter heat sinks.
The devices include the GCMX1P0B120S4B1, GCMX1P4B120S4B1, GCMX2P0B120S4B1, GCMS1P0B120S4B1 (SBD), GCMS1P4B120S4B1 (SBD), and GCMS2P0B120S4B1 (SBD). Datasheets for the QSiC Dual3 modules can be downloaded here.
SemiQ’s QSiC Dual3 modules (Source: SemiQ Inc.)
In the GaN space, Efficient Power Conversion (EPC) introduced the EPC91121 motor drive inverter evaluation board, built around its Gen 7 EPC2366 40-V eGaN power transistor. The board is designed for fast prototyping and evaluation, integrating the key functions required for a motor drive inverter, including gate drivers, housekeeping power supplies, voltage and temperature monitoring, and current sensing.
The 40-V EPC2366 Gen 7 eGaN FET offers an ultra-low RDS(on) of 0.84 mΩ, enabling extremely efficient power conversion and fast switching performance. The three-phase inverter solution can deliver up to 70-A peak (50-A RMS) output current from input voltages ranging between 18 V and 30 V, making it suited for battery-powered systems operating around a 24-V supply.
The platform supports PWM switching frequencies up to 150 kHz, which is significantly higher than typical silicon-based motor drives, according to EPC. This reduces magnetic component size, minimizes switching losses, and improves overall system responsiveness, the company said.
The board, measuring 79 × 80 mm, provides high-bandwidth current sensing on all three phases, supporting measurements up to ±125 A, while phase and DC-bus voltage sensing provide the feedback required for precise monitoring and advanced motor control techniques such as field-oriented control (FOC) and space-vector PWM. Other features include shaft encoder and Hall-sensor interfaces and multiple test points.
Applications include drones, robotics, industrial automation, handheld power tools, and other compact electromechanical systems in which high efficiency and power density are critical.
The EPC91121 reference design board and devices are available now from DigiKey and Mouser. Design support files, including schematic, bill of materials, and Gerber files, are available on the EPC91121 product page.
EPC’s EPC91121 BLDC motor drive evaluation board (Source: Efficient Power Conversion)
Renesas Electronics Corp. unveiled its high-voltage TP65B110HRU at APEC 2026, claiming the first bidirectional switch using depletion-mode (d-mode) GaN technology, capable of blocking both positive and negative currents in a single device with integrated DC blocking. Target applications include single-stage solar microinverters, AI data centers, and on-board EV chargers.
The device simplifies power converter designs and replaces conventional back-to-back FET switches with a single low-loss, fast-switching, easy-to-drive device, Renesas said. “By integrating bidirectional blocking functionality on a single GaN product, power conversion can be achieved in a single stage using fewer switching devices.”
This is an alternative to today’s high-power-conversion designs that use unidirectional silicon or SiC switches, which block current in only one direction when in the off state. Many of these single-stage designs use conventional unidirectional switches back to back, Renesas said, resulting in a fourfold increase in switch count and reduced efficiency.
Renesas’s 650-V SuperGaN devices are based on a proprietary, normally off technology. The TP65B110HRU combines a high-voltage bidirectional d-mode GaN chip co-packaged with two low-voltage silicon MOSFETs with high threshold voltage (3 V), high gate margin (±20 V), and built-in body diodes for efficient reverse conduction. It offers high-dV/dt capability of >100 V/ns, with minimum ringing and short delays during on/off transitions.
Comparing the Renesas bidirectional GaN switch with enhancement-mode bidirectional GaN devices, the Renesas switch is compatible with standard gate drivers that require no negative gate bias. The result is a simpler, lower-cost gate-loop design and fast, stable switching in both soft- and hard-switching operations without a performance penalty, the company said.
The TP65B110HRU bidirectional GaN switch, housed in a TOLT top-side-cooled package, is available now, along with the RTDACHB0000RS-MS-1 evaluation kit. Also available are two reference solutions (500-W Solar Microinverter and Three-Phase Vienna Rectifier System) that leverage the TP65B110HRU and other Renesas-compatible devices.
Renesas’s TP65B110HRU bidirectional GaN switch (Source: Renesas Electronics Corp.)
Renesas also announced a GaN charging solution for industrial and IoT electronics applications. The GaN-based Half-Wave LLC (HWLLC) platform supports 500-W or higher operation across IoT, industrial, and infrastructure systems. The HWLLC converter topology scales a compact power architecture from 100-W-class designs to 500 W, targeting high-speed chargers for power tools, e-bikes, and other appliances.
The topology addresses the size, heat, and efficiency penalties of legacy topologies. It also helps designers move beyond 100-W USB-C charging devices and adopt 240-W USB EPR charging to shrink proprietary brick chargers in smartphones, laptops, and many gaming systems, Renesas said. The fast-charging technology was recently incorporated into Belkin’s GaN-based Z-Charger that features Renesas’s zero-standby-power (ZSP) chip with advanced SuperGaN d-mode GaN technology.
Building on its proprietary ZSP technology, the solution encompasses four new controller ICs, including the RRW11011 interleaved power-factor correction (PFC) and HWLLC combo controller, the RRW30120 USB-PD protocol and closed-loop controller, the RRW40120 half-bridge GaN gate driver, and the RRW43110 intelligent synchronous rectifier controller.
The RRW11011 PFC with phase-shift control cancels ripple, reduces component size and cost, and balances current. It also allows designers to lower operating temperature while delivering the wide output range (5 V to 48 V) required by USB Extended Power Range (EPR) and other variable-load charging systems. The RRW30120 USB-PD protocol and closed-loop controller achieves a maximum USB power delivery of 240 W. Together in a 240-W USB EPR power adapter design, the solution claims the highest power density in the industry (3 W/cc) and 96.5% peak efficiency.
The four devices enabling the HWLLC solution are available in addition to the EBC10293 240-W USB-PD EPR evaluation board. Reference solutions include the 240-W AC/DC Adapter and 300-W Lighting Power Platform.
Renesas’s Half-Wave LLC GaN charging solution (Source: Renesas Electronics Corp.)
AI data centers
Infineon Technologies AG released several power solutions aimed at AI data centers, including voltage regulation devices, digital power controller ICs, and CoolGaN-based high-voltage intermediate bus converter (IBC) reference designs.
Infineon expanded its voltage regulation portfolio with the XDPE1E digital multiphase PWM buck controllers and TDA49720/12/06 PMBus PoL voltage regulators to deliver higher compute performance per rack in AI data centers as next-generation platforms drive new requirements for power architectures.
The XDPE1E3G6A and XDPE1E496A, digital three- and four-loop multiphase PWM buck controllers, respectively, target multi-processor AI platforms and advanced VR inductor topologies. They offer highly configurable phase allocation and fully programmable phase firing order and support multiple protocols, including PMBus, AVSBus, SVID, and SVI3. Digital features, including active transient response, fast DVID, automatic phase shedding, and PFM, help address dynamic AI loads. Infineon also offers built-in tools such as Digital Scope, Black Box recording, and protection features.
To address the increasing number of non-core rails in AI systems, which require efficient regulation with accurate monitoring and control, Infineon developed the TDA49720/12/06 family of fully integrated PoL DC/DC buck regulators with PMBus-compliant digital telemetry. This family, with 6-A, 12-A, and 20-A options in 3 × 3-mm and 3 × 3.5-mm packages, helps maximize power density and simplify layout on accelerator cards and server boards.
The PMBus telemetry enables accurate reporting of key parameters, including output voltage, load current, input voltage, and die temperature. The devices also feature a proprietary valley-current-mode constant-on-time control scheme that enables fast transient response, cycle-by-cycle current limiting, and support for all-MLCC output capacitance designs. The devices operate from 2.7-V to 16-V input and across a wide junction temperature range of −40°C to 150°C.
Infineon’s XDPE1E496A digital multiphase PWM buck controller (Source: Infineon Technologies AG)
Infineon also expanded its XDP digital power controller IC family with the XDPP1188-200C, targeting higher power levels in AI servers. The device supports intermediate bus conversions from 48 V to 12 V or lower, as well as future higher-voltage DC systems, including the conversion of ±400-V or 800-VDC bus voltage to 48 V, 24 V, or 12 V.
The XDPP1188-200C complements Infineon’s CoolGaN-based high-voltage IBC reference designs (also introduced at APEC) and supports custom high-/medium-voltage IBC designs up to 800 VDC in AI data centers. The XDPP1188-200C allows optimization for customer-specific requirements. In 48-V systems, the controller works seamlessly with medium-voltage IBC modules, delivering an optimized power supply chain from the intermediate bus to processor voltage regulation.
Key features include an advanced feed-forward control mechanism to improve response time and stability under dynamic input transient conditions, and a nonlinear fast transient response to handle the rapid power demand fluctuations in AI servers. The device also integrates advanced power management techniques at light-load conditions and supports bidirectional configuration, enabling flexible power management.
The XDPP1188-200C digital power controller is sampling now. Volume production is expected in the first quarter of 2026.
Infineon’s XDPP1188-200C digital power controller (Source: Infineon Technologies AG)
Infineon also introduced two high-voltage IBC reference designs to help customers make the shift to AI server power architectures powered by ±400 VDC and 800 VDC.
Leveraging Infineon’s 650-V CoolGaN switches, the reference designs address two architectures: The 800-VDC to 50-V design is an intermediate stage for downstream 48-V IBC modules, while the 800-VDC to 12-V design enables direct conversion for compact server boards. The XDPP1188-200C digital controller is available for custom implementations, as noted earlier, with output voltages of 48 V, 24 V, or 12 V.
The 800-VDC or ±400-V to 50-V high-voltage IBC reference design demonstrates more than 98% efficiency at full load. Leveraging Infineon’s high- and medium-voltage CoolGaN switches, EiceDRIVER gate drivers, and a PSOC microcontroller (MCU), it consists of two 3-kW 400-V to 50-V converter building blocks, which are configured in an input-series-output-parallel (ISOP) arrangement. It scales to 6-kW TDP and supports up to 10.8 kW for 400 µs, using a planar PCB integrated transformer with multiple synchronous rectifier stages and soft switching across all load conditions to reduce electromagnetic interference. It claims an exceptional 2.5-kW/in.3 power density in a 60 × 60 × 11-mm form factor.
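As a quick consistency check on the density claim (using the stated 6-kW TDP and the quoted dimensions, assuming the density is referenced to continuous rather than peak power):

$$\frac{6\ \text{kW}}{60 \times 60 \times 11\ \text{mm}^3} = \frac{6\ \text{kW}}{39.6\ \text{cm}^3} \approx \frac{6\ \text{kW}}{2.42\ \text{in}^3} \approx 2.5\ \text{kW/in}^3$$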
The second reference design is an ultra-thin, high-voltage IBC demo board with an 8-mm height, which converts an 800-VDC bus voltage directly to a 12-V intermediate rail. The design delivers 6-kW TDP and supports up to 10.8 kW for 400 µs. It features a power density above 2,300 W/in.3, up to 98.2% peak efficiency, and 97.1% efficiency at full load. It operates as an ISOP half-bridge LLC converter, leveraging Infineon’s 650-V CoolGaN and 40-V OptiMOS 7 switches, with EiceDRIVER gate drivers and a PSOC MCU.
Infineon’s high-voltage IBC demo board (Source: Infineon Technologies AG)
A host of other semiconductor solution providers highlighted their latest and greatest at APEC 2026. Toshiba America Electronic Components Inc., for example, showcased several new products and technologies, ranging from its UMOS 11 MOSFETs and top-side-cooled TOGT package to SiC modules and MCU and motor control solutions.
On display were Toshiba’s expanded family of UMOS 11 MOSFETs in industry-standard packages. These devices feature improved switching characteristics and reduced RDS(on) per area compared with the previous UMOS 10 generation. The company also highlighted its WBG semiconductor portfolio, including high-power SiC power modules for grid-level and industrial systems; 750-V and 1,200-V SiC die and modules for automotive drivetrain inverter applications; and GaN devices.
Toshiba also featured its top-side-cooled TOGT packaging that targets high-power-density applications. It enables heat dissipation through the top of the package to reduce thermal stress on the PCB.
Other solutions presented at the show include MCU and motor control solutions (MCU, MCD, and SmartMCD devices) for automotive body electronics, electronic control units (ECUs), and industrial control applications. System reference designs highlighted include high-efficiency power supply platforms such as 3-kW server PSUs for data center applications, automotive ECU power architectures, and motor control reference designs for pump and power tool systems.
Toshiba’s UMOS 11 MOSFETs (Source: Toshiba America Electronic Components Inc.)
The post APEC 2026 showcases advances in power electronics appeared first on EDN.
A fully floating BJT-based LED current driver

The circuit in Figure 1 combines a VBE-referenced current source with a current mirror to implement a simple two-terminal, fully floating LED current sink or source. This approach is well-suited for applications in which tight current accuracy is not required, such as driving LED strings where a 5–10% current tolerance is acceptable.
Figure 1 A simple, fully floating LED current driver based on a VBE-referenced current source and a BJT current mirror. The circuit operates as either a current sink or source and supports output currents up to 100 mA. Note: R2=R3. All resistors are ¼ W and 5%.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The LED driver can drive an arbitrary number of series-connected LEDs, provided the supply leaves at least 2.3 V of headroom across the driver beyond the total LED string voltage. The topology supports both high-side and low-side operation, as shown in Figure 2. Output current ranges from a few milliamps up to 100 mA, with no requirement for heat sinks.

Figure 2 High-side and low-side operating configurations enabled by the fully floating nature of the LED driver.
The current source formed by BJTs Q1 and Q2 is set by resistor R1. A current mirror implemented with BJTs Q3 and Q4, using equal emitter resistors (R2 = R3), forces nearly equal currents in branches I1 and I2, as long as the voltage drop across the emitter resistors is at least 0.5 V. This requirement helps compensate for VBE mismatch between the transistors. The total LED current is therefore doubled, while power dissipation is evenly shared among the devices.
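As a rough sanity check (taking a nominal VBE of about 0.7 V, a textbook assumption rather than a value measured on this circuit), the first row of Table 1 follows directly:

$$I_1 \approx \frac{V_{BE}}{R_1} \approx \frac{0.7\ \text{V}}{150\ \Omega} \approx 4.7\ \text{mA}, \qquad I_{out} = I_1 + I_2 \approx 2I_1 \approx 9.3\ \text{mA}$$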
Experimental data (Table 1) confirm the expected behavior: output current scales with R1, and the minimum supply voltage increases from 2.3 V at 9.3 mA to 2.8 V at 97 mA, consistent with the headroom required by the VBE-referenced source and mirror.
| R1 | R2 = R3 | Iout | Vsupply(min) |
|-------|--------|---------|-------|
| 150 Ω | 100 Ω | 9.3 mA | 2.3 V |
| 82 Ω | 56 Ω | 18.2 mA | 2.4 V |
| 33 Ω | 22 Ω | 44 mA | 2.5 V |
| 15 Ω | 10 Ω | 97 mA | 2.8 V |
Table 1 Experimental data showing R1, R2/R3, and corresponding Iout and Vsupplymin.
With a minimum operating voltage of approximately 2.8 V, the circuit dissipates about 280 mW at the maximum output current of 100 mA. Higher supply voltages reduce efficiency due to increased power dissipation in the driver.
Because the LED current is VBE-dependent, it exhibits temperature sensitivity, with a temperature coefficient of approximately −0.3%/°C. Using a resistor with a negative temperature coefficient for R1 can partially compensate for this effect.
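The quoted coefficient is consistent with the usual VBE drift of roughly −2 mV/°C (a textbook value, not a measurement on this circuit): since Iout ≈ 2VBE/R1, the fractional drift of the current tracks that of VBE:

$$\frac{1}{I_{out}}\frac{dI_{out}}{dT} = \frac{1}{V_{BE}}\frac{dV_{BE}}{dT} \approx \frac{-2\ \text{mV/°C}}{0.7\ \text{V}} \approx -0.3\ \%/\text{°C}$$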
—Luca Bruno has a Master’s Degree in Electronic Engineering from the Politecnico of Milan. He has written 16 EDN Design Ideas.
Related Content
- LED strings driven by current source/mirror
- Current mirror drives multiple LEDs from a low supply voltage
- A current mirror reduces Early effect
- A two-way mirror — current mirror that is
The post A fully floating BJT-based LED current driver appeared first on EDN.
The truth about AI inference costs: Why cost-per-token isn’t what it seems

The AI industry has converged on a deceptively simple metric: cost per token. It’s easy to understand, easy to compare, and easy to market. Every new system promises to drive it lower. Charts show steady declines, sometimes dramatic ones, reinforcing the impression that AI inference is rapidly becoming cheaper and more efficient.
But simplicity, in this case, is misleading.
A token is not a fundamental unit of cost in isolation. It is the visible output of a deeply complex system that spans model architecture, hardware design, system scaling, memory behavior, power consumption, and operational efficiency. Reducing that complexity to a single number creates a dangerous illusion: that improvements in cost per token necessarily reflect improvements in the underlying system.
They often do not.
To understand what is really happening, we need to step back and look at the full system—specifically, the total cost of ownership (TCO) of an AI inference deployment.
From benchmark numbers to real systems
Most comparisons in the industry start from benchmark results. Inference benchmarks such as MLPerf provide a useful baseline because they fix key variables—model, latency constraints, and workload characteristics—allowing different systems to be evaluated under the same conditions.
Take a large-scale model such as Llama 3.1 405B. On a modern GPU system like Nvidia’s GB200 NVL72, MLPerf reports an aggregate throughput that translates to roughly 138 tokens per second per accelerator. An alternative inference-focused architecture might deliver a lower figure—say, 111 tokens per second per accelerator.
At first glance, the conclusion seems obvious: the GPU is faster.
But this is precisely where the problem begins. That number describes the performance of a single accelerator under specific benchmark conditions. It says very little about how the system behaves when deployed at scale.
And in real-world data centers, scale is everything.
The illusion of linear scaling
In theory, performance should scale linearly with the number of accelerators. Double the hardware, double the throughput. In practice, this never happens. Communication overhead, synchronization, memory contention, and architectural inefficiencies all conspire to reduce effective performance as systems grow.
This effect is captured by what is often called scaling efficiency. It’s one of the most important and most overlooked parameters in AI infrastructure.
A system that achieves 97% scaling efficiency will behave differently from one that achieves 85%, even if their per-chip performance appears comparable. Over dozens or hundreds of accelerators, that difference compounds rapidly.
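A minimal formalization makes the compounding concrete (the throughput numbers here are illustrative, not vendor figures). Defining scaling efficiency as the fraction of ideal linear throughput a system retains,

$$\eta(N) = \frac{T(N)}{N \cdot T(1)} \quad\Longrightarrow\quad T(N) = N \cdot T(1) \cdot \eta(N),$$

a 64-accelerator system at T(1) = 100 tokens/s delivers 6,208 tokens/s at η = 0.97 but only 5,440 tokens/s at η = 0.85, a shortfall of roughly 12%. And because η itself typically degrades as N grows, the gap widens further at rack and data-center scale.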
This is where inference-specific architectures begin to separate themselves.
Unlike training, inference does not require backpropagation. The execution flow is more predictable, the data movement patterns are more structured, and the opportunity for optimization is significantly greater. Architectures that are purpose-built for inference can exploit this determinism to sustain high utilization across large systems.
One architecture is a case in point. By moving away from the traditional GPU execution model and adopting a deeply pipelined, dataflow-oriented design, it minimizes the coordination overhead that typically erodes scaling efficiency. The result is not just higher peak utilization, but more important, consistently high utilization at scale.
When the system flips the narrative
Once performance is evaluated at the level that actually matters—servers, racks, and data centers—the comparison often changes.
Throughput per server depends not only on per-accelerator performance, but also on how many accelerators are packed into a system and how efficiently they work together. Throughput per rack adds another layer, incorporating system density and infrastructure constraints. When power is introduced into the equation, the relevant metric becomes throughput per kilowatt.
It is at this level that architectural differences become impossible to ignore.
GPU-based systems are optimized for flexibility. They can handle a wide range of workloads, but that generality introduces inefficiencies when running highly structured inference tasks. Data must move between memory hierarchies, threads must be synchronized, and execution units often sit idle waiting for dependencies to resolve.
The architecture mentioned above takes a different approach. By eliminating the traditional memory hierarchy bottlenecks and replacing them with a large, flat register file combined with a dataflow execution model, it effectively removes the “memory wall” that limits sustained performance in GPU systems. Data is kept close to compute, and execution proceeds in a continuous pipeline rather than in discrete, synchronized steps.
The consequence is subtle but powerful: even if peak per-chip performance appears lower, the effective throughput at the system level can be significantly higher. More importantly, that performance is achieved with far greater energy efficiency.
Power: The constraint that doesn’t go away
Energy consumption is not just a cost factor; it’s the constraint that ultimately defines the scalability of AI infrastructure.
Electricity prices, power usage effectiveness (PUE), and utilization rates are not theoretical constructs. They are operational realities that directly impact the economics of every deployment. A system that consumes less energy per token has an intrinsic advantage that compounds over time.
This is where inference-native architectures again demonstrate their value.
Because the architecture’s design minimizes unnecessary data movement and maximizes pipeline utilization, it delivers more tokens per unit of energy. The metric that matters is not peak FLOPS, but tokens per kilowatt—and on that axis, architectural efficiency becomes the dominant factor.
In large-scale deployments, this translates directly into lower operating costs and improved total cost of ownership.
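A back-of-envelope model makes the point concrete. The sketch below folds per-chip throughput, scaling efficiency, power, PUE, utilization, and amortized capital cost into a single cost-per-token figure. The two per-accelerator throughputs come from the benchmark discussion above; every other input is a hypothetical placeholder, not vendor data.

```python
# Back-of-envelope cost-per-token model. All inputs except the per-chip
# throughput figures cited earlier are hypothetical placeholders.

def cost_per_million_tokens(
    tokens_per_sec_per_accel: float,  # benchmark-style per-chip throughput
    num_accelerators: int,
    scaling_efficiency: float,        # fraction of linear scaling retained
    system_power_kw: float,           # whole-system draw (assumed)
    pue: float,                       # data-center power usage effectiveness
    price_per_kwh: float,             # electricity price, $/kWh (assumed)
    utilization: float,               # fraction of time serving useful load
    capex_per_hour: float,            # amortized hardware cost, $/hour (assumed)
) -> float:
    effective_tps = tokens_per_sec_per_accel * num_accelerators * scaling_efficiency
    tokens_per_hour = effective_tps * 3600 * utilization
    energy_cost_per_hour = system_power_kw * pue * price_per_kwh
    total_cost_per_hour = energy_cost_per_hour + capex_per_hour
    return total_cost_per_hour / tokens_per_hour * 1e6

# Faster per-chip but weaker scaling and higher power vs. the reverse:
a = cost_per_million_tokens(138, 72, 0.85, 120, 1.3, 0.10, 0.7, 50.0)
b = cost_per_million_tokens(111, 72, 0.97, 80, 1.2, 0.10, 0.7, 50.0)
print(f"faster chip, weaker scaling: ${a:.2f} per 1M tokens")
print(f"slower chip, better scaling: ${b:.2f} per 1M tokens")
```

With these placeholder inputs, the slower-per-chip but better-scaling, lower-power system lands at essentially the same cost per million tokens, which is exactly why per-accelerator benchmark numbers alone cannot settle the comparison.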
The hidden influence of workload assumptions
Benchmarking does not eliminate bias—it simply moves it.
Parameters such as context length, output token size, and concurrency have a profound impact on system behavior. A model running at 128K context imposes different demands than one operating at 8K. Latency, memory pressure, and throughput all shift accordingly.
Architectures that rely on heavy memory movement are particularly sensitive to these changes. As context length grows, the cost of moving data becomes increasingly dominant.
By contrast, architectures that localize data and streamline execution are more resilient to these shifts. This is another area where the architecture’s register-centric, dataflow design provides an advantage: it reduces dependence on external memory bandwidth and maintains more consistent performance across varying workloads.
From metrics to economics
When performance, power, and infrastructure are combined, the discussion moves from engineering to economics.
Total cost of ownership captures the full picture: capital expenditure, operating costs, energy consumption, and system utilization over time. It reflects not just how fast a system can run, but how efficiently it can deliver value in a real deployment.
This is where many cost-per-token claims fall apart.
A lower cost per token can be achieved in multiple ways—by improving efficiency, by adjusting assumptions, or by accepting lower margins. Without a system-level view, it’s impossible to distinguish between these scenarios.
What matters is not the headline number, but the underlying drivers.
The risk of optimizing the wrong thing
The industry’s focus on cost per token has created a subtle distortion. Instead of optimizing systems, we risk optimizing metrics. This is not unique to AI. Every technology cycle has its preferred metrics, and every metric can be gamed if taken out of context.
A truly efficient system is one that aligns performance, energy consumption, and scalability. It delivers consistent throughput, minimizes waste, and operates effectively under real-world constraints. This is precisely the direction that inference-specific architectures are taking.
The aforementioned architectural approach illustrates this shift. Rather than attempting to adapt a general-purpose architecture to an increasingly specialized workload, it starts from the workload itself and builds upward. The result is a system that is not only efficient in theory, but also in practice—at scale, under load, and within the constraints of real data centers.
Toward a more honest conversation
None of this diminishes the achievements of GPU-based systems. They have been instrumental in the rise of modern AI and remain incredibly powerful platforms. But the workloads are changing. Large language model inference is not the same as training, and it’s not the same as graphics. As the industry shifts toward deployment at scale, the limitations of general-purpose architectures become more apparent.
At the same time, new architectures, as described above, are emerging that are designed specifically for these workloads. They may not always win on peak performance metrics, but they are optimized for the realities of inference: predictable execution, high utilization, and energy efficiency.
If we want to compare these systems fairly, we need to move beyond simplified metrics and toward system-level evaluation.
The bottom line
Cost per token is not wrong—but it is incomplete.
The real question is not how cheaply a token can be produced in isolation, but how efficiently a system can deliver tokens over time, at scale, within the constraints of power, infrastructure, and workload demands.
When viewed through that lens, the path forward becomes clearer.
The next generation of AI infrastructure will not be defined by the highest peak performance or the most aggressive benchmark result. It will be defined by architectures that align performance with efficiency, and efficiency with economics.
And in that context, the industry may find that the most important innovation is not faster hardware—but better architecture.
Lauro Rizzatti is a business development executive at VSORA, a pioneering technology company offering silicon semiconductor solutions that redefine performance. He is a noted chip design verification consultant and industry expert on hardware emulation.
Related Content
- Chiplets Are The New Baseline for AI Inference Chips
- Custom AI Inference Has Platform Vendor Living on the Edge
- The next AI frontier: AI inference for less than $0.002 per query
- Revolutionizing AI Inference: Unveiling the Future of Neural Processing
- Purpose-built AI inference architecture: Reengineering compute design
The post The truth about AI inference costs: Why cost-per-token isn’t what it seems appeared first on EDN.
AOI receives new order for 800G data-center transceivers from major hyperscale customer
ROHM has added New Lineup of 17 High-Performance Op Amps Enhancing Design Flexibility
ROHM has added the new CMOS Operational Amplifier (op amp) series “TLRx728” and “BD728x” to its lineup. These are suitable for a wide range of applications, including automotive, industrial, and consumer systems. A broad lineup also makes product selection easier.
In recent years, demand for high-accuracy op amps has been rapidly increasing as automotive and industrial systems become more sophisticated, demanding faster speed, better precision, and higher efficiency. In applications requiring amplification of sensor outputs, minimising signal error and delay is essential. To meet these requirements, a well-balanced set of key characteristics is needed, including Input Offset Voltage, Noise, and Slew Rate.
These new products are high-performance op amps that offer low input offset voltage, low noise, and a high slew rate. The TLRx728 features an input offset voltage of 150 μV (typ.), while the BD728x offers 1.6 mV (typ.). Both series have a noise voltage density of 12 nV/√Hz at 1 kHz and a slew rate of 10 V/μs. They are therefore suitable for a wide range of precision applications, including sensor signal processing, current detection circuits, motor driver control, and power supply monitoring systems. Both series are designed to balance versatility and high performance rather than being limited to specific applications.
Application Examples
Automotive equipment, industrial equipment, and consumer electronics.
Example use case: Sensor signal processing, current detection circuits, motor driver control, and power supply monitoring systems.
The post ROHM has added New Lineup of 17 High-Performance Op Amps Enhancing Design Flexibility appeared first on ELE Times.
EEVblog 1743 - Mechanical Vibration Detection with your Oscilloscope Probe
A trip to Shevchenko sites with the Trade Union Committee
95 employees of Kyiv Polytechnic took part in a cultural and educational trip to Shevchenko sites in Cherkasy Oblast.



