Feed aggregator

Wolfspeed announces subscriptions for $379m of convertible notes and $96.9m of common stock and pre-funded warrants

Semiconductor today - 1 hour 1 min ago
Wolfspeed Inc of Durham, NC, USA — which makes silicon carbide (SiC) materials and power semiconductor devices — has entered into separate, privately negotiated subscription agreements with investors pursuant to which it will place (i) $379m of its 3.5% convertible 1.5 lien senior secured notes due 2031 and (ii) 3,250,030 shares of common stock at a purchase price of $18.458 per share, and pre-funded warrants to purchase up to 2,000,000 shares of common stock at a price of $18.448 per pre-funded warrant. The issuance and sale of the notes, shares and pre-funded warrants are expected to settle on 26 March, subject to customary closing conditions. Funds managed by new and existing investors participated in these private placements...

Single-stage design removes 48-V bus in servers

EDN Network - 3 hours 32 min ago

A DC/DC power delivery board from Navitas Semiconductor enables direct conversion from 800 V to 6 V in a single stage. Showcased at NVIDIA GTC 2026, the design eliminates the conventional 48-V intermediate bus converter stage within compute server trays, simplifying power delivery for NVIDIA AI infrastructure.

Using GaNFast power ICs, the board reaches 96.5% peak efficiency at full load with 1-MHz switching and a power density of 2.1 kW/in³. The primary side integrates sixteen 650-V GaNFast FETs in DFN 8×8 packages with dual-side cooling in a stacked full-bridge topology, while center-tapped outputs use 25-V silicon MOSFETs. High-frequency switching enables smaller passives and planar magnetics, increasing power density.
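
As a rough illustration of what that efficiency figure means thermally, the sketch below converts 96.5% peak efficiency into watts dissipated at a few output levels. This is our own back-of-the-envelope arithmetic, not Navitas data, and the output power levels are hypothetical since the announcement does not state the board's power rating.

```python
# Illustrative arithmetic only: what 96.5% peak efficiency implies for heat
# dissipated by the converter stage. Output power values are hypothetical;
# the announcement does not state the board's power rating.

def converter_loss(p_out_w: float, efficiency: float) -> float:
    """Return watts dissipated in the converter for a given output power."""
    p_in = p_out_w / efficiency          # input power drawn from the 800-V bus
    return p_in - p_out_w                # the difference is lost as heat

for p_out in (1_000, 3_000, 6_000):      # hypothetical 6-V output power levels, in watts
    loss = converter_loss(p_out, 0.965)
    print(f"{p_out:>5} W out -> {loss:6.1f} W dissipated ({loss / p_out * 100:.1f}% of output)")
```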

The Navitas power delivery board is about 20% thinner than a mobile phone. Its ultra-low profile allows close placement to the GPU board, minimizing loop inductance to improve transient response and power distribution efficiency.

For more information, contact a Navitas representative or email info@navitassemi.com. A timeline for availability was not provided at the time of this announcement.

Navitas Semiconductor 

The post Single-stage design removes 48-V bus in servers appeared first on EDN.

UWB SoCs extend ranging and radar performance

EDN Network - 3 hours 32 min ago

The ST64UWB family of ultra-wideband SoCs from ST provides increased range and processing capability for automotive applications. Backward compatible with IEEE 802.15.4z, the chips also support the emerging IEEE 802.15.4ab UWB standard, enabling device localization and tracking at distances of several hundred meters. Target use cases include hands-free digital keys and high-accuracy vehicle localization.

Enhancements such as multi-millisecond ranging (MMS) and narrow-band assistance (NBA) provide greater operating range and improve link robustness, particularly for devices carried in bags or rear pockets. These features also facilitate close-range direction finding for more accurate interpretation of user position and movement. In addition, IEEE 802.15.4ab strengthens radar mode for more reliable in-vehicle child presence detection.

The ST64UWB-A100 and ST64UWB-A500 are built on an 18-nm FD-SOI process, increasing link budget by nearly 3 dB versus bulk technologies and boosting range by up to ~50% beyond IEEE 802.15.4ab. Both devices integrate an Arm Cortex-M85 core, while the ST64UWB-A500 adds AI acceleration and DSP capabilities for edge AI-based radar applications. A third device, the ST64UWB-C100, expands the lineup to cover industrial and consumer applications.
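
For context, the sketch below shows how a link-budget improvement maps to extra range under a simple power-law path-loss model. The model and exponent values are our own illustrative assumptions, not ST figures; the ~50% claim above presumably also folds in protocol enhancements such as MMS and NBA rather than link budget alone.

```python
# A rough sketch of how extra link budget translates into extra range under a
# simple power-law path-loss model (received power ~ 1/d**n). The exponent n is
# our assumption, not ST's; the announcement's ~50% range figure presumably also
# reflects protocol enhancements beyond raw link budget.

def range_gain(extra_link_budget_db: float, path_loss_exponent: float = 2.0) -> float:
    """Factor by which range grows for a given link-budget improvement (dB)."""
    return 10 ** (extra_link_budget_db / (10 * path_loss_exponent))

print(f"3 dB, free space (n=2.0): x{range_gain(3.0, 2.0):.2f}")  # ~1.41, i.e. ~41% more range
print(f"3 dB, n=1.8:              x{range_gain(3.0, 1.8):.2f}")  # milder path loss -> larger gain
```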

The devices are now sampling to leading Tier 1 suppliers and OEMs.

ST64UWB product page 

STMicroelectronics

The post UWB SoCs extend ranging and radar performance appeared first on EDN.

224G ICs optimize signal integrity in linear optics

EDN Network - 3 hours 32 min ago

Semtech’s 224-Gbps/lane TIAs and drivers power 800G–3.2T transceivers and optical engines for AI/ML clusters, hyperscale data centers, and cloud infrastructure. Compliant with CEI‑224G‑Linear and LPO‑MSA, they support half-retimed (LRO), linear pluggable (LPO), next‑gen (XPO), near‑packaged (NPO), and co‑packaged (CPO) optics.

The 224G TIA family—GN1834L, GN1834DL, and GN1838DL—offers quad- and octal-channel architectures with flexible layouts. On-chip equalization, high linearity, and low noise boost signal integrity for LPO and next-generation linear optics.

The 224G Mach-Zehnder Modulator (MZM) drivers—quad GN1877 and octal GN1887—support SiPho, InP MZM, and TFLN optical transmitters with tunable gain and output swing. A CEI‑224G‑Linear host-side equalizer covers a wide range of host interfaces, from compact NPO/CPO to varied LRO/LPO/XPO trace lengths.

Both the TIA and driver series integrate real-time link monitoring and telemetry, enabling proactive diagnostics to reduce link flapping and improve network reliability.

The GN1834L, GN1834DL, and GN1887 are available now; GN1838DL and GN1877 are expected in April 2026.

For more information, visit Semtech’s optical page.

Semtech

The post 224G ICs optimize signal integrity in linear optics appeared first on EDN.

Double-side cooled MOSFETs reduce server heat

EDN Network - 3 hours 32 min ago

AOS has introduced two MOSFETs—the 25‑V AONC40202 and 80‑V AONC68816—in 3.3×3.3‑mm source-down DFN packages with double-side cooling. This packaging supports high power density in DC/DC intermediate bus converters and meets the strict thermal demands of AI servers and data centers.

The MOSFETs use an optimized top-clip design on the exposed drain, enabling double-sided thermal transfer to remove heat efficiently. Compared with single-sided devices, this approach reduces thermal stress and heat buildup. The large top clip achieves a low maximum thermal resistance of 0.9 °C/W, enhancing thermal performance in demanding applications.

The AONC40202 and AONC68816 MOSFETs support continuous drain currents of 405 A and 119 A, respectively, at 25 °C, with pulsed currents up to 644 A and 476 A. The devices have maximum on-resistances of 0.7 mΩ for the 25-V part and 4.7 mΩ for the 80-V part, and are rated for junction temperatures up to 175 °C. Bottom-side thermal resistance is 1.1 °C/W for both devices.
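
To put the thermal figures in perspective, here is a minimal sketch of the junction-to-case temperature rise implied by the quoted numbers. The 100-A operating point and the assumption that both cooled faces sit at the same case temperature are ours; a real design must also add case-to-heatsink and heatsink-to-ambient resistance.

```python
# A minimal sketch of how the quoted thermal resistances and on-resistance
# translate into junction temperature rise. The 100-A operating current and the
# assumption that both cooled faces sit at the same case temperature are ours;
# real designs must also account for case-to-heatsink and heatsink-to-ambient paths.

def conduction_loss(i_rms_a: float, r_ds_on_ohm: float) -> float:
    """I^2 * R conduction loss in watts."""
    return i_rms_a ** 2 * r_ds_on_ohm

def parallel(r1: float, r2: float) -> float:
    """Two thermal paths sharing the heat (both faces held at the same temperature)."""
    return r1 * r2 / (r1 + r2)

r_th_top, r_th_bottom = 0.9, 1.1          # deg C/W, junction-to-case, from the announcement
p_loss = conduction_loss(100.0, 0.0007)   # 25-V part, 0.7 mOhm max, hypothetical 100 A
delta_t = p_loss * parallel(r_th_top, r_th_bottom)
print(f"Loss: {p_loss:.1f} W, junction-to-case rise: {delta_t:.1f} deg C")
```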

Available now with a lead time of 14–16 weeks, the AONC40202 and AONC68816 cost $1.85 and $1.95 each in lots of 1000 units.

Alpha & Omega Semiconductor 

The post Double-side cooled MOSFETs reduce server heat appeared first on EDN.

Buck ICs improve AI data center power

EDN Network - 3 hours 32 min ago

Infineon’s XDPE1E multiphase PWM buck controllers and TDA49720/12/06 PMBus POL buck regulators streamline voltage regulation in AI data centers, helping customers boost compute performance per rack. With digital control and telemetry-enabled point-of-load regulation, these devices reduce design cycles and accelerate platform bring-up.

Designed for multiprocessor AI platforms and advanced VR inductor topologies, the XDPE1E3G6A and XDPE1E496A digital 3- and 4-loop buck controllers feature configurable phase allocation and fully programmable phase firing order. They support multiple protocols, including PMBus, AVSBus, SVID, and SVI3, ensuring compatibility across processor ecosystems. Digital control features and integrated tools help manage dynamic AI loads, reduce bench time, and improve system robustness.

The TDA49720/12/06 integrated POL buck regulators deliver 6-A, 12-A, and 20-A outputs in 3×3 mm and 3×3.5 mm packages. PMBus telemetry enables reliability monitoring and system optimization, while a proprietary valley current mode constant-on-time control ensures fast transient response, cycle-by-cycle current limiting, and all-MLCC output capacitance compatibility.
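
As background on the control scheme, the sketch below shows the textbook constant-on-time relationship for a buck stage, where a fixed on-time makes the steady-state switching frequency track Vout/(Vin·t_on). The numbers are illustrative and not taken from the TDA497xx datasheets; Infineon's valley-current-mode implementation layers current sensing and cycle-by-cycle limiting on top of this basic behavior.

```python
# A textbook sketch of the constant-on-time (COT) relationship for a buck
# regulator: with a fixed on-time, steady-state switching frequency follows
# f_sw ~ Vout / (Vin * t_on). Values are illustrative, not datasheet figures.

def cot_switching_frequency(v_in: float, v_out: float, t_on_s: float) -> float:
    """Ideal steady-state switching frequency of a COT buck (losses ignored)."""
    duty = v_out / v_in              # ideal buck duty cycle
    return duty / t_on_s             # f_sw = D / t_on

t_on = 100e-9                        # hypothetical fixed on-time: 100 ns
for v_in in (5.0, 12.0):             # typical intermediate-bus input rails
    f_sw = cot_switching_frequency(v_in, 1.0, t_on)   # 1.0-V point-of-load output
    print(f"Vin={v_in:>4.1f} V -> f_sw ~ {f_sw / 1e6:.2f} MHz")
```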

More information can be found on Infineon’s digital multiphase controller page and POL voltage regulator page. A timeline for availability was not provided at the time of this announcement.

Infineon Technologies 

The post Buck ICs improve AI data center power appeared first on EDN.

BluGlass partners with US government relations, corporate advisory and public affairs firm

Semiconductor today - 4 hours 34 min ago
BluGlass Ltd of Silverwater, Australia — which develops and manufactures gallium nitride (GaN) blue laser diodes based on its proprietary low-temperature, low-hydrogen remote-plasma chemical vapor deposition (RPCVD) technology — has partnered with US government relations, corporate advisory and public affairs firm Michael Best Strategies to enhance engagement with key decision makers within the Department of War (DoW) and Department of Energy (DoE)...

Cellular hotspots: Multi-option evaluation thoughts

EDN Network - 5 hours 49 min ago

A cellular data service upgrade prompts new (to this engineer, at least) hardware acquisitions: three models’ worth, four total devices. Smart or superfluous? Read on and decide for yourselves.

When our power went down on December 17, our broadband WAN connection and LAN still remained up for several hours, thanks to our sizeable UPS battery set fueling essential network gear, along with the NUT-controlled auto-shutdown of the multiple power-hungry HDD-based NASs also UPS-tethered. But eventually, the batteries were depleted, Comcast-supplied Ethernet and Wi-Fi both dropped, and we needed to turn to other Internet-access options.

My wife has unlimited data on her Verizon 5G cellular phone account, along with hotspot support (the latter capped at a 200 GB max per month, and something my legacy unlimited AT&T 4G LTE cellular phone plan completely lacks). And her service plan is also shared among multiple devices, including several iPads. So that was one option.

AT&T longevity (and stinginess)

I’ve also long (since November 2009, I realized in perusing my email archive while writing this) had a dedicated AT&T data plan, with the associated SIM nowadays normally (at least until recently, that is) plugged into my archaic Microsoft Surface Pro X hybrid tablet/computer:

This plan, originally $29.99/month, increased by $5/month beginning in February 2016. More recently, another change arrived. My original DataConnect plan was 4G LTE-based and unlimited from a data usage standpoint. But in March 2023, AT&T converted me to a 5G successor plan, with the second month of service free and $20/month off the normal $55/month price beyond that point (both perks per my legacy customer status). That said, it was no longer unlimited; the base rate included only 50 GBytes of data use per month. Sufficient in a pinch, although not for ongoing daily usage; we average well beyond a half TByte of aggregate data payload per month on Comcast.

When the network went down, I therefore also grabbed and booted up the Surface Pro X, figuring that I’d spread out the household data usage across the multiple cellular services we were already paying for. To my surprise and dismay, however, the usual cellular data connection option in Windows 11’s network settings was missing. And when I dove into Device Manager, I learned why: “This device cannot start”, whatever that meant:

Microsoft strikes again

I tried uninstalling the relevant driver, then rebooting so that Windows would auto-reinstall it. I also tried searching for an updated version of the driver. No dice; nothing I tried worked. I was pissed, turning to Reddit to vent and seek other suggestions. What I’d already learned there was that the Windows 11 25H2 update had dropped support for legacy Arm processors, including the SQ1 (a Microsoft-branded Qualcomm Snapdragon 8cx SC8180X) and, I assumed, along with it, the chipset’s integrated X24 LTE modem. And, because I’d installed Windows 11 25H2 in mid-October and it was already mid-December, I was beyond the 10-day rollback deadline.

More recently though, and on a hunch, I plugged back in the SIM, rechecked the computer’s “Network & Internet” screen and noticed that the cellular data option had magically returned, which a revisit of Device Manager confirmed:

🤷‍♂️ I have no clue what caused it to resurrect, let alone what had led to its (temporary, it turns out) demise in the first place. And, by the way, after further pondering I now suspect that the now-shorter list of supported Arm processors and chipsets in Windows 11 25H2 only affects fresh installations, not upgrades of existing activated builds. It’s all for naught, however; I’ve already moved on. For any of you who wondered what I’d been doing with the SIM before I temporarily “plugged it back in” to the computer, as I intentionally teased a paragraph earlier, read on for the solution to the mystery.

Standalone hotspots: still relevant

I’ve dabbled with mobile cellular hotspots before, owned by others. And truth be told, I didn’t have to buy one this time. Last January I’d purchased on sale from Amazon two NETGEAR LM1200 cellular broadband modems, one for teardown-to-come and the other for precisely the scenario—premises power-loss connectivity backup—that I experienced in mid-December. They aren’t usable as-is, requiring a tether to a router. But I have plenty of those in inventory. And had we stuck around the home more than one night, I probably would have pressed the modem-plus-router combo into service, fueled by a portable power unit.

But another limitation, bandwidth, was the same one that had already soured me on the Surface Pro X’s integrated modem (along with the ones in my Intel-based Surface Pros, for that matter). The LM1200 “only” supports 4G LTE, which is likely why I bought them (on closeout, I suspect) for only $19.99 each a year-plus back, versus the original $49.99 MSRP. As you’ll soon see, I used a similar “buy a generation-or-few old” stratagem with the mobile hotspots! 4G LTE support was sufficient when that’s all my AT&T service supported (and the unlimited per-month allocation was a nice bonus). But once AT&T upgraded me to 5G…well, you know what they say about shiny new objects… Truth be told, I actually bought three mobile hotspots, for reasons I’ll discuss in the following sections.

The NETGEAR Nighthawk M6 MR6110

I’ll start with the highest-end device, Netgear’s MR6110 (PDF), the entry-level member of the company’s Nighthawk M6 family. Versus its higher-end Nighthawk M6 siblings (this Mobile Internet Resource Center writeup provides a comprehensive comparison), not to mention Nighthawk M7-family successors, it:

  • Is carrier-locked to AT&T, and doesn’t support a sufficient diversity of frequency bands (presumably due to firmware versus silicon limitations) to deliver robust support for other cellular carriers, anyway
  • Is sub-6 GHz only from a spectrum standpoint, not additionally comprehending mmWave support (which, interestingly, NETGEAR dropped entirely in its Nighthawk M7 generation devices) and
  • Supports only Wi-Fi 6, not more advanced protocols

Then again, it only cost me $84.99 plus tax gently used from a legitimate eBay seller (just as I’ve mentioned before with cellular phones, you need to be careful when buying preowned goods to ensure that you haven’t acquired a device whose IMEI has already been banned by the associated cellular carrier). I also sprung for a $24.99 two-year extended warranty. And in case you’re wondering what’s behind the gray square “doors” at both ends of the front panel in the above stock photo, they’re TS-9 connectors that mate up with NETGEAR’s model 6000451 omnidirectional MIMO antenna, a gently used example of which I bought for $24 off eBay:

I live in a rural region outside of (and above) Golden, Colorado, with trailing-edge cellular technology deployed and spotty coverage for all carriers. To wit, using the NETGEAR MR6110’s internal antenna, I was only able to tune in LTE service…what’s the point, since I’ve already got the NETGEAR LM1200 modem-plus-router combo? But connect the external antenna, tether my laptop to the MR6110 over USB-C, and:

Huzzah! Consider me sold!

The Franklin A50 (model RG2102)

Next up…or down, depending on your perspective…is another AT&T-partner piece of hardware, Franklin’s A50. No integrated Ethernet, although you can still wired-tether to a single device over USB-C, and to an Ethernet-based router via a USB-C-to-Ethernet adapter plus a Cat5e cable. And “only” support for 20 concurrent devices, versus the NETGEAR MR6110’s 32. But user reviews rave about its battery life. It touts diverse 5G band support, and is claimed carrier-unlockable via services such as Cellcorner and Unlocklocks. That’d be convenient in case, for example, I ever wanted to switch my service to Google Fi, a T-Mobile MVNO (mobile virtual network operator). And it only set me back $34 (plus tax) used on eBay. How could I refuse?

The Franklin T9 (model RT717)

This last, lowest-end one—two of them, actually—I bought solely for experimentation purposes, both hacking and teardown. No integrated Ethernet, again. No 5G support this time, either; it only comprehends LTE. And as you can tell from the photo, this time it’s out-of-box locked to T-Mobile. But believe it or not, it’s (unofficially, again) user-unlockable for use with other carriers, not to mention user-hackable to both tweak its default settings and expand its overall feature set. Check out the following example links (in Google search results priority order) for more information:

And did I mention that each complete kit, in brand new condition this time, cost me only $13.98 plus tax (with free shipping!) on eBay? Once again, how could I resist?

More to come

As you’ve hopefully already noticed from the two photos I shared earlier, I’m already happily exploring the NETGEAR MR6110, with the other two devices to follow in short order. I’ve also already invested in carrying cases for all three, plus inexpensive spare batteries for both the MR6110 and Franklin A50 (each Franklin T9 kit came with one, so I’m set there), since all three hotspots’ portable power cells are easily user-accessible for swap-out purposes. Stay tuned for more coverage in the coming months. And for now, I as-always welcome your thoughts in the comments!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post Cellular hotspots: Multi-option evaluation thoughts appeared first on EDN.

MACOM’s microwave and optical solutions on display at SATShow Week

Semiconductor today - 6 hours 30 min ago
In booth 1637 at SATShow Week in the Walter E. Washington Convention Center in Washington DC (24–26 March), MACOM Technology Solutions Inc of Lowell, MA, USA is showcasing its latest RF and optoelectronics solutions for satellite communications (SATCOM) that can enable higher frequency bands, improved power efficiency and more scalable architectures, which are critical to next-generation satellite networks...

Mythic and Microchip Partner to Redefine AI Processing with Next-Gen Analogue Compute-in-Memory Technology.

ELE Times - 7 hours 16 min ago
Mythic has chosen memBrain neuromorphic hardware intellectual property (IP) from Microchip Technology’s Silicon Storage Technology (SST) subsidiary for its next-generation edge-to-enterprise Analogue Processing Units (APUs). Mythic will utilise SST’s SuperFlash embedded non-volatile memory (eNVM) bitcells to deliver high levels of analogue compute-in-memory (aCIM) performance per watt. The partnership enables Mythic to achieve 120 TOPS/watt inference processing for power-efficient AI acceleration at the edge and in the data centre: Mythic’s APUs are targeted to be up to 100 times more energy-efficient than conventional digital Graphics Processing Units (GPUs).

To date, 150 billion units of the SST SuperFlash technology that Mythic is licensing have been shipped. SuperFlash technology is the de facto eNVM solution for a broad spectrum of industries, including industrial, automotive, consumer, and computing, for critical data and code storage, and is licensed by all of the top 10 semiconductor foundries worldwide.

“Mythic is pioneering innovative solutions in AI inference processing and AI sensor fusion for industrial, automotive and data centre applications, effectively overcoming current AI power limitations,” said Mark Reiten, vice president of Microchip’s Edge AI business unit. “As the core memory technology for Mythic’s next-generation products, memBrain delivers significant power efficiency and high performance for both edge and data centre applications.”

The memBrain cell features:

  • Up to 8 data bits per bitcell (8 bpc) storage
  • Single-digit nanoamp (nA) bitcell read current
  • 10-year data retention at operating temperature
  • 100,000 endurance cycles
  • Full state machine control of the 8 bpc multi-state write operation
  • Single-cycle multiply-and-accumulate operations for aCIM
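
For readers unfamiliar with analogue compute-in-memory, the sketch below illustrates the basic idea behind that last item: weights stored as quantized cell conductances and a dot product formed by summing bitline currents in a single step. The model, cell count, and nA-scale full-scale current are our own illustrative assumptions echoing the feature list above, not SST's circuit design.

```python
# A conceptual sketch of an analogue compute-in-memory dot product: weights are
# stored as quantized cell conductances and each column sums bitline currents in
# one step. The 256 levels echo "8 bits per cell" above; the nA-scale full-scale
# read current is a hypothetical placeholder, not an SST specification.
import numpy as np

LEVELS = 256                    # 8 bits per cell -> 256 conductance levels
I_FULL_SCALE_NA = 5.0           # hypothetical full-scale cell read current, in nA

def program_weights(w: np.ndarray) -> np.ndarray:
    """Quantize weights in [0, 1) to discrete cell conductance levels."""
    return np.round(w * (LEVELS - 1)) / (LEVELS - 1)

def analog_mac(weights: np.ndarray, inputs: np.ndarray) -> float:
    """One-step multiply-and-accumulate: column current = sum(G_i * V_i)."""
    g = program_weights(weights) * I_FULL_SCALE_NA      # cell currents at full input drive
    return float(np.dot(g, inputs))                     # summed bitline current, in nA

rng = np.random.default_rng(0)
w = rng.random(64)              # one 64-cell column of stored weights
x = rng.random(64)              # normalized input activations
print(f"Column output current: {analog_mac(w, x):.2f} nA")
```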

“Mythic selected SST after an industry-wide search of eNVM technologies and determined the memBrain cell technology best enabled us to achieve the ultra-low-power and high performance required by our customers,” said Dr Taner Ozcelik, Mythic’s chief executive officer. “Additionally, the wide foundry availability of its industry-proven SuperFlash technology, coupled with the outstanding support of the SST engineering team, has been invaluable during our product development cycle.”

SST’s memBrain technology has been developed and deployed in 40 nm and 28 nm foundry processes using production-ready SuperFlash memory. 22 nm memBrain development is planned to extend the technology roadmap. Designed to provide reliable, high-performance and low-power non-volatile storage directly on the chip, SuperFlash memory is widely used in applications that require fast access times, high endurance and data retention without the need for external memory components.

The post Mythic and Microchip Partner to Redefine AI Processing with Next-Gen Analogue Compute-in-Memory Technology. appeared first on ELE Times.

🐣 We invite you to the exhibition "Temari: A Diversity of Colors"!

Новини - 8 hours 12 min ago

The exhibition "Temari: A Diversity of Colors" has opened! Temari is an ancient Japanese art of embroidery on balls of thread that has come down to us from centuries past.

Victory for KPI students at Säkerhets-SM CTF 2026!

Новини - 8 hours 22 min ago

🏆 The dcua team from Kyiv Polytechnic's Educational and Research Institute of Physics and Technology (NN FTI) gave a brilliant performance in the final of the cybersecurity competition in Stockholm, establishing itself as the strongest team not only in Ukraine but in the world. Our students outpaced their peers from Sweden, Denmark, Finland, Iceland, Norway, Estonia, Latvia, and Lithuania, as well as the other Ukrainian teams.

Guerrilla RF expands aerospace & defense focus with new SatCom initiative

Semiconductor today - 8 hours 51 min ago
Guerrilla RF Inc (GRF) of Greensboro, NC, USA — which develops and manufactures radio-frequency integrated circuits (RFICs) and monolithic microwave integrated circuits (MMICs) for wireless applications — says that it has expanded its focus and readiness to support the rapidly evolving satellite communications (SatCom) market. With a portfolio spanning low-noise small-signal devices through high-power RF power amplifiers, Guerrilla RF now offers more than 100 component solutions engineered for mission-critical SatCom applications across both ground-based infrastructure and spaceborne platforms...

Scoping out the chiplet-based design flow

EDN Network - 8 hours 58 min ago

Today, the design of most monolithic SoCs follows a familiar pattern. Requirements definition leads to an architectural design. Then, the design team selects and qualifies the necessary IP blocks, assembles them into the architecture, and floorplans the die. Functional verification and early power and timing estimation can begin at this point.

The team can now begin RTL synthesis, rough placement, and at least preliminary routing. As these tasks finish, most SoC design teams will bring in physical-design specialists to complete the work until signoff.

But what about a multi-die design based on chiplets? At first glance, the sequence of tasks seems nearly identical to the one for a monolithic SoC. Just substitute chiplets for IP blocks and interposer design for physical chip design, right?

Well, no. Issues and corresponding tasks in chiplet-based design diverge significantly from the flow of most monolithic chip designs. Unless you intend to build a great deal of specialized multi-die expertise in-house, these issues make it vitally important to engage, from the beginning of the project, with a design partner experienced in both chiplet and interposer design, and one with deep relationships across the global multi-die supply chain.

The chiplet path

The two paths diverge early in the design project. In concept, selecting chiplets sounds much like IP selection. However, the IP market is mature: there are sources for almost any common IP function, and specialist IP firms are willing to undertake nearly anything. And usually, IP is highly configurable, either by setting parameters for an RTL generator or by working with the provider.

Only when the SoC requirements demand a unique function or unusual operating constraints, such as market-leading performance or extreme low power, would the SoC team consider designing its own IP internally.

In contrast, the chiplet market, while growing, is still immature. Some combinations of functions may not be available. And chiplets—which are finished dies, after all—cannot be as flexible as an RTL generator tool. You may find an I/O hub chiplet with the right kinds of inputs and outputs, but you may not find one with the correct configuration, the right power, or the proper pad placement for your design.

For these reasons, chiplet-based designs often require the design of one or more chiplets, and chiplets can have very different constraints from stand-alone ICs—they aren’t just little SoCs. Chiplets usually have very high I/O densities, high-speed drivers or serial transceivers tuned to the very short interconnect runs on interposers, and precise pad placement requirements dictated by an interposer layout.

Also, because the finished module will have to be tested even though test equipment has only limited access to the individual dies, chiplets often emphasize built-in self-test (BiST) more than a conventional chip does. Having a design partner familiar with these issues from the outset can save time and energy.

Memory has issues, too

One type of die in chiplet-based design deserves special mention: memory. In this era of AI everywhere, many chiplet-based architectures will include high-bandwidth memory (HBM). This is undoubtedly true for datacenter processors, but increasingly just as true for edge AI applications such as vision processing or robotics.

Unfortunately, HBM interface design, placement on the interposer, routing, and thermal analysis are all challenges that differ significantly from the issues with logic chiplets. Requirements vary from generation to generation of the HBM standard, and even vendor to vendor. In the intense competition for supply, securing a stable supply of HBM dies or die stacks is essential before locking down the interposer design.

A design partner with deep HBM experience and strong supply-chain connections can ensure your design delivers the memory bandwidth you need with HBM dies you can acquire without having to respin an interposer design.

Interposer design

That brings us to the interposer. Conceptually, interposer design is not unlike IP placement and routing on an SoC. But here, we are talking about placing physical dies on a piece of silicon—usually—and routing between physical pads that can’t be moved. In practice, the constraints and analysis tools differ from those for chip design.

Also, decisions made at this stage can impact earlier and later stages in the design flow. The limited bandwidth between chiplets may influence how the architecture is partitioned across the dies. Even spatial issues, such as how close processor chiplets may be placed to HBM stacks and how far away they may be, can influence architectural partitioning and chiplet designs.
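
As a concrete illustration of that interplay, the sketch below performs the kind of back-of-the-envelope check a team might run on a proposed partition: does the traffic forced across the die-to-die boundary fit within the link the interposer can offer? Every number (lane count, per-lane rate, efficiency, traffic estimate) is a hypothetical placeholder, not a vendor or standard figure.

```python
# A back-of-the-envelope sketch of the partitioning check described above:
# does the traffic a proposed cut forces across the die-to-die boundary fit
# within the link the interposer can offer? All numbers are hypothetical
# placeholders, not vendor or interface-standard figures.

def d2d_bandwidth_gbps(lanes: int, gbps_per_lane: float, efficiency: float = 0.85) -> float:
    """Usable die-to-die bandwidth after protocol/encoding overhead."""
    return lanes * gbps_per_lane * efficiency

required_gbps = 1200.0                         # estimated cross-boundary traffic for this partition
available_gbps = d2d_bandwidth_gbps(lanes=64, gbps_per_lane=32.0)
print(f"Available: {available_gbps:.0f} Gb/s, required: {required_gbps:.0f} Gb/s, "
      f"{'OK' if available_gbps >= required_gbps else 'repartition needed'}")
```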

Interposer design also includes tasks that are unfamiliar to most chip design teams. These include signal and power integrity analysis, 3D electromagnetic field modeling, and thermal and mechanical analysis of the 3D structure. Furthermore, design-for-test becomes an issue. A test strategy for the completed module must reasonably achieve the required coverage and be consistent with the assembly power budget. The test strategy will also influence the choice of OSAT vendors for the assembly.

Finally, the package must be designed, not chosen off the shelf. This will require yet another set of tools and analyses. Packaging decisions will echo up and down the supply chain: interposer design, availability of materials, geographic location of capable OSAT facilities, and more will be influenced by packaging choices.

It takes a platform

The range of tasks and specialized skills necessary to bring a chiplet-based design to a global market is significantly broader than the set required for a modest SoC design. The fact that many tasks interact up and down the design flow further complicates the project. If too many specialist parties are involved, communications and change management can become a nightmare.

The best solution is not a go-it-alone approach, nor a scramble to pull together a horde of best-in-class specialist consultants. Nor is it necessary to turn the whole challenge over to a powerful foundry partner with limited global flexibility.

We have found that the optimum solution is a consolidation platform. This organization combines rich IP access, chiplet design experience, interposer expertise, strong relationships with HBM suppliers, multiple interposer foundries, and chip-on-wafer-capable OSATs worldwide. You need a partner with a platform to address the global challenge of chiplet-based products.

The consolidated platform is an ecosystem solution offering a global ecosystem of IP and design expertise with foundry and OSAT service partners. Source: Faraday Technology Corp.


Kenneth Lu, marketing manager at Faraday Technology, has over 20 years of experience in the semiconductor industry, spanning product engineering, IP design, and marketing for various application ICs. He currently focuses on business development in advanced packaging, processes, and related innovations.

Special Section: Chiplets Design

The post Scoping out the chiplet-based design flow appeared first on EDN.

Halo selects Eyelit to power scalable SiC wafering production with composable MES

Semiconductor today - Wed, 03/18/2026 - 23:12
Eyelit Technologies of Holmdel, NJ, USA (which provides AI-powered optimized planning, scheduling and execution systems) says that its software solution suite has been selected by laser-based silicon carbide (SiC) wafering firm Halo Industries Inc of Santa Clara, CA, USA (a 2014 spin-out from Stanford University) to support its rapidly scaling production needs...

NVIDIA and ST present new delivery boards for 800VDC architectures

Semiconductor today - Wed, 03/18/2026 - 22:21
NVIDIA of Santa Clara, CA, USA and STMicroelectronics of Geneva, Switzerland are presenting two new delivery boards for 800VDC architectures...

Infineon introduces CoolGaN-based high-voltage intermediate bus converter reference designs

Semiconductor today - Wed, 03/18/2026 - 22:12
Infineon Technologies AG of Munich, Germany has introduced two new high-voltage intermediate bus converter (HV IBC) reference designs to help customers accelerate the transition to AI server power architectures powered by ±400V and 800V DC. Enabled by Infineon’s 650V CoolGaN switches, the designs target hyperscalers, power architects, and server OEMs seeking higher rack power, lower power distribution losses, and improved thermal performance at rising AI workloads...

Built an online stripboard layout editor with live net colouring and conflict checking

Reddit:Electronics - Wed, 03/18/2026 - 17:35

About once a year or so I have to solder up a smallish stripboard. Until now I've designed them on paper, which is kind of annoying if you make a mistake or want to change something. So this time I tried finding a simple stripboard editor but couldn't really find one that's easy and fast to use for simple projects. Therefore I just decided to create my own.

It uses a split-screen layout with a very basic schematic editor on the left and a stripboard editor on the right. You first design a schematic and then place the components on the stripboard. Having the schematic allows for conflict detection, strip colouring and checking for unfinished nets on the stripboard.
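
As a rough illustration of the conflict checking described above, here is a minimal sketch of the idea (our own simplification, not the site's actual code): map every placed pin to its copper strip and flag any strip that ends up carrying more than one schematic net.

```python
# A minimal sketch of the kind of conflict check a stripboard editor can do:
# if two pins that the schematic assigns to different nets land on the same
# copper strip (with no cut between them), that strip is flagged. This is an
# illustration of the idea, not the site's actual implementation.
from collections import defaultdict

# (net name, strip row) for every placed component pin; rows are hypothetical
placed_pins = [
    ("VCC", 1), ("VCC", 1),      # both VCC pins share strip 1 -> fine
    ("GND", 5), ("OUT", 5),      # GND and OUT share strip 5 -> conflict
]

nets_per_strip = defaultdict(set)
for net, strip in placed_pins:
    nets_per_strip[strip].add(net)

for strip, nets in sorted(nets_per_strip.items()):
    if len(nets) > 1:
        print(f"Conflict on strip {strip}: {sorted(nets)} are shorted together")
```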

You can check it out here: https://stripboard-editor.com

My goal was to create a fast, simple-to-use editor for small projects that aren't worth the trouble of a complex editor but are hard enough that using paper or just your head would be annoying. (I don't make any money off this in any way; it's just a personal hobby project I think could be useful.)

If you have any feedback, I'd love to hear it.

Greetings, Karl

submitted by /u/Karlomatiko

Teradyne launches Photon 100 opto-electric automated test platform

Semiconductor today - Wed, 03/18/2026 - 16:10
Automated test equipment and advanced robotics provider Teradyne Inc of North Reading, MA, USA has launched the Photon 100, a comprehensive opto-electric automated test platform purpose-built to accelerate high-volume silicon photonics (SiPh) and co-packaged optics (CPO) manufacturing...

The new performance bottleneck: How more GPU memory unlocks next-gen gaming and AI PCs

ELE Times - Wed, 03/18/2026 - 14:31

Courtesy: Micron

The next era of PC performance will be defined not by more compute, but by memory scale. Until now, the rising size of game assets and AI models has outpaced GPU memory capacity. Micron’s latest evolution of GDDR7 marks a pivotal shift for next-generation GPUs by combining higher memory density with the scalability that modern gaming and AI workloads now demand. With expanded capacity options built to support configurations up to 96GB of graphics memory, this generation of GDDR empowers systems to keep vastly larger worlds, richer textures, and growing AI models resident in memory, reducing bottlenecks and unlocking more consistent real-time performance across high-fidelity games and AI-enhanced applications.

Visual computing: The convergence of graphics and intelligence

Visual computing is entering a new era as graphics and intelligence converge. Modern systems must not only render high-fidelity scenes in real time, but also interpret, enhance, and generate content using increasingly complex AI models. Two forces are accelerating this shift: the push toward cinematic-quality gaming and the rapid emergence of AI-powered PCs. As worlds grow larger, textures more detailed, and on-device AI more integral to responsiveness and personalisation, the demands placed on GPU memory have surged. In practice, memory capacity and efficiency now determine how smoothly a system can deliver immersive gameplay, intelligent creation tools, and real-time simulation, making memory a foundational enabler of next-generation visual computing.

Delivering unprecedented performance for high-resolution gaming

Modern games are pushing GPU architectures harder than ever. Real-time ray tracing demands continuous access to massive datasets (geometry, materials, lighting maps, and shadows), while high-refresh-rate displays and ultra-resolution textures multiply the data the GPU must process each frame. Add in sprawling open worlds and increasingly AI-assisted rendering techniques, and the result is a workload that easily overwhelms traditional memory limits.

The problem is that when GPU memory can’t hold all this data at once, the system is forced to constantly swap assets in and out. That leads to the issues gamers know too well: texture pop-in, mid-frame stutters, uneven frame times, and sudden drops during intense ray-traced scenes. AI-generated frames and upscaling pipelines also become less consistent when memory is constrained, because the models and intermediate buffers they rely on are constantly competing for space.

This is where next-generation GDDR capacity and bandwidth become critical. By enabling far larger datasets to remain resident in memory, GDDR7 keeps the entire visual pipeline (textures, lighting data, geometry sets, and AI inference models) fed without the bottlenecks that cause visual artefacts or performance instability. The result is smoother, more predictable real-time rendering at 4K, 5K, and 8K, even in the most demanding scenes.

To keep these visual pipelines running efficiently, the memory subsystem must deliver data rapidly and consistently.

Enabling larger, more detailed worlds with 24Gb die density

As game environments expand and visual assets grow, memory capacity becomes critical to maintaining seamless, artefact-free experiences. Micron’s new 24Gb die density enables up to 96GB of graphics memory, giving GPUs significantly more space for high-resolution textures, expansive worlds, and advanced visual effects.
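
The arithmetic behind the headline capacity is straightforward, as the sketch below shows; the bus width and device placement in it are our own illustrative assumptions rather than a Micron or GPU-vendor specification.

```python
# Simple arithmetic behind the capacity claim: a 24-Gb die is 3 GB, so 96 GB of
# graphics memory implies 32 such devices on the card. The 512-bit bus width
# below is an illustrative assumption, not a vendor specification; the 36 Gb/s
# per-pin rate is the GDDR7 figure quoted later in this article.

DIE_GBITS = 24
die_gbytes = DIE_GBITS / 8                       # 3 GB per device
devices_for_96gb = 96 / die_gbytes               # 32 devices
print(f"{die_gbytes:.0f} GB per die -> {devices_for_96gb:.0f} devices for 96 GB")

bus_width_bits = 512                             # hypothetical total memory bus width
pin_rate_gbps = 36                               # per-pin data rate quoted for GDDR7
bandwidth_gbs = bus_width_bits * pin_rate_gbps / 8
print(f"{bus_width_bits}-bit bus at {pin_rate_gbps} Gb/s/pin -> {bandwidth_gbs:.0f} GB/s peak")
```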

This increased capacity matters to gamers because it:

  • Reduces asset swapping and texture pop-in
  • Supports larger frame buffers for high-resolution displays
  • Enables richer, more detailed environments with fewer loading transitions

Creators and professional users also benefit from faster real-time rendering, more responsive GPU-accelerated workflows, and improved handling of large datasets.

Fueling AI-enhanced graphics and the rise of AI PCs

AI is rapidly becoming integral to personal computing. Neural rendering, real-time media enhancement, content generation, and AI-assisted workflows place new demands on system memory. Micron GDDR7 is built to support these emerging workloads with increased bandwidth, lower latency, and improved efficiency.

Why GDDR7 matters for AI PCs

AI-driven graphics and compute tasks rely on continuous movement of large datasets. GDDR7 accelerates these operations by improving throughput and responsiveness across GPU pipelines.

Systems built with GDDR7 benefit from:

  • Faster on-device AI inference for creation, media, and collaboration
  • Lower-latency performance across hybrid CPU-GPU-NPU workflows
  • Higher throughput for neural graphics and generative AI models
  • Improved power efficiency thanks to architectural refinements and reduced operating voltages

As AI becomes embedded into everyday PC tasks, from writing and coding to editing, presenting, and gaming, memory performance will heavily influence the immediacy, intelligence, and fluidity of the experience.

Enabling the future of immersive and intelligent computing

Micron GDDR7 is more than a performance improvement; it is a foundational technology for the next decade of visual and AI computing. With 36 Gbps bandwidth, 24Gb die density, and improved efficiency, GDDR7 empowers GPU and AI PC vendors to deliver richer, more dynamic, and more intelligent computing experiences.

While NPUs are becoming essential for power-efficient, on-device AI acceleration, the most demanding visual and AI workloads still rely on the scale and parallelism of a discrete GPU. NPUs excel at sustained, low-power inference, but GPUs deliver significantly higher throughput for large models, neural graphics, advanced rendering, and gaming workloads. By pairing NPUs with discrete GPUs equipped with GDDR7, AI PCs can intelligently distribute tasks, assigning lightweight inference to the NPU while leveraging the GPU’s computing power and memory bandwidth for operations that require maximum performance. This combination unlocks capabilities far beyond what NPUs can achieve alone.

Together, Micron GDDR7 and the next wave of discrete GPUs set the stage for a new era of immersive graphics and high-performance AI computing.

The post The new performance bottleneck: How more GPU memory unlocks next-gen gaming and AI PCs appeared first on ELE Times.
