Feed aggregator

Ascent Solar announces up to $25m private placement

Semiconductor today - Mon, 01/26/2026 - 21:50
Ascent Solar Technologies Inc of Thornton, CO, USA – which designs and makes lightweight, flexible copper indium gallium diselenide (CIGS) thin-film photovoltaic (PV) panels that can be integrated into consumer products, off-grid applications and aerospace applications – has entered into definitive agreements for the purchase and sale of 1,818,182 shares of common stock (or pre-funded warrants in lieu thereof), series A warrants to purchase up to 1,818,182 shares of common stock and short-term series B warrants to purchase up to 909,091 shares of common stock at a purchase price of $5.50 per share of common stock (or per pre-funded warrant in lieu thereof) and accompanying warrants in a private placement priced at-the-market under Nasdaq rules...

A battery charger that does even more

EDN Network - Mon, 01/26/2026 - 16:25

Multifunction devices are great…as long as you can find uses for all (or at least some) of those additional functions that you end up paying for, that is.

All other factors being equal (or at least roughly comparable), I tend to gravitate toward multifunction devices instead of a suite of single-function widget alternatives. The versatile smartphone is one obvious example of this trend; while I still own a collection of both still and video cameras, for example, they mostly collect dust on my shelves while I instead regularly reach for the front and rear cameras built into my Google Pixel phones. And most folks have already bailed on standalone cameras (if they ever even had one in the first place) long ago.

Speaking of multi-function devices, as well as of cameras, for that matter, let’s take a look at today’s teardown victim, NEEWER’s Replacement Battery and Charger Set:

It comes in three variants, supporting (and bundled with two examples of) batteries for Canon (shown here), Nikon, and Sony cameras, with MSRPs ranging from $36.49 to $73.99. It’s not only a charger, over both USB-C and micro-USB input options (a USB-A to micro-USB adapter cable is included, too), but also acts as a travel storage case for those batteries as well as memory cards:

And assuming the batteries are already charged, you can use them not only to power your camera but also to recharge an external device, such as a smartphone, via the USB-A output. My only critique would be that the USB-C connector isn’t bidirectional, too, i.e., able to do double-duty as both a charging input and an external-powering output.

When life gives you damaged devices, make teardown patients

As part of Amazon’s most recent early-October Prime Big Deal Days promotion, the company marked down a portion of the inventory in its Resale (formerly Warehouse) section, containing “Quality pre-owned, used, and open box products” (their words, not mine, and in summary: where Amazon resells past customer returns). I’ve regularly mentioned it in the past as a source of widgets both for my ongoing use and for teardowns, the latter often the result of my receiving something that didn’t work or was otherwise not-as-advertised, and Amazon refunding me what I paid and telling me not to bother returning it. Resale-sourced acquisitions don’t always pan out, but they do often enough (and the savings are significant enough) that I keep coming back.

Take the NEEWER Replacement Battery and Charger Set for Canon LP-E6 batteries, for example. It was already marked down from $36.49 to $26.63 by virtue of its inclusion in the Resale section, and the Prime Big Deal Days promotion knocked off an additional 25%, dropping the per-unit price to $19.97. So, I bought all three units that were available for sale, since LP-E6 batteries are compatible not only with my two Canon EOS 5D Mark IV DSLRs and my first-generation Blackmagic Design Pocket Cinema 6K video camera but also, courtesy of their ubiquity (along with that of the Sony-originated L-series, i.e., NP-F battery form factor) useful as portable power options for field monitors, flash and constant illumination sources, and the like.

From past experience with Warehouse-now-Resale-sourced acquisitions, I expected the packaging to be less-than-pristine compared to a brand-new alternative, and reality matched the lowered expectations. Here are the front and back panels of the first two devices’ outer boxes, in the first image accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes, which you’ll also see in other photos in this piece:

Flip up the top, however, and the insides were a) complete and b) in cosmetically acceptable and fully functional shape. Here are the contents of the first box shown earlier, for example:

The aforementioned USB-A to micro-USB adapter cable:

One of the two included batteries:

The device outsides:

And finally, its insides:

The third time’s NOT the charm

The third device, on the other hand…when I saw the clear plastic bag that it came in, I knew I was in for trouble:

Removing the box from the bag only made matters worse, visually at least:

And when I flipped open the top…yikes (I’d already taken out the LP-E6 batteries, which ended up looking and working fine, from the box when I snapped the following shots):

From a charging-and-powering standpoint, the device still worked fine, believe it or not. But the inability to securely attach the lid to the base rendered it of low value at best (there are always, of course, thick rubber bands as an alternative lid-securing scheme, but they’d still leave a gap).

So, I got in touch with Amazon, who gave me a full refund and told me to keep the device to do with as I wished. I relocated the batteries to my Blackmagic camera case. And then I added the battery charger to my teardown pile. On that note, by the way, I’ve intentionally waited until now to show you the packaging underside:

Case underside:

And one of the slips of literature:

This was the only one of the three devices I bought that had the same warning in all three places. If I didn’t know better, I’d think they’d foreseen what I later had planned for it!

Difficulty in diving in

Time to get inside:

As with my recent Amazon Smart Plug teardown, I had a heck of a time punching through the seemingly straightforward seam around the edges of the interior portion:

But finally, after some colorful language, along with collateral damage:

I wrenched my way inside, surmounting the seemingly ineffective glue above the PCB in the process. The design’s likely hardware modularity is perhaps obvious; the portion containing the battery bays is unique to each product variant, with the remainder common to all three.

Remove the three screws holding the PCB in place:

And it lifts right out:

That chunk out of one corner of the wire-wound inductor in the middle came courtesy of yours truly and his habit of blindly jabbing various tools inside the device during the ham-fisted disassembly process. The foam along the left edge precludes the underside LEDs (which you’ll see shortly) from shining upward, instead redirecting their outputs out the front.

IC conundrums

The large IC to the right of the foam strip, marked as follows:

0X895D45

is an enigma; my research of both the topside marked text (via traditional Google search) and the image (via Google Lens) was fruitless. I’m guessing that it’s the power management controller, handling both battery charging and output sequencing functions; more precise information from knowledgeable readers would be appreciated in the comments.

The two identical ICs along the top edge, in eight-lead SOP packages, were unfortunately no easier to ID. They’re marked as follows:

PSD (company logo) AKJG
PAP8801

And along the right edge is another IC, also in an eight-lead SOP but this time with the leads connected to the package’s long edges, and top-side stamped thusly:

SPT (company logo) SP1081F
25CT03

This last one I’m more confident of. It appears to be the SP1081F synchronous buck regulator from Chinese semiconductor supplier Wuxi Silicon Power Microelectronics. And intermingled with all these ICs are various surface-mounted passives and such.

For additional perspective, next are some side-view shots:

And, last but not least, here’s the PCB underside, revealing the four aforementioned LEDs, a smattering of test points, and not much else (unless you’re into traces, that is):

There you have it! As always, please share your insights in the comments.

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post A battery charger that does even more appeared first on EDN.

From the Mezhyhirya Faience Factory to KPI's Faculty of Automation, Industrial Engineering and Ecology

News - Mon, 01/26/2026 - 15:38

Kyiv Polytechnic is not only the cradle of engineering education, from which a constellation of scientific, educational, and industrial institutions emerged. It can also be called a nurturing mother that has taken new units under its roof, enriching its palette of technical-specialist training.

The shift to 800-VDC power architectures in AI factories

EDN Network - Mon, 01/26/2026 - 15:00

The wide adoption of artificial-intelligence models has led to a redesign of data center infrastructure. Traditional data centers are being replaced with AI factories, specifically designed to meet the computational capacity and power demands of today’s machine-learning and generative AI workloads.

Data centers traditionally relied on a microprocessor-centric (CPU) architecture to support cloud computing, data storage, and general-purpose compute needs. However, with the introduction of large language models and generative AI applications, this architecture can no longer keep pace with the growing demand for computational capacity, power density, and power delivery required by AI models.

AI factories, by contrast, are purpose-built for large-scale training, inference, and fine-tuning of machine-learning models. A single AI factory can integrate several thousand GPUs, reaching power consumption levels in the megawatt range. According to a report from the International Energy Agency, global data center electricity consumption is expected to double from about 415 TWh in 2024 to approximately 945 TWh by 2030, representing almost 3% of total global electricity consumption.

To meet this power demand, a simple data center upgrade would be insufficient. It is therefore necessary to introduce an architecture capable of delivering high efficiency and greater power density.

Following a trend already seen in the automotive sector, particularly in electric vehicles, Nvidia Corporation presented at Computex 2025 an 800-VDC power architecture designed to efficiently support the multi-megawatt power demand required by the compute racks of next-generation AI factories.

Power requirements of AI factories

The power profile of an AI factory differs significantly from that of a traditional data center. Because of the large number of GPUs employed, an AI factory’s architecture requires high power density, low latency, and broad bandwidth.

To maximize computational throughput, an increasing number of GPUs must be packed into ever-smaller spaces and interconnected using high-speed copper links. This inevitably leads to a sharp rise in per-rack power demand, increasing from just a few dozen kilowatts in traditional data centers to several hundred kilowatts in AI factories.

The ability to deliver such high current levels using traditional low-voltage rails, such as 12, 48, and 54 VDC, is both technically and economically impractical. Resistive power losses, as shown in the following formula, grow with the square of the current, leading to a significant reduction in efficiency and requiring the use of copper connections with extremely large cross-sectional areas.

P(resistive loss) = V × I = I² × R

To support high-speed connectivity among multiple GPUs, Nvidia developed the NVLink point-to-point interconnect system. Now in its fifth generation, NVLink enables thousands of GPUs to share memory and computing resources for training and inference tasks as if they were operating within a single address space.

A single Nvidia GPU based on the Blackwell architecture (Figure 1) supports up to 18 NVLink connections at 100 GB/s, for a total bandwidth of 1.8 TB/s, twice that of the previous generation and 14× higher than PCIe Gen5.

Figure 1: Blackwell-architecture GPUs integrate two reticle-limit GPU dies into a single unit, connected by a 10-TB/s chip-to-chip link. (Source: Nvidia Corporation)

800-VDC power architecture

Traditional data center power distribution typically uses multiple, cascading power conversion stages, including utility medium-voltage AC (MVAC), low-voltage AC (LVAC, typically 415/480 VAC), uninterruptible power supply, and power distribution units (PDUs). Within the IT rack, multiple power supply units (PSUs) execute an AC-to-DC conversion before final DC-to-DC conversions (e.g., 54 VDC to 12 VDC) on the compute tray itself.

This architecture is inefficient for three main reasons. First, each conversion stage introduces power losses that limit overall efficiency. Second, the low-voltage rails must carry high currents, requiring large copper busbars and connectors. Third, the management of three-phase AC power, including phase balancing and reactive power compensation, requires a complex design.

Conversely, the transition to an 800-VDC power backbone minimizes I²R resistive losses. By doubling the distribution voltage from the industry-standard high end (e.g., 400 VDC) to 800 VDC, the system can deliver the same power output while halving the current (P = V × I), reducing power loss by a factor of four for a given conductor resistance.
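The factor-of-four claim is easy to sanity-check with a few lines of code. The numbers below (a 1-MW load and 1 mΩ of conductor resistance) are illustrative assumptions, not figures from any specific deployment:

```python
# Illustrative comparison of resistive conductor losses when delivering the
# same power at 400 VDC vs. 800 VDC over a conductor of fixed resistance.

def resistive_loss(power_w: float, bus_v: float, r_ohm: float) -> float:
    """P_loss = I^2 * R, with I = P / V for a given delivered power."""
    current = power_w / bus_v
    return current ** 2 * r_ohm

P = 1_000_000.0   # 1 MW of delivered power (assumed)
R = 0.001         # 1 mOhm conductor resistance (assumed)

loss_400 = resistive_loss(P, 400.0, R)   # 6250.0 W
loss_800 = resistive_loss(P, 800.0, R)   # 1562.5 W

print(loss_400 / loss_800)  # doubling the voltage quarters the loss: 4.0
```

The same arithmetic explains why the lower-voltage alternative also needs much heavier copper: holding loss constant instead of resistance, the 400-VDC bus would need a conductor with one quarter the resistance.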

By adopting this solution, next-generation AI factories will have a centralized primary AC-to-DC conversion outside the IT data hall, capable of converting MVAC directly to a regulated 800-VDC bus voltage. This 800 VDC can then be distributed directly to the compute racks via a simpler, two-conductor DC busway (positive and return), eliminating the need for AC switchgear, LVAC PDUs, and the inefficient AC/DC PSUs within the rack.

Nvidia’s Kyber rack architecture is designed to leverage this simplified bus. Power conversion within the rack is reduced to a single-stage, high-ratio DC-to-DC conversion (800 VDC to the 12-VDC rail used by the GPU complex), often employing highly efficient LLC resonant converters. This late-stage conversion minimizes resistive losses, provides more space within the rack for compute, and improves thermal management.

This solution is also capable of scaling power delivery from the current 100-kW racks to over 1 MW per rack using the same infrastructure, ensuring that the AI factory’s power-delivery infrastructure can support future increased GPU energy requirements.

The 800-VDC architecture also mitigates the volatility of synchronous AI workloads, which are characterized by short-duration, high-power spikes. Supercapacitors located near the racks help attenuate sub-second peaks, while battery energy storage systems connected to the DC bus manage slower events (seconds to minutes), decoupling the AI factory’s power demand from the grid’s stability requirements.

The role of wide-bandgap semiconductors

The implementation of 800-VDC architecture can benefit from the superior performance and efficiency offered by wide-bandgap semiconductors such as silicon carbide and gallium nitride.

SiC MOSFETs are the preferred technology for the high-voltage front-end conversion stages (e.g., AC/DC conversion of 13.8-kV utility voltage to 800 VDC, or in solid-state transformers). SiC devices, typically rated for 1,200 V or higher, offer higher breakdown voltage and lower conduction losses compared with silicon at these voltage levels, despite operating at moderately high switching frequencies. Their maturity and robustness make them the best candidates for handling the primary power entry point into the data center.

GaN HEMTs, on the other hand, are suitable for high-density, high-frequency DC/DC conversion stages within the IT rack (e.g., 800 VDC to 54 VDC or 54 VDC to 12 VDC). GaN’s material properties, such as higher electron mobility, lower specific on-resistance, and reduced gate charge, enable switching frequencies into the megahertz range.

This high-frequency operation permits the use of smaller passive components (inductors and capacitors), reducing the size, weight, and volume of the converters. GaN-based converters have demonstrated power densities exceeding 4.2 kW/l, ensuring that the necessary power conversion stages can fit within the constrained physical space near the GPU load, maximizing the compute-to-power-delivery ratio.

Market readiness

Leading semiconductor companies, including component manufacturers, system integrators, and silicon providers, are actively collaborating with Nvidia to develop full portfolios of SiC, GaN, and specialized silicon components to support the supply chain for this 800-VDC transition.

For example, Efficient Power Conversion (EPC), a company specializing in advanced GaN-based solutions, has introduced the EPC91123 evaluation board, a compact, GaN-based 6-kW converter that supports the transition to 800-VDC power distribution in emerging AI data centers.

The converter (Figure 2) steps 800 VDC down to 12.5 VDC using an LLC topology in an input-series, output-parallel (ISOP) configuration. Its GaN design delivers high power density, occupying under 5,000 mm² with a height of 8 mm, well-suited for tightly packed server boards. Placing the conversion stage close to the load reduces power losses and increases overall efficiency.

Figure 2: The EPC GaN converter evaluation board integrates the 150-V EPC2305 and the 40-V EPC2366 GaN FETs. (Source: Efficient Power Conversion)

Navitas Semiconductor, a semiconductor company offering both SiC and GaN devices, has also partnered with Nvidia to develop an 800-VDC architecture for the emerging Kyber rack platform. The system uses Navitas’s GaNFast, GaNSafe, and GeneSiC technologies to deliver efficient, scalable power tailored to heavy AI workloads.

Navitas introduced 100-V GaN FETs in dual-side-cooled packages designed for the lower-voltage DC/DC stages used on GPU power boards, along with a new line of 650-V GaN FETs and GaNSafe power ICs that integrate control, drive, sensing, and built-in protection functions. Completing the portfolio are GeneSiC devices, built on the company’s proprietary trench-assisted planar technology, that offer one of the industry’s widest voltage ranges—from 650 V to 6,500 V—and are already deployed in multiple megawatt-scale energy storage systems and grid-tied inverter projects.

Alpha and Omega Semiconductor Limited (AOS) also provides a portfolio of components (Figure 3) suitable for the demanding power conversion stages in an AI factory’s 800-VDC architecture. Among these are the Gen3 AOM020V120X3 and the top-side-cooled AOGT020V120X2Q SiC devices, both suited for use in power-sidecar configurations or in single-step systems that convert 13.8-kV AC grid input directly to 800 VDC at the data center’s edge.

Inside the racks, AOS supports high-density power delivery through its 650-V and 100-V GaN FET families, which efficiently step the 800-VDC bus down to the lower-voltage rails required by GPUs.

In addition, the company’s 80-V and 100-V stacked-die MOSFETs, along with its 100-V GaN FETs, are offered in a shared package footprint. This commonality gives designers flexibility to balance cost and efficiency in the secondary stage of LLC converters as well as in 54-V to 12-V bus architectures. AOS’s stacked-die packaging technology further boosts achievable power density within secondary-side LLC sockets.

Figure 3: AOS’s portfolio of GaN and SiC devices, power MOSFETs, and power ICs supports 800-VDC AI factories. (Source: Alpha and Omega Semiconductor Limited)

Other leading semiconductor companies have also announced their readiness to support the transition to 800-VDC power architecture, including Renesas Electronics Corp. (GaN power devices), Innoscience (GaN power devices), onsemi (SiC and silicon devices), Texas Instruments Inc. (GaN and silicon power modules and high-density power stages), and Infineon Technologies AG (GaN, SiC, and silicon power devices).

For example, Texas Instruments recently released a 30-kW reference design for powering AI servers. The design uses a two-stage architecture built around a three-phase, three-level flying-capacitor PFC converter, which is then followed by a pair of delta-delta three-phase LLC converters. Depending on system needs, the unit can be configured to deliver a unified 800-VDC output or split into multiple isolated outputs.

Infineon, besides offering its CoolSiC, CoolGaN, CoolMOS, and OptiMOS families of power devices, also introduced a 48-V smart eFuse family and a reference board for hot-swap controllers, designed for 400-V and 800-V power architectures in AI data centers. This enables developers to design a reliable, robust, and scalable solution to protect and monitor energy flow.

The reference design (Figure 4) centers on Infineon’s XDP hot-swap controller. Among high-voltage devices suitable for a DC bus, the 1,200-V CoolSiC JFET offers the right balance of low on-resistance and ruggedness for hot-swap operation. Combined with this SiC JFET technology, the digital controller can drive the device in linear mode, allowing the power system to remain safe and stable during overvoltage conditions. The reference board also lets designers program the inrush-current profile according to the device’s safety operating area, supporting a nominal thermal design power of 12 kW.

Figure 4: Infineon’s XDP hot-swap controller reference design supports 400-V/800-V data center architectures. (Source: Infineon Technologies AG)

The post The shift to 800-VDC power architectures in AI factories appeared first on EDN.

Increasing 2DEG density with aluminium nitride barriers

Semiconductor today - Mon, 01/26/2026 - 10:37
University of Michigan at Ann Arbor in the USA has reported the experimental demonstration of a record room-temperature 2DEG sheet density exceeding 1×10¹⁴ cm⁻² in a single-channel AlN/GaN heterostructure grown by plasma-assisted molecular beam epitaxy (PAMBE), using a 9nm-thick AlN barrier...

European photonic chip industry risks losing advantage without decisive action

Semiconductor today - Mon, 01/26/2026 - 10:37
Europe has a leading position in photonic chip technology, it is reckoned. But, without targeted investment and action, the European Union (EU) risks losing its advantage as the global competition intensifies with large investments elsewhere...

Delay lines demystified: Theory into practice

EDN Network - Mon, 01/26/2026 - 07:26

Delay lines are more than passive timing tricks—they are deliberate design elements that shape how signals align, synchronize, and stabilize across systems. From their theoretical roots in controlled propagation to their practical role in high-speed communication, test equipment, and signal conditioning, delay lines bridge abstract timing concepts with hands-on engineering solutions.

This article unpacks their principles, highlights key applications, and shows how understanding delay lines can sharpen both design insight and performance outcomes.

Delay lines: Fundamentals and classifications

Delay lines remain a fundamental building block in circuit design, offering engineers a straightforward means of controlling signal timing. From acoustic propagation experiments to precision imaging in optical coherence tomography, these elements underpin a wide spectrum of applications where accurate delay management is critical.

Although delay lines are ubiquitous, many engineers rarely encounter their underlying principles. At its core, a delay line is a device that shifts a signal in time, a deceptively simple function with wide-ranging utility. Depending on the application, this capability finds its way into countless systems. Broadly, delay lines fall into three physical categories—electrical, optical, and mechanical—and, from a signal-processing perspective, into two functional classes: analog and digital.

Analog delay lines (ADLs), often referred to as passive delay lines, are built from fundamental electrical components such as capacitors and inductors. They can process both analog and digital signals, and their passive nature introduces some attenuation between the input and output terminals.

In contrast, digital delay lines (DDLs), commonly described as active delay lines, operate exclusively on digital signals. Constructed entirely from digital logic, they introduce no attenuation between terminals. Among DDL implementations, CMOS technology remains by far the most widely adopted logic family.
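The shift-register behavior at the heart of most DDLs can be sketched in a few lines of Python. This is a minimal illustrative model, not tied to any particular part:

```python
from collections import deque

class DigitalDelayLine:
    """Toy model of a fixed digital delay line: each clock tick shifts one
    sample through an N-stage register, so the output is the input delayed
    by N ticks. Stages start zero-initialized, as a real register chain might."""

    def __init__(self, stages: int):
        self.regs = deque([0] * stages, maxlen=stages)

    def tick(self, sample: int) -> int:
        out = self.regs[0]          # oldest sample exits the line
        self.regs.append(sample)    # newest sample enters; maxlen drops regs[0]
        return out

dl = DigitalDelayLine(stages=3)
outputs = [dl.tick(s) for s in [1, 2, 3, 4, 5, 6]]
print(outputs)  # [0, 0, 0, 1, 2, 3] -- the input reappears three ticks later
```

A fixed delay line corresponds to a hard-wired stage count; a variable one would simply let the designer change `stages` (within the manufacturer's limits, per the classification above).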

When classified by time control, delay lines fall into two categories: fixed and variable. Fixed delay lines provide a preset delay period determined by the manufacturer, which cannot be altered by the circuit designer. While generally less expensive, they are often less flexible in practical use.

Variable delay lines, by contrast, allow designers to adjust the magnitude of the delay. However, this tunability is bounded—the delay can only be varied within limits specified by the manufacturer, rather than across an unlimited range.

As a quick aside, bucket-brigade delay lines (BBDs) represent a distinctive form of analog delay. Implemented as a chain of capacitors clocked in sequence, they pass the signal step-by-step much like a line of workers handing buckets of water. The result is a time-shifted output whose delay depends on both the number of stages and the clock frequency.

While limited in bandwidth and prone to noise, BBDs became iconic in audio processing—powering classic chorus, flanger, and delay effects—and remain valued today for their warm, characterful sound despite the dominance of digital alternatives.
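The stage-count/clock relationship described above can be put into numbers. The sketch assumes the standard BBD relation (two charge transfers per clock cycle, so delay = N / (2 × f_clk)) and the 512-stage count commonly quoted for the MN3004:

```python
# Standard bucket-brigade delay relation: with N stages clocked at f_clk,
# two charge transfers occur per clock cycle, so delay = N / (2 * f_clk).
# The 512-stage count below is the figure commonly quoted for the MN3004.

def bbd_delay_ms(stages: int, clock_hz: float) -> float:
    return stages / (2.0 * clock_hz) * 1000.0

for f in (10_000.0, 100_000.0):
    print(f"{f / 1000:.0f} kHz clock -> {bbd_delay_ms(512, f):.2f} ms")
# 10 kHz clock -> 25.60 ms
# 100 kHz clock -> 2.56 ms
```

Note the trade-off this formula implies: lowering the clock lengthens the delay but also lowers the sampling rate, which is exactly why long BBD delays sound dark and noisy.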

Other specialized forms of delay lines include acoustic devices (often ultrasonic), magnetostrictive implementations, surface acoustic wave (SAW) structures, and electromagnetic bandgap (EBG) delay lines. These advanced designs exploit material properties or engineered periodic structures to achieve controlled signal delay in niche applications ranging from ultrasonic sensing to microwave phased arrays.

There are more delay line types, but I deliberately omitted them here to keep the focus on the most widely used and practically relevant categories for designers.

Figure 1 The nostalgic MN3004 BBD showcases its classic package and vintage analog heritage. Source: Panasonic

Retro Note: Many grey-bearded veterans can recall the era when memory was not etched in silicon but rippled through wire. In magnetostrictive delay line memories, bits were stored as acoustic pulses traveling through nickel wire. A magnetic coil would twist the wire to launch a pulse—which propagated mechanically—and was sensed at the far end, then amplified and recirculated.

These memories were sequential, rhythmic, and beautifully analog, echoing the pulse logic of early radar and computing systems. Mercury delay line memories offered a similar acoustic storage medium in liquid form, prized for its stable acoustic properties. Though long obsolete, they remain a tactile reminder of a time when data moved not as electrons, but as vibrations.

And from my recollection of color television delay lines, a delay line keeps the faster, high-definition luminance signal (Y) in step with the slower, low-definition chrominance signal (C). Because the narrow-band chrominance requires more processing than the wide-band luminance, a brief but significant delay is introduced. The delay line compensates for this difference, ensuring that both signals begin scanning across the television screen in perfect synchrony.

Selecting the right delay line

It’s now time to focus on choosing a delay line that will function effectively in your circuit. To ensure compatibility with your electrical network, you should pay close attention to three key specifications. The first is line type, which determines whether you need a fixed or variable delay line and whether it must handle analog or digital signals.

The second is rise time, generally defined as the interval required for a signal’s magnitude to increase from 10% to 90% of its final amplitude. The third is time delay, the actual duration by which the delay line slows down the signal, expressed in units of time. Considering these parameters together will guide you toward a delay line that matches both the functional and performance requirements of your design.
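The 10%-to-90% definition translates directly into code. The helper below is a hypothetical illustration of my own; the RC step response is used only as test data (theory predicts a rise time of about 2.2 × RC):

```python
import math

def rise_time(samples, dt):
    """Return the 10%-90% rise time of a monotonic rising edge,
    sampled at interval dt, per the definition above."""
    final = samples[-1]
    t10 = next(i for i, v in enumerate(samples) if v >= 0.1 * final) * dt
    t90 = next(i for i, v in enumerate(samples) if v >= 0.9 * final) * dt
    return t90 - t10

# Test data: RC step response v(t) = 1 - exp(-t/RC), for which
# theory gives t_r = RC * ln(9) ~= 2.2 * RC.
rc, dt = 1e-6, 1e-9
edge = [1 - math.exp(-i * dt / rc) for i in range(10_000)]
print(f"{rise_time(edge, dt) * 1e6:.2f} us")  # close to 2.2 * RC = 2.20 us
```

When comparing datasheet rise-time figures, make sure the delay line's rise time is comfortably shorter than the fastest edge your signal must preserve.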

Figure 2 A retouched snip from the legacy DS1021 datasheet shows its key specifications. Source: Analog Devices

Keep in mind that the DS1021 device, once a staple programmable delay line, is now obsolete. Comparable functionality is available in the DS1023 or in modern timing ICs such as the LTC6994, which deliver finer programmability and ongoing support.

Digital-to-time converters: Modern descendants of delay lines

Digital-to-time converters (DTCs) represent the contemporary evolution of delay line concepts. Whereas early delay lines stored bits as acoustic pulses traveling through wire or mercury, a DTC instead maps a digital input word directly into a precise time delay or phase shift.

This enables designers to control timing edges with sub-nanosecond accuracy, a capability central to modern frequency synthesizers, clock generation, and high-speed signal processing. In effect, DTCs carry forward the spirit of delay lines—transforming digital code into controlled timing—but with the precision, programmability, and integration demanded by today’s systems.
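At its simplest, a DTC is a linear code-to-time map. The sketch below is an idealized model using assumed values (an 8-bit word and a 10-ps LSB) purely for illustration; real DTCs add calibration and nonlinearity correction on top of this:

```python
# Idealized digital-to-time converter: an n-bit input word selects a delay
# in steps of one LSB. Word width and LSB resolution are assumed values.

def dtc_delay_ps(code: int, bits: int = 8, lsb_ps: float = 10.0) -> float:
    """Map an n-bit input word to a time delay: t = code * t_LSB."""
    if not 0 <= code < 2 ** bits:
        raise ValueError("code out of range for the given word width")
    return code * lsb_ps

print(dtc_delay_ps(0))     # 0.0 ps
print(dtc_delay_ps(255))   # 2550.0 ps full scale, i.e., 2.55 ns
```

The same map run in reverse (time to code) is a time-to-digital converter, the DTC's measurement-side counterpart in modern PLLs.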

Turning to practical points on DTCs: unlike classic delay line ICs that were sold as standalone parts, DTCs are typically embedded within larger timing devices such as fractional-N PLLs and clock-generation ICs, or implemented in FPGAs and ASICs. Designers will not usually find a catalog chip labeled “DTC,” but they will encounter the function inside modern frequency synthesizers and RF transceivers.

This integration reflects the shift from discrete delay elements to highly integrated timing blocks, where DTCs deliver picosecond-level resolution, built-in calibration, and jitter control as part of a broader system-on-chip (SoC) solution.

Wrap-up: Delay lines for makers

For hobbyists and makers, the PT2399 IC has become a refreshing antidote to the fog of complexity.

Figure 3 PT2399’s block diagram illustrates internal functional blocks. Source: PTC

Originally designed as a digital echo processor, it integrates a simple delay line engine that can be coaxed into audio experiments without the steep learning curve of PLLs or custom DTC blocks. With just a handful of passive components, PT2399 lets enthusiasts explore echoes, reverbs, and time-domain tricks, inspiring them to get their hands dirty with audio and delay line projects.

In many ways, it democratizes the spirit of delay lines, bringing timing control out of the lab and into the workshop, where curiosity and soldering irons meet. And yes, I will add some complex design pointers in the seasoned landscape—but after some lines of delay.

Well, delay lines may have shifted from acoustic pulses to embedded timing blocks, but they still invite engineers to explore timing hands‑on.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Delay lines demystified: Theory into practice appeared first on EDN.

The goal: to engage students in contemporary scholarly discourse

News - Sun, 01/25/2026 - 12:46

The activities of the Faculty of Linguistics of Igor Sikorsky Kyiv Polytechnic Institute in the first semester of the 2025/2026 academic year were marked by numerous events, including scientific-practical conferences, forums, and competitions, aimed at strengthening students' motivation for research and innovation, developing scientific thinking, deepening knowledge of current problems in science and technology, building academic communication skills in a foreign language, and expanding international scholarly ties and the exchange of experience and innovation among students of different universities. The faculty's high level of organization, the initiative of its departments, and the coordinated work of teachers and students attracted a large number of participants, who had the opportunity to present their results and interact in various formats. A chronicle of these events follows.

✅ Competition announced to fill vacant positions

News - Sun, 01/25/2026 - 08:30

The National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute" announces a competition to fill vacant positions. The deadline for submitting documents is 23.02.2026.

✨ Competition for the vacant positions of department heads

✨ Competition for the vacant positions of professors

✨ Competition for the vacant positions of associate professors, senior lecturers, lecturers, and assistants

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 01/24/2026 - 18:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator

CES 2026: AI, automotive, and robotics dominate

EDN Network - Fri, 01/23/2026 - 20:00

If the Consumer Electronics Show (CES) is a benchmark for what’s next in the electronic component industry, artificial intelligence now permeates every segment, from consumer electronics and wearables to automotive and robotics. Many chipmakers are placing big bets on edge AI as a key growth area, along with robotics and IoT.

Here’s a sampling of the latest devices and technologies launched at CES 2026, covering AI advances for automotive, robotics, and wearables applications.

AI SoCs, chiplets, and development

Ambarella Inc. announced its CV7 edge AI vision system-on-chip (SoC), optimized for a wide range of AI perception applications, such as advanced AI-based 8K consumer products (action and 360° cameras), multi-imager enterprise security cameras, robotics (aerial drones), industrial automation, and high-performance video conferencing devices. The 4-nm SoC provides simultaneous multi-stream video and advanced on-device edge AI processing while consuming very low power.

The CV7 may also be used for multi-stream automotive designs, particularly for those running convolutional neural networks (CNNs) and transformer-based networks at the edge, such as AI vision gateways and hubs in fleet video telematics, 360° surround-view and video-recording applications, and passive advanced driver-assistance systems (ADAS).

Compared with its predecessor, the CV7 consumes 20% less power, thanks in part to Samsung’s 4-nm process technology, which is Ambarella’s first on this node, the company said. It incorporates Ambarella’s proprietary AI accelerator, image-signal processor (ISP), and video encoding, together with Arm cores, I/Os, and other functions for an efficient AI vision SoC.

The high AI performance is powered by Ambarella’s proprietary, third-generation CVflow AI accelerator, with more than 2.5× AI performance over the previous-generation CV5 SoC. This allows the CV7 to support a combination of CNNs and transformer networks, running in tandem.

In addition, the CV7 provides higher-performance ISP, including high dynamic range (HDR), dewarping for fisheye cameras, and 3D motion-compensated temporal filtering with better image quality than its predecessor, thanks to both traditional ISP techniques and AI enhancements. It provides high image quality in low light, down to 0.01 lux, as well as improved HDR for video and images.

Other upgrades include hardware-accelerated video encoding (H.264, H.265, MJPEG) that boosts encode performance by 2× over the CV5, and an upgrade of the on-chip general-purpose processing to a quad-core Arm Cortex-A73, offering 2× higher CPU performance than the previous SoC. It also provides a 64-bit DRAM interface, delivering a significant improvement in available DRAM bandwidth compared with the CV5, Ambarella said. CV7 SoC samples are available now.

With its Atomiq SoC, Ambiq Micro Inc. delivers the industry’s first ultra-low-power neural processing unit (NPU) built on its Subthreshold Power Optimized Technology (SPOT) platform. It is designed for real-time, always-on AI at the edge.

Delivering both performance and low power consumption, the SPOT-optimized NPU is claimed as the first to leverage sub- and near-threshold voltage operation for AI acceleration to deliver leading power efficiency for complex edge AI workloads. It leverages the Arm Ethos-U85 NPU, which supports sparsity and on-the-fly decompression, enabling compute-intensive workloads directly on-device, with 200 GOPS of on-device AI performance.

It also incorporates SPOT-based ultra-wide-range dynamic voltage and frequency scaling that enables operation at lower voltage and lower power than previously possible, Ambiq said, making room in the power budget for higher levels of intelligence.
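
The benefit of running logic at lower voltage follows from the generic CMOS dynamic-power relation P = C·V²·f. The sketch below uses illustrative numbers, not Ambiq specifications, to show the quadratic payoff of near-threshold operation:

```python
# Generic CMOS dynamic-power relation: P = C * V^2 * f.
# Numbers are illustrative only, not Ambiq specifications.
def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    return c_farads * v_volts ** 2 * f_hz

nominal = dynamic_power(1e-9, 1.1, 100e6)  # 1 nF switched cap at 1.1 V
near_vt = dynamic_power(1e-9, 0.5, 100e6)  # same logic near threshold
print(f"power ratio: {nominal / near_vt:.2f}x")  # (1.1/0.5)^2 = 4.84x
```

Because voltage enters squared, even a modest supply reduction frees up a large share of the power budget, which is the headroom Ambiq refers to.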

Ambiq said the Atomiq SoC enables a new class of high-performance, battery-powered devices that were previously impractical due to power and thermal constraints. One example is smart cameras and security for always-on, high-resolution object recognition and tracking without frequent recharging or active cooling.

For development, Ambiq offers the Helia AI platform, together with its AI development kits and the modular neuralSPOT software development kit.

Ambiq’s Atomiq SoC (Source: Ambiq Micro Inc.)

On the development side, Cadence Design Systems Inc. and its IP partners are delivering pre-validated chiplets, targeting physical AI, data center, and high-performance computing (HPC) applications. Cadence announced at CES a partner ecosystem to deliver pre-validated chiplet solutions, based on the Cadence physical AI chiplet platform. Initial IP partners include Arm, Arteris, eMemory, M31 Technology, Silicon Creations, and Trilinear Technologies, as well as silicon analytics partner proteanTecs.

The new chiplet spec-to-packaged parts ecosystem is designed to reduce engineering complexity and accelerate time to market for developing chiplets. To help reduce risk, Cadence is also collaborating with Samsung Foundry to build out a silicon prototype demonstration of the Cadence physical AI chiplet platform. This includes pre-integrated partner IP on the Samsung Foundry SF5A process.

Extending its close collaboration with Arm, Cadence will use Arm’s advanced Zena Compute Subsystem and other essential IP for the physical AI chiplet platform and chiplet framework. The solutions will meet edge AI processing requirements for automobiles, robotics, and drones, as well as standards-based I/O and memory chiplets for data center, cloud, and HPC applications.

These chiplet architectures are standards-compliant for broad interoperability across the chiplet ecosystem, including the Arm Chiplet System Architecture and future OCP Foundational Chiplet System Architecture. Cadence’s Universal Chiplet Interconnect Express (UCIe) IP provides industry-standard die-to-die connectivity, with a protocol IP portfolio that enables fast integration of interfaces such as LPDDR6/5X, DDR5-MRDIMM, PCI Express 7.0, and HBM4.

Cadence’s physical AI chiplet platform (Source: Cadence Design Systems Inc.)

NXP Semiconductors N.V. launched its eIQ Agentic AI Framework at CES 2026, which simplifies agentic AI development and deployment for both expert and novice device makers. It is one of the first solutions to enable agentic AI development at the edge, according to the company. The framework works together with NXP’s secure edge AI hardware to help simplify agentic AI development and deployment for autonomous AI systems at the edge and eliminate development bottlenecks with deterministic real-time decision-making and multi-model coordination.

Offering low latency and built-in security, the eIQ Agentic AI Framework is designed for real-time, multi-model agentic workloads, including applications in robotics, industrial control, smart buildings, and transportation. A few examples cited include instantly controlling factory equipment to mitigate safety risks, alerting medical staff to urgent conditions, updating patient data in real time, and autonomously adjusting HVAC systems, without cloud connectivity.

Expert developers can integrate sophisticated, multi-agent workflows into existing toolchains, while novice developers can quickly build functional edge-native agentic systems without deep technical experience.

The framework integrates hardware-aware model preparation and automated tuning workflows. It enables developers to run multiple models in parallel, including vision, audio, time series, and control, while maintaining deterministic performance in constrained environments, NXP said. Workloads are distributed across CPU, NPU, and integrated accelerators using an intelligent scheduling engine.

The eIQ Agentic AI Framework supports the i.MX 8 and i.MX 9 families of application processors and Ara discrete NPUs. It aligns with open agentic standards, including Agent to Agent and Model Context Protocol.

NXP has also introduced its eIQ AI Hub, a cloud-based developer platform that gives users access to edge AI development tools for faster prototyping. Developers can deploy on cloud-connected hardware boards but still have the option for on-premise deployments.

NXP’s Agentic AI framework (Source: NXP Semiconductors N.V.)

Sensing solutions

Bosch Sensortec launched its BMI5 motion sensor platform at CES 2026, targeting high-precision performance for a range of applications, including immersive XR systems, advanced robotics, and wearables. The new generation of inertial sensors—BMI560, BMI563, and BMI570—is built on the same hardware and is adapted through intelligent software.

Based on Bosch’s latest MEMS architecture, these inertial sensors, housed in an LGA package, are claimed to offer ultra-low noise and exceptional vibration robustness. They offer twice the full-scale range of the previous generation. Key specifications include a latency of less than 0.5 ms, a time increment of approximately 0.6 µs, and a timing resolution of 1 ns, which together deliver responsive motion tracking in highly dynamic environments.

The sensors also leverage a programmable edge AI classification engine that supports always-on functionality by analyzing motion patterns directly on the sensor. This reduces system power consumption and accelerates customer-specific use cases, the company said.

The BMI560, optimized for XR headsets and glasses, delivers low noise, low latency, and precise time synchronization. Its advanced OIS+ performance helps capture high-quality footage even in dynamic environments for smartphones and action cameras.

Targeting robotics and XR controllers, the BMI563 offers an extended full-scale range with the platform’s vibration robustness. It supports simultaneous localization and mapping, high dynamic XR motion tracking, and motion-based automatic scene tagging in action cameras.

The BMI570, optimized for wearables and hearables, delivers activity tracking, advanced gesture recognition, and accurate head-orientation data for spatial audio. Thanks to its robustness, it is suited for next-generation wearables and hearables.

Samples are now available for direct customers. High-volume production is expected to start in the third quarter of 2026.

Bosch also announced the BMI423 inertial measurement unit (IMU) at CES. The BMI423 IMU offers an extended measurement range of ±32 g (accelerometer) and ±4,000 dps (gyroscope), which enable precise tracking of fast, dynamic motion, making it suited for wearables, hearables, and robotics applications.

The BMI423 delivers low current consumption of 25 µA for always-on, acceleration-based applications in small devices. Other key specifications include low noise levels of 5.5 mdps/√Hz for the gyroscope and 90 µg/√Hz (at ≤8 g) or 120 µg/√Hz (at ≥16 g) for the accelerometer, along with several interface options, including I3C, I2C, and serial peripheral interface (SPI).
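
Noise-density figures like these translate to an expected RMS noise once you pick a measurement bandwidth (assuming roughly white noise): rms = density × √bandwidth. The 100-Hz bandwidth below is an assumed example, not a datasheet condition:

```python
import math

# White-noise assumption: RMS noise = spectral density * sqrt(bandwidth).
def rms_noise(density: float, bandwidth_hz: float) -> float:
    return density * math.sqrt(bandwidth_hz)

# Gyro density from the text (5.5 mdps/sqrt(Hz)); 100-Hz bandwidth is an
# assumed example to show the conversion, not a specified condition.
print(f"{rms_noise(5.5, 100.0):.1f} mdps RMS")  # 55.0 mdps RMS
```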

For wearables and hearables, the BMI423 integrates voice activity detection based on bone-conduction sensing, which helps save power while enhancing privacy, Bosch said. The sensor detects when a user is speaking and activates the microphone only when required. Other on-board functions include wrist-gesture recognition, multi-tap detection, and step counting, allowing the main processor to remain in sleep mode until needed and extending battery life in compact devices such as smartwatches, earbuds, and fitness bands.

The BMI423 is housed in a compact 2.5 × 3 × 0.8-mm LGA package for space-constrained devices. It will be available through Bosch Sensortec’s distribution partners starting in the third quarter of 2026.

Bosch Sensortec’s BMI563 IMU for robotics (Source: Bosch Sensortec)

Also targeting hearables and wearables, TDK Corp. launched a suite of InvenSense SmartMotion custom sensing solutions for true wireless stereo (TWS) earbuds, AI glasses, augmented-reality eyewear, smartwatches, fitness bands, and other IoT devices. The three newest IMUs are based on TDK’s latest ultra-low-power, high-performance ICM-456xx family that offers edge intelligence for consumer devices at the highest motion-tracking accuracy, according to the company.

Instead of relying on central processors, SmartMotion on-chip software offloads motion-tracking computation to the motion sensor itself so that intelligent decisions can be made locally, allowing other parts of the system to remain in low-power mode, TDK said. In addition, the sensor-fusion algorithm and machine-learning capability are reported to deliver seamless motion sensing with minimal software effort by the customer.

The SmartMotion solutions, based on the ICM-456xx family of six-axis IMUs, include the SmartMotion ICM-45606 for TWS applications such as earbuds, headphones, and other hearable products; the SmartMotion ICM-45687 for wearable and IoT technology; and the SmartMotion for Smart Glasses ICM-45685. The ICM-45685 now enables new features, including wear detection (sensing whether users are putting glasses on or taking them off) and vocal-vibration detection for identifying the source of speech through its on-chip sensor-fusion algorithms. It also enables high-precision head-orientation tracking, optical/electronic image stabilization, intuitive UI control, posture recognition, and real-time translation.

TDK’s SmartMotion ICM-45685 (Source: TDK Corp.)

TDK also announced a new group company, TDK AIsight, to address technologies needed for AI glasses. The company will focus on the development of custom chips, cameras, and AI algorithms enabling end-to-end system solutions. This includes combining software technologies such as eye intent/tracking and multiple TDK technologies, such as sensors, batteries, and passive components.

As part of the launch, TDK AIsight introduced the SED0112 microprocessor for AI glasses. The next-generation, ultra-low-power digital-signal processor (DSP) platform integrates a microcontroller (MCU), state machine, and hardware CNN engine. The built-in hardware CNN architecture is optimized for eye intent. The MCU features ultra-low-power DSP processing, eyeGenI sensors, and connection to a host processor.

The SED0112, housed in a 4.6 × 4.6-mm package, supports the TDK AIsight eyeGI software and multiple vision sensors at different resolutions. Commercial samples are available now.

SDV devices and development

Infineon Technologies AG and Flex launched their Zone Controller Development Kit. The modular design for zone control units (ZCUs) is aimed at accelerating the development of software-defined-vehicle (SDV)-ready electrical/electronic architectures. Delivering a scalable solution, the development kit combines about 30 unique building blocks.

With the building block approach, developers can right-size their designs for different implementations while preserving feature headroom for future models, the company said. The design platform enables over 50 power distribution, 40 connectivity, and 10 load control channels for evaluation and early application development. A dual MCU plug-on module is available for high-end ZCU implementations that need high I/O density and computational power.

The development kit enables all essential zone control functions, including I2t (ampere-squared seconds), overcurrent protection, overvoltage protection, capacitive load switching, reverse-polarity protection, secure data routing with hardware accelerators, A/B swap for over-the-air software updates, and cybersecurity. The pre-validated hardware combines automotive semiconductor components from Infineon, including AURIX MCUs, OPTIREG power supply, PROFET and SPOC smart power switches, and MOTIX motor control solutions with Flex’s design, integration, and industrialization expertise. Pre-orders for the Zone Controller Development Kit are open now.

Infineon and Flex’s Zone Controller Development Kit (Source: Infineon Technologies AG)

Infineon also announced a deeper collaboration with HL Klemove to advance technologies in vehicle electronic architectures for SDVs and autonomous driving. This strategic partnership will leverage Infineon’s semiconductor and system expertise with HL Klemove’s capabilities in advanced autonomous-driving systems.

The three key areas of collaboration are ZCUs, vehicle Ethernet-based ADAS and camera solutions, and radar technologies.

The companies will jointly develop zone controller applications using Infineon’s MCUs and power semiconductors, with HL Klemove as the lead in application development. Enabling high-speed in-vehicle network solutions, the partnership will also develop front camera modules and ADAS parking control units, leveraging Infineon’s Ethernet technology, while HL Klemove handles system and product development.

Lastly, HL Klemove will use Infineon’s radar semiconductor solutions to develop high-resolution and short-range satellite radar. They will also develop high-resolution imaging radar for precise object recognition.

NXP introduced its S32N7 super-integration processor series, designed to centralize core vehicle functions, including propulsion, vehicle dynamics, body, gateway, and safety domains. Targeting SDVs, the S32N7 series, with access to core vehicle data and high compute performance, becomes the central AI control point.

Enabling scalable hardware and software across models and brands, the S32N7 simplifies vehicle architectures and reduces total cost of ownership by as much as 20%, according to NXP, by eliminating dozens of hardware modules and delivering enhanced efficiencies in wiring, electronics, and software.

NXP said that by centralizing intelligence, automakers can scale intelligent features, such as personalized driving, predictive maintenance, and virtual sensors. In addition, the high-performance data backbone on the S32N7 series provides a future-proof path for upgrading to the latest AI silicon without re-architecting the vehicle.

The S32N7 series, part of NXP’s S32 automotive processing platform, offers 32 compatible variants that provide application and real-time compute with high-performance networking, hardware isolation technology, AI, and data acceleration on an SoC. They also meet the strict timing, safety, and security requirements of the vehicle core.

Bosch announced that it is the first to deploy the S32N7 in its vehicle integration platform. NXP and Bosch have co-developed reference designs, safety frameworks, hardware integration, and an expert enablement program.

The S32N79, the superset of the series, is sampling now with customers.

NXP’s S32N7 super-integration processor series (Source: NXP Semiconductors N.V.)

Texas Instruments Inc. (TI) expanded its automotive portfolio for ADAS and SDVs with a range of automotive semiconductors and development resources for automotive safety and autonomy across vehicle models. The devices include the scalable TDA5 HPC SoC family, which offers power- and safety-optimized processing and edge AI; the single-chip AWR2188 8 × 8 4D imaging radar transceiver, designed to simplify high-resolution radar systems; and the DP83TD555J-Q1 10BASE-T1S Ethernet physical layer (PHY).

The TDA5 SoC family offers edge AI acceleration from 10 TOPS to 1,200 TOPS, with power efficiency beyond 24 TOPS/W. This scalability is enabled by its chiplet-ready design with UCIe interface technology, TI said, enabling designers to implement different feature sets.
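
A quick sanity check on those figures: at a given TOPS/W efficiency, the implied power envelope is simply TOPS divided by TOPS/W. The arithmetic below is illustrative only, using the quoted 24-TOPS/W figure across the whole range, which TI does not claim:

```python
# Back-of-envelope: power implied by a TOPS rating at a given efficiency.
# Applying 24 TOPS/W across the full range is an assumption for
# illustration; TI quotes "beyond 24 TOPS/W" without a per-part breakdown.
def implied_power_w(tops: float, tops_per_watt: float) -> float:
    return tops / tops_per_watt

for tops in (10, 1200):
    print(f"{tops} TOPS at 24 TOPS/W -> {implied_power_w(tops, 24):.2f} W")
# 10 TOPS at 24 TOPS/W -> 0.42 W
# 1200 TOPS at 24 TOPS/W -> 50.00 W
```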

The TDA5 SoCs provide up to 12× the AI computing of previous generations with similar power consumption, thanks to the integration of TI’s C7 NPU, eliminating the need for thermal solutions. This performance supports billions of parameters within language models and transformer networks, which increases in-vehicle intelligence while maintaining cross-domain functionality, the company said. It also features the latest Arm Cortex-A720AE cores, enabling the integration of more safety, security, and computing applications.

Supporting up to SAE Level 3 vehicle autonomy, the TDA5 SoCs target cross-domain fusion of ADAS, in-vehicle infotainment, and gateway systems within a single chip and help automakers meet ASIL-D safety standards without external components.

TI is partnering with Synopsys to provide a virtual development kit for TDA5 SoCs. The digital-twin capabilities help engineers accelerate time to market for their SDVs by up to 12 months, TI said.

The AWR2188 4D imaging radar transceiver integrates eight transmitters and eight receivers into a single launch-on-package chip for both satellite and edge architectures. This integration simplifies higher-resolution radar systems because 8 × 8 configurations do not require cascading, TI said, while scaling up to higher channel counts requires fewer devices.

The AWR2188 offers enhanced analog-to-digital converter data processing and a radar chirp signal slope engine, both supporting 30% faster performance than currently available solutions, according to the company. It supports advanced radar use cases such as detecting lost cargo, distinguishing between closely positioned vehicles, and identifying objects in HDR scenarios. The transceiver can detect objects with greater accuracy at distances greater than 350 meters.

With Ethernet an enabler of SDVs and higher levels of autonomy, the DP83TD555J-Q1 10BASE-T1S Ethernet SPI PHY with an integrated media access controller offers nanosecond time synchronization, as well as high reliability and Power over Data Line capabilities. This brings high-performance Ethernet to vehicle edge nodes and reduces cable design complexity and costs, TI said.

The TDA54 software development kit is now available on TI.com. Samples of the TDA54-Q1 SoC, the first device in the family, will be sampling to select automotive customers by the end of 2026. Pre-production quantities of the AWR2188 transceiver, AWR2188 evaluation module, DP83TD555J-Q1 10BASE-T1S Ethernet PHY, and evaluation module are now available on request at TI.com.

Robotics: processors and modules

Qualcomm Technologies Inc. introduced a next-generation robotics comprehensive-stack architecture that integrates hardware, software, and compound AI. As part of the launch, Qualcomm also introduced its latest, high-performance robotics processor, the Dragonwing IQ10 Series, for industrial autonomous mobile robots and advanced full-sized humanoids.

The Dragonwing industrial processor roadmap supports a range of general-purpose robotics form factors, including humanoid robots from Booster, VinMotion, and other global robotics providers. The architecture supports advanced perception and motion planning with end-to-end AI models such as VLAs and VMAs, enabling generalized manipulation capabilities and human-robot interaction.

Qualcomm’s general-purpose robotics architecture with the Dragonwing IQ10 combines heterogeneous edge computing, edge AI, mixed-criticality systems, software, machine-learning operations, and an AI data flywheel, along with a partner ecosystem and a suite of developer tools. This portfolio enables robots to reason and adapt to the spatial and temporal environments intelligently, Qualcomm said, and is optimized to scale across various form factors with industrial-grade reliability.

Qualcomm’s growing partner ecosystem for its robotics platforms includes Advantech, APLUX, AutoCore, Booster, Figure, Kuka Robotics, Robotec.ai, and VinMotion.

Qualcomm’s Dragonwing IQ10 industrial processor (Source: Qualcomm Technologies Inc.)

Quectel Wireless Solutions released its SH602HA-AP smart robotic computing module. Based on the D-Robotics Sunrise 5 (X5M) chip platform and with an integrated Ubuntu operating system, the module features up to 10 TOPS of brain-processing-unit computing power. The robotic computing modules target demanding robotic workloads, supporting advanced large-scale models such as Transformer, Bird’s-Eye View, and Occupancy.

The module works seamlessly with Quectel’s independent LTE Cat 1, LTE Cat 4, 5G, Wi-Fi 6, and GNSS modules, offering expanded connectivity options and a broader range of robotics use cases. These include smart displays, express lockers, electricity equipment, industrial control terminals, and smart home appliances.

The module, measuring 40.5 × 40.5 × 2.9 mm, operates over the –25°C to 85°C temperature range. It ships with a default memory configuration of 4 GB plus 32 GB, with numerous other memory options available. It supports data input and fusion processing for multiple sensors, including LiDAR, structured light, time-of-flight, and voice, meeting the AI and vision requirements of robotic applications.

The module supports 4K video at 60 fps with video encoding and decoding, binocular depth processing, AI and visual simultaneous localization and mapping, speech recognition, 3D point-cloud computing, and other mainstream robot perception algorithms. It provides Bluetooth, DSI, RGMII, USB 3.0, USB 2.0, SDIO, QSPI, seven UART, seven I2C, and two I2S interfaces.

The module integrates easily with additional Quectel modules, such as the KG200Z LoRa and the FCS950 Wi-Fi and Bluetooth module for more connectivity options.

Quectel’s SH602HA-AP smart robotic computing module (Source: Quectel Wireless Solutions)

The post CES 2026: AI, automotive, and robotics dominate appeared first on EDN.

Points of Invincibility are operating at Igor Sikorsky Kyiv Polytechnic Institute

News - Fri, 01/23/2026 - 17:30

In the event of a power outage, loss of heating, or other emergencies, Points of Invincibility operate on the KPI campus, where people can warm up, charge their gadgets, work, or simply wait it out.

NUBURU secures control of Orbit’s SaaS operational resilience platform

Semiconductor today - Fri, 01/23/2026 - 15:53
NUBURU Inc of Centennial, CO, USA — which was founded in 2015 and developed and previously manufactured high-power industrial blue lasers — has secured operating control of Italy-based Orbit S.r.l., a revenue-generating Software-as-a-Service (SaaS) company focused on digitalizing operational resilience, risk intelligence, and mission-critical decision support via its Orbit Open Platform. NUBURU says that the transaction strengthens its security offering capabilities and advances its multi-vertical growth strategy through the addition of a scalable, software-driven operating business...

First pcb for my esp-ecu project

Reddit:Electronics - Fri, 01/23/2026 - 15:25

Hey everyone,

I’ve been working on a standalone ECU project for the last couple of years, and I’ve finally got the first proper PCB made and assembled. The ECU side of this is already proven: I’ve been running it on engines for a while using smaller, hand-wired boards (a single-cylinder and a four-cylinder). This PCB isn’t me starting from scratch or hoping the logic works; it’s the next step: turning something that already works into a solid, repeatable platform that’s stable, easier to test properly, and easier to keep iterating.

The whole idea is a practical ECU built around an ESP32 that I can keep improving without the usual expensive locked-down ecosystem. It’s aimed at bikes and small engines, and the firmware is already doing the real ECU stuff (fuel and ignition control, crank/cam sync, 16x16 maps, launch/ALS logic, telemetry, etc.). This board is basically where it stops being a rat's nest of wiring and starts becoming an actual unit.

The board itself is pretty simple. There’s nothing exotic going on hardware-wise; it’s mostly just a clean way to break out signals and do the boring but important bits like input conditioning, ADC, drivers, and power. Honestly, 99% of the complexity in this project has been the code and the engine logic. The PCB is mainly about turning that proven setup into a proper platform.

(Also, for those wondering: the underside is a ground fill between traces.)

Hardware-wise it’s an ESP32-S3 Mini, an external ADC (MCP3008) for the analog stuff like TPS/MAP/O2, a 74HC14 for cleaning up crank/cam inputs, low-side injector drivers (IRLB3034) with flyback diodes, and a TC4427 driving the ignition outputs. The spark outputs can be jumpered for 5V or 12V depending what you’re trying to trigger, and there’s basic 12V protection plus an onboard 5V rail for sensors/modules.
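
For anyone pairing the same MCP3008 with a different host, the usual 3-byte SPI exchange (start bit, single-ended mode plus channel, then a 10-bit result in the last two response bytes) can be sketched in pure decode logic, with no SPI driver attached; wire it to whatever SPI API your platform provides:

```python
# MCP3008 single-ended read, usual 3-byte SPI framing:
# send [start, (single | channel) << 4, 0]; the 10-bit result comes back
# in the low 2 bits of byte 1 and all of byte 2. Decode logic only.
def mcp3008_request(channel: int) -> list[int]:
    assert 0 <= channel <= 7
    return [0x01, (0x08 | channel) << 4, 0x00]

def mcp3008_decode(resp: bytes) -> int:
    return ((resp[1] & 0x03) << 8) | resp[2]

# e.g. a response of 00 03 FF decodes to full scale
print(mcp3008_decode(bytes([0x00, 0x03, 0xFF])))  # 1023
```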

Also, I know an ESP is kind of a cursed MCU choice for an ECU if you look at it purely from a “hardware timers everywhere” perspective. It’s not the obvious route. The sensible/normal choice (and what most platforms use) is STM / STM-based stuff because you’ve got a ridiculous amount of hardware timers and it makes a lot of ECU timing problems feel easy. With the ESP32 you end up having to get creative, sharing limited hardware timer resources with software layers and scheduling, and that’s where a lot of the complexity has come from on my side. But the reason I went ESP is the surrounding ecosystem: the dash connects wirelessly, the power distribution unit connects wirelessly, the tuning app is wireless, telemetry is easy, and it’s all stuff the ESP platform is just good at. So yeah, if anyone’s wondering why I chose the ESP route and made my life harder, that’s basically why. Long term I want this to be an open-source project where people can add whatever features they want, and the ESP ecosystem (and how widely supported it is) makes that way more realistic.

This first revision is intentionally big and through-hole-heavy. That’s on purpose: it’s way easier to probe, rework, and debug when everything isn’t tiny and packed tight. Rev 1 is always where you find the dumb mistakes, and I’d rather find them on a board that’s friendly to work on before I shrink it down and move to SMD later.

So far I’ve been going through it section by section and it’s been behaving way better than I expected for a first spin. Bench testing is still continuing though, mainly power stability, noise/EMI behavior, sensor scaling, crank/cam conditioning, and verifying injector and ignition outputs under more realistic conditions.

Once I’ve shaken out whatever issues show up, I’ll do a revision 2 to clean up what I find, and after that the plan is to shrink it down and move to SMD so it becomes a smaller, cleaner “real ECU module” style board instead of a big debug-friendly prototype.

submitted by /u/Budgetboost

Power Tips #149: Boosting EV charger efficiency and density with single-stage matrix converters

EDN Network - Fri, 01/23/2026 - 15:00

An onboard charger converts power between the power grid and electric vehicles or hybrid electric vehicles. Traditional systems use two stages of power conversion: a boost converter to achieve unity power factor, and an isolated DC/DC converter to charge the batteries with isolation. These two stages require additional components, which decreases power density and increases cost.

Matrix converters use a single stage of conversion without a boost inductor and bulky electrolytic capacitors. When using bidirectional gallium nitride (GaN) power switches, the converters further reduce component count and increase power density.

Comparing two-stage power converters with single-stage matrix converters

A two-stage power converter, as shown in Figure 1, requires a boost inductor (LB) and a DC-link electrolytic capacitor (CB), as well as four metal-oxide semiconductor field-effect transistors (MOSFETs) for totem-pole power factor correction (PFC).

Figure 1 Two-stage power converter diagram with LB, CB, and four MOSFETs for totem-pole PFC. Source: Texas Instruments

A single-stage matrix converter, as shown in Figure 2, does not require a boost inductor nor a DC-link capacitor but does require bidirectional switches (S11 and S12). Connecting common drains or common sources of two individual MOSFETs forms the bidirectional switches. Alternatively, when adopting bidirectional GaN devices in matrix converters, the number of switches decreases. Table 1 compares the two types of converters.

Figure 2 Single-stage matrix converter diagram that does not require LB or CB, but necessitates the use of two bidirectional switches: S11 and S12. Source: Texas Instruments

 

                                       Two-stage power converter        Single-stage matrix converter
                                       (totem-pole PFC plus DC/DC)
Boost inductor                         Yes                              No
DC-link electrolytic capacitor         Yes                              No
Fast unidirectional switches           10                               4
Bidirectional switches                 0                                4
Slow switches                          2                                0
Electromagnetic interference filter    Smaller                          Larger
Input/output ripple current            Smaller                          Larger
Power density                          Lower                            Higher
Power efficiency                       Lower                            Higher
Control algorithm                      Simple                           Complicated

Table 1 A two-stage AC/DC and single-stage matrix converter comparison. Source: Texas Instruments

Single-stage matrix converter topologies

There are three major topologies applied to EV onboard charger applications.

Topology No. 1: The LLC topology

Figure 3 shows the inductor-inductor-capacitor (LLC) topology. The LLC converter regulates current or voltage by modulating switching frequencies. Lr and Cr form a resonant tank to shape the resonant current. Selecting the proper control algorithms will achieve a unity power factor.

With a three-phase AC input, the voltage ripple on the primary side is much smaller compared to a single-phase AC input. Therefore, the LLC topology is more suitable for three-phase applications. LLC converters operate at a higher frequency and realize a wider range of zero voltage switching (ZVS) than other topologies.
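For reference, the Lr-Cr tank's series resonant frequency follows the standard relation fr = 1/(2π√(LrCr)). The component values in this sketch are purely illustrative and not from the article.

```python
import math

def resonant_frequency(lr: float, cr: float) -> float:
    """Series resonant frequency of an Lr-Cr tank: fr = 1 / (2*pi*sqrt(Lr*Cr))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(lr * cr))

# Hypothetical tank values, for illustration only: Lr = 20 uH, Cr = 100 nF.
fr = resonant_frequency(20e-6, 100e-9)
print(f"fr = {fr / 1e3:.1f} kHz")  # fr = 112.5 kHz
```

An LLC converter's switching frequency is modulated around this resonant point to move the voltage gain above or below unity, which is how the charging current or voltage gets regulated.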

Figure 3 An LLC-based matrix converter with a three-phase AC input. Source: Texas Instruments

Topology No. 2: The DAB topology

Figure 4 shows a dual active bridge (DAB)-based matrix converter. The DAB topology can apply to a three-phase or single-phase AC input. Controlling the inductor current will realize unity power factor naturally. The goal of a control algorithm is to realize a wide ZVS range to reduce switching losses, reduce root-mean-square (RMS) current to reduce conduction losses, and achieve low current total harmonic distortion and unity power factor.

Triple-phase shift is necessary to achieve these goals, including primary-side internal phase shift, secondary-side internal phase shift, and external phase shift between the primary side and secondary side. Additionally, modulating the switching frequency will extend the ZVS range.
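The triple-phase-shift scheme described above generalizes the classic single-phase-shift (SPS) relation, which is the usual baseline for DAB power flow. The sketch below shows that well-known SPS formula; the operating-point numbers are illustrative, not from the article.

```python
def dab_power(v1: float, v2: float, n: float, fsw: float, lk: float, d: float) -> float:
    """Classic single-phase-shift DAB power transfer:

        P = n * V1 * V2 * d * (1 - |d|) / (2 * fsw * Lk)

    where d is the external phase shift expressed as a fraction of the half
    switching period (-0.5 .. 0.5). The sign of d sets the power direction.
    """
    return n * v1 * v2 * d * (1.0 - abs(d)) / (2.0 * fsw * lk)

# Hypothetical operating point, for illustration only:
# 325 V primary, 400 V battery, 1:1 transformer, 100 kHz, 20 uH leakage, d = 0.2.
p = dab_power(v1=325.0, v2=400.0, n=1.0, fsw=100e3, lk=20e-6, d=0.2)
print(f"P = {p:.0f} W")  # P = 5200 W
```

TPS adds the two internal phase shifts (d1, d2) as extra degrees of freedom on top of the external shift, which is what lets the controller widen the ZVS range and trim RMS current rather than just set the power level.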

Figure 4 A DAB-based matrix converter with a single-phase AC input. Source: Texas Instruments

Topology No. 3: The SR-based topology

Figure 5 shows a series resonant (SR) matrix converter. The resonant tank formed by Lr and Cr shapes the transformer current to reduce turnoff current and turnoff losses. Meanwhile, the reactive power is reduced, as are conduction and switching losses. Compared to the LLC topology, the switching frequency of SR matrix converters is fixed, but higher than the resonant frequency.

Figure 5 An SR-based matrix converter with a single-phase AC input. Source: Texas Instruments

The control algorithm of single-stage matrix converters

In an LLC topology-based onboard charger with a three-phase AC input, switching frequency modulation regulates the charging current or voltage and uses space vector control based on grid polarity. The voltage ripple applied to the resonant tank is small. The resonant tank determines gain variations and affects the converter’s operation.

A DAB or SR DAB-based onboard charger usually adopts triple-phase shift (TPS) control to naturally achieve unity power factor, a wide ZVS range, and low RMS current. Optimizing switching frequencies further reduces both conduction and switching losses.

Figure 6 illustrates pulse width modulation (PWM) waveforms of TPS control of matrix converters for a half AC cycle (for example, Vac > 0). Figure 4 shows where PWMs connect to the power switches: d1 denotes the internal phase shift between PWM1A and PWM4A, d2 denotes the internal phase shift between PWM5A and PWM6A, and d3 denotes the external phase shift between the middle point of d1 and d2. PWM1B and PWM4B are gate drives for the second pair of bidirectional switches.

Figure 6 TPS PWM waveforms for a single-stage matrix converter for a half AC cycle. Source: Texas Instruments

Regardless of the topology selected, matrix converters require bidirectional switches, formed by connecting two GaN or silicon carbide (SiC) switches with a common drain or common source. Bidirectional GaN switches are emerging devices, integrating two GaN devices with common drains and providing bidirectional control with a single device.

Matrix converters

Matrix converters use single-stage power conversion to achieve a unity power factor and DC/DC power conversion. They provide two major advantages in onboard charger applications:

  • High power density through the use of single-stage conversion, while eliminating large boost inductors and bulky DC-link electrolytic capacitors.
  • High power efficiency through reduced switching and conduction losses, and a single power-conversion stage.

There are still many challenges to overcome to expand the use of single-stage matrix converters to other applications. High ripple current is a concern for batteries that require a low ripple charging current. Matrix converters are also more susceptible to surge conditions given the lack of DC-link capacitors. Overall, however, matrix converters are gaining popularity, especially with the emergence of wide-band-gap switches and advanced control algorithms.

Sean Xu currently works as a system engineer in Texas Instruments’ Power Design Services team to develop power solutions using advanced technologies for automotive applications. Previously, he was a system and application engineer working on digital control solutions for enterprise, data center, and telecom power. He earned a Ph.D. from North Dakota State University and a Master’s degree from Beijing University of Technology.

Related Content

The post Power Tips #149: Boosting EV charger efficiency and density with single-stage matrix converters appeared first on EDN.

Ascent Solar reflects on 2025 commercial progress, industry partnerships and solar PV efficiency improvements

Semiconductor today - Fri, 01/23/2026 - 10:32
Ascent Solar Technologies Inc of Thornton, CO, USA – which designs and makes lightweight, flexible copper indium gallium diselenide (CIGS) thin-film photovoltaic (PV) panels that can be integrated into consumer products, off-grid applications and aerospace applications – has commented on the commercial progress, industry partnerships and solar PV efficiency improvements it achieved in 2025, as the leadership team looks ahead to continued corporate growth in 2026...

Procurement tool aims to bolster semiconductor supply chain

EDN Network - Fri, 01/23/2026 - 10:13

An AI-enabled electronic components procurement tool claims to boost OEM productivity with a software platform that negotiates prices, tracks spending, and monitors savings in real time. Users upload their bill of materials (BOM) to the system, which leverages AI agents to discover form-, fit-, and function-compatible parts and more.

ChipHub, founded in 2023, is a components procurement tool that aims to optimize operations and savings for OEMs by addressing the supply chain issues at the system level.

Figure 1 A lack of control over component pricing, availability, and spending metrics makes supply chain operations challenging. Source: ChipHub

A standard components procurement tool

Envision a procurement platform that empowers OEMs to engage directly with suppliers, enhancing control over annual expenditures ranging from millions to billions of dollars. Such a platform streamlines supplier interactions, fostering efficient negotiations and monitoring of cost-saving metrics.

At a high level, the tool enables OEMs to negotiate commercial terms directly with suppliers, all on the platform with no emails or spreadsheets. It can support millions of SKUs and thousands of suppliers, built on four fundamental procurement premises:

  1. A scalable platform that facilitates supplier negotiations.
  2. Risk reduction, because the component supplier knows who the end customer is.
  3. Generative AI that lets technical teams evaluate devices or specs, extracting information from datasheets and performing cross-part analysis.
  4. Record-keeping features that let procurement staff monitor savings.

Enter ChipHub, an AI-driven procurement tool tailored for hardware OEMs. Its agentic system leverages Model Context Protocol (MCP) to enable collaboration between multiple AI agents and humans to deliver the information supply chain professionals need. Features like this help reform component sourcing by offering time and cost efficiencies irrespective of the OEM’s scale.

Next, ChipHub offers the unified marketplace framework (UMF), which helps procurement teams across diverse sectors such as data centers, computing, networking, storage, power, consumer goods, industrial, and automotive. Users can implement UMF in a single day and start monitoring their spending and savings in real time.

Figure 2 The procurement tool enables OEMs to negotiate commercial terms directly with component suppliers and do it right on the platform. Source: ChipHub

Users such as procurement managers search the platform for specific parts, and the system conducts cross-part analysis to find compatible options, including real-time pricing and inventory data from various ecosystem partners. This spares them hours of manually searching for data and building comparison matrices.

The platform uses a system of multiple AI agents, with human oversight, to navigate the supply chain and provide insights into part availability and sourcing options. “We don’t house any parts; we are just enabling supply-based management,” said Aftab Farooqi, founder and CEO of ChipHub.

Do I really know my supply chain? According to Farooqi, that’s the fundamental question for procurement managers. “If they don’t have control and visibility of their supply chain, they could be vulnerable,” he added. He also acknowledged that ChipHub isn’t a solution for all OEMs.

“They could keep doing things the way they are doing,” Farooqi said. “But they can still subscribe to this platform and have it as a validation tool.” For example, OEMs can cross-check the signal integrity analysis of a particular component.

Farooqi added that contract manufacturers (CMs) can also use the platform as a key risk-reduction tool, thanks to its spend-tracking and collaboration features.

Related Content

The post Procurement tool aims to bolster semiconductor supply chain appeared first on EDN.
