Feed aggregator
Luminus leverages APC partnership to enhance energy-efficient LED lighting solutions
Automotive high-side driver withstands cold crank

ST’s VNQ9050LAJ 4-channel high-side driver controls 12-V automotive ground-connected loads via a 3-V and 5-V CMOS-compatible interface. Operating from 4 V to 28 V with typical RDS(on) as low as 50 mΩ per channel, the device remains active during cold-crank events until the supply falls to the 2.7-V (max) undervoltage shutdown threshold. This performance supports compliance with LV124 (Rev. 2013) requirements for low-voltage operation and automotive transients.

Based on ST’s VIPower-M09 technology, the driver protects resistive, capacitive, and inductive loads. Integrated current sensing uses an on-chip current mirror with a sense FET that tracks the main power FET, enabling accurate load monitoring. The sensed current is available at an external pin, where a resistor converts it to a proportional voltage for continuous diagnostics and fault detection.
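As a rough illustration of how a current-sense output like this is typically used, the short sketch below back-calculates load current from the sense-pin voltage. The mirror ratio K_SENSE and the external resistor value R_SENSE are placeholder assumptions for illustration, not figures from the VNQ9050LAJ datasheet.

```python
# Hypothetical sketch: recover load current from a high-side driver's
# current-sense pin. K_SENSE and R_SENSE are assumed values, not taken
# from the VNQ9050LAJ datasheet.
K_SENSE = 2000      # assumed current-mirror ratio, I_load / I_sense
R_SENSE = 1000.0    # ohms, assumed external sense resistor

def load_current(v_sense: float) -> float:
    """Estimate the load current (A) from the sense-pin voltage (V)."""
    i_sense = v_sense / R_SENSE     # current flowing out of the sense pin
    return i_sense * K_SENSE        # scale up by the mirror ratio

if __name__ == "__main__":
    for v in (0.5, 1.0, 2.5):
        print(f"V_sense = {v:.2f} V -> I_load ≈ {load_current(v):.2f} A")
```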
The VNQ9050LAJ offers robust protection and diagnostics for 12‑V automotive loads. It features integrated current sensing for overload, short-circuit, and open-load detection. The driver also includes overvoltage clamping, thermal-transient limiting, and configurable latch-off for overtemperature or power limitation, with a dedicated fault-reset pin. Additional protections—such as electrostatic discharge, loss-of-ground, loss-of-VCC, and reverse-battery—ensure reliable operation under extreme conditions.
The VNQ9050LAJ is in production in a thermally enhanced Power-SSO16 package, priced from $1.09 each for 1000-piece orders.
The post Automotive high-side driver withstands cold crank appeared first on EDN.
Embedded capacitors improve AI/HPC power delivery

Empower has launched three embedded silicon capacitors (ECAPs) for AI and high-performance computing (HPC) processors. The portfolio includes the EC2005P (9.34 μF in a 2×2-mm package), EC2025P (18.68 μF in a 4×2-mm package), and EC2006P (36.8 μF in a 4×4-mm package). These components are designed for integration into processor substrates to support elevated current density and fast transient load demands.

As AI and HPC workloads increase, conventional board-mounted capacitors struggle to maintain low impedance and fast response. These ECAP devices provide high capacitance density with ultralow equivalent series inductance (ESL) and resistance (ESR), improving power delivery network (PDN) performance when embedded close to the die. Tight dimensional tolerances ensure compatibility with advanced packaging flows.
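To see why low ESL and ESR matter for the PDN, a quick sweep of a single capacitor modeled as a series R-L-C network shows where its impedance stays low; the ESR, ESL, and capacitance values below are illustrative assumptions rather than Empower specifications.

```python
# Sketch: impedance vs. frequency of a capacitor modeled as a series R-L-C.
# The ESR, ESL, and capacitance values are illustrative assumptions only.
import numpy as np

C   = 9.34e-6   # farads (same range as a small embedded cap)
ESR = 2e-3      # ohms, assumed equivalent series resistance
ESL = 5e-12     # henries, assumed equivalent series inductance

f = np.logspace(5, 10, 501)                       # 100 kHz to 10 GHz
w = 2 * np.pi * f
z = np.abs(ESR + 1j * (w * ESL - 1.0 / (w * C)))  # |Z(f)| of the series model

f_res = 1.0 / (2 * np.pi * np.sqrt(ESL * C))      # self-resonant frequency
print(f"self-resonance ≈ {f_res / 1e6:.1f} MHz, |Z| at resonance ≈ {ESR * 1e3:.1f} mΩ")
```

Lower ESL pushes the inductive rise in impedance out to higher frequencies, which is the benefit of placing the capacitance close to the die.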
The ECAP portfolio also supports vertical power delivery architectures, including Empower’s Crescendo platform, to reduce loop inductance and system footprint. The devices provide a scalable approach for integrating silicon capacitance directly within processor packages.
The EC2005P, EC2025P, and EC2006P ECAPs are now in mass production. Learn more about the ECAP portfolio here.
The post Embedded capacitors improve AI/HPC power delivery appeared first on EDN.
Samsung leads with HBM4 DRAM performance

Samsung has begun mass production and commercial shipments of its HBM4 DRAM, marking what it describes as an industry first. Built on Samsung’s 6th-generation 10-nm-class DRAM process with a 4-nm logic base die, this high-bandwidth memory is optimized for performance, reliability, and energy efficiency in AI, HPC, and datacenter applications.

Samsung’s HBM4 delivers a consistent transfer speed of 11.7 Gbps — roughly 46% faster than the 8-Gbps industry standard and a 1.22× improvement over the 9.6-Gbps maximum of HBM3E. Memory bandwidth per single stack reaches up to 3.3 TB/s, a 2.7× increase over HBM3E. Current 12-layer stacking enables capacities from 24 GB to 36 GB, with future 16-layer stacks projected to expand offerings up to 48 GB.
To handle the doubled data I/Os from 1024 to 2048 pins, advanced low-power techniques were applied to the core die. Samsung’s HBM4 improves power efficiency by 40% via low-voltage TSVs and optimized power distribution, offers 10% better thermal resistance, and increases heat dissipation by 30% over HBM3E, ensuring reliable high-performance operation.
For more details on this announcement, see Samsung’s press release. Explore the broader HBM portfolio here.
The post Samsung leads with HBM4 DRAM performance appeared first on EDN.
Software accelerates 3D interconnect design

The Keysight Chiplet 3D Interconnect Designer automates the design of 3D interconnects for chiplet and 3DIC advanced packages. By removing time-consuming manual steps, the tool streamlines the optimization of complex interconnect structures—including vias, transmission lines, solder balls, and micro-bumps—while ensuring signal and power integrity in densely packed systems.

Part of Keysight’s EDA portfolio, the software provides a pre-layout workflow for advanced multi-die integration, UCIe compliance, automated routing, and robust simulation capabilities. It handles complex geometries—including hatched or waffled ground planes—that are critical for addressing manufacturing and fabrication constraints, particularly in silicon interposers and bridges.
The software can operate independently or alongside Keysight’s other EDA tools, enabling teams to seamlessly incorporate 3D interconnect workflows into their existing design environments.
To learn more about the Keysight Chiplet 3D Interconnect Designer (W3510E) and request a quote, visit the product page linked below.
The post Software accelerates 3D interconnect design appeared first on EDN.
Navitas tightens SiC losses with refined TAP

Navitas Semiconductor has announced its 5th-generation GeneSiC platform featuring high-voltage trench-assisted planar (TAP) SiC MOSFETs, describing it as a significant advancement over previous generations. The new 1200-V MOSFET line complements Navitas’ ultra-high-voltage 2.3-kV and 3.3-kV devices based on its 4th-generation GeneSiC technology.

The latest generation incorporates the company’s most compact TAP architecture to date, combining planar-gate ruggedness with trench-enabled performance gains to improve efficiency and long-term reliability. It targets high-voltage applications including AI data centers, grid and energy infrastructure, and industrial electrification.
Compared with the prior 1200-V devices, the new generation delivers a 35% improvement in RDS(on) × QGD figure of merit, reducing switching losses and enabling cooler, higher-frequency operation. About a 25% improvement in QGD/QGS ratio, together with a stable high threshold voltage (VGS,TH ≥ 3 V), strengthens switching robustness and improves immunity to parasitic turn-on in high-noise environments.
Navitas expects to introduce products based on its 5th-generation technology in the coming months. For additional information, contact a Navitas representative or email info@navitassemi.com.
The post Navitas tightens SiC losses with refined TAP appeared first on EDN.
Using integration and differentiation in an oscilloscope

Modern digital oscilloscopes offer a host of analysis capabilities since they digitize and store input waveforms for analysis. Most oscilloscopes offer basic math operations such as addition, subtraction, multiplication, division, ratio, and the fast Fourier transform (FFT). Mid- and high-end oscilloscopes offer advanced math functions such as differentiation and integration. These tools let you solve differential equations that you probably hated in your days as an engineering student. They are used the same way today in your oscilloscope measurements. Here are a few examples of oscilloscope measurements that require differentiation and integration.
Measuring current through a capacitor based on the voltage across it
The current through a capacitor can be calculated from the voltage across it using this equation: i(t) = C × dVc(t)/dt.
The current through a capacitor is proportional to the rate of change, or derivative, of the voltage across it. The constant of proportionality is the capacitance. A simple circuit can be used to show how this works (Figure 1).

Figure 1 A signal generator supplies a sine wave as Vin(t). The oscilloscope measures the voltage across the capacitor. Source: Art Pini
In this simple series circuit, the current can be measured by dividing the voltage across the resistor by its value. The oscilloscope monitors the voltage across the capacitor, Vc(t), and the voltage Vin(t). Taking the difference of these voltages yields the voltage across the resistor. The current through the resistor is calculated by rescaling the difference by multiplying by the reciprocal of the resistance. The voltage across the capacitor is acquired and differentiated. The rescale function multiplies the derivative by the capacitance to obtain the current through the capacitor (Figure 2).
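These two computations are easy to reproduce offline on sampled data as a sanity check on the scope math. The sketch below synthesizes the two acquired channels for a series R-C circuit and compares I = (Vin - Vc)/R against I = C·dVc/dt; the R, C, drive level, and sample rate are assumed values, not the ones used in Figure 1.

```python
# Sketch: compute the series current two ways from sampled "scope" data.
# R, C, the drive amplitude, and the sample rate are assumed values.
import numpy as np

fs = 250e6                          # samples per second (assumed)
t  = np.arange(0, 10e-6, 1 / fs)    # 10-us record
R, C, f0 = 100.0, 1e-9, 1e6         # assumed component values and test frequency

# Synthesize the two "acquired" channels for a series R-C divider.
w = 2 * np.pi * f0
h = (1 / (1j * w * C)) / (R + 1 / (1j * w * C))   # Vc/Vin transfer function
vin = 0.68 * np.sin(w * t)
vc  = 0.68 * abs(h) * np.sin(w * t + np.angle(h))

# Method 1: rescale the resistor voltage drop, I = (Vin - Vc) / R.
i_resistor = (vin - vc) / R

# Method 2: differentiate the capacitor voltage and rescale, I = C * dVc/dt.
i_cap = C * np.gradient(vc, 1 / fs)

rms = lambda x: np.sqrt(np.mean(x ** 2))
print(f"I from resistor drop: {rms(i_resistor) * 1e3:.3f} mA RMS")
print(f"I from C*dVc/dt:      {rms(i_cap) * 1e3:.3f} mA RMS")
```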
Figure 2 Computing the current in the series circuit using two different measurements. Source: Art Pini
Vin(t) is the top trace in the figure; it is measured as 477.8 mV RMS by measurement parameter P1, and it has a frequency of 1 MHz. Below it is Vc(t), the voltage across the capacitor, with a value of 380.2 mV RMS, as read in parameter P2. The third trace from the top, math trace F1, is the current based on the voltage drop across the resistor, which is measured as 5.718 mA RMS in parameter P3. The bottom trace, F2, shows the capacitor current, Ic(t), at 5.762 mA.
Parameter P6 reads the phase difference between the capacitor current and voltage traces F2 and M2, respectively. The phase is 89.79°, which is very close to the theoretically expected 90°.
Parameters P7 through P9 use parameter math to calculate the percentage difference between the currents measured by the two different measurements. It is 0.7%, which is respectable for the component tolerances used. Comparing the two current waveforms, we can see the differences (Figure 3).
Figure 3 Comparing the current waveforms from the two different measurement processes. Source: Art Pini
The two current measurement processes are very similar. Differentiating the capacitor voltage is somewhat noisier. This is commonly observed when using the derivative math function. The derivative is calculated by dividing the difference between adjacent sample values by the sample time interval. The difference operation tends to emphasize noise, especially when the rate of change of the signal is low, as on the peaks of the sine wave. The noise spikes at the peaks of the derivative signal are obvious. Maximizing the signal-to-noise ratio of differentiated waveforms is good practice. This can be done by filtering the signal before the math operation using the noise filters in the input channel.
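If you want to experiment with that trade-off offline, smoothing the sampled waveform before differentiating it shows the benefit directly; the noise level and the Savitzky-Golay window below are arbitrary illustrative choices, not the scope's built-in channel filters.

```python
# Sketch: smooth a noisy capture before differentiating it.
# The noise level and filter settings are arbitrary illustrative choices.
import numpy as np
from scipy.signal import savgol_filter

fs = 250e6
t  = np.arange(0, 10e-6, 1 / fs)
clean = 0.5 * np.sin(2 * np.pi * 1e6 * t)
vc    = clean + np.random.normal(0, 2e-3, t.size)   # "acquired" noisy waveform

d_raw      = np.gradient(vc, 1 / fs)                          # differentiate as-is
d_filtered = np.gradient(savgol_filter(vc, 51, 3), 1 / fs)    # smooth first, then differentiate

d_ideal = np.gradient(clean, 1 / fs)
print("derivative noise (std), raw:     ", np.std(d_raw - d_ideal))
print("derivative noise (std), filtered:", np.std(d_filtered - d_ideal))
```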
Measuring current through an inductor based on the voltage across it
A related mathematical operation, integration, can be used to determine the current through an inductor from the integral of the inductor’s voltage.
IL(t) = (1/L) ∫ VL(t) dt
Another series circuit, this time with an inductor, illustrates the mathematical operations performed on the oscilloscope (Figure 4).

Figure 4 A signal generator supplies a sine wave as Vin(t). The oscilloscope measures the voltage across the inductor, VL(t). Source: Art Pini
The oscilloscope is configured to integrate the voltage across the inductor, VL(t), and rescale the integral by the reciprocal of the inductance. Changing the units to Amperes completes the process (Figure 5).
Figure 5 Calculating the current in the series circuit using Ohm’s law with the resistor and integrating the inductor voltage. Source: Art Pini
This process also produces similar results. The series current calculated from the resistor voltage drop is 6.625 mA, while the current calculated by integrating the inductor voltage is 6.682 mA, a difference of 0.057 mA. The phase difference between the inductor current and voltage is -89.69°.
The integration setup requires adding a constant of integration, thereby imposing an initial condition on the current. Since integration is a cumulative process, any offset will generate a ramp function. The constant in the integration setup must be adjusted to produce a level response if the integration produces a waveform that slopes up or down.
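The same integration, including the offset handling just described, is straightforward to reproduce on sampled data. In the sketch below, the inductance, drive waveform, and deliberately added offset are assumed values; removing the mean before integrating plays the role of the constant-of-integration adjustment.

```python
# Sketch: recover inductor current by integrating the sampled inductor voltage.
# L, the drive waveform, and the small DC offset are assumed values.
import numpy as np

fs, f0, L = 250e6, 1e6, 10e-6            # sample rate, test frequency, inductance (assumed)
t  = np.arange(0, 10e-6, 1 / fs)
vl = 0.42 * np.sin(2 * np.pi * f0 * t) + 1e-3   # inductor voltage plus a small offset

# Any residual offset integrates into a ramp, so remove it first.
vl_ac = vl - np.mean(vl)

il = np.cumsum(vl_ac) / fs / L           # I_L(t) = (1/L) * integral of V_L dt
print(f"I_L ≈ {np.sqrt(np.mean(il ** 2)) * 1e3:.2f} mA RMS")
```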
Magnetic measurements: hysteresis plots
The magnetic properties of inductors and transformers can be calculated from the voltage across and the current through the inductor. The circuit in Figure 4, with appropriate input and resistance settings, can be used. Based on these inputs, the inductor’s magnetic field strength, usually represented by the symbol H, can be calculated from the measured current.
H = (n × IL) / l
Where: H is the magnetic field strength in Amperes per meter (A/m)
IL is the current through the inductor in Amperes
n is the number of turns of wire about the inductor core
l is the magnetic path length in meters
The oscilloscope calculates the magnetic field strength by rescaling the measured inductor current.
The magnetic flux density, denoted B, is computed from the voltage across the inductor.
B = (1 / (n × A)) ∫ VL(t) dt
Where B is the magnetic flux density in Teslas
VL is the voltage across the inductor
n is the number of turns of wire about the inductor core
A is the cross-sectional area of the magnetic core in square meters (m²)
The flux density is proportional to the integral of the inductor’s voltage. The constant of proportionality is the reciprocal of the product of the number of turns and the magnetic cross-sectional area. These calculations are built into most oscilloscope power analysis software packages, which use them to display the magnetic hysteresis plot of an inductor (Figure 6).
Figure 6 A power analysis software package calculates B and H from the inductor voltage and current and the geometry of the inductor. Source: Art Pini
The analysis software prompts the user for the inductor geometry, including n, A, and l. It integrates the inductor voltage (top trace) and scales the integral using the constants to obtain the flux density B (second trace from the top). The current (third trace from the top) is rescaled to obtain the magnetic field strength (bottom trace). The flux density (B) is plotted against the field strength (H) to obtain the hysteresis diagram.
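If you prefer to build the plot yourself from exported waveform data, the rescaling reduces to a few lines; the turns count, path length, core area, and waveforms below are hypothetical inputs, not the values behind Figure 6.

```python
# Sketch: compute B and H arrays for a hysteresis plot from V_L and I_L samples.
# The geometry (n, l, A) and the waveforms themselves are hypothetical inputs.
import numpy as np

fs = 250e6
t  = np.arange(0, 10e-6, 1 / fs)
vl = 0.42 * np.sin(2 * np.pi * 1e6 * t)                 # sampled inductor voltage (assumed)
il = 6.7e-3 * np.sin(2 * np.pi * 1e6 * t - np.pi / 2)   # sampled inductor current (assumed)

n, l, A = 20, 25e-3, 12e-6          # turns, path length (m), core area (m^2), all assumed

H = n * il / l                                      # field strength, A/m
B = np.cumsum(vl - np.mean(vl)) / fs / (n * A)      # flux density, teslas
B -= np.mean(B)                                     # center the loop about zero

# Plotting B against H (e.g., with matplotlib) traces out the hysteresis loop.
print(f"peak H ≈ {H.max():.2f} A/m, peak B ≈ {B.max() * 1e3:.3f} mT")
```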
Area within an X-Y plot
Many applications involving cyclic phenomena result in the need to determine the area enclosed by an X-Y plot. The magnetic hysteresis plot is an example. The area inside a hysteresis plot represents the energy loss per cycle per unit volume in a magnetic core. The area within an X-Y plot can be calculated based on the X and Y signals. The oscilloscope acquires both traces as a function of time, t. The variables can be changed in the integral to calculate the area based on the acquired traces:
Area = ∮ Y dX = ∫ Y(t) × (dX(t)/dt) dt
Note that both integration and differentiation are involved in this calculation. To implement this on an oscilloscope, we need to differentiate one trace, multiply it by the other, and integrate the result. The integral, evaluated over one cycle of the periodic waveform, equals the area contained within the X-Y plot. Here is an example using an XY plot that is easy to check (Figure 7).
Figure 7 Using a triangular voltage waveform and a square wave current waveform, the X-Y plot is a rectangle. Source: Art Pini
The area enclosed by a rectangular X-Y plot is easy to calculate based on the cursor readouts, which measure the X and Y ranges. The relative cursors are positioned at diagonally opposed corners, and the amplitude readouts for the cursors for each signal appear in the respective dialog boxes. The X displacement, the rectangle’s base, is 320.31 mV, and the Y displacement, the rectangle’s height, is 297.63 mA. The area enclosed within the rectangle is the product of the base times the height, or 95.33 mW.
Taking the derivative of the voltage signal on channel 1 yields a square wave. Multiplying it by the current waveform in channel 2 and integrating the product yields a decaying ramp (Figure 8).
Figure 8 The integrated product is measured over one input waveform cycle to obtain the area within the X-Y plot. Source: Art Pini
The area of the X-Y plot is read as the difference in the amplitudes at the cursor locations. This is displayed in the dialog box for the math trace F2, where the integral was calculated. The difference is 95.28 mW, which is almost identical to the product of the base and height. The advantage of this method is that it works regardless of the shape of the X-Y plot.
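The differentiate-multiply-integrate recipe is also easy to verify numerically. In the sketch below, the triangle and square waveform amplitudes are assumptions chosen only to roughly match the cursor readouts quoted above.

```python
# Sketch: area inside an X-Y plot via differentiate, multiply, integrate.
# The test waveforms and their amplitudes are assumed for illustration.
import numpy as np
from scipy.signal import sawtooth, square

fs, f0 = 1e6, 1e3
t = np.arange(0, 1 / f0, 1 / fs)                     # exactly one cycle
x = 0.160 * sawtooth(2 * np.pi * f0 * t, width=0.5)  # triangle "voltage", ~320 mV p-p
y = 0.149 * square(2 * np.pi * f0 * t)               # square "current", ~298 mA p-p

dx_dt = np.gradient(x, 1 / fs)          # differentiate one trace
area  = np.sum(y * dx_dt) / fs          # multiply by the other, integrate over the cycle
print(f"enclosed area ≈ {abs(area) * 1e3:.1f} mW")   # ~ base x height of the rectangle
```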
Practical examples
These are just a few practical examples of applying an oscilloscope’s integration and differentiation math functions to common electrical measurements, yielding insights into a circuit’s behavior that aren’t directly measurable.
Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.
Related Content
- Oscilloscope math functions aid circuit analysis
- Perform five common debug tasks with an oscilloscope
- FFTs and oscilloscopes: A practical guide
- Use waveform math to extend the capabilities of your DSO or digitizer
- Oscilloscopes provide basic measurements
The post Using integration and differentiation in an oscilloscope appeared first on EDN.
Preemptive utilities shutdown oversight: Too much, too little, or just right?

Preventing fires and other calamities by proactively shutting off power in advance of inclement weather depends on forecast precision; customers’ needs should also factor into the decision.
Following up on my prior blog post, wherein I detailed my “interesting” mid-November, I’ll now share that mid-December was “interesting” as well, albeit for a different reason.
I’ve mentioned before that my residence in the Rocky Mountain foothills west of Denver, CO, is no stranger to inclement weather. Mid-year monsoon storms are a regular presence, for example, such as a September 2024 storm that, like its 2014 and 2015 predecessors, zapped various electronic devices, leaving them useful only as teardown patients.

More generally, it tends to be “breezy” here, both on a sustained and (especially) gusty basis. See for example the multi-day screenshots I snagged as I was preparing to work on this writeup:


That said, mid-December 2025 was especially crazy. On Monday, December 15, Xcel Energy began warning of a potential preparatory outage beginning that same Wednesday, initially affecting approximately a half-million customers but downgraded a day later to roughly 50,000 (including us), with additional outages possible if conditions warranted, which they did: high-wind damage pushed the affected total that day past 100,000. Indeed, we lost power in a controlled shutoff beginning late Wednesday morning the 17th, and we subsequently experienced extremely high winds at our location.
Here’s a screenshot I grabbed right at the initially forecasted 73 mph gust peak that evening:

and another, a couple of hours later, once the intensity had begun to dissipate, indicating that the actual peak gust at my location had been 85 mph:

Thursday the 18th was comparatively calm, and our residence power was briefly restored starting at 5:30 that evening. Early the next morning, however, the electricity went down again due to another Xcel Energy-initiated controlled shutoff, in advance of another extremely high-winds day. We got our power back to stay on Saturday evening the 20th at around 5 pm. That said, Xcel’s service to the affected region wasn’t fully restored until well into the following week.
Legal and fiscal precedent
Here’s some historical background on why Xcel Energy might have made this preparatory shutoff decision, and to this degree. On December 30, 2021, a grass fire in Boulder County, Colorado (north of me), later referred to as the Marshall Fire, started and was subsequently fueled by 115 mph peak wind gusts:
The fire caused the evacuation of 37,500 people, killed two people, and destroyed more than 1,000 structures to become the most destructive fire in Colorado history.

Xcel Energy was later assigned responsibility for one of the fire’s two root causes, although Wikipedia’s entry points out that it “was neither caused by criminal negligence nor arson.”
In June 2023, Boulder County Sheriff Curtis Johnson announced that the fire’s causes had been found. He said that the fire was caused by two separate occurrences: “week-old embers on Twelve Tribes property and a sparking Xcel Energy power line.”
Wikipedia goes on to note that “Xcel Energy has faced more than 200 lawsuits filed by victims of the fire.” Those lawsuits were settled en masse two-plus years later, and less than three months ahead of the subsequent wind-related incident I’m documenting today:
On September 24, 2025, just ahead of trial, the parties reached a settlement. Pursuant to the agreement, Xcel will pay $640 million without admitting liability for the Marshall Fire. The settlement, which also includes Qwest Corp. and Teleport Communications America, resolves claims brought by individual plaintiffs, insurance companies, and public entities impacted by the fire. The resolution avoids what was anticipated to be a lengthy trial. No additional details regarding the settlement have been disclosed at this time.
A providential outcome (for us, at least)
The prolonged outage, I’m thankful to say, only modestly affected my wife and me. We stuck it out at the house through Wednesday night, but given that the high winds precluded us from using our fireplaces as an alternative heat source (high winds also precluded the use of solar cell banks to recharge our various EcoFlow portable power setups, a topic which I’ll explore in detail in a subsequent post), we ended up dropping off our dog at a nearby kennel and heading “down the hill” the next (Thursday) morning to a still-powered hotel room for a few days:

Thanks in no small part to the few-hour electric power restoration overnight on Thursday, plus the cold outside temperatures, we ended up only needing to toss the contents of our kitchen refrigerator. The food in its freezer, plus that in both the refrigerator/freezer and standalone chest freezer in the garage, all survived. And both the house and its contents more generally made it through the multiple days of high winds largely unscathed.
Likely unsurprising to you, the public outcry at Xcel Energy’s shutoff decision, including but not limited to its extent and duration, has been heated. Some of it—demanding that the utility immediately bury all of its power lines, and at no cost to customers—is IMHO fundamentally, albeit understandably (we can’t all be power grid engineers, after all) ignorant. See, for example, my recent photograph of a few of the numerous high voltage lines spanning the hills above Golden:

There’s also grousing about the supposedly inflated salaries of various Xcel Energy executives, for example, along with the as-usual broader complaints about Xcel Energy and other regulated monopolies.
That all said, I realize that other residents had it much worse than we did; they weren’t able to, and/or couldn’t afford to, relocate to a warm, electrified hotel room as we did, for example. Their outage(s) may have lasted longer than ours. They might have needed to throw out and replace more (and maybe all) of their refrigerated and frozen food (the same goes for grocery stores, restaurants, and the like). And their homes, businesses, and other possessions might have been damaged and/or destroyed by the high winds as well. All of it fuels frustration.
Results-rationalized actions?
But that all said, at the end of the day I haven’t heard of any fires resulting from the mid-December high winds, or for that matter from the more recent ones, primarily in the eastern half of the state and elsewhere, that prompted another proactive shutdown (the two screenshots I shared at the beginning of this writeup showed the more modest effects at my particular location). And of course, weather forecasting is an inexact science at best, so Xcel Energy’s conservatively large (and potentially excessive) estimate of how large a region to shut down, and for how long, may be at least somewhat understandable, particularly in light of the sizeable settlement it just paid out.
In closing, I’m curious to hear what you think. Was Xcel Energy too pessimistic with its decisions and actions? Or maybe too optimistic? And is there anything it could do better and/or more to predict conditions in the air and on the ground in advance, react to them in the moment, and repair and restore service afterwards?
To wit, while I intended the word “oversight” in this write-up’s title to reference the following definition option:
- Supervision; watchful care.
I realized in looking up the word that two other definition options are also ironically listed:
- An omission or error due to carelessness.
- Unintentional failure to notice or consider; lack of proper attention.
Which one(s) apply in this case? Let me know your thoughts in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Wearables for health analysis: A gratefulness-inducing personal experience
- Lightning strikes…thrice???!!!
- Lightning strike becomes EMP weapon
- Devices fall victim to lightning strike, again
The post Preemptive utilities shutdown oversight: Too much, too little, or just right? appeared first on EDN.
Report by KPI Vice-Rector for Infrastructure Development Oleksandr Myronchuk at the Academic Council meeting of December 15, 2025: "Development of the University's Infrastructure: Achievements of 2025"
Dear colleagues, a year ago within these walls the Academic Council approved the University Development Strategy for 2025-2030. That document defines the vision of our future, along with our priority goals and tasks. Today we will review the University's achievements in 2025 related to the development of its infrastructure.
Toxics Link study Finds a Long Road to Circularity in India’s E-waste EPR Model
A new report by an environmental group, Toxics Link, reveals significant structural gaps in India’s Extended Producer Responsibility (EPR) framework for e-waste. The study, titled “Long Road to Circularity,” warns that while the EPR model is a cornerstone of waste management, it currently fails to capture a vast range of critical minerals essential for India’s green transition.
The Extended Producer Responsibility (EPR) framework mandates the recovery of only four metals—gold, copper, iron, and aluminium—leaving critical metals untapped. A vast range of valuable and critical metals, including neodymium, dysprosium, and lithium, are essential to strengthening resource security but are currently overlooked and lost.
In light of these concerns, the report examines current e-waste regulations and highlights critical challenges limiting their effective implementation. The report also underscores persistent issues, including low consumer awareness, poor financial traceability of e-waste flow and limited monitoring capacities. The study identifies several operational gaps. Key findings from the report include:
- The EPR portal currently lacks data on total market players and leaves small-scale manufacturers, online sellers, and grey-market importers outside the system.
- Information regarding non-compliance penalties and environmental compensation remains unavailable for the 2023–24 and 2024–25 fiscal years.
- Detailed data is accessible only to operators, resulting in limited public visibility into system performance.
- The current regulation does not identify and place responsibility on any individual stakeholder for the setting up of collection centres, thus making it extremely difficult for consumers to deposit their waste in the authorised e-waste stream.
- There is no incentive for producers and manufacturers adopting “green” product designs or for recyclers implementing high-quality, advanced recovery processes.
“While Extended Producer Responsibility is a cornerstone of India’s e-waste management framework, the principle alone cannot deliver the desired outcomes. It must be complemented by an effective and robust waste collection system, integrating the informal sector and the development of high-tech recycling facilities along with public awareness for advancing system transparency,” said Satish Sinha, Associate Director, Toxics Link.
The report suggests the following key recommendations to plug some of the gaps in the present system:
- Enhance system accountability and transparency by making data accessible to the public.
- Strengthen reverse supply chains and collection mechanisms to streamline the flow of e-waste.
- Expand consumer awareness of the advantages of e-waste recycling and the importance of using authorised collection channels.
- Integrate the informal sector into the e-waste management ecosystem.
Together, these measures can help build a stronger and more effective e-waste management system.
The post Toxics Link study Finds a Long Road to Circularity in India’s E-waste EPR Model appeared first on ELE Times.
ESGDS’ AI platform slashes data processing time by 98% with MongoDB Atlas
ESG Data & Solutions (ESGDS) is a fast-growing Indian technology company. It builds tools to enable banks, investors, and other financial groups to track and analyse a company’s performance on Environmental, Social, and Governance (ESG) issues.
With a vast range of covered topics and multiple providers employing different types of methodologies and taxonomies, ESG data sets are notoriously difficult to work with.
Because these analyses guide critical research and investment decisions, ESGDS developed ESGSure—a bespoke research platform built on MongoDB Atlas—to address the challenge.
THEIR CHALLENGE: Overcoming the relational model’s limitations to unlock AI scale
ESGSure collects data points from over 20,000 companies and investors—these include annual reports and corporate filings, news, and client-specific questionnaires. The platform also tracks a range of other publicly available sources, including news articles, compliance records, and sanctions lists. These resources come in various formats, including videos, PDFs, transactional data from APIs, and more.
Before moving to MongoDB Atlas, ESGDS relied on several other databases, including relational databases such as PostgreSQL and Pinecone for vector search workloads. As the use cases and data sets expanded, ESGDS encountered limitations.
“Our platform needs to process massive, diverse, and unstructured data sets, so we can then use a combination of large language models (LLMs), real-time data, and vector search capabilities to deliver AI-driven granular, personalised, and actionable insights for investors,” said Arun Doraisamy, Co-Founder and Chief Technology Officer at ESGDS. “We needed more flexibility, to reduce complexity, and do that at scale. This meant moving away from a relational model and onto a database model that fit our needs.”
Several limitations drove ESGDS to seek a new database:
- Lack of flexibility and scalability: Rigid legacy relational databases lacked the schema flexibility required to dynamically store and update ESGDS’s rapidly evolving datasets. This resulted in inconsistent insights that hindered analysts’ and investors’ ability to make timely and accurate data-driven decisions. Additionally, a lack of elastic scalability throttled ESGDS’s ability to handle continuous data growth, compromising its ambitious expansion plans.
- Delayed data insights: Stale data is a significant challenge for the ESG data analysis industry—by the time it is collected and analysed, ESG data can be up to a year old. To add to this challenge, manual ESG data review in ESGDS’s legacy database took an average of 2 to 3 days per company. ESGDS wanted to automate these processes to provide investors with real-time insights.
- Complex security and compliance: ESGDS manages sensitive, private datasets for its clients. Ensuring secure storage, data encryption, and compliance with ESG frameworks and regional requirements, such as GDPR, has become increasingly complex. With expansion into highly regulated countries on its roadmap, ESGDS knew this challenge would become acute.
- Limited global portability: ESGDS needed a data platform that would easily and efficiently power growth plans across Europe, Asia Pacific, and North America. It had to support a reliable, multi-cloud, and multi-region infrastructure.
“We needed a modern, flexible model with built-in AI capabilities that could meet our complex needs, and keep evolving to support our ambitious growth and diversification goals,” said Doraisamy.
The post ESGDS’ AI platform slashes data processing time by 98% with MongoDB Atlas appeared first on ELE Times.
🚀 The "Lecturer Through the Eyes of Students" survey is underway
Want to influence the quality of teaching? Take part in the "Lecturer Through the Eyes of Students" survey in the Electronic Campus, covering the quality of teaching by academic staff during the past semester. This is the only official survey that MUST be taken into account when a lecturer goes through the competitive selection process.
Photon Bridge demos >30mW laser output per color at wafer scale on silicon photonics
AI is stress-testing processor architectures and RISC-V fits the moment

Every major computing era has been defined not by technology, but by a dominant workload—and by how well processor architectures adapted to it.
The personal computer era rewarded general-purpose flexibility, allowing x86 to thrive by doing many things well enough. The mobile era prioritized energy efficiency above all else, enabling Arm to dominate platforms where energy, not raw throughput, was the limiting factor.
AI is forcing a different kind of transition. It’s not a single workload. It’s a fast-moving target. Model scale continues to expand through sparse and mixture-of-experts techniques that stress memory bandwidth and data movement as much as arithmetic throughput. Model architectures have shifted from convolutional networks to recurrent models to transformers and continue evolving toward hybrid and emerging sequence-based approaches.
Deployment environments span battery-constrained edge devices, embedded infrastructure, safety-critical automotive platforms, and hyperscale data centers. Processing is spread across a combination of GPUs, CPUs, and NPUs where compute heterogeneity is a given.
The timing problem
Modern AI workloads demand new operators, execution patterns, precision formats, and data-movement behaviors. Supporting them requires coordinated changes across instruction sets, microarchitectures, compilers, runtimes, and developer tooling. Those layers rarely move in lockstep.
Precision formats illustrate the challenge. The industry has moved from FP32 to FP16, BF16, INT8, and now FP8 variants. Incumbent architectures continue to evolve—Arm through SVE and SVE2, x86 through AVX-512 and AMX—adding vector and matrix capabilities.
But architectural definition is only the first step. Each new capability must propagate through toolchains, be validated across ecosystems, and ship in production silicon. Even when specifications advance quickly, ecosystem-wide availability unfolds over multiple product generations.
The same propagation dynamic applies to support for sparsity, custom memory-access primitives, and heterogeneous orchestration. When workloads shift annually—or faster—the friction lies both in defining new processor capabilities and in aligning the full stack around them.

Figure 1 AI imposes multi-axis stress on processor architectures.
Traditional ISA evolution cycles—often measured in years from specification to broad silicon availability—were acceptable when workloads evolved at similar timescales. But they are structurally misaligned with AI’s rate of change. The problem is that architectural models optimized for long-term stability are now being asked to track the fast-paced and relentless reinvention of workloads.
The core issue is not performance. It’s timing.
Differentiate first, standardize later
Historically, major processor architectures have standardized first and deployed later, assuming hardware abstractions can be fully understood before being locked in. AI reverses that sequence. Many of the most important lessons about precision trade-offs, data movement, and execution behavior emerge in the development phase, while the models are still evolving.
Meta’s MTIA accelerator (MTIA ISCA23/MTIA ISCA25) makes use of custom instructions within its RISC-V–based processors to support recommendation workloads. That disclosure reflects a broader reality in AI systems: workload-specific behaviors are often discovered during product development rather than anticipated years in advance.

Figure 2 MTIA 2i architecture comprises an 8×8 array of processing elements (PEs) connected via a custom network-on-chip.

Figure 3 Each PE comprises two RISC-V processor cores and their associated peripherals (on the left) and a set of fixed-function units specialized for specific computations or data movements (on the right).
The MTIA papers further describe a model-hardware co-design process in which architectural features, model characteristics, and system constraints evolved together through successive iterations. In such environments, the ability to introduce targeted architectural capabilities early—and refine them during development—becomes an engineering requirement rather than a roadmap preference.
In centrally governed compute architectures, extension priorities are necessarily coordinated across the commercial interests of the stewarding entity and its licensees. That coordination has ensured coherence, backward compatibility, and ecosystem stability across decades.
It also means the pace and priority of architectural change reflect considerations that extend beyond any single vendor’s system needs and accumulate costs associated with broader needs, legacy, and compatibility.
The question is whether a tightly coupled generational cadence—and a centrally coordinated roadmap—remains viable when architectural optimization across a vast array of use cases must occur within the product development cycle rather than between them.
RISC-V decouples differentiation from standardization. A small, stable base ISA provides software continuity. Modular extensions and customizations allow domain-specific capabilities within product cycles. This enables companies and teams to innovate and differentiate before requiring broad consensus.
In other words, RISC-V changes the economics of managing architectural risk. Differentiation at the architecture level can occur without destabilizing the broader software base, while long-term portability is preserved through eventual convergence.
Matrix-oriented capabilities illustrate this dynamic. Multiple vendors independently explored matrix acceleration techniques tailored to their specific requirements. Rather than fragmenting permanently, those approaches are informing convergence through RISC-V International’s Integrated Matrix Extensions (IME), Vector Matrix Extensions (VME), and Attached Matrix Extensions (AME) working groups.
The result is a path toward standardized matrix capabilities shaped by multiple deployment experiences rather than centralized generational events that need consensus ahead of time.
Standardization profiles such as RVA23 extend this approach, defining compatible collections of processor extensions while preserving flexibility beneath the surface.
In practical product terms, this structural difference shows up in development cadence. In many established architectural models, product teams anchor around a stable processor core generation and address new workload demands by attaching increasingly specialized accelerators.
Meaningful architectural evolution often aligns with major roadmap events, requiring coordinated changes across hardware resources, scheduling models, and software layers. By contrast, RISC-V’s base-and-extension model allows domain-specific capabilities to be introduced incrementally on top of a stable ISA foundation.
Extensions can be validated and supported in software without requiring a synchronized generational reset. The distinction is not about capability; it’s about where, when, and how innovation occurs in the product cycle.
From inference silicon to automotive
This difference becomes apparent in modern inference silicon.
Architectural requirements—tightly coupled memory hierarchies, custom data-movement patterns, mixed-precision execution, and accelerator-heavy fabrics—are often refined during silicon development.
Take the case of D-Matrix, which has selected a RISC-V CPU for vector compute and orchestration, memory, and workload distribution management for its 3DIMC in-memory compute inference architecture. In architectures where data movement and orchestration dominate energy and latency budgets, the control plane must adapt alongside the accelerator. Architectural flexibility in the control layer reduces development iteration friction during early product cycles.
The tension between architectural stability and workload evolution is especially visible in automotive.
ISO 26262 functional safety qualification can take years, and vehicle lifecycles span a decade or more. Yet advanced driver assistance systems (ADAS) depend on perception models that are continuously evolving with improved object detection, sensor fusion, and self-driving capabilities. As a result, the automotive industry faces a structural tension: freeze the architecture and risk falling behind or update continuously and requalify repeatedly.
A stable, safety-certified RISC-V foundation paired with controlled extensions offers one way to balance those forces—architectural continuity where validation demands it, and differentiation where workloads require it.
This approach has industry backing. Bosch, NXP, Qualcomm, Infineon, and STMicroelectronics have formed Quintauris specifically to standardize RISC-V profiles for automotive, targeting exactly this combination of long-term architectural stability with application-layer adaptability.
The fact that this represents hardware suppliers, microcontroller vendors, and system integrators simultaneously reflects how broadly the industry has recognized the problem and the approach.
A moment defined by engineering reality
RISC-V’s expanding role in AI is not a rejection of incumbent architectures, which continue to deliver performance and compatibility across a wide range of systems. It reflects a shift in engineering constraints highlighted by AI’s pace.
When workloads evolve faster than architectural generations, adaptability becomes an economic variable. The architecture that prevails is not necessarily the one that runs today’s models fastest. It’s the one that can adjust when those models change.
Legacy processor architectures provide broad stability across generations. RISC-V adds a structural advantage in adaptation velocity—the ability to accommodate differentiation within the product cycle, absorb lessons from deployment, and converge toward standardization—without forcing system architects to wait for generational events. It can adapt to tomorrow’s workloads and course-correct without breaking yesterday’s software.
Marc Evans is director of business development and marketing at Andes Technology USA, a founding premier member of RISC-V International. He is also the organizer of RISC-V Now! (www.riscv-now.com), a conference focused on the practical lessons of deploying RISC-V at commercial scale across AI, automotive, and data centers, to be held in Silicon Valley on April 20-21, 2026.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
- AI’s insatiable appetite for memory
- The AI-tuned DRAM solutions for edge AI workloads
- Designing edge AI for industrial applications
- Round pegs, square holes: Why GPGPUs are an architectural mismatch for modern LLMs
- Bridging the gap: Being an AI developer in a firmware world
- Why power delivery is becoming the limiting factor for AI
- Silicon coupled with open development platforms drives context-aware edge AI
- Designing energy-efficient AI chips: Why power must be an early design consideration
- Edge AI in a DRAM shortage: Doing more with less
- AI in 2026: Enabling smarter, more responsive systems at the edge
The post AI is stress-testing processor architectures and RISC-V fits the moment appeared first on EDN.
Keysight Unveils 3D Interconnect Designer for Chiplet and 3DIC Advanced Package Designs
Keysight Technologies introduced 3D Interconnect Designer, a new addition to its Electronic Design Automation (EDA) portfolio. The solution addresses the mounting complexity of designing 3D interconnects for chiplet and 3DIC advanced packages used in AI infrastructure and data centre applications.
As chiplet architectures are increasingly adopted, engineers face complex 3D interconnect designs for multi-die and stacked-die applications, which traditional workflows struggle to handle efficiently. As a result, teams spend significant time manually optimising the interconnects that include vias, transmission lines, solder balls, and micro-bumps while ensuring signal and power integrity in densely packed systems. This results in more design spins and longer product development cycles, creating a bottleneck that can delay product launches and increase development costs.
Keysight EDA software streamlines the process with a dedicated workflow for designing and optimising 3D interconnects accurately. The tool handles complex geometries, including hatched or waffled ground planes, which are critical to overcoming manufacturing and fabrication constraints, especially in silicon interposers and bridges, in advanced package designs. By enabling engineers to quickly design, optimise, and validate 3D interconnects used in chiplets and 3DICs, it minimises iterations and speeds time-to-market.
Key benefits include:
- Accelerates Design Cycles: Streamlined automation removes time‑consuming manual steps in 3D interconnect design, minimising errors and boosting first‑pass success
- Reduced Compliance Risk: Validates designs against emerging standards such as UCIe and BoW, e.g., VTF (Voltage Transfer Function), early in the lifecycle, reducing the risk of late-stage failures that lead to costly redesigns
- Predicts Performance Accurately: Electromagnetic-based simulation provides precise electrical analysis of printed circuit boards (PCB) and package 3D interconnect designs
The solution integrates with Keysight’s other EDA tools and can also run standalone, enabling teams to incorporate 3D interconnect design and optimisation into existing workflows. When combined with Chiplet PHY Designer, engineers can design and optimise 3D interconnects specifically for chiplets and three-dimensional integrated circuits (3DICs), ensuring accuracy and reducing costly iterations in multi-die systems.
Nilesh Kamdar, EDA Design and Verification General Manager at Keysight, said:
“With today’s complexity, manual 3D interconnect design and optimisation have become a significant bottleneck. By streamlining the process and providing early insights into potential issues like signal and power integrity, we’re enabling engineers to get products to market faster and deliver compliant designs on tighter timelines.”
The post Keysight Unveils 3D Interconnect Designer for Chiplet and 3DIC Advanced Package Designs appeared first on ELE Times.
Made a keyboard diode matrix for the first time
Not exactly a keyboard, but the plan is to hook this up to a Pi Pico whenever it arrives and use it as the F1 - F24 keys for a CCTV project I'm working on as a "Camera Control Panel". With all the I/O ports on a Pico, I'm pretty sure I could have given each switch its own dedicated I/O, but this felt more fun lol
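For reference, a minimal MicroPython scanning loop for a diode matrix on a Pico might look like the sketch below; the GPIO assignments and the 4×6 matrix size are placeholder assumptions, not details taken from the post.

```python
# Minimal MicroPython sketch for scanning a switch/diode matrix on a Raspberry Pi Pico.
# The GPIO numbers and the 4x6 matrix size are placeholder assumptions.
from machine import Pin
import time

ROW_PINS = [2, 3, 4, 5]             # rows driven one at a time (outputs)
COL_PINS = [6, 7, 8, 9, 10, 11]     # columns read back (inputs with pull-downs)

rows = [Pin(p, Pin.OUT, value=0) for p in ROW_PINS]
cols = [Pin(p, Pin.IN, Pin.PULL_DOWN) for p in COL_PINS]

def scan():
    """Return a set of (row, col) tuples for the switches currently closed."""
    pressed = set()
    for r, row in enumerate(rows):
        row.value(1)                    # energize one row
        for c, col in enumerate(cols):
            if col.value():             # a closed switch (via its diode) pulls the column high
                pressed.add((r, c))
        row.value(0)
    return pressed

while True:
    keys = scan()
    if keys:
        print(keys)                     # replace with USB-HID key reports in a real build
    time.sleep_ms(10)
```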
Report by Vice-Rector for International Relations Andrii Shysholin at the KPI Academic Council meeting of December 15, 2025
Report by Vice-Rector for International Relations Andrii Shysholin at the KPI Academic Council meeting of December 15, 2025: "KPI in the World – the World in KPI: Results of 2025. International activity as part of the DNA of a modern university".
SemiQ debuting SiC power solutions for AI data centers and high-power infrastructure at APEC
Silly simple supply sequencing

Frequent contributor R. Jayapal recently shared an interesting Design Idea (DI) for power supply control and sequencing in MCU-based applications that combine analog and digital circuitry: “Short push, long push for sequential operation of multiple power supplies.”
The application becomes challenging when there’s a requirement to have the digital side powered up and stable for a programmable interval (typically a second or two) before the analog comes online.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Since Jayapal had already published a fine digital solution to the problem, I’ve taken the liberty of attempting an (almost painfully) simple analog version using an SPDT switch for ON/OFF control, RC time constants, and Schmitt triggers for sequencing. Figure 1 shows how it works.
Figure 1 Simple analog supply sequencing accomplished using an SPDT switch for ON/OFF control, RC time constants, and Schmitt triggers for sequencing.
Switching action begins with S1 in the OFF position and both C1 and C2 timing caps discharged. This holds U1 pin 1 at 15 V and pin 3 at 0 V. The latter holds enhancement-mode PFET Q1’s gate at 15 V, so both the transistor and the 15-Vout rail are OFF. Meanwhile, the former holds NFET Q2’s gate at zero and therefore Q2 and the 5-Vout rail are likewise OFF. No power flows to the connected loads.
Figure 2 shows what happens when S1 is flipped to ON.

Figure 2 Power sequence timing when S1 is flipped to ON, connecting C2 near ground through R3.
Moving S1 from OFF to ON connects C2 near ground through R3, charging it to the Schmitt trigger’s low-going threshold in about R3C2 = 1 ms. This reverses U1 pin 2 to 15 V, placing a net forward bias of 10 V on NFET Q2 and turning on Q2, the 5-Vout rail, and the connected loads, which will remain on as long as S1 stays ON.
Meanwhile, back at the ranch, the reset of C1 has been released, allowing it to begin charging through R1. Nothing much else happens until it reaches U1’s ~10-V threshold, which requires roughly T1 = ln(3)R1C1 = 2.2 seconds for the component values shown. Of course, almost any desired interval can be chosen with different values. When R1C1 times out, U1 pin 4 snaps low, PFET Q1 turns ON, and 15-Vout goes live. Turn ON sequencing is therefore complete.
The right side of Figure 2 shows what happens when S1 is flipped to OFF.
Firstly, C1 is promptly discharged through R3, turning off Q1 and 15-Vout, putting it and whatever it powers to sleep. Then C2 begins ramping from near zero to 15 V, taking T2 = ln(3)R2C2 = 2.2 seconds to get to U1’s threshold. When it completes the trip, pin 2 goes low, turning Q2 and 5-Vout OFF. Turn OFF sequencing is therefore complete.
Marginal details of the design include the two 4148 diodes, whose purpose is to make the sequencer’s response to losing and regaining the input rail voltage orderly, regardless of whether S1 is ON or OFF when/if that happens. Note that the MOSFETs should be chosen for adequate current-handling capacity, and that since Q1 has 15 V of gate/source drive and Q2 gets 10 V, neither needs to be a sensitive logic-level device.
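Since each sequencing delay is just an RC charge reaching a Schmitt threshold, the timing is easy to check numerically; the 15-V rail and ~10-V threshold come from the text, while the R and C values below are assumed placeholders rather than the Figure 1 values.

```python
# Sketch: RC delay to a Schmitt threshold, t = R * C * ln(Vrail / (Vrail - Vth)).
# With the threshold at ~2/3 of the rail this reduces to ln(3)*R*C, as in the text.
# The R and C values are assumed placeholders, not read from Figure 1.
import math

V_RAIL = 15.0   # volts (from the text)
V_TH   = 10.0   # volts, Schmitt-trigger threshold, ~2/3 of the rail (from the text)

def delay(r_ohms: float, c_farads: float) -> float:
    """Time for an RC charge from 0 V toward V_RAIL to reach V_TH."""
    return r_ohms * c_farads * math.log(V_RAIL / (V_RAIL - V_TH))

print(f"T1 ≈ {delay(2.0e6, 1e-6):.2f} s")   # e.g., R1 = 2 MΩ, C1 = 1 µF (assumed) gives ~2.2 s
```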
Figure 3 shows some alternative implementation possibilities for U1’s triggers in case using a hextuple device with 4 sections unused seems inconvenient or wasteful.

Figure 3 Alternative Schmitt trigger possibilities.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Short push, long push for sequential operation of multiple power supplies
- A step-by-step guide to power supply sequencing and supervision, Part 2
- Power-supply sequencing for low-voltage processors
- Trends in power supply sequencing
The post Silly simple supply sequencing appeared first on EDN.