Feed aggregator
https://www.semiconductor-today.com/news_items/2026/jan/tclcsot-prima-020126.shtml
How 2025’s Constraints Became the Blueprint for Electronics System Design in 2026?
As the electronics industry looks back at 2025, a clear shift toward efficiency, miniaturisation, and—most critically—more deliberate material choices becomes evident. The year stands out as a pivotal phase in the evolution of electronics, enabling systems tailored to the increasingly demanding requirements of data centres, advanced sensing platforms, electrified systems, and next-generation semiconductor packaging. Rather than chasing raw performance, the industry in 2025 was forced to reconcile ambition with practicality—balancing sustainability goals, high-performance demands, and mounting geopolitical pressures.
Escalating power densities driven by AI-centric data centers and electrification, shrinking thermal headroom resulting from aggressive miniaturization and higher levels of integration, and growing material availability constraints shaped by geopolitics and post–Moore’s Law design dependencies collectively emerged as defining parameters in the system architectures of automotive, industrial, and infrastructure electronics.
“Performance scaling today is increasingly driven by materials-centric advanced packaging,” says Suraj Rengarajan, Head of Semiconductor Product Group, Applied Materials India. As the industry enters a new year, these forces offer a clear lens through which to examine the design choices and innovations that defined electronics in 2025. To illustrate how the fundamentals of electronics design are changing, Rengarajan adds: “System-level power, performance, area, and cost are now set by co-optimizing the bonding interface, low-k dielectrics, redistribution-layer etch, barrier/seed, copper fill, CMP, and thermal interfaces, treating interconnect resistance and heat flux as primary design variables.”
In each aspect we examine in the course of this story, we will look at how the industry's new dynamics shaped design engineers' preferences in sustaining innovation across applications, including data centres, automotive, and industrial systems.
Power Efficiency over Capability!
As electrification rapidly spread its wings, power efficiency, not power capacity, became the primary constraint. Rising demand across sectors increased energy requirements significantly while simultaneously tightening thermal and sustainability limits. This realisation brought the power electronics landscape into the core architectural considerations of the electrical design engineer. Under these conditions, the industry moved to significantly increase the power handled per unit area: power density.
With AI workloads driving processor currents from a few hundred amperes to well over a thousand—without any meaningful increase in board or package footprint—power efficiency emerged as the only viable path to sustain compute scaling.
This pushed engineers to focus on more fundamental, intrinsic aspects of power electronics, such as efficiency, and to pursue it at every level of electronics design. To sustain the new dynamic, the industry moved toward wide-bandgap (WBG) technologies, including silicon carbide (SiC) and gallium nitride (GaN). These helped engineers reduce switching and conduction losses, along with heat generation per unit area, while abiding by tighter thermal and packaging constraints. WBG technology also pushed the efficiency of electronic products significantly higher at the system level.
As power density increased, thermal removal became progressively harder, creating a self-reinforcing loop in which higher efficiency was required simply to preserve thermal headroom rather than to improve performance.
Application
In data centres, rising compute density is driving demand for compact, high-efficiency power solutions. Gallium nitride–based power supplies are gaining traction by improving efficiency, enabling higher switching frequencies, shrinking passive components, and reducing cooling needs. In some architectures, GaN also allows simplified or single-stage power conversion, lowering losses and bill-of-materials complexity while supporting higher voltages closer to the point of load.
“With AI workloads, processor current levels have scaled from a few hundred amperes to over a thousand amperes, while the physical footprint has remained largely unchanged. This has fundamentally pushed power density and efficiency to the centre of system design,” says Dr Kaushik Basu, Associate Professor at IISc Bangalore.
Thermal Limits Over Advanced Cooling
As power efficiency improvements enabled higher power densities, overall heat generation continued to rise—driven by increasing absolute power levels and the closer packing of heat sources within shrinking form factors. Under these conditions, heat was generated faster than it could be spread or dissipated, leading to steeper thermal gradients that placed greater stress on materials, interconnects, and interfaces. At the same time, as electronics moved toward more miniaturised, efficient, and reliability-critical designs, the cost, complexity, and reliability penalties associated with ever-more advanced cooling solutions became increasingly prohibitive.
“As power density increases, heat removal becomes increasingly difficult. That is why efficiency is no longer optional—there is simply no thermal headroom to absorb losses,” says Dr Basu. By 2025, the industry reached a clear realisation: cooling complexity could no longer scale indefinitely to offset rising power density. This marked a fundamental shift in design philosophy, with heat dissipation moving from a downstream mechanical consideration to a primary architectural constraint addressed early in the design cycle. “Designers are increasingly treating materials as first-class design parameters. For advanced nodes, device physics is fundamentally materials physics,” says Suraj from Applied Materials India.
The growing adoption of advanced packaging approaches, including 2.5D and 3D packaging, was driven as much by electrical constraints as thermal ones, as rising currents made long power-delivery paths increasingly untenable due to conduction losses and localized heating. Advanced packaging emerged as the first line of defence against thermal stress, playing a critical role in protecting silicon devices while enabling higher levels of integration and system efficiency. In vertically stacked 3D architectures, where multiple dies are interconnected using through-silicon vias (TSVs), thermal challenges become particularly acute due to limited heat-escape paths and the formation of localised hotspots.
In such configurations, traditional air- or liquid-based cooling, or the addition of increasingly sophisticated cooling hardware, often proved insufficient, expensive, or impractical—especially in automotive, industrial, and infrastructure applications with stringent reliability and lifetime requirements. While advanced packaging shortened interconnect paths and reduced resistive losses, it also concentrated heat generation within smaller volumes, making thermal constraints more visible rather than eliminating them. “Teams now co‑simulate variability and reliability, electromigration, bias temperature instability, and time‑dependent dielectric breakdown, at the materials level alongside logic and layout,” says Suraj. As a result, thermal-aware system architecture and packaging design became indispensable in sustaining performance and reliability.
“Advanced packaging approaches such as 2.5D and 3D integration are largely driven by the need to minimise current paths and conduction losses by bringing power conversion closer to the load. However, they also make thermal challenges more visible rather than eliminating them,” says Dr Basu. Eventually, to let engineers accurately predict and manage heat generation and dissipation, which is crucial for preventing component failure, optimizing performance, and ensuring safety, thermal modeling and co-simulation became integral to modern electronics design.
Materials as a Design Constraint, Not a Specification
In 2025, materials in electronics moved beyond being passive specifications and emerged as hard design constraints shaping system architecture from the outset. Persistent supply-chain fragility, geopolitical uncertainty, tightening environmental regulations, and the escalating demands of AI, high-performance computing, and electrification collectively forced designers to treat material selection as a primary limiting factor influencing performance, reliability, and manufacturability.
Midway through the year, the surge in AI, HPC, and electrified platforms imposed unprecedented thermal and electrical stress on electronic systems. Materials able to withstand high power density and heat over long lifetimes became critical design constraints, shaping device selection, power architecture, and packaging. As advanced nodes and 2.5D/3D integration pushed miniaturisation to its limits, thermal conductivity, mechanical strength, and interconnect reliability emerged as central concerns.
By late 2025, regulatory pressures further reshaped material decisions. Stricter sustainability and environmental compliance requirements, including tighter enforcement of RoHS and REACH norms, transformed lead-free, recyclable, and low-emission materials from preferences into mandatory design conditions. While breakthroughs in advanced materials and AI-driven material informatics offered new optimisation pathways, they also demanded deeper material awareness from system designers.
“We are reaching a point where clever system-level design alone is not sufficient. Addressing today’s power and thermal challenges increasingly requires improvements at the material and device level,” says Dr Basu.
Together, these forces marked 2025 as the year when material availability, compliance, and physics converged, redefining what was practically achievable in electronics design. Material choice ceased to be a downstream optimisation exercise and instead became a foundational variable that set the limits for efficiency, scalability, and long-term system viability.
Conclusion: Designing Within Limits Became the New Competitive Advantage
Power density, thermal limits, and materials are no longer independent design considerations; in high-performance systems, each now defines the operating boundary of the others. “Thermal management and power density will remain the most difficult challenges in the coming years, while material-level improvements, although critical, will take longer to mature,” says Dr Basu.
The defining lesson of 2025 was rooted in a collective shift in how electronic systems were conceived and engineered. As power efficiency replaced raw capability, thermal limits supplanted aggressive cooling, and materials evolved from passive enablers to active constraints, electronics design entered an era governed less by ambition and more by physical and systemic realities. “Efficiency is being engineered from the materials up, with interconnects, dielectrics, power delivery, cooling, and packaging treated as a coupled system,” says Suraj of Applied Materials India.
Across data centres, automotive platforms, and industrial systems, engineers confronted hard limits of heat, materials, and long-term reliability, making performance something to be balanced rather than maximised. Power electronics moved to the centre of system architecture, packaging became a critical thermal and electrical optimisation layer, and material choices began shaping designs at the architectural stage. Innovation did not slow under these constraints; it became more disciplined, integrated, and system-aware.
As electronics move forward, the lesson of 2025 is clear: the future belongs not to systems that promise peak performance on paper, but to those engineered with a deep understanding of efficiency, thermal reality, and material limits—marking the year when designing within constraints became a true engineering advantage. In an industry long defined by relentless scaling, 2025 will be remembered as the year when designing within limits became the ultimate engineering advantage.
The post How 2025’s Constraints Became the Blueprint for Electronics System Design in 2026? appeared first on ELE Times.
2025: A year in which chaos seemingly thrived

A year back, this engineer titled his 2024 retrospective “interconnected themes galore”. That said, both new and expanded connections can sometimes lead to chaotic results, yes?
As any of you who’ve already seen my precursor “2026 Look Ahead” piece may remember, we’ve intentionally flipped the ordering of my two end-of-year writeups once again this year. This time, I’ll be looking back over 2025: for historical perspective, here are my prior retrospectives for 2019, 2021, 2022, 2023, and 2024 (we skipped 2020).
As I’ve done in past years, I thought I’d start by scoring the key topics I wrote about a year ago in forecasting the year to come:
- The 2024 United States election (outcome, that is)
- Ongoing unpredictable geopolitical tensions, and
- AI: Will transformation counteract diminishing ROI?
Maybe I’m just biased, but in retrospect, I think I nailed ‘em all as being particularly impactful. In the sections that follow, I’m going to elaborate on several of the above themes, as well as discuss other topics that didn’t make my year-ago forecast but ended up being particularly notable (IMHO, of course).
Tariffs, constrained shipments, and government investments
A significant portion of the initial “2024 United States election outcome” section in my year-back look-ahead piece was devoted to the likely potential for rapidly-announced significant tariffs by the new U.S. administration against various other countries, both import- and export-based in nature, and both “blanket” and product-specific, as well as for predictable reactive tariffs and shipment constraints by those other countries in response.
And indeed this all came to pass, most notably with the “Liberation Day” Executive Order-packaged suite of import duties issued on April 2, 2025, many of which were subsequently amended (multiple times in a number of cases) in the subsequent months in response to other countries’ tit-for-tat reactions, trade agreements, and other détente cooling-off measures, and the like.
My point in bringing this all up, echoing what I wrote a year back (as well as both the month and the year before that), is not to be political. As I’ve written several times before:
I have not (and will not) reveal personal opinions on any of this.
and I will again “stay the course” this time. Whether or not tariffs are wise or, for that matter, were even legally issued as-is are questions for the Supreme Court (near term) and the voters (eventually) to decide. So then why do I mention it at all? Another requote:
Americans are accused of inappropriately acting as if their country and its citizens are the “center of the world”. That said, the United States’ policies, economy, events, and trends inarguably do notably affect those of its allies, foes and other countries and entities, as well as the world at large, which is why I’m including this particular entry in my list.
This time, I’m going to focus on a couple of different angles on the topic. Maybe your company sells its products and/or services only within the country in which it’s headquartered. Or maybe, on the opposite end of the spectrum, it’s a multinational corporation with divisions scattered around the world. Or any point in between these spectrum extremes.
Regardless (and regardless too of whether or not it’s a U.S.-headquartered company), both the tariff and shipment-restriction policies of the U.S. and other countries will undoubtedly and notably affect your business strategies.
Unfortunately, though, while such tariff and restriction policies can be issued, amended, and rescinded “on a dime”, your company’s strategies inherently can’t be even close to as nimble, no matter how you aspire to both proactively and reactively structure your organization and its associated supply chains.
As I write these words I’m reminded, for example, of a segment I saw in a PBS NewsHour episode last weekend that discussed (among other things) Christmas goods suppliers’ financial results impacts of tariffs, along with the just-in-case speculative stockpiling they began doing a year ago in preparation (conceptually echoing my own “Chi-Fi” pre-tariff purchases at the beginning of 2025):
The other angle on the issue that I’d like to highlight involves the increasingly prevalent direct government involvement in companies’ financial fortunes.
Back in August, for example, just two weeks after initially demanding that Intel’s new CEO resign due to the perception of improper conflicts involving Chinese companies, the Trump administration announced that it was instead converting prior approved CHIPS Act funding for Intel into stock purchases, effectively transforming the U.S. into a ~10% Intel shareholder.
More recently, NVIDIA was once again approved to ship its prior-generation H200 AI accelerators into China…in exchange for the U.S. getting a 25% share of the resultant sales revenue, and following up on broader 15%-revenue-share agreements made by both AMD and NVIDIA back in August in exchange for securing China-export licenses.
And President Trump has already publicly stated that such equity and revenue-sharing arrangements, potentially broadening to also include other U.S. companies, will increasingly be the norm versus the exception in the future. Again, wise or not? I’ll keep my own opinions to myself and rely on time to answer that one. For now, I’ll just say…different.
Robotaxis
Waymo is on a roll. The Google-sibling Alphabet subsidiary now blankets not only San Francisco, California (where its usage by customers is increasingly the norm versus a novelty exception) but large chunks of the broader Silicon Valley region, now including freeways and airports.
It’s also currently offering full service in Los Angeles, Phoenix (AZ), and Austin (TX) as I write these words in late December 2025, with active testing underway in roughly a dozen more U.S. municipalities, plus Japan and the UK, and with already-announced near-term service plans in around a dozen more. As Wikipedia notes:
As of November 2025, Waymo has 2,500 robotaxis in service. As of December 2025, Waymo is offering 450,000 paid rides per week. By the end of 2026, Waymo aims towards increasing this to 1 million taxi rides a week and are laying the groundwork to expand to over 20 cities, including London and Tokyo, up from the current six.
And this is key: these are fully autonomous vehicles, with no human operators inside (albeit still with remote human monitors who can, as needed, take over manual control):

Problem-free? Not exactly. Just in the few weeks prior to my writing these words, several animals have been hit, a Waymo car has wandered into an active police-presence scene, and they more generally haven’t seemingly figured out yet how to appropriately respond to school buses signaling they’re in the process of actively picking up and/or dropping off passengers.
So not perfect: those are the absolute statistics. But what about relative metrics?
Again and again, in data published both by Waymo (therefore understandably suspect) and independent observers and agencies, autonomous vehicles are seen as notably safer, both for occupants and the environment around them, than those piloted by humans…and the disparity is only growing in self-driving vehicles’ favor over time. And in China, for example, the robotaxi programs are, if anything, even more aggressive from both testing and active deployment standpoints.
To that last point, I’ll conclude this section with another note on this topic. In fairness, I feel compelled to give Tesla rare but justified kudos for finally kicking off the rollout of its own robotaxi service mid-year in Austin, after multiple yearly iterations of promises followed by delays.
Just a few days ago, as I write this, in fact, the company began testing without human monitors in the front seats (not that they were effective anyway, in at least one instance).
Agentic AI
In the subhead for my late-May Microsoft Build 2025 conference coverage, I sarcastically noted:
What is “agentic AI”? This engineer says: “I dunno, either.”
Snark aside, I truthfully already had at least some idea of what the “agentic web”, noted in the body text of that same writeup as an example of the trendy lingo that our industry is prone to exuberantly (albeit only impermanently) spew, meant. And I’ve certainly learned much more about it in the intervening months. Here’s what Wikipedia says about AI agents in its topic intro:
In the context of generative artificial intelligence, AI agents (also referred to as compound AI systems or agentic AI) are a class of intelligent agents distinguished by their ability to operate autonomously in complex environments. Agentic AI tools prioritize decision-making over content creation and do not require human prompts or continuous oversight.
And what about the aforementioned broader category of intelligent agents, of which AI agents are a subset? Glad you asked:
In artificial intelligence, an intelligent agent is an entity that perceives its environment, takes actions autonomously to achieve goals, and may improve its performance through machine learning or by acquiring knowledge. AI textbooks define artificial intelligence as the “study and design of intelligent agents,” emphasizing that goal-directed behavior is central to intelligence. A specialized subset of intelligent agents, agentic AI (also known as an AI agent or simply agent), expands this concept by proactively pursuing goals, making decisions, and taking actions over extended periods.
A recent post on Google’s Cloud Blog included, I thought, a concise summary of the aspiration:
“Agentic workflows” represent the next logical step in AI, where models don’t just respond to a single prompt but execute complex, multi-step tasks. An AI agent might be asked to “plan a trip to Paris,” requiring it to perform dozens of interconnected operations: browsing for flights, checking hotel availability, comparing reviews, and mapping locations. Each of these steps is an inference operation, creating a cascade of requests that must be orchestrated across different systems.
Key to the “interconnected operations” that are “orchestrated across different systems” is MCP, the open-source Model Context Protocol, which I highlighted in my late-May coverage. Originally created by two developers at Anthropic and subsequently announced by the company in late 2024, it’s now regularly referred to as “USB-C for AI” and has been broadly embraced and adopted by numerous organizations and their technologies and products.
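For the curious, MCP messages are plain JSON-RPC 2.0 under the hood; the general shape of a tool invocation looks roughly like the sketch below. This shows the wire format only, and the "search_flights" tool and its arguments are invented for illustration.

import json

# The general shape of an MCP "tools/call" request (JSON-RPC 2.0).
# "search_flights" and its arguments are hypothetical examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",  # the MCP method for invoking a server-side tool
    "params": {
        "name": "search_flights",  # hypothetical tool name
        "arguments": {"origin": "SFO", "destination": "CDG"},
    },
}
print(json.dumps(request, indent=2))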
Long-term trend aside, my decision to include agentic AI in my year-end list was notably influenced by the fact that agents (specifically) and AI chatbots (more generally) are already being widely implemented by developers as well as, notably, adopted by the masses. OpenAI recently added an AI holiday shopping research feature to its ChatGPT chatbot, for example, hot on the heels of competitor Google’s own encouragement to “Let AI do the hard parts of your holiday shopping”. And what of Amazon’s own Rufus AI service? Here’s TechCrunch’s beginning-of-December take on Amazon’s just-announced results:
On Black Friday, Amazon sessions that resulted in a sale were up 100% in the U.S. when the AI chatbot Rufus was used. They only increased by 20% when Rufus wasn’t used.
Trust a hallucination- and bias-prone deep learning model to pick out presents for myself and others? Not me. But I’m guessing that both to some degree now, and increasingly in the future, I’ll be in the minority.
Humanoid Robots
By now, I’m sure that many of you have already auditioned at least one (and if you’re like me, countless examples) of the entertaining and awe-inspiring videos published by Boston Dynamics over the years (and by the way, if you’ve ever wondered why the company was subsequently acquired by Hyundai, this excellent recent IEEE Spectrum coverage of the company’s increasingly robotics-dominated vehicle manufacturing plant in Georgia is a highly recommended read). While early showcased examples such as Spot were, as its name reflects, reminiscent of dogs and other animals (assuming they had structural relevance to anything at all, that is…hold that thought), the company’s newer Atlas, along with examples from a growing list of other companies, is distinctly humanoid-reminiscent. Quoting from Wikipedia:
A humanoid robot is a robot resembling the human body in shape. The design may be for functional purposes, such as interacting with human tools and environments and working alongside humans, for experimental purposes, such as the study of bipedal locomotion, or for other purposes. In general, humanoid robots have a torso, a head, two arms, and two legs, though some humanoid robots may replicate only part of the body. Androids are humanoid robots built to more closely resemble the human physique. (The term Gynoid is sometimes used for those that resemble women.)
As Wikipedia notes, part of the motivation for this trend is the fact that the modern world has been constructed with the human body in mind, and it’s therefore more straightforward from a robotics-inclusion standpoint to create automatons that mimic their human creators (and forebears?) than to adapt the environment to more optimally suit other robot form factors. Plus, I’m sure that at least some developers are rationalizing that robots that resemble humans are more likely to be accepted alongside humans, both in the workplace and in the home.
Still, I wonder how much sub-optimization of the overall robotic implementation potential is occurring in pursuit of this seemingly single-minded human-mimicking aspiration. I wonder, too, how much influence early robot examples in entertainment, such as Rosie (or Rosey) from The Jetsons or Gort from The Day the Earth Stood Still, have had in shaping the early thinking of children destined to be engineers when they grew up. And from a practical financial standpoint, given the large number of humanoid robot examples coming from China alone, I can’t help but wonder just how many “androids” (the robot, not the operating system) the world really needs, and how massive the looming corporate weeding-out may be as a result.
Unforeseen acquisitions
This last one might not have been seismically impactful from a broad industry standpoint…or then again, it may end up being so, both for Qualcomm and its competitors. Regardless, I’m including it because it personally rocked me back on my heels when I heard the news. In early October, Qualcomm announced its intention to acquire Arduino. For those of you not already familiar with Arduino, here’s Wikipedia’s intro:
Arduino is an Italian open-source hardware and software company…that designs and manufactures single-board microcontrollers and microcontroller kits for building digital devices. Its hardware products are licensed under a CC BY-SA license, while the software is licensed under the GNU Lesser General Public License (LGPL) or the GNU General Public License (GPL), permitting the manufacture of Arduino boards and software distribution by anyone.
First fruits of the merger are the UNO Q, a “next-generation single board computer featuring a ‘dual brain’ architecture—a Linux Debian-capable microprocessor and a real-time microcontroller—to bridge high-performance computing with real-time control” and “powered by the Qualcomm Dragonwing QRB2210 processor running a full Linux environment”, and the Arduino App Lab, an “integrated development environment built to unify the Arduino development journey across Real-time OS, Linux, Python and AI flows.”
So, what’s the background to my surprise? This excerpt from IEEE Spectrum’s as-usual thorough coverage sums it up nicely: “Even so, the acquisition seems odd at first glance. Qualcomm sells expensive, high-performance SoC designs meant for flagship smartphones and PCs. Arduino sells microcontroller boards that often cost less than a large cheese pizza.”
Not to mention that Qualcomm’s historical customer base is comparatively small in number, large in per-customer volume, and rapid in each customer’s generational-uptake silicon churn, the exact opposite of Arduino’s typical customer profile (or that of Raspberry Pi, for that matter, who’s undoubtedly also “curious” about the acquisition and its outcome).
Auld Lang Syne (again)
I’m writing this in late December 2025. You’ll presumably be reading it sometime in January 2026, given that I’m targeting New Year’s Day publication for it. I’ll split the difference and, as I did last year, wrap up by first wishing you all a Happy New Year!
As usual, I originally planned to cover a number of additional topics in this piece. But (also) as usual, I ended up with more things that I wanted to write about than I had a reasonable wordcount budget to do so. Having just passed through 2,700 words, I’m going to restrain myself and wrap up, saving the additional topics (as well as updates on the ones I’ve explored here) for dedicated blog posts to come in the coming year(s). Let me know your thoughts on my top-topic selections, as well as what your list would have looked like, in the comments!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- 2024: A year’s worth of interconnected themes galore
- 2023: Is it just me, or was this year especially crazy?
- A tech look back at 2022: We can’t go back (and why would we want to?)
- A 2021 technology retrospective: Strange days indeed
- 10 consumer technology breakthroughs from 2019
- 2026 Look Ahead
The post 2025: A year in which chaos seemingly thrived appeared first on EDN.
Cleaned my workspace. Had to take a picture because it won't last!
VPI scholarship recipient Anna Bridnia: "I believe everything will work out!"
Young people are the most active part of our society, and students are its intellectual potential. Student energy is a source of new ideas, innovation, and change in society. It is on their shoulders that the responsibility for the country's recovery and future rests. Traditionally, the best of them are recognized with named prizes and scholarships.
From star to star: the mysteries and pageantry of the Christmas and New Year holidays
On December 25, the mysteries and pageantry of the Christmas and New Year holidays begin, opening with the greatest feast of the Christian world after Easter: the Nativity of Christ. Traditionally, it is celebrated not only by believers of various denominations but also by those indifferent to religion.
Last post of the year! What’s on the bench today?
Analog video tape to digital transfer! So far it has been a royal pain in my ass lately: VHS deck issues, capture issues, software for said capture device only supporting PowerPC versions which need Rosetta in Mac OS X (which Apple abandoned years ago), all the way to my greatest pet peeve…the VHS-C adapter. I currently have 3 of them, with another I ordered on eBay. The first adapter I destroyed because it made me mad. The second adapter jams up my deck, causing an E-5 error code when I fast forward. The third adapter, brand new from Amazon, causes error code E-5 on all deck functions (play, fast forward, rewind, etc.), so it's basically unusable. As much as I want to destroy that one, I'm going to just return it to Amazon. Instead, when I receive the one I ordered on eBay and it works as expected, I will then destroy the second adapter. Or I get to have 2 adapters I can destroy if the eBay one doesn't work. I don't have VCRs galore to try this on other decks; besides, I've ruled out the one I'm using now as the issue. It plays VHS tapes just fine. Hopefully the eBay adapter will work out. Since this project deals with clients' three decades' worth of precious memories on VHS-C, Hi8, and MiniDV, I can't afford to damage anything. I decided to revisit this one week into the new year to come back to it with good energy and vibes. For now, I'm just pissed off at VHS-C adapters lol
The FBT team won the Global Greenchem Hackathon 2025
As Kyivskyi Politekhnik reported earlier, to conclude the XIV Festival of Innovative Projects "Sikorsky Challenge 2025", in accordance with its program and within the United Nations Industrial Development Organization project "Global Innovation and Networking Programme in Green Chemistry" (GreenChem), the Global Greenchem Hackathon 2025 was held from October 31 to November 2.
AXT closes public offering of stock and full exercise of underwriters’ option
SCR topology transmogrifies into BJT two-wire precision current source

Recently, frequent Design Idea (DI) author Christopher Paul showcased an innovative and high performance true-two-wire current source using a depletion mode MOSFET as the pass device in “A precision, voltage-compliant current source.”
In subsequent comments, the question arose whether similar performance is possible using a bipolar junction transistor instead of Christopher’s FET in a similar (looking) topology.
Wow the engineering world with your unique design: Design Ideas Submission Guide
It posed an intriguing design problem for which I offer here a possible (if implausible) solution. Bizarrely, it’s (roughly) based on the classic discrete transistor model of an SCR, shown in Figure 1.
Figure 1 SCR positive feedback loop suggests an unlikely basis for a BJT current source.
Figure 2 shows the nonlinear positive feedback loop of the thyristor morphing into a linear current source.

Figure 2 The Q1 and Q3 current mirror, shunt regulator Z1, and pass BJT Q2 comprise the precision 2-wire current source. The source current is 1.05 * 1.24/R1, or 1.30/R1. * = 0.1% precision resistor
Shunt regulator Z1 and pass transistor Q2 form a very familiar precision current source circuit. In fact, it looks a lot like the one Christopher Paul uses in his MOSFET-based design. Negative feedback from current sense resistor R1 makes shunt regulator Z1 force Q2 to maintain a constant emitter current of 1.24 V/R1.
Also, similar (looking) to Christopher Paul’s topology, bias for Z1 and Q2 is provided by a PNP current mirror. However, unlike the symmetrical mirror in Christopher Paul’s design, this one is made asymmetrical to accommodate Z1’s max recommended current rating.
Significant emitter degeneration (~2.5 volts) is employed to encourage accurate current ratios and keep positive feedback loop gain manageable so Z1 can ride herd on it.
Startup resistor R3 is needed because the bias for the transistors and regulator is provided by the SCR-ish regenerative positive feedback loop. R3 provides a trickle of current, a few hundred nanoamps, sufficient to jumpstart (trigger?) the loop when power is first applied.
To program the source for a chosen output current (Io), choose the resistors as follows (a small code version of these equations follows the list).
If Io > 5 mA, then:
R1 = 1.30/Io
R2 = 49.9/Io
R4 = 2.40/Io
If Io < 5 mA, then:
R1 = 1.55/Io
R2 = 8/Io
R4 = 2/Io
Minimum accurate Io = 500 µA. Maximum = 200 mA.
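For convenience, the programming equations above are easy to wrap in a few lines of Python. This is a minimal sketch: the function name and the range check are my additions, and the article does not say which resistor set applies at exactly 5 mA, so the low-current set is used there.

def program_source(io):
    # Resistor values in ohms for Figure 2, per the article's equations.
    # io is the desired output current in amperes.
    if not 500e-6 <= io <= 200e-3:
        raise ValueError("accurate range is 500 uA to 200 mA")
    if io > 5e-3:
        return {"R1": 1.30 / io, "R2": 49.9 / io, "R4": 2.40 / io}
    # At exactly 5 mA the article gives no rule; this branch is used.
    return {"R1": 1.55 / io, "R2": 8.0 / io, "R4": 2.0 / io}

print(program_source(10e-3))  # -> R1 = 130 ohms, R2 = 4.99k, R4 = 240 ohms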
And for a finishing touch, frequent commentator Ashutosh points out that it’s good practice to protect loads against erroneous and possibly destructive fault currents. Figure 3 suggests a flexible and highly reliable insurance policy. Wire one of these gems in series with Figure 2 and fault current concerns will vanish.

Figure 3 Accurate, robust, fast acting, self-resetting, fault current limiter where Ilimit = 1.25/R1.
In closing, I leave it to you, the reader, to decide whether Figure 2’s resemblance to Christopher Paul’s design is merely superficial, truly meaningful, outright plagiaristic, or just weird.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- A precision, voltage-compliant current source
- Active current mirror
- A current mirror reduces Early effect
- A two-way mirror—current mirror that is
- A two-way Wilson current mirror
The post SCR topology transmogrifies into BJT two-wire precision current source appeared first on EDN.
Power Tips #148: A simple software method to increase the duty-cycle resolution in DPWM

Have you ever had a duty-cycle resolution issue in your digitally controlled power supply?
In a digital pulse width modulation (DPWM)-controlled power supply, the duty-cycle adjustment is not continuous, but has a minimum step. This is one significant difference between digital control and analog control.
In order to really understand the resolution issue, let’s look at the exaggerated DPWM waveform in Figure 1.
Figure 1 An exaggerated DPWM waveform where the DPWM is acting as the output by comparing its clock counter with a preset comparison value. Source: Texas Instruments
The DPWM generates its output by comparing a clock counter with a preset comparison value; when the counter equals the comparison value, it generates a trigger signal and flips the PWM output. Adjusting the comparison value moves the flipping edge earlier or later. Because the counter value can only be an integer, the minimum adjustment step of the duty cycle is expressed by Equation 1:

ΔDmin = 1 / Period = fSW / fPWM_clock (Equation 1)
The duty-cycle resolution of DPWM brings a disturbance to power-supply control. If the duty-cycle resolution is too low, it may bring limit cycle oscillations (LCOs) to the control loop and cause output voltage ripple. This problem is more serious in high-switching-frequency systems.
Let’s take a 48-V to 5-V synchronous buck converter as an example, as shown in Figure 2.

Figure 2 A 48-V to 5-V synchronous buck converter example. Source: Texas Instruments
Assume a 500-kHz switching frequency with a 120-MHz PWM clock. Recalling Equation 1, the period is 240 counts, so the minimum duty-cycle step is 1/240, about 0.42%. With a 48-V input, a single minimum duty-cycle adjustment changes the output voltage by 48 V × (1/240) = 0.2 V, which means 4% voltage ripple at the 5-V output, as shown in Figure 3. This is obviously unacceptable.
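The arithmetic is easy to reproduce (a quick sketch; the variable names are mine):

f_clk = 120e6   # PWM time-base clock, Hz
f_sw = 500e3    # switching frequency, Hz
v_in, v_out = 48.0, 5.0

period = f_clk / f_sw   # 240 counts per switching cycle
d_step = 1 / period     # minimum duty-cycle step, ~0.42%
v_step = v_in * d_step  # ~0.2 V per step for a buck (Vout = D * Vin)
print(period, d_step, v_step, v_step / v_out)  # 240.0, ~0.00417, 0.2, 0.04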

Figure 3 A low-resolution duty cycle causes output voltage ripple. Source: Texas Instruments
Increase duty-cycle resolution
The most direct way to resolve this duty-cycle resolution issue is to use high-resolution PWM (HRPWM). HRPWM is a powerful peripheral that can reduce the adjustment step significantly—to the 10-ps level—but it is typically only available in high-performance MCUs, which may be too powerful or expensive for the design.
Is there a simple method to resolve the duty-cycle resolution issue without extra cost? Can you increase the duty-cycle resolution by using software, or an algorithm?
Looking again at the DPWM waveform, the duty cycle is generated by two variables, the comparison value and the period value, which Equation 2 calculates as:

D = Comparison / Period (Equation 2)
The common method of adjusting the duty cycle is to change the comparison value while keeping the period value constant; in other words, the buck converter operates at a fixed switching frequency. What happens if you adjust the duty cycle by varying the switching frequency instead? A small variation of the switching frequency is usually not harmful but helpful to power converters: it reduces electromagnetic interference and helps to pass EMI regulations.
If you keep the comparison value unchanged but adjust the period value by one count, how much does the duty cycle vary? Is it larger or smaller than adjusting the comparison value? Equation 3 gives the answer:

ΔDperiod = Comparison/Period − Comparison/(Period + 1) = D/(Period + 1) (Equation 3)

Keep in mind that the duty-cycle variation from adjusting the comparison value is ΔDcomparison = 1/Period. Because D is always smaller than 1, and Period + 1 is nearly equal to Period, ΔDperiod ≈ D/Period will always be smaller than ΔDcomparison = 1/Period.
This means that adjusting the period value produces a smaller duty-cycle variation than adjusting the comparison value; the improvement is more significant when the duty cycle is much smaller than 1 (see the numeric check after Figure 4). If you mark the achievable duty-cycle values on a numerical axis, you can clearly see that increasing the period value with a fixed comparison value reduces the duty cycle in smaller steps, as shown in Figure 4.

Figure 4 Duty-cycle values when varying both period and comparison. Source: Texas Instruments
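A two-line check shows how much finer the period knob is for this converter, using the comparison value 25 and nominal period 240 from the example above:

P, C = 240, 25
d_comp_step = 1 / P                  # one comparison count: ~0.00417
d_period_step = C / P - C / (P + 1)  # one period count: ~0.00043, ~10x finer
print(d_comp_step, d_period_step)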
Varying the frequency
Based on the analysis above, it is possible to achieve higher resolution by adjusting the period value. But in a power converter, the switching frequency generally can't vary much; otherwise, the magnetic component design becomes very challenging. So the next question is: how do you generate the expected duty cycle from the combination of these two variables?
The method is to first decide the comparison value using a preset period value, and then fine-tune the period value to reach the closest duty cycle. The fine-tuning can either increase the period value with the next-larger comparison value or decrease the period value with the next-smaller comparison value. Figure 5 shows the software flowchart for increasing the period value with the larger comparison value; the decreasing method is similar, with the calculation direction reversed. A minimal code sketch of this search follows Figure 5.

Figure 5 Software flowchart for adjusting both the comparison and period values simultaneously. Source: Texas Instruments
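To make the flowchart concrete, here is a minimal Python sketch of the same search: derive candidate comparison values from the nominal period, then fine-tune the period a few counts to minimize the duty-cycle error. The function name, the two-candidate choice, and the ±4-count tuning band are illustrative assumptions, not TI firmware.

def dpwm_settings(d_target, period_nom, max_delta=4):
    # Candidate comparison values bracketing the ideal (fractional) one.
    c0 = int(d_target * period_nom)
    best = None
    for comp in (c0, c0 + 1):
        # Fine-tune the period a few counts either side of nominal.
        for period in range(period_nom - max_delta, period_nom + max_delta + 1):
            err = abs(comp / period - d_target)
            if best is None or err < best[0]:
                best = (err, comp, period)
    return best[1], best[2]

# Example: 240-count nominal period (120 MHz / 500 kHz), 10.5% target duty.
comp, period = dpwm_settings(0.105, 240)
print(comp, period, comp / period)  # 25 238 0.10504..., vs. 0.10417 at fixed period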
Finally, note that this software method is in principle independent of HRPWM hardware technology, such as a micro-edge positioner, so it is equally applicable to a digital control loop that includes HRPWM peripherals.
Improvement results
Let's return to the example of the 48-V to 5-V synchronous buck converter in Figure 2. After adopting this software method, the effective duty-cycle step shrinks, and the output voltage ripple drops tremendously to less than 40 mV, as shown in Figure 6. This is acceptable for most electrical appliances.

Figure 6 Improved output voltage ripple using the software method. Source: Texas Instruments
This method solves the duty-cycle resolution problem without HRPWM; modestly increasing the duty-cycle resolution with a software algorithm alone can make your product more competitive by enabling the use of a low-end MCU.
Furthermore, this method is a purely mathematical algorithm; in other words, it is not limited to low-resolution PWM but also works with HRPWM, so it can be used in extremely demanding applications to push the duty-cycle resolution of HRPWM even further.
Desheng Guo is a system engineer at Texas Instruments, where he is responsible for developing power solutions as part of the power delivery industrial segment. He created multiple reference designs and is familiar with AC-DC power supply, digital control, and GaN products. He received a master’s degree from the Harbin Institute of Technology in power electronics in 2007, and previously worked for Huawei Technology and Delta Electronics.
Related Content
- Digital PWM Motor Control
- Power Controller with Six DPWM Outputs
- Bus-Compatible DPWM Controller
- High efficiency resonant mode implementation using digital control
The post Power Tips #148: A simple software method to increase the duty-cycle resolution in DPWM appeared first on EDN.
New year new bench
Got a table saw recently so I went a little overboard with the French cleats. I also made a scope cart from the remains of my last desk. Fume extraction is a work in progress and I think I need a bigger flare on the hood. Next steps are better parts storage and filling out the relay rack with test gear. If anyone has any test lead/cable storage suggestions, I'd love to hear them.
Magnetometers: Sensing the invisible fields

From ancient compasses to modern smartphones, magnetometers have quietly shaped how we sense and navigate the world. Let us explore the fundamentals behind these field-detecting devices.
Magnetic fields are all around us, yet invisible to the eye. Magnetometers turn those hidden forces into measurable signals, guiding everything from navigation systems to consumer electronics. Well, let us dive into the principles that allow a simple sensor to translate invisible forces into actionable data.
A magnetometer is a device that measures magnetism: the direction, strength, or relative change of a magnetic field at a given location. Measuring the magnetization of a magnetic material, such as a ferromagnet, is one example. A compass is a simple magnetometer: it detects the direction of the ambient magnetic field, in this case the Earth’s.
The Earth’s magnetic field can be approximated as a dipole, offset by about 440 kilometers from the planet’s center and inclined roughly 11 degrees to its rotational axis. At the surface, its strength averages around 0.4 to 0.5 gauss, about 40–50 microtesla, which is quite small compared to laboratory magnetic fields.
Only a few types of magnetometers are sensitive enough to detect such weak fields, including mechanical compasses, fluxgate sensors, Hall-effect devices, magnetoelastic instruments, and magnetoresistive sensors.
One of the landmark magnetoresistive sensors from the 1990s was KMZ51 from Philips. Released in 1996, it offered high sensitivity by exploiting the magnetoresistive effect of thin-film permalloy. At its core, the device integrated a Wheatstone bridge structure, which converted changes in magnetic resistance into measurable signals.
To enhance stability and usability, Philips added built-in compensation and set/reset coils: the compensation coil provided feedback to counter drift, while the set/reset coil re-aligned the sensor’s magnetic domains to maintain accuracy. These design features made KMZ51 particularly effective for electronic compasses, current sensing, and detecting the Earth’s weak magnetic field—applications where precision and reliability were essential. KMZ51 remains a classic example of how clever sensor design can make the invisible measurable.

Figure 1 Simplified circuit diagram of KMZ51 illustrates its Wheatstone bridge and integrated compensation and set/reset coils. Source: Philips
On a related side note, deflection, compass, and fluxgate magnetometers represent three distinct stages in the evolution of magnetic sensing. The deflection magnetometer, essentially a large compass box with a pivoted needle, measures the Earth’s horizontal field by observing how an external magnet deflects the needle under the tangent law. The familiar compass magnetometer, in its simplest form, aligns a magnetic needle with the ambient field to indicate direction, a principle that has been carried forward into modern electronic compasses.
Fluxgate magnetometers, by contrast, employ a soft magnetic core driven into alternating saturation; the resulting signal in a sense coil reveals both the magnitude and direction of the external field with far greater sensitivity. Together, these instruments illustrate the progression from basic mechanical deflection to precise electronic detection, each expanding the engineer’s ability to measure and interpret the invisible lines of magnetism.
Tangent law and Tan B position in compass deflection magnetometers
In the Tan B position, the bar magnet is oriented so that the magnetic field along its equatorial line is perpendicular to the Earth’s horizontal magnetic field component. Under this arrangement, the suspended magnetic needle deflects through an angle β, and the tangent law applies:
tan β = B / BH
- B is the magnetic field produced at the location of the needle by the bar magnet.
- BH is the horizontal component of the Earth’s magnetic field, which tends to align the needle along the geographic north–south direction.
This relationship shows that the deflection angle β depends on the ratio of the magnet’s equatorial field to the Earth’s horizontal field. This simple geometric relationship makes the Tan B position a fundamental method for determining unknown magnetic field strengths, bridging classroom demonstrations with practical magnetic measurements.
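As a quick worked example (a minimal sketch; the 30-degree deflection and 0.4-gauss horizontal field are illustrative numbers, not values from the text):

import math

def field_from_deflection(beta_deg, bh_gauss):
    # Tangent law: B = BH * tan(beta)
    return bh_gauss * math.tan(math.radians(beta_deg))

print(field_from_deflection(30.0, 0.4))  # ~0.23 gauss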

Figure 2 The image illustrates magnetometer architectures—from pivoted needle to fluxgate core—across design generations. Source: Author
Quick take: Magnetometers on the workbench
Magnetometers range from fluxgate arrays orbiting in satellites to quantum sensors probing in research labs—but this session is just a quick take. The spotlight here leans toward today’s DIY enthusiasts and benchtop builders, where Hall-effect sensors and MEMS modules serve as practical entry points. Think of it as a wake-up call, sprinkled with a few lively detours, all pointing toward the components that make magnetometers accessible for everyday projects.
Hall-effect sensors remain the most approachable entry point, translating magnetic fields into voltage shifts that DIY-ers can easily measure with a scope or microcontroller. MEMS magnetometers push things further, offering compact three-axis sensing in modules that drop straight into maker projects or wearables.
These devices not only simplify experimentation but also highlight how magnetic sensing has become democratized: no longer confined to aerospace or geophysics labs, it is available in breakout boards and low-cost modules.
For the benchtop builder, this means magnetometers can be explored alongside other familiar sensors, integrated into Arduino or Raspberry Pi projects, or used to probe the invisible magnetic environment around everyday circuits. In short, the practical face of magnetometers today is accessible, modular, and ready to be wired into experiments without demanding a physics lab.
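To see how approachable the Hall-effect entry point really is, the entire "voltage shift to field" conversion fits in a few lines. This is a sketch: the 2.5-V quiescent output and 1.3-mV-per-gauss sensitivity are typical ratiometric-sensor datasheet values, not a specific part's specifications.

def hall_to_gauss(v_out, v_q=2.5, sens=0.0013):
    # Ratiometric analog Hall sensor: the output swings around the
    # quiescent voltage v_q by sens volts per gauss of applied field.
    return (v_out - v_q) / sens

print(hall_to_gauss(2.56))  # ~46 gauss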
Getting started with magnetometers is straightforward, thanks to readily available pre-wired modules. Popular options often incorporate ICs such as the HMC5883L, LIS3MDL, and TLV493D, among others.
Although not for the faint-hearted, it’s indeed possible to build fluxgate magnetometers from scratch. The process, however, demands precision winding of coils, careful core selection, stable drive electronics, and meticulous calibration—all of which can be daunting for DIY enthusiasts. These difficulties often make home-built designs prone to noise, drift, and inconsistent sensitivity.
For those who want reliable results without the engineering overhead, ready-made fluxgate magnetometer modules are a practical choice, offering calibrated performance and ease of integration straight out of the box. A good example is the FG-3+ fluxgate magnetic field sensor from FG Sensors, which provides compact and sensitive measurement capabilities for hobbyist and applied projects.
FG-3+ is a high-sensitivity fluxgate magnetic field sensor capable of measuring Earth’s magnetic field with up to 1,000-fold greater precision than conventional integrated IC solutions. Its output is a stable 5-volt rectangular pulse, with the pulse period directly proportional to the magnetic field strength.

Figure 3 The FG-3+ fluxgate magnetic field sensor integrates seamlessly into both experimental and applied projects. Source: FG Sensors
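Since the FG-3+ reports field as a pulse period, a practical first step is a two-point linear calibration against known fields. The sketch below assumes the period-to-field relationship is linear over the range of interest; the calibration numbers are placeholders, not FG Sensors specifications.

def make_period_to_field(p1_us, b1_ut, p2_us, b2_ut):
    # Linear interpolation through two measured (period, field) points.
    slope = (b2_ut - b1_ut) / (p2_us - p1_us)
    return lambda period_us: b1_ut + slope * (period_us - p1_us)

# Placeholder calibration points: 50 us at 0 uT, 60 us at 50 uT
to_field = make_period_to_field(50.0, 0.0, 60.0, 50.0)
print(to_field(55.0))  # 25.0 uT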
Closing thoughts
This marks the end of this quick-take post on magnetometers, presented in a deliberately unconventional style. We have only scratched the surface; the field is rich with subtleties and deflections that deserve deeper exploration. If this overview piqued your interest, I encourage you to experiment with sensor modules, study fluxgate designs, and share your findings with the engineering community.
And while magnetometers probably will not help you track UFOs, at least not yet, they remain a fascinating gateway into sensing the invisible forces all around us. The more we build, test, and exchange ideas, the stronger our collective understanding becomes. Onward to the next signal.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- Tiny magnetometer ups sensitivity
- Fundamentals of digital magnetic sensors
- Differential Magnetic Current and Position Sensing
- Magnetometer basics for mobile phone applications
- Deliberate diamond defect yields ultrasensitive magnetometer
The post Magnetometers: Sensing the invisible fields appeared first on EDN.
What KPI students discovered at the National Museum of Literature of Ukraine
In a letter to the editors, Liliia Dynikova, Candidate of Cultural Studies and associate professor at the Department of Ukrainian Language, Literature and Culture of Igor Sikorsky Kyiv Polytechnic Institute, shared the impressions of students from one of the study groups of the Faculty of Informatics and Computer Engineering after visiting the National Museum of Literature of Ukraine (NMLU).
Space Forge generates plasma aboard ForgeStar-1 satellite
Experiment: Effect of light on forward voltage drop of a Ge diode like 1N60
Made a 7 segment display!
NUBURU provides year-end update regarding strategic milestones
My first project: an IR-sensitive circuit
In memory of Dmytro Volodymyrovych Zvezda
It is with sorrow that we share the news of the death in the war of Dmytro Volodymyrovych Zvezda (25.01.2005 – 20.11.2025).