Feed aggregator
How A Real-World Problem Turned Into Research Impact at IIIT-H
The idea for a low-cost UPS monitoring system at IIIT-H did not begin in a laboratory or a funding proposal. It began with a familiar frustration – raised by Prakash Nayak, a campus IT staffer who was tired of equipment failures with no clear explanation.
Power outages were happening. Servers were restarting. Despite the installation of UPS units everywhere, no one could say with certainty what the UPS systems were actually doing when the lights went out. That real-world problem became the starting point for a research project that has now resulted in a ₹2,000 IoT-based device capable of tracking UPS behaviour during outages with near-second precision.
The research was documented in a paper titled “Low-cost IoT-based Downtime Detection for UPS and Behaviour Analysis,” authored by Sannidhya Gupta, Prakash Nayak, and Prof. Sachin Chaudhari. It also received the Best Paper award at the Workshop on AI of Things at the 18th International Conference on COMmunication Systems and NETworkS (COMSNETS-2026), recently held in Bangalore.
When monitoring costs more than the problem
“Frequent power outages in developing regions cause equipment damage, operational downtime, and data loss,” says Sannidhya Gupta, noting that while UPS systems are meant to provide protection, “affordable options for monitoring their performance remain limited.” Commercial UPS monitoring tools – typically SNMP cards that collect and organise information about managed devices over IP networks – were an option, but an impractical one. According to the paper, “Commercial solutions are expensive, manufacturer-specific, and reliant on network infrastructure”. With prices exceeding ₹20,000 per unit, the campus IT team simply could not justify deploying them at scale. Worse, these tools often failed at the moment they were most needed. “These systems are unable to record data when the UPS itself loses power,” the authors point out, making post-outage diagnosis nearly impossible.
A device that watches, not interferes
Responding directly to the IT team’s request for something affordable and reliable, the team designed a non-intrusive current-monitoring device. Instead of tapping into UPS internals, it clamps onto the input and output lines, observing how current flows before, during, and after outages. “UPS input and output currents are sensed non-intrusively to detect outages, switchovers, and recovery behaviour,” the researchers explain. Additionally, the device is battery-backed, allowing it to keep recording even when both mains power and internet connectivity are lost.
From theory to campus corridors
To test the system, the team deployed it across four UPS installations on campus, including one unit that IT staff already suspected was malfunctioning. Over a month, the devices collected around 3.7 million data points and automatically detected 61 outage events. The data confirmed what the IT team had suspected but could never prove. “One UPS repeatedly showed no clear charging behaviour after outages,” reports Prakash, indicating a system that could briefly support loads but failed to properly recharge its batteries.

Smart algorithms, Simple assumptions
The backend analytics automatically labels each event into phases – normal operation, outage, stabilisation, and battery charging – without manual configuration. “All thresholds are expressed as fractions of a locally estimated baseline,” the authors note, adding that this allows the system to adapt to different installations automatically. The results were precise: no missed outages, no false alarms, and timing errors typically within three seconds.
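To make the approach concrete, here is a minimal Python sketch of baseline-relative phase labelling in the spirit the authors describe. The threshold fractions, window length, and function names are illustrative assumptions for this sketch, not values taken from the published paper.

```python
from statistics import median

def rolling_baseline(recent_currents, window=300):
    """Estimate the local baseline as the median of the most recent samples.

    Using a locally estimated baseline (rather than fixed amperage limits)
    is what lets the same logic adapt to UPS installations of different sizes.
    """
    return median(recent_currents[-window:])

def label_phase(input_i, output_i, baseline,
                outage_frac=0.1, charge_frac=1.2, stable_frac=0.9):
    """Classify one sample of UPS input/output current against the baseline.

    All thresholds are expressed as fractions of the baseline; the specific
    fractions here are placeholders, not the paper's calibrated values.
    """
    if input_i < outage_frac * baseline and output_i > outage_frac * baseline:
        return "outage"            # mains gone, UPS still carrying the load
    if input_i > charge_frac * baseline:
        return "battery charging"  # mains back, extra draw while recharging
    if input_i >= stable_frac * baseline:
        return "normal operation"
    return "stabilisation"         # mains returning, not yet back to baseline

# Example: baseline of ~5 A, mains lost, UPS still feeding the load
print(label_phase(0.2, 4.8, baseline=5.0))  # -> "outage"
```

A production version would likely add a small state machine on top of a per-sample classifier like this, so that phases follow a sensible temporal order (outage, then stabilisation, then charging) rather than being labelled sample by sample.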
Real-time monitoring, Ten times cheaper
A web-based dashboard now gives IT staff something they never had before: visibility. Instead of guessing whether a UPS is healthy, administrators can now see it. Plus, they have access to historical analysis of UPS behaviour. Built using off-the-shelf components, the device costs about ₹2,000 – roughly one-tenth the price of commercial monitoring cards. “Its affordability, power independence, and portability make it a practical option for cost-constrained environments,” concludes Sannidhya.
Research grounded in reality
What sets this work apart is not just the technology, but its origin. This was research born out of a real operational pain point, brought directly by the people responsible for keeping systems running. “It is important to note that IT staff, Mr. Prakash, is part of the research paper we have published. He is also part of the patent we have recently filed on this. This highlights the value of treating campus operations teams as co-creators of research problems rather than mere end users – a mindset that leads to more relevant and impactful outcomes,” states Prof. Chaudhari. In a landscape where academic research is often criticised for being disconnected from reality, this project offers a counterexample: researchers took note of a problem the moment it was identified and built something that changes how systems are understood and managed.
The post How A Real-World Problem Turned Into Research Impact at IIIT-H appeared first on ELE Times.
Microchip Expands PolarFire FPGA Smart Embedded Video Ecosystem providing enhanced video connectivity
The post Microchip Expands PolarFire FPGA Smart Embedded Video Ecosystem providing enhanced video connectivity appeared first on ELE Times.
eevBLAB 137 - Youtube AI Slop Creators Are SHAMELESS!
Windows 10: Support hasn’t yet ended after all, but Microsoft’s still a fickle-at-best friend

Bowing to user backlash, Microsoft eventually relented and implemented a one-year Windows 10 support-extension scheme. But (limited duration) lifelines are meaningless if they’re DOA.
Back in November, within my yearly “Holiday Shopping Guide for Engineers”, my first suggestion was that you buy, for you and yours, Windows 11-compatible (or alternative O/S-based) computers to replace existing Windows 10-based ones (specifically, ones that aren’t officially Windows 11-upgradable). Unsanctioned hacks to alternatively upgrade such devices to Windows 11 do exist, but echoing what I first wrote last June (where I experimented for myself, but only “for science”, mind you), I don’t recommend relying on them for long-term use, even assuming the hardware-hack attempt succeeds in the first place:
The bottom line: any particular system whose specifications aren’t fully encompassed by Microsoft’s Windows 11 requirements documentation is fair game for abrupt no-boot cutoff at any point in the future. At minimum, you’ll end up with a “stuck” system, incapable of being further upgraded to newer Windows 11 releases, therefore doomed to fall off the support list at some point in the future. And if you try to hack around the block, you’ll end up with a system that may no longer reliably function, if it even boots at all.
A mostly compatible computing stable
Fortunately, all of my Windows-based computers are Windows 11-compatible (and already upgraded, in fact), save for two small form factor systems, one (Foxconn’s nT-i2847, along with its companion optical drive), a dedicated-function Windows 7 Media Center server:

(mine are white, and no, the banana’s not normally a part of the stack):

and the other, an XCY X30, largely retired but still hanging around to run software that didn’t functionally survive the Windows 10-to-11 transition:
And as far as I can recall, all of the CPUs, memory DIMMs, SSDs, motherboards, GPUs and other PC building blocks still lying around here waiting to be assembled are Windows 11-compliant, too.
One key exception to the rule
My wife’s laptop, a Dell Inspiron 5570 originally acquired in late 2019, is a different matter:
Dell’s documentation initially indicated that the Inspiron 5570 was a valid Windows 11 upgrade candidate, but the company later backtracked after partner Microsoft’s CPU and TPM requirements grew ever stingier. Our secondary strategy was to delay its demise by a year by taking advantage of one of Microsoft’s Windows 10 Extended Security Updates (ESU) options. For consumers, there initially were two paths, both paid: spending $30 or redeeming 1,000 Microsoft Rewards points, with both ESU options covering up to 10 devices (presumably associated with a common Microsoft account). But in spite of my repeated launching of the Windows Update utility over a several-month span, it stubbornly refused to display the ESU enrollment section necessary to actualize my extension aspirations for the system:
My theory at the time was that although the system was registered under my wife’s personal Microsoft account, she’d also associated it with a Microsoft 365 for Business account for work email and such, and it was therefore getting caught by the more complicated corporate ESU license “net”. So, I bailed on the ESU aspiration and bought her a Dell 16 Plus as a replacement, instead:
That I’d done this (and, to be precise, seemingly had needed to) became an even more bitter already-swallowed pill when Microsoft subsequently added a third, free consumer ESU option, involving backup of PC settings in prep for the delayed Windows 11 migration still to come a year later:
Belated success, and a “tinfoil hat”-theorized root cause-and-effect
And then the final insult to injury arrived. At the beginning of October, a few weeks prior to the Windows 10 baseline end-of-support date, I again checked Windows Update on a lark…and lo and behold, the long-missing ESU section was finally there (and I then successfully activated it on the Inspiron 5570). Nothing had changed with the system, although I had done a settings backup a few weeks earlier in a then-fruitless attempt to coax the ESU to reactively appear. That said, come to think of it, we also had just activated the new system…were I a conspiracy theorist (which I’m not, but just sayin’), I might conclude that Microsoft had just been waiting to squeeze another Windows license fee out of us (a year earlier than otherwise necessary) first.
To that last point, and in closing, a reality check. At the end of the day, “all” we did was to a) buy a new system a year earlier than I otherwise likely would have done, and b) delay the inevitable transition to that new system by a year. And given how DRAM and SSD prices are trending, delaying the purchase by a year might have resulted in an increased cash outlay, anyway. On the other hand, the CPU would likely have been a more advanced model than the one we ended up with, too. So…
A “First World”, albeit baffling, problem, I’m blessed to be able to say in summary. How did your ESU activation attempts go? Let me (and your fellow readers) know in the comments: thanks as always in advance!
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Updating an unsanctioned PC to Windows 11
- A holiday shopping guide for engineers: 2025 edition
- Microsoft embraces obsolescence by design with Windows 11
- Microsoft’s Build 2024: Silicon and associated systems come to the fore
The post Windows 10: Support hasn’t yet ended after all, but Microsoft’s still a fickle-at-best friend appeared first on EDN.
Latest issue of Semiconductor Today now available
Handheld enclosures target harsh environments

Rolec’s handCASE (IP 66/IP 67) handheld enclosures for machine control, robotics, and defense electronics can now be specified with a choice of lids and battery options.
These rugged diecast aluminum enclosures are ideal for industrial and military applications in which devices must survive challenging environments but also be comfortable to hold for long periods.
(Source: Rolec USA)
The robust handCASE can be specified with or without a battery compartment (4 × AA or 2 × 9 V). Two versions are available: S, with an ergonomically bevelled lid, and R, with a narrow-edged lid to maximize space. Both tops are recessed to protect a membrane keypad or front plate. Inside there are threaded screw bosses for PCBs or mounting plates.
The enclosures are available in three sizes: 3.15″ × 7.09″ × 1.67″, 3.94″ × 8.66″ × 1.67″ and 3.94″ × 8.66″ × 2.46″. As standard, Version S features a black (RAL 9005) base with a silver metallic top, while Version R is fully painted in light gray (RAL 7035).
Custom colors are available on request. They include weather-resistant powder coatings (F9) with WIWeB approvals and camouflage colors for military applications. These coatings are also available in a wet painted finish. They meet all military requirements, including the defense equipment standard VG 95211.
Options and accessories include a shoulder strap, a holding clip and wall bracket, and a corrosion-proof coating in azure blue (RAL 5009).
Rolec can supply handCASE fully customized. Services include CNC machining, engraving, RFI/EMI shielding, screen and digital printing, and assembly of accessories.
For more information, view the Rolec website: https://Rolec-usa.com/en/products/handcase#top
The post Handheld enclosures target harsh environments appeared first on EDN.
PhotonDelta launches Global Photonics Engineering Contest at PIC Summit USA
Breadboard Wristwatch
Submitted by /u/Electro-nut
ALLOS and Ennostar partner on 200mm GaN-on-Si LED epiwafers for micro-LED volume production
QD Laser orders Riber MBE 6000 to scale quantum dot laser production for datacoms
Take the TestDaF, TestAS, onSET, and dMAT exams in Kyiv – without extra hassle or the cost of travelling abroad!
✅ An officially recognised and certified centre:
The exams are organised and administered by the TestDaF Centre, which operates under a licence agreement with the TestDaF Institute (Bochum, Germany). Prof. S. M. Ivanenko, Doctor of Philological Sciences, has headed the Centre since the day it was founded.
element14 and Fulham announce global distribution partnership
element14 has formed a new global distribution partnership with Fulham, expanding access to advanced LED drivers, emergency lighting and intelligent control solutions for customers across EMEA & APAC. The agreement strengthens element14’s lighting portfolio in the region, supporting engineers and buyers across commercial, industrial and architectural lighting applications.
Fulham brings more than 30 years of expertise in LED drivers, modules, emergency lighting and intelligent control systems. Headquartered in the United States, the company operates globally, with manufacturing in India, supply channels in India and China, and a strong presence across Europe. Its portfolio includes indoor and outdoor LED drivers, emergency lighting systems, UV ballasts, and smart control technologies, all designed to meet key international standards, including CE, ENEC, DALI-2, and UL.
Through this partnership, element14 will distribute Fulham’s lighting solutions globally, improving availability and access to future-ready technologies for engineers and buyers worldwide.
The agreement covers Fulham’s core lighting portfolio – emergency lighting systems, indoor LED drivers, and constant-voltage driver platforms – with key ranges including the HotSpot Series, WorkHorse DALI-2 constant-current drivers, and the ThoroLED Series for architectural lighting, signage, and LED strip applications.
Customer benefits include:
- Broader access to certified, future-ready lighting technologies.
- Global availability through element14’s established distribution network.
- Support for a wide range of lighting applications and form factors.
- Access to Fulham’s deep technical expertise and proven product platforms.
Jose Lok, Global Product Category Director – Onboard Components & SBC, element14, said: “element14 has a strong commitment to adding value for our customers, and this partnership expands both choice and access to innovative lighting technologies. By working with Fulham, we are enabling customers worldwide to source advanced LED drivers, emergency lighting and control solutions through a trusted global distribution partner.”
Antony Corrie, CEO, Fulham, added: “Fulham is extremely excited to embark on this new relationship with element14. The partnership brings together shared values, strong heritage and a commitment to global innovation. element14 in APAC will be selling Fulham’s LED drivers, emergency battery backup solutions, exit signs and UV-C power systems across their global customer base.”
The post element14 and Fulham announce global distribution partnership appeared first on ELE Times.
India’s PLI Scheme Brings a Surge of 146% in Electronics Production
Despite geopolitical tensions, electronics manufacturing in India has performed exceptionally well, with smartphones leading the way. According to data shared by CareEdge Ratings, India’s electronics production has surged by 146% since 2021. The Production Linked Incentive (PLI) scheme played a significant role in boosting electronics manufacturing from Rs 2.13 lakh crore in the Financial Year 2021 to Rs 5.45 lakh crore in the Financial Year 2025.
Additionally, the boost in production was aided by USD 4 billion in FDI, of which 70% went to PLI beneficiaries. Apart from economic benefits, the accelerated production has triggered a massive socio-economic multiplier effect. The electronics sector has been a dominant contributor to the 9.5 lakh jobs generated across all PLI schemes, providing significant direct and indirect employment. Simultaneously, electronics has climbed to become one of India’s top export categories. By shifting from an importer to a “net exporter” of mobile phones, India is successfully narrowing its trade deficit and reducing its long-term dependence on imports from neighbouring manufacturing hubs.
While the 146% jump is a historic achievement, the roadmap ahead focuses on “Deep Localisation.” The government and industry leaders are now pivoting toward high-value components, including semiconductor packaging and display manufacturing. As of January 2026, this momentum positions India to reach its goal of a $300 billion electronics production ecosystem, solidifying its role as a critical alternative in the global “China Plus One” supply chain strategy.
The post India’s PLI Scheme Brings a Surge of 146% in Electronics Production appeared first on ELE Times.
Snow Lake extends option agreement for Mound Lake Gallium Project
Photon Design showcasing simulation tool innovations at Photonics West
📋 Budget for 2026
AI’s insatiable appetite for memory

The term “memory wall” was first coined in the mid-1990s when researchers from the University of Virginia, William Wulf and Sally McKee, co-authored “Hitting the Memory Wall: Implications of the Obvious.” The research presented the critical bottleneck of memory bandwidth caused by the disparity between processor speed and the performance of dynamic random-access memory (DRAM) architecture.
These findings introduced the fundamental obstacle that engineers have spent the last three decades trying to overcome. The rise of AI, graphics, and high-performance computing (HPC) has only served to increase the magnitude of the challenge.
Modern large language models (LLMs) are being trained with over a trillion parameters, requiring continuous access to data and petabytes per second of memory bandwidth. Newer LLMs in particular demand extremely high memory bandwidth for training and for fast inference, and the growth rate shows no signs of slowing, with the LLM market expected to grow from roughly $5 billion in 2024 to over $80 billion by 2033. And the growing gap between CPU and GPU performance, memory bandwidth, and latency is unmistakable.
The biggest challenge posed by AI training is moving these massive datasets between memory and processor, and here the memory system itself is the bottleneck. As compute performance has increased, memory architectures have had to evolve and innovate to keep pace. Today, high-bandwidth memory (HBM) is the most efficient solution for the industry’s most demanding applications like AI and HPC.
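A rough back-of-the-envelope calculation illustrates the scale involved. The parameter count, weight precision, and token rate below are assumed, illustrative values rather than measurements of any particular model or accelerator, and real deployments reduce per-device traffic through batching, caching, and sharding.

```python
# Illustrative lower bound: assume every weight is read once per generated token.
params = 1.0e12          # assumed model size: 1 trillion parameters
bytes_per_param = 2      # assumed FP16/BF16 weights

weight_bytes = params * bytes_per_param      # ~2 TB of weights to stream
tokens_per_second = 50                       # assumed decode rate

traffic = weight_bytes * tokens_per_second   # bytes of memory traffic per second
print(f"Weights: {weight_bytes / 1e12:.1f} TB")
print(f"Memory traffic: {traffic / 1e12:.0f} TB/s at {tokens_per_second} tokens/s")
```

Even allowing for batching and model parallelism, numbers of this magnitude show why memory bandwidth, rather than raw compute, so often sets the ceiling on throughput.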
History of memory architecture
In the 1940s, the von Neumann architecture was developed, and it became the basis for computing systems. This control-centric design stores a program’s instructions and data in the computer’s memory. The CPU fetches instructions and data sequentially, creating idle time while the processor waits for them to return from memory. The rapid evolution of processors and the relatively slower improvement of memory eventually created the first system memory bottlenecks.

Figure 1 A basic arrangement showing how the processor and memory work together. Source: Wikipedia
As memory systems evolved, memory bus widths and data rates increased, enabling higher memory bandwidths that improved this bottleneck. The rise of graphics processing units (GPUs) and HPC in the early 2000s accelerated the compute capabilities of systems and brought with them a new level of pressure on memory systems to keep compute and memory systems in balance.
This led to the development of new DRAMs, including graphics double data rate (GDDR) DRAMs, which prioritized bandwidth. GDDR was the dominant high-performance memory until AI and HPC applications went mainstream in the 2000s and 2010s, when a newer type of DRAM was required in the form of HBM.

Figure 2 The above chart highlights the evolution of memory over more than two decades. Source: Amir Gholami
The rise of HBM for AI
HBM is the solution of choice to meet the demands of AI’s most challenging workloads, with industry giants like Nvidia, AMD, Intel, and Google utilizing HBM for their largest AI training and inference work. Compared to standard double-data rate (DDR) or GDDR DRAMs, HBM offers higher bandwidth and better power efficiency in a similar DRAM footprint.
It combines vertically stacked DRAM chips with wide data paths and a new physical implementation where the processor and memory are mounted together on a silicon interposer. This silicon interposer allows thousands of wires to connect the processor to each HBM DRAM.
The much wider data bus enables more data to be moved efficiently, boosting bandwidth, reducing latency, and improving energy efficiency. While this newer physical implementation comes at a greater system complexity and cost, the trade-off is often well worth it for the improved performance and power efficiency it provides.
The HBM4 standard, which JEDEC released in April of 2025, marked a critical leap forward for the HBM architecture. It increases bandwidth by doubling the number of independent channels per device, which in turn allows more flexibility in accessing data in the DRAM. The physical implementation remains the same, with the DRAM and processor packaged together on an interposer that allows more wires to transport data compared to HBM3.
While HBM memory systems remain more complex and costlier to implement than other DRAM technologies, the HBM4 architecture offers a good balance between capacity and bandwidth that offers a path forward for sustaining AI’s rapid growth.
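As a rough illustration of why the wide interface matters, peak per-stack bandwidth is simply the interface width multiplied by the per-pin data rate. The widths and data rates below are approximate, representative figures assumed for this sketch; exact values depend on the specific JEDEC revision and vendor implementation.

```python
def peak_bandwidth_gb_per_s(bus_width_bits, pin_rate_gbps):
    """Peak per-stack bandwidth = interface width x per-pin data rate (bits -> bytes)."""
    return bus_width_bits * pin_rate_gbps / 8

# Approximate, representative configurations (assumed for illustration)
configs = {
    "HBM3-class (1024-bit @ 6.4 Gb/s)": (1024, 6.4),
    "HBM4-class (2048-bit @ 8.0 Gb/s)": (2048, 8.0),
}

for name, (width, rate) in configs.items():
    print(f"{name}: ~{peak_bandwidth_gb_per_s(width, rate):.0f} GB/s per stack")
```

Widening the interface and doubling the channel count is what moves a single stack from the high hundreds of GB/s toward the multi-TB/s range.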
AI’s future memory need
With LLMs growing at a rate of 30% to 50% year over year, memory technology will continue to be challenged to keep up with the industry’s performance, capacity, and power-efficiency demands. As AI continues to evolve and find applications at the edge, power-constrained applications like advanced AI agents and multimodal models will bring new challenges such as thermal management, cost, and hardware security.
The future of AI will continue to depend as much on memory innovation as it will on compute power itself. The semiconductor industry has a long history of innovation, and the opportunity that AI presents provides compelling motivation for the industry to continue investing and innovating for the foreseeable future.
Steve Woo is a memory system architect at Rambus. He is a distinguished inventor and a Rambus fellow.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
The post AI’s insatiable appetite for memory appeared first on EDN.
How AI and ML Became Core to Enterprise Architecture and Decision-Making
by Saket Newaskar, Head of AI Transformation, Expleo
Enterprise architecture is no longer a behind-the-scenes discipline focused on stability and control. It is fast becoming the backbone of how organizations think, decide, and compete. As data volumes explode and customer expectations move toward instant, intelligent responses, legacy architectures built for static reporting and batch processing are proving inadequate. This shift is not incremental; it is structural. In recent times, enterprise architecture has been viewed as an essential business enabler.
The global enterprise architecture tools market is projected to grow to USD 1.60 billion by 2030, driven by organizations aligning technology more closely with business outcomes. At the same time, the increasing reliance on real-time insights, automation, and predictive intelligence is pushing organizations to redesign their foundations. In this environment, artificial intelligence (AI) and machine learning (ML) are not just optional enhancements. They have become essential architectural components that determine how effectively an enterprise can adapt, scale, and create long-term value in a data-driven economy.
Why Modernisation Has Become Inevitable
Traditional enterprise systems were built for reliability and periodic reporting, not for real-time intelligence. As organisations generate data across digital channels, connected devices, and platforms, batch-based architectures create latency that limits decision-making. This challenge is intensifying as enterprises move closer to real-time operations. According to IDC, 75 per cent of enterprise-generated data is predicted to be processed at the edge by 2025, highlighting how rapidly data environments are decentralising. Legacy systems, designed for centralised control, struggle to operate in this dynamic landscape, making architectural modernisation unavoidable.
AI and ML as Architectural Building Blocks
AI and ML have moved from experimental initiatives to core decision engines within enterprise architecture. Modern architectures must support continuous data pipelines, model training and deployment, automation frameworks, and feedback loops as standard capabilities. This integration allows organisations to move beyond descriptive reporting toward predictive and prescriptive intelligence that anticipates outcomes and guides action.
In regulated sectors such as financial services, this architectural shift has enabled faster loan decisions. Moreover, it has improved credit risk assessment and real-time fraud detection via automated data analysis. AI-driven automation has also delivered tangible efficiency gains, with institutions reporting cost reductions of 30–50 per cent by streamlining repetitive workflows and operational processes. These results are not merely the outcomes of standalone tools; they are the product of architectures designed to embed intelligence into core operations.
Customer Experience as an Architectural Driver
Customer expectations are now a primary driver of enterprise architecture. Capabilities such as instant payments, seamless onboarding, and self-service have become standard. In addition, front-end innovations like chatbots and virtual assistants depend on robust, cloud-native, and API-led back-end systems that deliver real-time, contextual data at scale. Even as automation increases, architectures must embed security and compliance by design. Reflecting this shift, industry projections indicate that the global market for zero-trust security frameworks will exceed USD 60 billion annually by 2027, reinforcing security as a core architectural principle.
Data Governance and Enterprise Knowledge
With the acceleration of AI adoption across organisations, governance has become inseparable from architecture design. Data privacy, regulatory compliance, and security controls must be built into systems from the outset, especially as automation and cloud adoption expand. Meanwhile, enterprise knowledge – proprietary data, internal processes, and contextual understanding – has emerged as a critical differentiator.
Grounding AI models in trusted enterprise knowledge improves accuracy, explainability, and trust, particularly in high-stakes decision environments. This alignment further ensures that AI systems will support real business outcomes rather than producing generic or unreliable insights.
Human Readiness and Responsible Intelligence
Despite rapid technological progress, architecture-led transformation ultimately depends on people. Cross-functional alignment, cultural readiness, and shared understanding of AI initiatives are imperative for sustained adoption. Enterprise architects today increasingly act as translators between business strategy and intelligent systems. Additionally, they ensure that innovation progresses without compromising control.
Looking ahead, speed and accuracy will remain essential aspects of enterprise architecture. However, responsible AI will define long-term success. Ethical use, transparency, accountability, and data protection are becoming central architectural concerns. Enterprises will continue redesigning their architectures to be scalable, intelligent, and responsible in the years to come. Those that fail to modernise or embed AI-driven decision-making risk losing relevance in an economy where data, intelligence, and trust increasingly shape competitiveness.
The post How AI and ML Became Core to Enterprise Architecture and Decision-Making appeared first on ELE Times.
My First PCB, Upgraded the Front IO board of Antec Silver Fusion HTPC case
At first I thought it would be a simple upgrade. But damn, I had to learn about tolerances, differential pairs, and resistance. The first PCB I ordered had incorrect pin pitches – they were supposed to be smaller. I had to redesign the entire board and use a third layer for power routing. I ordered from JLCPCB as it was easier to find through-hole USB 3.0 connectors on their site. The second layer isn't shown, but it's a ground plane. There are probably a ton of improvements to be made. I want to thank the folks over at r/PCB and r/PrintedCircuitBoard – those guys are the real deal.
Sometimes you have to improvise…
Building a little flyback driver and this was the only MOSFET I had with a high enough Vds and low enough Vgs to work… hopefully I didn’t overheat it too badly.