Feed aggregator
Photon Design showcasing simulation tool innovations at Photonics West
📋 Budget estimate for 2026
AI’s insatiable appetite for memory

The term “memory wall” was first coined in the mid-1990s when researchers from the University of Virginia, William Wulf and Sally McKee, co-authored “Hitting the Memory Wall: Implications of the Obvious.” The research presented the critical bottleneck of memory bandwidth caused by the disparity between processor speed and the performance of dynamic random-access memory (DRAM) architecture.
These findings introduced the fundamental obstacle that engineers have spent the last three decades trying to overcome. The rise of AI, graphics, and high-performance computing (HPC) has only served to increase the magnitude of the challenge.
Modern large language models (LLMs) are being trained with over a trillion parameters, requiring continuous access to data and petabytes per second of aggregate bandwidth. Newer LLMs in particular demand extremely high memory bandwidth for training and for fast inference, and the growth rate shows no signs of slowing, with the LLM market size expected to increase from roughly $5 billion in 2024 to over $80 billion by 2033. And the growing gap between CPU and GPU performance on one side and memory bandwidth and latency on the other is unmistakable.
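The scale of the problem is easy to see with a back-of-the-envelope footprint estimate. The sketch below uses the common rule of thumb of roughly 16 bytes of state per parameter for mixed-precision Adam training (FP16 weights and gradients plus FP32 optimizer state); these are illustrative figures, not measurements, and actual footprints depend on precision, optimizer, and parallelism strategy.

```python
# Rough memory footprint of training a trillion-parameter model.
# Rule-of-thumb assumption: ~16 bytes of state per parameter for
# mixed-precision Adam (FP16 weights/gradients + FP32 optimizer state).

def training_footprint_tb(num_params, bytes_per_param=16):
    """Approximate training-time memory footprint in terabytes."""
    return num_params * bytes_per_param / 1e12

# A 1-trillion-parameter model needs on the order of 16 TB of state,
# far beyond the memory capacity of any single accelerator.
print(f"{training_footprint_tb(1e12):.0f} TB")
```

That state must be sharded across many devices and streamed continuously, which is exactly why memory bandwidth, not raw compute, sets the pace of training.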
The biggest challenge posed by AI training is in moving these massive datasets between the memory and processor, and here, the memory system itself is the biggest bottleneck. As compute performance has increased, memory architectures have had to evolve and innovate to keep pace. Today, high-bandwidth memory (HBM) is the most efficient solution for the industry’s most demanding applications like AI and HPC.
History of memory architecture
The von Neumann architecture, developed in the 1940s, became the basis for computing systems. In this control-centric design, a program's instructions and data are stored in the computer's memory; the CPU fetches them sequentially, creating idle time while the processor waits for instructions and data to return from memory. The rapid evolution of processors and the relatively slower improvement of memory eventually created the first system memory bottlenecks.

Figure 1 A basic von Neumann arrangement showing how the processor and memory work together. Source: Wikipedia
As memory systems evolved, memory bus widths and data rates increased, enabling higher memory bandwidths that improved this bottleneck. The rise of graphics processing units (GPUs) and HPC in the early 2000s accelerated the compute capabilities of systems and brought with them a new level of pressure on memory systems to keep compute and memory systems in balance.
This led to the development of new DRAMs, including graphics double data rate (GDDR) DRAMs, which prioritized bandwidth. GDDR was the dominant high-performance memory until AI and HPC applications went mainstream in the 2000s and 2010s, when a newer type of DRAM was required in the form of HBM.

Figure 2 The chart highlights the evolution of memory over more than two decades. Source: Amir Gholami
The rise of HBM for AI
HBM is the solution of choice to meet the demands of AI’s most challenging workloads, with industry giants like Nvidia, AMD, Intel, and Google utilizing HBM for their largest AI training and inference work. Compared to standard double-data rate (DDR) or GDDR DRAMs, HBM offers higher bandwidth and better power efficiency in a similar DRAM footprint.
It combines vertically stacked DRAM chips with wide data paths and a new physical implementation where the processor and memory are mounted together on a silicon interposer. This silicon interposer allows thousands of wires to connect the processor to each HBM DRAM.
The much wider data bus enables more data to be moved efficiently, boosting bandwidth, reducing latency, and improving energy efficiency. While this newer physical implementation comes at a greater system complexity and cost, the trade-off is often well worth it for the improved performance and power efficiency it provides.
The HBM4 standard, which JEDEC released in April of 2025, marked a critical leap forward for the HBM architecture. It increases bandwidth by doubling the number of independent channels per device, which in turn allows more flexibility in accessing data in the DRAM. The physical implementation remains the same, with the DRAM and processor packaged together on an interposer that allows more wires to transport data compared to HBM3.
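The headline bandwidth figures follow directly from interface width and per-pin data rate. The sketch below works through the arithmetic with illustrative speed grades (a 1024-bit interface at 6.4 Gb/s per pin for an HBM3-class stack versus a 2048-bit interface at 8 Gb/s per pin for an HBM4-class stack); consult the JEDEC specifications and vendor datasheets for the exact figures of any given device.

```python
# Peak per-stack HBM bandwidth from interface width and per-pin rate.
# Speed grades below are illustrative assumptions, not datasheet values.

def stack_bandwidth_gb_s(bus_width_bits, pin_rate_gbps):
    """Peak bandwidth of one HBM stack in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8  # 8 bits per byte

hbm3_like = stack_bandwidth_gb_s(1024, 6.4)  # ~819 GB/s per stack
hbm4_like = stack_bandwidth_gb_s(2048, 8.0)  # ~2 TB/s per stack
```

The wider interface is what the silicon interposer makes practical: routing thousands of signal traces between the processor and each DRAM stack is not feasible on a conventional PCB.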
While HBM memory systems remain more complex and costlier to implement than other DRAM technologies, the HBM4 architecture offers a good balance between capacity and bandwidth that offers a path forward for sustaining AI’s rapid growth.
AI’s future memory need
With LLMs growing at a rate of 30% to 50% year over year, memory technology will continue to be challenged to keep up with the industry's performance, capacity, and power-efficiency demands. As AI continues to evolve and find applications at the edge, power-constrained applications like advanced AI agents and multimodal models will bring new challenges such as thermal management, cost, and hardware security.
The future of AI will continue to depend as much on memory innovation as it will on compute power itself. The semiconductor industry has a long history of innovation, and the opportunity that AI presents provides compelling motivation for the industry to continue investing and innovating for the foreseeable future.
Steve Woo is a memory system architect at Rambus. He is a distinguished inventor and a Rambus fellow.
Special Section: AI Design
- The AI design world in 2026: What you need to know
- AI workloads demand smarter SoC interconnect design
The post AI’s insatiable appetite for memory appeared first on EDN.
How AI and ML Became Core to Enterprise Architecture and Decision-Making
by Saket Newaskar, Head of AI Transformation, Expleo
Enterprise architecture is no longer a behind-the-scenes discipline focused on stability and control. It is fast becoming the backbone of how organizations think, decide, and compete. As data volumes explode and customer expectations move toward instant, intelligent responses, legacy architectures built for static reporting and batch processing are proving inadequate. This shift is not incremental; it is structural. In recent times, enterprise architecture has been viewed as an essential business enabler.
The global enterprise architecture tools market is projected to grow to USD 1.60 billion by 2030, driven by organizations aligning technology more closely with business outcomes. At the same time, the increasing reliance on real-time insights, automation, and predictive intelligence is pushing organizations to redesign their foundations. In this context, artificial intelligence (AI) and machine learning (ML) are not just optional enhancements. They have become essential architectural components that determine how effectively an enterprise can adapt, scale, and create long-term value in a data-driven economy.
Why Modernisation Has Become Inevitable
Traditional enterprise systems were built for reliability and periodic reporting, not for real-time intelligence. As organisations generate data across digital channels, connected devices, and platforms, batch-based architectures create latency that limits decision-making. This challenge is intensifying as enterprises move closer to real-time operations. According to IDC, 75 per cent of enterprise-generated data is predicted to be processed at the edge by 2025, highlighting how rapidly data environments are decentralising. Legacy systems, designed for centralised control, struggle to operate in this dynamic landscape, making architectural modernisation unavoidable.
AI and ML as Architectural Building Blocks
AI and ML have moved from experimental initiatives to core decision engines within enterprise architecture. Modern architectures must support continuous data pipelines, model training and deployment, automation frameworks, and feedback loops as standard capabilities. This integration allows organisations to move beyond descriptive reporting toward predictive and prescriptive intelligence that anticipates outcomes and guides action.
In regulated sectors such as financial services, this architectural shift has enabled faster loan decisions. Moreover, it has improved credit risk assessment and real-time fraud detection via automated data analysis. AI-driven automation has also delivered tangible efficiency gains, with institutions reporting cost reductions of 30–50 per cent by streamlining repetitive workflows and operational processes. These results are not merely the outcomes of standalone tools. Instead, they are outcomes of architectures designed to embed intelligence into core operations.
Customer Experience as an Architectural Driver
Customer expectations are now a primary driver of enterprise architecture. Capabilities such as instant payments, seamless onboarding, and self-service have become standard. In addition, front-end innovations like chatbots and virtual assistants depend on robust, cloud-native, and API-led back-end systems that deliver real-time, contextual data at scale. While automation increases, architectures must embed security and compliance by design. Reflecting this shift, industry projections suggest that the global market for zero-trust security frameworks will exceed USD 60 billion annually by 2027, reinforcing security as a core architectural principle.
Data Governance and Enterprise Knowledge
With the acceleration of AI adoption across organisations, governance has become inseparable from architecture design. Data privacy, regulatory compliance, and security controls must be built into systems from the outset, especially as automation and cloud adoption expand. Meanwhile, enterprise knowledge (proprietary data, internal processes, and contextual understanding) has emerged as a critical differentiator.
Grounding AI models in trusted enterprise knowledge improves accuracy, explainability, and trust, particularly in high-stakes decision environments. This alignment further ensures that AI systems will support real business outcomes rather than producing generic or unreliable insights.
Human Readiness and Responsible Intelligence
Despite rapid technological progress, architecture-led transformation ultimately depends on people. Cross-functional alignment, cultural readiness, and shared understanding of AI initiatives are imperative for sustained adoption. Enterprise architects today increasingly act as translators between business strategy and intelligent systems. Additionally, they ensure that innovation progresses without compromising control.
Looking ahead, speed and accuracy will remain essential aspects of enterprise architecture. However, responsible AI will define long-term success. Ethical use, transparency, accountability, and data protection are becoming central architectural concerns. Enterprises will continue redesigning their architectures to be scalable, intelligent, and responsible for the years to come. Those that fail to modernise or embed AI-driven decision-making risk losing relevance in an economy where data, intelligence, and trust increasingly shape competitiveness.
The post How AI and ML Became Core to Enterprise Architecture and Decision-Making appeared first on ELE Times.
My First PCB, Upgraded the Front IO board of Antec Silver Fusion HTPC case
At first I thought it would be a simple upgrade. But damn, I had to learn about tolerances, differential pairs, and resistances. The first PCB I ordered had incorrect pin pitches; they were supposed to be smaller. I had to redesign the entire board and use a third layer for power routing. I ordered from JLCPCB as it was easier to find through-hole USB 3.0 connectors on their site. The second layer is not shown, but it's a ground plane. There are probably a ton of improvements to be made. I want to thank the folks over at r/PCB and r/PrintedCircuitBoard; those guys are the real deal.
Sometimes you have to improvise…
Building a little flyback driver and this was the only MOSFET I had with a high enough Vds and low enough Vgs to work… hopefully I didn't overheat it too badly.
UK–Ukraine 100 Year Partnership Forum
Kyiv Polytechnic took part in the UK–Ukraine 100 Year Partnership Forum, dedicated to the anniversary of the signing of the Hundred Year Partnership Agreement between Ukraine and the United Kingdom, which sets the long-term course for our country's development and recovery.
Custom light for disc golf baskets.
I am making my own disc golf basket light. It features 32 LEDs, battery management for a 21700 battery, and a constant-current driver, all housed in a 3D-printed case with a polycarbonate lens/cover.
AI generated electronic horrors
I built a Programmable Electronic Load & Battery Tester from scavenged parts
Customizable 4-Letter 5x5 LED Matrices
This was designed and 'built' by me; by that I mean I designed the circuit, PCB layout, and 3D model (and printed them myself), and only had JLCPCB fabricate the PCB, as that is outside of my abilities. Edit: I forgot to mention that I also programmed it all, originally in Arduino C (in 2024); in 2025 I ported it over to MicroPython and made it more scalable.
Weekly discussion, complaint, and rant thread
Open to anything, including discussions, complaints, and rants.
Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.
Reddit-wide rules do apply.
To see the newest posts, sort the comments by "new" (instead of "best" or "top").
Radio Waves of Science: RTPSAS-2025 at the Faculty of Radio Engineering
In early December, Igor Sikorsky Kyiv Polytechnic Institute once again became a venue for professional scientific discussion. The Faculty of Radio Engineering hosted the XIV International Scientific and Technical Conference "Radio Engineering Problems, Signals, Devices and Systems" (RTPSAS-2025), an event that has for many years been an integral part of the faculty's scientific life. Despite all of today's challenges, the conference brought researchers together, proving that engineering thought, research interest, and the drive for development do not stop.
Partners visit to advance an educational program in humanitarian demining
Andrii Shysholin, Vice-Rector for International Relations of Igor Sikorsky Kyiv Polytechnic Institute, Oksana Vovk, Director of the Institute of Energy Saving and Energy Management, and Kateryna Luhovska, Director of the Ukrainian-Japanese Center of KPI, met on December 16 with Mykhailo Turianytsia, a communications specialist at the United Nations Office for Project Services (UNOPS), and Narumi Tateda, a journalist with the Kyodo News agency (Japan).
KPI Legends in the GUR League of Legends
From November 20 to December 1, 2025, the Main Intelligence Directorate of the Ministry of Defence of Ukraine, together with HackenProof, held the "First National CTF" online competition. The dcua team of the Institute of Physics and Technology of Igor Sikorsky Kyiv Polytechnic Institute took part and achieved the best result, taking first place in the highest tier of the competition, the special League of Legends standings!
III-V Epi exhibiting at Photonics West 2026
Lumentum showcasing ultrafast and UV laser platforms for precision manufacturing
A graduate of our university, Andrii Mykhailovych Tsapok, was killed in the war
🕯 With deep sorrow we report that Andrii Mykhailovych Tsapok, a 2018 graduate of the Department of Laser Technology and Physical-Technical Technologies of the Mechanical Engineering Institute, was killed defending Ukraine.
Zero maintenance asset tracking via energy harvesting
Real-time tracking of assets has enabled both supply chain digitalization and operational efficiency leaps. These benefits, driven by IoT advances, have proved transformational. As a result, the market for asset-tracking systems for transportation and logistics firms is set to triple, reaching USD 22.5 billion by 2034¹. And, if we look across all sectors, the asset tracking market is forecasted to grow at a CAGR of 15%, reaching USD 51.2 billion by 2030².
However, firms' ability to maximize the benefits of asset tracking is being constrained by the finite power of a single component: the battery. Reliance on batteries has a number of disadvantages. In addition to the battery cost, battery replacement across multiple locations increases operational costs and demands considerable time and effort.
At the same time, batteries can cause system-wide vulnerabilities. When a tag’s battery unexpectedly fails, for example, a tracked item can effectively disappear from the network and the corresponding data is no longer collected. This, in turn, leads to supply chain disruptions and bottlenecks, sometimes even production line downtime, and reduces the very efficiencies the IoT-based system was designed to deliver (Figure 1).
Figure 1 Real-time tracking of assets is transforming logistics operations, enabling supply chain digitalization and unlocking major efficiency gains.
Battery maintenance
A "typical" asset tracking tag will implement two core functions: location and communications. For long-distance shipping, GPS will primarily be used as the location identifier. In a logistics warehouse, GPS coverage can be poor, but Wi-Fi scanning remains an option. Other efficient systems include FSK or BLE beacons, Wirepas mesh, or Quuppa's angle of arrival (AoA).
For data communication, several protocols are possible:
- BLE if the assets remain indoors
- LTE-M if global coverage is a key requirement, and the assets are outdoors
- LoRaWAN if seamless indoor and outdoor coverage is needed, as this can use private, public, community, and satellite networks, with some of them offering native multi-country coverage.
Sensors can also improve functionality and efficiency. For example, an accelerometer can be added to detect when a tag moves and initiate a wake-up. Other sensors can determine a package's status and condition. In the case of energy harvesting, the power management chip can report the amount of energy available, allowing the device's behavior to be adapted accordingly. The final important component on an asset tracker's board is an energy-efficient MCU.
The stated battery life of a 15-dollar tag is often overestimated, mainly due to radio protocol behavior. And even if the battery cost itself is limited, the replacement cost can be estimated at around 50 dollars once man-hours are factored in.
An alternative tag based on the latest energy harvesting technology might have an initial cost of around 25 dollars, but with no batteries to replace, its total cost over a decade remains essentially the same, whereas even a single battery replacement already pushes a 15-dollar tag above that level.
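The cost argument above reduces to a few lines of arithmetic. The sketch below uses the illustrative figures from the text (a 15-dollar battery tag, a 25-dollar harvesting tag, and 50 dollars per battery swap), not vendor pricing.

```python
# Total cost of ownership of one tag over its service life, using the
# article's illustrative figures (not vendor pricing).

def tag_tco(unit_cost_usd, battery_swaps, swap_cost_usd=50):
    """Unit cost plus the cost of any battery replacements."""
    return unit_cost_usd + battery_swaps * swap_cost_usd

battery_tag = tag_tco(15, battery_swaps=1)     # one swap: 65 dollars
harvesting_tag = tag_tco(25, battery_swaps=0)  # no maintenance: 25 dollars
```

A single battery replacement already makes the cheaper tag more expensive over its lifetime, and every additional swap widens the gap.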
For example, in the automotive industry, manufacturers transport parts using large reusable metal racks. Each manufacturer will use tens of thousands of these, each valued at around 500 dollars. We have been told that, because of scanning errors and mismanagement, up to 10 percent go missing each year.
By equipping racks with tags powered from harvested energy, companies can create an automated inventory system. This results in annual OPEX savings that can be in the order of millions of dollars, a return on investment within months, and lower CAPEX since fewer racks are required for the same production volume.
Self-powered tracking
Unlike battery-powered asset trackers, Ambient IoT tags use three core blocks to supply energy to the system: the harvester, an energy storage element, and a power management IC. Together, these enable energy to be harvested as efficiently as possible.
Energy sources can range from RF through thermoelectric to vibration, but for many logistics and transport applications, the most readily available and most commonly used source is light. And this will be natural (solar) or ambient, depending on whether the asset being tracked spends most of its life outdoors (e.g., a container) or indoors (e.g., a warehouse environment).
For outdoor asset trackers on containers or vehicles, significant energy can be harvested from direct sunlight using traditional photovoltaic (PV) amorphous silicon panels. When space is limited, monocrystalline silicon technology provides a higher power density and still works well indoors. For indoor light levels, in addition to the traditional amorphous silicon, there are three additional technologies that become available and cost-effective for these use cases.
- Organic photovoltaic (OPV) cells can provide up to twice the power density of amorphous silicon. Furthermore, the flexibility of these PV cells allows for easy mechanical implementation on the end device.
- Dye-sensitized solar cells bring even higher power densities and exhibit low degradation levels over time, but they are sometimes limited by the requirement for a glass substrate, which prevents flexibility.
- Perovskite PV cells also reach similar power densities as dye-sensitized solar cells, with the possibility of a flexible substrate. However, these have challenges related to lead content and aging.
Before selecting a harvester, an evaluation of the PV cell should be undertaken. This should combine both laboratory measurements and real-world performance tests, along with an assessment of aging characteristics (to ensure that the lifetime of the PV cell exceeds the expected end-of-life of the tracker) and mechanical integration into the casing. The manufacturer chosen to supply the technology should also be able to support large-scale deployments.
When it comes to energy storage, such a system may require either a small, rechargeable chemical-based battery or a supercapacitor. Alternatively, there is the lithium capacitor (a hybrid of the two). Each has distinct characteristics regarding energy density and self-discharge. The right choice will depend on a number of factors, including the application’s required operating temperature and longevity.
Finally, a power management IC (PMIC) must be chosen. This provides the interface between the PV cell and the storage element, and manages the energy flow between the two, something that needs to be done with minimal losses. The PMIC should be optimized to maximize the lifespan of the energy storage element, protecting it from overcharging and overdischarging, while delivering a stable, regulated power output to the tag’s application electronics (Figure 2).
For an indoor industrial environment, where ambient light levels can be low, there is the risk of the storage element becoming fully depleted. It is therefore crucial that the PMIC can perform a cold start in these conditions, when only a small amount of energy is available.
In developing the most appropriate system for a given asset tracking application, it will be important to undertake a power budget analysis. This considers both the energy consumed by the application and the energy available for harvesting. Given the size of the device and its power consumption, it is relatively straightforward to determine, for any given PV cell technology, the hours of light per day and the luminosity (lux level) needed for the device to run autonomously by harvesting more energy over a 24-hour period than it consumes.
The storage element size is also critical, as it determines how long the device can operate when no energy is available at the source. And even if power consumption is too high for full autonomy, energy harvesting can still significantly extend battery life.
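A power budget analysis of this kind can be sketched in a few lines. The numbers below are hypothetical (a 10 cm² indoor PV cell delivering 10 µW/cm² at typical office light levels, 8 hours of light per day, and a tag averaging 30 µW); real values must come from measured cell output and the tag's actual duty cycle.

```python
# Daily energy margin of a light-harvesting tag: positive means the tag
# harvests more over 24 hours than it consumes. All inputs hypothetical.

def daily_energy_margin_uwh(pv_area_cm2, harvest_uw_per_cm2,
                            light_hours, avg_consumption_uw):
    """Net daily margin in microwatt-hours (uWh)."""
    harvested_uwh = pv_area_cm2 * harvest_uw_per_cm2 * light_hours
    consumed_uwh = avg_consumption_uw * 24
    return harvested_uwh - consumed_uwh

# 10 cm2 indoor cell at 10 uW/cm2, 8 h of office light, tag averaging 30 uW:
margin = daily_energy_margin_uwh(10, 10, 8, 30)  # +80 uWh/day of headroom
```

If the margin comes out negative, the designer can increase the PV area, choose a higher-power-density cell technology, or reduce the tag's average consumption (e.g., with longer reporting intervals).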
Figure 2 e-peas has worked with several leading tracking system developers, including MOKO SMART (top), Minew (left), inVirtus (center), and Jeng IoT (right), to implement energy harvesting in asset trackers. Source: e-peas
Examples of energy-harvested tracking systems
Companies such as inVirtus, Jeng IoT, Minew, and MOKO SMART, all leaders in developing logistics and transportation tracking systems, have already started transitioning to energy-harvesting-powered asset trackers. And notably, these devices are delivering significant returns in complex logistical environments.
Minew’s device, for example, implements Epishine’s ultra-thin solar cells to create a credit card-sized asset tracker. MOKO SMART’s L01A-EH is a BLE-based tracker with a three-axis accelerometer and temperature and humidity sensors. These tags, which can be placed on crates to track their journey through a production process, give precise data on lead times and dwell times at each station. This allows monitoring of efficiency and the highlighting of bottlenecks in the system.
A good example of such benefits can be found at Thales, where the inVirtus EOSFlex Beacon battery-free tag is being used. The company has cited a 30-minute saving on tracking part movements when monitoring work orders, after switching to a system where each work order is digitally linked to a tagged box. Because each area of the factory corresponds to a specific task, the tag's indoor location provides accurate manufacturing process monitoring.
Additionally, the system saves time by selecting the highest priority task and activating a blinking LED on the corresponding box. It also improves both lead time prediction accuracy and scheduling adherence—the alignment between the planned schedule and actual work progress.
The tags have also been used to locate measurement equipment shared by multiple divisions, and Thales has reported savings of up to two hours when locating these pieces of equipment. This is a critical difference as each instance of downtime represents a major cost, and without this tracking, the company would incur significant maintenance delays that could stop the production line.
Additionally, one aviation manufacturer that is also using this approach to track the work orders has improved scheduling adherence from 30% up to 90%.
Ultimately, energy harvesting in logistics is not simply about eliminating batteries, but about building more resilient, predictable, and cost-effective supply chains. Perpetually powered tracking systems provide constant and reliable visibility, allow for more accurate lead-time predictions, better resource planning, and a significant reduction in the operational friction caused by lost or untraceable assets.
Pierre Gelpi graduated from École Polytechnique in Paris and obtained a Master’s degree from the University of Montreal in Canada. He has 25 years of experience in the telecommunications industry. He began his career at Orange Labs, where he spent eight years working on radio technologies and international standardization. He then served for five years as Technical Director for large accounts at Orange Business Services. After Orange, he joined Siradel, where he led sales and customer operations for wireless network planning and smart city projects, notably in Chile. He subsequently co-founded the first SaaS-based radio planning tool dedicated to IoT.
In 2016, he joined Semtech, where he was responsible for LoRa business development in the EMEA region, driving demand creation to accelerate market growth, particularly in the track-and-trace segment. He joined e-peas in 2024 to lead Sales in EMEA and to promote the vision of unlimited battery life.
References:
- Real Time Location Systems in Transportation and Logistics Market Outlook Report 2025–2034, Yahoo! Finance. https://uk.finance.yahoo.com/news/real-time-location-systems-transportation-150900694.html
- Asset Tracking Market Size & Share: Industry Report, 2030, Grand View Research. https://www.grandviewresearch.com/industry-analysis/asset-tracking-market-report
Related Content
- Energy harvesting gets really personal
- Circuits for RF Energy Harvesting
- Lightning as an energy harvesting source?
- 5 key considerations in IoT asset tracking design
The post Zero maintenance asset tracking via energy harvesting appeared first on EDN.
NOS AT&T MilSpec Transistor Collection Circa 1974-79
These came from an AT&T plant that worked on submarine data systems. All officially inspected. Just wanted to share for anyone else who nerds out on this stuff.



