Feed aggregator

Take the TestDaF, TestAS, onSET, and dMAT exams in Kyiv – without extra hassle or the cost of travelling abroad!

News - 3 hours 51 min ago

✅ An officially recognised and certified centre:
The exams are organised and administered by the TestDaF Centre, which holds a licensing agreement with the TestDaF Institute (Bochum, Germany). Prof. Dr. S.M. Ivanenko (Doctor of Philology) has headed the Centre since the day it was founded.

element14 and Fulham announce global distribution partnership

ELE Times - 4 hours 53 min ago

element14 has formed a new global distribution partnership with Fulham, expanding access to advanced LED drivers, emergency lighting, and intelligent control solutions for customers across EMEA and APAC. The agreement strengthens element14’s lighting portfolio in these regions, supporting engineers and buyers across commercial, industrial, and architectural lighting applications.

Fulham brings more than 30 years of expertise in LED drivers, modules, emergency lighting and intelligent control systems. Headquartered in the United States, the company operates globally, with manufacturing in India, supply channels in India and China, and a strong presence across Europe. Its portfolio includes indoor and outdoor LED drivers, emergency lighting systems, UV ballasts, and smart control technologies, all designed to meet key international standards, including CE, ENEC, DALI-2, and UL.

Through this partnership, element14 will distribute Fulham’s lighting solutions globally, improving availability and access to future-ready technologies for engineers and buyers worldwide.

The agreement covers Fulham’s core lighting portfolio, including emergency lighting systems, indoor LED drivers, and constant-voltage driver platforms, with key ranges such as the HotSpot Series, WorkHorse DALI-2 constant-current drivers, and the ThoroLED Series for architectural lighting, signage, and LED strip applications.

Customer benefits include:

  • Broader access to certified, future-ready lighting technologies.
  • Global availability through element14’s established distribution network.
  • Support for a wide range of lighting applications and form factors.
  • Access to Fulham’s deep technical expertise and proven product platforms.

Jose Lok, Global Product Category Director – Onboard Components & SBC, element14, said: “element14 has a strong commitment to adding value for our customers, and this partnership expands both choice and access to innovative lighting technologies. By working with Fulham, we are enabling customers worldwide to source advanced LED drivers, emergency lighting and control solutions through a trusted global distribution partner.”

Antony Corrie, CEO, Fulham, added: “Fulham is extremely excited to embark on this new relationship with element14. The partnership brings together shared values, strong heritage and a commitment to global innovation. element14 in APAC will be selling Fulham’s LED drivers, emergency battery backup solutions, exit signs and UV-C power systems across their global customer base.”


India’s PLI Scheme Brings a Surge of 146% in Electronics Production

ELE Times - 5 hours 30 min ago

Despite geopolitical tensions, manufacturing in India has performed exceptionally well, with smartphones leading the way. According to data shared by CareEdge Ratings, India’s electronics production has surged by 146% since 2021. The Production Linked Incentive (PLI) scheme played a significant role in boosting electronics manufacturing from Rs 2.13 lakh crore in Financial Year 2021 to Rs 5.45 lakh crore in Financial Year 2025.

Additionally, the boost in production was aided by USD 4 billion in FDI, of which 70% went to PLI beneficiaries. Beyond the economic benefits, the accelerated production has triggered a massive socio-economic multiplier effect. The electronics sector has been a dominant contributor to the 9.5 lakh jobs generated across all PLI schemes, providing significant direct and indirect employment. Simultaneously, electronics has climbed to become one of India’s top export categories. By shifting from an importer to a “net exporter” of mobile phones, India is successfully narrowing its trade deficit and reducing its long-term dependency on imports from neighbouring manufacturing hubs.

While the 146% jump is a historic achievement, the roadmap ahead focuses on “Deep Localisation.” The government and industry leaders are now pivoting toward high-value components, including semiconductor packaging and display manufacturing. As of January 2026, this momentum positions India to reach its goal of a $300 billion electronics production ecosystem, solidifying its role as a critical alternative in the global “China Plus One” supply chain strategy.


Snow Lake extends option agreement for Mound Lake Gallium Project

Semiconductor today - 6 hours 13 min ago
Canada-based nuclear fuel cycle and critical minerals company Snow Lake Resources Ltd (trading as Snow Lake Energy) has reached an agreement with Canadian Uranium Corp to extend the existing option agreement with respect to the Mound Lake Gallium Project, situated north of Thunder Bay, Ontario, Canada...

Photon Design showcasing simulation tool innovations at Photonics West

Semiconductor today - 6 hours 29 min ago
In booth 3452 at SPIE Photonics West 2026 in San Francisco, CA, USA (20–22 January), photonic simulation CAD software developer Photon Design Ltd of Oxford, UK is highlighting its latest innovations, HAROLD (QD), MT-FIMMPROP and EPIPPROP. The firm’s simulation tools will also be featured in academic presentations and commercial demonstrations elsewhere at the show...

📋 Budget for 2026

News - 7 hours 24 min ago

AI’s insatiable appetite for memory

EDN Network - 8 hours 15 min ago

The term “memory wall” was coined in the mid-1990s, when University of Virginia researchers William Wulf and Sally McKee co-authored “Hitting the Memory Wall: Implications of the Obvious.” The paper described the critical memory-bandwidth bottleneck caused by the growing disparity between processor speed and the performance of dynamic random-access memory (DRAM).

These findings introduced the fundamental obstacle that engineers have spent the last three decades trying to overcome. The rise of AI, graphics, and high-performance computing (HPC) has only served to increase the magnitude of the challenge.

Modern large language models (LLMs) are being trained with over a trillion parameters, requiring continuous access to data and petabytes per second of memory bandwidth. Newer LLMs in particular demand extremely high memory bandwidth for training and for fast inference, and the growth rate shows no signs of slowing, with the LLM market expected to increase from roughly $5 billion in 2024 to over $80 billion by 2033. The growing gap between CPU and GPU compute performance on one side and memory bandwidth and latency on the other is unmistakable.
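
To make that scale concrete, here is a rough back-of-envelope sketch (not from the article) of the memory bandwidth a large model can demand during inference. It assumes, purely for illustration, that every generated token streams each weight from memory once, and it ignores KV-cache traffic and batching.

```python
# Back-of-envelope estimate of DRAM traffic for LLM inference.
# Assumption (illustrative, not from the article): each generated token
# streams every weight from memory once; KV-cache and batching are ignored.

def inference_bandwidth_gb_s(params: float, bytes_per_param: float,
                             tokens_per_s: float) -> float:
    """Approximate memory bandwidth (GB/s) needed to sustain a token rate."""
    return params * bytes_per_param * tokens_per_s / 1e9

# Example: a 1-trillion-parameter model stored in FP16 (2 bytes/parameter)
# generating 50 tokens per second.
demand = inference_bandwidth_gb_s(params=1e12, bytes_per_param=2, tokens_per_s=50)
print(f"~{demand / 1e3:.0f} TB/s of weight traffic")  # ~100 TB/s
```

Even under these simplifying assumptions, the figure lands far beyond what any single conventional DRAM interface can deliver, which is why memory bandwidth, not raw compute, so often sets the ceiling.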

The biggest challenge posed by AI training is in moving these massive datasets between the memory and processor, and here, the memory system itself is the biggest bottleneck. As compute performance has increased, memory architectures have had to evolve and innovate to keep pace. Today, high-bandwidth memory (HBM) is the most efficient solution for the industry’s most demanding applications like AI and HPC.

History of memory architecture

The von Neumann architecture, developed in the 1940s, became the basis for computing systems. This control-centric design stores a program’s instructions and data in the computer’s memory; the CPU fetches them sequentially, sitting idle while it waits for instructions and data to return from memory. The rapid evolution of processors and the comparatively slower improvement of memory eventually created the first system memory bottlenecks.

Figure 1 Here is a basic arrangement showing how processor and memory work together. Source: Wikipedia
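
As a minimal sketch of that imbalance (with assumed, illustrative peak figures rather than numbers from the article), compare the time a simple streaming kernel spends computing with the time it spends moving data:

```python
# Roofline-style comparison of compute time vs. memory-transfer time for a
# simple streaming kernel. The peak figures below are assumed for illustration.

def kernel_times(bytes_moved: float, flops: float,
                 peak_flops: float, peak_bw_bytes_s: float):
    """Return (compute_time_s, memory_time_s) for one kernel invocation."""
    return flops / peak_flops, bytes_moved / peak_bw_bytes_s

# Example: y = a*x + y over 1e9 FP32 elements (2 FLOPs and 12 bytes moved
# per element) on a hypothetical chip with 10 TFLOP/s but only 500 GB/s.
t_compute, t_memory = kernel_times(bytes_moved=12e9, flops=2e9,
                                   peak_flops=10e12, peak_bw_bytes_s=500e9)
print(f"compute: {t_compute * 1e3:.2f} ms, memory: {t_memory * 1e3:.2f} ms")
# The processor spends most of its time waiting on DRAM: the memory wall.
```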

As memory systems evolved, memory bus widths and data rates increased, enabling higher memory bandwidths that eased this bottleneck. The rise of graphics processing units (GPUs) and HPC in the early 2000s accelerated the compute capabilities of systems and brought a new level of pressure on memory to keep compute and memory in balance.

This led to the development of new DRAMs, including graphics double data rate (GDDR) DRAMs, which prioritized bandwidth. GDDR was the dominant high-performance memory until AI and HPC applications went mainstream in the 2000s and 2010s, when a newer type of DRAM was required in the form of HBM.

Figure 2 The chart above highlights the evolution of memory over more than two decades. Source: Amir Gholami

The rise of HBM for AI

HBM is the solution of choice to meet the demands of AI’s most challenging workloads, with industry giants like Nvidia, AMD, Intel, and Google utilizing HBM for their largest AI training and inference work. Compared to standard double-data rate (DDR) or GDDR DRAMs, HBM offers higher bandwidth and better power efficiency in a similar DRAM footprint.

It combines vertically stacked DRAM chips with wide data paths and a new physical implementation where the processor and memory are mounted together on a silicon interposer. This silicon interposer allows thousands of wires to connect the processor to each HBM DRAM.

The much wider data bus enables more data to be moved efficiently, boosting bandwidth, reducing latency, and improving energy efficiency. While this newer physical implementation brings greater system complexity and cost, the trade-off is often well worth it for the improved performance and power efficiency it provides.

The HBM4 standard, which JEDEC released in April of 2025, marked a critical leap forward for the HBM architecture. It increases bandwidth by doubling the number of independent channels per device, which in turn allows more flexibility in accessing data in the DRAM. The physical implementation remains the same, with the DRAM and processor packaged together on an interposer that allows more wires to transport data compared to HBM3.
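
To see roughly what a wider interface buys, the sketch below computes peak per-stack bandwidth from bus width and per-pin data rate. The widths and pin rates are illustrative assumptions, not figures from the JEDEC specification.

```python
# Peak per-stack bandwidth from interface width and per-pin data rate.
# Widths and pin rates below are illustrative assumptions only.

def peak_bandwidth_gb_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bits / 8) * Gb/s per data pin."""
    return bus_width_bits / 8 * pin_rate_gbps

# Doubling the interface width at the same per-pin rate doubles peak bandwidth.
narrow = peak_bandwidth_gb_s(bus_width_bits=1024, pin_rate_gbps=6.4)  # ~819 GB/s
wide = peak_bandwidth_gb_s(bus_width_bits=2048, pin_rate_gbps=6.4)    # ~1638 GB/s
print(f"1024-bit stack: ~{narrow:.0f} GB/s, 2048-bit stack: ~{wide:.0f} GB/s")
```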

While HBM memory systems remain more complex and costlier to implement than other DRAM technologies, the HBM4 architecture offers a good balance between capacity and bandwidth that offers a path forward for sustaining AI’s rapid growth.

AI’s future memory need

With LLMs growing at a rate of 30% to 50% year over year, memory technology will continue to be challenged to keep up with the industry’s performance, capacity, and power-efficiency demands. As AI continues to evolve and find applications at the edge, power-constrained applications such as advanced AI agents and multimodal models will bring new challenges, including thermal management, cost, and hardware security.

The future of AI will continue to depend as much on memory innovation as it will on compute power itself. The semiconductor industry has a long history of innovation, and the opportunity that AI presents provides compelling motivation for the industry to continue investing and innovating for the foreseeable future.

Steve Woo is a memory system architect at Rambus. He is a distinguished inventor and a Rambus fellow.

Special Section: AI Design


How AI and ML Became Core to Enterprise Architecture and Decision-Making

ELE Times - 9 hours 14 min ago

by Saket Newaskar, Head of AI Transformation, Expleo

Enterprise architecture is no longer a behind-the-scenes discipline focused on stability and control. It is fast becoming the backbone of how organizations think, decide, and compete. As data volumes explode and customer expectations move toward instant, intelligent responses, legacy architectures built for static reporting and batch processing are proving inadequate. This shift is not incremental; it is structural. In recent times, enterprise architecture has been viewed as an essential business enabler.

The global enterprise architecture tools market is projected to grow to USD 1.60 billion by 2030, driven by organizations aligning technology more closely with business outcomes. At the same time, the increasing reliance on real-time insights, automation, and predictive intelligence is pushing organizations to redesign their foundations. In this context, artificial intelligence (AI) and machine learning (ML) are not just optional enhancements; they have become essential architectural components that determine how effectively an enterprise can adapt, scale, and create long-term value in a data-driven economy.

Why Modernisation Has Become Inevitable

Traditional enterprise systems were built for reliability and periodic reporting, not for real-time intelligence. As organisations generate data across digital channels, connected devices, and platforms, batch-based architectures create latency that limits decision-making. This challenge is intensifying as enterprises move closer to real-time operations. According to IDC, 75 per cent of enterprise-generated data is predicted to be processed at the edge by 2025, which highlights how rapidly data environments are decentralising. Legacy systems, designed for centralised control, struggle to operate in this dynamic landscape, making architectural modernisation unavoidable.

AI and ML as Architectural Building Blocks

AI and ML have moved from experimental initiatives to core decision engines within enterprise architecture. Modern architectures must support continuous data pipelines, model training and deployment, automation frameworks, and feedback loops as standard capabilities. This integration allows organisations to move beyond descriptive reporting toward predictive and prescriptive intelligence that anticipates outcomes and guides action.

In regulated sectors such as financial services, this architectural shift has enabled faster loan decisions. Moreover, it has improved credit risk assessment and real-time fraud detection via automated data analysis. AI-driven automation has also delivered tangible efficiency gains, with institutions reporting cost reductions of 30–50 per cent by streamlining repetitive workflows and operational processes. These results are not merely the output of standalone tools; they are the outcome of architectures designed to embed intelligence into core operations.

Customer Experience as an Architectural Driver

Customer expectations are now a primary driver of enterprise architecture. Capabilities such as instant payments, seamless onboarding, and self-service have become standard. In addition, front-end innovations like chatbots and virtual assistants depend on robust, cloud-native, and API-led back-end systems that deliver real-time, contextual data at scale. As automation increases, architectures must embed security and compliance by design. Reflecting this shift, one market study projects that the global market for zero-trust security frameworks will exceed USD 60 billion annually by 2027, reinforcing security as a core architectural principle.

Data Governance and Enterprise Knowledge

With the acceleration of AI adoption across organisations, governance has become inseparable from architecture design. Data privacy, regulatory compliance, and security controls must be built into systems from the outset, especially as automation and cloud adoption expand. Meanwhile, enterprise knowledge (proprietary data, internal processes, and contextual understanding) has emerged as a critical differentiator.

Grounding AI models in trusted enterprise knowledge improves accuracy, explainability, and trust, particularly in high-stakes decision environments. This alignment further ensures that AI systems will support real business outcomes rather than producing generic or unreliable insights.

Human Readiness and Responsible Intelligence

Despite rapid technological progress, architecture-led transformation ultimately depends on people. Cross-functional alignment, cultural readiness, and shared understanding of AI initiatives are imperative for sustained adoption. Enterprise architects today increasingly act as translators between business strategy and intelligent systems. Additionally, they ensure that innovation progresses without compromising control.

Looking ahead, speed and accuracy will remain essential aspects of enterprise architecture. However, responsible AI will define long-term success. Ethical use, transparency, accountability, and data protection are becoming central architectural concerns. Enterprises will continue redesigning their architectures to be scalable, intelligent, and responsible in the years to come. Those that fail to modernise or embed AI-driven decision-making risk losing relevance in an economy where data, intelligence, and trust increasingly shape competitiveness.


My First PCB, Upgraded the Front IO board of Antec Silver Fusion HTPC case

Reddit:Electronics - 15 hours 42 sec ago

At first I thought it would be a simple upgrade.

But damn, I had to learn about tolerances, differential pairs, and resistances.

The first PCB I ordered had incorrect pin pitches; they were supposed to be smaller. I had to redesign the entire board and use a third layer for power routing. I ordered from JLCPCB, as it was easier to find through-hole USB 3.0 connectors on their site. The second layer is not shown, but it's a ground plane.

There are probably a ton of improvements to be made.

I want to thank the folks over at r/PCB and r/PrintedCircuitBoard; those guys are the real deal.

submitted by /u/Lordcorvin1

Sometimes you have to improvise…

Reddit:Electronics - Sun, 01/18/2026 - 19:52

I'm building a little flyback driver, and this was the only MOSFET I had with a high enough Vds rating and a low enough Vgs to work… hopefully I didn’t overheat it too badly.

submitted by /u/SaintLuke1

UK–Ukraine 100 Year Partnership Forum

News - Sun, 01/18/2026 - 14:04

Kyiv Polytechnic took part in the UK–Ukraine 100 Year Partnership Forum, dedicated to the anniversary of the signing of the Hundred Year Partnership Agreement between Ukraine and the United Kingdom, which sets the long-term course for the development and recovery of our country.

Custom light for disc golf baskets.

Reddit:Electronics - Sun, 01/18/2026 - 12:06

I am making my own disc golf basket light. It features 32 LEDs, battery management for a 21700 cell, and a constant-current driver, all housed in a 3D-printed case with a polycarbonate lens/cover.

submitted by /u/EtherealProject3D

Customizable 4-Letter 5x5 LED Matrices

Reddit:Electronics - Sat, 01/17/2026 - 18:01

This was designed and 'built' by me; by that I mean I designed the circuit, the PCB layout, and the 3D model (and printed the parts myself), and only had JLCPCB fabricate the PCB, as that is outside my abilities.

Edit: I forgot to mention that I also programmed it all myself, originally in Arduino C (in 2024); in 2025 I ported it over to MicroPython and made it more scalable.

submitted by /u/Arynoth

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 01/17/2026 - 18:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator

Radio Waves of Science: RTPSAS-2025 at the Radio Engineering Faculty

News - Fri, 01/16/2026 - 16:18

In early December, Igor Sikorsky Kyiv Polytechnic Institute once again became a venue for professional scientific discussion. The Radio Engineering Faculty hosted the XIV International Scientific and Technical Conference "Radio Engineering Problems, Signals, Devices and Systems" (RTPSAS-2025) – an event that has for many years been an integral part of the faculty's scientific life. Despite all of today's challenges, the conference brought researchers together, proving that engineering thought, research curiosity, and the drive for development do not stop.

Partners' visit to implement an educational programme in humanitarian demining

News - Fri, 01/16/2026 - 15:50

On 16 December, Andrii Shysholin, Vice-Rector for International Relations of Igor Sikorsky Kyiv Polytechnic Institute, Oksana Vovk, Director of the KPI Educational and Research Institute of Energy Saving and Energy Management, and Kateryna Luhovska, Director of the KPI Ukrainian-Japanese Center, met with Mykhailo Turianytsia, Communications Specialist at the United Nations Office for Project Services (UNOPS), and Narumi Tateda, a journalist with the Kyodo News agency (Japan).

KPI Legends in the DIU League of Legends

News - Fri, 01/16/2026 - 15:37

From 20 November to 1 December 2025, the Defence Intelligence of Ukraine (Ministry of Defence), together with HackenProof, held the "First National CTF" online competition. The dcua team of the Institute of Physics and Technology of Igor Sikorsky Kyiv Polytechnic Institute took part and posted the best result, taking first place in the highest tier of the competition, the special League of Legends standings!

III-V Epi exhibiting at Photonics West 2026

Semiconductor today - Fri, 01/16/2026 - 15:22
III–V Epi Ltd of Glasgow, Scotland, UK — which provides a molecular beam epitaxy (MBE) and metal-organic chemical vapor deposition (MOCVD) service for custom compound semiconductor wafer design, manufacturing, test and characterization — is showcasing its specialist, low-volume, fast-turnaround, III-V epitaxy manufacturing services in booth 4923 at SPIE Photonics West 2026 in San Francisco, CA, USA (20–22 January)...


Subscribe to Кафедра Електронної Інженерії aggregator