Feed aggregator

🏰 Join us for the tour "City of Corruption Secrets: Discover the Truth Hidden Behind the Facades"

News - Mon, 12/08/2025 - 10:11

On December 14 at 12:00, we invite everyone to a tour that will change the way you see Kyiv.

We will walk routes that hold more than meets the eye, past buildings that could each tell more than one interesting story.

The Great Leap: How AI is Reshaping Cybersecurity from Pilot Projects to Predictive Defense

ELE Times - Mon, 12/08/2025 - 09:44

Imagine your cybersecurity team as a group of highly-trained detectives. For decades, they’ve been running through digital crime scenes with magnifying glasses, reacting to the broken window or the missing safe after the fact. Now, suddenly, they have been handed a crystal ball—one that not only detects the threat but forecasts the modus operandi of the attacker before they even step onto the property. That crystal ball is Artificial Intelligence, and the transformation it’s bringing to cyber defense is less a technological upgrade and more a fundamental re-engineering of the entire security operation.

Palo Alto Networks, in partnership with the Data Security Council of India (DSCI), released the State of AI Adoption for Cybersecurity in India report. The report found that only 24% of CXOs consider their organizations fully prepared for AI-driven threats, underscoring a significant gap between adoption intent and operational readiness. The report sets a clear baseline for India Inc., examining where AI adoption stands, what organizations are investing in next, and how the threat landscape is changing. It also surfaces capability and talent gaps, outlines governance, and details preferred deployment models.

While the intent to leverage AI for enhanced cyber defense is almost universal, its operational reality is still maturing. The data reveals a clear gap between strategic ambition and deployed scale.

The report underscores the dual reality of AI: it is a potent defense mechanism but also a primary source of emerging threat vectors. Key findings include:

  • Adoption intent is high, maturity is low: 79% of organizations plan to integrate AI/ML into AI-enabled cybersecurity, but 40% remain in the pilot stage. The main goal is operational speed, prioritizing the reduction of Mean Time to Detect and Mean Time to Respond (MTTD/MTTR).
  • Investments are strategic: 64% of organizations are now proactively investing through multi-year risk-management roadmaps.
  • Threats are AI-accelerated: 23% of organizations are resetting priorities due to new AI-enabled attack paradigms. The top threats are coordinated multi-vector attacks and AI-poisoned supply chains.
  • Biggest barriers: Financial overhead (19%) and the skill/talent deficit (17%) are the leading roadblocks to adoption.
  • Future defense model: 31% of organizations see human–AI hybrid defense teams as the approach through which AI will transform cybersecurity, and 33% require human approval for AI-enabled critical security decisions and actions.

“AI is at the heart of most serious security conversations in India, sometimes as the accelerator, sometimes as the adversary itself. This study, developed with DSCI, makes one thing clear: appetite and intent are high, but execution and operational discipline are lagging,” said Swapna Bapat, Vice President and Managing Director, India & SAARC, Palo Alto Networks. “Catching up means using AI to defend against AI, but success demands robustness. Given the dynamic nature of building and deploying AI apps, continuous red teaming of AI is an absolute must to achieve that robustness. It requires coherence: a platform that unifies signals across network, operations, and identity; Zero-Trust verification designed into every step; and humans in the loop for decisions that carry real risk. That’s how AI finally moves from shaky pilots to robust protection.”

Vinayak Godse, CEO, DSCI, said: “India is at a critical juncture where AI is reshaping both the scale of cyber threats and the sophistication of our defenses. AI-enabled attacker capabilities are rapidly increasing in scale and sophistication. At the same time, adopting AI for cybersecurity can strengthen security preparedness to navigate risk, governance, and operational readiness to predict, detect, and respond to threats in real time. This AI adoption study, supported by Palo Alto Networks, reflects DSCI’s efforts to provide organizations with insights to navigate the challenges emerging from AI-enabled offensive attacks while leveraging AI for security defense.”

The report was based on a survey of 160+ organizations across BFSI, manufacturing, technology, government, education, and mid-market enterprises, covering CXOs, security leaders, business unit heads, and functional teams.

The post The Great Leap: How AI is Reshaping Cybersecurity from Pilot Projects to Predictive Defense appeared first on ELE Times.

Bringing up my rosco m68k

Reddit:Electronics - Sun, 12/07/2025 - 12:09

Hey folks!
I’ve been playing around with the rosco m68k open-source computer lately and wanted to share some progress.
I’m working on this as part of my personal project SolderDemon, where I’ve been experimenting with DIY retro-computing hardware.

On my boards the official firmware boots cleanly, the memory checks pass, and UART I/O behaves exactly as it should. I’m using the official rosco tools to verify RAM/ROM mapping, decoding, and the overall bring-up process. I also managed to get a small “hello world” running over serial after sorting out the toolchain with their Docker setup.

I’m also tinkering with a 6502 through-hole version — something simple for hands-on exploration of that architecture.

Happy to answer any questions or discuss the bring-up process.

submitted by /u/kynis45

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 12/06/2025 - 18:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator

eth industrial switch rx/tx

Reddit:Electronics - Sat, 12/06/2025 - 09:24

Yet one pair still leads to a nonexistent chip, and the second shows only diagnostics from the MCU. Life is brutal.

submitted by /u/mac_bigmac

Simple Electronic Dice

Reddit:Electronics - Sat, 12/06/2025 - 03:33

I had a free evening, so decided to make this in the shed/workshop.

It uses a 555 to produce rapid pulses, and a 4017 decade counter to sequence 6 LEDs rapidly.
Pressing the button pulls current through an opto-isolator, whose phototransistor connects pin 3 of the 555 to the trigger of the 4017.
A small capacitor was placed across the contacts of the push button so that the dice continues to 'roll' for a second or two after releasing the button (this makes sure people can't rapidly release and re-press to land a preferable number).
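The 555-plus-4017 behaviour is easy to sanity-check in software. Here is a minimal sketch (hypothetical Python, not part of the build): the counter steps far faster than a human can track, so the face left lit when the pulses stop is effectively random.

```python
import random

def roll(pulses):
    """Model the 4017 stepping through 6 outputs, one per 555 pulse.

    The face left lit when the pulses stop is the count mod 6, plus one.
    """
    return pulses % 6 + 1

# The 555 free-runs at kHz rates, so the pulse count between press and
# release is effectively random; model it with a wide random range.
random.seed(0)
faces = [roll(random.randint(1000, 2000)) for _ in range(600)]
assert set(faces) == {1, 2, 3, 4, 5, 6}  # every face is reachable
```

Note the slight bias a real build shares with this model: unless the pulse count is exactly divisible by six, some faces are marginally more likely, which is invisible in practice.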

In r/askelectronics I asked for advice about more chips I could use in the future, and was pointed to another 4000-series part that will let me drive a seven-segment display in the same fashion, as opposed to six individual LEDs.

Once I was happy with how the circuit behaves on the breadboard I put it to stripboard.
From what I have seen, most people here seem to use perfboard, which has pads that are isolated from each other.
I personally prefer stripboard, as it's what I grew up with as a kid. You can use a drill-shaped tool to cut the copper tracks where needed.

I decided to current-limit the white LEDs with a 12K resistor.
I had one to hand, and it dims them down to the same brightness as a standard diffused red, yellow or green variant.

I don't know if using an opto-isolator in the way I did is good practice or not. It works, and is simple enough.
I don't really have any official teachings in electronics, so sometimes I have a different approach to a problem.
Sometimes for the better, sometimes not.

I found that for me, the best way to use a pulldown resistor for the 4017 trigger was to also connect a small .1uF ceramic capacitor in parallel to the pulldown resistor.

I know that by no means is this groundbreaking, or advanced. It's probably akin to something that would have been made 30 or 40 years ago, but I only dabble as a hobby, and find soldering away, alone, for a few hours, whilst the rain hammers down outside quite therapeutic for me.

submitted by /u/One-Cardiologist-462

Meeting with Danish organizations

News - Fri, 12/05/2025 - 17:29

🇺🇦🇩🇰 At Igor Sikorsky KPI…

onsemi releases EliteSiC MOSFETs in T2PAK top-cool package

Semiconductor today - Fri, 12/05/2025 - 16:49
Intelligent power and sensing technology firm onsemi of Scottsdale, AZ, USA has released its EliteSiC MOSFETs in the industry-standard T2PAK top-cool package, advancing power packaging for automotive and industrial applications. The new product delivers enhanced thermal performance, reliability and design flexibility for demanding high-power, high-voltage applications for markets including electric vehicles, solar infrastructure, and energy storage systems...

Without Kyiv polytechnicians, Ukraine would not be a full member of the Antarctic club

News - Fri, 12/05/2025 - 16:00

According to the professional community, a major achievement of Ukrainian science today is the continuation of Antarctic research at the Akademik Vernadsky station and aboard the oceanographic vessel Noosfera. We can proudly say that KPI graduates and scientists took part both in Ukraine's acquisition of the station almost 30 years ago and in sustaining its operations and observation program today.

My class AB amplifier

Reddit:Electronics - Fri, 12/05/2025 - 15:05

So, I'm developing a guitar amplifier for a friend, and I need a high-power (by my standards) amp to make it loud. So I made this one, my most powerful discrete amp to date, which can deliver 20 Vpp into an 8-ohm speaker without distortion from a 24 V supply. I had a problem with connecting everything for tests and idle-current calibration because the PCB is , so I had to improvise. I put a power diode into the ground terminal of the amp, connected the big clip of the function generator ground, then connected the small clip of the power supply ground, and the scope ground to the power supply ground clip. The result is this big tangle of wires and connectors, but it worked as intended. The design is a variation of an amp from a 70s record player, but with a changed voltage rating and a conversion from class B to AB. It's surprisingly stable and silent when the input is floating, so I like it.

submitted by /u/ZaznaczonyKK

The Big Allis generator sixty years ago 

EDN Network - Fri, 12/05/2025 - 15:00

Think back to the Northeast United States blackout of just over sixty years ago, on November 9, 1965. It had a huge consequence for Consolidated Edison in New York City.

Con Edison's power-generating facility in Ravenswood had been equipped with a generator made by Allis-Chalmers, as shown in the following screenshots.

Figure 1 Ravenswood power generating facility and the Big Allis power generator.

That generator was the largest of its kind in the whole world at that time. Larger generators did get made in later years, but at that time, there were none bigger. It was so big that some experts opined that such a generator would not even work. Because of its size and its manufacturer’s name, that generator came to be called “Big Allis”.

Big Allis had a major design flaw. The bearings that supported the generator’s rotor were protected by oil pumps that were powered from the Big Allis generator itself.

When the power grid collapsed, Big Allis stopped delivering power, which then shut down the pumps delivering the oil pressure that had been protecting the rotor bearings.

With no oil pressure, the bearings were severely damaged as the rotor slowed down to a halt. One newspaper article described the bearings as having been ground to dust. It took months to replace those bearings and to provide their oil pumps with separate diesel generators devoted solely to maintaining the protective oil pressure.

So far as I know, Big Allis is still in service, even through the later 1977 and 2003 blackouts, so I guess that those 1965 revisions must have worked out.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content

The post The Big Allis generator sixty years ago  appeared first on EDN.

Optimized analog front-end design for edge AI

ELE Times - Fri, 12/05/2025 - 13:12

Courtesy: Avnet

Key Takeaways:

01.   AI models see data differently: what makes sense to a digital processor may not be useful to an AI model, so avoid over-filtering and consider separate data paths

02.   Consider training needs: models trained at the edge will need labeled data (such as clean, noisy, good, faulty)

03.   Analog data is diverse: match the amplifier to the source, consider the bandwidth needs of the model, and the path’s signal-to-noise ratio

 

Machine learning (ML) and artificial intelligence (AI) have expanded the market for smart, low-power devices. Capturing and interpreting sensor data streams leads to novel applications. ML turns simple sensors into smart leak detectors by inferring why the pressure in a pipe has changed. AI can utilize microphones in audio equipment to detect events within the home, such as break-ins or an occupant falling.

For many applications that rely on real-world data, the analog front-end (AFE) is one of the most important design elements as it functions as a bridge to the digital world. At a high level, AFEs delivering data to a machine-learning back-end have broadly similar design needs to conventional data-acquisition and signal-processing systems.

But in some applications, particularly those in transition from IoT to AIoT, the data is doing double-duty. Sensors could be used for conventional data analysis by back-end systems and also as real-time input to AI models. There are trade-offs implied by this split, but it could also deliver greater freedom in the AFE architecture. Any design freedom must still address overall cost, power efficiency, and system reliability.

The importance of bandwidth and signal-to-noise ratio

Accuracy is often an imperative with analog signals. The signal path must deliver the bandwidth and signal-to-noise ratio required by the front-end's digitizer. When using AI, designers must be especially diligent about avoiding distortion, as spurious signals introduced during capture can compromise model training.

The classic AFE may need to change to accommodate the sensor and digital processing sections, and the AI model’s needs which may be different. (Source: Avnet)

For signals with a wide dynamic range, it may make sense to employ automated gain control (AGC) to ensure there is enough detail in the recorded signal under all practical conditions. The changes in amplification should also be passed to the digitizer and synchronized with the sensor data so they can be recorded as features during AI training or combined by a preprocessing step into higher-resolution samples. If not, the model may learn the wrong features during training.
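The gain-synchronization idea above can be sketched in a few lines (hypothetical Python, illustrating the concept rather than any specific part): each block is digitized with its own AGC setting, and the recorded (gain, code) pairs are later combined back into absolute amplitudes before training.

```python
import numpy as np

FS, BITS = 1.0, 12            # assumed full-scale input and ADC resolution
LEVELS = 2 ** (BITS - 1) - 1

def agc_digitize(x, block=64):
    """Digitize per-block with AGC, recording the gain alongside each block."""
    frames = []
    for i in range(0, len(x), block):
        seg = x[i:i + block]
        gain = FS / (np.max(np.abs(seg)) + 1e-12)   # per-block AGC setting
        codes = np.round(seg * gain / FS * LEVELS)  # what the ADC records
        frames.append((gain, codes))
    return frames

def to_features(frames):
    """Undo the recorded gain so the model sees absolute amplitudes."""
    return np.concatenate([codes / LEVELS * FS / gain for gain, codes in frames])

t = np.linspace(0, 1, 512)
x = 0.005 * np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t)
y = to_features(agc_digitize(x))
assert np.max(np.abs(y - x)) < 1e-3   # only quantization error remains
```

Dropping the gain values and training on the raw codes would instead teach the model amplitudes that change with the AGC state, which is exactly the failure mode described above.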

Interfacing AI systems with multi-sensor designs

Multi-sensor designs introduce another consideration. Devices that process biological signals or industrial condition-monitoring systems often need to process multiple types of data together. Time-synchronized data will deliver the best results as changes in group delay caused by filtering or digitization pipelines of different depths can change the relationship between signals.
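One way to check and correct such skew is to estimate the relative lag between two channels before fusing them. A minimal sketch (hypothetical Python; the five-sample delay stands in for a filter chain's group delay):

```python
import numpy as np

def estimate_lag(ref, delayed, max_lag=20):
    """Find the shift (in samples) that best aligns `delayed` with `ref`."""
    scores = [np.dot(delayed[lag:], ref[:len(ref) - lag]) for lag in range(max_lag)]
    return int(np.argmax(scores))

ref = np.zeros(200)
ref[40:60] = 1.0                  # a distinctive pulse on the reference channel
delayed = np.roll(ref, 5)         # second channel, delayed by its pipeline

lag = estimate_lag(ref, delayed)
aligned = delayed[lag:]           # drop the leading skew before fusing channels
assert lag == 5
assert np.allclose(aligned, ref[:len(aligned)])
```

In a production system the lag would be calibrated once per filter configuration rather than estimated from live data, but the principle is the same.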

The use of AI may lead the designer to make choices they might not make for simpler systems. For example, aggressive low- and high-pass filtering might help deliver signals that are easier for traditional software to interpret. But this filtering may obscure signal features that are useful to the AI.

Design Tip – Analog Switches & Multiplexers

Analog switches and multiplexers perform an important role in AFEs where multiple sensors are used in the signal chain. Typically, these devices are digitally addressed and controlled: switches selectively connect inputs to outputs, while multiplexers route a specific input to a common output. Design considerations include on-resistance, switching speed, bandwidth, and crosstalk.

 

For example, high-pass filtering can be useful for removing apparent signal drift but will also remove cues from low-frequency sources, such as long-term changes in pressure. Low-pass filtering may remove high-frequency signal components, such as transients, that are useful for AI-based interpretation. It may be better to perform the filtering digitally after conversion for other downstream uses of the data.  

Techniques for optimizing energy efficiency in AFEs

Programmable AFEs, or interchangeable AFE pipelines, can improve energy optimization. It is common for edge devices to operate in a low-energy “always on” mode, acquiring signals at a relatively low level of accuracy while the AI model is inactive. Once a signal passes a threshold, the system wakes the AI accelerator and moves into a high-accuracy acquisition mode.

That change can be accommodated in some cases by programming the preamplifiers and similar components in the AFE to switch between low-power and low-noise modes dynamically.

A different approach, often used in biomedical sensors, is to use changes in duty cycle to reduce overall energy. In the low-power state, the AFE may operate at a relatively low data rate and be powered down during quiet intervals. This risks the system missing important events. An alternative is to use a separate, low-accuracy AFE circuit that runs at nanowatt levels. This circuitry may be quite different from the main AFE signal path.

In audio sensing, one possibility is to use a frequency-detection circuit coupled with a comparator to listen for specific events captured by a microphone. A basic frequency detector, consisting of a simple bandpass filter and comparator, may wake the system or move the low-power AFE into a second, higher-power state, but not the full wakefulness mode that engages the back-end digital AI model.

In this state, a circuit such as a generalized impedance converter can be manipulated to sweep the frequency range and look for further peaks to see if the incoming signal meets the wakeup criteria. That multistage approach will limit the time during which the full AI model needs to be active.
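A frequency detector of this kind is commonly implemented as a single-bin Goertzel filter. A minimal sketch of the wake-up decision (hypothetical Python; the bin, block size, and threshold are illustrative):

```python
import math

def goertzel_power(samples, k):
    """Power at DFT bin k of the block: a one-frequency bandpass detector."""
    n = len(samples)
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in samples:
        s0 = x + coeff * s1 - s2
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def should_wake(samples, k=8, threshold=100.0):
    """Comparator stage: wake the next power state if the bin energy is high."""
    return goertzel_power(samples, k) > threshold

N = 128
tone = [math.sin(2 * math.pi * 8 * i / N) for i in range(N)]        # target event
hum = [0.01 * math.sin(2 * math.pi * 3 * i / N) for i in range(N)]  # background
assert should_wake(tone) and not should_wake(hum)
```

The Goertzel recurrence needs only two state variables and one multiply per sample, which is why the analog equivalent (bandpass plus comparator) fits a nanowatt budget.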

Breaking down analog front-ends for AI

Further advances in AI theory enable more sophisticated analog-domain processing before digitization. Some vendors have specialized in neural-network devices that combine on-chip memory with analog computation. Another possibility for AFE-based AI that results in a lower hardware overhead is reservoir computing. This uses concepts from the theory of recurrent neural networks. A signal fed back into a randomly connected network, known as the reservoir, can act as a discriminator used by an output layer that is trained to recognize certain output states as representing an event of interest. This provides the ability to train an AFE on trigger states that are more complex than simple threshold detectors.
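A minimal echo-state-network sketch of the idea (hypothetical Python with NumPy; the reservoir size, scaling, and toy signals are all assumptions): the recurrent weights stay fixed and random, and only the linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(0)
N_RES = 50

W_in = rng.normal(scale=0.5, size=N_RES)          # fixed random input weights
W = rng.normal(size=(N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1: echoes fade

def reservoir_state(u):
    """Drive the fixed random network with signal u; return its final state."""
    x = np.zeros(N_RES)
    for u_t in u:
        x = np.tanh(W_in * u_t + W @ x)
    return x

# Two toy "events" standing in for real sensor signatures: a slow and a fast tone.
t = np.arange(200)
slow = np.sin(2 * np.pi * t / 40)
fast = np.sin(2 * np.pi * t / 5)

states = np.stack([reservoir_state(slow), reservoir_state(fast)])
targets = np.array([-1.0, 1.0])
w_out, *_ = np.linalg.lstsq(states, targets, rcond=None)   # train the readout only

pred = states @ w_out
assert np.sign(pred[0]) == -1.0 and np.sign(pred[1]) == 1.0
```

The appeal for AFEs is that the reservoir itself needs no training and can be realized in analog circuitry; only the lightweight output layer is fitted to the trigger states of interest.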

Another method for trading off AFE signal quality against AI capability is compressive or compressed sensing. This uses known signal characteristics, such as sparsity, to lower the sample rate and, with it, power. Though this mainly affects the choice of sampling rate in the analog-to-digital converter, the AFE still needs to be designed to accommodate the signal’s full bandwidth. At the same time, the AFE may need to incorporate stronger front-end filtering to block interferers that may fold into the measurement frequency range.

Optimizing AFE/AI trade-offs through experimentation

With so many choices, experimentation will be key to determining the best trade-offs for the target system. Operating at higher bandwidth and resolution specifications is a good starting point. Data can then be filtered and converted to the digital domain at lower resolutions to see how doing so affects AI model performance.

The results of those experiments can be used to determine the target AFE’s specifications in terms of gain, filtering, bandwidth, and the ENOB needed. Such experiments also provide opportunities to experiment with more novel AFE processing, such as reservoir computing and compressive sensing to gauge how well they might enhance the final system.
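Such sweeps are easy to emulate offline. Here is a sketch (hypothetical Python) that requantizes a captured signal to candidate bit depths and compares the resulting SNR, mimicking the capture-high, degrade-down experiment:

```python
import numpy as np

def quantize(x, bits):
    """Requantize a full-scale signal to the given bit depth."""
    levels = 2 ** (bits - 1) - 1
    return np.round(x * levels) / levels

def snr_db(clean, degraded):
    """Signal-to-noise ratio of the degraded copy, in dB."""
    noise = clean - degraded
    return 10.0 * np.log10(np.sum(clean ** 2) / np.sum(noise ** 2))

t = np.linspace(0.0, 1.0, 4096, endpoint=False)
x = np.sin(2 * np.pi * 37 * t)        # stand-in for a high-resolution capture

snr_12 = snr_db(x, quantize(x, 12))
snr_8 = snr_db(x, quantize(x, 8))
assert snr_12 > snr_8 > 40.0          # each lost bit costs roughly 6 dB of SNR
```

Running the AI model on each requantized copy then shows directly how many effective bits the AFE must deliver before accuracy degrades.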

The post Optimized analog front-end design for edge AI appeared first on ELE Times.

X-FAB’s XbloX accelerates time-to-market for scalable, high-performance SiC MOSFETs

Semiconductor today - Fri, 12/05/2025 - 11:50
Through its XbloX platform, analog/mixed-signal and specialty foundry X-FAB says that it is offering easy access to a standardized yet flexible set of silicon carbide (SiC) process technologies that accelerate the development of advanced power devices. From rapid prototyping to full production, the modular and fully scalable XbloX platform helps SiC device developers to expedite engineering assessments and technology release, with production starts achieved up to nine months faster than traditional methods, it is reckoned...

Introducing Wi-Fi 8: The Next Boost for the Wireless AI Edge

ELE Times - Fri, 12/05/2025 - 11:33

Courtesy: Broadcom

Wi-Fi 8 has officially arrived—and it marks a major leap forward for next-generation connectivity.

Wi-Fi has come a long way. Earlier generations (Wi-Fi 1 through 5) focused mainly on delivering content to users: streaming video, online gaming, and video calls. But today’s digital world runs in both directions. We create as much data as we consume. We upload high-resolution content, collaborate in real time, and rely on on-device AI for everything from productivity to entertainment. That makes the “last hop” between devices and wireless networks more critical than ever.

Wi-Fi 8 is built for this new reality. Evolving from the advances of Wi-Fi 6 and 7, it offers reliable performance at scale, consistently low latency, and significantly stronger uplink capacity—precisely what modern, AI-driven applications need to run smoothly and responsively.

Why Wi-Fi 8 Matters

The internet has shifted from passive browsing to immersive, interactive, and personalized experiences. Devices now sense, analyze, and generate data constantly. By the end of 2025, hundreds of millions of terabytes will be created every day, much of it from IoT, enterprise telemetry, and video. A lot of that data never even makes it to the cloud—it’s handled locally. But guess what still carries it around? Wi-Fi.

Uplink matters

Traditional traffic patterns skewed roughly 90/10 downlink/uplink. Not anymore. AI apps, smart assistants, and continuous sync push networks toward a 50/50 split. Your Wi-Fi can’t just be fast going to you—it has to be equally fast, fair, and predictable going from you.

Real-time Wi-Fi

In the age of AI, Wi-Fi has to be far more real-time. Take, for example, agentic apps that work with voice inputs. We all know today’s assistants can feel clunky—they buffer, they miss interruptions. To get to your true agentic assistant with a “Jarvis-like” back-and-forth, networks need ultra-low latency, less jitter, and fewer drops.

With the right broadband and Wi-Fi 8, those thresholds become possible.

What’s New in Wi-Fi 8

Wi-Fi 8 delivers a system-wide upgrade across speed, capacity, reach, and reliability. Here’s what that means in practice:

Higher throughput—more of the time: Wi-Fi 8 achieves higher real-world speeds with smarter tools. Unequal Modulation ensures each stream runs at its best rate, so one fading link doesn’t drag the others down. Enhanced MCS smooths out the data-rate ladder to prevent sudden slowdowns, while Advanced LDPC codes hold strong even in noisy conditions. The result: faster, steadier performance across the network.

More network capacity in busy air: In crowded spaces, Wi-Fi 8 is built for cooperation. Inter-AP Coordination helps access points schedule transmissions instead of colliding. Non-Primary Channel Access (NPCA) taps into secondary channels when the primary is congested, while Dynamic Subband Operation (DSO) lets wide channels serve multiple narrower-band clients at once. Dynamic Bandwidth Expansion (DBE) then selectively opens wider pipes for Wi-Fi 8 devices without disrupting legacy clients—unlocking more usable capacity where it’s needed most. 

Longer reach and stronger coverage close to the edge: Connections go farther with Distributed Resource Units (dRU), which spread transmit energy for noticeable gains at the fringe. And with Enhanced Long Range (ELR), a special 20-MHz mode extends coverage up to 2× in line-of-sight and about 50% farther in non-LoS—keeping links alive even at the outer edge of the network.

Reliability and QoS that stick: Real-time apps get the consistency they need thanks to smarter quality-of-service features. Low-Latency Indication prioritizes AR/VR, gaming, and voice traffic, while Seamless Roaming keeps calls and streams intact during movement. QoS enhancements and Prioritized EDCA reduce latency and prevent bottlenecks across multiple streams. Plus, enhanced in-device coexistence coordinates Wi-Fi with Bluetooth, UWB, and more to avoid self-interference. Together, these features make the network feel smoother and more reliable.

Wi-Fi 8 Real-World Impact

So what does all this look like in everyday use? Imagine a busy apartment building, or an office full of devices roaming between access points. Signals aren't perfect and interference is everywhere, but Wi-Fi 8 keeps things flowing by coordinating access points, smoothing out delays, and reducing radio clashes, so your most important traffic doesn't get stuck in line.

Picture this: It’s a busy evening in a modern family home. One person is streaming a live sports match in 8K on the big screen, another is deep into an online game while streaming to friends, and a third is working on a project that involves real-time AI voice assistance.

Meanwhile, the smart doorbell detects someone approaching. But instead of just pinging a vague “motion alert,” the AI-powered camera recognizes whether it’s a delivery driver dropping off a package, a neighbor stopping by, or a family member arriving home. The alert is contextual and useful.

In older Wi-Fi environments, that mix of high-bandwidth streams, real-time gaming, and constant AI inference could lead to stutters, buffering, or dropped packets at the worst moments. With Wi-Fi 8, all of it just works. The 8K stream stays crisp. The gamer experiences smooth, low-latency play. The AI assistant responds instantly without awkward delays. And the doorbell notification comes through without competing for airtime—because the network can intelligently prioritize, coordinate, and balance all that traffic.

That’s the difference Wi-Fi 8 brings: a reliable home network, no matter how many devices or demands are piled on at once.

Why it works: Wi-Fi 8 increases throughput and, at the same time, greatly reduces the 99th percentile latency tail. By coordinating APs, elevating delayed packets, and reducing radio self-contention, it shortens queues, avoids collisions, and keeps critical traffic flowing—even when signals aren’t perfect.

Conclusion

Wi-Fi 8 represents far more than an incremental upgrade—it marks a fundamental shift in how wireless networks will power the AI-driven world. As our homes, offices, factories, and devices generate and process more data than ever, the need for reliable uplink performance, real-time responsiveness, and intelligent coordination becomes non-negotiable. Broadcom’s new ecosystem brings these capabilities to life, ensuring that next-generation applications—from immersive entertainment and autonomous IoT to true conversational AI—can operate smoothly, consistently, and securely.

With Wi-Fi 8, the wireless edge finally catches up to the ambitions of modern computing. It isn’t just faster Wi-Fi; it’s the foundation for the seamless, AI-enabled experiences we’ve been waiting for—and a major leap toward the connected future we’re building every day.

The post Introducing Wi-Fi 8: The Next Boost for the Wireless AI Edge appeared first on ELE Times.

Igor Sikorsky KPI takes part in a workshop on higher education funding

News - Fri, 12/05/2025 - 11:00

On December 4–6, 2025, the First Vice-Rector of Igor Sikorsky KPI…

Pages

Subscribe to the Кафедра Електронної Інженерії (Department of Electronic Engineering) aggregator