Feed aggregator
Dell 2219H Diode replaced - problem solved
Hi guys! Last weekend I wanted to play CS on my PC, but when I turned the monitor on, I noticed that the indicator light wasn't working properly, just flashing rapidly. I disassembled the monitor and took the power board to my father for diagnostics, assuming the problem was on that component. My father told me the fault was a failed HBR5200 SMD diode. I tried to find one in shops in my country (Serbia), but without success. I looked through a few old PCBs and found a diode that could be appropriate. We used an SR5200; it's not an SMD diode, but we fitted it because it has the same specifications. The monitor works perfectly.
Weekly discussion, complaint, and rant thread
Open to anything, including discussions, complaints, and rants.
Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.
Reddit-wide rules do apply.
To see the newest posts, sort the comments by "new" (instead of "best" or "top").
When you want low ESR in a limited footprint.
Submitted by /u/1Davide
Relentless Innovation is Driving Software-Defined Vehicles into the Future
Amazon and Google: Can you AI-upgrade the smart home while being frugal?

The chronological proximity of Amazon and Google’s dueling new technology and product launch events on Tuesday and Wednesday of this week was highly unlikely to have been a coincidence. Which company, therefore, reacted to the other? Judging solely from when the events were first announced, which is the only data point I have as an outsider, it looks like Google was the one who initially put the stake in the ground on September 2nd with an X (the service formerly known as Twitter) post, with Amazon subsequently responding (not to mention scheduling its event one day earlier in the calendar) two weeks later, on September 15.
Then again, who can say for sure? Maybe Amazon started working on its event ahead of Google, and simply took longer to finalize the planning. We’ll probably never know for sure. That said, it also seems from the sidelines that Amazon might have also gotten its hands on a leaked Google-event script (to be clear, I’m being completely facetious with what I just said). That’s because, although the product specifics might have differed, the overall theme was the same: both companies are enhancing their existing consumer-residence ecosystems with AI (hoped-for) smarts, something that they’ve both already announced as an intention in the past:
- Amazon, with a generative AI evolution-for-Alexa allusion two years ago, subsequently assigned the “Alexa+” marketing moniker back in February, and
- Google, which foreshadowed the smart home migration to come within its announcement of the Google Assistant-to-Gemini transition for mobile devices back in March.
Quoting from one of Google’s multiple event-tied blog posts as a descriptive example of what both companies seemingly aspire to achieve:
The idea of a helpful home is one that truly takes care of the people inside it. While the smart home has shown flashes of that promise over the last decade, the underlying AI wasn’t anywhere near as capable as it is today, so the experience felt transactional, not conversational. You could issue simple commands, but the home was never truly conversational and seldom understood your context.
Today, we’re taking a massive step toward making the helpful home a reality with a fundamentally new foundation for Google Home, powered by our most capable AI yet, Gemini. This new era is built on four pillars: a new AI for your home, a redesigned app, new hardware engineered for this moment and a new service to bring it all together.
Amazon’s hardware “Hail Mary”
Of the two companies, Amazon probably has the most to lose if it fumbles the AI-enhancement service handoff. That’s because, as Ars Technica’s coverage title aptly notes, “Alexa’s survival hinges on you buying more expensive Amazon devices”:
Amazon hasn’t had a problem getting people to buy cheap, Alexa-powered gadgets. However, the Alexa in millions of homes today doesn’t make Amazon money. It’s largely used for simple tasks unrelated to commerce, like setting timers and checking the weather. As a result, Amazon’s Devices business has reportedly been siphoning money, and the clock is ticking for Alexa to prove its worth.
I’m ironically a case study of Amazon’s conundrum. Back in early March, when the Alexa+ early-access program launched, I’d signed up. I finally got my “Your free Early Access to Alexa+ starts now” email on September 24, a week and a day ago, as I’m writing this on October 2. But I haven’t yet upgraded my service, which is admittedly atypical behavior for a tech enthusiast such as myself.
Why? Price isn’t the barrier in my particular case (though it likely would be for others less Amazon-invested than me); mine’s an Amazon Prime-subscribing household, so Alexa+ is bundled versus costing $19.99 per month for non-subscribers. Do the math, though: $19.99 per month works out to roughly $240 a year, while Prime is $14.99 per month or $139/year right now, so why anyone wouldn’t go the bundle-with-Prime route is the question (which, I’d argue, is Amazon’s core motivation).
So, if it’s not the service price tag, then what alternatively explains my sloth? It’s the devices—more accurately, my dearth of relevant ones—with the exception of the rarely-used Alexa app on my smartphones and tablets (which, ironically, I generally fire up only when I’m activating a new standalone Alexa-cognizant device).
Alexa+ is only supported on newer-generation hardware, whereas more than half (and the dominant share in regular use) of the devices currently activated in my household are first-generation Echoes, early-generation Echo Dots, and a Tap. With the exception of the latter, which I sometimes need to power-cycle before it’ll start streaming Amazon Music-sourced music again, they’re all still working fine, at least for the “transactional” (per Google’s earlier lingo) functions I’ve historically tasked them with.
And therefore, as an example of “chicken and the egg” paralysis, in the absence of their functional failure, I’m not motivated to proactively spend money to replace them in order to gain access to additional Alexa+ services that might not end up rationalizing the upfront investment.
Speakers, displays, and stylus-augmented e-book readers
Amazon unsurprisingly announced a bevy of new devices this week, strangely none of which seemingly justified a press release or, come to think of it, even an event video, in stark contrast to Apple’s prerecorded-only approach (blog posts were published aplenty, however). Many of the new products are out-of-the-box Alexa+ capable and, generally speaking, they’re also more expensive than their generational precursors. First off is the curiously reshaped (compared to its predecessor) Echo Studio, in both graphite (shown) and “glacier” white color schemes:
There’s also a larger version of the now-globular Echo Dot (albeit still smaller than the also-now-globular Echo Studio), called the Echo Dot Max, with the same two color options:
And two also-redesigned-outside smart displays, the Echo Show 11 and latest-generation Echo Show 8, which basically (at least to me) look like varying-sized Echo Dots with LCDs stuck to their fronts. Both again come in graphite and glacier white options:
and also have optional, added-price, more position-adjustable stands:
This new hardware raises the perhaps-predictable question: Why is my existing hardware not Alexa+ capable? Assuming all the deep learning inference heavy lifting is being done on the Amazon “cloud”, what resource limitations (if any) exist with the “edge” devices already residing in my (at least semi-) smart home?
Part of the answer might lie in my assumption in the prior sentence; perhaps Amazon intends for these devices to retain at least limited standalone functionality if broadband goes down, which would require beefier processing and memory than my archaic hardware includes. Perhaps, too, even if all the AI processing is done fully server-side, Amazon’s responsiveness expectations aren’t adequately served by my devices’ resources, in this case also including Wi-Fi connectivity. And yes, to at least some degree, it may just be another “obsolescence by design” case study. Sigh. More likely, my initial assumption was over-simplistic, and at least a portion of the inference-function suite runs natively on the edge device using locally stored deep learning models, particularly where rapid response time (versus extended edge-to-cloud-and-back round-trip latency) is necessary.
Other stuff announced this week included three new stylus-inclusive, therefore scribble-capable, Kindle Scribe 11” variants, one with a color screen, which this guy, who tends to buy—among other content—comics-themed e-books that are only full-spectrum appreciable on tablet and computer Kindle apps, found intriguing until he saw the $629.99-$679.99 price tag (in fairness, the company also sells stylus-less, but notably less expensive Colorsoft models):
and higher-resolution indoor and outdoor Blink security cameras, along with a panorama-stitching two-camera image combiner called the Blink Arc:
Speaking of security cameras, Ring founder Jamie Siminoff, who had previously left Amazon post-acquisition, has returned and was on hand this week to personally unveil also-resolution-bumped (this time branded as Retinal Vision) indoor- and outdoor-intended hardware, including an updated doorbell camera model:
Equally interesting to me are Ring’s community-themed added and enhanced services: Familiar Faces, Alexa+ Greetings, and (for finding lost dogs) Search Party. And then there’s this notable revision of past stance, passed along as a Wired coverage quote absent personal commentary:
It’s worth noting that Ring has brought back features that allow law enforcement to request footage from you in the event of an incident. Ring customers can choose to share video, and they can stay anonymous if they opt not to send the video. “There is no access that we’re giving police to anything other than the ability to, in a very privacy-centric way, request footage from someone who wants to do this because they want to live in a safe neighborhood,” Siminoff tells WIRED.
A new software chapter
Last, but not least (especially in the last case) are several upgraded Fire TVs, still Fire OS-based:
and a new 4K Fire TV Stick, the latter the first out-of-box implementation example of Amazon’s newfound Linux embrace (and Linux-derived Android about-face), Vega OS:
We’d already known for a while that Amazon was shutting down its Appstore, but its Fire OS-to-Vega OS transition is more recent. Notably, there’s no more local app sideloading allowed; all apps come down from the Amazon cloud.
Google’s more modest (but comprehensive) response
Google’s counterpunch was more muted, albeit notably (and thankfully, from a skip-the-landfill standpoint) more inclusive of upgrades for existing hardware versus the day-prior comparative fixation on migrating folks to new devices, and reflective of a company that’s fundamentally a software supplier (with a software-licensing business model). Again from Wired’s coverage:
This month, Gemini will launch on every Google Assistant smart home device from the last decade, from the original 2016 Google Home speaker to the Nest Cam Indoor 2016. It’s rolling out in Early Access, and you can sign up to take part in the Google Home app.
There’s more:
Google is bringing Gemini Live to select Google Home devices (the Nest Audio, Google Nest Hub Max, and Nest Hub 2nd Gen, plus the new Google Home Speaker). That’s because Gemini Live has a few hardware dependencies, like better microphones and background noise suppression. With Gemini Live, you’ll be able to have a back-and-forth conversation with the chatbot, even have it craft a story to tell kids, with characters and voices.
But note the fine print, which shouldn’t be a surprise to anyone who’s already seen my past coverage: “Support doesn’t include third-party devices like Lenovo’s smart displays, which Google stopped updating in 2023.”
One other announced device, an upgraded smart speaker visually reminiscent of Apple’s HomePod mini, won’t ship until early next year.
And, as the latest example of Google’s longstanding partnership with Walmart, the latter retailer has also launched a line of onn.-branded, Gemini-supportive security cameras and doorbells:
That’s what I’ve got for you today; we’ll have to see what, if anything else, Apple has for us before the end of the year, and whether it’ll take the form of an event or just a series of press releases. Until then, your fellow readers and I await your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Disassembling the Echo Studio, Amazon’s Apple HomePod foe
- Amazon’s Echo Auto Assistant: Legacy vehicle retrofit-relevant
- Lenovo’s Smart Clock 2: A “charged” device that met a premature demise
- The 2025 Google I/O conference: A deft AI pivot sustains the company’s relevance
- Google’s fall…err…summer launch: One-upping Apple with a sizeable product tranche
The post Amazon and Google: Can you AI-upgrade the smart home while being frugal? appeared first on EDN.
Visit by a delegation of the International Committee of the Red Cross (ICRC)
Igor Sikorsky Kyiv Polytechnic Institute was visited by an ICRC delegation headed by Ulrik Graf, ICRC adviser on new technologies of warfare.
Charity concert “Under the Protection of Ukraine’s Defenders” at Igor Sikorsky Kyiv Polytechnic Institute
The traditional concert marking the Day of Defenders of Ukraine featured:
🎨 The Taras Shevchenko National Museum invites visitors to exhibitions and guided tours
This autumn, the Taras Shevchenko National Museum offers students and faculty the following overview and themed tours, as well as exhibitions:
Infrared Communication Made Simple for Everyday Devices
As technology advances, many everyday devices depend on short-range communication to exchange or gather data. Although wireless technologies such as Wi-Fi and Bluetooth dominate the market, they are not always the ideal option, especially for low-power applications where efficiency, simplicity, and cost are most important. In these cases, infrared (IR) communication remains an efficient option, powering applications such as smart meters, wearable electronics, medical devices, and remote controls.
But implementing an infrared link is not always easy. An IR diode cannot simply be attached to a microcontroller pin and driven efficiently. To avoid saturating the diode and to provide a robust signal, a low-frequency carrier is typically employed, which must then be modulated by the data stream. Historically, this has meant adding external modem chips, timers, and mixers, increasing cost, complexity, and board space.
The Inefficient Signal Generation Challenge
Fundamentally, infrared communication relies on two key signals:
- Carrier Frequency – a square wave that paces the IR diode at a suitable frequency.
- Data Stream – the content of the communication, which must modulate the carrier.
In most implementations, these signals come from separate peripherals on the microcontroller and must be merged externally, as sketched below. This adds components and consumes multiple I/O pins, which is not conducive to small, battery-powered devices.
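To make the two-signal picture concrete, here is a minimal, host-runnable C sketch of the modulation step (the 38 kHz carrier and 1 kbit/s data rate are illustrative assumptions, not values from the article): the carrier is simply AND-gated by the current data bit, which is the same operation an on-chip mask function performs in hardware.

```c
/* Host-side model of IR on-off keying: a carrier square wave is
 * gated by the data stream, so the LED pulses only while the current
 * data bit is '1'. All values are illustrative assumptions. */
#include <stdio.h>
#include <stdint.h>

#define CARRIER_HZ  38000u  /* assumed carrier, typical for IR remotes */
#define BIT_RATE_HZ 1000u   /* assumed data rate: 1 kbit/s */

int main(void)
{
    const uint8_t data_bits[] = {1, 0, 1, 1, 0};  /* example payload */
    const unsigned cycles_per_bit = CARRIER_HZ / BIT_RATE_HZ;

    for (size_t i = 0; i < sizeof data_bits; i++) {
        for (unsigned c = 0; c < cycles_per_bit; c++) {
            /* One carrier cycle: a high half and a low half (50% duty). */
            for (unsigned half = 0; half < 2; half++) {
                unsigned carrier = (half == 0);
                unsigned led = carrier & data_bits[i];  /* the modulation step */
                putchar(led ? '#' : '.');
            }
        }
        putchar('\n');  /* one line of "samples" per data bit */
    }
    return 0;
}
```

Each printed line shows carrier bursts only during '1' bits, which is exactly the envelope an IR receiver's demodulator is designed to detect.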
A Smarter Way Forward
Recent microcontrollers have begun to address this challenge by providing easier mechanisms for IR signal generation. Instead of requiring a separate modem chip, some of these devices combine the timer output (the carrier frequency) with the communication output (the data) internally. The result is a ready-modulated signal that can directly drive an infrared diode.
One example that offers such capability is the Renesas RA4C1. An 80-MHz device with low-power operating modes down to 1.6 V, it provides an SCI/AGT mask function that combines a UART or IrDA interface output with a timer signal, making it possible to generate the required modulated IR output without any external hardware.
Design Flexibility
This method is effective because it is flexible:
- Developers have the option of utilizing a basic UART output that is modulated by a timer-generated carrier.
- Or they can implement an integrated IrDA interface, with provisions for direct modulation or phase-inverted output based on the application requirement.
Both schemes deliver a clean, stable signal while minimizing the number of external components and I/O pins needed.
For designers of small electronics like handheld meters, fitness monitors, or household appliances, space and power efficiency are key considerations. An IR communication solution with integrated modulation saves cost and enhances reliability by eliminating external circuitry. It also speeds product development, since engineers no longer need to wire up separate modem chips or modulation hardware.
Conclusion:
Infrared communication continues to provide a reliable, low-cost solution for short-range connectivity, particularly in environments where a full radio system is not warranted. With newer microcontrollers embracing built-in modulation capabilities, establishing an IR connection has never been simpler. This shift lets developers deliver smarter, more power-efficient products while maintaining simplicity and low cost.
(This article has been adapted and modified from content on Renesas.)
The post Infrared Communication Made Simple for Everyday Devices appeared first on ELE Times.
PoE basics and beyond: What every engineer should know

Power over Ethernet (PoE) is not rocket science, but it’s not plug-and-play magic either. This short primer walks through the basics with a few practical nudges for those curious to try it out.
It’s a technology that delivers electrical power alongside data over standard twisted-pair Ethernet cables. It enables a single RJ45 cable to supply both network connectivity and power to powered devices (PDs) such as wireless access points, IP cameras, and VoIP phones, eliminating the need for separate power cables and simplifying installation.
PoE essentials: From devices to injectors
Any network device powered via PoE is known as a powered device or PD, with common examples including wireless access points, IP security cameras, and VoIP phones. These devices receive both data and electrical power through Ethernet cables from power sourcing equipment (PSE), which is classified as either “endspan” or “midspan.”
An endspan—also called an endpoint—is typically a PoE-enabled network switch that directly supplies power and data to connected PDs, eliminating the need for a separate power source. In contrast, when using a non-PoE network switch, an intermediary device is required to inject power into the connection. This midspan device, often referred to as a PoE injector, sits between the switch and the PD, enabling PoE functionality without replacing existing network infrastructure. A PoE injector sends data and power together through one Ethernet cable, simplifying network setups.
Figure 1 A PoE injector is shown with auto negotiation that manages power delivery safely and efficiently. Source: http://poe-world.com
The above figure shows a PoE injector with auto negotiation, a safety and compatibility feature that ensures power is delivered only when the connected device can accept it. Before supplying power, the injector initiates a handshake with the PD to detect its PoE capability and determine the appropriate power level. This prevents accidental damage to non-PoE devices and allows precise power delivery—whether it’s 15.4 W for Type 1, 25.5 W for Type 2, or up to 90 W for newer Type 4 devices.
Note at this point that the original IEEE 802.3af-2003 PoE standard provides up to 15.4 watts of DC power per port. This was later enhanced by the IEEE 802.3at-2009 standard—commonly referred to as PoE+ or PoE Plus—which supports up to 25.5 watts for Type 2 devices, making it suitable for powering VoIP phones, wireless access points, and security cameras.
To meet growing demands for higher power delivery, the IEEE introduced a new standard in 2018: IEEE 802.3bt. This advancement significantly increased capacity, enabling up to 60 watts (Type 3) and 90 watts (Type 4) at the source by utilizing all four pairs of wires in Ethernet cabling, compared with earlier standards that used only two pairs.
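To keep those figures straight, here is a small C cheat-sheet (a hedged summary, not normative standard text; the PD column holds the commonly cited minimum power assured at the device after cable losses) that prints the type-to-power mapping:

```c
/* Quick reference: IEEE PoE types and their power budgets.
 * PSE = power sourcing equipment (power at the source port);
 * PD  = powered device (power assured after cable losses). */
#include <stdio.h>

struct poe_type {
    const char *standard;
    int         type;
    double      pse_watts;  /* max power at the source port */
    double      pd_watts;   /* min power assured at the device */
};

static const struct poe_type poe_types[] = {
    { "802.3af (PoE)",   1, 15.4, 12.95 },
    { "802.3at (PoE+)",  2, 30.0, 25.5  },
    { "802.3bt (PoE++)", 3, 60.0, 51.0  },
    { "802.3bt (PoE++)", 4, 90.0, 71.3  },
};

int main(void)
{
    puts("Standard            Type  PSE (W)  PD (W)");
    for (size_t i = 0; i < sizeof poe_types / sizeof poe_types[0]; i++)
        printf("%-19s %4d  %7.2f  %6.2f\n",
               poe_types[i].standard, poe_types[i].type,
               poe_types[i].pse_watts, poe_types[i].pd_watts);
    return 0;
}
```

Note how the often-quoted 15.4 W and 25.5 W figures live in different columns: the former is power at the source port, the latter power guaranteed at the device.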
As indicated previously, VoIP phones were among the earliest applications of PoE. Wireless access points (WAPs) and IP cameras are also ideal use cases, as all these devices require both data connectivity and power.
Figure 2 This PoE system is powering a fixed wireless access (FWA) device.
As a sidenote, an injector delivers power over the network cable, while a splitter extracts both data and power—providing an Ethernet output and a DC plug.
A practical intro to PoE for engineers and DIYers
So, PoE simplifies device deployment by delivering both power and data over a single cable. For engineers and DIYers looking to streamline installations or reduce cable clutter, PoE offers a clean, scalable solution.
This brief section outlines foundational use cases and practical considerations for first-time PoE users. No deep dives: just clear, actionable insights to help you get started with smarter, more efficient connectivity.
Up next is the tried-and-true schematic of a passive PoE injector I put together some time ago for an older IP security camera (24 VDC/12 W).
Figure 3 Schematic demonstrates how a passive PoE injector powers an IP camera. Source: Author
In this setup, the LAN port links the camera to the network, and the PoE port delivers power while completing the data path. As a cautionary note, use a passive PoE injector only when you are certain of the device’s power requirements. If you are unsure, take time to review the device specifications. Then, either configure a passive injector to match your setup or choose an active PoE solution with integrated negotiation and protection.
Fundamentally, most passive PoE installations operate across a range of voltages, with 24 V often serving as practical middle ground. Even lower voltages, such as 12 V, can be viable depending on cable length and power requirements. However, passive PoE should never be applied to devices not explicitly designed to accept it; doing so risks damaging the Ethernet port’s magnetics.
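To see why 24 V is a comfortable middle ground while 12 V quickly becomes marginal, here is a rough, first-order C estimate (the 24 AWG cable resistance, 30 m run length, and the constant-current approximation are all assumptions for illustration, not figures from the article):

```c
/* First-order sanity check for a passive PoE run: how much voltage
 * survives the cable? Assumes 24 AWG Cat5e at about 0.084 ohm per
 * meter per conductor, power carried on the two spare pairs (one
 * pair per leg, two conductors in parallel per leg), and load
 * current approximated at the injected voltage. */
#include <stdio.h>

static double delivered_volts(double v_inject, double load_watts,
                              double cable_m)
{
    const double r_per_m = 0.084;             /* ohms/m, 24 AWG (assumed) */
    double r_leg  = (r_per_m * cable_m) / 2;  /* two conductors in parallel */
    double r_loop = 2.0 * r_leg;              /* out and back */
    double i_load = load_watts / v_inject;    /* first-order estimate */
    return v_inject - i_load * r_loop;
}

int main(void)
{
    /* The 12 W camera from the schematic, over an assumed 30 m run. */
    printf("24 V inject: %.1f V at the camera\n",
           delivered_volts(24.0, 12.0, 30.0));
    printf("12 V inject: %.1f V at the camera\n",
           delivered_volts(12.0, 12.0, 30.0));
    return 0;
}
```

With these numbers, the camera still sees about 22.7 V from a 24 V injector, but only about 9.5 V from a 12 V one, likely below its brown-out threshold.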
Unlike active PoE standards, passive PoE delivers power continuously without any form of negotiation. In its earliest and simplest form, it leveraged unused pairs in Fast Ethernet to transmit DC voltage—typically using pins 4–5 for positive and 7–8 for negative, echoing the layout of 802.3af Mode B. As Gigabit Ethernet became common, passive PoE evolved to use transformers that enabled both power and data to coexist on the same pins, though implementations vary.
Seen from another angle, PoE technology typically utilizes the two unused twisted pairs in standard Ethernet cables—but this applies only to 10BASE-T and 100BASE-TX networks, which use two pairs for data transmission.
In contrast, 1000BASE-T (Gigabit Ethernet) employs all four twisted pairs for data, so PoE is delivered differently—by superimposing power onto the data lines using a method known as phantom power. This technique allows power to be transmitted without interfering with data, leveraging the center tap of Ethernet transformers to extract the common-mode voltage.
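The arithmetic behind four-pair power delivery is worth a quick sketch: spreading the same load across twice as many conductors halves the per-conductor current and therefore quarters the resistive (I²R) loss per conductor. The voltage and power values below are illustrative 802.3bt-era assumptions, not figures from the article:

```c
/* Why four-pair PoE can carry more power: per-conductor current
 * drops as the same load is shared across more parallel copper. */
#include <stdio.h>

static void report(double watts, double volts, int pairs_per_polarity)
{
    double i_total = watts / volts;
    /* Each polarity rides on `pairs_per_polarity` pairs,
     * i.e., 2 * pairs_per_polarity conductors in parallel. */
    double i_per_conductor = i_total / (2.0 * pairs_per_polarity);
    printf("%5.1f W @ %4.1f V, %d pair(s)/polarity: "
           "%.2f A total, %.2f A per conductor\n",
           watts, volts, pairs_per_polarity, i_total, i_per_conductor);
}

int main(void)
{
    report(30.0, 52.0, 1);  /* two-pair delivery (e.g., Type 2) */
    report(90.0, 52.0, 2);  /* four-pair delivery (e.g., Type 4) */
    return 0;
}
```

Even at triple the power, the four-pair Type 4 case carries only about 0.43 A per conductor versus 0.29 A for the two-pair Type 2 case, which is why all-four-pair delivery was the key to the 802.3bt power bump.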
PoE primer: Surface touched, more to come
Though we have only skimmed the surface, it’s time for a brief wrap-up.
Fortunately, even beginners exploring PoE projects can get started quickly, thanks to off-the-shelf controller chips and evaluation boards designed for immediate use. For instance, the EV8020-QV-00A evaluation board—shown below—demonstrates the capabilities of the MP8020, an IEEE 802.3af/at/bt-compliant PoE-powered device.
Figure 4 MPS showcases the EV8020-QV-00A evaluation board, configured to evaluate the MP8020’s IEEE 802.3af/at/bt-compliant PoE PD functionality. Source: MPS
Here are my quick picks for reliable, currently supported PoE PD interface ICs—the brains behind PoE:
- TI TPS23730 – IEEE 802.3bt Type 3 PD with integrated DC-DC controller
- TI TPS23731 – No-opto flyback controller; compact and efficient
- TI TPS23734 – Type 3 PD with robust thermal performance and DC-DC control
- onsemi NCP1081 – Integrated PoE-PD and DC-DC converter controller; 802.3at compliant
- onsemi NCP1083 – Similar to NCP1081, with auxiliary supply support for added flexibility
- TI TPS2372 – IEEE 802.3bt Type 4 high-power PD interface with automatic MPS (maintain power signature) and autoclass
Similarly, leading semiconductor manufacturers offer a broad spectrum of PSE controller ICs for PoE applications—ranging from basic single-port controllers to sophisticated multi-port managers that support the latest IEEE standards.
As a notable example, TI’s TPS23861 is a feature-rich, 4-channel IEEE 802.3at PSE controller that supports auto mode, external FET architecture, and four-point detection for enhanced reliability, with optional I²C control and efficient thermal design for compact, cost-effective PoE systems.
In short, fantastic ICs make today’s PoE designs smarter and more efficient, especially in dynamic or power-sensitive environments. Whether you are refining an existing layout or venturing into high-power applications, now is the time to explore, prototype, and push your PoE designs further. I will be here.
T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.
Related Content
- More opportunities for PoE
- A PoE injector with a “virtual” usage precursor
- Simple circuit design tutorial for PoE applications
- Power over Ethernet (PoE) grows up: it’s now PoE+
- Power over Ethernet (PoE) to Power Home Security & Health Care Devices
The post PoE basics and beyond: What every engineer should know appeared first on EDN.
Blue Laser Fusion wins US DOE 2025 INFUSE project award
Quintauris and Everspin Technologies Partner to Advance Dependable RISC-V Solutions for Automotive
Quintauris and Everspin Technologies, Inc. announced a strategic collaboration to bring advanced memory solutions into the Quintauris ecosystem.
The collaboration aims to strengthen the reliability and safety of RISC-V–based platforms, particularly for automotive, industrial and edge applications where data persistence, integrity, low latency and security are critical.
By integrating Everspin’s proven MRAM technologies with Quintauris’ reference architectures and real-time platforms, the partnership works to ensure memory subsystems meet the highest standards for performance and functional safety – one of the most pressing challenges in safety-driven markets.
Everspin’s strong commitment to the automotive market extends beyond technology to include proper certifications, manufacturing excellence, long-term supply and continuous quality improvement, values that align closely with Quintauris’ mission to make RISC-V commercially ready for automotive programs.
“Everspin’s leadership in MRAM and their track record of over 200 million products deployed make them a strong addition to our ecosystem,” said Pedro Lopez, Market Strategy Officer at Quintauris. “Together, we are closing the gap between innovation and dependability, enabling RISC-V to be confidently adopted in next-generation automotive programs.”
“RISC-V is opening new doors in safety-critical computing, but it also demands memory that can match its performance and reliability,” said David Schrenk, VP Business Development at Everspin Technologies. “By integrating our MRAM into the Quintauris platform, we’re helping developers build systems that retain data integrity under power loss, radiation or extreme temperatures, without compromising speed or security. This partnership strengthens the foundation for scalable, dependable platforms that will shape the future of automotive electronics.”
The post Quintauris and Everspin Technologies Partner to Advance Dependable RISC-V Solutions for Automotive appeared first on ELE Times.
EEVblog 1712 - CSIRO Mobile Space Mission Control Centre
Siemens Unveils ‘Groundbreaking’ Software for Automated Analog IC Tests
DMD powers high-resolution lithography

With over 8.9 million micromirrors, TI’s DLP991UUV digital micromirror device (DMD) enables maskless digital lithography for advanced packaging. Its 4096×2176 micromirror array, 5.4-µm pitch, and 110-Gpixel/s data rate remove the need for costly mask technology while providing scalability and precision for increasingly complex designs.
The DMD is a spatial light modulator that controls the amplitude, direction, and phase of incoming light. Paired with the DLPC964 controller, the DLP991UUV DMD supports high-speed continuous data streaming for laser direct imaging. Its resolution enables large 3D-print build sizes, fine feature detail, and scanning of larger objects in 3D machine vision applications.
Offering the highest resolution and smallest mirror pitch in TI’s Digital Light Processing (DLP) portfolio, the DLP991UUV provides precise light control for industrial, medical, and consumer applications. It steers UV wavelengths from 343 nm to 410 nm and delivers up to 22.5 W/cm² at 405 nm.
Preproduction quantities of the DLP991UUV are available now on TI.com.
The post DMD powers high-resolution lithography appeared first on EDN.
Co-packaged optics enables AI data center scale-up

AIchip Technologies and Ayar Labs unveiled a co-packaged optics (CPO) solution for multi-rack AI clusters, providing extended reach, low latency, and high radix. The joint development tackles AI infrastructure data-movement bottlenecks by replacing copper interconnects with CPO in large-scale accelerator deployments.
The offering integrates Ayar’s TeraPHY optical engines with AIchip’s advanced packaging on a common substrate, bringing optical I/O directly to the AI accelerator interface. This enables over 100 Tbps of scale-up bandwidth per accelerator and supports more than 256 optical scale-up ports per device. TeraPHY is also protocol agnostic, allowing flexible integration with customer-designed chiplets and fabrics.
The co-packaged solution scales multi-rack networks without the power and latency penalties of pluggable optics by shortening electrical traces and placing optical I/O close to the compute core. With UCIe support and flexible protocol endpoints at the package boundary, it integrates alongside compute tiles, memory, and accelerators while maintaining performance, signal integrity, and thermal requirements.
Both companies are working with select customers to integrate co-packaged optics into next-generation AI accelerators and scale-up switches. They will provide collateral, reference architectures, and build options to qualified design teams.
The post Co-packaged optics enables AI data center scale-up appeared first on EDN.
Platform speeds AI from prototype to production

Purpose-built for Lantronix Open-Q system-on-modules (SOMs), EdgeFabric.ai is a no-code development platform for designing and deploying edge AI applications. According to Lantronix, it helps customers move AI from prototype to production in minutes instead of months, without needing a team of AI experts.
The visual orchestration platform integrates with Open-Q hardware and leading AI model ecosystems, automatically configuring performance across Qualcomm GPUs, DSPs, and NPUs. It streamlines data pipelines with drag-and-drop workflows for AI, video, and sensors, while delivering real-time visualization. Prebuilt templates support common use cases such as surveillance, anomaly detection, and safety monitoring.
EdgeFabric.ai auto-generates production-ready code in Python and C++, making it easy to build and adjust pipelines, fine-tune parameters, and adapt workflows quickly.
Learn more about the EdgeFabric.ai platform here. For details on Open-Q SOMs, visit SOM solutions. Lantronix also offers engineering services for development support.
The post Platform speeds AI from prototype to production appeared first on EDN.
Dual-core MCUs drive motor-control efficiency

RA8T2 MCUs from Renesas integrate dual processors for real-time motor control in advanced factory automation and robotics. They pair a 1-GHz Arm Cortex-M85 core with an optional 250-MHz Cortex-M33 core, combining high-speed operation, large memory, timers, and analog functions on a single chip.
The Cortex-M85 with Helium technology accelerates DSP and machine-learning workloads, enabling AI functions that predict motor maintenance needs. In dual-core variants, the embedded Cortex-M33 separates real-time control from general-purpose tasks to further enhance system performance.
RA8T2 devices integrate up to 1 MB of MRAM and 2 MB of SRAM, including 256 KB of TCM for the Cortex-M85 and 128 KB of TCM for the Cortex-M33. For high-speed networking in factory automation, they offer multiple interfaces, such as two Gigabit Ethernet MACs with DMA and a two-port EtherCAT slave. A 32-bit, 14-channel timer delivers PWM functionality up to 300 MHz.
The RA8T2 series of MCUs is available now through Renesas and its distributors.
The post Dual-core MCUs drive motor-control efficiency appeared first on EDN.
Image sensor provides ultra-high dynamic range

Omnivision’s OV50R40 50-Mpixel CMOS image sensor delivers single-exposure HDR up to 110 dB with second-generation TheiaCel technology. It also reduces power consumption by ~20% compared with the previous-generation OV50K40, enabling longer HDR video capture.
Aimed at high-end smartphones and action cameras, the OV50R40 achieves ultra-high dynamic range in any lighting. Built on PureCel Plus‑S stacked die technology, the color sensor supports 100% coverage quad phase detection for improved autofocus. It features an active array of 8192×6144 with 1.2‑µm pixels in a 1/1.3‑in. format and supports premium 8K video with dual analog gain (DAG) HDR and on-sensor crop zoom.
The sensor also supports 4-cell binning, producing 12.5‑Mpixel resolution at 120 fps. For 4K video at 60 fps, it provides 3-channel HDR with 4× sensitivity, ensuring enhanced low-light performance.
The OV50R40 is now sampling, with mass production planned for Q1 2026.
The post Image sensor provides ultra-high dynamic range appeared first on EDN.
Regulation on preventing plagiarism, fabrication, and falsification adopted
The adoption of the Regulation on the system for preventing plagiarism, fabrication, and falsification at the National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” is an extremely important and timely step toward building…