Feed Collector

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 03/14/2026 - 17:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator

Spent hours troubleshooting to find out I got my PFETs backwards qnq

Reddit:Electronics - Sat, 03/14/2026 - 13:13

I’m attempting to make an LED scoreboard for my cricket team using large 7‑segment LED displays. I want it to be battery powered, so I’m trying to reduce the power needed to run 6+ digits at once by using multiplexing. Each segment is connected to a high‑side switch, and the digits to the low‑side. That way I can turn on each digit by pulling it low, and only the segments held high will activate.
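The scan scheme described above can be sketched in a few lines. This is a hypothetical model of the multiplexing logic (not the poster's actual Arduino code): each step pulls one digit line low while driving high only the segments needed for that digit's value.

```python
# Hypothetical sketch of the multiplexing scheme: segments are driven by
# high-side switches (active when held high), digits by low-side switches
# (a digit lights only while its common line is pulled low).

SEGMENTS = "abcdefg"

# Standard 7-segment encodings: bit i set means segment SEGMENTS[i] is on.
DIGIT_PATTERNS = {
    0: 0b0111111, 1: 0b0000110, 2: 0b1011011, 3: 0b1001111,
    4: 0b1100110, 5: 0b1101101, 6: 0b1111101, 7: 0b0000111,
    8: 0b1111111, 9: 0b1101111,
}

def scan_frame(digits):
    """Yield one multiplexing step per digit: which digit position to
    pull low, and which segment lines to drive high during that step."""
    for position, value in enumerate(digits):
        mask = DIGIT_PATTERNS[value]
        active = [SEGMENTS[i] for i in range(7) if mask & (1 << i)]
        yield position, active
```

Cycling through these steps fast enough (typically 1 kHz or more for 6+ digits) makes all digits appear lit at once while only one digit's worth of segments draws current at any instant.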

The code I’m using runs on an Arduino, which talks to a cheap PCA9685 PWM board. That board connects to a custom MOSFET driver board that handles the high‑ and low‑side switching.

Running code that worked fine in my prototype setup just gave me an epileptic strobing effect on all segments, which completely threw me. I spent hours probing with a multimeter, using the oscilloscope at work, and eventually started cutting “non‑essential” components off the board. Instead of getting an inverted 12 V PWM signal like I expected, I was constantly getting a square wave oscillating between 12 V and 11.5 V no matter what I did.

I was about to post on r/AskElectronics for help, but I wanted to be 110% sure I wasn’t missing something obvious. So I went to falstad.com and built the circuit in the simulator. Sure enough, it behaved exactly how I expected. Then I noticed a little checkbox for “Swap D/S,” and out of curiosity I clicked it… bingo.

For testing, I’m going to desolder the PFETs I’ve got and jankily wire them in upside‑down just to confirm that’s the issue before ordering new ones.

Moral of the story: make sure you’re using the right datasheet for your parts, because manufacturers love reusing part numbers even when the pinouts are completely different.

(p.s. pls don't be too mean about diagram conventions, signal noise, etc. cos this is a self-taught learning exercise and I'm trying my best)

submitted by /u/NinjaBreadM4N

30-minute PCB fabrication with a fiber laser (double-sided boards)

Reddit:Electronics - Sat, 03/14/2026 - 06:55

I've been experimenting with using a fiber laser to fabricate prototype PCBs.

Current workflow:

- design PCB

- laser isolate traces

- drill vias

- clean

- solder

Total time from design to board is about 30 minutes.

Trace pitch so far is around ___ mil and I've been able to do reliable double-sided boards.

I made a video showing the full process and the relaxation oscillator circuit I designed for it:

www.youtube.com/@Electronics_with_Joe

submitted by /u/Intelligent_Raise_40

Exploring Alternative Component Marketplaces

Reddit:Electronics - Fri, 03/13/2026 - 20:03

The goal was to find where to buy the electronics I need (STM32F103C8T6 and STM32F401RET6), but I figured it would be cool to put everything in one post. Maybe someone will find it interesting.

submitted by /u/DamnStupidMan

IFW Dresden selects Agnitron Agilis 100 MOCVD platform for precursor chemistry and ultra-wide-bandgap materials development

Semiconductor today - Fri, 03/13/2026 - 18:48
Agnitron Technology Inc of Chanhassen, MN, USA says that its Agilis 100 MOCVD system has been selected by the Institute for Materials Chemistry (IMC) at the Leibniz Institute for Solid State and Materials Research (IFW) Dresden, Germany, for its MOCVD and ALD Competence Centre...

TNO and High Tech Campus Eindhoven begin construction of first 6-inch indium phosphide photonic chip foundry

Semiconductor today - Fri, 03/13/2026 - 16:40
The research institute TNO (the Netherlands Organization for Applied Scientific Research in Delft) and High Tech Campus Eindhoven are starting construction of what is reckoned will be the world’s first foundry for producing indium phosphide photonic chips on 6-inch wafers. The official opening was attended by European Commission executive vice-president Henna Virkkunen and the Netherlands’ Minister of Economic Affairs and Climate Heleen Herbert, and Minister of Defence Dilan Yeşilgöz-Zegerius...

Balun transformers: Linking balanced to unbalanced

EDN Network - Fri, 03/13/2026 - 14:48

Balun transformers remain indispensable in RF and high-frequency design, serving as the quiet interface between balanced transmission lines and unbalanced circuits. By enabling impedance matching, minimizing signal distortion, and suppressing common-mode noise, they provide the foundation for reliable connectivity in applications ranging from antennas to amplifiers to broadband communication systems.

As wireless technologies push toward higher frequencies and tighter integration, understanding the principles and practical nuances of balun transformers is key to optimizing performance and ensuring design resilience.

The term “balun” itself comes from balanced to unbalanced. While many implementations use transformer coupling, not all baluns are transformer-based—some rely on transmission line techniques. Using “balun transformer” specifies the transformer-type design, distinguishing it from coaxial sleeve or other non-transformer baluns.

 

Historic note: The iconic TV balun adapter

Before digital tuners and streaming boxes took over, this compact 300 Ω to 75 Ω matching transformer was a fixture in analog television setups. Designed to reconcile the impedance and mode mismatch between twin-lead ribbon antennas and coaxial inputs, it featured screw terminals for the antenna wire and a standard coaxial plug for the TV’s antenna input socket.
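The matching behind this adapter follows directly from ideal-transformer theory: impedance transforms as the square of the turns ratio, so matching 300 Ω twin-lead to 75 Ω coax needs a 2:1 turns ratio (a 4:1 impedance ratio). A minimal sketch of that arithmetic:

```python
import math

def turns_ratio(z_in, z_out):
    """Ideal transformer: Z_in / Z_out = n**2, so the required
    primary-to-secondary turns ratio is n = sqrt(Z_in / Z_out)."""
    return math.sqrt(z_in / z_out)

# The classic TV balun matches 300-ohm ribbon to 75-ohm coax:
n = turns_ratio(300, 75)  # 2.0, i.e. a 2:1 turns (4:1 impedance) ratio
```

The same relation sizes any transformer-type balun, e.g. the common 4:1 baluns used between 200 Ω folded-dipole feedpoints and 50 Ω coax.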

Connected at the final stage of the antenna lead and plugged directly into the tuner, it quietly performed its dual role—impedance transformation and balanced-to-unbalanced conversion. This ensured that rooftop signals reached living rooms with minimal distortion. In the analog broadcast era, this unassuming adapter was the last link in the RF chain, faithfully bridging generations of antenna technology.

Figure 1 Screwing the 300 Ω ribbon cable into the balun terminals and plugging its coaxial end into the TV’s antenna input socket completes the balanced-to-unbalanced transition. Source: Author

Video balun transformers: Bridging coax and twisted pair

Video balun transformers—more commonly referred to simply as video baluns in industry parlance—extend the utility of balun technology beyond RF and audio domains into the realm of video signal transmission. These devices convert unbalanced coaxial signals (such as composite video) into balanced signals suitable for twisted-pair cabling, and vice versa.

This conversion not only reduces susceptibility to electromagnetic interference (EMI) but also enables cost-effective long-distance video distribution using standard Cat5/Cat6 cabling. Passive video baluns rely on transformer coupling to maintain signal integrity without external power, while active baluns incorporate amplification and equalization to support higher resolutions or longer cable runs.

In surveillance and broadcast applications, video baluns have become indispensable for bridging legacy coaxial infrastructure with modern structured cabling, ensuring clean signal delivery and simplified installation.

Figure 2 Video baluns connect coaxial BNC interfaces to twisted-pair cabling and deliver HD CCTV signals over long distances with reduced interference. Source: Author

As a quick aside, it’s worth noting that the K and MP ratings of a video balun both denote its supported resolution class. The MP rating specifies the maximum camera resolution in megapixels, while the K rating expresses the same capability in terms of horizontal pixel count.

In practice, both ratings reflect the balun’s bandwidth and signal-handling capacity for HD CCTV. For example, a 4K balun supports roughly 8 megapixels of resolution, since 3840 × 2160 pixels equals about 8.3MP (8.3 million pixels).
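The K-to-MP conversion above is simple arithmetic, sketched here for reference (the resolution figures are the standard UHD numbers, not vendor-specific specs):

```python
def megapixels(width, height):
    """Convert a pixel resolution to megapixels (millions of pixels)."""
    return width * height / 1e6

# "4K" CCTV gear advertises a 3840 x 2160 camera resolution:
mp_4k = megapixels(3840, 2160)  # about 8.29 MP, marketed as "8 MP"
```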

Baluns in practice: Theory meets application

Balun transformers are invaluable not only for converting between balanced and unbalanced signals but also for performing impedance transformations with minimal loss. Unlike LC circuits, many balun designs can operate effectively across very wide frequency ranges.

In RF applications, baluns are commonly used to interface antennas with transmitters and receivers, ensuring that as much power as practically possible is delivered. This section blends accessible theory—without heavy mathematics—with a few practical pointers and real-world implementations.

Among the fundamental designs, the balun transformer is the most widely recognized. Using magnetic coupling, it converts between balanced and unbalanced signals while providing excellent isolation and impedance matching. Transmission-line baluns achieve balance through carefully arranged lengths of coaxial or twisted-pair lines, making them well-suited for wideband RF applications.

Hybrid baluns combine transformers and transmission-line techniques, offering flexibility across frequency ranges. Together, these basic types form the foundation for more advanced designs, and understanding their principles helps engineers and experimenters select the right balun for applications ranging from antenna systems to CCTV.

In practice, the terms “balun transformer” and “transformer balun” both refer to the same device: a balun realized through transformer coupling. The difference is mostly in emphasis. Balun transformer highlights the function first—balanced-to-unbalanced conversion—while noting that it’s implemented as a transformer.

Transformer balun highlights the construction first, pointing out that it’s a transformer adapted to serve as a balun. Both usages are common, but in technical writing “balun transformer” is often preferred because it stresses the primary role of the device.

A further distinction often made is between voltage baluns and current baluns. A voltage balun enforces equal voltages on the balanced output terminals, which can work well in many cases but may allow unequal currents if the load is not perfectly symmetrical. In contrast, a current balun enforces equal and opposite currents in the balanced lines, often providing better suppression of common-mode currents on antenna feedlines.

Both approaches have their place: voltage baluns are straightforward and widely used, while current baluns are often preferred in RF antenna systems where minimizing feedline radiation and maintaining balance are critical.

Also essential to audio systems, baluns form the core of passive direct injection (DI) boxes. A passive DI employs a transformer—acting as a voltage balun—to convert an unbalanced, high-impedance instrument signal into a balanced, low-impedance output. This conversion is vital for interfacing high-Z sources such as electric guitars with low-Z mixing console inputs over long cable runs.

By enforcing equal and opposite voltages on the balanced lines, the transformer achieves high common-mode rejection, suppressing noise and ensuring transparent signal transfer. This application demonstrates how the balancing principles fundamental to RF and CCTV extend seamlessly into professional audio, underscoring the cross-domain versatility of balun technology.

Figure 3 A passive DI box handles extreme signal levels without introducing any distortion. Source: Radial Engineering

Instead of diving straight into balun transformer–based RF or video projects, makers may find it easier—and just as rewarding—to begin with a closely related audio build: the passive DI box. Ready-to-use direct box transformers are widely available, and their simplicity makes them an ideal starting point for a fun and accessible DIY project.

Notable part numbers include JT-DB-EPC and A187A10C, both excellent examples of components that make this project approachable for beginners. The Hammond 1140-DB-A is another great catch, offering a versatile option for those eager to experiment with high-quality audio designs.

Figure 4 The 1140-DB-A direct box transformer delivers a balanced microphone output from an unbalanced line-level signal, enabling long cable runs with minimal high-frequency loss. Source: Hammond

From first steps to deeper layers

As is often the case, we have only just wet our feet—there is still a vast ocean of balun transformer theory, design variations, and application nuances left to explore. From specialized wideband implementations to creative DIY builds, each path opens new insights into how these deceptively simple devices shape signal integrity across RF, audio, and video domains.

This overview is meant as a starting point, a foundation for deeper dives into the many layers of balun transformer technology that await.

Your turn: If this sparked your curiosity, take the next step—experiment with a simple antenna balun build, revisit your audio gear with fresh eyes, or explore advanced designs in RF literature. Share your experiences, questions, or even your own schematics, because the best way to deepen understanding is to connect theory with practice.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Balun transformers: Linking balanced to unbalanced appeared first on EDN.

🚀 "KPISchool" Engineering Weeks for students in grades 9–11 at Igor Sikorsky Kyiv Polytechnic Institute

News - Fri, 03/13/2026 - 14:22
From March 23 to March 28, 2026, the National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute" will host the "KPISchool" Engineering Weeks, an educational career-guidance event held as part of the «Майбутній КПІшник» ("Future KPI Student") project.

Designing the voice AI stack: Integrating spatial hearing AI with edge-based intent gating

EDN Network - Fri, 03/13/2026 - 14:00

We’re past the point where voice can be treated as just another feature.

For more than a decade, the smart home has operated under a flawed assumption: that voice is optional. It’s not. As homes grow more complex and connected, voice is the only interface that aligns with how people actually live.

Traditional interfaces don’t scale: touchscreens fail when your hands are full, apps demand too much attention, and remotes are always missing when you need them. Voice is the only input that works across rooms, contexts, and users, if it works reliably.

And yet, we’re still tethered to physical buttons and remote controls, because we don’t fully trust voice interfaces. They miss commands, struggle in noisy environments, and break the moment connectivity becomes unstable. That’s not a UI flaw. It’s an architectural one.

To replace the light switch, voice needs to be always available, always accurate, and always in context. That means rethinking where intelligence lives and how decisions are made.

Hybrid Voice AI architecture is not an incremental upgrade; it’s an engineering breakthrough that transforms the smart home from a scattered set of reactive gadgets into a cohesive, proactive system. By separating real-time, on-device reflexes from deep, cloud-based reasoning, this architecture is designed to make voice a trusted, primary interface, every time, in every room.

Making voice work in the real world

The flaw in current voice technology isn’t a lack of data; it’s a lack of clarity.

Real homes are acoustically chaotic. They’re full of overlapping conversations, background music, household noise, and hard surfaces that introduce echo and reverb. Users speak from different rooms, distances, and angles. Commands are often ambiguous or incomplete. These aren’t edge cases. They’re the default operating conditions.

Current cloud-only models are powerful but slow, while legacy on-device models are fast but dim-witted. Neither alone can deliver the “Star Trek” experience users crave. To achieve the non-negotiable standard of 100% reliability, we need a system that mimics the human brain’s ability to process reflexes locally and complex thoughts deeply.

In that context, today’s voice interfaces consistently fall short. Not because of a lack of data or model size, but because of fundamental architecture-level decisions about where processing happens, how quickly systems respond, and how they handle failure.

A symbiotic two-tier architecture

The innovation lies in splitting the intelligence. By decoupling immediate execution from deep reasoning, we create a system that is both instant and intelligent.

  1. The Reflex Layer – Edge AI (Supports Instant Response):
    1. Definition: Think of this as the smart home’s autonomic nervous system.
    2. Innovation: High-performance, always-on SLM embedded directly on the device’s silicon.
    3. Function: Handles the “here and now.” Commands like “Lights on” or “Volume down” are processed locally with near-zero latency.
    4. Impact: Delivers absolute privacy and instant responsiveness. No data leaves the room, and the experience feels as immediate as flipping a physical switch.
  2. The Reasoning Layer – Cloud AI (Intelligent Coordination):
    1. Definition: This acts as the system’s prefrontal cortex—responsible for reasoning.
    2. Innovation: Leverages large language models (LLMs) to manage long-term state, memory, and complex logic across devices and use cases.
    3. Function: Handles the “what if” and “what next.” It manages household routines, coordinates multiple devices, and draws inferences from incomplete inputs (e.g., “Order dinner for whoever is home tonight.”)
    4. Impact: Enables devices to go beyond command execution—they begin to understand intent, anticipate user needs, and adapt over time (Figure 1).

Figure 1 A hybrid voice stack routes audio through on-device perception (AEC, spatial analysis, separation, intent gating) and escalates only complex requests to cloud reasoning. (Source: Kardome)

Differentiation for the decade ahead

For OEMs and Tier 1 suppliers, architecture, not features, is emerging as the defining battleground for the next generation of smart home systems.

The market is saturated with devices that can set timers, play music, or toggle lights. These capabilities are now commodity. What will set future systems apart is their ability to demonstrate true Auditory Intelligence—to perceive, localize, and interpret human speech reliably, even in noisy, multi-speaker, real-world environments.

By integrating spatial hearing AI and cognition technologies into a hybrid architecture, manufacturers can go beyond individual product features and instead build the auditory nervous system of the modern home.

We are past the era of voice assistants that require users to repeat themselves or speak in rigid syntax. Hybrid Voice AI enables a different class of experience—one where technology is felt, but rarely seen.

Figure 2 Spatial processing turns a mixed audio scene (TV + two speakers + reverb) into separated target streams suitable for intent detection and command execution. (Source: Kardome)

What “reflex vs. reasoning” means

In a production voice system, “hybrid” isn’t simply “ASR on-device and an LLM in the cloud.” It’s a routing architecture with a continuously running perception pipeline that decides:

  • Is anyone speaking?
  • Who is speaking (and where)?
  • Is it directed at the device?
  • Can we execute locally, or do we need cloud reasoning?

A practical edge “reflex” stack typically includes:

  1. Acoustic front end (always-on): microphone capture → gain control / denoise → echo cancellation (to remove the device’s own playback).
  2. Spatial scene analysis: estimate how many sources exist and where they are relative to the device (near/far, left/right, different rooms).
  3. Source separation + target selection: isolate the intended speaker stream(s) and suppress competing sources (TV, music, second speaker).
  4. Speech activity detection + endpointing: stable detection of speech start/stop to avoid clipped commands and reduce false triggers.
  5. Device-directed intent gating (SLM): a lightweight model answers: “Is this speech for the device?” using spatial cues + conversational flow + linguistic signals.
  6. Execution vs. escalation:
    1. Local path: deterministic actions and short commands (“lights on,” “stop,” “volume down”) with minimal latency.
    2. Cloud path: long-horizon reasoning, multi-device planning, and tasks requiring external knowledge—only when needed.

 The engineering advantage is that the system can stay fast and predictable for everyday commands while still enabling deeper capabilities when appropriate.
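The execute-vs-escalate decision in step 6 can be sketched as a simple router. The command names and the local command set here are illustrative assumptions, not the actual gating logic of any vendor's stack:

```python
# Minimal sketch of the execute-vs-escalate routing decision.
# The intent gate (SLM) has already decided whether speech is
# device-directed; this layer only picks where to handle it.

LOCAL_COMMANDS = {"lights on", "lights off", "stop",
                  "volume up", "volume down"}

def route(utterance, device_directed):
    """Return 'ignore', 'local', or 'cloud' for a recognized utterance."""
    if not device_directed:
        return "ignore"   # intent gate rejected it: not for the device
    if utterance in LOCAL_COMMANDS:
        return "local"    # reflex layer: deterministic, near-zero latency
    return "cloud"        # reasoning layer: escalate only when needed
```

A production gate would score spatial, conversational, and linguistic cues rather than match strings, but the routing shape is the same: ignore, execute locally, or escalate.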

Why spatial audio is the “make or break” layer

Most failures in today’s voice assistants begin before language: the system is fed garbage audio (mixed speakers, reverberation, background media), then asked to “understand” it. Hybrid architectures push the hard work earlier: fix the audio scene first, then do language.

Spatial processing matters because it enables three foundational capabilities:

  • Localization: determine where speech is coming from and whether it’s in the same room.
  • Separation: isolate a voice even with overlapping speakers and media noise.
  • Attribution: reduce wrong-room actions and improve “who said what” reliability.

This is also where direction of arrival (DOA)-only approaches struggle in real homes: reflective surfaces create strong echoes and multiple delayed arrivals. A “flat” directional estimate can become unstable under reverb, causing separation and attribution errors. A more robust approach treats each source as having a unique spatial signature (an “acoustic fingerprint”) and uses that signature to stabilize separation and tracking over time.

Latency, offline behavior, failure modes

If voice is going to replace physical controls, reliability can’t be an aspiration—it has to be engineered with explicit budgets and test matrices.

Latency budget

Humans pause roughly 200 ms between conversational turns, while cloud round trips often land in the 1–3 second range—good enough for Q&A, not good enough for control.

The reflex path should therefore be designed so the most common commands complete without waiting on the network.

Offline and “brownout” modes

Define tiers of capability that remain functional without connectivity:

  • Tier A (must work offline): lights, volume, stop/quiet, timers, basic routines.
  • Tier B (cloud-required): deep reasoning, external services.

This avoids a binary “voice works / voice is dead” experience and increases user trust.

Failure modes that must be tested (not treated as edge cases)

  • overlapping speakers (barge-in, crosstalk)
  • competing media (TV/music)
  • far-field speech + occlusion (speaker in hallway / adjacent room)
  • changing echo paths (content and volume changes)
  • reverberant rooms (kitchen tile, open-plan living spaces)

 Metrics that map to trust (beyond WER):

  • end-to-end command success rate by scenario class
  • false accept / false reject rates for device-directed intent gating
  • speaker attribution / room attribution accuracy
  • P95 latency (not just average) for Tier A commands
  • recovery time after connectivity loss

Why privacy and economics often improve in a hybrid design

A counterintuitive benefit of edge-first reflex layers is that they can be more private and more cost-stable than cloud-streaming approaches—because a large fraction of everyday interactions can be processed locally, and the cloud is invoked only when deeper reasoning is necessary.

On the economics side, cloud inference costs scale with usage, while edge compute is amortized with silicon volume and can reduce the need for continuous cloud processing for trivial requests.

One example of this architectural direction is Kardome, which focuses on combining spatial hearing (to separate and localize voices) with an on-device context-aware SLM (to decide whether speech is directed at the system), escalating to the cloud only when deeper reasoning is needed.

Dr. Alon Slapak is the co-founder and CTO of Kardome, a voice AI startup pioneering Spatial Hearing and Cognition AI technology that enables seamless, natural voice interaction in real-world noisy environments. He holds a Ph.D. from Tel Aviv University and brings deep expertise in acoustics, signal processing, and machine learning. Alon and co-founder and CEO Dr. Dani Cherkassky launched Kardome out of a shared passion for solving end-user frustrations with voice devices, combining their expertise in acoustics and advanced machine learning to build leading-edge voice user interface technology. Kardome has raised $10M in Series A funding.

Related Content

The post Designing the voice AI stack: Integrating spatial hearing AI with edge-based intent gating appeared first on EDN.

Qualitas Semiconductor Picks Anritsu’s Vector Network Analyzer for High-Speed Interconnect Signal Integrity Verification

ELE Times - Fri, 03/13/2026 - 13:22

Qualitas Semiconductor Co., Ltd., a leading developer specialising in PHY IP solutions for high-speed interconnects, has adopted Anritsu’s ShockLine 4-Port Performance Vector Network Analyzer (VNA) MS46524B to enhance signal integrity verification for its high-speed interface IP development. Qualitas has significantly improved the quality and reliability of its IP solutions by establishing a verification environment that enables highly accurate, repeatable signal-integrity evaluations across the entire system, including PHY IP.

Qualitas develops high-speed interface IP solutions, including SerDes PHY IP, PCI Express® PHY IP, UCIe interconnect solutions, and Ethernet PHY IP, and it collaborates with global customers across advanced semiconductor markets in fields such as AI, data centres, automotive, and mobile systems.

As semiconductor interface technologies continue to increase data transmission speeds, system-level verification that includes the characteristics of the entire interconnect channel, such as the PCB, package, and socket, has become increasingly important, rather than just the performance of the chips. In high-speed signal environments, factors such as transmission loss, reflection, and crosstalk affect signal integrity, making precise measurement-based verification environments essential.

To address these requirements, Qualitas has adopted Anritsu’s ShockLine MS46524B to analyse the characteristics of high-speed interconnect channels and quantitatively verify signal integrity, based on differential S-parameter analysis and time-domain reflectometry (TDR) measurements.

The ShockLine MS46524B provides high-frequency measurement stability, support for mixed probe and coaxial cable environments, and high-resolution TDR measurement capabilities, enabling precise analysis of subtle impedance variations occurring in the package and PCB structures. Through this approach, Qualitas has established a verification environment that is close to the conditions of real systems, enabling it to provide the reliability required in the PHY IP development process.

Anritsu highlights the importance of signal integrity verification solutions and measurement technologies that are required in next-generation interface technology environments, and it plans to support semiconductor and high-speed interface development companies in building more efficient verification environments.

The post Qualitas Semiconductor Picks Anritsu’s Vector Network Analyzer for High-Speed Interconnect Signal Integrity Verification appeared first on ELE Times.

The Tomorrow for AI and India’s edge advantage

ELE Times - Fri, 03/13/2026 - 11:36

Courtesy: Qualcomm

Artificial intelligence is entering its next chapter, one that reshapes not only how computing works, but how people experience technology in their daily lives. Intelligence is no longer just a feature, but is being built directly into devices and woven into systems and experiences so that it becomes ambient and always present.

In this next chapter, AI runs everywhere — across smartphones, PCs, wearables, cars, industrial machines, robots and connected infrastructure. These systems will understand context and the physical world around them and adjust in real time to our needs. Intelligence will operate quietly alongside us — working in the background, responding instantly, adapting continuously and ultimately expanding what’s possible in productivity, creativity and learning.

This marks a fundamental shift in how humans interact with technology. The interfaces we’ve relied on for decades — screens, apps, menus — will matter less as intelligence becomes more natural and intuitive. We won’t have to tell our devices what to do because they will understand our intent, anticipate what we want and act on our behalf. Some devices will increasingly see what we see, hear what we hear, understand what we read and write. In many cases, AI will feel less like a tool and more like a trusted assistant — always available, always learning and designed around us.

As agentic AI assistants become more common, they will become your personal companion in your home, the workplace and your car — everywhere you go. For example, in India, smart glasses are already being used to make digital payments using voice commands or by scanning a QR code. In your car, your AI assistant will not only help you find the fastest route but can also manage your errands, make recommendations or answer questions about places of interest.

In industries, edge AI boxes are being used to improve decision-making and operational efficiency, including monitoring and optimising production processes in a manufacturing facility or better managing inventory in a retail store.

Making these experiences real requires a new architecture — one where intelligence is distributed seamlessly across every computing device from cloud to edge. Training and deep reasoning will continue to scale in the cloud. At the same time, immediacy, perception and personalisation, as well as ambient and physical AI, will happen on devices — closer to people and things.

“To realise this future, democratizing access to AI is essential. That requires competitive and efficient data centre technology, powerful on-device intelligence and advanced connectivity working together.”

India’s size, diversity, economic growth and digital momentum make it one of the most important countries for AI’s next chapter. With hundreds of millions of connected users, a vibrant developer ecosystem, and deep expertise across engineering and software, India is not simply adopting AI — it is helping define how AI can work for the world.

In agriculture, AI can help enable precision farming and natural resource optimisation. Access to healthcare can be improved by on-device screening and diagnostics, which extend care into clinics, homes, and remote communities. AI will realise the vision of smart cities with intelligent traffic management, smart infrastructure, security, and more. And, AI-enabled devices, such as PCs, smartphones, and wearables, will make education more personalised and support continuous, lifelong learning. These are not abstract ideas; they are practical pathways to broader participation in the AI economy.

To realise this future, democratizing access to AI is essential. That requires competitive and efficient data centre technology, powerful on-device intelligence, and advanced connectivity working together. It also requires an ecosystem approach — bringing together industry, startups, academia, and policymakers to ensure innovation is trusted, accessible, and sustainable.

At Qualcomm, we’ve been building toward this future — advancing high-performance, power-efficient, and heterogeneous computing, AI, and wireless technologies that enable intelligence everywhere. But no single company can define AI’s next chapter alone. Progress will come from collaboration, from aligning technology with real-world needs, and from ensuring the benefits of AI extend beyond early adopters to entire societies.

With the right choices, India can help shape a future where intelligence empowers people, accelerates opportunity, and reaches every community — setting an example the world can follow.

The post The Tomorrow for AI and India’s edge advantage appeared first on ELE Times.

Marvell and Mojo Vision to co-develop high-density micro-LED connectivity solutions

Semiconductor today - Fri, 03/13/2026 - 10:39
Data infrastructure semiconductor solutions provider Marvell Technology Inc of Santa Clara, CA, USA and Mojo Vision Inc of Cupertino, CA, USA — which is pioneering a wafers-in, wafers-out micro-LED platform designed to enable AI applications — have announced a long-term collaboration to develop a new class of optical interconnect solutions to power the next wave of high-performance AI data-center infrastructure...

Posifa Technologies Introduces PVC4001-C MEMS Pirani Vacuum Transducer for Wide-Range Vacuum Measurement

ELE Times - Fri, 03/13/2026 - 08:46

Posifa Technologies has introduced its new PVC4001-C MEMS Pirani vacuum transducer, the latest device in the company’s PVC4000 series. Designed for cost-effective OEM integration, the transducer combines a MEMS thermal conduction sensor, measurement electronics, a microprocessor, and an onboard barometric pressure sensor in an ultra-compact PCB assembly with a connector-terminated wire harness.

Based on Posifa’s second-generation MEMS thermal conduction chip, the PVC4001-C operates on the principle that the thermal conductivity of gases is proportional to vacuum pressure. Its electronics and microprocessor amplify and digitise the sensor signal and provide output via an I²C interface. For applications requiring calibrated output, users can enter up to 10 pairs of calibration points, which are used by a built-in piecewise linearization algorithm.
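The built-in linearization described above is a standard technique: interpolate linearly between the nearest pair of user-entered calibration points. A minimal sketch of that idea is below; the function name, the (raw signal, pressure) pair format, and the sample values are illustrative assumptions, not the device's actual I²C API.

```python
# Sketch of a piecewise-linear calibration lookup, assuming calibration
# points are (raw_signal, true_pressure_torr) pairs. Illustrative only.
import bisect

def piecewise_linearize(cal_points, raw):
    """Map a raw sensor reading to pressure by interpolating linearly
    between up to 10 user-supplied calibration points."""
    cal = sorted(cal_points)                 # sort by raw signal value
    raws = [r for r, _ in cal]
    if raw <= raws[0]:                       # clamp below calibrated range
        return cal[0][1]
    if raw >= raws[-1]:                      # clamp above calibrated range
        return cal[-1][1]
    i = bisect.bisect_right(raws, raw) - 1   # segment containing `raw`
    (x0, y0), (x1, y1) = cal[i], cal[i + 1]
    t = (raw - x0) / (x1 - x0)               # fractional position in segment
    return y0 + t * (y1 - y0)

cal = [(0, 0.001), (500, 1.0), (1000, 10.0)]   # hypothetical points
print(piecewise_linearize(cal, 750))           # → 5.5
```

More calibration pairs shrink each segment, so the linear approximation tracks the sensor's nonlinear response more closely.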

The PVC4001-C is designed to deliver stable performance across changing operating conditions. A built-in temperature sensor supports a temperature compensation algorithm to offset changes in thermal conductivity caused by ambient temperature variation. In addition, a pulsed excitation scheme — in which the sensor is heated for about 100 ms and then turned off for one second — helps minimise drift due to self-heating in high vacuum, while also reducing power consumption for battery-powered instruments.
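The power benefit of the pulsed excitation scheme follows directly from its duty cycle: roughly 100 ms on followed by 1 s off means the heater draws current less than a tenth of the time. A back-of-the-envelope sketch, with the continuous heater power as an assumed illustrative figure (not a datasheet value):

```python
# Duty-cycle estimate for the pulsed excitation scheme described above:
# heater on ~100 ms, then off for 1 s.
on_ms, off_ms = 100, 1000
duty_cycle = on_ms / (on_ms + off_ms)      # fraction of time heated (~9.1%)

heater_power_mw = 10.0                     # hypothetical continuous draw
avg_power_mw = heater_power_mw * duty_cycle

print(f"duty cycle: {duty_cycle:.1%}")                      # → 9.1%
print(f"average heater power: {avg_power_mw:.2f} mW "
      f"vs {heater_power_mw} mW continuous")
```

Whatever the actual heater power, the same ~11x reduction applies, which is why the scheme helps both drift and battery life.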

The device provides a measurement range from 0.001 Torr to 900 Torr (1.3×10⁻⁴ kPa to 120 kPa) with a response time of less than 200 ms. Because Pirani vacuum sensors typically lose resolution above 10 Torr, the PVC4001-C adds an onboard barometric pressure sensor that supports measurement from 10 Torr to 760 Torr with 5% accuracy across that extended range. This combination makes the device especially well-suited for portable digital vacuum gauges and for leak detection in closed systems maintained under primary vacuum, including vacuum-insulated panels.
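Combining the two sensors amounts to range-based selection: trust the Pirani element in high vacuum and hand over to the barometric sensor above the point where Pirani resolution degrades. A minimal sketch of that logic; the threshold constant and function names are illustrative assumptions, not the device's firmware.

```python
# Sketch of two-sensor range selection: Pirani for high vacuum,
# barometric above ~10 Torr where Pirani resolution degrades.
PIRANI_MAX_TORR = 10.0   # assumed handover threshold, per the text above

def combined_pressure(pirani_torr, baro_torr):
    """Pick the more trustworthy reading for the current pressure regime."""
    if pirani_torr < PIRANI_MAX_TORR:
        return pirani_torr    # high vacuum: thermal-conduction sensor
    return baro_torr          # rough vacuum to atmosphere: barometric sensor

print(combined_pressure(0.05, 9.8))     # deep vacuum → 0.05 (Pirani)
print(combined_pressure(400.0, 412.0))  # near atmosphere → 412.0 (barometric)
```

A production gauge would likely blend the two readings smoothly around the threshold rather than switching abruptly, but the principle is the same.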

Additional features of the PVC4001-C include low power consumption, resistance to contamination, and an operating temperature range of -25 °C to +85 °C.

The post Posifa Technologies Introduces PVC4001-C MEMS Pirani Vacuum Transducer for Wide-Range Vacuum Measurement appeared first on ELE Times.

STMicroelectronics to support AI infrastructure demand with high-volume production of its industry-leading silicon photonics platform

ELE Times - Fri, 03/13/2026 - 07:26

STMicroelectronics is now entering high-volume production for its state-of-the-art silicon photonics-based PIC100 platform used by hyperscalers for optical interconnect for data centres and AI clusters. The 800G and 1.6T PIC100 transceivers enable higher bandwidth, lower latency, and greater energy efficiency as AI workloads surge.

“Following the announcement of its new silicon photonics technology in February 2025, ST is now entering high-volume production for leading hyperscalers. The combination of our technology platform and the superior scale of our 300 mm manufacturing lines gives us a unique competitive advantage to support the AI infrastructure super-cycle,” said Fabio Gualandris, President, Quality, Manufacturing & Technology, STMicroelectronics. “Looking ahead, we are planning and executing on capacity expansions to enable more than quadrupling of production by 2027. This fast expansion is fully underpinned by customers’ long-term capacity reservation commitments.”

“The data centre pluggable optics market continues to expand strongly, reaching $15.5 billion in 2025. We expect the market to grow at a compound annual growth rate (CAGR) of 17% from 2025 through 2030, surpassing $34 billion by the end of the forecast period. In addition, co-packaged optics (CPO) will emerge as a rapidly growing segment, contributing more than $9 billion in revenue by 2030. Over the same period, the share of transceivers incorporating silicon photonics modulators is projected to increase from 43% in 2025 to 76% by 2030,” said Dr. Vladimir Kozlov, CEO and Chief Analyst at LightCounting. “ST’s leading silicon photonics platform, coupled with its aggressive capacity expansion plan, illustrates its capabilities to provide hyperscalers with secure, long-term supply, predictable quality, and manufacturing resilience.”
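The quoted projection is internally consistent: $15.5 billion compounding at 17% over the five years from 2025 to 2030 does land just under $34 billion, as a quick check shows.

```python
# Sanity check of the LightCounting figures quoted above:
# $15.5B in 2025 growing at 17% CAGR through 2030 (5 compounding years).
market_2025_bn = 15.5
cagr = 0.17
years = 5

market_2030_bn = market_2025_bn * (1 + cagr) ** years
print(f"projected 2030 market: ${market_2030_bn:.1f}B")  # → $34.0B
```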

Upcoming PIC100 TSV Platform Technology

AI infrastructure is experiencing unprecedented scaling, with cloud-optical interconnect performance becoming a critical bottleneck. Drawing on years of silicon photonics innovation, ST’s PIC100 platform provides state-of-the-art optical performance, including best-in-class silicon and silicon nitride waveguide losses (respectively as low as 0.4 and 0.5 dB/cm), advanced modulator and photodiode performance, as well as an innovative edge coupling technology.

In parallel with high-volume PIC100 production, ST is planning to introduce the next step in its silicon photonics technology roadmap: the PIC100 TSV, a new and unique platform that integrates through-silicon via (TSV) technology to further increase optical connectivity density, module integration, and system-level thermal efficiency. The PIC100 TSV platform is designed to support future generations of Near Packaged Optics (NPO) and co-packaged optics (CPO), aligning with hyperscalers’ long-term migration paths toward deeper optical–electronic integration for scale up.

The post STMicroelectronics to support AI infrastructure demand with high-volume production of its industry-leading silicon photonics platform appeared first on ELE Times.

My Smart Wall Clock

Reddit:Electronics - Thu, 03/12/2026 - 23:15

I designed the case myself. It runs on an ESP32-C3 with the WiFiManager library. The time updates automatically :)

submitted by /u/udfsoft
[link] [comments]

Just started the ICL7135-based multimeter

Reddit:Electronics - Thu, 03/12/2026 - 23:03

Yes, I will try to build precise voltage/current measurement equipment from scratch just for fun. Wish me luck.

One step at a time:
- 5-digit multiplexed display with the К176ИД2 driver
- MC34063 negative-rail DC-DC converter
- 555 timer 120 kHz clock source
- REF3333 precision voltage reference

submitted by /u/nerovny
[link] [comments]

University of Sheffield to lead £12.5m UK Centre for Heterogeneous Integrated MicroElectronic and Semiconductor Systems

Semiconductor today - Thu, 03/12/2026 - 20:22
The University of Sheffield is leading a new £12.5m national research centre to strengthen the UK’s ability to design the next generation of advanced electronic systems and support the ambitions of the UK Semiconductor Strategy...

Low-cost MCUs enable smarter embedded devices

EDN Network - Thu, 03/12/2026 - 19:43

Leveraging ST’s 40-nm process and an Arm Cortex-M33 core, STM32C5 MCUs deliver increased speed for cost-sensitive embedded devices. The microcontrollers run faster than many entry-level chips, improving the capabilities of compact smart devices in factories, homes, cities, and infrastructure while keeping dynamic power consumption low (<80 µA/MHz).

Running at 144 MHz and achieving a CoreMark score of 593, the Cortex-M33 offers up to three times the performance of typical Cortex-M0+ devices. ST’s 40-nm cost-efficient manufacturing process supports higher clock speeds and larger on-chip memory. The STM32C5 series integrates 128 KB to 1024 KB of flash and 64 KB to 256 KB of RAM.
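The quoted benchmark implies an efficiency figure worth spelling out, since CoreMark/MHz is the usual way entry-level MCU cores are compared:

```python
# Efficiency implied by the numbers above: CoreMark 593 at 144 MHz.
coremark, freq_mhz = 593, 144

coremark_per_mhz = coremark / freq_mhz
print(f"STM32C5 efficiency: {coremark_per_mhz:.2f} CoreMark/MHz")  # → 4.12
```

That roughly 4.1 CoreMark/MHz is typical of a Cortex-M33, versus the ~2.4 commonly cited for Cortex-M0+ parts, which, combined with the higher clock, accounts for the "up to three times" claim.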

The MCUs are designed to meet SESIP3 and PSA Level 3 security requirements, with memory protection, tamper protection, cryptographic engines, and temporal isolation to protect processes such as secure boot and firmware updates. Variants with additional security provide hardware unique key support, secure key storage, and hardware cryptographic accelerators for symmetric and asymmetric operations.

The STM32C5 MCUs are entering production now and are available in packages ranging from 20 to 144 pins. Pricing starts at $0.64 each in 10,000-unit quantities.

STM32C5 product page 

STMicroelectronics

The post Low-cost MCUs enable smarter embedded devices appeared first on EDN.

Pages

Subscribe to the Кафедра Електронної Інженерії content aggregator