Feed aggregator
Enhancing campus security at Igor Sikorsky Kyiv Polytechnic Institute together with SHERIFF
The cooperation between Igor Sikorsky Kyiv Polytechnic Institute and the SHERIFF company delivers a comprehensive security system that operates 24/7:
Arrow Electronics Launches Web-based “Digital Test Drive” to Streamline Hardware Testing
Arrow Electronics (NYSE: ARW) today announced the launch of Digital Test Drive, a cloud‑based remote engineering service that helps technology developers evaluate hardware faster, reduce costs and improve productivity.
Through a secure, private web link, individual users and distributed teams can instantly connect to a preconfigured virtual machine and, via the cloud, work directly with physical development boards hosted in Arrow's engineering labs. Users can remotely control evaluation kits, access software environments, run tests and view results in real time. Workshops, training, product demonstrations and live support from Arrow's technical experts are also available.
Digital Test Drive simplifies early‑stage testing and collaboration by helping eliminate common barriers such as kit availability, shipping delays, customs paperwork, platform comparisons, complex setup and software installation, which helps businesses shorten development cycles and accelerate decision‑making.
“Digital Test Drive helps remove the delays and complexity that slow product development,” said Murdoch Fitzgerald, chief growth officer of global services for Arrow’s global components business. “There’s no shipping, no setup and fewer up‑front costs, just instant access to the tools engineering teams need to work more efficiently.”
Digital Test Drive complements Arrow’s existing Test Drive program that allows customers to borrow physical hardware for on‑site evaluation for up to 28 days.
More information:
Digital Test Drive – Remote Hardware Testing
About Arrow Electronics
Arrow Electronics (NYSE: ARW) sources and engineers technology solutions for thousands of leading manufacturers and service providers. With 2025 sales of $31 billion, Arrow helps enable innovation across major industries and markets. Learn more at arrow.com.
The post Arrow Electronics Launches Web-based “Digital Test Drive” to Streamline Hardware Testing appeared first on ELE Times.
❤️ Join a special cause: blood donation for wounded soldiers!
▫️ The «KOLO» charitable foundation and the Blood Center of the Armed Forces of Ukraine, in cooperation with Igor Sikorsky Kyiv Polytechnic Institute, invite you to donate blood for wounded soldiers!
Xanadu and EVG partner on heterogeneous integration and wafer bonding processes for photonic quantum systems
Veeco receives $250m+ in equipment orders for manufacturing InP lasers
From Updates to Intelligence: How OTA, Data, and Ethernet Are Reshaping Vehicles
In an exclusive interview with ELE Times, Shrikant Acharya, CTO and Co-founder of Excelfore, outlines how vehicles are evolving from simple update-driven systems to intelligent, data-centric platforms. He explains the distinction between OTA updates and data aggregation within a unified lifecycle pipeline, while highlighting innovations such as adaptive delta compression and distributed architectures. Acharya also explores the growing role of Ethernet, AI, and scalable system design in shaping software-defined vehicles, positioning India as a key market in this transformation.
ELE Times: Could you elaborate on OTA updates and how they differ from in-vehicle data-related processes? Also, what differentiates your OTA solution in this evolving landscape?
Excelfore:
It is important to distinguish between OTA updates and data aggregation. OTA primarily refers to a one-way process—delivering updates from infrastructure to the device. In contrast, extracting data from the device back to the infrastructure is better described as data aggregation. When viewed as a unified pipeline, both functions contribute to lifecycle management. Updates are deployed to improve or fix device functionality, while data is retrieved to evaluate performance, detect issues, and validate those updates through analytics.
From a technical standpoint, OTA updates are asynchronous and involve large data transfers, often several gigabytes, especially in systems like Android-based infotainment. Conversely, data retrieval is typically synchronous or near-real-time, requiring smaller, segmented packets to ensure continuity and responsiveness, thereby maintaining the real-time nature of the aggregation. In essence, while both operate within the same pipeline, OTA updates and data aggregation serve fundamentally different purposes: one enables corrective action, while the other supports monitoring and analysis.
OTA has evolved significantly, from early implementations in industrial systems to its adoption in automotive environments. Initial solutions, such as those derived from mobile update frameworks, were primarily suited for infotainment systems and can be considered first-generation approaches. Our solution, by contrast, represents a more advanced, third-generation architecture. A key innovation lies in its plug-and-play capability: devices entering the network authenticate themselves through certificates and register dynamically. The client system acts as a generic dispatcher without embedded knowledge of the vehicle or environment, enabling deployment across diverse ecosystems.
Another major advancement is the distributed architecture. Complexity is intentionally removed from the communication pipeline and instead distributed between the server and device. This approach ensures scalability, simplifies integration, and allows seamless accommodation of legacy systems. OEMs can retain existing device management frameworks while selectively adopting newer capabilities.
Agents within devices handle updates, ensuring structured execution while maintaining flexibility. This modular and distributed design is central to our differentiation, and it also helps OEMs preserve their legacy investments.
ELE Times: Could you explain the concept and significance of adaptive delta compression? How does this approach optimize bandwidth and system performance?
Excelfore:
Traditionally, software updates required transmitting the entire payload. Delta compression improves efficiency by sending only the differences between software versions, significantly reducing bandwidth usage and update time. However, managing these differential files over time creates a substantial IT burden for OEMs. Our approach shifts this responsibility to the server-client system. The server dynamically determines when and how to generate and transmit delta updates, eliminating the need for OEMs to manage them manually.
Also, if the main channel should not be used to send these large files, the server can simply provide a reference URL for the payload; the agent then sets up an independent connection and pulls it down. The "adaptive" aspect introduces intelligence into this process: the system evaluates multiple parameters, such as device memory, processing capability, network interface (CAN, LIN, Ethernet), and connection speed, to determine the most efficient compression strategy.
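The core mechanism, sending only the differences between versions, can be sketched in a few lines of Python. Excelfore's actual wire format is not public, so the `copy`/`insert` op encoding against the installed image used here is purely illustrative:

```python
import difflib

def make_delta(old: bytes, new: bytes):
    """Encode `new` as ops against `old`: ("copy", i, j) reuses old[i:j],
    while ("insert", data) carries only the bytes that actually changed."""
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(
            None, old, new, autojunk=False).get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))
        elif tag in ("replace", "insert"):
            ops.append(("insert", new[j1:j2]))
        # tag == "delete": old[i1:i2] is simply not copied forward
    return ops

def apply_delta(old: bytes, ops) -> bytes:
    """Reconstruct the new image on-device from the old image plus ops."""
    out = bytearray()
    for op in ops:
        out += old[op[1]:op[2]] if op[0] == "copy" else op[1]
    return bytes(out)
```

Only the `insert` payloads cross the network; for a small change to a multi-gigabyte image, that is the entire bandwidth saving.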
Additionally, large payloads are handled via separate channels, ensuring that the primary communication pipeline remains responsive for critical operations such as authentication and command execution.
Regarding optimization, it is achieved by tailoring data packets to device constraints. For instance, if a device has limited cache capacity, the system ensures that data units fit precisely within that space. This avoids inefficiencies caused by partial data processing and repeated memory access. Beyond cache considerations, factors such as network speed and interface type are also evaluated. The system assigns weighted parameters to these variables and generates an optimal data transfer strategy, ensuring efficient utilization of bandwidth while maintaining system performance.
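The weighted-parameter selection described above might look like the following sketch. The candidate sizes, weights, and scoring terms are assumptions for illustration, not Excelfore's actual algorithm:

```python
def pick_chunk_size(cache_bytes, link_speed_bps, candidates=(4096, 16384, 65536)):
    """Score candidate transfer-unit sizes against device constraints.
    A chunk must fit the device cache; among those that do, prefer larger
    chunks on fast links (less per-chunk overhead) and smaller ones on
    slow links (cheaper retransmission on error). Weights are assumed."""
    best, best_score = None, float("-inf")
    for size in candidates:
        if size > cache_bytes:               # hard constraint: must fit in cache
            continue
        fill = size / cache_bytes            # reward filling the cache exactly
        latency = size * 8 / link_speed_bps  # per-chunk transfer time, seconds
        score = 0.7 * fill - 0.3 * latency   # illustrative weighting
        if score > best_score:
            best, best_score = size, score
    return best
```

A device with an 8 KB cache is forced down to 4 KB chunks regardless of link speed, which matches the "data units fit precisely within that space" constraint in the interview.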
ELE Times: With the rise of SDVs and advanced features, how do you see networking technologies evolving?
Excelfore:
Ethernet has emerged as the dominant in-vehicle networking standard due to its scalability, cost efficiency, and high bandwidth capabilities. Earlier technologies like FlexRay served as transitional solutions but have largely been superseded.
While legacy systems such as CAN will continue to exist due to installed base constraints, advancements like 10 Mbps multi-drop Ethernet are increasingly capable of replacing them.
Time-Sensitive Networking (TSN) plays a crucial role, particularly in time synchronization and deterministic data transmission. Combined with Quality of Service (QoS) mechanisms, it enables efficient bandwidth utilization—often achieving up to 85–90% channel efficiency compared to significantly lower utilization without traffic management.
ELE Times: How are SDVs reshaping vehicle architecture and OEM strategies? How do you view the evolution of SDVs and connected vehicles in India?
Excelfore:
The term SDV is often used loosely, but its true definition involves a standardized hardware platform whose functionality can be dynamically reconfigured through software.
Architecturally, the industry has evolved from domain-based systems to zonal architectures with centralized computing. Zonal controllers process localized data, which is then transmitted to central compute units for decision-making.
This shift introduces challenges, particularly in thermal management, as high-performance compute systems generate significant heat. Cooling solutions have thus become a critical component of system design.
India presents a unique opportunity, having bypassed several legacy stages of technological evolution. This allows for a more forward-looking approach, with fewer constraints from outdated systems. There is a strong willingness to adopt advanced technologies based on value and functionality. This mindset, similar to what was observed in China during its rapid technological growth phase, creates a favorable environment for innovation.
For technology providers, this openness enables deeper collaboration and the deployment of cutting-edge solutions, positioning India as a promising market for SDVs and connected vehicle ecosystems.
ELE Times: What role do you see AI playing in OTA and SDV ecosystems?
Excelfore:
AI adoption in vehicles is constrained by cost and computational limitations. As a result, the focus is shifting toward domain-specific, lightweight models rather than large, generalized AI systems.
While generative AI will primarily reside in the cloud, vehicles will utilize smaller models tailored to specific functions—such as diagnostics or object detection. One practical application is the digitization of vehicle manuals, enabling intelligent interpretation of diagnostic codes and user-friendly outputs.
However, monetization will be a key factor. Advanced AI-driven features are unlikely to be offered free of cost and will likely be delivered as subscription-based services.
ELE Times: How do you ensure safety and integrity in OTA updates, especially for critical systems?
Excelfore:
Data integrity is ensured through mechanisms such as SHA-256 hashing, which verifies that transmitted data remains unaltered. If discrepancies are detected, updates are rejected.
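The check described here maps directly onto standard library primitives. A minimal sketch (the surrounding manifest and update protocol are assumed, not Excelfore's implementation):

```python
import hashlib
import hmac

def verify_update(payload: bytes, expected_sha256_hex: str) -> bool:
    """Accept the update only if the payload's SHA-256 digest matches the
    digest published alongside the update; otherwise it is rejected."""
    actual = hashlib.sha256(payload).hexdigest()
    # constant-time comparison avoids leaking digest prefixes via timing
    return hmac.compare_digest(actual, expected_sha256_hex.lower())
```

Any single flipped bit in transit changes the digest, so the tampered payload fails the comparison and the update is discarded.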
Authentication is enforced through digital certificates, establishing both device identity and software origin. Additionally, encryption ensures that only the intended device can decode and execute the update.
A critical vulnerability lies in key management during manufacturing. Protecting private keys is essential, as any compromise at this stage can undermine the entire security framework.
The post From Updates to Intelligence: How OTA, Data, and Ethernet Are Reshaping Vehicles appeared first on ELE Times.
An IV-11 VFD Tube Clock I designed and built from scratch! [KiCad + Arduino + Custom PCB]
Hello everyone! About 2 months ago on a whim I ordered 6x of these IV-11 VFD tubes from eBay, and decided I wanted to design and build my very own VFD tube clock! After getting good tips and feedback on reddit, prototyping everything on a breadboard, designing a custom PCB, and soldering it all together, here's the finished result! This is my first real personal project as a new EE major and I'm thrilled with how it turned out. The clock runs on an Arduino Nano Every with 6x daisy-chained 74HC595 shift registers and UDN2981A high-voltage source drivers, one pair per tube. The anode and grid rails run at 25V from a boost converter, and the filament runs at 1.5V from a buck converter, all from a single 5V USB supply. A full writeup covering design decisions, schematic, and PCB layout is on my GitHub Repo. Stars are appreciated! :)
UK Semiconductor Centre appoints first CEO
The Faculty of Informatics and Computer Engineering of KPI celebrated Day F
🔹 This year the traditional faculty celebration went beyond the usual festivities. The FICE Hub operated as a large coworking space where students met recruiters from top IT companies, attended lectures, and honored the faculty's most active students and the winners of cyber competitions.
Minecraft-inspired compass, Pt. 1
Work in progress! I was inspired by the numerous Minecraft-inspired IRL compasses out there, and the fact that I could not buy one was the driving force to develop my own. The needle animation works fine; it is the first step toward the final product. I built it from the ground up (custom LED matrix, 3D-printed housing). If anyone needs more details on the hardware or anything related to the project, feel free to comment and ask away. P.S. It is still a prototype, so it's a little janky and the custom PCB is held by some tape.
Power Integrations appoints Mike Balow as senior VP, worldwide sales
KPI received the Platinum Educational Partner award from SoftServe
✔ The Platinum Educational Partner award from SoftServe recognizes Kyiv Polytechnic's systematic work in developing modern education and its close cooperation with the IT industry.
AXT’s revenue grows 17% in Q1 after greater-than-expected export permits
ST unveils 100W VIPerGaN converters for energy-efficient appliances
From edge AI to physical AI in smart factories: A shift in how machines perceive and act

The concept of the “smart factory” has evolved significantly over the past decade. Early industrial AI deployments, often categorized as Industry 4.0, focused on centralized analytics. This typically involved collecting data from machines, transmitting it to the cloud, and generating insights for later action.
While useful for optimization and reporting, that model is no longer sufficient. What’s changing now is not just where AI runs, but how it operates—shifting from centralized analysis to systems that can perceive, decide, and act in real time within the physical environment.
Today’s factories demand intelligence that operates in real time, directly at the point of action. Whether detecting defects on a production line, coordinating robotic motion, or identifying safety hazards, AI is increasingly expected to function as an always-on, embedded capability within industrial systems.
This shift marks a broader transition in smart factories, from traditional edge AI toward more contextual awareness and autonomous operation: systems that not only analyze data, but perceive, decide, and act within the physical world. While the promise is substantial, realizing it introduces a new set of technical challenges that require purpose-built solutions.
Why edge AI is moving closer to the machine in smart factories
Several converging forces are pushing AI workloads out of centralized infrastructure and toward the factory floor, where real-time interaction with physical systems is required.
Latency is among the most critical. In applications such as robotics, inspection, and safety monitoring, even small delays can result in defects, downtime, or safety risks. Round-trip communication to the cloud is often incompatible with these requirements. This is further compounded by the fact that many industrial environments operate with constrained, segmented, or variable network connectivity, making consistent low-latency cloud access difficult to guarantee.
Data volume is another key driver. Modern industrial systems generate vast streams of multimodal data—high-resolution video, audio signatures, vibration patterns, and increasingly, tactile inputs. Transmitting all of this data offsite is not only expensive but also unnecessary. In most cases, only a small fraction of events—such as anomalies, defects, or threshold violations—require action, making local inference far more efficient.
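The economics described here, ship only the exceptions rather than the raw stream, can be sketched with a simple local filter. The fixed threshold is a toy stand-in for a real on-device inference model:

```python
def filter_events(samples, expected, tolerance):
    """Keep only the (index, value) readings outside the expected band.
    In a deployed system an on-device model plays this role, and only the
    flagged anomalies (plus periodic summaries) ever leave the factory."""
    return [(i, x) for i, x in enumerate(samples)
            if abs(x - expected) > tolerance]
```

For a vibration trace of thousands of samples per second, transmitting a handful of flagged events instead of the full stream is the difference the article quantifies.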

Figure 1 The transition from centralized AI to edge AI represents a fundamental shift in industrial computing. Source: Synaptics
Security and data sovereignty further reinforce this trend. Manufacturing processes and operational data are highly sensitive, and many organizations prefer to keep raw data within controlled environments.
The emergence of physical AI
On top of those factors, as AI moves closer to machines, its role is expanding. Instead of simply classifying or predicting, systems are beginning to interact with their environments in more dynamic ways.
This is the essence of physical AI in industrial systems, which can:
- Interpret complex, multimodal sensory input in real time
- Adapt to changing physical conditions
- Execute actions with precise timing and coordination

Figure 2 Edge AI-enabled systems now interact with their environments in more dynamic ways. Source: Synaptics
Consider robotics as a leading example. Advances in tactile sensing now allow robotic systems to “feel” objects, adjusting grip force based on material properties. In one recent deployment developed with our partner Grinn, a robotic hand integrates distributed touch sensing with embedded machine learning, enabling nuanced manipulation of objects ranging from fragile materials to rigid components.
Such capabilities represent a shift from scripted automation to adaptive, context-aware behavior, bringing machines closer to human-like interaction with the physical world.
Key challenges in deploying edge and physical AI
Despite the momentum, implementing AI at the edge, and especially physical AI, presents several challenges.
- Balancing performance and power
Industrial AI systems must operate continuously, often in constrained thermal and power environments. Unlike data centers, where peak performance is the primary metric, factory deployments prioritize sustained performance per watt.
Always-on workloads, for instance, predictive maintenance or safety monitoring, require efficient architectures that can run continuously without excessive energy consumption.
- Managing workload diversity
Industrial AI is inherently multimodal. A single system may combine:
- Vision for inspection
- Audio for anomaly detection
- Vibration analysis for predictive maintenance
- Sensor fusion for robotics and control
These workloads have different computational characteristics, making it difficult to rely on a single type of processor. Increasingly, heterogeneous architectures that combine CPUs, GPUs, NPUs, and specialized sensors are required to efficiently handle diverse tasks.
- Ensuring long-term reliability
Industrial systems often remain in operation for years or even decades. This creates unique requirements around:
- Silicon longevity and availability
- Stable software ecosystems
- Predictable behavior across revisions
Frequent hardware changes or software incompatibilities can disrupt operations and increase lifecycle costs.
- Addressing model drift and lifecycle management
Unlike controlled lab environments, factories are dynamic. Lighting conditions change, materials vary, and equipment degrades over time. These factors can lead to model drift, where AI performance degrades after deployment.
Addressing this requires:
- Continuous monitoring and validation
- Local recalibration capabilities
- Secure, manageable update mechanisms
AI in industrial environments must be treated not as a static feature, but as a lifecycle-managed subsystem.
- Integrating compute and connectivity
As systems become more distributed, the interaction between compute and connectivity becomes critical. Many manufacturers still rely on separate vendors for processing and wireless communication, leading to integration challenges and fragmented support models.
In physical AI systems, high-bandwidth, low-latency data movement between sensors, processors, and actuators is essential for safe and reliable operation.
The role of Wi-Fi 7 and next-generation connectivity
Connectivity is often a critical enabler of physical AI in smart factories, where real-time coordination between distributed systems depends on low-latency, high-reliability communication. As industrial systems scale in complexity and device density, traditional wireless technologies struggle to meet performance requirements.
Advancements in Wi-Fi and Bluetooth are addressing these requirements, but wireless connectivity can no longer be viewed as a standalone, discrete capability. Without deterministic, high-reliability links, many physical AI use cases, particularly those requiring coordination across multiple systems, are simply not feasible.
There is a growing need, and clear benefits, in integrating processing and connectivity. This helps reduce system complexity, improve reliability, strengthen security, and simplify development for design teams.
Bringing together connectivity and processing changes how design decisions are made early in the product lifecycle. When core system functions work together, teams can simplify architecture choices from the outset and reduce the number of variables that typically slow progress.
Integrating connectivity and compute has benefits beyond the engineering and manufacturing phase. Over the lifetime of a product, integration helps reduce power consumption, lower device weight, and decrease overall system cost. At scale, even small reductions in size, mass, and power can translate into meaningful savings across production, shipping, and years of deployment.
Of course, wireless performance, range, and reliability are still critical in their own right. While existing Wi-Fi and Bluetooth standards have advanced the state of wireless connectivity, the emergence of Wi-Fi 7 introduces capabilities that enable more scalable and deterministic edge AI, supporting higher device densities and more predictable low-latency communication in smart factory environments.
- Multi-link operation (MLO) allows devices to transmit data simultaneously across multiple frequency bands. This provides redundancy and helps maintain consistent, low-latency communication even in environments with interference or congestion.
- Wider channel bandwidth (up to 320 MHz) supports high-throughput applications such as machine vision, where large volumes of image data must be transmitted quickly and reliably.
- Higher spectral efficiency (via 4K QAM) enables more devices to share the same wireless spectrum without degrading performance, an essential feature as industrial systems scale.
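The spectral-efficiency gain from 4K QAM is easy to quantify: a constellation of M points carries log2(M) bits per symbol, so moving from Wi-Fi 6's 1024-QAM to Wi-Fi 7's 4096-QAM raises the per-symbol payload by 20%. A quick check:

```python
import math

def bits_per_symbol(constellation_points: int) -> int:
    # each transmitted symbol selects one of M constellation points,
    # so it carries log2(M) bits
    return int(math.log2(constellation_points))

# Wi-Fi 6 tops out at 1024-QAM; Wi-Fi 7 adds 4096-QAM ("4K QAM")
gain = bits_per_symbol(4096) / bits_per_symbol(1024) - 1  # 12/10 - 1 = 20%
```

Combined with the doubled 320 MHz channel width, this is where the headline Wi-Fi 7 throughput figures come from.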
Toward a new system architecture
The convergence of edge AI, physical AI, and advanced connectivity is reshaping how industrial systems are designed, requiring more integrated and system-level approaches.
Some guiding principles to consider in developing such intelligent deployments are:
- Start with system constraints
Rather than beginning with AI models, successful deployments start with system-level requirements:
- Latency and timing constraints
- Power and thermal limits
- Reliability and safety considerations
These factors should guide architecture decisions, including silicon selection and model design.
- Embrace distributed intelligence
Instead of centralizing all processing, intelligence should be distributed across the system:
- Sensor-level processing for early data reduction
- Edge inference for real-time decisions
- Connection to cloud-based training and optimization for continuous improvement
This layered approach balances performance, efficiency, and scalability.
- Design for multimodal integration
Physical AI systems rely on combining multiple sensing modalities. Architectures must support efficient data fusion and coordination across these inputs.
- Treat AI as a lifecycle capability
Deployment is only the beginning. Ongoing monitoring, updates, and optimization are essential to maintaining performance over time.
The path forward
The smart factory is no longer defined solely by automation, but by intelligence embedded throughout the system, enabling decision-making that operates in real time, adapts to its environment, and interacts with the physical world.
This transition from centralized AI to edge AI represents a fundamental shift in industrial computing. Performance and accuracy are still important, but what matters most is whether AI can operate reliably under real-world constraints: continuously, efficiently, securely, and in close coordination with physical processes.
Advances in heterogeneous computing, integrated connectivity, and open software ecosystems—as evidenced by AI-native platforms such as the Synaptics Astra Platform—are enabling this shift.
As these elements come together, the factory floor is becoming not just automated, but perceptive and adaptive, composed of increasingly autonomous systems that do more than execute tasks; they understand context and respond accordingly.
Neeta Shenoy is VP of marketing at Synaptics.
Special Section: Smart Factory
- Rethinking machine vision in industrial automation
- Smart factory: The rise of PoE in industrial environments
- Precision lasers boost safety and efficiency in smart factories
- Tale of 3 sensors operating in smart factory environments
The post From edge AI to physical AI in smart factories: A shift in how machines perceive and act appeared first on EDN.
Navitas appoints Davin Lee as independent director
Aixtron’s Q1 revenue down 47% year-on-year, but opto drives 30% growth in orders
The Blue (now Logitech) Snowball iCE: This mic sounds nice

This audio-capture computer peripheral contains an integrated-transistor pickup capsule and a hunk of metal.
Back in November 2022, EDN published my introductory tutorial on standalone microphones—single- vs. multi-element, electret condenser vs. dynamic (including the associated necessity-or-not of a separate preamp), and analog vs. digital interface (and variants of each)—along with a separate piece on system-integrated mics a couple of months later.
I followed up those conceptual pieces with a USB-interface mic teardown in October 2023. And in both standalone-mic coverage cases, I mentioned (among others) one other USB-interface product, Blue’s (now Logitech G’s) Snowball, two examples of which were in my possession.

The Snowball, which supports both omnidirectional and cardioid pickup patterns, remains on my teardown pile. Stay tuned; it’s supposedly based on dual 14-mm electret condenser capsules, although there’s some controversy here, which I hope to sort out by putting my own eyes on the situation.
What we’re taking apart today is its spherical “little brother”, the cardioid-only Snowball iCE, which comes in both black and white color variants. I’ll start with some stock shots of my black-color ones, one of which I’ll be disassembling (non-destructively, hopefully).






Mine were a $40 (post-20%-off promo discount) two-pack ($20 each) bought from Woot in early 2024. Woot’s posting included a few other stock images I thought you’d find interesting.



While the mics themselves were brand new, their blank-cardboard and scant bubble wrap on-arrival packaging was definitely not retail-grade.


This last shot, along with others that follow it, as usual includes a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes.

I’ll start with the “extras”: a modest-but-functional tripod stand that screws into the mic underside, along with a legacy USB-A to mini-USB cable and a sliver of literature.

Now for our dissection patient. Front:

Left side:

Rear, showcasing the aforementioned mini-USB connector (when’s the last time you saw one of those?) leveraged for both power and digital audio transfer purposes:

Right side, completing the circle:

And, last but not least, the top:

And bottom, showcasing the “adjustable desktop stand” mentioned in one of the earlier stock images (and implemented via a swivel mount in the microphone, mind you, versus anything to do with the stand itself):

For those of you curious about what the sticker circumnavigating the mic says, here are four consecutive segment snapshots for you to verbiage-glue together in your mind.
Severing the sphere
And now to get ‘er apart. In the earlier rear view, you might have noticed what looked like four screw holes, one in each corner. Kudos: you were right. It took me a bit of wading through my screwdriver collection to find one that:
- Had the right screw bit tip type-and-size
- With a bit that was both narrow enough to fit within the hole and
- Long enough to reach the screw heads deeply embedded inside

At that point, I expected the two halves of the sphere to neatly detach. But no. The previously mentioned sticker was still holding them together. There were two stickers, actually, as it turns out; the smaller one communicated device-specific info such as the serial number.


While the larger one handled the two-halves adhesion duties:


After I peeled it off, I thought its underside looked nifty and decided to share it with you, too.

And now the two halves of the sphere neatly detached:

Let’s first look at the moveable mount that fell out when the halves separated.


I trust many of you have already guessed that the red-and-black cable harness still connecting the two halves, which I promptly detached, is for the red LED. It only references the presence (or absence) of power to the microphone, by the way; there’s no integrated mute switch or any other reason for the LED to blink or otherwise communicate status.

There’s a notch in the internal assembly’s PCB that normally slots into a bracket at the inside back half of the microphone. With the two halves detached, the PCB slides out straightaway.

Assembly front view first:
Blue-now-Logitech claims that the 14-mm element is a “custom cardioid condenser capsule designed to deliver clear audio for recording and streaming, providing a significant upgrade over standard built-in computer microphones”. Marketing blah blah blah. Admittedly, it does review well, particularly considering its economical price tag. But its notable (IMHO) aspect, which I came across in my research, courtesy of a blogger who upgraded his, is its silicon integration:
The capsule in the Snowball is a 14-mm electret with an in-built FET that bears a striking resemblance to a JLI-140A-T. It uses a three-wire connection to the mic’s PCB, one each for the FET’s drain and source, and one for gate/ground. This means any electret with an in-built FET with all three pads brought out should work just as well (emphasis on “should”).
The fundamental purpose of the FET (alternatively a vacuum tube in some designs) is for impedance conversion and associated signal gain, thereby rationalizing why one well-known external mic preamp line is branded the “FetHead”. This thread on the Electrical Engineering Stack Exchange site gives a nice summary, complete with schematics and a conceptual diagram.

Now for the left-side perspective:
Normally, when I see a hunk of metal, I assume that at least one of its primary purposes is to act as a heatsink. Not in this case. It just adds “heft” to the Snowball iCE, holding it in place on the user’s desktop (in partnership with the rubber-tipped stand “feet”) and suppressing ambient vibrations from being picked up by the capsule (along with the flexible rubber mount that mates it with the rest of the assembly). Here’s a bottom-side view, further showcasing the “hunk of metal”:
Back to the side views, next of the back of the assembly (with the mini-USB connector obscured by the ever-present penny, apologetically):
And finally, the right side:
Now for the perspective you all care about, that of the assembly-including-PCB topside:
Zooming in on the PCB itself, and after disconnecting the capsule cable harness:
The dominant IC on the landscape, toward the center of the PCB, is (unsurprisingly, given the mic’s digital output) the audio ADC-plus-USB interface device, C-Media Electronics’ CM6327A. This chip also embeds an I2C interface, used to communicate with the Fremont Micro Devices FT24C02A 2 Kbit serial EEPROM in the lower left corner (presumably housing system firmware).
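As an aside on how the CM6327A finds that EEPROM on the bus: 24C02-class parts follow the standard I2C addressing scheme of a fixed 0b1010 prefix plus three hardware-strapped pins. The pin strapping below is an assumption for illustration, not something I verified on the Snowball’s board:

```python
# 24C02-family EEPROMs (the FT24C02A included) answer on a 7-bit I2C address
# built from a fixed 0b1010 prefix plus three address pins A2..A0 strapped in
# hardware. The strapping used here is a hypothetical example.

def eeprom_i2c_address(a2=0, a1=0, a0=0):
    """7-bit I2C address of a 24C02-class EEPROM from its address pins."""
    return 0b1010_000 | (a2 << 2) | (a1 << 1) | a0

print(hex(eeprom_i2c_address()))         # 0x50, the usual all-pins-low case
print(hex(eeprom_i2c_address(1, 0, 1)))  # 0x55, with A2 and A0 tied high
```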
In the spirit of thoroughness, and in closing, let’s take a peek at the PCB underside:
There’s nothing there that I can discern, other than test points, solder blobs and traces. In the interest of hopefully preserving mic functionality after reassembly, I won’t proceed further with the disassembly. Sound off (bad pun intended) with your thoughts in the comments, please!
—Brian Dipert is the associate editor, as well as a contributing editor, at EDN.
Related Content
- Microphones: An abundance of options for capturing tones
- Microphones: On-PCB options for catching tones
- Disassembling an in-line microphone preamp
- Checking out a USB microphone
The post The Blue (now Logitech) Snowball iCE: This mic sounds nice appeared first on EDN.
Audio over Ethernet: How Stellar G6 is replacing dedicated audio cables with a single Ethernet backbone
STMicroelectronics is enabling a shift from dedicated audio wiring to Audio over Ethernet in next-generation vehicles. The Stellar G6 automotive MCU integrates hardware-level Time-Sensitive Networking, Media Clock Recovery, and a dedicated communication engine to deliver high-fidelity, zero-jitter audio over the vehicle’s existing Ethernet backbone. The approach eliminates the need for proprietary A2B cables and transceivers, saving automakers approximately $70 per vehicle while enabling new capabilities such as real-time Active Noise Cancellation at the zonal level. A joint solution with AutoCore has already demonstrated end-to-end latency under two milliseconds, and ST is showcasing the technology live at Embedded World 2026 in Nuremberg.
Bringing high-fidelity audio to the software-defined vehicle
In a car, sound is personal. Listeners sit in fixed, asymmetrical positions surrounded by dozens of speakers, and their brains are ruthlessly precise about timing. A delay of just five milliseconds between two speakers is enough for the Haas Effect to kick in, tricking the listener into “pinning” the sound to whichever speaker fired first. A delta of two milliseconds can pull the entire soundstage to one side of the cabin, destroying the “phantom center” that makes a singer feel like they’re standing on the dashboard. When speakers fall slightly out of sync, sound waves collide destructively, creating nulls in the frequency response that make audio sound hollow or metallic. This is comb filtering, and it’s the acoustic signature of a timing problem.
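Those timing numbers map directly onto physical geometry. A quick sanity check, assuming a speed of sound of 343 m/s (dry air at 20 °C), shows how easily a cabin’s asymmetric speaker layout produces deltas of this size:

```python
# Convert an inter-speaker timing error into the equivalent path-length
# difference, and vice versa. Assumes speed of sound of 343 m/s (20 C air).

SPEED_OF_SOUND_M_S = 343.0

def delay_to_path_difference_m(delay_s):
    return delay_s * SPEED_OF_SOUND_M_S

def path_difference_to_delay_ms(distance_m):
    return distance_m / SPEED_OF_SOUND_M_S * 1e3

# A 2 ms delta, enough to drag the phantom center off-axis, corresponds to
# roughly 0.69 m of extra path; the 5 ms Haas threshold, to about 1.7 m.
print(round(delay_to_path_difference_m(2e-3), 2))  # 0.69
print(round(delay_to_path_difference_m(5e-3), 1))  # 1.7
```

In other words, a listener sitting 0.7 m closer to one speaker than another already experiences the full soundstage-pulling effect described above unless the system compensates.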
These are not edge cases. They are the everyday reality of in-cabin audio, and they explain why the automotive industry has relied on dedicated wiring like A2B (Automotive Audio Bus) for so long. A2B is effective, but it demands its own cabling and transceivers, adding weight, complexity, and cost to the vehicle harness. Now that the industry is shifting toward Software-Defined Vehicles and zonal architectures, a new question is taking center stage: can a single Ethernet backbone carry diagnostics, control signals, and high-fidelity audio at the same time, without compromising the millisecond precision that human hearing demands?
With the Stellar G6 automotive MCU, we set out to prove that it can.
Latency is a number; jitter is the real enemy
Engineers often focus on latency, the constant delay between source and speaker. However, in automotive audio, jitter is far more destructive. Jitter is the variation in that delay. On a standard Ethernet network, an audio packet can get stuck behind a burst of sensor data. If the delivery time “jitters” by even a few microseconds, it introduces phase distortion that smears the music. For applications like Active Noise Cancellation, where a microphone signal must be inverted and played back through a speaker in near real-time, jitter doesn’t just degrade quality. It breaks the physics entirely.
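The “smearing” has a simple arithmetic behind it: a fixed amount of timing jitter translates into a phase error that grows linearly with frequency. A minimal sketch, with the jitter magnitudes chosen as illustrative assumptions:

```python
# How timing jitter translates into phase error at audio frequencies:
# phase_error_degrees = jitter_seconds * frequency_hz * 360
# Jitter values below are illustrative, not measurements of any network.

def jitter_to_phase_deg(jitter_s, freq_hz):
    return jitter_s * freq_hz * 360.0

# "A few microseconds" of packet-delivery jitter, as in the text:
print(round(jitter_to_phase_deg(10e-6, 1_000), 1))   # 3.6 deg at 1 kHz
print(round(jitter_to_phase_deg(10e-6, 10_000), 1))  # 36.0 deg at 10 kHz
```

A phase wobble of tens of degrees in the top octaves is exactly the kind of error that collapses stereo imaging, and for ANC, where the anti-noise signal must stay phase-inverted against the noise, it makes cancellation impossible.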
Solving this requires more than a fast processor. It requires determinism, meaning the guarantee that a packet arrives exactly when it’s supposed to, and clock coherency, ensuring every node in the vehicle shares the same nanosecond. These are hardware problems, and they need hardware answers.
What Stellar G6 brings to audio over Ethernet
The Stellar G6 was engineered to treat audio as a time-critical stream, not as generic data. Three hardware-level capabilities make this possible. First, the Stellar G6 features a built-in L2+ Ethernet Switch supporting the full suite of Time-Sensitive Networking (TSN) standards. IEEE 802.1AS (gPTP) synchronizes every node in the vehicle to a sub-microsecond master clock. IEEE 802.1Qbv (scheduled traffic) creates protected time slots for audio and microphone data, ensuring they always get priority even on a congested network. IEEE 802.1CB enables seamless redundancy through Ethernet ring topologies, eliminating the single point of failure that plagues traditional star configurations.
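The 802.1Qbv mechanism is easiest to picture as a repeating cycle of gate windows. The toy model below is conceptual, not the API of any real TSN stack, and the cycle length and window durations are invented for illustration:

```python
# Conceptual model of an IEEE 802.1Qbv gate-control list: the transmission
# cycle is divided into windows, and a traffic class may transmit only while
# its gate is open. All entries and durations here are illustrative.

CYCLE_NS = 1_000_000  # assumed 1 ms cycle

# (window_duration_ns, set of traffic classes whose gates are open)
GATE_CONTROL_LIST = [
    (250_000, {"audio"}),                 # protected slot: audio only
    (750_000, {"best_effort", "audio"}),  # rest of the cycle is shared
]

def gate_open(traffic_class, t_ns):
    """Return True if traffic_class may transmit at absolute time t_ns."""
    offset = t_ns % CYCLE_NS
    for duration, open_classes in GATE_CONTROL_LIST:
        if offset < duration:
            return traffic_class in open_classes
        offset -= duration
    return False

# During the protected window, best-effort traffic must wait:
print(gate_open("audio", 100_000))        # True
print(gate_open("best_effort", 100_000))  # False
print(gate_open("best_effort", 600_000))  # True
```

This is the sense in which audio gets a “protected time slot”: no matter how bursty the best-effort traffic is, it physically cannot occupy the wire during the audio window.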
Second, even with a perfectly synchronized network, the audio sample clock can still drift. The Stellar G6 includes specialized Media Clock Recovery hardware. Rather than relying on a software-based PLL, a dedicated digital hardware loop recovers the Audio Master Clock directly from the Ethernet stream, keeping speakers and microphones in perfect phase. The result: virtually zero jitter on the recovered clock, which is the critical enabler for professional-grade audio delivery.
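To illustrate the principle (not the Stellar G6’s actual hardware loop), here is a minimal software sketch of media clock recovery: a proportional-integral loop steers a local estimate of the sender’s sample rate using timestamps from the stream. The gains, rates, and idealized noise-free timestamps are all assumptions for the sake of the demo:

```python
# Minimal software sketch of media clock recovery via a PI loop. The real
# Stellar G6 implements this in a dedicated digital hardware loop; the gains,
# rates, and idealized (noise-free) timestamps here are illustrative only.

def recover_clock(timestamps, nominal_hz=48_000.0, kp=0.5, ki=5.0):
    """timestamps: list of (local_time_s, samples_sent) pairs."""
    est_hz = nominal_hz        # current estimate of the sender's sample rate
    last_t, phase = timestamps[0]  # phase = our estimate of samples sent
    for t, n in timestamps[1:]:
        phase += est_hz * (t - last_t)  # advance phase at the estimated rate
        err = n - phase                 # phase error, in samples
        phase += kp * err               # proportional term: nudge phase
        est_hz += ki * err              # integral term: nudge frequency
        last_t = t
    return est_hz

# The sender actually runs at 48,000.5 Hz; one timestamp arrives every 10 ms.
obs = [(i * 0.01, i * 0.01 * 48_000.5) for i in range(200)]
print(round(recover_clock(obs), 1))  # converges to 48000.5
```

The software version converges but is at the mercy of scheduler noise on every loop iteration; doing the same thing in a dedicated hardware loop is what removes that residual jitter from the recovered clock.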
Third, Stellar embeds a dedicated communication engine that offloads all data-moving and synchronization tasks from the main CPUs. This hardware isolation means that a processing spike in the vehicle’s body-control zone cannot cause a pop or a glitch in the audio. Communication runs at the lowest possible latency, completely decoupled from whatever else the host cores are doing.
From central processing to localized intelligence
Traditionally, all audio processing happened in a central head unit. Moving to an Ethernet-based zonal architecture changes this fundamentally. With a Stellar G6 acting as the Zonal Controller at each vehicle zone, significant compute now sits closer to every speaker and microphone.
This unlocks capabilities that were previously impractical. In-cabin noise cancellation becomes possible by placing microphones near individual seats, identifying noise sources such as a loud conversation in the rear, and cancelling them locally. Road noise cancellation works on the same principle: the system captures vibration and road noise through zone-level microphones, generates an anti-noise signal, and plays it back through nearby speakers with near-zero latency. The processing happens at the edge, in the zone, rather than travelling back and forth to a central unit. For the passenger, the result is a cabin that can become a sanctuary, a workspace, or a private sound bubble, all updated over-the-air as easily as a smartphone app.
The cost equation: saving up to $70 per vehicle
Beyond acoustic performance, Audio over Ethernet carries a straightforward economic argument. By eliminating dedicated A2B cables and transceivers and reusing the vehicle’s existing Ethernet backbone, automakers can save approximately $70 per vehicle. In an industry where every cent on the bill of materials is scrutinized, consolidating audio onto a network that already exists for diagnostics and control is not just elegant engineering. It’s a significant cost reduction that scales across millions of units.
From proof-of-concept to production validation
In January 2026, we announced a collaboration with AutoCore on an Ethernet-based Zonal Controller distributed audio solution. By combining Stellar G6’s Media Clock Recovery with AutoCore’s TSN protocol stack, the joint solution achieved end-to-end audio latency of less than two milliseconds. That is fast enough to run high-performance Active Noise Cancellation over a standard Ethernet backbone.
At Embedded World 2026, we are taking this further with a live demonstration of Stellar G6’s native Audio-over-Ethernet capabilities. The demo features two Zonal Controller Units, each built around a Stellar G6, connected in a ring topology. Each ZCU streams four channels of 24-bit audio over Ethernet, for a total of eight high-fidelity streams running simultaneously. Visitors can witness the audio clock recovery in action, hear the zero-jitter playback quality firsthand, and see the resilience of the ring topology through live plug-and-unplug trials that demonstrate fault tolerance without audio interruption. It is a concrete, audible proof point: dedicated audio cables are no longer a requirement for premium in-cabin sound.
The Ethernet backbone is the nervous system of the SDV
We are moving toward a future where the vehicle’s Ethernet backbone becomes its nervous system, and Audio over Ethernet is one of the most visible and audible ways this transformation is taking hold. When a vehicle can use its Zonal Controllers to deliver immersive sound, suppress road noise, or create a private acoustic zone for every passenger, the concept of what a “car” offers fundamentally changes.
Stellar G6 is not just a processor in this journey. By solving one of the most demanding timing and synchronization problems in hardware, it allows automotive engineers to focus on the experience rather than the plumbing. As the industry embraces the zonal revolution, we are ready to help redefine what the drive actually sounds like.
The post Audio over Ethernet: How Stellar G6 is replacing dedicated audio cables with a single Ethernet backbone appeared first on ELE Times.