Feed aggregator

❤️ UAH 500,000 for the best digital solutions for communities: EGAP Ideathon 2025

News - 37 min 55 sec ago
kpi, Tue, 08/26/2025 - 17:14

The East Europe Foundation, within its flagship EGAP Program implemented with the support of Switzerland, announces registration for EGAP Ideathon 2025, a national competition of ideas for improving the СВОЇ and e-DEM digital platforms.

Simple but accurate 4 to 20 mA two-wire transmitter for PRTDs

EDN Network - 2 hours 12 min ago

Accurate, inexpensive, and mature platinum resistance temperature detectors (PRTDs) with an operating range extending from the cryogenic to the incendiary are a gold (no! platinum!) standard for temperature measurement.

Similarly, the 4 to 20 mA analog current loop is a legacy, but still popular, noise- and wiring-resistance-tolerant interconnection method with good built-in fault detection and transmitter “phantom-power” features.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 combines them in a simple, cheap, and cheerful temperature sensor using just eight off-the-shelf (OTS) parts, counting the PRTD. Here’s how it works.

Figure 1 PRTD current loop sensor with Ix = 500 µA constant current excitation.
Ix = 2.5 V/R2, PRTD resistance = R1(Io/Ix – 1)
R1 and R2 are 0.1% tolerance (ideally)

The key to measurement accuracy is the 2.50-V LM4040x25 shunt reference, available with accuracy grade suffixes of 0.1% (x = A), 0.2% (B), 0.5% (C), and 1% (D). The “B” grade is consistent (just barely) with a temperature measurement accuracy of ±0.5°C.

R1 and R2 should have similar precision. R2 throttles the 2.5 V to provide Ix = 2.5/R2 = 500 µA excitation to T1. Because A1 continuously servos the Io output current to hold pin 3 = pin 4 = the LM4040 anode voltage, the 2.5 V across R2 is held constant, and therefore so is Ix.

Thus, the voltage across output sense resistor R1 is forced to Vr1 = Ix(Rprtd) and Io = Ix(Rprtd/R1 + 1). This makes Io/Ix = Rprtd/R1 + 1 and Rprtd/R1 = Io/Ix – 1 for Rprtd = R1(Io/Ix – 1).

Wrapping it all up with a bow: Rprtd = R1(Io/(2.5/R2) – 1). Note that accommodation of different Rprtd resistance (and therefore temperature) ranges is a simple matter of choosing different R1 and/or R2 values.

Conversion of the Io reading to Rprtd is an easy chore in software, and the step from there to temperature isn’t much worse, thanks to Callendar-Van Dusen math.
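
To make the software side concrete, here is a rough sketch (not code from the article) that inverts the transmitter relation to recover Rprtd from a loop-current reading and then solves the positive-temperature branch of the Callendar-Van Dusen equation. The R1 and R2 values and the PT100 (IEC 60751) coefficients are illustrative assumptions, not the actual component values of Figure 1.

```python
# Rough sketch (not the article's code): turn a measured loop current Io back
# into PRTD resistance, then into temperature via Callendar-Van Dusen.
# R1, R2, and the PT100 coefficients below are illustrative assumptions.

R1 = 10.0        # output sense resistor, ohms (assumed value)
R2 = 5000.0      # sets Ix = 2.5 V / R2 = 500 uA
R0 = 100.0       # PT100 resistance at 0 degC
A = 3.9083e-3    # standard Callendar-Van Dusen coefficients (t >= 0 degC)
B = -5.775e-7

def io_to_rprtd(io_amps: float) -> float:
    """Invert the transmitter relation Io/Ix = Rprtd/R1 + 1."""
    ix = 2.5 / R2
    return R1 * (io_amps / ix - 1.0)

def rprtd_to_temp_c(r_ohms: float) -> float:
    """Solve R = R0*(1 + A*t + B*t^2) for t (valid for t >= 0 degC)."""
    disc = A * A - 4.0 * B * (1.0 - r_ohms / R0)
    return (-A + disc ** 0.5) / (2.0 * B)

io = 7.43e-3                 # example loop-current reading, amps
r = io_to_rprtd(io)          # roughly 138.6 ohm with the values above
print(f"Rprtd = {r:.1f} ohm, T = {rprtd_to_temp_c(r):.1f} degC")
```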

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

The post Simple but accurate 4 to 20 mA two-wire transmitter for PRTDs appeared first on EDN.

Filtronic secures record order from SpaceX

Semiconductor today - 2 hours 39 min ago
Filtronic plc of Sedgefield and Leeds, UK — which designs and manufactures RF and millimeter-wave (mmWave) transmit & receive components and subsystems — has secured its largest ever contract, valued at £47.3m ($62.5m), with its long-standing customer SpaceX, for the Starlink high-speed internet service...

Skyworks names Phil Carter as CFO

Semiconductor today - 3 hours 44 sec ago
Skyworks Solutions Inc of Irvine, CA, USA (which manufactures analog and mixed-signal semiconductors) says that Philip Carter has been appointed senior VP & chief financial officer, effective 8 September, responsible for financial strategy, investor relations, treasury and leadership of the global finance and information technology organizations...

Navitas names Chris Allexandre as president & CEO and board member

Semiconductor today - 3 hours 9 min ago
Gallium nitride (GaN) power IC and silicon carbide (SiC) technology firm Navitas Semiconductor Corp of Torrance, CA, USA has appointed Chris Allexandre as president & chief executive officer, effective 1 September. He will also join the board of directors...

Top 10 Federated Learning Algorithms

ELE Times - 3 hours 35 min ago

Federated Learning (FL) has been described as a revolutionary approach to machine learning because it enables collaborative model training across devices in a decentralized manner while preserving data privacy. Instead of transferring data to a centralized server for training, devices train locally, and only their model updates are shared. This makes it applicable in sensitive areas like healthcare, finance, and mobile applications. As Federated Learning continues to evolve, an increasingly diverse array of algorithms has emerged, each designed to enhance communication efficiency, boost model accuracy, and strengthen resilience against data heterogeneity and adversarial challenges. This article delves into the types, examples, and top 10 Federated Learning algorithms.

Types of federated learning algorithms:

Federated Learning algorithms are classified by how data is distributed, by system structure, and by privacy requirements. Horizontal FL covers clients that share the same features but hold distinct data points; Vertical FL covers the case where features differ but clients overlap. When both users and features differ, Federated Transfer Learning is used. Decentralized FL, as opposed to Centralized FL, does not use a central server and instead relies on peer-to-peer communication. In terms of deployment, Cross-Silo FL involves powerful participants like hospitals and banks, while Cross-Device FL targets lightweight devices such as smartphones. In addition, Privacy-Preserving FL protects user data with encryption, differential privacy, and other techniques, and Robust FL aims to protect the system from malicious, adversarial, or faulty clients.

Examples of federated learning algorithms:

A number of algorithms have been created to overcome challenges specific to Federated Learning. The baseline approach is FedAvg, which averages the model updates of participating clients. FedProx, designed to cope with data heterogeneity, is a more advanced variant. For personalization, FedPer customizes top layers for each client, and pFedMe applies meta-learning techniques. Communication-efficient algorithms like SCAFFOLD and FedPAQ reduce bandwidth usage and client drift. Robust algorithms such as Krum, Bulyan, and RFA filter out malicious or noisy updates to maintain model integrity. Privacy-focused methods like DP-FedAvg and Secure Aggregation ensure data confidentiality during training. These algorithms are often tailored or combined to suit specific domains like healthcare, finance, and IoT.

Top 10 Federated Learning Algorithms:

  1. Federated Averaging (FedAvg):

FedAvg is the founding algorithm of Federated Learning: each client trains the model locally, and the global model is updated by averaging the clients’ weights. Thanks to its simple design and the ease with which it scales, it has been widely adopted in practice.
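
For illustration, a minimal, framework-agnostic sketch of the FedAvg aggregation step follows: each client reports its locally trained weights and its sample count, and the server forms a sample-weighted average. The simple dict-of-lists weight layout is an assumption made for clarity, not any particular library's API.

```python
# Minimal sketch of the FedAvg aggregation step (illustrative, not a specific
# framework's API). Each client returns its locally trained weights plus the
# number of samples it trained on; the server takes a sample-weighted average.

from typing import Dict, List, Tuple

Weights = Dict[str, List[float]]  # layer name -> flat list of parameters

def fed_avg(client_updates: List[Tuple[Weights, int]]) -> Weights:
    """Aggregate client weights, weighting each client by its sample count."""
    total_samples = sum(n for _, n in client_updates)
    global_weights: Weights = {}
    for layer in client_updates[0][0]:
        size = len(client_updates[0][0][layer])
        global_weights[layer] = [
            sum(w[layer][i] * n for w, n in client_updates) / total_samples
            for i in range(size)
        ]
    return global_weights

# Example: two clients with different data volumes.
client_a = ({"dense": [0.10, 0.20]}, 100)   # 100 local samples
client_b = ({"dense": [0.30, 0.40]}, 300)   # 300 local samples
print(fed_avg([client_a, client_b]))        # {'dense': [0.25, 0.35]}
```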

  2. FedProx

FedProx builds upon FedAvg by adding a proximal term to the loss function. By penalizing local updates that diverge too far from the global model, this term helps stabilize training in settings with widely differing client data distributions. It is especially helpful in fields like healthcare and finance, where heterogeneous data is prevalent.
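
A minimal sketch of the FedProx local objective, assuming a generic scalar task loss: the client minimizes its usual loss plus the proximal penalty (mu/2)*||w - w_global||^2, which discourages local weights from drifting far from the global model. The function names and the value of mu are illustrative.

```python
# Minimal sketch of FedProx's local objective (framework-agnostic). The only
# change from FedAvg is the proximal penalty (mu/2)*||w - w_global||^2 added
# to each client's local loss; mu is a tunable hyperparameter.

from typing import List

def proximal_penalty(local_w: List[float], global_w: List[float], mu: float) -> float:
    """(mu / 2) * squared L2 distance between local and global weights."""
    return 0.5 * mu * sum((lw - gw) ** 2 for lw, gw in zip(local_w, global_w))

def fedprox_local_loss(task_loss: float,
                       local_w: List[float],
                       global_w: List[float],
                       mu: float = 0.01) -> float:
    """Local loss a FedProx client minimizes: task loss + proximal penalty."""
    return task_loss + proximal_penalty(local_w, global_w, mu)

# Example: a client whose weights have drifted from the global model.
print(fedprox_local_loss(task_loss=0.42,
                         local_w=[0.3, 0.1],
                         global_w=[0.2, 0.0],
                         mu=0.1))   # 0.42 + 0.5*0.1*(0.01+0.01) = 0.421
```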

  3. FedNova (Federated Normalized Averaging)

To address client drift, FedNova normalizes updates with respect to the number of local steps and the learning rate. This ensures each client contributes equally to the global model regardless of its computational capabilities or data volume, which improves convergence and fairness in heterogeneous setups.

  4. SCAFFOLD

SCAFFOLD, short for Stochastic Controlled Averaging for Federated Learning, employs control variates to correct the clients’ updates. This limits the variance caused by non-IID data and speeds up convergence. It is particularly effective in edge-computing environments, where data comes from many different sources.
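
A minimal sketch of SCAFFOLD's drift-corrected local update, loosely following the published algorithm rather than any specific library: each local gradient is adjusted by the difference between the server and client control variates before the step is taken. The round-to-round control-variate bookkeeping is omitted for brevity.

```python
# Minimal sketch of SCAFFOLD's variance-corrected local step (illustrative).
# Each client keeps a control variate c_local and the server keeps c_global;
# the local gradient is corrected by (c_global - c_local) at every update.

from typing import Callable, List

def scaffold_local_steps(w: List[float],
                         grad_fn: Callable[[List[float]], List[float]],
                         c_global: List[float],
                         c_local: List[float],
                         lr: float = 0.01,
                         num_steps: int = 10) -> List[float]:
    """Run local SGD steps with SCAFFOLD's drift correction applied."""
    for _ in range(num_steps):
        g = grad_fn(w)
        # Corrected update: follow the local gradient, but steer it toward
        # the global update direction to limit client drift on non-IID data.
        w = [wi - lr * (gi - cli + cgi)
             for wi, gi, cli, cgi in zip(w, g, c_local, c_global)]
    return w

# Toy example: quadratic loss 0.5*(w - 3)^2, so grad = w - 3.
print(scaffold_local_steps([0.0], lambda w: [w[0] - 3.0],
                           c_global=[0.0], c_local=[0.0]))
```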

  5. MOON (Model-Contrastive Federated Learning)

MOON brings contrastive learning into FL by aligning local and global model representations. It enforces consistency between models, which is particularly necessary when client data are highly divergent. MOON is often used for image and text classification tasks with very heterogeneous user bases.

  6. FedDyn (Federated Dynamic Regularization)

FedDyn incorporates a dynamic regularization term in the loss function so that the global model accommodates client-specific updates better. This makes it well suited to situations involving extremely diverse data, such as user-specific recommendation systems or personalized healthcare.

  7. FedOpt

FedOpt replaces vanilla averaging with advanced server-side optimizers such as Adam, Yogi, and Adagrad. Using these optimizers leads to faster and more stable convergence, which is paramount in deep learning tasks with large neural networks.
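
A minimal sketch of the FedOpt idea under simplifying assumptions: the server treats the gap between its current global weights and the averaged client weights as a pseudo-gradient and feeds it to an Adam-style server optimizer (the FedAdam flavor) instead of adopting the average directly. The hyperparameters and the omission of bias correction are illustrative simplifications.

```python
# Minimal sketch of a server-side adaptive update in the spirit of FedAdam.
# The pseudo-gradient is the difference between the current global weights
# and the sample-averaged client weights; it drives an Adam-style step.

from typing import List

def fedadam_server_step(global_w: List[float],
                        avg_client_w: List[float],
                        m: List[float], v: List[float],
                        lr: float = 0.1, b1: float = 0.9,
                        b2: float = 0.99, eps: float = 1e-3):
    """One server round: Adam-style update driven by the pseudo-gradient."""
    new_w, new_m, new_v = [], [], []
    for gw, cw, mi, vi in zip(global_w, avg_client_w, m, v):
        g = gw - cw                      # pseudo-gradient for this parameter
        mi = b1 * mi + (1 - b1) * g      # first-moment estimate
        vi = b2 * vi + (1 - b2) * g * g  # second-moment estimate
        new_w.append(gw - lr * mi / (vi ** 0.5 + eps))
        new_m.append(mi)
        new_v.append(vi)
    return new_w, new_m, new_v

# Example round with two parameters and zero-initialized moment estimates.
w, m, v = fedadam_server_step([0.5, -0.2], [0.4, -0.1], [0.0, 0.0], [0.0, 0.0])
print(w)
```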

  8. Per-FedAvg (Personalized Federated Averaging)

Personalized Federated Averaging aims to balance global generalization with local adaptation by allowing clients to fine-tune the global model locally. This makes Per-FedAvg suitable for personalized recommendations, mobile apps, and wearable health monitors.

  9. FedMA (Federated Matched Averaging)

The distinguishing feature of this method is the matching of neurons across client models before averaging. This retains the architecture of a deep neural network and hence allows for much more meaningful aggregation, especially for convolutional and recurrent architectures.

  10. FedSGD (Federated Stochastic Gradient Descent)

A simpler alternative to FedAvg, FedSGD sends gradients instead of model weights. It’s more communication-intensive but can be useful when frequent updates are needed or when model sizes are small.
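
A minimal sketch of one FedSGD round, assuming clients return raw gradients together with their sample counts: the server averages the gradients, weighted by sample count, and applies a single optimizer step to the global model.

```python
# Minimal sketch of FedSGD (illustrative): clients send raw gradients computed
# on their local batches, and the server applies the sample-weighted average
# gradient directly to the global model (one optimizer step per round).

from typing import List, Tuple

def fedsgd_round(global_w: List[float],
                 client_grads: List[Tuple[List[float], int]],
                 lr: float = 0.1) -> List[float]:
    """One FedSGD round: average client gradients by sample count, then step."""
    total = sum(n for _, n in client_grads)
    avg_grad = [
        sum(g[i] * n for g, n in client_grads) / total
        for i in range(len(global_w))
    ]
    return [w - lr * g for w, g in zip(global_w, avg_grad)]

# Example: two clients with different amounts of data.
print(fedsgd_round([1.0, -0.5], [([0.2, 0.1], 50), ([0.4, -0.1], 150)]))
```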

Conclusion:

These algorithms represent the cutting edge of federated learning, each tailored to address specific challenges like data heterogeneity, personalization, and communication efficiency. As FL continues to grow in importance, especially in privacy-sensitive domains, these innovations will be crucial in building robust, scalable, and ethical AI systems.

The post Top 10 Federated Learning Algorithms appeared first on ELE Times.

Integrated voltage regulator (IVR) for the AI era

EDN Network - 4 hours 31 min ago

A new integrated voltage regulator (IVR) claims to expand the limits of current density, conversion efficiency, voltage range, and control bandwidth for artificial intelligence (AI) processors without hitting thermal and space limits. This chip-scale power converter can sit directly within the processor package to free up board and system space and boost current density for the most power-hungry digital processors.

Data centers are grappling with rising energy costs as AI workloads scale with modern processors demanding over 5 kW per chip. That’s more than ten times what CPUs and GPUs required just a few years ago. Not surprisingly, therefore, in a data center, power can account for more than 50% of the total cost of ownership.

“This massive jump in power consumption of data centers calls for a fundamental rethink of power delivery networks (PDNs),” said Noah Sturcken, co-founder and CEO of Ferric. He claims that his company’s new IVR addresses both the chip-level bottleneck and the system-level PDN challenge in one breakthrough.

Fe1766—a single-output, self-contained power system-on-chip (SoC)—is a 16-phase interleaved buck converter with a fully integrated powertrain. The high-switching-frequency powertrain combines high-performance FETs and capacitors that drive ferromagnetic power inductors.

Figure 1 The new IVR features a digital interface that provides complete power management and monitoring with fast and precise voltage control, fast transient response times, and high bandwidth regulation. Source: Ferric

Fe1766 delivers 160 A in an 8 × 4.4 mm form factor to bolster power density and reduce board area, layout complexity, and component count. The new IVR achieves one to two levels of miniaturization compared to a traditional DC/DC converter by replacing a collection of discrete components that would otherwise be laid out on a motherboard with a much smaller chip-scale power converter.

Moreover, these IVRs can be directly integrated into the packaging of a processor, which improves the efficiency of the PDN by reducing transmission losses. It also brings the power converter much closer to the processor, leading to a cleaner power supply and a reduction in board area. “That means more processing can occur in the same space, and in some cases, design engineers can place a second processor in the same space,” Sturcken added.

Fe1766 enables vertical power delivery within the processor package, providing more power in the same footprint while cutting energy losses. That makes it highly suitable for ultra-dense AI chips like GPUs. AI chip suppliers like Marvell have already started embedding IVRs in their processor designs.

Figure 2 Marvell has started incorporating IVRs in its AI processor packages. Source: Ferric

Ferric, which specializes in advanced power conversion technologies designed to optimize power delivery in next-generation compute, aims to establish a new benchmark for integrated power delivery in the AI era. And it’s doing that by providing dynamic control over power at the core level.

The post Integrated voltage regulator (IVR) for the AI era appeared first on EDN.

Hon’ble PM Shri. Narendra Modi to inaugurate fourth edition of SEMICON India 2025

ELE Times - 5 hours 18 min ago
  • Bharat set to welcome delegates from 33 countries, 50+ CXOs, 350 exhibitors
  • At the country’s biggest Semiconductors & Electronics Show in New Delhi from 2-4 September 2025
  • 50+ eminent global visionary speakers
  • Event to highlight robust local semiconductor ecosystem expansion and industry trends

The fourth edition of SEMICON India 2025 will be officially inaugurated by Hon’ble Prime Minister Shri. Narendra Modi on 2nd September 2025 at Yashobhoomi (India International Convention and Expo Centre), New Delhi. Staying true to its legacy of positioning India as a global Semiconductor powerhouse, the fourth edition of SEMICON India 2025 will convene key stakeholders including global leaders, semiconductor industry experts, academia, government officials and students.

Under the Semicon India program, 10 strategic projects have been approved across high-volume fabs, 3D heterogeneous packaging, compound semiconductors (including SiC), and OSATs, marking a significant milestone for the country. Recognizing semiconductors as a foundational technology, over 280 academic institutes and 72 startups have been equipped with state-of-the-art design tools, while 23 startups have already been approved under the DLI scheme. These initiatives are driving innovations in critical applications such as CCTV systems, navigation chips, motor controllers, communication devices, and microprocessors—strengthening India’s journey towards Atmanirbhar Bharat.

Accelerating India’s semiconductor revolution, SEMI, the global industry association promoting the semiconductor industry, and the India Semiconductor Mission (ISM), Ministry of Electronics and Information Technology (MeitY), announced the programming for SEMICON India 2025 at a press conference held in the national capital.

Under the theme Building the Next Semiconductor Powerhouse, the event will offer valuable insights into innovations and trends in key areas such as fabs, advanced packaging, smart manufacturing, AI, supply chain management, sustainability, workforce development, design, and startups, along with six country roundtables.

The SEMICON India exhibition will feature nearly 350 exhibitors from across the global semiconductor value chain, including six country roundtables, four country pavilions, participation from nine states, and over 15,000 expected visitors, providing South Asia’s single largest platform for showcasing the latest advancements in the semiconductor and electronics industries, said Shri S Krishnan, Secretary, MeitY.

“SEMI is bringing the combined expertise and capabilities of our member companies across the global electronics design and manufacturing supply chain to SEMICON India, helping to advance both India’s semiconductor ecosystem expansion and industry supply chain resiliency,” said Ajit Manocha, President and CEO, SEMI. “The event will feature signature SEMICON opportunities for professional networking, business development, and insights into technology and market trends from a star-studded lineup of leading industry experts.”

SEMICON India 2025 is designed to maximize technological advancements in the semiconductor and electronics domain and highlight India’s policies aimed at strengthening its semiconductor ecosystem.

“The event is a remarkable convergence of ideas, collaboration and innovation, and provides a unique opportunity to address the complex challenges of tomorrow while fostering collaboration across the semiconductor ecosystem. We are looking forward to an astounding number of participants this year,” said Shri Amitesh Kumar Sinha, Additional Secretary, MeitY, and CEO, ISM.

“India’s semiconductor industry is poised for a breakthrough, with domestic policies and private sector capacity finally aligning to propel the nation to global prominence. As we navigate this transformative landscape, collaboration and ecosystem building will be key to unlocking the next wave of growth and breakthroughs, and SEMICON India 2025 is the catalyst for this,” said Ashok Chandak, President, SEMI India and IESA.

In addition to distinguished government officials, this year’s event will also feature an impressive lineup of industry leaders from top companies including Applied Materials, ASML, IBM, Infineon, KLA, Lam Research, MERCK, Micron, PSMC, Rapidus, Sandisk, Siemens, SK Hynix, TATA Electronics, Tokyo Electron, and many more.

Over the span of three days, the flagship event will feature a diverse range of activities including high-profile keynotes, panel discussions, fireside chats, paper presentations, six international roundtables, and more that will converge to drive the next wave of semiconductor innovation and growth. The event will also include a ‘Workforce Development Pavilion’ to showcase microelectronics career prospects and attract new talent.

SEMICON India is one of eight annual SEMICON expositions worldwide hosted by SEMI that bring together executives and leading experts in the global semiconductor design and manufacturing ecosystem. The upcoming event marks the beginning of an exciting journey into the future of technological innovation, fostering collaboration and sustainability in the global semiconductor ecosystem.

The post Hon’ble PM Shri. Narendra Modi to inaugurate fourth edition of SEMICON India 2025 appeared first on ELE Times.

Rohde & Schwarz extends the broadband amplifier range to 18 GHz

ELE Times - 8 hours 28 min ago

The new BBA series features higher field strengths for critical test environments up to 18 GHz

Rohde & Schwarz, a leading global supplier of test and measurement equipment and a reliable partner for turnkey EMC solutions, has expanded the broadband amplifier portfolio of its R&S BBA300 family with two new amplifier series, the R&S BBA300-F for 6 to 13 GHz and the R&S BBA300-FG for 6 to 18 GHz, with additional power classes of 90 W, 180 W, and 300 W.

Together with the already successfully introduced broadband amplifier series R&S BBA300-CDE for 380 MHz to 6 GHz and R&S BBA300-DE for 1 to 6 GHz, Rohde & Schwarz now offers compact dual-band amplifiers covering the entire frequency range from 380 MHz to 18 GHz in desktop models of only 4 HU.

The R&S BBA300 family is the new generation of compact, solid-state broadband amplifiers, designed for high availability and a linear output across an ultra-wide frequency range. It supports amplitude, frequency, phase, pulse and complex OFDM modulation modes and is extremely robust under all mismatch conditions, providing reliable test results in all circumstances.

Typical applications include EMC, co-existence and RF component tests during development, compliance test and production. The very wide frequency range makes them ideal for wireless and ultra-wideband testing.

The R&S BBA300-F series is a cost-effective solution for applications between 6 GHz and 13 GHz; the R&S BBA300-FG series covers a continuous frequency band from 6 GHz to 18 GHz. The two amplifier series can be used for ultrawideband applications as well as to address various EMC standards within mobile communications (FCC, ETSI), automotive (ISO), aerospace (DO-160), and military (MIL-STD-461). Both the R&S BBA300-F and the R&S BBA300-FG are now available in the power classes 30 W, 50 W, 90 W, 180 W, and 300 W.

The R&S BBA300 broadband amplifier family offers two powerful tools for tailoring the RF output signal to the application: adjusting the amplifier either for excellent linearity or for faithful reproduction of pulse signals by shifting the operating point between class A and class AB, and setting the amplifier for maximum tolerance to output mismatch or for maximum RF output power to utilize the power reserves for the application.

This allows users like developers, test engineers, integrators, or operators to optimize the output signal and react flexibly to a wide variety of requirements. Both parameters can be changed during amplifier operation.

“In addition to high linearity and excellent harmonic properties, our users also need extremely wide, continuous frequency bands at high RF output power,” said Michael Hempel, product manager for amplifier systems at Rohde & Schwarz. “The BBA300 series is our direct response to these requirements, offering outstanding bandwidth with high output power.”

Rohde & Schwarz also provides fully compliant EMI test receivers, signal generators, antennas, software and other essential system components and service for EMC testing.

The post Rohde & Schwarz extends the broadband amplifier range to 18 GHz appeared first on ELE Times.

EDOM Strengthens NVIDIA Jetson Thor Distribution Across APAC

ELE Times - 8 hours 56 min ago

Empowering a New Era of Physical AI and Robotics Development in the Asia-Pacific Region

EDOM Technology announced the official distribution of NVIDIA’s latest Jetson Thor module and developer kit, built for physical AI and general robotics. This move is set to accelerate technological upgrades and local deployment of applications such as intelligent robotics, AMR (Autonomous Mobile Robot), AIoT, and smart manufacturing across the region.

Jetson Thor is the most powerful edge AI module in the NVIDIA Jetson series. Built on NVIDIA Blackwell GPU architecture, it delivers over 2,070 TFLOPS of AI inference capability, specifically designed for humanoid robots, AMRs, and industrial smart devices. Its highly integrated computing architecture supports multi-sensor fusion, Transformer model inference, and real-time motion control, enabling a deep integration of generative AI and the physical world. Jetson Thor seamlessly integrates with NVIDIA Isaac ROS, NVIDIA Omniverse, and NVIDIA Isaac GR00T, forming a complete AI toolchain from data generation and simulation training to edge deployment. This significantly accelerates the adoption and commercialization of Physical AI applications, making it a key enabler of next-generation edge AI and robotics intelligence.

As NVIDIA’s long-standing partner and authorized distributor of Jetson series modules in the Asia-Pacific, EDOM brings around 30 years of experience in distribution and technical integration, covering AI modules, embedded systems, sensor integration, industrial automation, and component applications.

EDOM provides comprehensive product offerings of the Jetson Thor platform, including the Jetson AGX Thor Developer kit and Jetson T5000 module. Equipped with NVIDIA Holoscan Sensor Bridge for real-time data processing, along with high-speed interfaces such as GMSL, MIPI, 25GbE, 5G, and Wi-Fi modules, as well as high-performance storage interfaces, these solutions effectively meet the stringent low-latency and high-bandwidth demands of edge computing. Additionally, EDOM supports custom hardware design and system integration reference solutions, fully assisting customers in accelerating product development and deployment processes.

Jeffrey Yu, CEO at EDOM Technology, stated:
“Jetson Thor represents a major breakthrough in NVIDIA’s physical AI and robotics applications. We are honored to be the authorized distributor for Jetson Thor in the Asia-Pacific. By combining technical support, educational resources, and platform ecosystems, we aim to help customers accelerate innovation and advance the deployment of generative and physical AI technologies.”

With the launch of Jetson Thor, the module is expected to see wide adoption in fast-growing physical AI and robotics sectors across Asia-Pacific, including smart manufacturing, AMRs, smart transportation, and service robots. For example:

  • In high-precision AOI (Automated Optical Inspection), Jetson Thor can process large-scale image data in real time and perform inference, improving yield rates and automation in factories.
  • In AMR factory logistics, through multi-sensor fusion and real-time motion control, it enables autonomous navigation and smart scheduling in complex environments.
  • In humanoid and companion robots, Jetson Thor’s integration with GR00T multimodal models and visual recognition enables highly interactive scenarios, ideal for applications in aging societies and public services.
  • With support for multiple GMSL cameras and high-speed Ethernet, Jetson Thor is also well-suited for smart city traffic nodes, performing real-time image analysis and behavior recognition.

These applications demonstrate Jetson Thor’s powerful computing capabilities and provide developers and enterprises in Asia-Pacific a complete path from AI training to edge deployment.

EDOM will continue to act as a critical bridge between technology and the market, working with system developers, integrators, and academic institutions. By driving the local deployment of the NVIDIA Jetson platform across key sectors—such as smart transportation, AIoT, and smart manufacturing—EDOM is accelerating the development and implementation of generative AI and Physical AI throughout the Asia-Pacific region.

The post EDOM Strengthens NVIDIA Jetson Thor Distribution Across APAC appeared first on ELE Times.

Innodisk Serves Up DDR5 and LPDDR5X Memory for Industrial Designs

AAC - 15 hours 52 min ago
The memory modules combine high-speed performance, compact layouts, and tool-free modularity for reliable edge AI operation in tough environments.

If you made it through the schtick, Google’s latest products were pretty fantastic

EDN Network - Mon, 08/25/2025 - 21:38

Until last year, Google historically held its mobile device launch events in October, ceding the yearly first-mover advantage to primary competitor Apple with its September comparable-device announcements. In 2024, however, Google “flipped the script”, jumping ahead to August. The same thing seems to have happened this year…assuming Apple does a late summer or early fall event at all, of course, since all we have right now is a lot of leaks, not a solid date. That said, Google rolled out the latest updates to its longstanding smartphone, smart watch, and earbuds products last Wednesday, August 20th at its Made by Google event, along with making additional announcements related to other R&D programs and product lines.

I suppose I probably should touch on (and get past) the “schtick” aspect of this post’s title first. I didn’t watch the livestream, as I was fully focused on my “day job” duties at the time. And truth be told, I still haven’t watched the archived video in its entirety, because I can’t stomach it:

Say what you want about Jimmy Fallon as a comedian, television host, actor, singer, writer, and producer; I personally think he’s quite talented, generally speaking:

As a tech event host, however, in this initial experiment at least, his skill set was a mismatch, IMHO at least. Not that the other guests, or even Google’s own spokespersons, were much—if any—better, for that matter. Here’s what TechCrunch noted in retrospect:

The result was a watered-down, cringey, and at times almost QVC-like sales event, which Reddit users immediately dubbed “unwatchable.” In large part, this had to do with Fallon’s performance.  Trying to shift his goofy late-night persona to a corporate event, he ended up coming across as deeply uninterested in the technology, necessitating an over-the-top display of decidedly less-than-genuine enthusiasm.

The Verge’s conceptually similar take was aptly titled “The Made by Google event felt like being sucked into an episode of Wandavision”. Here’s an excerpt:

The real unsettling thing was understanding that I — and other gadget nerds and media — were not the target audience for this show. The point of a keynote is to be both informative and impressive, telling the most interested audiences about the ins and outs of the new products and attempting to wow them with live demos and technological feats. Today’s Pixel event was less concerned with product minutiae and more concerned with making it all entertaining.

That said, Victoria Song’s self-aware closing comments were thought-provoking; perhaps at least some of the reason for my underwhelming reaction was that I’m traditional and…old:

Back in the day, [Steve] Jobs needed media to get the word out and build buzz. In this new age, companies can go straight to the source through influencers, YouTube (which Google also owns), and livestreams. It’s why you see an increasing number of influencers invited to launch events — and featuring in them. There were plenty in attendance today. It’s not that journalists are getting left out. It’s more that the keynote as we know it isn’t the only way to get attention anymore. All I know is today felt like the end of an era. That’s not necessarily a bad thing. I’ll confess that traditional keynotes have felt stale as of late. As cringe as it was, this was at least something different.

That all said, I give Google kudos for taking it straight to Apple this time, which depending on your perspective, reflects either genuine confidence or deluded arrogance. And I’d still suggest you stick with The Verge’s 11:39 abridged video versus slogging through the full 1:16:55 version:

The processors

One downside to the reality that “gadget nerds and media were not the target audience for this show” is that we didn’t end up getting nearly as much technical detail as we’d like. At this point, for example, we don’t have any idea whose SoC is inside Google’s new Pixel Buds 2a earbuds:

https://youtu.be/v7sWikAU-os

To be fair, we don’t generally find out this kind of info for these kinds of products anyway, at least until either the supplier reveals its presence or someone like me tears ‘em apart. And speaking of suppliers subtly-or-not revealing themselves, the fact that Qualcomm rolled out its latest “Snapdragon W5+ and W5 Gen 2 Wearable Platforms” for smart watches and the like the same day as Google’s event was a tipoff that it’s what’s powering the new Pixel Watch 4:

The main IC, comprising a quad-core Arm Cortex-A53 CPU matrix and a Hexagon V66K AI DSP, is fabricated on a 4 nm process (foundry source not identified). The key difference between the W5 (what Google’s smartwatch uses) and W5+ is the latter’s inclusion of a separate 22 nm-fabricated always-on coprocessor (AOC). The Qualcomm chipset’s narrowband non-terrestrial networks (NB-NTN) support enables emergency message transmission and reception via satellite when out of cellular and Wi-Fi coverage, something rumored for the (near) future with Apple Watches but not available with Apple’s current wrist-wearable products. And dual-band GPS capabilities, coupled with “Location Machine Learning 3.0” RF front-end (RFFE) and processing algorithm enhancements, claim to improve positioning accuracy by up to 50%.

Speaking of “foundry sources”, a supplier transition here is one of the most notable aspects of the new Tensor G5 SoC powering Google’s latest Pixel 10 products, including the newest Fold:

Google provided no detailed block diagram, sorry, only a pretty concept picture:

And when it comes to specs, there’s only high-level handwaving, at least for now, until third-party developers and users get their hands on hardware:

  • A TPU that’s up to 60% more powerful
  • A CPU that’s 34% faster on average, and
  • New security hardware

The other thing we know is that Google switched from its longstanding foundry partner, Samsung, to TSMC this time around. The Tensor G4 (along with its G3 precursor…perhaps that lithography stall was behind the foundry switch?) had been built on a 4-nm process. Now it’s fabbed on 3 nm.

Beyond that…🤷‍♂️ The Tensor G4 contained the following “octa-core” CPU cluster:

  • 1× 3.1 GHz Cortex-X4
  • 3× 2.6 GHz Cortex-A720
  • 4× 1.92 GHz Cortex-A520

along with an Arm Mali-G715 MP7 GPU. Ars Technica notes that this time around, the total CPU core count is the same (eight), but the “mix” is different; one “prime” core, five mid-level ones, and two efficiency ones. Core identity and speed specifics are TBD, as are GPU details, although benchmarks (including relative comparisons to Apple SoC counterparts) have already leaked. To wit, the Tensor Processing Unit (TPU) for on-device AI inference seems to be notably upgraded:

The more powerful TPU runs the largest version of Gemini Nano yet, clocking in at 4 billion parameters. This model, designed in partnership with the team at DeepMind, is twice as efficient and 2.6 times faster than Gemini Nano models running on the Tensor G4. The context window (a measure of how much data you can put into the model) now sits at 32,000 tokens, almost three times more than last year.

More on the Pixel Buds 2a (and Pro 2)

As I’d mentioned upfront in my Pixel Buds Pro teardown published at the beginning of 2023, Google’s initial earbuds product efforts had been hit-or-miss at best. The Pixel Buds Pro, though, introduced at the May 2022 Google I/O developer conference, was a notable update, adding both active noise cancellation (ANC) and “transparency”, among other improvements:

The subsequent enhancements made to their Pixel Buds Pro 2 successors, unveiled at last year’s Made by Google event, were more modest, and I took a “pass” on the upgrade. The original Pixel Buds Pro remain my Android-paired “daily drivers” to this very day, actually. But now, with the gen-2 update to the four-plus year old A Series:

I may reconsider my longstanding no-update loyalty. They carry forward the bulk of the Pixel Buds Pro 2 capabilities, including first-time A-Series ANC support, at the modest tradeoff of decreased between-charges operating time. Speaking of charging, the batteries inside the case (albeit not those in the earbuds themselves) are user-replaceable, precluding you from needing to toss the case in the trash when its original cells expire. And did I mention that the Pixel Buds 2a costs $100 less than its “big brother”? Presumably as an attempt to maintain (and maximize) the feature set differentiation, as a means of rationalizing the price differentiation, Google also announced a new color option and pending modest feature set updates for the Pixel Buds Pro 2:

More on the Pixel Watch 4

I never would have believed that a smartwatch update would be the highlight of a new product launch suite, but I actually think that’s what Google pulled off last week. The glass face is now curved across the entirety of its diameter, not just at the outer edges…as is the display itself, which Google refers to as “Actua 360”. The result? A 10% larger active area, even with 16% smaller bezels, and an edgeless appearance. It’s also 50% brighter, with a 3000-nit max output.

No word on battery capacity expansions for either/both the 41 mm and 45 mm diameter models, although given that the new Qualcomm chipset’s RFFE is ~20% smaller than before, it wouldn’t surprise me to learn that Google filled the now-available internal space with more Li-ion capacity. Regardless, Google claims that the Pixel Watch 4 has a 25% longer battery life (30 hours on the 41 mm version and 40 hours on the larger battery capacity 45 mm variant), further extendable to two days (41 mm) and three days (45 mm) via Battery Saver mode.

And when recharging is necessary, Google has made welcome updates here as well, claiming that the Pixel Watch 4 charges 25% faster than before, from zero to 50% in just 15 minutes.

The approach shown in the above video marks the third charging scheme Google has employed across only four smartwatch generations to date. The first-generation Pixel Watch was launched three years ago at Made by Google:

and previewed a few months earlier at the 2022 Google I/O conference. It remains in daily use on my wrist to this very day. The premiere Pixel Watch leveraged proprietary wireless charging, which was convenient but slow and inefficient, and also translated into thermal tradeoffs that “encouraged” the back panel to fall off. Second- and third-generation successors switched to physical charging contacts on that same back panel. And now Google’s moved them to the side, among other things, translating into improved (more accurately: finally feasible) repairability.

Unsurprisingly, the new SoC affords additional Gemini-fueled AI capabilities, both fitness-specific (a pending Fitbit revamp is planned, for example) and more general. Other UI enhancements are physical versus virtual: a 15% stronger haptic engine and a louder, clearer speaker. Pixel Watch 4 preorders are now open, with product availability slated for October.

More on the Pixel 10 phone family

And now for the smartphones, normally the upfront-in-coverage stars of the show. Unless you look closely, and disregarding the varied color options this time around, you won’t be able to discern any differences between them and last year’s Pixel 9 predecessors, at least from the outside. Same four models (10, 10 Pro, 10 Pro XL, and 10 Pro Fold, the latter also with October availability), same screen-size options (albeit with modestly boosted peak brightness) and other dimensions (albeit slightly thicker in some cases), etc. The biggest external evolution is the baseline Pixel 10’s added (third) 10.8 Mpixel backside telephoto camera, prompting a (presumably bill-of-materials driven) devolution of its ultrawide peer to 13 Mpixels from the Pixel 9’s 48 Mpixels (the wide camera resolution also dropped slightly, from 50 to 48 Mpixels).

Pop off the screen and peer inside, and things get more interesting. The 3rd gen Fold version, for example, is now IP68 water and dust resistant; Google was also refreshingly candid that it’s not a “forever” panacea (for it or any other device, for that matter). The Pixel 10’s Wi-Fi downgrades from 7 (on the Pixel 9) to 6e. Battery capacities have gone up slightly across the board, as have between-charges battery life estimates. And how does one charge those batteries? Legacy wired USB-C connections are faster than before, at least for the Pixel 10 Pro XL, which can charge to 70% in 30 minutes using a 45-W input. And that same product variant also supports up-to-25W wireless Qi2.2 charging. The others are “only” 15W-capable, although their common Qi2-generation technology embeds magnets for the first time, branded by Google as Pixelsnap:

One pleasant surprise, speaking of bill-of-materials costs, was that tariff pressures aside (Pixel products are variously manufactured in China, Vietnam and, increasingly, India), and aside from the $100-more Pixel 10 Pro XL, there were no other price increases from last year’s models to this year’s. And Google also didn’t “hide” tariff costs by cutting RAM capacities (which would counterbalance its burgeoning AI ambitions, anyway) or offering only higher-capacity, higher-priced (and profitable) storage variants, the latter as Apple is rumored to be doing with at least some of its various upcoming iPhone 17 flavors. Speaking of storage, the baseline interface moves from UFS v3.1 on the Pixel 9 to faster v4.0 on the Pixel 10…as long as you purchase a device with at least 256 GBytes of flash memory, that is. Bump that up to 512 GBytes or further, and you also get “Zoned UFS” (ZUFS). Google didn’t say much about it last week, but here’s how SK Hynix explained it in a year-plus-back press release:

The ZUFS is a differentiated technology that classifies and stores data generated from smartphones in different zones in accordance with characteristics. Unlike a conventional UFS, the latest product groups and stores data with similar purposes and frequencies in separate zones, boosting the speed of a smartphone’s operating system and management efficiency of the storage devices. The ZUFS also shortens the time required to run an application from a smartphone in long hours use by 45%, compared with a conventional UFS. With the issue of degradation of read and write performance improved by more than four times, the lifetime of the product also increased by 40%.

The explicit ZUFS tie to higher capacities suggests to me that it’s explicitly tied to multi-die memory modules, which are inherently easier to manage from a multiple-simultaneous-access (read and/or write) standpoint. Further, regarding the claimed performance and durability improvements, it’s conceptually feasible that a portion of the total capacity allocation might derive from more costly (on a per-bit basis) but more robust single- or dual-bit-per-cell flash memory, with the remainder using cheaper but slower and less durable triple- or quad-bit-per-cell flash and the operating system on-the-fly directing usage to one or the other as appropriate. One final internal (with external ramifications) change of note: with the exception of the Fold variant and only in the United States, Google has dropped physical SIM support from this year’s phones, just as Apple had done with its iPhone 14 product line three years back.

Other “teasers”

Google also mentioned last week that a pending migration from Google Assistant to Gemini, in both free and paid service tiers, was planned for its various existing Home devices (likely a reaction to both users’ increasingly vocal complaints about their existing setups and competitor Amazon’s underway Alexa+ staged rollout), along with reassuring everyone that Gemini support in Android Auto and Google TV is still on the way. And apparently, judging from a teased image, “Gemini for Home” will be supported by not only legacy but also new hardware. I could imagine, for example, that legacy memory capacity and processing horsepower limitations would significantly hamper, if not completely preclude, local “edge” AI inference capabilities:

(yes, that’s Formula 1 Team McLaren driver Lando Norris)

And what about new (specifically Google-branded) product categories? Company executives indicated, for example, that Google has at least temporarily paused internal tablet development after the underwhelming market acceptance of its most recent (2.5 year old) Pixel Tablet model:

a particularly interesting twist in light of chronologically-coincident reports that Amazon is dropping its Android-derived Fire OS and refocusing on “pure” Android for its future tablets.

Similarly, Google claims it has no definite (public, at least) plans to release branded smart glasses or other head-mounted wearables—instead being content to develop foundation O/S and application suites for partners to productize—or even a smart ring. I’m particularly skeptical about that last one, as I am regarding Apple’s claimed non-interest in the smart ring product category. I’ve been testing various manufacturers’ smart rings in recent months, with compelling albeit embryonic outcomes, and I find it hard to imagine either Apple or Fitbit-by-Google perpetually ceding that particular product-category space to others (that said, the effectiveness of patent-portfolio barriers should never be underestimated).

Stay tuned for the first in a series of smart ring-themed posts by yours truly in EDN starting next month. And with that, nearing 3,000 words, I’m going to wrap up for today. Apple is rumored to be holding its own event in a few weeks, which I’m as-usual also planning on covering. Until then, as always, let me (and your fellow readers) know your thoughts via the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

The post If you made it through the schtick, Google’s latest products were pretty fantastic appeared first on EDN.

Got faders?

Reddit:Electronics - Mon, 08/25/2025 - 21:27

Penny + Giles potentiometers don’t like isopropyl, so I had to take them apart. Absolute works of art, these motorized faders. They are driven with two 2A op-amps acting as an H-bridge lol

submitted by /u/XDFreakLP

Lab on a Tag: NFC Chips Power Medical Sensors—No Batteries Required

AAC - Mon, 08/25/2025 - 20:00
Silicon Craft is graduating NFC from a passive data carrier to a complete electrochemical measurement system. With this technology, a simple phone tap can both power and execute a diagnostic test. 

The Google TV Streamer 4K: Hardware updates on display(s)

EDN Network - Mon, 08/25/2025 - 19:09

Within my year-back coverage of Google’s August 2024 multi-product launch event, I devoted multiple prose paragraphs to the $99.99 TV Streamer 4K, the company’s high-end replacement for the popular prior Chromecast with Google TV 4K and HD series:

Memory-drive evolutions

Part of the motivation for Google’s product-succession move, we belatedly learned, was a requirement unveiled three months later that all new Google TV O/S-licensed devices needed to ship with a minimum of 2 GBytes of RAM. While the original (4K) Chromecast with Google TV met that specification, the HD sibling undershot it by 25% (1.5 GBytes). The TV Streamer 4K, on the other hand, doubles the onboard RAM allotment to 4 GBytes.

Another increasingly problematic issue with prior-generation devices was their dearth of integrated nonvolatile (flash memory) storage, which adversely affected not only how many apps and other downloaded content could be held on-device but even the available capacity to house operating system updates. Both the 4K and HD variants of the Chromecast with Google TV included only 8 GBytes of storage, only around half of which were user-accessible. The TV Streamer 4K quadruples that total amount, to 32 GBytes.

Then there’s the competitive angle. A year ago, the most advanced device in licensee-slash-competitor (frenemy?) Walmart’s product arsenal was the $19.88 onn. 4K Streaming Box (which I just noticed they’re calling the “Streaming Device” again in conjunction with the recent packaging refresh) with 2 GBytes of RAM and 8 GBytes of nonvolatile storage, memory capacity-matching the Chromecast with Google TV 4K at less than half the price. That said, as any of you who saw one of my last-month teardowns already knows, Walmart subsequently unveiled a “Pro” device of its own, with 3 GBytes of RAM, 32 GBytes of nonvolatile storage, and, at $49.99, a price tag once again half that of the Google TV Streamer 4K counterpart.

And amid all this memory-related chitchat, don’t overlook equally important processing and graphics horsepower, along with connectivity and other hardware enhancements. Walmart has historically leveraged Amlogic SoCs, sometimes architecture- and/or clock speed-upgraded from one generation to another, and other times generationally essentially the same. Up to this point, at least, Google has also done the same. What’s inside the TV Streamer 4K, claimed to be “22% faster” this time? And do its feature set “adders” versus competitive alternatives, such as the ability to act as a Google Home and Matter-and-Thread hub…umm…matter? Let’s find out.

eBay once again comes through

Sorry, folks, but given my per-teardown monetary compensation, I’m not going to drop $100 on a brand new dissection “patient”, especially if I’m not confident upfront that I’ll be able to get it back together afterwards in cosmetically pristine and fully functional form. Fortunately, back in early May, I came across a “Porcelain” color (“Hazel” is also available) used-condition device with all accessories included on eBay for $52.25 plus tax, with free shipping. It was a bit beat up, but seemingly still worked fine:

Here’s how it and the accompanying accessories arrived (inside a bubble wrap-rich cardboard box, of course), as usual, in the following photo (and others to come) accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Let’s have a close-up peek at the power supply first. I was admittedly surprised to still see Google shipping devices accompanied by wall warts with legacy USB-A outputs, mated to USB-A to USB-C cables, although the combo still seemingly provides sufficient juice to power the streamer:

That’s a 5V/1.5A (7.5W) output, if you can’t discern the faint fine print:

Next, the remote control:

It’s a slightly larger version of the one bundled with the Chromecast with Google TV HD (to the right in the following photos), notably moving the volume controls to the front versus the side:

And now for the star of the show, with the following specifications:

  • Length: 6.4 in
  • Width: 3.0 in
  • Height: 1.0 in
  • Weight: 5.7 oz

Note that (optional for use, in addition to built-in Wi-Fi) wired Ethernet support is integrated this time, not necessitating the use of a separate USB-C hub. More generally, left-to-right, there’s the status LED, a “find remote” button that does double-duty for reset purposes, USB-C (software-enabled for both power and peripheral data purposes), GbE Ethernet, and HDMI 2.1:

Open sesame

Time to dive inside. That underside rubberized “foot” is usually a fruitful pathway bet:

No luck yet, but the various-shaped and -sized opening outlines barely visible below the translucent next-level layer are encouraging:

That’s better…

…save for the lingering “bubble” after I put the “foot” back in place, a familiar sight to anyone who’s ever imperfectly applied a screen protector…

Let’s pause for a moment and take in the lay of the land:

There are screw heads in all four corners, along with recessed tabs on both sides, and additional holes (with metal visible within them) at both the top and bottom edges:

Removing the screws was easy:

The tabs were more of a struggle and, ultimately, a surprise. What I thought I needed to do was to carefully bend them out of the way, thereby enabling the two halves to vertically separate. And indeed, I was able to shift one to the side, fortunately not breaking it in the process. But when I turned my attention to the other, the two halves instead separated sideways in response:

And then they vertically lifted apart. Turns out I could have saved myself some trouble (and potential tab breakage) by just sliding them apart from the beginning:

Tackling various temperature inhibitors

Next up: that sizeable heat sink. Remember the earlier-mentioned “additional holes at both the top and bottom edges”? Those were for the four additional screws that now need to be removed; the tips had been visible through the holes to the other (bottom) side:

Houston, we have liftoff:

Next, the PCB, held in place by plastic tabs (and the connectors’ inserts to the case back panel):

And yes, as you can see from the now-present smear, I got thermal paste all over myself, etc. in the process of getting the PCB out of the bottom case half:

A close-up of the LED light pipe and button mechanical bits:

Voila:

Already visible are the PCB-embedded Wi-Fi antennae on both sides; the TV Streamer 4K supports Wi-Fi 802.11ac (both 2.4 GHz and 5 GHz) along with both Bluetooth 5.1 and a Thread transceiver. Before going any further, let’s get rid of the rest of that thermal paste, properly this time (via rubbing alcohol and a tissue):

Now let’s flip the PCB over and see what the other side reveals:

Another Faraday cage! And another embedded antenna (lower left). I’m guessing that it’s for Bluetooth and, doing double-duty, Thread, both protocols being 2.4 GHz-based.

While here, let’s get this cage off. Unlike most I’ve encountered, this one has numerous discrete “dimpled” tabs holding it in place, versus longer segments each with multiple embedded “dimples”:

Tedious patience eventually won out, however:

The “fins” (which I presume are for “spring” purposes) on top of the Faraday cage are interesting:

And what’s with the three gold-color “clips” (for lack of a better word) scattered around the cage, readers? I’ve seen them in past teardowns, too; I’m not sure what purpose they serve:

A new generation, a supplier transition

A closeup reveals, at lower left, an unknown chip stamped thusly:

MG21
A020H1
B02ARA
2436

to its right, an unknown-function MediaTek MT6393GN (although this has me suspecting it’s a power management controller, and to my earlier “what SoC is in the design this time” question: hmm, MediaTek?), and at lower right, a Samsung K4FBE3D4HB-MGCL 32 Gbit LPDDR4 DRAM:

Back to the topside, and (tediously, again) off with another Faraday cage:

More thermal paste inside, unsurprisingly:

Zooming in, I’m guessing that the application processor is at far left, under the lingering lump of paste (which I’ll attempt to clean up next). Below it is the nonvolatile storage, a Kioxia (formerly Toshiba Semiconductor) THGAMVG8T13BAIL 32 GByte eMMC flash memory. To its right is the wired Ethernet transceiver, a Realtek RTL8211F. And at far right is the wireless communications nexus, MediaTek’s MT7663BSN “802.11a/b/g/n/ac Wi-Fi 2T2R + Bluetooth v5.1 Combo Chip”.

Who’ll take my bet that under that glob of thermal paste is a MediaTek-sourced SoC?

I win! It’s the MT8696, based on a quad-core Arm Cortex-A55 and capable of clocking at up to 2 GHz. I can’t read the markings on the crystal in the SoC’s upper left corner, but TechInsights’ analysis report, which I’ll revisit soon, says that the MT8696 runs at 1.8 GHz in this design.

All that was left was to apply fresh thermal paste everywhere I’d cleaned it off, set the Faraday cages back on top of their brackets, push the tabs back in place, snap some side-view shots:

and then fire it back up and see if it still works. I didn’t bother with putting the top back in place at first, in case it didn’t work, but that white LED glow in the lower left is an encouraging sign.

Huzzah!

I let it run for about 15 minutes to ensure that it was thermally stable, then unplugged it and completed the reassembly process.

Is the enemy of my enemy my friend?

In closing, I’ll share the report summary of another teardown I came across, from TechInsights, with the identities of a few other ICs. And I’ll toss out a few questions for your introspection:

  • Given Google’s conspicuous reference to this one as the “4K” model, will they follow up later with an “HD” edition as they did in the Chromecast with Google TV era?
  • Given the subsequent unveiling of both Walmart’s aforementioned 4K Pro Streaming Device and even newer “little brother” (sorta…hold that thought for another teardown to come) onn. 4K Plus Streaming Device, plus other manufacturers’ Google TV O/S-based products, all significantly lower priced, just how many TV Streamer 4Ks does Google really expect to sell?
  • And at the end of the day, given that Google is fundamentally a software company (with a software-licensing business model), does it matter? Is TV Streamer 4K fundamentally just a showcase product to advance the feature set of the overall market, analogous to Microsoft and its Surface computer product line? Said another way, are Amazon (with its various Fire OS-based devices), Apple (with tvOS-based Apple TV products), and Roku (Roku OS-based sticks, boxes, and TVs) Google’s real competitors?

Wrapping up, some words I previously wrote (and EDN subsequently published) last August:

Competing against a foundation-software partner who’s focused on volume at the expense of per-unit profit (even willing to sell “loss leaders” in some cases, to get customers in stores and on the website in the hopes that they’ll also buy other, more lucrative items while they’re there) is a tough business for Google to be in, I suspect. Therefore, the pivot to the high end, letting its partners handle the volume market while being content with the high-profit segment.

How well (or not) has my year-back perspective held up? Any other thoughts on what I’ve shared today? Let me (and your fellow readers) know in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

The post The Google TV Streamer 4K: Hardware updates on display(s) appeared first on EDN.

EMC compliance spanning instruments, software, and systems

EDN Network - Mon, 08/25/2025 - 15:48

A variety of electromagnetic compatibility (EMC) testing solutions—standalone instruments, software, and systems—will be on display at Rohde & Schwarz’s booth during the IEEE EMC Europe 2025 symposium, held at Sorbonne Université in Paris from 1-5 September 2025.

Start with HF1444G14, the new high-gain electromagnetic interference (EMI) microwave antenna covering 14.9 to 44 GHz. It will be paired with the company’s ESW EMI test receiver to demonstrate full compliance testing with a single measurement. The ESW EMI test receiver, boasting an FFT bandwidth of up to 970 MHz, facilitates measurements of CISPR frequency bands C and D in a single sweep.

Figure 1 The ESW EMI test receiver offers a wide measurement bandwidth and high dynamic range. Source: Rohde & Schwarz

Next, the EPL1007 EMI test receiver, supporting frequency ranges up to 7.125 GHz, can be used either for EMI pre-compliance testing or as a CISPR 16-1-1-compliant receiver. It’s a portable device that can operate on batteries, which makes it suitable for a wide range of testing environments.

Figure 2 The EPL1007 EMI test receiver is suitable for conducted and radiated measurements. Source: Rohde & Schwarz

Then there is the ELEKTRA test software, which automates EMC testing for EMI and electromagnetic susceptibility (EMS) measurements of an equipment under test (EUT). The software simplifies test configuration, speeds up test execution, and generates comprehensive test reports. Rohde & Schwarz will demonstrate new features of this test software, including the latest capabilities for immunity testing in reverberation chambers.

Figure 3 The ELEKTRA test software captures the entire system to measure EMI emissions and EMS immunity. Source: Rohde & Schwarz

Moreover, the Munich, Germany-based test and measurement company will also demonstrate EMI debugging on its oscilloscopes and probing solutions. Rohde & Schwarz’s MXO 5 oscilloscopes—featuring an update rate of more than 4.5 million wfms/s and more than 45k FFT/s for spectrum analysis—will be paired with the isolated probing system RT-ZISO to allow users to debug digital and power electronic devices quickly.

Rohde & Schwarz will also present four technical sessions at the conference.

The post EMC compliance spanning instruments, software, and systems appeared first on EDN.

Power Integrations rolls out reference design kit for solar race cars featuring high-efficiency GaN IC

Semiconductor today - Mon, 08/25/2025 - 15:15
Power Integrations Inc of San Jose, CA, USA (which provides high-voltage integrated circuits for energy-efficient power conversion) is rolling out a new reference design kit tailored specifically for solar-powered race cars as 37 student teams prepare to race across the Australian Outback in the Bridgestone World Solar Challenge, starting 24 August...

Plessey Semiconductors acquired by Haylo Labs

Semiconductor today - Mon, 08/25/2025 - 15:07
Plessey Semiconductors Ltd of Plymouth, UK — which develops embedded micro-LED technology for augmented-reality and mixed-reality (AR/MR) display applications — has been acquired by London-based Haylo Labs, which has been established by Haylo Ventures (a venture operator founded in 2023 to build and scale deep-tech businesses) to focus specifically on the micro-LED, optical compute, and interconnect sectors...

Rocket Lab expands US investments for national security programs and semiconductor manufacturing

Semiconductor today - Mon, 08/25/2025 - 13:49
Launch services and space systems company Rocket Lab Corp of Long Beach, CA, USA (the parent company of space power provider SolAero Technologies Corp) is boosting its US investments to expand semiconductor manufacturing capacity and provide supply chain security for space-grade solar cells and electro-optical sensors for national security space missions. The investments are supported by a $23.9m award through the US Department of Commerce, as part of the CHIPS and Science Act...

Subscribe to Кафедра Електронної Інженерії aggregator