News from the world of micro- and nanoelectronics

Navigating energy efficiency in O-RAN architectures

EDN Network - Fri, 05/17/2024 - 16:44

Open radio access network (O-RAN) technology is driving the mobile communication industry toward an open, virtualized, and disaggregated architecture. O-RAN breaks traditional hardware-centric RANs into building blocks—radios, hardware, and virtualized functions—enabling mobile network operators to create their RANs using a multivendor, interoperable, and autonomous supply ecosystem. Using open and standardized interfaces, O-RAN enables network vendors to focus on specific building blocks rather than creating an entire RAN. Similarly, operators can mix and match components from multiple vendors. While O-RAN delivers many improvements, energy efficiency is a top priority.

Energy efficiency in RAN

Global efforts toward a carbon-neutral future and consumer demand for greener products have increased the urgency to focus on energy efficiency. Sustainability is a critical priority for the information and communications technology (ICT) industry, which is committed to making 6G and 5G a green reality.

Three key performance indicators define objectives and characterize improvements for a RAN energy optimization effort:

  • Energy consumption (EC) represents the energy used to power the infrastructure. The European Telecommunications Standards Institute (ETSI) ES 202 706-1 defines EC as the integral of power consumption.
  • Energy savings (ES) represents the reduction of energy consumed with minimal impact on the quality of service (QoS).
  • Energy efficiency (EE) refers, in a general sense, to a measure of how an appliance or system uses energy. EE is the ratio between useful output or service over the required energy input.
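
As a toy illustration of these definitions (all numbers below are hypothetical), EC can be computed as the time integral of sampled power, and EE as the useful output delivered per unit of that energy:

```python
# Hypothetical power samples for a base station (watts), taken at
# 1-second intervals over a short measurement window.
power_w = [800, 820, 950, 990, 870, 810]
dt_s = 1.0

# EC per ETSI ES 202 706-1: the integral of power over time,
# approximated here with the trapezoidal rule (joules).
ec_joules = sum((a + b) / 2 * dt_s for a, b in zip(power_w, power_w[1:]))

# Useful output over the same window: bits delivered to users (hypothetical).
delivered_bits = 2.5e9

# EE: useful output over the required energy input (bits per joule).
ee_bits_per_joule = delivered_bits / ec_joules
print(f"EC = {ec_joules:.0f} J, EE = {ee_bits_per_joule:.0f} bit/J")
```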

These three factors must be considered together. EE improvement strategies apply power-saving mechanisms when full performance is not needed, thereby minimizing the impact on QoS and the user experience. Engineers need a balanced approach that depends on their QoS goals, and they must make insightful measurements to understand power-consumption rates under different load conditions and metrics.

Energy efficiency in O-RAN

The ETSI ES 203 228 test specification considers the gNodeB as a whole. However, the O-RAN Alliance® recognizes the urgency of addressing EE and EC in a disaggregated RAN. For example, the initial version of the O-RAN fronthaul interface specification included signaling mechanisms to notify the radio unit about periods of non-usage of radio symbols. These signaling mechanisms enabled the radio to halt transmission and conserve power. The fronthaul interface now provides the capability to inform the network about energy-saving capabilities in each radio, such as carrier deactivation, enabling automated activation and deactivation of energy-saving mechanisms. The O-RAN Alliance is also developing energy-saving test cases to ensure conformance and enhance vendor interoperability.

As shown in Figure 1, energy consumption spans the entire network, including the grid, RAN, core, and transport, depending on many parameters: from RF channels to topology. Therefore, energy consumption requires a comprehensive approach involving multiple parts of an operator’s organization to capture all components. Test engineers must consider and tweak numerous parameters and variables to identify optimal configurations. The main question remains: How can test engineers reduce energy consumption and costs without impacting QoS?

Figure 1 O-RAN architecture with user equipment (UE) and core network. Source: Keysight

Reducing energy consumption and cost without impacting QoS

1. Reducing energy consumption

There are numerous techniques to reduce EC at a network level, each requiring varying levels of effort for implementation. Migrating technology from legacy platforms onto the most recent and energy-efficient platforms can immediately reduce network energy consumption. Such a migration requires an upfront investment in new equipment and resources to perform the upgrade, but it is a relatively low engineering effort. If investing in equipment upgrades is not possible, analyzing existing deployments, eliminating redundancies, and identifying overprovisioned devices helps improve energy consumption.

While it requires a mix of engineering and equipment investments, network topology optimization can also help reduce EC by determining the ideal minimum subset of equipment necessary to cover different topologies without sacrificing the QoS.

2. Optimizing energy efficiency

To maximize energy efficiency, engineers must continuously adapt user demand to the supply of network resources. They need to dynamically allocate the correct number of computing services and radio resources to match demand, and aggregate user demand to ensure full use of each resource. For example, engineers can avoid running two servers at 50% load each when one server at full load would suffice. By applying this methodology at various levels, from the system down to the device/chipset level, engineers can optimize EE.
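
The consolidation idea above can be sketched as a simple first-fit packing: rather than spreading demand across half-idle servers, pack it onto as few fully used ones as possible (server capacity and per-service loads below are hypothetical):

```python
def consolidate(loads, capacity=1.0):
    """First-fit-decreasing packing of per-service loads onto servers.

    Returns a list of per-server load totals; fewer servers means
    fewer machines that must stay powered on.
    """
    servers = []
    for load in sorted(loads, reverse=True):
        for i, used in enumerate(servers):
            if used + load <= capacity:
                servers[i] += load
                break
        else:
            servers.append(load)  # no room anywhere: power on a new server
    return servers

# Two services at 50% load fit on one fully used server instead of two.
print(consolidate([0.5, 0.5]))       # → [1.0]
print(consolidate([0.6, 0.5, 0.4])) 
```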

System-level intelligence is another way to maximize EE, making dynamic resource-allocation decisions across the network: engineers can activate or turn off nodes on wireless networks and perform load balancing to redirect users to active nodes. They can apply similar resource-allocation decisions down at the device hardware and chipset level.

Semiconductors and chipsets are the first elements of the energy chain; hence, they are the main contributors to energy consumption and efficiency. New chipset generations provide advanced resource optimization capabilities, such as turning discrete digital resources on the chip on or off. Engineers can also adjust analog parameters (clock speed and bandwidth) to trade performance for lower power consumption, since dynamic energy is a function of the electrical transitions in each gate.
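
A first-order CMOS model makes this relationship concrete: dynamic power scales with switching activity, capacitance, the square of supply voltage, and clock frequency. The values below are hypothetical:

```python
def dynamic_power(c_farads, v_volts, f_hz, activity=1.0):
    """First-order CMOS dynamic power: P = a * C * V^2 * f."""
    return activity * c_farads * v_volts**2 * f_hz

# Hypothetical chip: 1 nF effective switched capacitance.
p_full = dynamic_power(1e-9, 1.0, 2e9)  # full clock at nominal voltage
p_half = dynamic_power(1e-9, 0.8, 1e9)  # halved clock with lowered voltage
print(p_full, p_half)  # halving f and scaling V cuts power to ~a third
```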

At the radio level, engineers can perform additional optimization with innovative scheduling capabilities in the O-RAN distributed unit when traffic is low. They can regroup physical resource blocks (PRBs) from multiple symbols into a reduced number and augment the transmission blanking time.

Optimizing EE requires using chipset-level power-saving capabilities to the fullest extent possible. Only then can engineers decide where to host the power-decision entity so as to prevent conflicts.

3. Standardization

The emergence of standardized O-RAN drives the need to define standards that enable intelligent control and energy optimization of multivendor-based networks. Ultimately, the Alliance’s work will result in new O-RAN specifications and technical reports sections. The Alliance’s ongoing work includes defining procedures, methodology, use cases, and test case definitions for cell/carrier switch on/off, RF channel selection, advanced sleep modes, and cloud resource management.

4. Embedded and chipset-based energy optimization

Intelligent control loops significantly contribute to energy optimization at the system level. But these loops are also appropriate at the chipset level as they contribute to local power optimization within a device.

Chipset sleep-mode mechanisms consist of deactivating or slowing down functions for a specified period. Different sleep levels enable multiple levels of energy saving.

However, each sleep mode comes at a cost: the deeper the sleep and the greater the energy saving, the more time the chipset needs to enter sleep mode and to wake up again. These transitions are not energy-efficient and may offset the gains from the sleep phase. Sleep strategies must therefore optimize the trade-off between transitions and sleep phases.
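
The trade-off can be made concrete: a sleep phase only pays off when the energy saved while asleep exceeds the fixed energy cost of the entry/wake-up transitions. A sketch with hypothetical numbers:

```python
def breakeven_sleep_s(p_active_w, p_sleep_w, e_transition_j):
    """Minimum sleep duration for which entering a sleep mode saves
    energy, given a fixed transition-energy cost."""
    return e_transition_j / (p_active_w - p_sleep_w)

# Hypothetical deep-sleep mode: large savings, but an expensive wake-up.
t_min = breakeven_sleep_s(p_active_w=10.0, p_sleep_w=1.0, e_transition_j=4.5)
print(f"sleep must last at least {t_min:.2f} s to save energy")
```

For idle gaps shorter than this break-even time, a shallower mode with a cheaper transition is the better choice.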

How to measure and evaluate the energy efficiency of O-RAN components

Figure 2 shows an O-RAN architecture which consists of the following components:

  • O-RAN radio unit (O-RU) for processing the lower part of the physical layer
  • O-RAN distributed unit (O-DU) for baseband processing, scheduling, radio link control, medium access control, and the upper part of the physical layer
  • O-RAN central unit (O-CU) for the packet data convergence protocol layer
  • O-RAN intelligent controller to gather information from the network and perform the necessary optimization tasks

Figure 2 An overview of O-RAN architecture with the O-RU for processing the lower part of the PHY layer, O-DU for processing the upper part of the PHY layer, O-CU for the packet data convergence protocol layer, and an O-RAN intelligent controller to perform optimization. Source: Keysight

Energy plane testing requires a cross-domain measurement system and cross-correlation of the data to gain meaningful insights into the energy performance of the RAN components. The testing combines power measurement with the protocol and RF domains. As O-RAN and the 3rd Generation Partnership Project (3GPP) standards evolve quickly, equipment manufacturers must ensure product compliance with the latest versions. Automated test cases and report generation are key to keeping regression testing aligned with the latest versions of the standards.

Measure the energy efficiency of an O-RU

Improving RAN energy consumption and efficiency requires minimizing power usage while maximizing performance. For RU testing, the ETSI ES 202 706 standard, which describes the test methodology for measuring power consumption in a gNodeB, can be adapted to make similar measurements on an O-RU under different load conditions representing a typical day in the life of an RU: the load changes during the test in low, medium, high, and full steps (Figure 3). By measuring the O-RU at different loads, engineers can calculate the total energy consumed.

Figure 3 Decoded constellation, signal spectrum, allocated PRBs, EVM per modulation type and decoded bits. Source: Keysight

To measure the energy efficiency of an O-RU, test engineers need an O-DU emulator, a DC power supply, and an RF power sensor (Figure 4). The O-DU emulator generates different static traffic levels from low, medium, busy, to full load traffic as defined by the ETSI ES 202 706-1 standard. A DC power supply provides power to the O-RU and measures the accumulated power consumption over time. The RF power sensor measures the output power at the antenna connector port. The ratio of output RF power to input DC power represents the energy efficiency measurement.
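
As a sketch of that measurement (all figures hypothetical, not from any real O-RU), efficiency can be computed per load step as RF output power over DC input power, and the daily energy total accumulated across the load profile:

```python
# Hypothetical O-RU measurements at the four ETSI ES 202 706-1 static
# load steps: (DC input power in W, RF output power in W, hours per day).
load_steps = {
    "low":    (250.0, 10.0, 6.0),
    "medium": (300.0, 25.0, 10.0),
    "busy":   (360.0, 45.0, 6.0),
    "full":   (400.0, 60.0, 2.0),
}

# Per-step energy efficiency: RF output power over DC input power.
efficiency = {name: rf / dc for name, (dc, rf, _) in load_steps.items()}

# Total energy consumed over the 24-hour profile, in watt-hours.
ec_wh = sum(dc * hours for dc, _, hours in load_steps.values())
print(efficiency)
print(f"daily EC = {ec_wh:.0f} Wh")
```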

Figure 4 O-RU test set-up with an O-DU emulator, power sensor, as well as a power supply and analyzer. Source: Keysight

Measure the energy efficiency of O-CU/O-DU

Ensuring accurate and standardized EE measurement of the O-DU and O-CU in an O-RAN involves assessing various factors related to power consumption, resource utilization, and overall network performance. As EE is the ratio of delivered bits to consumed energy, test engineers need access to the user equipment (UE) throughput data to ensure that lower EC does not come at the cost of lower quality of service. The fronthaul and backhaul interfaces require emulation to measure the EE of an O-DU and O-CU. In addition, test engineers must be able to simulate the traffic profiles of different UEs.

The fronthaul requires an O-RU emulator to provide the interface to the O-DU. A UE emulator simulates the traffic flow that the UEs request toward the O-RU emulator. The backhaul requires a core emulator or a live core network. An AC or DC power supply capable of recording the output power measures the combined energy consumption of the O-DU and O-CU. The ETSI specification does not cover the disaggregated base-station architecture, so the points of power measurement can vary depending on the implementation. The test software generates an energy efficiency report by simulating different UE traffic profiles with varying path loss, file size, and throughput.

Evaluate the performance of gNodeB

To test a gNodeB, test engineers can use a set of automated test cases and analytics tools based on ETSI standards. The test setup should include a UE emulator, a core network emulator, and a power analyzer. The UE emulator emulates stateful UE traffic and measurements, while the core network emulator terminates the calls from the UE emulator for stateful O-DU/O-CU testing. Both emulators require dimensioning for the load-testing scenarios, and a power analyzer measures the server’s power consumption.

Energy efficient wireless networks

While the wireless communication industry increasingly prioritizes sustainability and net zero strategies, achieving energy efficiency has become as important as performance, reliability, and security. As wireless networks evolve into multivendor disaggregated systems, collaboration among chipsets, equipment, and test vendors is necessary to optimize power consumption without compromising performance.

Moving forward, test and measurement companies should focus on delivering cutting-edge technology and tools that accelerate the transition to green and sustainable wireless communications, realizing the network performance and capital expenditure (CapEx) advantages of O-RAN. To achieve that, understanding the energy performance of RAN components is key. As highlighted in this article, there are methodologies providing standardized and accurate assessments of energy efficiency in RAN components, essential for optimizing network performance while minimizing energy consumption in the increasingly dynamic and complex telecommunications landscape.

Chaimaa Aarab is a use case marketer focused on the wireless industry (5G, 6G, Wi-Fi 7, O-RAN) at Keysight Technologies. Her background is in electronics engineering with previous experience as a technical support engineer and market industry manager. 



 Related Content


The post Navigating energy efficiency in O-RAN architectures appeared first on EDN.

ESA awards €0.5m to Phlux, Airbus and Sheffield University to develop free-space optics satellite terminals

Semiconductor today - Fri, 05/17/2024 - 15:33
Sheffield University spin-off Phlux Technology (which designs and manufactures 1550nm avalanche photodiode infrared sensors), Airbus Defence and Space, and The University of Sheffield have embarked on a €500,000 project to build more efficient free-space optical communications (FSOC) satellite terminals...

High-level object-oriented Python package for Digitizers and Generators

ELE Times - Fri, 05/17/2024 - 13:02

Spectrum Instrumentation presents versatile Python programming for all its 200+ products

Bangalore, India – 16 May 2024. Spectrum Instrumentation presents a new open-source Python package (“spcm”) that is now available for the current line of all Spectrum Instrumentation test and measurement products. The new package makes the programming of all 200+ instruments, offering sampling rates from 5 MS/s to 10 GS/s, faster and easier. Python, popular for its simplicity, versatility, and flexibility, boasts an extensive collection of libraries and frameworks (such as NumPy) that significantly accelerate development cycles. The new spcm package allows users to take full advantage of the Python language by providing a high-level Object-Oriented Programming (OOP) interface specifically designed for the Spectrum Instrumentation Digitizer, AWG, and Digital I/O products. It includes the full source code as well as a number of detailed examples. Available on GitHub, spcm is free of charge under the MIT license.

Spectrum’s Python package safely handles the automatic opening and closing of cards, groups of cards, and Ethernet instruments, as well as the allocation of memory for transferring data to and from these devices. All the device-specific functionality is encapsulated in easy-to-use classes. This includes clock and trigger settings, hardware channel settings, card synchronization, direct memory access (DMA), and product features such as Block Averaging, DDS and Pulse Generator.
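
That automatic open/close handling follows Python’s standard context-manager pattern. The sketch below is a generic, hypothetical illustration of the pattern — not the actual spcm API — showing how a class can guarantee the driver resource is released even if an error occurs:

```python
class Device:
    """Hypothetical instrument handle illustrating the context-manager
    pattern an OOP instrument package can use for safe open/close."""

    def __init__(self, address):
        self.address = address
        self.is_open = False

    def __enter__(self):
        self.is_open = True   # stand-in for the real driver open call
        return self

    def __exit__(self, exc_type, exc, tb):
        self.is_open = False  # stand-in for the real driver close call
        return False          # do not swallow exceptions from the body

with Device("/dev/spcm0") as card:
    assert card.is_open       # usable inside the with-block
# On exit — normal or via exception — the device is closed automatically.
print(card.is_open)
```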

The package supports the use of real-world physical quantities and units (e.g. “10 MHz”) enabling the user to directly program driver settings in their preferred unit system. This removes the need for tedious manual conversions to cryptic API settings. Moreover, this package also includes support for calculations with NumPy and Matplotlib, allowing the user to handle data coming from, or going to, the products with the vast toolbox provided by those packages. Detailed examples can be found in the GitHub repository.

Installing the package is easy, thanks to its availability in the pip repository. Simply install Python and then the package with a single command: $ pip install spcm

Users can include the Spectrum Instrumentation Python package in their own programs, or fork the repository to add more functionality. The package is maintained directly by Spectrum engineers, and updates are released regularly, offering bug fixes and new features.

The example in the photo shows the opening of the first analog-output card (AWG) and programming of a simple 10 MHz sine-wave output using the DDS option.

The Spectrum Python repository is found under: https://github.com/SpectrumInstrumentation/spcm

The post High-level object-oriented Python package for Digitizers and Generators appeared first on ELE Times.

Skyworks’ quarterly revenue falls 12.9% to $1.046bn

Semiconductor today - Fri, 05/17/2024 - 11:17
For its fiscal second-quarter 2024 (to 29 March) Skyworks Solutions Inc of Irvine, CA, USA (which manufactures analog and mixed-signal semiconductors) has reported revenue of $1046m, down 12.9% on $1201.5m last quarter and 9.5% on $1153.1m a year ago...

Micro-LED IP plateaus after seven years of exponential growth

Semiconductor today - Fri, 05/17/2024 - 11:12
Apple effectively pioneered the micro-LED industry, thrusting it into the spotlight back in 2014 with the acquisition of micro-LED startup Luxvue. However, in February, Apple pulled the plug on its smartwatch micro-LED project despite a decade-long investment totaling $3bn, reports market analyst firm Yole Group in ‘Micro-LED Display Intellectual Property Landscape 2024’...

Gartner Identifies the Top Five Strategic Technology Trends in Software Engineering for 2024

ELE Times - Fri, 05/17/2024 - 10:09

By 2026, 80% of Large Software Engineering Organizations Will Establish Platform Engineering Teams, up from 45% in 2022

Gartner, Inc. announced the top five strategic technology trends in software engineering for 2024 and beyond. Analysts presented these findings during the Gartner Application Innovation & Business Solutions Summit, which is taking place here through today.

According to a Gartner survey of 300 software engineering and application development team managers in the U.S. and UK in the fourth quarter of 2023, 65% of software engineering leaders count meeting business objectives among their top three performance objectives. By investing in disruptive technologies, software engineering leaders can empower their teams to meet business objectives for productivity, sustainability and growth.

“The technology trends Gartner has identified are already helping early adopters to achieve business objectives,” said Joachim Herschmann, VP Analyst at Gartner. “These disruptive tools and practices enable software engineering teams to deliver high-quality, scalable AI-powered applications, while reducing toil and friction in the software development life cycle (SDLC), improving developer experience and productivity.”

The top five strategic technology trends for software engineering for 2024 are:

  • Software Engineering Intelligence

Software engineering intelligence platforms provide a unified, transparent view of engineering processes that helps leaders to understand and measure not only velocity and flow but also quality, organizational effectiveness and business value.

Gartner predicts by 2027, 50% of software engineering organizations will use software engineering intelligence platforms to measure and increase developer productivity, compared to 5% in 2024.

  • AI-Augmented Development

Software engineering leaders need a cost-effective way to help their teams build software faster. According to the Gartner survey, 58% of respondents said their organization is using or planning to use generative AI over the next 12 months to control or reduce costs.

AI-augmented development is the use of AI technologies, such as generative AI and machine learning, to aid software engineers in designing, coding and testing applications. AI-augmented development tools integrate with a software engineer’s development environment to produce application code, enable design-to-code transformation and enhance application testing capabilities.

“Investing in AI-augmented development will support software engineering leaders in boosting developer productivity and controlling costs and can also improve their teams’ ability to deliver more value,” said Herschmann.

  • Green Software Engineering

Green software engineering is the discipline of building software that is carbon-efficient and carbon-aware. Building green software involves making energy-efficient choices for architecture and design patterns, algorithms, data structures, programming languages, language runtimes and infrastructure.

Gartner predicts by 2027, 30% of large global enterprises will include software sustainability in their non-functional requirements, up from less than 10% in 2024.

The use of compute-heavy workloads increases an organization’s carbon footprint, and generative AI-enabled applications are especially energy-intensive, so implementing green software engineering will help organizations prioritize their sustainability objectives.

  • Platform Engineering

Platform engineering reduces cognitive load for developers by offering underlying capabilities via internal developer portals and platforms that multiple product teams can use. These platforms provide a compelling “paved road” to software development, which saves time for developers and improves their job satisfaction.

Gartner predicts that by 2026, 80% of large software engineering organizations will establish platform engineering teams, up from 45% in 2022.

  • Cloud Development Environments

Cloud development environments provide remote, ready-to-use access to a cloud-hosted development environment with minimal effort for setup and configuration. This decoupling of the development workspace from the physical workstation enables a low-friction, consistent developer experience and faster developer onboarding.

The post Gartner Identifies the Top Five Strategic Technology Trends in Software Engineering for 2024 appeared first on ELE Times.

STMicroelectronics reveals monolithic automotive synchronous buck converters for light-load, low-noise, and isolated applications

ELE Times - Fri, 05/17/2024 - 09:25

Save space and ease integration in car body electronics, audio systems, and inverter gate drivers

STMicroelectronics has introduced new automotive-qualified step-down synchronous DC/DC converters that save space and ease integration in applications including body electronics, audio systems, and inverter gate drivers.

The A6983 converters offer flexible design choices, comprising six non-isolated step-down converters in low-consumption and low-noise configurations and the A6983I isolated buck converter. With compensation circuitry on-chip, these highly integrated monolithic devices need only minimal external components including filtering, feedback, and a transformer with the A6983I.

The non-isolated A6983 converters can supply up to 3A load current and achieve 88% typical efficiency at full load. The low-consumption variants (A6983C) are optimized for light-load operation, with high efficiency and low output ripple, to minimize drain on the vehicle battery in applications that remain active when parked. The low-noise A6983N variants operate with constant switching frequency and minimize output ripple across the load range for optimum performance in applications such as audio-system power supplies. Both types offer a choice of 3.3V, 5.0V, and adjustable output voltage from 0.85V to VIN.
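
To put the quoted 88% full-load efficiency in context, a quick operating-point calculation (the 5 V / 3 A point and the resulting losses are an illustrative sketch, not ST data beyond the quoted efficiency):

```python
v_out, i_load, efficiency = 5.0, 3.0, 0.88  # 5 V rail at the full 3 A load

p_out = v_out * i_load     # power delivered to the load: 15 W
p_in = p_out / efficiency  # DC input power drawn from the supply bus
p_loss = p_in - p_out      # power dissipated in the converter itself
print(f"P_in = {p_in:.2f} W, loss = {p_loss:.2f} W")
```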

The A6983I is a 10W iso-buck converter with primary-side regulation that eliminates the need for an optocoupler. Ideal for use as an isolated gate driver for IGBTs or silicon-carbide (SiC) MOSFETs in traction inverters and on-board chargers (OBCs), this converter allows accurate adjustment of the primary output voltage. The transformer turns ratio determines the secondary voltage.

All isolated and non-isolated variants have a low quiescent operating current of 25µA and a power-saving shutdown mode that draws less than 2µA. The input-voltage range of 3.5V to 38V and load-dump tolerance up to 40V prevent disruption due to transients on the main supply bus. There is also output overvoltage protection, thermal protection, and internal soft start. In addition, optional spread-spectrum operation helps lower electromagnetic interference (EMI) in noise-sensitive applications, and a power-good pin enables power sequencing. The A6983I and A6983 allow synchronization to an external clock.

The converters are offered in a 3mm x 3mm QFN16 package. Pricing starts at $1.75 for the A6983 and $1.81 for the A6983I, for orders of 1000 pieces, and free samples of the A6983 and A6983I are available from the ST eStore. The STEVAL-A6983CV1 and STEVAL-A6983NV1 A6983 evaluation boards and STEVAL-L6983IV for the A6983I are available to kickstart development and accelerate project completion.

The post STMicroelectronics reveals monolithic automotive synchronous buck converters for light-load, low-noise, and isolated applications appeared first on ELE Times.

Greencore Electronics Drives Innovation in the Automotive Industry Through its Products

ELE Times - Thu, 05/16/2024 - 14:42

Greencore Electronics is making significant strides in the automotive space with its products. By focusing on smart mobility solutions like hybrid and electric vehicles, autonomous driving, and connected cars, Greencore demonstrates a forward-thinking approach to addressing the evolving needs of the automotive market while contributing to sustainability efforts. Having an in-house research and development department speaks volumes about the company’s dedication to innovation and quality assurance. It ensures that their products meet and exceed market standards, reinforcing their reputation for reliability and durability.

Overall, Greencore Electronics Pvt. Ltd. is making commendable efforts to provide comprehensive and cutting-edge solutions for various automotive needs, from convenience to safety to environmental sustainability.

Pavan Puri, Founder and Managing Director at Greencore Electronics

Rashi Bajpai, Sub-Editor at ELE Times interacted with Mr Pavan Puri, Founder and Managing Director at Greencore Electronics about their services and products.

This is an excerpt from the conversation.



ELE Times: Brief us on your product catalogue, specifying their applications and customer base.

Pavan Puri: Our product portfolio includes a fast car phone USB charger, a 3-in-1 charging cable, a door handle scratch guard with a universal design and size to fit most automobiles, and a car charger extension cable for rear-seat charging. Furthermore, the shark-fin antenna allows for simple and painless installation with no drilling necessary. We also have a powerful car vacuum cleaner for both wet and dry debris, as well as a car air purifier ionizer with two ports for superior air quality. There is also a single-DIN audio system (12V) with remote control, and a puncture repair kit that repairs a tire in less than ten minutes.

The customer base for automotive electronic products includes both automotive manufacturers and end consumers. Manufacturers purchase these systems to integrate into vehicles, while end consumers (car buyers) benefit from features such as ADAS, infotainment systems, telematics, wireless connectivity, and electric/hybrid vehicle systems.

ELE Times: In light of the Make in India initiative that is driving India towards self-reliance and sustainability, how is Greencore contributing towards the same through its expertise in automotive electronics?

Pavan Puri: As a leading automotive electronics company, Greencore is committed to supporting India’s Make in India initiative by leveraging our expertise in developing innovative and sustainable automotive solutions. Recently, we took a significant step towards this goal by launching India’s first Made-in-India car vacuum cleaner. This initiative aligns perfectly with the Made-in-India initiative, as it not only demonstrates our commitment to self-reliance but also contributes to the sustainability goals of the nation.

In addition to helping the Indian economy, we are ensuring that our products are of the best quality and at an affordable price for Indian customers. Furthermore, these endeavours aid in India’s pursuit of becoming a global manufacturing hub.

ELE Times: Greencore develops cutting-edge solutions in the automotive electronics sector. What core technologies and innovations are you currently working on?

Pavan Puri: At Greencore, we’re currently focused on several essential technologies and innovations to enhance the driving experience. Our primary focus is on developing a comprehensive connected infotainment system that seamlessly integrates with other vehicle systems. We’re also working on advanced 360-degree camera systems to enhance safety and improve the driver’s visibility.

Additionally, our radar sensors are being optimized for ADAS (Advanced Driver Assistance Systems) vehicles to provide reliable collision detection and avoidance capabilities. To further support the growth of electric vehicles, we’re developing telematics tracking systems and fast charging ports for both commercial vehicles (CV) and electric vehicles (EV), ensuring efficient and convenient charging solutions.

ELE Times: Help us understand the application areas of automotive electronics in today’s times.

Pavan Puri: Automotive electronics play an important part in modern vehicles, contributing to their operation, safety, and efficiency. Automotive electronics applications include Advanced Driver Assistance Systems (ADAS), which use sensors and cameras to give features such as adaptive cruise control, lane departure warning, and automatic emergency braking to make driving safer. Infotainment systems provide entertainment, navigation, and connectivity through touchscreen displays, GPS navigation, and Bluetooth connectivity.

In addition, telematics systems use telecommunications and GPS to offer services like car tracking, remote diagnostics, and emergency support. Also, wireless connectivity, such as Wi-Fi, Bluetooth, and cellular connectivity, enables the seamless integration of smartphones and other devices, improving the overall driving experience and convenience.

ELE Times: Give us some insights into your R&D process and goals. Also, shed some light on Greencore’s vision for the next decade.

Pavan Puri: Our R&D process is deeply rooted in innovation, safety, and reliability. Our goal is to develop cutting-edge electronic systems that enhance vehicle performance, safety, and efficiency, as we prioritize detailed testing, validation, and development before launching any product. Our rigorous testing procedures ensure that our products meet the highest safety standards and comply with government regulations. Certifications are obtained as per government norms to guarantee the safety and reliability of every product we bring to market.

Greencore’s goal for the next decade involves continued breakthroughs in vehicle electronics, with an emphasis on sustainability, connectivity, and autonomous driving technology. We are dedicated to pushing the frontiers of innovation while emphasizing security as well as reliability in all of our operations.

The post Greencore Electronics Drives Innovation in the Automotive Industry Through its Products appeared first on ELE Times.

ROHM showcasing EcoGaN and SiC power semiconductors at PCIM Europe

Semiconductor today - Thu, 05/16/2024 - 10:09
In booth 304 (hall 9) at the Power, Control and Intelligent Motion (PCIM) Europe 2024 trade fair in Nuremberg, Germany (11–13 June), ROHM is presenting its new power semiconductor solutions, with a special focus on wide-bandgap devices...

Built a discrete triangle wave generator

Reddit:Electronics - Thu, 05/16/2024 - 00:41

Thought I'd challenge myself and depart from the tired methods of buying miniscule op-amps and smack something together from spare parts, although I bought some decent-quality components from Mouser to build the final version lol

Took me about 2 hours to design and another 3 to fully work out.

This thing is run by an LC oscillator. From what I could gather, the inductor creates a high voltage at the junction between the 1K resistor and the collector of Q1, which is fed into a resistor-transistor inverter of sorts (Q2), and then run through a miller integrator (Q3). The result is this extremely clean triangle wave with only a small amount of frequency drift (I estimated about 1% over the course of an hour, but I attribute it to the half-dead battery I'm using). I won't pretend like I know every detail about how this thing works, but I honestly didn't expect it to run this well.

The schematic, the board, and the resulting waveform are shown in the images accompanying the original post.

submitted by /u/ItchyContribution758

Porotech selects ClassOne’s Solstice single-wafer platform for development and manufacture of GaN products

Semiconductor today - Wed, 05/15/2024 - 21:03
Fabless micro-LED company Poro Technologies Ltd (a spin off from the Cambridge Centre for Gallium Nitride at the UK’s University of Cambridge) has selected the Solstice single-wafer platform of ClassOne Technology of Kalispell, MT, USA (which manufactures electroplating and wet-chemical process systems for ≤200mm wafers) for the development and manufacture of GaN products for applications requiring silicon wafer substrates...

Blue laser maker NUBURU enters medical device market

Semiconductor today - Wed, 05/15/2024 - 20:27
NUBURU Inc of Centennial, CO, USA — which was founded in 2015 and develops and manufactures high-power industrial blue lasers — has received a purchase order for its BlueScan solution from Blueacre Technology of Dundalk, Ireland (which was founded in 2005 to provide contract manufacturing solutions to the medical device industry). The BL laser will be integrated by Blueacre with scan head optics to produce a versatile welding system for the manufacture of precision medical devices...

Just built a new table

Reddit:Electronics - Wed, 05/15/2024 - 18:25

Unfortunately space demands to be filled.

submitted by /u/zxobs

Bias for HF JFET

EDN Network - Wed, 05/15/2024 - 16:57

Junction field-effect transistors (JFETs) usually require some reverse bias voltage to be applied to the gate terminal.

In HF and UHF applications, this bias is often provided using the voltage across the source resistor Rs (Figure 1).

Figure 1: JFETs typically require some reverse bias across the gate terminal and in HF/UHF applications, this is often provided using the voltage across resistor Rs.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Barring the evident lack of efficiency, such an approach has other shortcomings as well:

  • The drain current has statistical dispersion, so some circuit adjustment is required to hit a target current value.
  • The drain current may drift with temperature or supply fluctuations.
  • Several bypass capacitors Cs have to be used to achieve an acceptably low source impedance.
  • A higher supply voltage is required to maintain the same headroom.
  • The source's lack of direct contact with the ground plane worsens cooling of the transistor, which is crucial for power applications.

The circuit in Figure 2 is free of all these drawbacks. It consists of a control loop that produces a control voltage of negative polarity for an n-channel JFET amplifier.

Figure 2: A control loop that produces control voltage of negative polarity for n-channel JFET amplifier in HF and UHF applications.

The circuit uses two IR333C infrared LEDs (5 mm diameter) in a homemade photocoupler: two such LEDs placed face-to-face in a suitable PVC tube about 12 mm long, and that’s all. One such device produces 0.81 V at Iled < 4 mA, which is quite sufficient for the FHX35LG HEMT, for example.

Of course, if you need higher voltage, several such devices can be simply cascaded.

The main amplification in the loop is performed by the JFET itself; the loop gain is about gm * R1, where gm is the transconductance of Q1.

The transistor pair Q2 and Q3 compares the voltage drops across resistors R1 and R2, forcing them to be equal. Hence, by changing the ratio R2:R3, you can set the operating point you need:

Id = Vdd * R2 / ((R2 + R3) * R1)

As we can see, the drain current (Id) still depends on the supply voltage (Vdd). To avoid this dependence, we can replace resistor R2 with a Zener diode; then:

Id = Vz / R1
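As a quick sanity check, the two bias equations can be evaluated numerically. The component values below are illustrative assumptions for the sake of the arithmetic, not values taken from the article:

```python
# Hypothetical component values (not from the article), chosen only to
# illustrate the two bias equations for the Figure 2 control loop.
Vdd = 12.0     # supply voltage, V
R1 = 100.0     # drain-side sense resistor, ohm
R2 = 1_000.0   # divider resistor, ohm
R3 = 10_000.0  # divider resistor, ohm

# Resistor-divider version: Id = Vdd * R2 / ((R2 + R3) * R1)
Id_divider = Vdd * R2 / ((R2 + R3) * R1)

# Zener version: Id = Vz / R1 (independent of the supply voltage)
Vz = 1.1       # assumed Zener voltage, V
Id_zener = Vz / R1

print(f"Id (divider) = {Id_divider * 1e3:.2f} mA")  # 10.91 mA
print(f"Id (Zener)   = {Id_zener * 1e3:.2f} mA")    # 11.00 mA
```

With these values the two operating points come out nearly equal, showing how a Zener of roughly Vdd * R2 / (R2 + R3) volts reproduces the divider's set point while removing the Vdd dependence.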

 Peter Demchenko studied math at the University of Vilnius and has worked in software development.

 Related Content


The post Bias for HF JFET appeared first on EDN.

Thin PCBs: Challenges with BGA packages

EDN Network - Wed, 05/15/2024 - 16:33

During the electrical design process, certain design choices must be made. One example is a USB Type-C design that uses a straddle-mount connector; in such a scenario, the connector's thickness governs, and therefore constrains, the overall PCB thickness. For historical reasons, the standard PCB thickness is 0.063” (1.57 mm).

Before the advent of PCBs, transistor-based electronics were often assembled using a method called breadboarding, which involved using wood as a substrate. However, wood was fragile, leading to delicate assemblies. To address this, bakelite sheets, commonly used on workbench surfaces, became the standard substrate for electronic assemblies, with a thickness of 1/16 inch, marking the beginning of PCBs at this thickness.

Figure 1 A PCB cross section is shown with a straddle-mount type connector. Source: Wurth Elektronik

Take the example of Wurth Elektronik’s USB 3.1 plug, a straddle-mount connector with part number 632712000011. The part datasheet recommends a PCB thickness of 0.8 mm/0.031” for optimal use. This board thickness is common among various board fabrication houses, and a 0.031” board is relatively easy to fabricate, as many fab houses can build a 6-layer PCB with 1 oz copper on each layer.

However, designing and working with thin PCBs presents several challenges. One of the primary concerns is mechanical fragility. Thin PCBs are more flexible and prone to bending or warping, making them difficult to handle during assembly and more susceptible to damage. Handling includes the pick-and-place assembly process, hole drilling, in-circuit testing (ICT), and functional-test probing.

The second level of handling is by the end user, for example, dropping the device containing the PCB assembly (PCBA). Additionally, thin PCBs often require specialized manufacturing processes and materials, leading to increased production costs. Component placement becomes more critical as well, as traces may need to be positioned closer together, increasing the risk of short circuits and signal interference.

Furthermore, thin PCBs face challenges in heat dissipation due to their reduced thermal mass. Addressing these challenges demands careful consideration during the design, manufacturing, and assembly stages to ensure the reliability and performance of the final product.

These issues are especially critical when a designer mounts a ball grid array (BGA) component on a 0.031” thickness board. Most of major fabrication houses recommend a minimum thickness of 0.062” when BGAs are mounted on the board.

How to test durability

The mechanical durability of PCB assemblies is generally assessed using a drop test. Drop test requirements for a PCBA typically include specifying the drop height, drop surface, number of drops, orientation during the drop, acceptance criteria, and testing standards. The drop height is the distance from which the PCBA will be dropped, typically ranging from 30 to 48 inches, depending on the application and industry standards.

The drop surface, such as concrete or wood, is also defined. Manufacturers determine the number of drops the PCBA must withstand, usually between 3 to 6 drops. The orientation of the PCBA during the drop, whether face down, face up, or on an edge or corner, is also specified. Acceptance criteria, such as functionality after the drop and any visible damage, are clearly defined.

Testing standards like IPC-TM-650 or specific customer requirements guide the testing process. For a medical device, the drop test requirements are governed by the applicable section of IEC 60601-1 Third Edition 2005-12. By establishing these requirements, manufacturers ensure that their PCBAs and products are robust enough to withstand real-world use and maintain functionality even after being subjected to drops and impacts.
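The drop-test parameters above can be captured in a small spec object. This is an illustrative sketch only: the field names are my own, and the limits simply encode the typical ranges described in the text, not values from IPC-TM-650 or IEC 60601-1:

```python
from dataclasses import dataclass

@dataclass
class DropTestSpec:
    # Illustrative field names; limits follow the typical ranges above.
    height_in: float               # drop height in inches (typically 30-48)
    surface: str                   # e.g., "concrete" or "wood"
    drops: int                     # number of drops (typically 3-6)
    orientations: tuple[str, ...]  # e.g., ("face down", "face up", "corner")

    def validate(self) -> bool:
        # Flag specs that fall outside the typical industry ranges.
        assert 30 <= self.height_in <= 48, "drop height outside 30-48 in"
        assert 3 <= self.drops <= 6, "drop count outside 3-6"
        assert self.orientations, "at least one orientation required"
        return True

spec = DropTestSpec(36.0, "concrete", 4, ("face down", "face up", "corner"))
print(spec.validate())  # True
```

Encoding the spec this way makes the acceptance criteria explicit and machine-checkable before a test campaign is run.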

A failing solder joint might not be noticed during a drop test until a functional failure is observed. The BGA can fail due to assembly-related issues such as thermal stresses during soldering or poor solder-joint quality, and a thin board is further weakened by excessive mechanical shock and vibration during assembly.

These defects can be captured during a drop test, as the BGA part may not withstand the stresses encountered, as shown in the figures below. The BGA failures can be inspected using X-ray, optical inspection, or electrical testing; a detailed failure analysis may be performed on cross sections using scanning electron microscopy (SEM).

Figure 2 The BGA solder joint shows a line crack. Source: Keyence

Figure 3 The above image displays a cross section of a healthy BGA. Source: Keyence

Figure 4 Here is a view of some of the BGA failure modes. Source: Semlabs

How to fix BGA failure on thin PCBs

Pad cratering is the fracturing of laminate under Cu pads of surface mount components, which often occurs during mechanical events. The initial crack can propagate, causing electrically open circuits by affecting adjacent Cu conducting lines. It’s more common in lead-free assemblies due to different laminate materials. Mitigation involves reducing stress on the laminate or using stronger, more pad cratering-resistant materials.

The issue can be fixed by mechanically strengthening the PCB or by changing the laminate material, using any of the following steps.

  • Thinner boards are more prone to warping and may require additional fixturing (stiffeners or work-board holders) to be processed on the manufacturing line. A PCB stiffener is not an integral part of the circuit board; rather, it’s an external structure that offers mechanical support to the board.

Figure 5 An aluminum bar is shown as a mechanical PCB stiffener. Source: Compufab

  • Apply adhesive/epoxy on the BGA corners or use BGA underfill. Adhesives that can be used for this purpose include Zymet UA-3307-B Edgebond, Korapox 558, and Eccobond 286. The epoxy along the BGA corners, or as an underfill, strengthens the assembly, preventing PCB flexion and hence the failure.
  • Place strict limitations on board flexure during circuit-board assembly operations, for instance by supporting the PCB during handling operations like via-hole drilling, pick and place, ICT, or functional testing with flying probes.
  • Match the recommended soldering profile of the BGA. The issue can be made worse if the BGA manufacturer’s recommended soldering profile is not followed, resulting in cold solder joints. There should be enough thermocouples on the PCB panel to monitor its temperature during reflow.
  • Ensure that the BGA pad size follows the manufacturer’s recommendation.

Managing thin PCB challenges

A thin PCB (0.031”) can weaken the PCB assembly, making it susceptible to mechanical and thermal stresses, and the challenges are unique when mounting a BGA on a thin board.

However, the design challenges and risks can be managed by carefully controlling the PCB handling processes and then strengthening the thin PCB with design solutions discussed in this article.

Editor’s Note: The views expressed in the article are author’s personal opinion.

Jagbir Singh is a staff electrical engineer for robotics at Smith & Nephew.

Related Content


The post Thin PCBs: Challenges with BGA packages appeared first on EDN.

The 2024 Google I/O: It’s (pretty much) all about AI progress, if you didn’t already guess

EDN Network - Wed, 05/15/2024 - 16:32

Starting last year, as I mentioned at writeup publication time, EDN asked me to do yearly coverage of Google’s (or is that Alphabet’s? whatevah) I/O developer conference, as I’d already long been doing for Apple’s WWDC developer-tailored equivalent event, and on top of my ongoing throughout-the-year coverage of notable Google product announcements:

And, as I also covered extensively a year ago, AI ended up being the predominant focus of Google I/O’s 2023 edition. Here’s part of the upfront summary of last year’s premier event coverage (which in part explains the rationalization for the yearly coverage going forward):

Deep learning and other AI operations…unsurprisingly were a regularly repeated topic at Wednesday morning’s keynote and, more generally, throughout the multi-day event. Google has long internally developed various AI technologies and products based on them—the company invented the transformer (the “T” in “GPT”) deep learning model technique now commonly used in natural language processing, for example—but productizing those research projects gained further “code red” urgency when Microsoft, in investment partnership with OpenAI, added AI-based enhancements to its Bing search service, which competes with Google’s core business. AI promises, as I’ve written before, to revolutionize how applications and the functions they’re based on are developed, implemented and updated. So, Google’s ongoing work in this area should be of interest even if your company isn’t one of Google’s partners or customers.

And unsurprisingly, given Google’s oft-stated, at the time, substantial and longstanding planned investment in various AI technologies and products and services based on them, AI was again the predominant focus at this year’s event, which took place earlier today as I write these words, on Tuesday, May 14:

But I’m getting ahead of myself…

The Pixel 8a

Look back at Google’s Pixel smartphone family history and you’ll see a fairly consistent cadence:

  • One or several new premium model(s) launched in the fall of a given year, followed by (beginning with the Pixel 3 generation, to be precise)
  • one (or, with the Pixel 4, two) mainstream “a” variant(s) a few calendar quarters later

The “a” variants are generally quite similar to their high-end precursors, albeit with feature set subtractions and other tweaks reflective of their lower price points (along with Google’s ongoing desire to still turn a profit, therefore the lower associated bill of materials costs). And for the last several years, they’ve been unveiled at Google I/O, beginning with the Pixel 6a, the mainstream variant of the initial Pixel 6 generation based on Google-developed SoCs, which launched at the 2022 event edition. The company had canceled Google I/O in 2020 due to the looming pandemic, and 2021 was 100% virtual and was also (bad-pun-intended) plagued by ongoing supply chain issues, so mebbe they’d originally planned this cadence earlier? Dunno.

The new Pixel 8a continues this trend, at least from feature set foundation and optimization standpoints (thicker display bezels, less fancy-pants rear camera subsystem, etc.). And by the way, please put in proper perspective reviewers who say things like “why would I buy a Pixel 8a when I can get a Pixel 8 for around the same price?” They’re not only comparing apples to oranges; they’re also comparing old versus new fruit (this is not an allusion to Apple; that’s in the next paragraph). The Pixel 8 and 8 Pro launched seven months ago, and details on the Pixel 9 family successors are already beginning to leak. What you’re seeing are retailers promo-pricing Pixel 8s to clear out inventory, making room for Pixel 9 successors to come soon. And what these reviewers are doing is comparing them against brand-new list-price Pixel 8as. In a few months, order will once again be restored to the universe. That all said, to be clear, if you need a new phone now, the Pixel 8 is a compelling option.

But here’s the thing…this year, the Pixel 8a was unveiled a week prior to Google I/O, and even more notably, right on top of Apple’s most recent “Let Loose” product launch party. Why? I haven’t yet seen a straight answer from Google, so here are some guesses:

  • It was an in-general attempt by Google to draw attention away from (or at least mute the enthusiasm for) Apple and its comparatively expensive (albeit non-phone) widgets
  • Specifically, someone at Google had gotten a (mistaken) tip that Apple might roll out one (or a few) iPhone(s) at the event and decided to proactively queue up a counterpunch
  • Google had so much else to announce at I/O this year that they, not wanting the Pixel 8a to get lost in all the noise, decided to unveil it ahead of time instead.
  • They saw all the Pixel 8a leaks and figured “oh, what the heck, let’s just let ‘er rip”.

The Pixel Tablet (redux)

But that wasn’t the only thing that Google announced last week, on top of Apple’s news. And in this particular case the operative term is relaunched, and the presumed reasoning is, if anything, even more baffling. Go back to my year-back coverage, and you’ll see that Google launched the Tensor G2-based Pixel Tablet at $499 (128GB, 255GB for $100 more), complete with a stand that transforms it into an Amazon Echo Show-competing (and Nest Hub-succeeding) smart display:

Well, here’s the thing…Google relaunched the very same thing last week, at a lower price point ($399), but absent the stand in this particular variant instance (the stand-inclusive product option is still available at $499). It also doesn’t seem that you can subsequently buy the stand, more accurately described as a dock (since it also acts as a charger and embeds speakers that reportedly notably boost sound quality), separately. That all said, the stand-inclusive Pixel Tablet is coincidentally (or not) on sale at Woot! for $379.99 as I type these words, so…🤷‍♂️

And what explains this relaunch? Well:

  • Apple also unveiled tablets that same day last week, at much higher prices, so there’s the (more direct in this case, versus the Pixel 8a) competitive one-upmanship angle, and
  • Maybe Google hopes there’s sustainable veracity to the reports that Android tablet shipments (goosed by lucrative trade-in discounts) are increasing to iPads’ detriment?

Please share your thoughts on Google’s last-week pre- and re-announcements in the comments.


Turnabout is fair play, it seems. Last Friday, rumors began circulating that OpenAI, the developer of the best-known GPT (generative pre-trained transformer) LLM (large language model), among others, was going to announce something on Monday, one day ahead of Google I/O. And given the supposed announcement’s chronological proximity to Google I/O, those rumors further hypothesized that perhaps OpenAI was specifically going to announce its own GPT-powered search engine as an alternative to Google’s famous (and lucrative) offering. OpenAI ended up in-advance denying the latter rumor twist, at least for the moment, but what did get announced was still (proactively, it turned out) Google-competitive, and with an interesting twist of its own.

To explain, I’ll reiterate another excerpt from my year-ago Google I/O 2023 coverage:

The way I look at AI is by splitting up the entire process into four main steps:

  1. Input
  2. Analysis and identification
  3. Appropriate-response discernment, and
  4. Output

Now a quote from the LLM-focused section of my 2023 year-end retrospective writeup:

LLMs’ speedy widespread acceptance, both as a generative AI input (and sometimes also output) mechanism and more generally as an AI-and-other interface scheme, isn’t a surprise…their popularity was a matter of when, not if. Natural language interaction is at the longstanding core of how we communicate with each other after all, and would therefore inherently be a preferable way to interact with computers and other systems (which Star Trek futuristically showcased more than a half-century ago). To wit, nearly a decade ago I was already pointing out that I was finding myself increasingly (and predominantly, in fact) talking to computers, phones, tablets, watches and other “smart” widgets in lieu of traditional tapping on screens and keyboards, and the like. That the intelligence that interprets and responds to my verbally uttered questions and comments is now deep learning trained and subsequently inferred versus traditionally algorithmic in nature is, simplistically speaking, just an (extremely effective in its end result, mind you) implementation nuance.

Here’s the thing: OpenAI’s GPT is inherently a text-trained therefore text-inferring deep learning model (steps 2 and 3 in my earlier quote), reflected in the name of the “ChatGPT” AI agent service based on it (later OpenAI GPT versions also support still image data). To speak to an LLM (step 1) as I described in the previous paragraph, for example, you need to front-end leverage another OpenAI model and associated service called Whisper. And for generative AI-based video from text (step 4) there’s another OpenAI model and service, back-end this time, called Sora.

Now for that “interesting twist” from OpenAI that I mentioned at the beginning of this section. In late April, a mysterious and powerful chatbot named “gpt2-chatbot” appeared on a LLM comparative evaluation forum, only to disappear shortly thereafter…and reappear again a week after that. Its name led some to deduce that it was a research project from OpenAI (further fueled by a cryptic social media post from CEO Sam Altman) —perhaps a potential successor to latest-generation GPT-4 Turbo—which had intentionally-or-not leaked into the public domain.

Turns out, we learned on Monday, it was a test-drive preview of the now-public GPT-4o (“o” for “omni”). And not only does GPT-4o outperform OpenAI precursors as well as competitors, based on Chatbot Arena leaderboard results, it’s also increasingly multimodal, meaning that it’s been trained on and therefore comprehends additional input (as well as generating additional output) data types. In this case, it encompasses not only text and still images but also audio and vision (specifically, video). The results are very intriguing. For completeness, I should note that OpenAI also announced chatbot agent application variants for both MacOS and Windows on Monday, following up on the already-available Android and iOS/iPadOS versions.

Google Gemini

All of which leads us (finally) to today’s news, complete with the aforementioned 121 claimed utterances of “AI” (no, I don’t know how many times they said “Gemini”):

@verge Pretty sure Google is focusing on AI at this year’s I/O. #google #googleio #ai #tech #technews #techtok ♬ original sound – The Verge

Gemini is Google’s latest LLM, previewed a year ago, formally unveiled in late 2023 and notably enhanced this time around. Like OpenAI with GPT, Google’s deep learning efforts started out text-only with models such as LaMDA and PaLM; more recent Gemini has conversely been multimodal from the get-go. And pretty much everything Google talked about during today’s keynote (and will cover more comprehensively all week) is Gemini in origin, whether as-is or:

  • Memory footprint and computational “muscle” fine-tuned for resource-constrained embedded systems, smartphones and such (Gemini Nano, for example), and/or
  • Training dataset-tailored for application-specific use cases

including the Gemma open model variants.

In the interest of wordcount (pushing 2,000 as I type this), I’m not going to go through each of the Gemini-based services and other technologies and products announced today (and teased ahead of time, in Project Astra’s case) in detail; those sufficiently motivated can watch the earlier-embedded video (upfront warning: 2 hours), archived liveblogs and/or summaries (linked to more detailed pieces) for all the details. As usual, the demos were compelling, although it wasn’t entirely clear in some cases whether they were live or (as Google caught grief for a few months ago) prerecorded and edited. More generally, the degree of success in translating scripted and otherwise controlled-environment demo results into real-life robustness (absent hallucinations, please) is yet to be determined. Here are a few other tech tidbits:

  • Google predictably (they do this every year) unveiled its sixth-generation TPU (Tensor Processing Unit) architecture, code-named Trillium, with a claimed 4.7x performance boost in compute performance per chip versus today’s 5th-generation precursor. Design enhancements to achieve this result include expanded (count? function? both? not clear) matrix multiply units, faster clock speeds, doubled memory bandwidth and the third-generation SparseCore, a “specialized accelerator for processing ultra-large embeddings common in advanced ranking and recommendation workloads,” with claimed benefits both in training throughput and subsequent inference latency.
  • The company snuck a glimpse of some AR glasses (lab experiment? future-product prototype? not clear) into a demo. Google Glass 2, Revenge of the Glassholes, anyone?
  • And I couldn’t help but notice that the company ran two full-page (and identical-content, to boot) ads for YouTube in today’s Wall Street Journal even though the service was barely mentioned in the keynote itself. Printing error? Google I/O-unrelated v-TikTok competitive advertising? Again, not clear.

And with that, my Google I/O coverage is fini for another year. Over to all of you for your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post The 2024 Google I/O: It’s (pretty much) all about AI progress, if you didn’t already guess appeared first on EDN.

Canada’s NSERC awards grants worth $1.1m to research gallium extraction

Semiconductor today - Wed, 05/15/2024 - 16:11
Two researchers at Northern Ontario School of Medicine (NOSM) University and Laurentian University have been awarded funding from grants administered through the Natural Sciences and Engineering Research Council of Canada (NSERC) Alliance Missions program. One project studies how gallium can be mined more efficiently while the second project focuses on the extraction of critical minerals from tailings ponds...

Luminus launches 4-in-1 red-green-blue-lime color-mix LEDs

Semiconductor today - Wed, 05/15/2024 - 15:51
Luminus Devices Inc of Sunnyvale, CA, USA – which designs and makes LEDs and solid-state technology (SST) light sources for illumination markets – has launched a series of 4-in-1 RGBL (red-green-blue-lime) LEDs designed for stage and architectural lighting systems that require high-output color mixing with high color-rendering index (CRI)...

Rohde & Schwarz introduces the MXO 5C series, the world’s most compact oscilloscope with up to 2 GHz bandwidth

ELE Times - Wed, 05/15/2024 - 13:45

Rohde & Schwarz extends its portfolio with a 2U-high oscilloscope/digitizer tailored for rack-mount and other applications where a low-profile form factor is critical. The new MXO 5C series is the company’s first oscilloscope without an integrated display. It delivers the same performance as the previously introduced MXO 5 series, but with a fourth of the vertical height.

Rohde & Schwarz introduces the new MXO 5C oscilloscope with four or eight channels. The new series is based on the next-generation MXO 5 oscilloscope and specifically addresses rack mount and automated test system applications where users are often confronted with space limitations. The instrument’s 2U vertical height – just 3.5” or 8.9 cm – allows engineers to deploy it in test systems where a traditional oscilloscope with a large display would not fit. The compact form factor is also of value in applications with high channel density where users need a large number of channels in a small volume. Users operate the instrument via the integrated web interface, or they interact with it exclusively programmatically and use the instrument as a high-speed digitizer.
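When used headless, such an instrument is typically driven with remote commands; the exact command set is documented in the instrument's manual. As a minimal, instrument-agnostic sketch, here is a helper that parses the standard IEEE 488.2 `*IDN?` identification response (`<vendor>,<model>,<serial>,<firmware>`). The response string in the example is invented for illustration, not an actual MXO 5C response:

```python
def parse_idn(response: str) -> dict:
    """Split an IEEE 488.2 *IDN? response
    (<vendor>,<model>,<serial>,<firmware>) into named fields."""
    fields = [f.strip() for f in response.strip().split(",")]
    if len(fields) != 4:
        raise ValueError(f"unexpected *IDN? response: {response!r}")
    return dict(zip(("vendor", "model", "serial", "firmware"), fields))

# A plausible (invented) response string for illustration:
info = parse_idn("Rohde&Schwarz,MXO58C,123456,1.2.3\n")
print(info["model"])  # MXO58C
```

In practice, the query would be sent over a VISA/LAN session to the IP address shown on the instrument's front-panel E-ink display, and the parsed fields used to verify the expected model and firmware before running an automated test sequence.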

Like other MXO oscilloscopes, the new MXO 5C series builds on the next-generation MXO-EP processing ASIC technology developed by Rohde & Schwarz. It offers the world’s fastest acquisition capture rate of up to 4.5 million acquisitions per second, making it the first compact oscilloscope that allows engineers to capture up to 99% of real-time signal activity, enabling them to see more signal details and infrequent events than with any other oscilloscope.

Philip Diegmann, Vice President Oscilloscopes at Rohde & Schwarz, said: “While oscilloscopes with large displays work well for bench usage, we’ve had a number of customers ask for a version that is tailored for rack mount applications. At the same time, we have customers who need a large channel count, for example in physics. With the MXO 5C we created a unique instrument that offers the best possible performance for both scenarios.” The new form factor allows many channels to be placed in close proximity: the eight-channel MXO 5C occupies just 1500 cm³ per channel and consumes just 23 W per channel.

While primarily designed for rack-mount usage, the instrument doubles as a stand-alone bench oscilloscope. Users can simply attach an external display via the built-in DisplayPort and HDMI connectors, or they can access the instrument’s GUI via a web interface by typing the oscilloscope’s IP address into their browser. As the first oscilloscope to offer E-ink display technology, the MXO 5C shows the IP address and other critical information on a small non-volatile display on the front of the instrument, which stays visible even when power is switched off.

Like the MXO 5, the MXO 5C series comes in both four- and eight-channel models, with 100 MHz, 200 MHz, 350 MHz, 500 MHz, 1 GHz, and 2 GHz bandwidth options. The starting price of EUR 18,000 for the eight-channel models sets a new industry standard. Various upgrade options are available to users with demanding application needs, such as 16 digital channels with a mixed-signal oscilloscope (MSO) option, an integrated dual-channel 100 MHz arbitrary waveform generator, protocol decode and triggering options for industry-standard buses, and a frequency response analyzer to enhance the capabilities of the instrument.

The new MXO 5C series oscilloscopes are now available from Rohde & Schwarz and selected distribution channel partners. For more information on the instrument, visit the company’s website.


The post Rohde & Schwarz introduces the MXO 5C series, the world’s most compact oscilloscope with up to 2 GHz bandwidth appeared first on ELE Times.


Subscribe to Кафедра Електронної Інженерії aggregator - Новини світу мікро- та наноелектроніки