Feed aggregator

Semiconductor Attributes for Sustainable System Design

ELE Times - Fri, 08/09/2024 - 12:45

Courtesy: Jay Nagle, Principal Product Marketing Engineer, Microchip Technology Inc.

Gain further insights on some of the key attributes required of semiconductors to facilitate sustainability in electronic systems design.

Semiconductor Innovations for Sustainable Energy Management

As systems design becomes more technologically advanced, the resultant volume increase in electronic content poses threats to environmental sustainability. Global sustainability initiatives are being implemented to mitigate these threats. However, with the rise of these initiatives, there is also an increasing demand for the generation of electricity. Thus, a new challenge emerges: how can we manage these increasing levels of energy consumption?

To meet the growing demand for electricity while reducing greenhouse gas emissions, renewable energy sources must take an increasing share of generation from fossil fuels. The efficiency of a renewable energy source hinges on optimizing the transfer of energy from the source to the power grid or various electrical loads. These loads include commonly used consumer electronics, residential appliances and large-scale battery energy storage systems. Furthermore, the electrical loads themselves must draw an optimal amount of power during operation to encourage efficient energy usage.

Read on to learn more about the key attributes of semiconductors that contribute to enhanced sustainability in system designs.

Integrated circuits (ICs) or application-specific integrated circuits (ASICs) used for renewable power conversion and embedded systems must have four key features: low power dissipation, high reliability, high power density and security.

Low Power Dissipation

One of the main characteristics needed in a semiconductor for sustainable design is low power consumption. This extends battery life, allowing longer operating times between recharges, which ultimately conserves energy.

There are two leading sources of semiconductor power loss. The first is static power dissipation, or power consumption when a circuit is in a stand-by or non-operational state. The second is dynamic power dissipation, or power consumption when the circuit is in an operational state.

To reduce both static and dynamic power dissipation, semiconductors are designed to minimize capacitance through their internal layout, operate at lower voltage levels and activate functional blocks only as needed, depending on whether the device is in “deep sleep” stand-by or functional mode.
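As a rough illustration, the two loss mechanisms above can be captured with the standard first-order CMOS power formulas. The figures below are illustrative values chosen for the sketch, not Microchip specifications:

```python
# First-order CMOS power model (all values illustrative).
def dynamic_power(alpha, c_farads, vdd, freq_hz):
    """Dynamic power: P = alpha * C * Vdd^2 * f
    (switching activity, node capacitance, supply voltage, clock frequency)."""
    return alpha * c_farads * vdd ** 2 * freq_hz

def static_power(i_leak_amps, vdd):
    """Static (leakage) power: P = I_leak * Vdd, drawn even in stand-by."""
    return i_leak_amps * vdd

# Lowering the supply from 3.3 V to 1.8 V cuts dynamic power quadratically.
p_33 = dynamic_power(0.1, 10e-12, 3.3, 100e6)
p_18 = dynamic_power(0.1, 10e-12, 1.8, 100e6)
print(f"{p_33 * 1e3:.2f} mW -> {p_18 * 1e3:.2f} mW")  # 1.09 mW -> 0.32 mW
```

This is why the techniques described above pay off: reducing capacitance and supply voltage shrinks dynamic loss quadratically with voltage, while powering down functional blocks attacks the static leakage term.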

Microchip offers low power solutions that are energy efficient and reduce hazardous e-waste production.

High Reliability

The reliability of parts and the longevity of the system are key measures of semiconductor performance in sustainable system designs. Semiconductor reliability and longevity can be compromised by operation near the limits of the device’s temperature ratings and by mechanical stresses such as torsion.

To address these concerns, we use Very Thin Quad Flat No-Lead (VQFN) and Thin Quad Flat Pack (TQFP) packages to encapsulate complex layouts in small form factors. Exposed pads on the bottom surface of the VQFN package dissipate heat effectively, maintaining a low junction-to-case thermal resistance when the device operates at maximum capacity. TQFP packages use gull-wing leads on low-profile packages to withstand torsion and other mechanical stresses.

High Power Density

Power density refers to the amount of power generated per unit of die size. Semiconductors with high power densities can run at high power levels while being packaged in small footprints. This is common in silicon carbide (SiC) wide-bandgap (WBG) discretes and power modules used in solar, wind and electric-vehicle power-conversion applications.

SiC enhances power-conversion systems by allowing the system to operate at higher frequencies, reducing the size and weight of electrical passives needed to transfer the maximum amount of power from a renewable source.
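To see why higher switching frequency shrinks the passives, consider a first-order buck-converter inductor sizing rule. The converter values below are hypothetical, chosen only to show the scaling, not taken from any specific design:

```python
# Sketch: buck-converter inductor sizing, L = (Vin - Vout) * D / (dI * f_sw),
# with duty cycle D = Vout / Vin. All values illustrative.
def buck_inductance(vin, vout, f_sw, di_ripple):
    duty = vout / vin
    return (vin - vout) * duty / (di_ripple * f_sw)

l_si  = buck_inductance(400.0, 200.0, 50e3, 2.0)   # silicon-class switching frequency
l_sic = buck_inductance(400.0, 200.0, 250e3, 2.0)  # SiC-class switching frequency
print(l_si / l_sic)  # required inductance (and roughly its size) shrinks 5x at 5x the frequency
```

The same inverse-frequency scaling applies to the filter capacitors, which is why SiC's faster switching directly reduces the size and weight of a power-conversion stage.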

Our WBG SiC semiconductors offer several advantages over traditional silicon devices, such as operation at higher temperatures and faster switching speeds. SiC devices’ low switching losses improve system efficiency, while their high power density reduces size and weight. They can also achieve a smaller footprint through reduced heat-sink dimensions.

Security

Security in semiconductors is almost synonymous with longevity, as security features can enable continued reuse of existing systems. This means that the design can be operated for longer periods of time without the need for replacement or becoming outdated.

There are helpful security features that support system longevity. For example, secure and immutable boot can verify the integrity of any necessary software updates to enhance system performance or fix software bugs. Secure key storage and node authentication can protect against external attacks as well as ensure that verified code runs on the embedded design.

The post Semiconductor Attributes for Sustainable System Design appeared first on ELE Times.

ADAS and autonomous vehicles with distributed aperture radar

EDN Network - Fri, 08/09/2024 - 12:33

The automotive landscape is evolving, and vehicles are increasingly defined by advanced driver-assistance systems (ADAS) and autonomous driving technologies. Moreover, radar is becoming increasingly popular for ADAS applications, offering multiple benefits over rival technologies such as cameras and LiDAR.

It’s a lot more affordable, and it also operates more efficiently in challenging conditions, such as in the dark, when it’s raining or snowing, or even when sensors are covered in dirt. As such, radar sensors have become a workhorse for today’s ADAS features such as adaptive cruise control (ACC) and automatic emergency braking (AEB).

However, improved radar performance is still needed to ensure reliability, safety, and convenience of ADAS functions. For example, the ability to distinguish between objects like roadside infrastructure and stationary people or animals, or to detect lost cargo on the road, are essential to enable autonomous driving features. Radar sensors must provide sufficient resolution and accuracy to precisely detect and localize these objects at long range, allowing sufficient reaction time for a safe and reliable operation.

A radar’s performance is strongly influenced by its size. A bigger sensor has a larger radar aperture, which typically offers a higher angular resolution. This delivers multiple benefits and is essential for the precise detection and localization of objects in next-generation safety systems.
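The aperture-resolution relationship follows the diffraction limit, θ ≈ λ/D. The quick sketch below assumes the 77 GHz automotive radar band and illustrative aperture sizes; neither figure comes from the article:

```python
import math

C = 3e8  # speed of light, m/s

def angular_resolution_deg(freq_hz, aperture_m):
    """Diffraction-limited beamwidth: theta ~ lambda / D, converted to degrees."""
    wavelength = C / freq_hz
    return math.degrees(wavelength / aperture_m)

# At 77 GHz, doubling the aperture halves the achievable beamwidth.
print(round(angular_resolution_deg(77e9, 0.10), 2))  # ~2.23 deg for a 10 cm sensor
print(round(angular_resolution_deg(77e9, 0.45), 2))  # ~0.5 deg needs ~45 cm of aperture
```

The second line shows the bind described in the text: sub-degree resolution demands an aperture far larger than any single sensor that can practically be mounted behind a bumper.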

Radar solutions for vehicles are limited by size restrictions and mounting constraints, however. Bigger sensors are often difficult to integrate into vehicles, and the advent of electric vehicles has resulted in front grills increasingly being replaced with other design elements, creating new constraints for the all-important front radar.

With its modular approach, distributed aperture radar (DAR) can play a key role in navigating such design and integration challenges. DAR builds on traditional radar technology, combining multiple standard sensors to create a solution that’s greater than the sum of its parts in terms of performance.

Figure 1 DAR combines multiple standard sensors to create a more viable radar solution. Source: NXP

The challenges DAR is addressing

To understand DAR, it’s worth looking at the challenges the technology needs to overcome. Traditional medium-range radar (MRR) sensors feature 12-16 virtual antenna channels. This technology has evolved into high-resolution radars, which provide enhanced performance by integrating far more channels onto a sensor, with the latest production-ready sensors featuring 192 virtual channels.

The next generation of high-resolution sensors might offer 256 virtual channels with innovative antenna designs and software algorithms for substantial performance gains. Alternative massive MIMO (M-MIMO) solutions are about to hit the market packing over 1,000 channels.

Simply integrating thousands of channels is incredibly hardware-intensive and power-hungry. Each channel consumes power and requires more chip and board area, adding cost. As the number of channels increases, the sensor becomes more and more expensive, while the aperture size remains limited by the physical realities of manufacturing and vehicle integration. Moreover, the large size and power consumption of an M-MIMO radar make it difficult to integrate into the vehicle’s front bumper.

Combining multiple radars to increase performance

DAR combines two or three MRR sensors operated coherently to provide enhanced radar resolution. The use of two physically displaced sensors creates a large virtual aperture, enabling azimuth resolution of 0.5 degrees or better, which helps separate closely spaced objects.

Figure 2 DAR enhances performance by integrating far more channels onto a sensor. Source: NXP

The image can be further improved using three sensors, enhancing elevation resolution to less than 1 degree. The higher-resolution radar helps the vehicle navigate complex driving scenarios while recognizing debris and other potential hazards on the road.

The signals from the sensors, based on an RFCMOS radar chip, are fused coherently to produce a significantly richer point cloud than has historically been practical. The fused signal is processed using a radar processor, which is specially developed to support distributed architectures.

Figure 3 Zendar is a software-driven DAR technology. Source: NXP

Zendar is a software-driven DAR technology whose system software is developed for deployment in automobiles. Because the performance improvement comes from software, automakers can leverage low-cost, standard radar sensors yet attain performance comparable to or better than top-of-the-line high-resolution radar counterparts.

How DAR compares to M-MIMO radars

M-MIMO is an alternative high-resolution radar solution that embraces the more traditional radar design paradigm, which is to use more hardware and more channels when building a radar system. M-MIMO radars feature between 1,000 and 2,000 channels, which is many multiples more than the current generation of high-resolution sensors. This helps to deliver increased point density, and the ability to sense data from concurrent sensor transmissions.

The resolution and accuracy performance of radar are limited by the aperture size of the sensor; however, M-MIMO radars with 1,500 channels have apertures that are comparable in size to high-resolution radar sensors with 192 channels. The aperture itself is limited by the sensor size, which is capped by manufacturing and packaging constraints, along with size and weight specifications.

As a result, even though M-MIMO solutions can offer more channels, DAR systems can outperform M-MIMO radars on angular resolution and accuracy performance because their aperture is not limited by sensor size. This offers significant additional integration flexibility for OEMs.

M-MIMO solutions are expensive because they use highly specialized and complex hardware to improve radar performance. The cost of M-MIMO systems and their inherently unscalable hardware-centric design make them impractical for everything but niche high-end vehicles.

Such solutions are also power-hungry due to significantly increased hardware channels and processing requirements, which drive expensive cooling measures to manage the thermal design of the radar, which in turn, creates additional design and integration challenges.

More efficient, cost-effective solution

DAR has the potential to revolutionize ADAS and autonomous driving accessibility by using simple, efficient, and considerably more affordable hardware that makes it easy for OEMs to scale ADAS functionality across vehicle ranges.

Coherent combining of distributed radar is the only radar design approach where aperture size is not constrained by hardware, enabling an angular resolution lower than 0.5 degrees at significantly lower power dissipation. This is simply not possible in a large single sensor with thousands of antennas, and it’s particularly relevant considering OEM challenges with the proliferation of electric vehicles and the evolution of car design.

DAR’s high resolution helps it differentiate between roadside infrastructure, objects, and stationary people or animals. It provides a higher probability of detection for debris on the road, which is essential for avoiding accidents, and it can detect cars up to 350 m away—a substantial increase in detection range compared to current-generation radar solutions.

Figure 4 DAR’s high resolution provides a higher probability of detection for debris on the road. Source: NXP

Leveraging the significant detection range extension enabled by an RFCMOS radar chip, DAR also provides the ability to separate two very low radar cross section (RCS) objects such as cyclists, beyond 240 m, while conventional solutions start to fail around 100 m.

Simpler two-sensor DAR solutions can be used to enable more effective ACC and AEB systems for mainstream vehicles, with safety improvements helping OEMs to pass increasingly stringent NCAP requirements.

Perhaps most importantly for OEMs, DAR is a particularly cost-effective solution. The component sensors benefit from economies of scale, and OEMs can achieve higher autonomy levels by simply adding another sensor to the system, rather than resorting to complex hardware such as LiDAR or high-channel-count radar.

Because the technology relies on existing sensors, it’s also much more mature. Current ADAS systems are not fully reliable—they can disengage suddenly or find themselves unable to handle driving situations that require high-resolution radar to safely understand, plan and respond. As a result, drivers must stay ready to react and take over control of the vehicle at any moment. The improvements offered by DAR will enable ADAS systems to be more capable, more reliable, and to demand less human intervention.

Changing the future of driving

DAR’s effectiveness and reliability will help carmakers deliver enhanced ADAS and autonomous driving solutions that are more reliable than current offerings. With DAR, carmakers will be able to develop driving automation that is both safer and provides more comfortable experiences for drivers and their passengers.

For a new technology, DAR is already particularly robust, as it relies on mainstream radar sensors that have already been used in millions of cars over the past few years. As for the future, ADAS using DAR will become more trusted in the market as these systems provide comprehensive and comfortable assisted driving experiences at more affordable prices.

Karthik Ramesh is marketing director at NXP Semiconductors.


The post ADAS and autonomous vehicles with distributed aperture radar appeared first on EDN.

Pulsus Is a Breakthrough for PiezoMEMS Devices

ELE Times - Fri, 08/09/2024 - 12:30

Courtesy: Lam Research

  • The tool enables the deposition of high-quality, highly scandium-doped AlScN films
  • Features include dual-chamber configuration, degas, preclean, target library, precise laser scanning, and more

In this post, we explain how the Pulsus system works, and how it can achieve superior film quality and performance compared to conventional technologies.

PiezoMEMS devices are microelectromechanical systems that use piezoelectric materials to convert electrical energy into mechanical motion, or vice versa. They have applications in a wide range of fields, including sensors, actuators, microphones, speakers, filters, switches, and energy harvesters.

PiezoMEMS devices require high-quality thin films of piezoelectric materials, such as aluminum scandium nitride (AlScN), to achieve optimal performance. Conventional deposition technologies—think sputtering or chemical vapor deposition—face challenges in producing AlScN films with desired properties, such as composition, thickness, stress, and uniformity. These obstacles limit both the scalability and functionality of piezoMEMS devices.

Revolutionary Tech 

To help overcome these challenges, Lam Research recently introduced Pulsus, a pulsed laser deposition (PLD) system that we hope will revolutionize the world of piezoMEMS applications. The addition of Pulsus PLD to the Lam portfolio further expands our comprehensive range of deposition, etch and single wafer clean products focused on specialty technologies and demonstrates Lam’s continuous innovation in this sector.

Pulsus is a PLD process module that has been optimized and integrated on Lam’s production-proven 2300 platform. It was developed to enable the deposition of high-quality AlScN films, which are essential to produce piezoMEMS devices.

A key benefit of the Pulsus system is its ability to deposit multi-element thin films, like highly scandium-doped AlScN. The intrinsic high plasma density—in combination with pulsed growth—creates the conditions to stabilize the elements in the same ratio as they arrive from the target. This control is essential for depositing materials where the functionality of the film is driven by the precise composition of the elements.

Plasma, Lasers 

Local plasma allows for high local control of film specifications across the wafer, such as thickness and local in-film stress. Pulsus can adjust deposition settings while the plasma “hovers” over the wafer surface. This local tuning of thickness and stress yields the high uniformities our customers are asking for. And because the plasma is generated locally, Pulsus uses targets that are much smaller than those typically found in PVD systems. Pulsus can exchange these smaller targets without breaking vacuum through a target exchange module—the target library.

Pulsus uses a pulsed high-power laser to ablate a target material, in this case AlScN, and create a plasma plume. The plume expands and impinges on a substrate, where it forms a thin film.

Pulsus has a fast and precise laser optical path which, in combination with the target scanning mechanism, allows for uniform and controlled ablation of the target material. The system provides tight control of plasma-plume generation, wafer temperature and pressure to achieve the desired film composition and stoichiometry.

By combining these features, Pulsus can produce high-quality films with superior performance for piezoMEMS devices. Pulsus can achieve excellent composition control, with low variation of the scandium (Sc) content across the wafer and within individual devices. It also delivers high film uniformity, with low WiW (within-wafer) and wafer-to-wafer (WtW) variation of the film thickness and stress.

Breakthrough Technology 

Pulsus is a breakthrough technology for AlScN deposition, which can improve film quality and performance for piezoMEMS applications. In addition, Pulsus has the potential to enhance the functionality and scalability of piezoMEMS devices. The Pulsus technology deposits AlScN films with very high Sc concentration, resulting in high piezoelectric coefficients, which drive higher device sensitivity and output. These films feature tunable stress states to enable the design of different device configurations and shapes.

Pulsus is currently in use on 200 mm wafers and is planned to expand to 300 mm wafers in the future—a move that has the potential to increase the productivity and yield of piezoMEMS devices.

The post Pulsus Is a Breakthrough for PiezoMEMS Devices appeared first on ELE Times.

Navitas’s Q2 revenue and gross margin at higher end of guidance

Semiconductor today - Fri, 08/09/2024 - 10:59
For second-quarter 2024, gallium nitride (GaN) power IC and silicon carbide (SiC) technology firm Navitas Semiconductor of Torrance, CA, USA has reported revenue of $20.5m, down 12.6% on $23.2m last quarter but up 13% on $18.1m a year ago, and at the top of the $20m±$0.5m guidance range. Revenue for first-half 2024 was up nearly 40% year-on-year...

4K and beyond: Trends that are shaping India’s home projector market

ELE Times - Fri, 08/09/2024 - 08:57

Sushil Motwani, founder of Aytexcel Pvt Ltd, also evaluates the change in customer preferences that is driving the growth of the home entertainment segment

Sushil Motwani, Founder of Aytexcel Pvt. Ltd. and Official India Representative of Formovie

Recent news reports indicate that a few leading companies in the home entertainment industry are in discussions with major production studios to secure 8K-resolution content, which offers extremely high-definition video quality. This means the availability of 8K content is on the verge of becoming the norm. For the modern consumer looking for the best visual experience, this is an exciting prospect.


Even though the availability of 8K content is currently minimal, many projectors boosted by technologies like Artificial Intelligence (AI) can upscale 4K content to near-8K quality. While this cannot match the true quality of native 8K, improved versions are expected in the coming years.

In the case of 4K and beyond, devices like laser projectors are continually evolving to match user preferences. Until the COVID-19 pandemic, laser projectors were mainly used for business presentations, in the education sector and at screening centres. However, with the rise of more OTT platforms and the availability of 4K content, there has been a huge demand for home theatres, where projector screens have replaced traditional TVs.

According to Statista, the number of households in India using home entertainment systems, such as home theatres, projectors and advanced TVs, is expected to reach 26.2 million by 2028. Revenue in this segment is projected to grow at a compound annual growth rate (CAGR) of 3.70 per cent, resulting in an estimated market volume of US$0.7 billion by 2028.

So, what are the key trends driving the home projector market in India? Visual quality is definitely one of them. Modern consumers demand upgraded display technologies like the Advanced Laser Phosphor Display® (ALPD). This innovative display combines laser-excited fluorescent materials with multi-colour lasers, resulting in a smooth and vividly coloured display, superior to regular projectors.

Multifunctionality is another key requirement for gamers. When transitioning from PCs to projector-driven gaming, consumers look for a large screen size, preferably 120 inches and above, high resolution, low input lag, quick refresh rate and excellent detailing and contrast.

With the integration of AI and Machine Learning (ML) tools, manufacturers are developing projectors with more user-friendly features and automatic settings that adjust to surrounding light conditions based on the displayed content. AI also helps improve security features and facilitates personalised user modes, while predictive maintenance makes the devices more intuitive and efficient.

Projectors with a multifaceted interface are also a popular choice. Voice assistance features enable users to connect their large-screen setups with other smart devices. The user experience is enhanced by options such as Alexa or voice commands through Google Assistant using a Google Home device or an Android smartphone. Multiple connectivity options, including HDMI, USB, Bluetooth and Wi-Fi facilitate smooth handling of these devices. Consumers also prefer projectors with native app integrations, like Netflix, to avoid external setups while streaming content.

There is also a desire among users to avoid messy cables and additional devices, which not only affect the convenience of installation but also impact the aesthetics of the interiors. This is why Ultra Short Throw (UST) projectors, which can offer a big screen experience even in small spaces, are emerging as a top choice. Some of these projectors can throw a 100-inch projection with an ultra-short throw distance of just 9 inches from the wall.

And finally, nothing can deliver a true cinematic experience like a dedicated surround sound system. But customers also want to avoid the additional setup of soundbars and subwoofers for enhanced sound. Since most movies are now supported by Dolby Atmos 7.1 sound, the home theatre segment is also looking for similar sound support. Projectors integrated with Dolby Atmos sound, powered by speakers from legendary manufacturers like Bowers & Wilkins, Yamaha, or Wharfedale, are key attractions for movie lovers and gamers.

Buyers are also looking for eye-friendly projectors equipped with features like infrared body sensors and diffuse reflection. The intelligent light-dimming and eye care technologies make their viewing experience more comfortable and reduce eye strain, especially during prolonged sessions like gaming.

The growing popularity of projectors is also attributed to the increasing focus on sustainability. Laser projectors are more energy-efficient than traditional lamp-based projectors, using almost 50 per cent less power, which saves energy and reduces overall environmental impact. They are also compact and built with sustainable and recycled materials, minimising the logistical footprint of shipping them as well as the carbon footprint associated with their operation.

The post 4K and beyond: Trends that are shaping India’s home projector market appeared first on ELE Times.

Memory Leaders Rise to Meet the Storage Challenges of AI

AAC - Fri, 08/09/2024 - 02:00
At this year's Future of Memory and Storage show, Microchip, Micron, and Samsung presented new memory solutions in the age of AI—from SSD controllers to LPDDR5X DRAM.

🎥 The latest technology equipment from Huawei for KPI

Новини - Thu, 08/08/2024 - 22:09

Kyiv Polytechnic has received power equipment from Huawei for the NN IEE institute and the latest equipment for the DATACOM laboratory at the Radio Engineering Faculty (RTF).

Client DIMM chipset reaches 7200 MT/s

EDN Network - Thu, 08/08/2024 - 20:57

A memory interface chipset from Rambus enables DDR5 client CSODIMMs and CUDIMMs to operate at data rates of up to 7200 MT/s. This offering includes a DDR5 client clock driver (CKD) and a serial presence detect (SPD) hub, bringing server-like performance to the client market.

The DDR5 client clock driver, part number DR5CKD1GC0, buffers the clock between the host controller and the DRAMs on DDR5 CUDIMMs and CSODIMMs. It receives up to four differential input clock pairs and supplies up to four differential output clock pairs. The device can operate in single PLL, dual PLL, and PLL bypass modes, supporting clock frequencies from 1600 MHz to 3600 MHz (DDR5-3200 to DDR5-7200). An I2C/I3C sideband bus interface allows device configuration and status monitoring.
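The clock-frequency range quoted above maps directly onto the DDR5 speed grades because DDR transfers data on both clock edges, so the data rate in MT/s is twice the clock in MHz:

```python
def ddr5_data_rate_mts(clock_mhz):
    """DDR transfers twice per clock cycle: data rate (MT/s) = 2 * clock (MHz)."""
    return 2 * clock_mhz

assert ddr5_data_rate_mts(1600) == 3200  # DDR5-3200, low end of the supported range
assert ddr5_data_rate_mts(3600) == 7200  # DDR5-7200, top end of the supported range
```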

Equipped with an internal temperature sensor, the SPD5118-G1B SPD hub senses and reports important data for system configuration and thermal management. The SPD hub contains 1024 bytes of nonvolatile memory arranged as 16 blocks of 64 bytes per block. Each block can be optionally write-protected via software command.

The DR5CKD1GC0 client clock driver is now sampling, while the SPD5118-G1B SPD hub is already in production. To learn more about the DDR5 client DIMM chipset, click here.

Rambus

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Client DIMM chipset reaches 7200 MT/s appeared first on EDN.

Reference design trio covers EV chargers

EDN Network - Thu, 08/08/2024 - 20:57

Microchip has released three flexible and scalable EV charger reference designs for residential and commercial charging applications. These reference designs include a single-phase AC residential model, a three-phase AC commercial model that uses the Open Charge Point Protocol (OCPP) and a Wi-Fi SoC, and a three-phase AC commercial model with OCPP and a display.

The reference designs offer complete hardware design files and source code with software stacks that are tested and compliant with communication protocols, such as OCPP. OCPP provides a standard protocol for communication between charging stations and central systems, ensuring interoperability across networks and vendors.

Most of the active components for the reference designs, including the MCU, analog front-end, memory, connectivity, and power conversion, are available from Microchip. This streamlines integration and accelerates time to market for new EV charging systems.

The residential reference design is intended for home charging with a single-phase supply. It supports power up to 7.4 kW with an on-board relay and driver. The design also features an energy metering device with automatic calibration and two Bluetooth LE stacks.

The three-phase commercial reference design, aimed at high-end residential and commercial stations, integrates an OCPP 1.6 stack for network communication and a Wi-Fi SoC for remote management. It supports power up to 22 kW.

Catering to commercial and public stations, the three-phase commercial reference design with OCPP and a TFT touch-screen display supports bidirectional charging up to 22 kW.

To learn more about Microchip’s EV charger reference designs, click here.

Microchip Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Reference design trio covers EV chargers appeared first on EDN.

Infineon expands GaN transistor portfolio

EDN Network - Thu, 08/08/2024 - 20:57

Infineon has launched the CoolGaN Drive family, featuring single switches and half-bridges with integrated drivers for compact, efficient designs. The family includes CoolGaN Drive 700-V G5 single switches, which integrate a transistor and gate driver in PQFN 5×6 and PQFN 6×8 packages. It also offers CoolGaN Drive HB 600-V G5 devices, which combine two transistors with high-side and low-side gate drivers in a LGA 6×8 package.

Depending on the product group, CoolGaN Drive components include a bootstrap diode, loss-free current measurement, and adjustable dV/dt. They also provide overcurrent, overtemperature, and short-circuit protection.

These devices support higher switching frequencies, leading to smaller, more efficient systems with reduced BoM, lower weight, and a smaller carbon footprint. The GaN HEMTs are suitable for longer-range e-bikes, portable power tools, and lighter-weight household appliances, such as vacuums, fans, and hairdryers.

Samples of the half-bridge devices are available now. Single-switch samples will be available starting Q4 2024. For more information about Infineon’s GaN HEMT lineup, click here.

Infineon Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Infineon expands GaN transistor portfolio appeared first on EDN.

5G-enabled SBC packs AI accelerator

EDN Network - Thu, 08/08/2024 - 20:56

Tachyon, a Snapdragon-powered single-board computer (SBC) from Particle, boasts 5G connectivity and an NPU for AI/ML workloads. This credit-card-sized board provides the compute power and connectivity of a midrange smartphone in a Raspberry Pi form factor, supported by Particle’s edge-to-cloud IoT infrastructure.

At the heart of Tachyon is the Qualcomm Snapdragon QCM6490 SoC, featuring an octa-core Kryo CPU, Adreno 643 GPU, and an NPU for AI acceleration at up to 12 TOPS. The chipset also provides upstream Linux support, as well as support for Android 13 and Windows 11. Wireless connectivity includes 5G cellular and Wi-Fi 6E with on-device antennas. Memory and storage comprise 4 GB of RAM and 64 GB of flash.

Tachyon has two USB-C 3.1 connectors. One of these supports DisplayPort Alt Mode, which allows the connection of a USB-C capable monitor (up to 4K). Particle also offers a USB-C hub to add USB ports, HDMI, and a gigabit Ethernet port. The computer board includes a Raspberry Pi-compatible 40-pin connector and support for cameras, displays, and PCIe peripherals connected via ribbon cables.

Tachyon is now available for pre-order on Kickstarter. Early bird prices start at $149. Shipments are expected to begin in January 2025. To learn more about the Tachyon SBC, click here.

Particle



The post 5G-enabled SBC packs AI accelerator appeared first on EDN.

AI tweaks presence sensor accuracy

EDN Network - Thu, 08/08/2024 - 20:56

Joining Aqara’s smart home sensor lineup is the FP1E, which combines mmWave technology and AI algorithms to enable precise human sensing. The FP1E, which supports Zigbee and Matter, detects human presence, even when the person is sitting or lying still.

Useful for various home automation scenarios, the FP1E detects presence up to 6 meters away and monitors rooms up to 50 square meters when ceiling-mounted. It can detect when someone leaves a room within seconds, automatically triggering actions such as turning off the lights or air conditioner.

The FP1E sensor uses AI algorithms to distinguish between relevant movements and false triggers, eliminating interference from small pets, reflections, and electronics to reduce unnecessary alerts. AI learning capabilities enhance detection accuracy through continuous learning, adapting to the user’s home environment over time.

The FP1E presence sensor is now available from Aqara’s Amazon brand stores for $50, as well as from select Aqara retailers worldwide. An Aqara hub, sold separately, is required for operation.

FP1E product page

Aqara



The post AI tweaks presence sensor accuracy appeared first on EDN.

Rohm Announces the ‘Industry’s Smallest’ CMOS Op Amp

AAC - Thu, 08/08/2024 - 20:00
Rohm designed the new 0.88 mm x 0.58 mm rail-to-rail op amp for devices where space is at a premium, such as smartphones and IoT devices.

Lasers4NetZero and NUBURU collaborate on industrial lasers for sustainable technology

Semiconductor today - Thu, 08/08/2024 - 18:17
NUBURU Inc of Centennial, CO, USA — which was founded in 2015 and develops and manufactures high-power industrial blue lasers — has announced a strategic collaboration with Lasers4NetZero, an initiative dedicated to advancing sustainable practices...

Infineon opens first phase of largest SiC power semiconductor fab in Malaysia

Semiconductor today - Thu, 08/08/2024 - 18:10
Infineon Technologies AG of Munich, Germany has officially opened the first phase of a new 200mm-wafer silicon carbide (SiC) power semiconductor fab at its Kulim 3 site in Malaysia. Malaysian Prime Minister The Right Honourable Dato’ Seri Anwar Ibrahim and Chief Minister of the state of Kedah The Right Honourable Dato’ Seri Haji Muhammad Sanusi Haji Mohd Nor joined Infineon’s CEO Jochen Hanebeck to symbolically launch production...

Keep Dpot pseudologarithmic gain control on a leash

EDN Network - Thu, 08/08/2024 - 17:20

A Microchip Inc datasheet covering the MCP4xxx family of digital potentiometers (Dpots) includes an interesting application circuit on page 15. See Figure 1 for a (somewhat edited) version of their Figure 4-5.

Figure 1 Amplifier with Dpot pseudologarithmic gain control that runs away at settings near zero and 256. Source: Microchip

Wow the engineering world with your unique design: Design Ideas Submission Guide

As explained in the Microchip accompanying text, the gain range implemented by this circuit begins to change radically when the control setting of the pot approaches 0 or 256. See Figure 2.

Figure 2 Pseudologarithmic gain goes off the chart for codes below 24 and above 232.

As the datasheet puts it: "As the wiper approaches either terminal, the step size in the gain calculation increases dramatically. This circuit is recommended for gains between 0.1 and 10 V/V."

This is a sound recommendation. Unfortunately, it involves effectively throwing away some 48 of the 256 8-bit pot settings, amounting to nearly 20% of available resolution. Figure 3 suggests another solution.

Figure 3 Add two fixed resistors to bound the gain range to the recommended limits while keeping full 8-bit resolution.

If we add two fixed resistors, each equal to 1/9th of the pot’s resistance, gain will be limited to the recommended two decades without throwing away any codes or resolution to do so.
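The arithmetic behind that claim can be sanity-checked with a quick script. This is a sketch that models the circuit as a simple resistance ratio (top leg over bottom leg of the pot, as set by the 8-bit code); the 10 kΩ pot value is illustrative:

```python
# Pseudologarithmic Dpot gain model: the wiper splits the pot's total
# resistance R into a top leg and a bottom leg, and gain is their ratio.
# Adding a fixed resistor of R/9 in series with each leg bounds the gain.

R = 10_000  # total pot resistance, ohms (value is illustrative)

def gain(code, end_r=0.0, steps=256):
    """Gain for an 8-bit wiper code, optionally with end resistors."""
    top = (steps - code) / steps * R + end_r
    bottom = code / steps * R + end_r
    return top / bottom

# Without end resistors the gain runs away near the ends of travel:
# gain(1) is 255 and gain(255) is roughly 0.004.
# With R/9 end resistors the range is pinned to exactly 0.1 ... 10:
assert abs(gain(0, end_r=R / 9) - 10.0) < 1e-9
assert abs(gain(128, end_r=R / 9) - 1.0) < 1e-9
assert abs(gain(256, end_r=R / 9) - 0.1) < 1e-9
```

At the end codes the fixed R/9 legs dominate, pinning the gain at exactly 10 and 0.1, while mid-travel behavior is essentially unchanged.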

The red curve in Figure 4 shows the result.

Figure 4 Two added resistors limit gain to the recommended 0.1 to 10 range without sacrificing resolution (red curve).

Note that none of this has to do with wiper resistance.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Keep Dpot pseudologarithmic gain control on a leash appeared first on EDN.

🎉 Exhibition of Ukrainian artisan ceramics for the Japanese tea ceremony

Новини - Thu, 08/08/2024 - 14:34

The Ukrainian-Japanese Center of Igor Sikorsky Kyiv Polytechnic Institute invites you to visit the "Five Bowls" exhibition! The exhibition presents artisan ceramics by seven Ukrainian ceramicists created for the Japanese tea ceremony.

An Overview of Oscilloscopes and Their Industrial Uses

ELE Times - Thu, 08/08/2024 - 14:10

Key takeaways:

  • Oscilloscopes are primarily time-domain measurement instruments, displaying how signal characteristics vary over time.
  • However, mixed-domain oscilloscopes give you the best of both worlds by including built-in spectrum analyzers for frequency-domain measurements.
  • Modern oscilloscopes sport extremely sophisticated triggering and analysis features, both on-device and through remote measurement software.

After a multimeter, an oscilloscope is probably the second-most popular instrument on an engineer’s workbench. Oscilloscopes enable you to peer into the internals of electronic devices and monitor the signals they use under the hood.

What do engineers look for when using oscilloscopes? What are some innovations that these instruments have facilitated? What are some key characteristics to look for? Find out the answers to all this and more below.

What is the primary function of oscilloscopes in electronic measurements?

Oscilloscopes enable engineers to measure and visualize the amplitude of an electrical signal over time. This is also the reason they are generally considered time-domain measurement instruments. However, there are mixed-domain oscilloscopes that provide both time-domain (amplitude vs. time) and frequency-domain (power vs. frequency) measurements.

The precise characterization of waveforms is a critical diagnostic tool in every stage of an electronic product lifecycle, including cutting-edge research, prototyping, design, quality assurance, compliance, maintenance, and calibration.

Let’s look at the type of signals that are being tested with oscilloscopes in various industries to facilitate innovations and products.

What signal characteristics are verified using oscilloscopes?

When experienced electronics engineers are troubleshooting issues using oscilloscopes, they are looking for evidence of several ideal characteristics as well as problematic phenomena, depending on the type of signal and the application. Some of the common aspects and phenomena they examine are listed below:

  • Signal shape: The waveform should match the expected shape if the specification requires a square, sawtooth, or sine wave. Any deviations might indicate an issue.
  • Amplitude: The signal levels should remain within the expected range of volts without excessive fluctuations.
  • Frequency or period: The frequency or period of the signal should always remain within specified limits. Deviations from the expected frequency can lead to synchronization problems in communication and control systems.
  • Rise and fall times: For digital signals, sharp and consistent rise and fall times are essential for reliable operation. If the rise time is slower than required, it may lead to problems like data corruption, timing errors, and reduced performance in digital circuits. If it’s overly fast, it can lead to increased electromagnetic interference as well as signal integrity issues like ringing and crosstalk.
  • Jitter: Jitter is the deviation of a signal's transitions from their ideal timing. Period jitter is the variation in the duration of individual clock periods. Cycle-to-cycle jitter is the variation in duration between consecutive clock cycles. Phase jitter is the variation in the phase of the signal with respect to a reference clock. Timing jitter is the variation in the timing of signal edges. Low jitter indicates stable signal timing; excessive jitter may cause errors in high-speed digital communication.
  • Phase consistency: In systems with multiple signals, phase consistency between them is critical for proper synchronization.
  • Duty cycle: For pulse-width modulation signals and clock signals, the duty cycle should be as specified.
  • Noise: Noise is any unwanted disturbance that affects a signal’s amplitude, phase, frequency, or other characteristics. It should be minimal and within acceptable limits to avoid interference and degradation of the signal. Too much noise indicates poor signal integrity, possible shielding issues, or noise due to suboptimal power supply. Phase noise can affect the synchronization of communication and clock signals.
  • Harmonics and distortion: For analog signals, low harmonic distortion ensures signal fidelity.
  • Ringing: Ringing refers to oscillations after a signal transition, usually seen in digital circuits, that can lead to errors and signal integrity issues.
  • Crosstalk: Unwanted coupling from adjacent signal lines can appear as unexpected waveforms on the oscilloscope trace.
  • Drift: Changes in signal amplitude or frequency over time are indicators of instability in the power supply or other components.
  • Ground bounce: Variability in the ground potential, often visible as a noisy baseline, can be critical in fast-switching digital circuits.
  • Clipping: If the input signal amplitude exceeds the oscilloscope’s input range, the displayed waveform will be clipped, indicating a need for signal attenuation or a more appropriate input setting on the scope.
  • Direct current (DC) offsets: Unexpected DC offsets can indicate issues with the waveform generation or coupling methods.
  • Aliasing: Aliasing occurs if the oscilloscope sampling rate is too low for the signal frequency, leading to an incorrect representation of the signal.
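Several of these phenomena can be reproduced numerically. As a hedged sketch of aliasing, for instance, a 9 kHz sine sampled at only 10 kSa/s yields exactly the same sample values as a 1 kHz sine (with inverted phase), because the signal folds down to |f − fs| = 1 kHz:

```python
import math

fs = 10_000       # sample rate, Sa/s: well below 2 x 9 kHz (Nyquist)
f_signal = 9_000  # actual signal frequency, Hz
f_alias = abs(f_signal - fs)  # folded alias frequency: 1 kHz

# Samples of the real 9 kHz tone taken at 10 kSa/s ...
samples_real = [math.sin(2 * math.pi * f_signal * n / fs) for n in range(50)]
# ... match samples of a 1 kHz tone with inverted sign (180 deg phase):
samples_alias = [math.sin(2 * math.pi * f_alias * n / fs) for n in range(50)]

assert all(abs(a + b) < 1e-9 for a, b in zip(samples_real, samples_alias))
```

On a scope display these two signals would be indistinguishable, which is why the sampling-rate rules discussed later matter.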
What types of waveforms and signals can be analyzed using an oscilloscope?

Oscilloscopes are used to verify a variety of analog signals and digital signals in many industries as explained below.

5G and 6G telecom

Figure 1: A Keysight Infiniium UXR-series real-time oscilloscope

The radio frequency (RF) signals used in telecom systems and devices must strictly adhere to specifications for optimum performance as well as regulatory compliance.

Some examples of oscilloscope use in this domain include:

  • Infiniium UXR-B series real-time oscilloscopes (RTOs) for characterizing 5G and 6G systems, including phased-array antenna transceivers and mmWave wideband analysis, measuring frequencies as high as 110 gigahertz (GHz) with bandwidths of as much as 5 GHz
  • development and verification of 41-GHz power amplifier chips for 5G New Radio applications
  • qualifying a 6G 100 gigabits-per-second (Gbps) 300GHz (sub-terahertz) wireless data link using a 70 GHz UXR0704B Infiniium UXR-Series RTO
Photonics and fiber optics

Oscilloscopes are extensively employed for functional and compliance testing of optical and electrical transceivers used in high-speed data center networks.

Some of the use cases are listed below:

  • Oscilloscopes, with the help of optical-to-electrical adaptors, verify characteristics like 4-level pulse-amplitude modulation (PAM4) of 400G high-speed optical networks.
  • Oscilloscopes test the conformance of 400G/800G electrical data center transceivers with the Institute of Electrical and Electronics Engineers (IEEE) 802.3ck and the Optical Internetworking Forum's (OIF) OIF-CEI-5.0 specifications.
  • Real-time oscilloscopes like the UXR-B are used to evaluate the forward error correction performance of high-speed optical network links.
Digital interfaces of consumer electronics

Oscilloscopes and arbitrary waveform generators are used together for debugging and automated testing of high-speed digital interfaces like:

  • Wi-Fi 7 networking standard
  • universal serial bus (USB)
  • mobile industry processor interface (MIPI) standards
  • peripheral component interconnect express (PCIe) buses
  • high-definition multimedia interface (HDMI)

They are also being used for testing general-purpose digital interfaces like the inter-integrated circuit (I2C), the serial peripheral interface (SPI), and more.

Automotive radars and in-vehicle networks

Figure 2: Integrated protocol decoders for automotive and other digital signals

Oscilloscopes are used for validating automotive mmWave radar chips. Additionally, oscilloscopes are extensively used for verifying automotive in-vehicle network signals like:

  • automotive Ethernet
  • controller area network (CAN)
  • FlexRay
  • local interconnect network (LIN)
Aerospace and defense

Radars for aerospace and defense uses are validated using instruments like the UXR-series oscilloscopes.

They are also used for ensuring that data communications comply with standards like the MIL-STD 1553 and ARINC 429.

Space

Oscilloscopes are being used for developing 2.65 Gbps high-speed data links to satellites.

How does an oscilloscope visually represent electrical signals?

Figure 3: Schematic of an oscilloscope

An oscilloscope’s display panel consists of a two-dimensional resizable digital grid. The horizontal X-axis represents the time base for the signal, while the vertical Y-axis represents signal amplitude in volts.

Each segment of an axis is called a division (or div). Control knobs on the oscilloscope allow the user to change the magnitude of volts or time that each div represents.

Figure 4: Visualizing a signal on an oscilloscope

Increasing this magnitude on the X-axis means more seconds or milliseconds per division, so you can view a longer capture of the signal, effectively zooming out. Similarly, by reducing the magnitude on the X-axis, you can zoom into the signal to see finer details. The maximum zoom depends on the oscilloscope's sampling rate. It's often possible to zoom in to nanosecond levels on modern oscilloscopes since they have sampling rates of several gigasamples per second.

Similarly, you can zoom in or out on the Y-axis to examine finer details of changes in amplitude.
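The time-base arithmetic works out as follows; the sketch below uses a typical 10-division grid and illustrative settings, not the specs of any particular scope:

```python
# Relationship between the time-base setting, the visible window, and
# the sampling limit. All values here are illustrative.

divisions = 10         # horizontal divisions on the display grid
time_per_div = 1e-6    # time-base knob setting: 1 us per division
sample_rate = 5e9      # 5 GSa/s (illustrative)

window = divisions * time_per_div         # visible capture: 10 us
sample_interval = 1 / sample_rate         # 200 ps between samples
samples_on_screen = window * sample_rate  # points across the screen

assert abs(window - 1e-5) < 1e-12
assert abs(sample_interval - 200e-12) < 1e-15
assert abs(samples_on_screen - 50_000) < 1e-3
```

Turning the knob to fewer seconds per division shrinks the window; the useful limit is reached when the window approaches the sample interval.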

What are the various types of oscilloscopes?

Figure 5: Waveform acquisition using an equivalent time sampling oscilloscope

Some of the common types of oscilloscopes are:

  • Digital storage oscilloscopes (DSOs): They capture and store digital representations of analog signals, allowing for detailed analysis and post-processing. All modern scopes, including the sub-types below, are DSOs. The term differentiates them from older analog scopes that showed waveforms by firing an electron beam from a cathode ray tube (CRT) onto a phosphor-coated screen to make it glow.
  • Mixed-signal oscilloscopes (MSOs): They integrate both analog and digital channels, enabling simultaneous observation of analog signals and digital logic states. They’re useful for use cases like monitoring power management chips.
  • Mixed-domain oscilloscopes (MDOs): They combine normal time-domain oscilloscope functions with a built-in spectrum analyzer, allowing for time-correlated viewing of time-domain and frequency-domain signals.
  • Real-time oscilloscopes: They capture and process a waveform in real time as it happens, making them suitable for non-repetitive and transient signal analysis.
  • Equivalent time oscilloscopes: Equivalent time or sampling oscilloscopes are designed to capture high-frequency or fast repetitive signals by reconstructing them using equivalent time sampling. They sample a repetitive input signal at a slightly different point of time during each repetition. By piecing these samples together, they can reconstruct an accurate representation of the waveform, even one that is very high frequency.
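The equivalent-time idea is simple enough to sketch in a few lines. In this illustrative model, each repetition of a 2 GHz signal contributes a single sample, taken at a trigger delay that grows by 5 ps per acquisition:

```python
import math

# Equivalent-time sampling sketch: a repetitive 2 GHz sine is sampled
# only once per trigger event, but each acquisition is delayed slightly
# more than the last. Stitching the samples together reconstructs one
# period at an effective resolution far beyond the real-time rate.

f = 2e9                    # repetitive signal frequency: 2 GHz
period = 1 / f             # one period: 500 ps
n_points = 100             # samples to collect across one period
step = period / n_points   # incremental trigger delay: 5 ps

reconstruction = []
for k in range(n_points):  # one sample per repetition of the signal
    t = k * step           # delay after trigger for this acquisition
    reconstruction.append(math.sin(2 * math.pi * f * t))

# The stitched record traces a full sine period: peak at t = period / 4.
peak_index = max(range(n_points), key=lambda i: reconstruction[i])
assert peak_index == 25
```

One hundred repetitions yield a 100-point record of one period, i.e., an equivalent-time resolution of 5 ps, even though only one sample is taken per trigger.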
How does an oscilloscope differ from other test and measurement equipment?

Oscilloscopes often complement other instruments like spectrum analyzers and logic analyzers. Some key differences between oscilloscopes and spectrum analyzers include:

  • Purpose: Oscilloscopes show how a signal changes over time by measuring its amplitude. Spectrum analyzers show how the energy of a signal is spread over different frequencies by measuring the power at each frequency.
  • Displayed information: Oscilloscopes show time-related information like rise and fall times, phase shifts, and jitter. Spectrum analyzers show frequency-related information like signal bandwidth, carrier frequency, and harmonics.
  • Uses: Oscilloscopes are extensively used for visualizing signals in real time and near real time. Spectrum analyzers are useful when frequency analysis is critical, such as in radio frequency communications and electromagnetic interference testing.

A mixed-domain oscilloscope combines oscilloscope and spectrum analyzer capabilities in a single instrument with features like fast Fourier transforms (FFT) to convert between the two domains.
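The FFT path of an MDO can be illustrated with a minimal discrete Fourier transform: a time-domain record of a 50 Hz tone (sampled here at an assumed 1 kSa/s) concentrates its energy in the 50 Hz frequency bin:

```python
import cmath
import math

# Minimal DFT illustrating what an MDO's FFT does: take a time-domain
# record and locate the signal's energy in the frequency domain.

fs, n = 1_000, 1_000   # 1 kSa/s record of 1000 points (1 s of signal)
f_tone = 50            # a 50 Hz tone (illustrative)
x = [math.sin(2 * math.pi * f_tone * t / fs) for t in range(n)]

def dft_bin(x, k):
    """Magnitude of DFT bin k (bin spacing = fs / len(x) = 1 Hz here)."""
    return abs(sum(s * cmath.exp(-2j * math.pi * k * t / len(x))
                   for t, s in enumerate(x)))

# The spectrum peaks at the 50 Hz bin, far above a neighboring bin.
assert dft_bin(x, 50) > 100 * dft_bin(x, 53)
```

A real instrument uses a fast Fourier transform rather than this O(n²) sum, but the time-to-frequency mapping is the same.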

Another complementary instrument is a logic analyzer. Both mixed-signal oscilloscopes and logic analyzers are capable of measuring digital signals. But they differ in some important aspects:

  • Analog and digital signals: An MSO can measure both analog and digital signals. However, logic analyzers only measure digital signals.
  • Number of channels: Most oscilloscopes support two to four channels and a few top out around eight. In sharp contrast, logic analyzers can support dozens to hundreds of digital signals.
  • Analysis capabilities: Oscilloscopes provide sophisticated triggering options for capturing complex analog signals. But logic analyzers can keep it relatively simple since they only focus on digital signals.
What are the key specifications to consider when choosing an oscilloscope for a specific application?

Figure 6: A Keysight UXR-B series scope

The most important specifications and features to consider when choosing an oscilloscope include:

  • Bandwidth: For analog signals, the recommended bandwidth is three times or more of the highest sine wave frequency. For digital signals, the ideal bandwidth is five times or more of the highest digital clock rate, measured in hertz (Hz), megahertz (MHz), or GHz.
  • Sample rate: This is the number of times the oscilloscope measures the signal each second. State-of-the-art oscilloscopes, like the UXR series, support up to 256 gigasamples (billion samples) per second, which works out to a measurement roughly every four picoseconds. The sample rate dramatically impacts the signal you see on the display: an incorrect sample rate can produce an inaccurate or distorted representation, and a low sample rate can let errors go undetected because they occur between collected samples. The sample rate should be at least twice the highest frequency of the signal to avoid aliasing, but a rate of 4-5 times the bandwidth is often recommended to precisely capture signal details.
  • Waveform update rate: A higher waveform rate increases the chances of detecting possible glitches and other infrequent events that occur during the blind time between two acquisitions.
  • Number of channels: Most use cases are mixed-signal environments with multiple analog and digital signals. Select an oscilloscope with sufficient channels for critical time-correlated measurements across multiple waveforms.
  • Effective number of bits (ENOB): ENOB says how many bits are truly useful for accurate measurements. Unlike the total analog-to-digital converter (ADC) bits, which can include some bits influenced by noise and errors, ENOB reflects the realistic performance and quality of the oscilloscope’s measurements.
  • Signal-to-noise ratio (SNR): This is the ratio of actual signal information to noise in a measurement. A high SNR is recommended for higher accuracy.
  • Time base accuracy: This tells you the timing accuracy in parts per billion.
  • Memory depth: This is specified as the number of data points that the scope can store in memory. It determines the longest waveforms that can be captured while measuring at the maximum sample rate.
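The rules of thumb above are easy to encode as a quick shortlist check; the candidate scope's figures below are illustrative, not taken from any datasheet:

```python
# Rule-of-thumb checks when shortlisting a scope, using the guidance
# from the text. The candidate's numbers are purely illustrative.

clock = 500e6          # fastest digital clock in the design: 500 MHz
bandwidth = 4e9        # candidate scope bandwidth: 4 GHz
sample_rate = 20e9     # candidate sample rate: 20 GSa/s
memory_depth = 400e6   # candidate memory depth: 400 Mpts

assert bandwidth >= 5 * clock        # digital rule: >= 5x clock rate
assert sample_rate >= 4 * bandwidth  # 4-5x bandwidth recommended

# Memory depth bounds the longest capture at the full sample rate:
max_capture = memory_depth / sample_rate
assert abs(max_capture - 0.02) < 1e-9  # 20 ms at 20 GSa/s
```

Trading sample rate against capture length (or switching to a lower rate for long captures) follows directly from the last relation.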
What trends are emerging in oscilloscope development?

Some emerging trends in oscilloscopes and onboard embedded software are in the areas of signal analysis, automated compliance testing, and protocol decoding capabilities:

Advances in signal analysis include:

  • deep signal integrity analysis for high-speed digital applications
  • advanced statistical analysis of jitter and noise in digital interfaces in the voltage and time domains
  • analysis of high-speed PAM data signals
  • power integrity analysis to understand the effects of alternating or digital signals and DC supplies on each other
  • de-embedding of cables, probes, fixtures, and S-parameters to remove their impacts from measurements for higher accuracy

Automated compliance testing software can automatically check high-speed digital transceivers for compliance with the latest digital interface standards like USB4, MIPI, HDMI, PCIe 7.0, and more.

Comprehensive protocol decoding capabilities enable engineers to understand the digital data of MIPI, USB, automotive protocols, and more in real time.

Measure with the assurance of Keysight oscilloscopes

Figure 7: Keysight Infiniium and InfiniiVision oscilloscopes

This blog introduced several high-level aspects of oscilloscopes. Keysight provides a wide range of state-of-the-art, reliable, and proven oscilloscopes including real-time and equivalent-time scopes for lab use and handheld portable oscilloscopes for field use.

MICHELLE TATE
Product Marketing
Keysight Technologies

The post An Overview of Oscilloscopes and Their Industrial Uses appeared first on ELE Times.

Best Virtual Machine Size for Self-Managed MongoDB on Microsoft Azure

ELE Times - Thu, 08/08/2024 - 13:52

Courtesy: Michał Prostko (Intel) and Izabella Raulin (Intel)

In this post, we explore the performance of MongoDB on Microsoft Azure examining various Virtual Machine (VM) sizes from the D-series as they are recommended for general-purpose needs.

Benchmarks were conducted on the following Linux VMs: Dpsv5, Dasv5, Dasv4, Dsv5, and Dsv4. They have been chosen to represent both the DS-Series v5 and DS-Series v4, showcasing a variety of CPU types. The scenarios included testing instances with 4 vCPUs, 8 vCPUs, and 16 vCPUs to provide comprehensive insights into MongoDB performance and performance-per-dollar across different compute capacities.

Our examination showed that, among instances with the same number of vCPUs, the Dsv5 instances consistently delivered the most favorable performance and the best performance-per-dollar advantage for running MongoDB.

 

MongoDB Leading in NoSQL Ranking

MongoDB stands out as the undisputed leader in the NoSQL database category, as demonstrated by the DB-Engines Ranking. Its closest competitors, Amazon DynamoDB and Databricks, trail significantly in score, so MongoDB can be expected to maintain its leadership position.

MongoDB Adoption in Microsoft Azure

Enterprises utilizing Microsoft Azure can opt for a self-managed MongoDB deployment or leverage the cloud-native MongoDB Atlas service. MongoDB Atlas is a fully managed cloud database service that simplifies the deployment, management, and scaling of MongoDB databases. Naturally, this convenience comes with additional costs, and it imposes restrictions: for example, you cannot choose the instance type the service runs on.

In this study, the deployment of MongoDB through self-managed environments within Azure’s ecosystem was deliberately chosen to retain autonomy and control over Azure’s infrastructure. This approach allowed for comprehensive benchmarking across various instances, providing insights into performance and the total cost of ownership associated only with running these instances.

Methodology

In the investigation into MongoDB’s performance across various Microsoft Azure VMs, the same methodology was followed as in our prior study conducted on the Google Cloud Platform. Below is a recap of the benchmarking procedures along with the tooling information necessary to reproduce the tests.

Benchmarking Software – YCSB

The Yahoo! Cloud Serving Benchmark (YCSB), an open-source benchmarking tool, is a popular benchmark for testing MongoDB’s performance. The most recent release of the YCSB package, version 0.17.0, was used.

The benchmark of MongoDB was conducted using a workload comprising 90% read operations and 10% updates to reflect, in our opinion, the most likely distribution of operations. To carry out a comprehensive measurement and ensure robust testing of system performance, we configured the YCSB utility to populate the MongoDB database with 10 million records and execute up to 10 million operations on the dataset. This was achieved by configuring the recordcount and operationcount properties within YCSB. To maximize CPU utilization on selected instances and minimize the impact of other variables such as disk and network speeds we configured each MongoDB instance with at least 12GB of WiredTiger cache. This ensured that the entire database dataset could be loaded into the internal cache, minimizing the impact of disk access. Furthermore, 64 client threads were set to simulate concurrency. Other YCSB parameters, if not mentioned below, remained as default.

Setup

Each test consisted of a pair of VMs of identical size: one VM running MongoDB v7.0.0, designated as the Server Under Test (SUT), and one VM running YCSB, designated as the load generator. Both VMs ran in the Azure West US region as on-demand instances, and prices from this region were used to calculate the performance-per-dollar indicators.

Scenarios

MongoDB performance on Microsoft Azure was evaluated by testing various Virtual Machines from the D-series, which are part of the general-purpose machine family. These VMs are recommended for their balanced CPU-to-memory ratio and their capability to handle most production workloads, including databases, as per Azure’s documentation.

The objective of the study is to compare performance and performance-per-dollar metrics across different processors for the last generation and its predecessor. Considering that the newer Dasv6 and Dadsv6 series are currently in preview, the v5 generation represents the latest generally available option. We selected five VM sizes that offer a substantively representative cross-section of choices in the general-purpose D-Series spectrum: Dsv5 and Dsv4 powered by Intel Xeon Scalable Processors, Dasv5 and Dasv4 powered by AMD EPYC processors, and Dpsv5 powered by Ampere Altra Arm-based processors. The testing scenarios included instances with 4, 8, and 16 vCPUs.

Challenges in VM type selection on Azure

In Microsoft Azure, a single VM size can be backed by multiple CPU families, so different VMs created under the same VM size can be provisioned on different CPU types. Azure does not provide a way to specify the desired CPU during instance creation, neither through the Azure Portal nor the API; the CPU type can only be determined from within the operating system once the instance is running. Because we required both the SUT and the client instance to have the same CPU type, it took multiple tries to obtain matching pairs. We observed that larger instances (with more vCPUs) tended to receive newer CPU generations more frequently, while smaller instances were more likely to receive older ones. Consequently, for the smaller Dsv5 and Dsv4 instances we never came across VMs with 4th Generation Intel Xeon Scalable Processors.

More details about VM sizes used for testing are provided in Appendix A. For each scenario, a minimum of three runs were conducted. If the results showed variations exceeding 3%, an additional measurement was taken to eliminate outlier cases. This approach ensures the accuracy of the final value, which is derived from the median of these three recorded values.

Results

The measurements were conducted in March 2024, with Linux VMs running Ubuntu 22.04.4 LTS and kernel 6.5.0 in each case. To better illustrate the differences between the individual instance types, normalized values were computed relative to the performance of the Dsv5 instance powered by the 3rd Generation Intel Xeon Scalable Processor. The raw results are shown in Appendix A.

Although both the 16-vCPU Dsv4 and Dsv5 VMs are powered by the 3rd Generation Intel Xeon Scalable Processor 8370C and share the same compute cost of $654.08/month, a discrepancy in MongoDB workload performance is observed, favoring the Dsv5 instance. This difference can be attributed to the fact that the tested 16-vCPU Dsv4, as a representative of the 4th generation of the D-series, is expected to be more aligned with other members of its generation (see Table 1). Analyzing the results for Dasv4 vs. Dasv5 VMs, both powered by the 3rd Generation AMD EPYC 7763v, shows a similar outcome: in each tested case, the Dasv5-series VMs outperformed the Dasv4-series VMs.

Observations:
  • Dsv5 VMs, powered by the 3rd Generation Intel Xeon Scalable Processor, offer both the most favorable performance and the best performance-per-dollar among the instances tested in each scenario (4 vCPUs, 8 vCPUs, and 16 vCPUs).
  • Dasv5 is less expensive than Dsv5 but delivers lower performance, so the Total Cost of Ownership (TCO) favors the Dsv5 instances.
  • Dpsv5 VMs, powered by Ampere Altra Arm-based processors, have the lowest cost among the tested VM sizes but fall behind in performance, resulting in the lowest performance-per-dollar of the tested VMs.
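The performance-per-dollar metric used in these observations can be reproduced as follows. Note that the throughput figures in this sketch are hypothetical placeholders (the study's raw results are in Appendix A); only the shared monthly price is taken from the text:

```python
# Normalized performance and performance-per-dollar relative to the
# Dsv5 baseline, as the study reports them. The throughput numbers are
# HYPOTHETICAL placeholders; only the shared 16-vCPU Dsv4/Dsv5 compute
# cost of $654.08/month comes from the article.

vms = {
    # name: (ops_per_sec, usd_per_month) -- throughputs are illustrative
    "Dsv5": (100_000, 654.08),
    "Dsv4": (90_000, 654.08),
}

baseline_perf, baseline_price = vms["Dsv5"]
results = {}
for name, (perf, price) in vms.items():
    results[name] = {
        "norm_perf": perf / baseline_perf,
        "norm_perf_per_dollar":
            (perf / price) / (baseline_perf / baseline_price),
    }
    print(name, results[name])
```

At equal price, the performance-per-dollar ratio collapses to the performance ratio, which is why the Dsv4/Dsv5 price parity makes the Dsv5 throughput advantage decisive.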
Conclusion

The presented benchmark analysis covers MongoDB performance and performance-per-dollar across 4-vCPU, 8-vCPU, and 16-vCPU instances representing general-purpose VM sizes available on Microsoft Azure and powered by various processor vendors. Results show that among the tested instances, Dsv5 VMs, powered by 3rd Generation Intel Xeon Scalable Processors, provide the best performance for the MongoDB benchmark and lead in performance-per-dollar.

Appendix A

The post Best Virtual Machine Size for Self-Managed MongoDB on Microsoft Azure appeared first on ELE Times.


Subscribe to Кафедра Електронної Інженерії aggregator