EDN Network

Voice of the Engineer

Samsung’s memory chip business: Trouble in paradise?

Mon, 05/27/2024 - 04:00

The week of 20 May 2024 was quite eventful for Samsung’s semiconductor unit, the world’s largest producer of memory chips like DRAMs, SRAMs, NAND flash, and NOR flash. Early in the week, an unexpected changing of the guard at Samsung’s semiconductor business rocked the industry.

When Samsung abruptly replaced its semiconductor business chief Kyung Kye-hyun with DRAM and flash memory veteran Jun Young-hyun, the transition was mostly credited to the “chip crisis” stemming from Samsung’s laggard position in the high bandwidth memory (HBM) business, where SK hynix has become the market leader.

Figure 1 Jun Young-hyun led Samsung’s memory chip business from 2014 to 2017 after working on the development of DRAM and flash memory chips. Source: The Chosun Daily

It’s worth noting that management reshuffles at Samsung are usually announced at the start of the year. However, being seen as a laggard in HBM technology has pushed the memory kingpin into a desperate position, and the appointment of a new chip unit head mostly reflects that sense of crisis at the world’s largest memory chip supplier.

HBM, a customized memory product, has enjoyed explosive growth in artificial intelligence (AI) applications due to its suitability for training AI models like ChatGPT. HBM, where DRAM chips are vertically stacked to save space and reduce power consumption, helps process massive amounts of data produced by complex AI applications.

SK hynix, Samsung’s Korean memory chip rival, produced its first HBM chip in 2013. Since then, it has continuously invested in developing this memory technology while bolstering manufacturing yield. According to media reports, SK hynix’s HBM production capacity is fully booked through 2025.

SK hynix is also the main supplier of HBM chips to Nvidia, which commands nearly 80% of the GPU market for AI applications, an arena where HBM chips are strategically paired with AI processors like GPUs to overcome data-transfer bottlenecks. On the other hand, Samsung, currently catching up on HBM technology, is known to be in the process of qualifying its HBM chips for Nvidia AI processors.

During Nvidia’s annual event, the GPU Technology Conference (GTC), held in March 2024 in San Jose, California, the company’s co-founder and CEO Jensen Huang endorsed Samsung’s HBM3e chips, then going through a verification process at Nvidia, by writing “Jensen Approved” next to Samsung’s 12-layer HBM3e device on display on the GTC 2024 floor.

HBM test at Nvidia

While the start of the week stunned the industry with an unusual reshuffle at the top, the end of the week brought a bigger surprise. According to a Reuters report published on Friday, 24 May, Samsung’s HBM chips failed to pass Nvidia’s tests for pairing with its GPUs due to heat and power consumption issues.

In another report published in The Chosun Daily that day, Professor Kwon Seok-joon of the Department of Chemical Engineering at Sungkyunkwan University said that Samsung has not been able to fully manage quality control of through-silicon vias (TSVs) for packaging HBM memory chips. In other words, achieving high yield when packaging multiple DRAM layers has been challenging. Another insider pointed to reports that the power consumption of Samsung’s HBM3e samples is more than double that of SK hynix’s.

Figure 2 According to the Reuters article, tests of Samsung’s 8-layer and 12-layer HBM3e memory chips failed in April 2024. Source: Samsung Electronics

While Nvidia declined to comment on this story, Samsung was quick to state that the situation has not been concluded and that testing is still ongoing. The South Korean memory chipmaker added that HBM, a specialized memory product, requires optimization through close collaboration with customers. Jeff Kim, head of research at KB Securities, quoted in the Reuters story, acknowledged that while Samsung had expected to pass Nvidia’s tests quickly, a specialized product like HBM could take some time to go through customers’ performance evaluations.

Still, it’s a setback for Samsung that could work to the advantage of SK hynix and Micron, the remaining players in the high-stakes HBM game. Micron, which claims that its HBM3e consumes 30% less power than its competitors’ parts, has announced that its 24-GB, 8-layer HBM3e memory chips will be part of Nvidia’s H200 Tensor Core GPUs, breaking SK hynix’s previous exclusivity as the sole HBM supplier for Nvidia’s AI processors.

A rude awakening?

Samsung’s laggard status in HBM won’t be the only worry for incoming chief Jun. Despite the recovery in memory prices, Samsung’s semiconductor business is lagging in competitiveness on various fronts. According to another Reuters report, Samsung’s high-density DRAM and NAND flash products are no longer ahead of the competition.

Next, the Korean tech heavyweight’s foundry operation is struggling to catch up with market leader TSMC. Samsung’s chip contract-manufacturing business has had difficulty winning big customers, while TSMC remains far ahead in overall market share. Then there is the global AI wave, in which Samsung is still trying to find its place beyond its HBM woes.

Samsung is known for its fierce competitiveness, and the appointment of a new chief for its semiconductor unit signals that it means business. The Korean tech giant faces an uphill battle in catching up in HBM technology, but one thing is for sure: Samsung is no stranger to navigating troubled waters.


Single event upset and recovery

Fri, 05/24/2024 - 16:35

The effects of cosmic rays were once discussed in “Doubled-up MOSFETs”.

The idea was that component redundancy, paired MOSFETs in that case, would allow one MOSFET to keep functioning even if its partner in a switched-mode power supply were disabled from normal switching by a cosmic-ray event, known as a single event upset, or SEU (Figure 1).

Figure 1 An SEU from a cosmic ray can lead to component failure.

However, an SEU doesn’t necessarily have to come from a cosmic ray. CMOS integrated circuits are sometimes seen to latch up for no apparent reason. The latch-up event comes about from internal four-layer structures that look very much like SCRs, which, when triggered, can virtually short-circuit the +Vcc rail pin to ground. Unlike the power MOSFET situation, component redundancy may not be possible. In such a case, SEU recovery may be the answer.

Figure 2 is conceptual, but it is derived from actual circuitry that was used in a more complex design. 

Figure 2 The SEU recovery concept, where the circuitry in green is latch-up prone.

The basic idea is that Q1, Q2, et al. in green represent a latch-up prone integrated circuit, probably CMOS, while V1 et al. in blue represent a latch-up trigger. An RC pair in yellow delays the latch-up recovery process so that the recovery scenario can be more easily seen on the scope, but we will shortly remove that RC pair.

When the IC latches up, it drags down the output of the +5-volt regulator. When that voltage falls below the comparator threshold, +3 volts as shown here, the comparator sends a drive pulse to the power MOSFET, which further lowers the rail voltage to the point where the IC latch cannot be sustained. When the power MOSFET turns off again, the +5-volt regulator output voltage returns to normal.
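
To make the sequence concrete, here is a minimal behavioral sketch of that recovery loop in Python. The 5-V rail and 3-V comparator threshold come from the description above; the latched-rail and latch-release voltages are illustrative assumptions, not values from the actual circuit.

    # Behavioral sketch of the SEU recovery loop; component values are illustrative.
    V_NOMINAL = 5.0    # regulator output, volts (from the text)
    V_THRESHOLD = 3.0  # comparator trip point, volts (from the text)
    V_LATCHED = 2.5    # assumed rail voltage while the IC is latched up
    V_CROWBAR = 0.5    # assumed rail voltage while the MOSFET pulls it down

    def rail_voltage(latched, mosfet_on):
        """Rough model of the +5-V rail under the three operating conditions."""
        if mosfet_on:
            return V_CROWBAR               # MOSFET drags the rail low, breaking the latch
        return V_LATCHED if latched else V_NOMINAL

    latched = True                         # an SEU has just triggered the parasitic SCR
    if rail_voltage(latched, False) < V_THRESHOLD:  # comparator sees the sagging rail...
        rail_voltage(latched, True)                 # ...and pulses the power MOSFET
        latched = False                    # latch current falls below its holding level
    print(rail_voltage(latched, False))    # rail is back to 5.0 V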

If we now remove that RC delay, the scenario proceeds the same way, but in this simulation it all happens too fast for the saturation voltage of the latched-up device to be viewable in the scope display (Figure 3).

Figure 3 SEU recovery with the RC delay removed.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).


Why HBM memory and AI processors are happy together

Fri, 05/24/2024 - 09:18

High bandwidth memory (HBM) chips have become a game changer in artificial intelligence (AI) applications by efficiently handling complex algorithms with high memory requirements. They became a major building block in AI applications by addressing a critical bottleneck: memory bandwidth.

Figure 1 HBM comprises a stack of DRAM chips linked vertically by interconnects called TSVs. The stack of memory chips sits on top of a logic chip that acts as the interface to the processor. Source: Gen AI Experts

Jinhyun Kim, principal engineer at Samsung Electronics’ memory product planning team, acknowledges that the mainstreaming of AI and machine learning (ML) inference has led to the mainstreaming of HBM. But how did this love affair between AI and HBM begin in the first place?

As Jim Handy, principal analyst with Objective Analysis, put it, GPUs and AI accelerators have an unbelievable hunger for bandwidth, and HBM gets them where they want to go. “If you tried doing it with DDR, you’d end up having to have multiple processors instead of just one to do the same job, and the processor cost would end up more than offsetting what you saved in the DRAM.”

DRAM chips struggle to keep pace with the ever-increasing demands of complex AI models, which require massive amounts of data to be processed simultaneously. On the other hand, HBM chips, which offer significantly higher bandwidth than traditional DRAM by employing a 3D stacking architecture, facilitate shorter data paths and faster communication between the processor and memory.

That allows AI applications to train on larger and more complex datasets, which in turn, leads to more accurate and powerful models. Moreover, as a memory interface for 3D-stacked DRAM, HBM uses less power in a form factor that’s significantly smaller than DDR4 or GDDR5 by stacking as many as eight DRAM dies with an optional base die that can include buffer circuitry and test logic.

Next, each new generation of HBM incorporates improvements that coincide with launches of the latest GPUs, CPUs, and FPGAs. For instance, with HBM3, bandwidth jumped to 819 GB/s and maximum density per HBM stack increased to 24 GB to manage larger datasets.
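
As a sanity check on that figure, HBM3’s headline bandwidth follows directly from its wide interface. Here is the arithmetic in Python, assuming the standard HBM3 parameters of a 1,024-bit stack interface running at 6.4 Gb/s per pin (neither value is stated in the article itself):

    # Back-of-the-envelope check of the HBM3 per-stack bandwidth quoted above.
    interface_width_bits = 1024   # HBM3 stack interface width (assumed standard value)
    pin_rate_gbps = 6.4           # Gb/s per data pin for HBM3 (assumed standard value)

    bandwidth_gb_per_s = interface_width_bits * pin_rate_gbps / 8  # bits -> bytes
    print(f"{bandwidth_gb_per_s:.1f} GB/s per stack")              # 819.2 GB/s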

Figure 2 Host devices like GPUs and FPGAs in AI designs have embraced HBM due to their higher bandwidth needs. Source: Micron

The neural networks in AI applications require a significant amount of data both for processing and training, and training sets alone are growing about 10 times annually. That means the need for HBM is likely to grow further.

It’s important to note that the market for HBM chips is still evolving and that HBM chips are not limited to AI applications. These memory chips are increasingly finding sockets in applications serving high-performance computing (HPC) and data centers.


Demo board provides dual-motor control

Thu, 05/23/2024 - 20:03

ST’s demonstration board controls two three-phase brushless motors using an onboard STSPIN32G4 controller with an embedded MCU. The controller’s integrated MCU is based on a 32-bit Arm Cortex-M4 core, which delivers the processing power to manage both motors simultaneously.

The EVSPIN32G4-DUAL demo board can be used for developing industrial and consumer products, ranging from multi-axis factory automation systems to garden and power tools. It is capable of executing complex algorithms, like field-oriented control (FOC), in real time. MCU peripherals support sensored or sensorless FOC, as well as advanced position and torque control algorithms.

Along with the integrated gate driver of the STSPIN32G4 controller, the board employs an additional STDRIVE101 gate driver. The two power stages deliver up to 10 A with a maximum supply voltage of 74 V. Built-in safety features include drain-source voltage monitoring, cross-conduction prevention, several thermal protection mechanisms, and undervoltage lockout.

The EVSPIN32G4-DUAL demo board is available now with a single-unit price of $177.62.

EVSPIN32G4-DUAL product page

STMicroelectronics


Microchip grows rad-tolerant MCU portfolio

Thu, 05/23/2024 - 20:02

Offering high radiation tolerance, the SAMD21RT MCU from Microchip is capable of operating in the harsh environments found in space. The device, which is based on a 32-bit Arm Cortex-M0+ core running at up to 48 MHz, also meets the stringent size and weight constraints critical for space applications.

The SAMD21RT operates over a temperature range of -40°C to +125°C and tolerates up to 50 krads of total ionizing dose (TID) radiation. It also provides single event latch-up (SEL) immunity of up to 78 MeV·cm²/mg. Operating voltage is 3 V to 3.6 V.

Occupying a footprint of just 10×10 mm, the SAMD21RT MCU packs 128 kbytes of flash memory and 16 kbytes of SRAM in its 64-pin plastic or ceramic QFP package. It furnishes multiple peripherals, including a 12-bit ADC with up to 20 channels, a 10-bit DAC, 12-channel DMA controller, two analog comparators, and various timer/counters. To conserve power, the SAMD21RT offers idle and standby sleep modes.

Limited samples of the SAMD21RT microcontroller are available by contacting a Microchip sales representative.

SAMD21RT product page

Microchip Technology 


Dual-channel gate drivers fit IGBT modules

Thu, 05/23/2024 - 20:02

Scale-iFlex XLT plug-and-play dual-channel gate drivers from Power Integrations operate IGBT modules with blocking voltages of up to 2.3 kV. These ready-to-use drivers work with LV100 (Mitsubishi), XHP 2 (Infineon), and equivalent IGBT modules used in wind, energy storage, and solar renewable energy installations.

Each driver board features an electrical interface, a built-in DC/DC power supply, and negative temperature coefficient (NTC) readout for isolated temperature measurement of the power module. According to the manufacturer, NTC data reporting increases reliability and module utilization by as much as 30%. It also reduces hardware complexity, eliminating multiple cables, connectors, and additional isolation circuitry.

The dual-channel gate drivers support three IGBT voltage classes: 1200 V, 1700 V, and 2300 V. They have a maximum switching frequency of 25 kHz and operate over a temperature range of -40°C to +85°C. Output power is 1 W per channel at maximum ambient temperature. Protection features include short circuit, soft shutdown, and undervoltage lockout.

Scale-iFlex XLT gate drivers are now available for sampling.

Scale-iFlex XLT product page

Power Integrations


SiC MOSFETs reside in 7-pin D2Pak

Thu, 05/23/2024 - 20:02

Nexperia now offers 1200-V SiC MOSFETs in 7-pin D2Pak (TO-263-7) plastic packages with on-resistance values of 30 mΩ, 40 mΩ, 60 mΩ, and 80 mΩ. With the release of the NSF0xx120D7A0 series of SiC MOSFETs, the company is addressing the need for high-performance SiC switches in surface-mount packages like the D2Pak-7.

The N-channel devices can be used in various industrial applications, including electric vehicle charging, uninterruptible power supplies, photovoltaic inverters, and motor drives. Nexperia states its process technology ensures that its SiC MOSFETs offer industry-leading temperature stability. The parts’ nominal RDS(ON) value increases by only 38% over an operating temperature range of +25°C to +175°C. In addition, a tight gate-source threshold voltage distribution allows the discrete MOSFETs to offer balanced current-carrying performance when connected in parallel.

The MOSFET’s TO-263 single-ended surface-mount package has 7 leads with a 1.27-mm pitch and occupies a footprint area of 189.2 mm². A Kelvin source pin speeds commutation and improves switching.

For more information about the NSF0xx120D7A0 series of SiC MOSFETs in the TO-263-7 package, click here.

Nexperia


Gate driver duo optimizes GaN FET design

Thu, 05/23/2024 - 20:01

A two-chip set from Allegro delivers isolated gate drive for e-mode GaN FETs in multiple applications and topologies. Comprising the AHV85000 and AHV85040, the pair of ICs is the third product in the company’s high-voltage Power-Thru portfolio, transmitting both the PWM signal and bias power through a single external isolation transformer. This eliminates the need for an external auxiliary bias supply or high-side bootstrap.

Expanding on Allegro’s Power-Thru technology, the combo chipset offers the same benefits found in its existing gate drivers, but relocates the isolation transformer from internal to external. By doing so, the AHV85000 and AHV85040 afford greater design flexibility for isolation, power, and layout, as engineers can choose a transformer based on their design requirements. They are well-suited for use in clean energy applications, such as solar inverters and EV charging, as well as data center power supplies.

The AHV85000 and AHV85040 form the primary-side transmitter and secondary-side receiver of an isolated GaN FET gate driver. Together, they simplify system design and reduce EMI through reduced total common-mode capacitance. The chipset also enables the driving of a floating switch at any location in a switching power topology.

The AHV85000 and AHV85040 are sold as a two-chip set. Each chip comes in a 3×3-mm, 10-pin DFN surface-mount package. The parts are available through Allegro’s distributor network.

AHV85000/40 product page

Allegro Microsystems 


Analog TV transmitter—analog problem

Thu, 05/23/2024 - 16:25

In the late 1980s the television station I worked at was still using an early 1970s transmitter, an RCA TT-50FH (50 kW, Series F, High-band VHF).

The transmitter was made with three cabinets: two 25 kW amplifiers, A and B, on the left and right, and a control cabinet with aural and visual exciters and intermediate power amplifiers (IPAs) in the center. The amplifier outputs were combined externally to produce the full 50 kW (Figure 1).

Figure 1 The TV transmitter was made with three cabinets: two 25 kW amplifiers, A and B, on the left and right, and a control cabinet with aural and visual exciters and intermediate power amplifiers (IPAs) in the center. 

Every four or five months we’d notice intermittent black lines running through the video. Apparently, this had been an ongoing problem for several years, with the problem originating in the A amplifier. The transmitter supervisor brought me out to the transmitter site, and we’d use his standard procedure, as follows:

  • Split the transmitter, so amplifier B fed the antenna and amplifier A fed the dummy load.
  • Slide the IPA chassis out from the center cabinet and remove its top.
  • Turn all of the adjustments on the IPA to a minimum.
  • Follow the IPA procedure in the maintenance manual to set up the IPA for proper operation.
  • Close up the IPA, slide the chassis back in place, and recombine the transmitter amplifiers.

This worked every time, eliminating the black lines for another few months.

After I saw this happen two or three times, I got a little suspicious, especially since the IPA adjustments always ended up exactly where they had started. I asked the transmitter supervisor how he came up with the fix. He learned it from his predecessor, who probably learned it from his predecessor.

This fix didn’t seem right. It had more the feel of a bad connection than of an electronic component failure.

I took a look in the back of the transmitter, at the IPA’s connections. The IPA used a loop-through input, which allows one signal to feed multiple devices. If that’s not necessary, the output is terminated with a 75-ohm resistor matching the characteristic impedance of the coax cable.

In more modern equipment, if you consider the 1980s modern, the loop-through is made with a pair of BNC connectors on a circuit board. In this transmitter, RCA built the device with N-connectors on a bracket. See Figure 2.

Figure 2 The IPA used a loop-through input, which allows one signal to feed multiple devices. RCA built this with N-connectors on a bracket, with the output terminated by a 75-ohm resistor.

When I checked the connections, I found the cables were tight on the chassis-mounted jacks, but the jacks themselves were not. The hex nuts on the rear of the bracket had loosened up over the years, so the ground connection, which depended on the metal bracket, was poor. We tightened the nuts, and the transmitter behaved itself for the rest of its life, well into the 1990s.

So why did the standard procedure fix the problem for a while each time? It didn’t, of course. It was the sliding back and forth of the chassis that was shaking up the cables and connectors and restoring a good ground connection, even if only slightly and only for a while.

Those were the good old days. With analog TV you could see the problem in the video or hear it in the audio. HDTV transmitters, on the other hand, just go dark and silent when there’s a problem. But those are stories for another day.

Robert Yankowitz retired as Chief Engineer at a television station in Boston, Massachusetts, where he had worked for 23 years. Prior to that, he worked for 15 years at a station in Providence, Rhode Island.


Power Tips #129: Driving high voltage silicon FETs in 1000-V flybacks

Wed, 05/22/2024 - 16:04

800-V automotive systems enable higher-performance electric vehicles capable of driving ranges longer than 400 miles on a single charge and charging times as fast as 20 minutes. However, 800-V batteries rarely operate at exactly 800 V; they can go as high as 900 V, with converter input requirements up to 1000 V.

There are a number of power design challenges in 1000-V-type applications, including field-effect transistor (FET) selection and the need for a strong enough gate drive for >1,000-V silicon FETs, which generally have larger gate capacitances than silicon carbide (SiC) FETs. SiC FETs have the advantage of lower total gate charge than silicon FETs with similar parameters; however, SiC often comes with increased cost.

You’ll find silicon FETs used in designs such as the Texas Instruments (TI) 350 V to 1,000 V DC Input, 56 W Flyback Isolated Power Supply Reference Design, which cascodes two 950 V FETs in a 54 W primary-side regulated (PSR) flyback. In lower-power general-purpose bias supplies (<10 W), it is possible to use a single 1,200 V silicon FET in TI’s Triple Output 10W PSR Flyback Reference Design which is the focus of this power tip.

This reference design can be a bias supply for the isolated gate drivers of traction inverters. It includes a wide input (60 V to 1000 V) PSR flyback with three isolated 33 V outputs, 100 mA loads, and uses TI’s UCC28730-Q1 as the controller. Figure 1 shows the UCC28730-Q1 datasheet with a 20-mA minimum drive current.

Figure 1 Gate-drive capability of the UCC28730-Q1 with a 20-mA minimum drive current. Source: Texas Instruments

The challenge is that the 1,200 V silicon FET will have a very large input capacitance (Ciss) of around 1,400 pF at 100 V VDS, which is 4 times more than a similarly rated SiC FET.

With a relatively weak gate drive from the UCC28730-Q1, Equation 1 estimates the primary FET turn-on time to be approximately 840 ns.

Figure 2 shows that as FET gate-to-source capacitance (CGS) and gate-to-drain capacitance (CGD) increase, charging them consumes more of the on-time the primary FET needs to regulate the output voltage of the converter.

Figure 2 FET turn-on and turn-off curves; as CGS and CGD increase, charging them consumes the on-time of the primary FET required to regulate the output voltage of the converter. Source: Texas Instruments

Figure 3 shows the undesirable effect of this by looking at the gate voltage of the UCC28730-Q1 driving the primary FET directly. In this example, it takes approximately 800 ns to completely turn on the FET and 1.5 µs for the gate to reach its nominal voltage. At a 400-V input, the controller is still trying to charge CGD when it decides to turn off the FET. The situation is much worse at 1,000 V, where CGS is still being charged when the turn-off comes. This shows that as the input voltage increases, the controller cannot output a complete on-pulse, and therefore the converter cannot power up to its nominal output voltage.

Figure 3 Gate voltage of UCC28730-Q1 directly driving the primary FET with increasing input voltage. Source: Texas Instruments

To solve this, you can use a simple buffer circuit using two low-cost bipolar junction transistors as shown in Figure 4.

Figure 4 Simple NPN-PNP emitter follower gate-drive circuit. Source: Texas Instruments

Figure 5 shows the gate current waveform of the primary FET and demonstrates the buffer circuit capable of gate drive currents greater than 500 mA.

Figure 5 Gate drive buffer current waveform of PMP23431, demonstrating that the buffer circuit is capable of gate drive current greater than 500 mA. Source: Texas Instruments

As shown in Equation 2, this reduces the charge time to 33 ns, which is 25 times faster than using the gate drive of the controller alone.
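
Both equations reduce to the familiar charge-time relation t = Ciss × ΔVGS / Ig. A quick Python sketch reproduces the article’s numbers; the ~12-V gate swing is an assumption inferred from the stated results, not a value given in the text.

    # Gate charge time t = Ciss * dVgs / Ig for the 1,200-V silicon FET.
    C_ISS = 1400e-12   # F, from the article (~1,400 pF at 100 V VDS)
    DV_GS = 12.0       # V, assumed gate swing that reproduces the 840-ns figure

    for label, i_gate in [("controller alone (20 mA)", 20e-3),
                          ("with NPN-PNP buffer (500 mA)", 500e-3)]:
        t_on = C_ISS * DV_GS / i_gate
        print(f"{label}: {t_on * 1e9:.0f} ns")
    # Prints 840 ns vs. 34 ns: roughly the 25x (500 mA / 20 mA) speedup cited above.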

A PSR flyback architecture typically requires a minimum load current to stay within regulation. This helps increase the on-time, and the converter can now power up with its minimum load at 1000 V, as shown in Figure 6. The converter’s overall performance is documented in the PMP23431 test report, and Figure 7 shows the switching waveform with constant pulses on the primary FET. At 1,000 V with the minimum load requirement, the on-time is approximately 1 µs. Without this buffer circuit, the converter would not power up at a 1,000-V input.

Figure 6 Converter startup with minimum load requirement with a 1000-V input. Source: Texas Instruments

Figure 7 Primary FET switching waveform of PMP23431 at 1000 V input. Source: Texas Instruments

In high voltage applications up to 1,000 V, the duty cycle can be quite small—in the hundreds of nanoseconds. A high-voltage silicon FET can be the limiting factor to achieving a well-regulated output due to its high gate capacitances. This power tip introduced PMP23431 and a simple buffer circuit to quickly charge the gate capacitances to support the lower on-times of these high voltage systems.

Darwin Fernandez is a systems manager in the Automotive Power Design Services team at Texas Instruments. He has been at TI for 14 years and has previously supported several power product lines as an applications engineer designing buck, flyback, and active clamp forward converters. He has a BSEE and MSEE from California Polytechnic State University, San Luis Obispo.

 


Additional Resources

  1. Read the application note, “Practical Considerations in High-Performance MOSFET, IGBT and MCT Gate-Drive Circuits.”
  2. Check out the application report, “Fundamentals of MOSFET and IGBT Gate Driver Circuits.”
  3. Download the PMP41009 reference design.

Unleashing the potential of industrial and commercial IoT

Wed, 05/22/2024 - 10:58

We’re in the fourth industrial revolution, commonly referred to as Industry 4.0, where advanced technologies are reshaping the landscape of manufacturing and business. The idea of machines communicating with each other, robots milling around, and factories practically running themselves no longer seems like a sci-fi concept.

In the fourth industrial revolution, digital and physical worlds are converging to improve the industrial and commercial (I&C) industries. The Internet of Things (IoT) is a critical player in this revolution, disrupting every facet of the global economy and laying the foundation for a comprehensive overhaul of production, management, and governance systems.

With an estimated annual economic impact ranging from $1.6 trillion to $4.9 trillion by 2025 for factories and retail settings, the rising potential of IoT is becoming increasingly evident as advancements in connectivity open new doors for innovative use cases across the I&C industries.

Despite the rapid advancements in wireless network technologies, companies have been held back from achieving their maximum efficiency and productivity gains due to several operational challenges. Many businesses in industrial and commercial settings face substantial downtime, delayed production, high operating costs, low energy efficiency, and inefficient processes.

So, how can we leverage Industry 4.0’s digital transformation to increase productivity, reduce downtime, lower costs, and drive future growth? The answer may lie in harnessing the power of the I&C IoT.

What’s industrial and commercial IoT?

The Industrial Internet of Things (IIoT) involves the integration of smart technologies and sensors in the industrial sector, enabling the collection and analysis of data to optimize processes, improve worker safety, enhance energy efficiency, improve productivity, and predict potential issues. The IIoT is indispensable for navigating global competition, striking a balance between capturing new business and ensuring sustainable operations.

Commercial IoT encompasses the application of interconnected devices and technologies in the commercial business domain, where the integration of digital solutions aims to enhance retail efficiency, reduce labor costs, and create a seamless omnichannel experience. These advancements in smart retail technology are helping transform traditional business models and increase overall profitability for companies across the globe.

Figure 1 IoT technology will contribute to the growth of commercial industries. Source: Silicon Labs

While such devices may sound out of reach, many exist and are used today for a growing number of I&C applications. In the commercial industry, facility managers seeking to upgrade their estate cost-effectively often use commercial lighting devices like the INGY smart lighting control system that incorporates sensors into luminaires to enable a variety of smart building services without needing an additional infrastructure investment.

Retailers are also adopting electronic shelf label (ESL) devices like the RAINUS InforTab, which manage store-wide price automation and reduce operating costs by eliminating hours of tedious manual work. Additionally, asset tracking devices like the Zliide Intelligent Tag can provide fashion retailers with extremely precise location information on how their merchandise moves, helping improve the user experience.

Of course, the commercial industry is not the only application for asset-tracking devices. Machine manufacturers and contractors can also use asset tracking devices like the Trackunit Kin tag that helps connect the entire construction fleet through one simple platform, reducing downtime and costs associated with asset management.

Manufacturers also use smart factory automation devices like CoreTigo’s IO-Link that provide cable-grade, fast, and scalable connectivity for millions of sensors, actuators, and devices at any site worldwide, enabling real-time control and monitoring across the entire operational technology environment.

Likewise, plant and facility managers seeking a comprehensive view of their operations can use predictive maintenance devices such as the Waites plug-and-play online monitoring system to provide a range of sensors and gateways for monitoring and analyzing data, which streamlines device setup and installation.

Benefits of industrial and commercial IoT devices

The growing use of I&C IoT devices could help businesses in the commercial industry make well-informed, real-time decisions, have better access control, and develop more intelligent, efficient, and secure IoT applications. For example, before advanced I&C IoT technology, someone at a retail store had to go out and change the tags on the store shelves if the pricing changed.

Now, with electronic shelf labels, retailers can provide real-time updates. Additionally, by using connected devices and sensors to collect data about a wide variety of business systems, companies can automate processes and improve supply chain management efficiency.

For example, a large retail chain operating hundreds of stores across the country could integrate smart shelf sensors, connected delivery trucks, and a warehouse management system to monitor goods moving through the supply chain in real time. Insights from this data would enable retailers to reduce stockouts, optimize deliveries, and improve warehouse efficiency.

Businesses are also improving control by adopting commercial lighting solutions and wireless access points. With these solutions, businesses can enable indoor location services to track assets and consumer behavior and speed up click-and-collect through shop navigation.

I&C IoT devices also have the potential to positively impact the industrial segment by helping businesses optimize operational efficiency, routing, and scheduling. Before predictive maintenance devices, manufacturers had to halt their production line for hours or days if a pump failed unexpectedly. The repercussions were substantial, since every hour of unplanned machine downtime can cost manufacturers up to $260,000 in lost production.

Figure 2 IIoT is expected to play a critical role in reshaping industrial automation. Source: Silicon Labs

Now, with predictive maintenance systems, manufacturers can identify early-stage failures. Moreover, recent advancements in edge computing have unlocked new capabilities for industrial IoT devices, enabling efficient communication and data management.

Machine learning (ML) integration into edge devices transforms data analysis, providing real-time insights for predictive maintenance, anomaly detection, and automated decision-making. This shift is particularly relevant in smart metering, where wireless connectivity allows for comprehensive monitoring, reducing the need for human intervention.

Challenges for industrial and commercial IoT devices

I&C IoT devices have progressed significantly due to the widespread adoption of wireless network technologies, the integration of edge computing, the implementation of predictive maintenance systems, and the expansion of remote monitoring and control capabilities.

Despite all the benefits that I&C IoT devices could bring to consumers, these technologies are not being utilized to their fullest potential in I&C settings today. This is because four significant challenges stand in the way of mass implementation:

  1. Interoperability and reliability

The fragmented landscape of proprietary IoT ecosystems is a significant hurdle for industrial and commercial industry adoption, and solution providers are addressing this challenge by developing multi-protocol hardware and software solutions.

Multi-protocol capabilities are especially important for I&C IoT devices, as reliable connectivity ensures seamless data flow and process optimization in factories, guarantees reliable connectivity across vast retail spaces, and contributes to consistent sales and operational efficiency. Due to the long product lifecycle, it is also critical for the devices to be compatible with legacy protocols and have the capability to upgrade to future standards as needed.

  2. Security and privacy

Security and privacy concerns have been major roadblocks in the growth of industrial and commercial IoT, with potential breaches jeopardizing not only data but also entire networks and brand reputations. Thankfully, solution providers are stepping in to equip developers with powerful tools. Secure wireless mesh technologies offer robust defenses against attacks, while data encryption at the chip level paves the way for a future of trusted devices.

This foundation of trust, built by prioritizing cybersecurity from the start and choosing reliable suppliers, is crucial for unlocking the full potential of the next generation of IoT. By proactively shaping their environment and incorporating risk-management strategies, companies can confidently unlock the vast opportunities that lie ahead in the connected world.

  3. Scalability of networks

Creating large-scale networks with 100,000+ devices is a critical requirement for several industrial and commercial applications such as ESL, street lighting, and smart meters. In addition, these networks may be indoors with significant RF interference or span over a large distance in difficult environments. This requires significant investments in testing large networks to ensure the robustness and reliability of operations in different environments.

  4. User and developer experience

Bridging the gap between ambition and reality in industrial and commercial IoT rests on two crucial pillars: improving the user experience and improving the developer experience. If this market is going to scale and deploy at the level we know it needs to, we need solutions that simplify deployment and management for users while empowering developers to build and scale applications with greater speed and efficiency.

Initiatives like Matter and Amazon Sidewalk are paving the way for easier wireless connectivity and edge computing, but further strides are needed. Solution providers can play a vital role by offering pre-built code and edge-based inference capabilities, accelerating development cycles, and propelling the industry toward its true potential.

Looking ahead

As the industrial and commercial IoT landscape evolves, we are primed for a dynamic and interconnected future. The industry is poised for continued growth and innovation, with advancements in wireless connectivity, edge computing, AI, and ML driving further progress in industrial automation, supply chain optimization, predictive maintenance systems, and the expansion of remote monitoring and control capabilities.

The semiconductor industry has been quietly helping the world advance with solutions that will help set up the standards of tomorrow and enable an entire ecosystem to become interoperable.

Ross Sabolcik is senior VP and GM of industrial and commercial IoT products at Silicon Labs.


Looking inside a laser measurer

Tue, 05/21/2024 - 15:00

Tape measures are right up there with uncooperative-coiling (and -uncoiling) extension cords and garden hoses on the list of “things guaranteed to raise my blood pressure”. They don’t work reliably (thanks to gravity) beyond my arm span unless there’s something flat underneath them for the entire distance they’re measuring. Metal ones don’t do well with curved surfaces, while fabric ones are even more gravity-vulnerable. Speaking of which, the only way to keep a fabric one neatly spooled when not in use is with a rubber band, which will inevitably slip off and leave a mess in whatever drawer you’re storing it in. And when metal ones auto-spool post-use, they inevitably slap, scratch, or otherwise maim your hand (or some other body part) enroute.

All of which explains why, when I saw Dremel’s HSLM-01 3-in-1 Digital Measurement Tool on sale at Woot! for $19.99 late last October, I jumped for joy and jumped on the deal. I ended up buying three of ‘em: one for my brother-in-law as a Christmas present, another for me, and the third one for teardown for all of you:

The labeling in this additional stock photo might be helpful in explaining what you just saw:

Here’s a more meaningful-info example of the base unit’s display in action:

The default laser configuration is claimed to work reliably for more than five dozen feet, with +/- 1/8-inch accuracy:

while the Wheel Adapter enables measuring curved surfaces:

and the Tape Adapter (yes, I didn’t completely escape tape, but at least it’s optional and still makes sense in some situations) is more accurate for assessing round-trip circumference (and yes, they spelled “circumference” wrong):

I mean…look how happy this guy is with his!

Apologies: I dilly-dally and digress. Let’s get to tearing down, shall we? Here’s our victim, beginning with the obligatory outer box shots:

And here’s what the inside stuff looks like:

Here’s part of the literature suite, along with the included two AAA batteries which I’ll put to good use elsewhere:

Technology licensed from Arm and STMicroelectronics? Now that’s intriguing! Hold that thought.

Here’s the remainder of the paper:

And here’s the laser measurer and its two-accessory posse:

This snapshot of the top of the device, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

is as good a time as any to conceptually explain how these devices work. Wikipedia more generally refers to them as laser rangefinders:

A laser rangefinder, also known as a laser telemeter, is a rangefinder that uses a laser beam to determine the distance to an object. The most common form of laser rangefinder operates on the time of flight principle by sending a laser pulse in a narrow beam towards the object and measuring the time taken by the pulse to be reflected off the target and returned to the sender. Due to the high speed of light, this technique is not appropriate for high precision sub-millimeter measurements, where triangulation and other techniques are often used. It is a type of scannerless lidar.

The basic principle employed, as noted in the previous paragraph, is known as “time of flight” (ToF), one of the three most common approaches (along with stereopsis, which is employed by the human visual system, and structured light, used by the original Microsoft Kinect) to discerning depth in computer vision and other applications. In the previous photo, the laser illumination emitter (Class 2 and <1mW) is at the right, with the image sensor receptor at left. Yes, I’m guessing that this explains the earlier STMicroelectronics licensing reveal. And the three metal contacts mate with matching pins you’ll soon see on the no-laser-necessary adapters.
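
To put numbers on the ToF principle: distance is d = c × t/2 for the round trip, so the claimed 1/8-inch accuracy implies resolving round-trip timing differences of roughly 20 picoseconds. Here is a quick Python sketch using only the specs quoted above:

    # Time-of-flight arithmetic for the laser measurer: d = c * t / 2.
    C = 299_792_458.0   # speed of light, m/s
    INCH = 0.0254       # meters per inch

    def round_trip_time(distance_m):
        """Round-trip time of a laser pulse to a target distance_m away."""
        return 2.0 * distance_m / C

    # Timing resolution needed for the rated +/- 1/8-inch accuracy:
    print(f"{round_trip_time(0.125 * INCH) * 1e12:.0f} ps")   # ~21 ps
    # Round-trip time at the rated five-dozen-plus feet (65 ft shown here):
    print(f"{round_trip_time(65 * 12 * INCH) * 1e9:.0f} ns")  # ~132 ns

Resolving tens of picoseconds directly is hard, which is why practical laser measurers typically rely on phase-shift measurement of a modulated beam, or on heavy averaging, rather than timing a single pulse edge.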

The bottom is admittedly less exciting:

As are the textured and rubberized (for firm user grip) left and right sides (the two-hole structure at the bottom of the left side is presumably for a not-included “leash”):

I intentionally shot the front a bit off-center to eliminate reflections-of-self from bouncing off the glossy display and case finish:

The duller-finish backside presented no such reflectance concerns:

I have no idea what that white rectangular thing was inside the battery compartment, and I wasn’t brave enough to cut it open for a more thorough inspection (an RFID tracking tag, maybe, readers?):

This closeup of the back label does double-duty as a pictorial explanation of my initial disassembly step:

Screws underneath, just as I suspected!

You know what comes next…

Liftoff!

We can already see an overview of the laser transmitter function block (complete with a heatsink) at upper right and the receptor counterpart at upper left. Turns out, in fact, that the entire inner assembly lifts right out with no further unscrew, unglue, etc. effort at this point:

From an orientation standpoint, you’re now looking at the inside of the front portion of the outer case. Note the metal extensions of the three earlier noted topside metal contacts, which likely press against matching (flex? likely) contacts on the PCB itself. Again, hold that thought.

Now we can flip over and see the (even more bare) other side of the PCB for the first time:

This is a perspective you’ve already seen, this time absent the case, however:

Three more views from different angles:

And as you may have already guessed, the display isn’t attached to the PCB other than via the flex cable you see, so it’s easy to flip 180°:

Speaking of flipping, let’s turn the entire PCB back over to its back side, now unencumbered by the case that previously held it in place:

Again, some more views from different angles:

See those two screws? Removing them didn’t by itself get us any further along from a disassembly standpoint:

But unscrewing the two other ones up top did the trick:

Flipping the PCB back over and inserting a “wedge” (small flat head screwdriver) between the PCB and ToF subassembly popped the latter off straightaway:

Here’s the now-exposed underside of the ToF module:

and the seen-before frontside and end, this time absent the PCB:

Newly exposed, previously underneath the ToF module, is the system processor, an STMicroelectronics (surprise!…not, if you recall the earlier licensing literature…) STM32F051R8T7 based on an Arm Cortex-M0:

And also newly revealed is the laser at left which feeds the same-side ToF module optics, along with the image sensor at right which is fed by the optics in the other half of the module (keep in mind that in this orientation, the PCB is upside-down from its normal-operation configuration):

I almost stopped at this point. But those three metal contacts at the top rim of the base unit intrigued me:

There must be matching electrical circuitry in the adapters, right? I figured I might as well satisfy my curiosity and see. In no particular order, I started with my longstanding measurement-media nemesis, the Tape Adapter, first. Front view:

Top view:

Bottom view, revealing the previously foreshadowed pins:

Left and right sides, the latter giving our first glimpse at the end-of-tape tip:

And two more tip perspectives from the back:

Peeling off the label worked last time, so why not try again, right?

Revealed were two plastic tabs, which I unwisely-in-retrospect immediately forgot about (stay tuned). Because, after all, that seam along the top looked mighty enticing, right?

It admittedly was an effective move:

Here’s the inside of the top lid. That groove you see in the middle mates up with the end of the “spring” side of the spool, which you’ll see shortly:

And here’s the inside of the bottom bulk of the outer case. See what looks like an IC at the bottom of that circular hole in the center? Hmmm…

Now for the spool normally in-between those two. Here’s a top view first. That coiled metal spring normally fits completely inside the plastic piece, with its end fitting into the previously seen groove inside the top lid:

The bottom side. Hey, at least the tape isn’t flesh-mangling metal:

A side view, oriented as when it’s installed in the adapter and in use:

And by the way, about the spindle that fits into that round hole…it’s metallic. Again, hold that thought (and remember my earlier comment about using a rubber band to keep a fabric tape measure neat and tidy?):

Here’s the part where I elaborate on my earlier “forgot about the plastic tabs” comment. At first things were going fine:

But at this point I was stuck; I couldn’t muscle the inner assembly out any more. So, I jammed the earlier seen flat head screwdriver in one side and wedged it the rest of the way out:

Unfortunately, mangling one of the ICs on the PCB in the process:

Had I just popped both plastic tabs free, I would have been home free. Live and learn (once again hold that thought). Fortunately, I could still discern the package markings. The larger chip is also from STMicroelectronics (no surprise again!), another Arm Cortex-M0 based microcontroller, this time the STM32F030F4. And while at first, reflective of my earlier close-proximity magnetic-tip comment, I thought that the other IC (which we saw before at the bottom of that round hole) might be a Hall effect sensor, I was close-but-not-quite: it’s a NXP Semiconductors KMZ60 magnetoresistive angle sensor with integrated amplifier normally intended for angular control applications and brushless DC motors. In this case, the user’s muscle is the motor! Interesting, eh?

Now for the other, the Wheel Adapter. Front:

Top:

Bottom (pins again! And note that the mysterious white strip seen earlier was pressed into service as a prop-up device below the angled-top adapter):

Left and right sides:

And label-clad back:

I’m predictable, aren’t I?

Note to self: do NOT forget the two now-exposed plastic tabs this time:

That went much smoother this time:

But there are TWO mini-PCBs this time, one down by the contact pins and another up by the wheel, connected by a three-wire harness:

Unfortunately, in the process of removing the case piece, I somehow snapped off the connector mating this particular mini-PCB to the harness:

Let’s go back to the larger lower mini-PCB for a moment. I won’t feign surprise once again, as the redundancy is likely getting tiring to readers, but the main sliver of silicon here is yet another STMicroelectronics STM32F030F4 microcontroller:

The mini-PCB on the other end of the harness pops right out:

Kinda looks like a motor (in actuality, an Alps Alpine sensor), doesn’t it, but this time fed by the human-powered wheel versus a tape spool?

So, a conceptually similar approach to what we saw before with the other adapter, albeit with some implementation variation. I’ll close with a few shots of the now-separate male and female connector pair that I mangled earlier:

And now, passing through 2,000 words and fearful of the mangling that Aalyia might subject me to if I ramble on further, I’ll close, as usual, with an invitation for your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


Why verification matters in network-on-chip (NoC) design

Tue, 05/21/2024 - 11:18

In the rapidly evolving semiconductor industry, keeping pace with Moore’s Law presents opportunities and challenges, particularly in system-on-chip (SoC) designs. Notably, the number of transistors in the largest microprocessors has soared to an unprecedented trillion.

Therefore, as modern applications demand increasing complexity and functionality, improving transistor usage efficiency without sacrificing energy efficiency has become a key goal. Thus, the network-on-chip (NoC) concept has been introduced, a solution designed to address the limitations of traditional bus-based systems by enabling efficient, scalable, and flexible on-chip data transmission.

Designing an NoC involves defining requirements, selecting an architecture, choosing a routing algorithm, planning the physical layout, and conducting verification to ensure performance and reliability. Verification is the final checkpoint before an NoC can be deemed ready for deployment: it is how a deadlock/livelock-free system is established, increasing confidence in the design.

In this article, we will dive deeper into a comprehensive methodology for formally verifying an NoC, showcasing the approaches and techniques that ensure our NoC designs are robust, efficient, and ready to meet the challenges of modern computing environments.

Emergence of network-on-chip

NoCs have revolutionized data communications within SoCs by organizing chip components into networks that facilitate the simultaneous transmission of data through multiple paths.

The network consists of various elements, including routers, links, and network interfaces, which facilitate communication between processing elements (PEs) such as CPU cores, memory blocks, and other specialized IP cores. Communication occurs through packet-switched data transmission where data is divided into packets and routed through the network to its destination.

One overview of the complexity of SoC design emphasizes the integration of multiple IP blocks and highlights the need for automated NoC solutions across different SoC categories, from basic to advanced. It advocates using NoCs in SoC designs to effectively achieve optimal data transfer and performance.

At the heart of NoC architecture are several key components:

  1. Links: Bundles of wires that transmit signals.
  2. Switches/routers: Devices routing packets from input to output channels based on a routing algorithm.
  3. Channels: Logical connections facilitating communication between routers or switches.
  4. Nodes: Routers or switches within the network.
  5. Messages and packets: Units of transfer within the network, with messages being divided into multiple packets for transmission.
  6. Flits: Flow control units within the network, dividing packets for efficient routing (see the sketch after this list).
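
To make the message/packet/flit hierarchy concrete, here is a small illustrative Python sketch. The 8-byte flit and 4-flit packet sizes are arbitrary assumptions for the example, not values from any particular NoC.

    # Illustrative segmentation of a message into packets and flits.
    FLIT_BYTES = 8     # flow-control unit moved per link transfer (assumed)
    PACKET_FLITS = 4   # flits per packet (assumed)

    def segment(message):
        """Split a message into packets, each packet being a list of flits."""
        flits = [message[i:i + FLIT_BYTES]
                 for i in range(0, len(message), FLIT_BYTES)]
        return [flits[i:i + PACKET_FLITS]
                for i in range(0, len(flits), PACKET_FLITS)]

    packets = segment(bytes(64))   # a 64-byte message
    print(len(packets), "packets,", sum(len(p) for p in packets), "flits")  # 2 packets, 8 flits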

Architectural design and flow control

NoC topology plays a crucial role in optimizing data flow, with Mesh, Ring, Torus, and Butterfly topologies offering various advantages (Figure 1). Flow control mechanisms, such as circuit switching and wormhole flow control, ensure efficient data transmission and minimize congestion and latency.

Figure 1 The topology of an NoC plays an important role in optimizing data flow, as shown with Mesh and Ring (top left and right) and Torus and Butterfly (bottom left and right). Source: Axiomise

Role of routing algorithms in NoC efficiency

As we delve into the complexity of NoC design, one integral aspect that deserves attention is the routing algorithm: the brains behind the NoC that determine how packets move through the complex network from source to destination. Routing algorithms must be efficient, scalable, and versatile enough to adapt to different communication needs and network conditions.

Some of the common routing algorithms for network-on-chip include:

  1. XY routing algorithm: This is a deterministic routing algorithm usually used in grid-structured NoCs. It first routes along the X-axis to the destination column and then along the Y-axis to the destination row (see the sketch after this list). It has the advantages of simplicity and predictability, but it may not produce the shortest path and does not accommodate link failures.
  2. Parity routing algorithm: This algorithm aims to reduce network congestion and increase fault tolerance of the network. It avoids congestion by choosing different paths (based on the parity of the source and destination) in different situations.
  3. Adaptive routing algorithms: These algorithms dynamically change routing decisions based on the current state of the network (for example, link congestion). They are more flexible than XY routing algorithms and can optimize paths based on network conditions, but they are more complex to implement.
  4. Shortest path routing algorithms: These algorithms find the shortest path from the source node to the destination node. They are less commonly used in NoC design because calculating the path in real-time can be costly, but they can also be used for path pre-computation or heuristic adjustment.
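
As promised above, here is a minimal Python sketch of deterministic XY routing on a 2D mesh; router coordinates and hop labels are illustrative.

    # Deterministic XY (dimension-ordered) routing on a 2D mesh NoC:
    # route along X to the destination column first, then along Y to the row.
    def xy_route(src, dst):
        """Return the hop sequence (E/W/N/S) from router src to router dst."""
        (x, y), (dx, dy) = src, dst
        hops = []
        while x != dx:                      # X dimension first...
            hops.append("E" if dx > x else "W")
            x += 1 if dx > x else -1
        while y != dy:                      # ...then Y
            hops.append("N" if dy > y else "S")
            y += 1 if dy > y else -1
        return hops

    print(xy_route((0, 0), (2, 1)))         # ['E', 'E', 'N']

Because every packet crosses the dimensions in the same fixed order, XY routing cannot form a cyclic channel dependency on a mesh, which is why it is commonly used as a deadlock-free baseline; the trade-off is the lack of adaptivity noted above.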

Advantages of NoCs

  1. Scalability: As chip designs become more complex and incorporate more components, NoCs provide a scalable solution to manage interconnects efficiently. They facilitate the addition of new components without significantly impacting the existing communication infrastructure.
  2. Parallelism: NoCs enable parallel data transfers, which can significantly increase the throughput of the system. Multiple data packets can traverse the network simultaneously along different paths, reducing data congestion and improving performance.
  3. Power consumption: By providing shorter and more direct paths for data transfer, NoCs can reduce the chip’s overall power consumption. Efficient routing and switching mechanisms further contribute to power savings.
  4. Improved performance: The ability to manage data traffic efficiently and minimize bottlenecks through routing algorithms enhances the overall performance of the SoC. NoCs can adapt to the varying bandwidth requirements of different IP blocks, providing optimized data transfer rates.
  5. Quality of service (QoS): NoCs can support QoS features, ensuring that critical data transfers are given priority over less urgent communications. This is crucial for applications requiring high reliability and real-time processing.
  6. Flexibility and customization: NoC architectures can employ a variety of routing algorithms tailored to specific design requirements and application scenarios, making them highly customizable.
  7. Choice of routing algorithm: Routing algorithms determine the network path of a packet from its source to its destination, and this choice can significantly impact the performance, efficiency, and fault recovery of the network.

NoC verification challenges

Designing an NoC and ensuring it works per specification is a formidable challenge. Power, performance, and area (PPA) optimizations—along with functional safety, security, and deadlock and livelock detection—add a significant chunk of extra verification work on top of functional verification, which is mostly centered on routing, data transport, data integrity, protocol verification, arbitration, and starvation checking.

Deadlocks and livelocks can force a chip respin, which for modern AI/ML chips can cost $25 million in some cases. Constrained-random simulation techniques are not adequate for NoC verification; moreover, neither simulation nor emulation can provide guarantees of correctness. Formal methods rooted in proof-centric program reasoning are the only way of ensuring the absence of bugs.

Formal verification to the rescue

Industrial-grade formal verification (FV) relies on formal property verification (FPV) to perform program reasoning: a requirement expressed in the formal syntax of SystemVerilog Assertions (SVA) is checked against the design model via an intelligent state-space search algorithm to conclude whether the requirement holds on all reachable states of the design.

The program reasoning effort terminates with either a proof or a disproof, the latter generating counter-example waveforms. No stimulus is written by human engineers; the formal technology implicitly exercises an almost infinite set of stimuli, limited only by the size of the design's inputs. This combination of proof and stimulus-free exhaustiveness is at the heart of formal verification.
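As a conceptual analogy only (real FPV tools reason about RTL against SVA properties, not Python objects), the following sketch shows what "checking all reachable states without hand-written stimulus" means on a toy finite state machine; here step returns every successor state over all possible inputs:

```python
from collections import deque

def check_invariant(initial_states, step, invariant):
    """Breadth-first exploration of every reachable state.
    Returns None if the invariant holds everywhere, else a
    counterexample trace leading to the violating state."""
    frontier = deque((s, [s]) for s in initial_states)
    visited = set(initial_states)
    while frontier:
        state, trace = frontier.popleft()
        if not invariant(state):
            return trace  # counterexample, like an FPV waveform
        for nxt in step(state):  # all inputs, no hand-written stimulus
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None  # proof: the invariant holds on all reachable states

# Toy example: a 4-bit counter that must never reach 10 (it does)
trace = check_invariant({0},
                        step=lambda s: {(s + 1) % 16},
                        invariant=lambda s: s != 10)
print(trace)  # [0, 1, ..., 10]
```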

This exhaustive analysis makes it possible to catch corner-case issues in the design, as well as nasty deadlocks and livelocks lurking within it. Deep interactions in the state space are examined quickly, revealing control-intensive issues caused by concurrent arbitration and routing traffic in the NoC.

With NoCs featuring numerous interconnected components operating in tandem, simulating the entire range of possible states and behaviors using constrained-random simulation becomes computationally burdensome and impractical: the stimulus needed to drive the NoC deep enough to unravel its state-space interactions is prohibitively hard to write. This limitation undermines the reliability and precision of simulation outcomes.

There are compelling advantages to integrating FV into the NoC design and verification process using easy-to-understand finite-state-machine notations. Moreover, reusing the protocol checkers developed for FV in chip and system integration testing increases confidence and aids error detection and isolation.

Compared with traditional system simulation, this approach is especially effective for complex systems with large state spaces.

An NoC formal verification methodology

In the complex process of chip design verification, achieving simplicity and efficiency amid complexity is the key. This journey is guided through syntactic and semantic simplification and innovative abstraction techniques.

In addition to these basic strategies, using invariants and an internal assumption assurance process further accelerates proof times, leveraging microarchitectural insights to bridge the gap between testbench and design under test (DUT). This complex verification dance is refined through case splitting and scenario reduction, breaking down complex interactions into manageable checks to ensure comprehensive coverage without overwhelming the verification process.

Symmetry reduction and structural decomposition address verification challenges arising from the complex behavior of large designs. These methods, along with inference-rule reduction and initial-value abstraction (IVA), provide a path that effectively covers every possible scenario, ensuring that even the most daunting designs can be confidently verified.

Rate flow and hopping techniques provide innovative solutions to manage the flow of messages and the complexity introduced by deep sequential states. Finally, black-box and cut-pointing techniques are employed to simplify the verification environment further, eliminating internal logic not directly subject to scrutiny and focusing verification efforts where they are most needed.

Through these sophisticated techniques, the goal of a thorough and efficient verification process becomes a tangible reality, demonstrating the state-of-the-art of modern chip design and verification methods.

Safeguarding NoCs against deadlocks

When setting up NoCs, it's important for channels to be independent, but this is not easy to ensure. Dependencies between channels can lead to troublesome deadlocks, where the entire system halts even if just one component fails.
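One classic, well-established criterion here is due to Dally and Seitz: a routing function is deadlock-free if its channel dependency graph is acyclic. A minimal Python sketch of that check follows; the four-channel graph is an invented example:

```python
def has_cycle(deps):
    """Detect a cycle in a channel dependency graph, where deps maps
    each channel to the channels it can wait on. A cycle means a
    potential routing deadlock (Dally & Seitz)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {c: WHITE for c in deps}
    def dfs(c):
        color[c] = GRAY
        for nxt in deps.get(c, ()):
            if color.get(nxt, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[c] = BLACK
        return False
    return any(color[c] == WHITE and dfs(c) for c in deps)

# Invented example: four channels waiting on each other in a ring
deps = {"c0": ["c1"], "c1": ["c2"], "c2": ["c3"], "c3": ["c0"]}
print(has_cycle(deps))  # True: this routing can deadlock
```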

Formal verification also contributes to fault tolerance, crucial in NoCs where numerous components communicate. When a component fails, it’s important to understand how close the system is to a permanent deadlock.

Formal verification exhaustively explores all possible system states, offering the best means to ensure fault tolerance. With the right approach, the weaknesses of an NoC can be identified and addressed; catching them early can save an expensive respin.

Optimizing routing rules for the application at hand is common and critical for performance, but it can be tricky and hard to test thoroughly in simulation: hundreds of new test cases may emerge from introducing just one new routing rule.

So, modeling all the optimizations in formal verification is crucial. Done properly, it quickly catches corner-case bugs or proves that the optimizations behave as expected, preventing unexpected issues.

In the next section, we describe at a high level how some bugs can be caught with formal verification.

Formal verification case studies

Message dependence caused deadlock

A bug originated from a flaw in the flow control mechanism whereby request and response packets shared the same FIFO. In this scenario, when multiple source ports initiate requests, the flow control method leads to a deadlock. For instance, when source port 0 sends a request reqs0, consisting of header flit h0req, body b0req, and tail t0req, it moves through successfully.

Subsequently, the response resps0, made of (h1resp, b1resp, t1resp) and also intended for source port 0, arrives without issue. However, when a subsequent request reqs2 from source port 2, with header flit h2req, body b2req, and tail t2req, enters the FIFO, only its header and body move forward; the tail t2req cannot be sampled into the FIFO because the response header h2resp arrives in the same clock cycle and blocks it.

Consequently, source port 2 is left waiting for the tail t2req, blocked by the response header, resulting in a deadlock. Meanwhile, source port 1, also waiting for a response, will never get one, further exacerbating the situation. This deadlock paralyzes the entire NoC grid, highlighting the critical flaw in the flow control mechanism.

Figure 2 Dependence between request and response causes deadlock. Source: Axiomise

Routing error caused deadlock

In the context of the previously mentioned flow control method, each source port awaits a response after sending a request. However, a deadlock arises due to a flaw in the routing function. When a request is mistakenly routed to an incorrect target port, triggering the assertion of the “wrong_dest” signal, the packet is discarded. Consequently, the source port remains in a state of deadlock, unable to proceed with further requests while awaiting a response that will never arrive.

Figure 3 A routing error causes a deadlock: the source port cannot proceed while it awaits a response that will never arrive. Source: Axiomise

Redundant logic revealing PPA issues

Certain design choices in the routing algorithm, such as prohibiting specific turns, lead to situations where several FIFOs never have push asserted and some arbiters handle fewer than two requestors.

This was identified during the verification process, revealing that these components—and consequently, millions of gates—were going unused in the design while still occupying chip area and, when clocked, burning power without contributing to performance. Eliminating these superfluous gates significantly reduced manufacturing costs and improved design efficiency.

The case for formal verification in NoC

An NoC-based fabric is essential for any modern high-performance computing or AI/ML machine. NoCs enhance performance by efficient routing to avoid congestion. While NoCs are designed to be efficient at data transmission via routing, they often encounter deadlocks and livelocks in addition to the usual functional correctness challenges between source and destination nodes.

With a range of topologies possible for routing, covering all possible source/destination pairs with directed simulation sequences is almost impossible for dynamic simulation. Detecting deadlocks, starvation, and livelocks is nearly impossible for any simulation- or even emulation-based verification.

Formal methods drive an almost infinite amount of stimulus to cover all necessary pairs encountered in any topology. With the power of exhaustive proofs, formal can establish conclusively that no deadlock, livelock, or starvation exists.

Editor’s Note: Axiomise published a whitepaper in 2022, summarizing a range of practically efficient formal verification techniques used for verifying high-performance NoCs.

Zifei Huang is a formal verification engineer at Axiomise, focusing on NoC and RISC-V architectures.

Adeel Liaquat is an engineering manager at Axiomise, specializing in formal verification methodologies.

Ashish Darbari is founder and CEO of Axiomise, a company offering training, consulting, services, and verification IP to various semiconductor firms.

Related Content


The post Why verification matters in network-on-chip (NoC) design appeared first on EDN.

Relay and solenoid driver circuit doubles supply voltage to conserve sustaining power

Mon, 05/20/2024 - 16:10

A generally accepted fact about relays and solenoids is that after they're driven into the actuated state, only half as much coil voltage, and therefore only one fourth as much coil power, is required to reliably sustain it. Consequently, any solenoid or relay driver that continuously applies the full initial actuation voltage merely to sustain is wastefully squandering four times as much power as the job requires.
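The arithmetic is just Ohm's law: coil power scales with the square of the applied voltage, so halving the voltage quarters the power.

```latex
P_{\mathrm{actuate}} = \frac{V^2}{R_{\mathrm{coil}}},
\qquad
P_{\mathrm{sustain}} = \frac{(V/2)^2}{R_{\mathrm{coil}}}
                     = \frac{1}{4}\,P_{\mathrm{actuate}}
```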

The simplest and cheapest (partial) solution to this problem is shown in Figure 1.

 Figure 1 Basic driver circuit where C1 actuates, current-halving R1 sustains, then C1 discharges through R1 during Toff.

Wow the engineering world with your unique design: Design Ideas Submission Guide

But as is often true of “simple and cheap,” Figure 1’s solution suffers from some costs and complications.

  1. While R1 successfully cuts the sustaining current by half, it dissipates just as much power as the coil while doing so. Consequently, total sustaining power is ½ rather than ¼ of actuating power, so only half of the theoretical power savings are actually realized (see the arithmetic after this list).
  2. When the driver is turned off, a long recovery delay must be imposed before the next actuation pulse to allow C1 enough time to discharge through R1. Otherwise, the next actuation pulse will have inadequate amplitude and may fail. This effect is aggravated by the fact that, during actuation, C1 charges through the parallel combination of R1 and the coil resistance Rm, but during Toff it discharges through R1 alone. This makes recovery take twice as long as actuation.
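Both effects fall out of the same assumption, namely that R1 is chosen equal to the coil resistance Rm (which is what halves the sustaining current):

```latex
% Sustaining power with R1 = Rm: the coil gets 1/4, R1 burns another 1/4
I = \frac{V}{R_1 + R_m} = \frac{V}{2R_m}
\;\Rightarrow\;
P_{\mathrm{total}} = I^2 (R_1 + R_m) = \frac{V^2}{2R_m}
                   = \tfrac{1}{2}\,P_{\mathrm{actuate}}

% Recovery vs. actuation time constants
\tau_{\mathrm{actuate}} = (R_1 \parallel R_m)\,C_1 = \tfrac{1}{2} R_m C_1,
\qquad
\tau_{\mathrm{recover}} = R_1 C_1 = R_m C_1 = 2\,\tau_{\mathrm{actuate}}
```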

Figure 2 presents a better performing, albeit less simple and cheap, solution that’s the subject of this Design Idea.

Figure 2 Q1 and Q2 cooperate with C to double VL for actuation, Q2 and D2 sustain, then Q3 rapidly discharges C through R to quickly recover for the next cycle.

Actuation begins with a positive pulse at the input, turning on Q1, which drives the bottom end of the coil to -VL and turns on Q2, which pulls the top end of the coil to +VL. Thus, 2VL appears across the coil, ensuring reliable actuation. As C finishes charging, Schottky diode D2 takes over conduction from Q1. This cuts the sustaining voltage to ½ the actuation value and therefore drops sustaining power to ¼.

At the end of the cycle when the incoming signal returns to V0, Q3 turns on, initiating a rapid discharge of C through D2 and R. In fact, recovery can easily be arranged to complete in less time than the relay or solenoid needs to drop out. Then no explicit inter-cycle delay is necessary and recovery time is therefore effectively zero!

Moral: You get what you pay for!

But what happens if even doubling the VL logic rail still doesn’t make enough voltage to drive the coil and a higher supply rail is needed? 

Figure 3 addresses that issue with some trickery described in an earlier Design Idea: Driving CMOS totem poles with logic signals, AC coupling, and grounded gates.

 

Figure 3 Level shifting Q4, R1, and R2 are added to accommodate ++V > VL.

 Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Relay and solenoid driver circuit doubles supply voltage to conserve sustaining power appeared first on EDN.

Ethernet adapter chips aim to bolster AI data center networking

Mon, 05/20/2024 - 16:00

At a time when scalable high-bandwidth and low-latency connectivity is becoming critical for artificial intelligence (AI) clusters, the new 400G PCIe Gen 5.0 Ethernet adapter chips aim to resolve connectivity bottlenecks in AI data centers.

Broadcom claims its 400G PCIe Gen 5.0 chips are the first Ethernet adapters built with 5-nm process technology. “We recognize the significance of fostering a power-efficient and highly connected data center for AI ecosystem,” said Jas Tremblay, VP and GM of the Data Center Solutions Group at Broadcom.

The 400G PCIe Gen 5.0 Ethernet adapters deliver higher rack density by driving passive copper cables up to five meters. Moreover, these Ethernet adapters employ low-latency congestion control technology and innovative telemetry features, and they are equipped with a third-generation RDMA over Converged Ethernet (RoCE) pipeline.

These Ethernet adapters are built on Broadcom’s sixth-generation hardened network interface card (NIC) architecture. Their software is designed to be vendor agnostic; it supports a broad ecosystem of CPUs, GPUs, PCIe and Ethernet switches using open PCIe and Ethernet standards.

Ethernet adapter chips must resolve connectivity bottlenecks as cluster sizes grow rapidly in AI data centers. Source: Broadcom

According to Patrick Moorhead, chief analyst at Moor Insights and Strategy, as the industry races to deliver generative AI at scale, the immense volumes of data that must be processed to train large language models (LLMs) require even larger server clusters. He added that Ethernet presents a compelling case as the networking technology of choice for next-generation AI workloads.

AI-centric applications are reshaping the data center networking landscape, and Broadcom’s new 400G PCIe Gen 5.0 Ethernet adapters highlight the crucial importance of devices operating in the high-bandwidth, high-stress network environment that characterizes AI infrastructure.

Related Content


The post Ethernet adapter chips aim to bolster AI data center networking appeared first on EDN.

Rack-mount oscilloscopes are just 2U high

Fri, 05/17/2024 - 17:08

The MXO 5C series of low-profile oscilloscopes from R&S provides a bandwidth of up to 2 GHz and either four or eight channels. Although they lack displays, the rack-mount scopes deliver the same performance as the MXO 5 series, while occupying only a quarter of the vertical height (3.5 inches or 8.9 cm).

Built with two in-house ASICs for fast response, the MXO 5C delivers an acquisition capture rate of up to 4.5 million waveforms per second. It also features a 12-bit ADC with a high-definition mode that increases vertical resolution to 18 bits. A small front-panel E-ink display shows key information, such as IP address, firmware version, and connectivity status.

Four-channel models offer bandwidths of 350 MHz, 500 MHz, 1 GHz, and 2 GHz. Eight-channel models provide the same bandwidths, with the addition of 100 MHz and 200 MHz options. Standard acquisition memory of 500 Mpoints per channel can be optionally upgraded to 1 Gpoint per channel.

Although tailored for rack-mount applications, the MXO 5C oscilloscopes can also be used on a bench by connecting an external display via their HDMI or DisplayPort interfaces. Other connectivity includes two USB 3.0 ports and a 1-Gbit LAN port.

The MXO 5C series oscilloscopes are now available from R&S and select distribution channel partners.

MXO 5C series product page

Rohde & Schwarz 



The post Rack-mount oscilloscopes are just 2U high appeared first on EDN.

Equalizer IC eases DOCSIS 4.0 CATV upgrades

Fri, 05/17/2024 - 17:07

A single-chip inverse cable equalizer, the QPC7330 from Qorvo allows CATV operators to upgrade their hybrid fiber coax (HFC) networks to DOCSIS 4.0. The QPC7330 streamlines field installation by eliminating the need for plug-ins or complicated circuitry to implement the input cable simulation function. Programmed through an I2C interface, the device seamlessly integrates into the automated setup routine.

The function of the QPC7330 75-Ω inverse cable equalizer is to flatten out an input signal with too much uptilt in a line extender or system amplifier. It features 25 states to simulate the loss of different lengths of coaxial cable, offering tilt adjustments from 1 dB to 24 dB (measured from 108 MHz to 1794 MHz). The device integrates all equalizer functions, including a low-loss bypass mode, into a 10×14-mm laminate over-mold module.

The QPC7330 inverse cable equalizer is sampling now, with production quantities available in August 2024.

QPC7330 product page

Qorvo



The post Equalizer IC eases DOCSIS 4.0 CATV upgrades appeared first on EDN.

DC/DC converters shrink car body electronics

Fri, 05/17/2024 - 17:07

ST’s A6983 step-down synchronous DC/DC converters provide space savings in light-load, low-noise, and isolated automotive applications. The series offers flexible design choices, including six non-isolated step-down converters in low-power and low-noise configurations, plus one isolated buck converter. With compensation circuitry on-chip, these devices help minimize both size and design complexity.

Non-isolated A6983 converters supply a load current up to 3 A and achieve 88% typical efficiency at full load. Low-power variants minimize drain on the vehicle battery in applications that remain active when parked. Low-noise types operate with constant switching frequency and reduce output ripple across the load range. These devices offer a choice of 3.3-V, 5.0-V, and adjustable output voltage.

The A6983I is a 10-W isolated buck converter with primary-side regulation that eliminates the need for an optocoupler. It allows accurate adjustment of the primary output voltage, while the transformer turns ratio determines the secondary voltage.

All of the AEC-Q100 qualified converters have a quiescent operating current of 25 µA and a power-saving mode that draws less than 2 µA. Input voltage ranges from 3.5 V to 38 V, with load-dump tolerance up to 40 V.

The converters come in 3×3-mm QFN16 packages. Prices start at $1.75 and $1.81 for the A6983 and A6983I, respectively, in lots of 1000 units. Free samples are available from the ST eStore.

A6983 product page

A6983I product page

STMicroelectronics



The post DC/DC converters shrink car body electronics appeared first on EDN.

Position sensor suits vehicle safety systems

Fri, 05/17/2024 - 17:07

The Melexis MLX90427 magnetic position sensor is intended for applications requiring high automotive functional safety levels, such as steer-by-wire systems. It provides stray field immunity and EMC robustness, as well as SPI output. Additionally, the device transitions seamlessly between four operating modes, including rotary, joystick, rotary with stray field immunity, and raw data.

At the heart of the MLX90427 is a Triaxis Hall magnetic sensing element that is sensitive to three components of flux density (BX, BY, and BZ) applied to the IC. This allows the sensor to detect movement of any magnet in its vicinity. The part also integrates an ADC, DSP, and output stage driver for SPI signal output.

In addition to AEC-Q100 Grade 0 qualification, the MLX90427 is SEooC ASIL C ready in accordance with ISO 26262 and can be integrated into automotive safety-related systems up to ASIL D. To simplify system integration, the sensor is compatible with 3.3-V and 5-V designs and operates over a temperature range of -40°C to +160°C. Self-diagnostics are built in to ensure swift fault reporting.

The MLX90427 position sensor comes in an 8-pin SOIC package. A fully redundant dual-die variant in a 16-pin TSSOP is due to launch in Q4 2024.

MLX90427 product page

Melexis



The post Position sensor suits vehicle safety systems appeared first on EDN.

Tiny transformer helps trim power supply noise

Fri, 05/17/2024 - 17:07

Murata’s L Cancel Transformer (LCT) neutralizes the equivalent series inductance (ESL) of a capacitor to optimize its noise-reducing capabilities. Leveraging nonmagnetic ceramic multilayer technology, the LCT improves power supply noise suppression, while cutting component count.

The LCT component suppresses harmonic noise on power lines within a frequency range of a few MHz to 1 GHz. It achieves this by using negative mutual inductance to lower a capacitor's ESL, thereby increasing the capacitor's noise-reduction effectiveness. Murata states that the LCT also significantly reduces the number of capacitors required in a power supply noise-reduction circuit design.
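As a first-order illustration of the principle (the model below is an assumption, not Murata's published equivalent circuit), coupling a negative mutual inductance M in series with the capacitor's ESL reduces the effective series inductance, which raises the capacitor's self-resonant frequency and extends its useful noise-suppression range:

```latex
L_{\mathrm{eff}} = L_{\mathrm{ESL}} - M,
\qquad
f_{\mathrm{SRF}} = \frac{1}{2\pi\sqrt{L_{\mathrm{eff}}\,C}}
```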

Operating at temperatures up to 125°C, the LCT ensures stable negative inductance and low DC resistance of 55 mΩ maximum. Rated current is 3 A maximum. The part is suitable for a wide range of consumer, industrial, and healthcare products. Dimensions of the surface-mount device are just 2.0×1.25×0.95 mm.

The L Cancel Transformer, part number LXLC21HN0N9C0L, is entering production. Samples are available now.

LCT product page

Murata



The post Tiny transformer helps trim power supply noise appeared first on EDN.
