EDN Network

Voice of the Engineer

JEDEC unveils memory designs with DDR5 clock drivers

Fri, 10/18/2024 - 02:17

JEDEC announced upcoming raw card designs for memory modules, which will complement two DDR5 clock driver standards published earlier this year. These raw card memory device standards are intended for use in client computing applications, such as laptops and desktop PCs, and will be supported by related appendix specifications.

Currently, JEDEC’s JC-45 Committee for DRAM Modules is developing the raw card designs in collaboration with the JC-40 Committee for Digital Logic and the JC-42 Committee for Solid-State Memories. The DDR5 clock driver standards include JESD323 (Clocked Unbuffered Dual Inline Memory Module) and JESD324 (Clocked Small Outline Dual Inline Memory Module).

Integrating a clock driver into a DDR5 DIMM improves memory stability and performance while enhancing signal integrity and reliability at high speeds. By locally regenerating the clock signal, the clock driver ensures stable operation at elevated clock rates. The initial version of the DDR5 clock driver enables data rates to increase from 6400 Mbps to 7200 Mbps, with future versions targeting up to 9200 Mbps.

According to JEDEC, member DIMM suppliers can provide solutions ahead of publication, while non-members will gain access to the design files once they are published. Available configurations include 1R×8, 1R×8 with EC4, 2R×8, 2R×8 with EC4, and 1R×16.

JEDEC


An engineer’s playground: A tour of Silicon Labs’ labs

Thu, 10/17/2024 - 18:20

EDN toured the Silicon Labs engineering facility in Austin during embedded world North America. The headquarters sits a convenient 15-minute walk from the convention center, making it a natural choice to explore. The tour, led by senior applications engineer Dan Nakoneczny and RF engineer Efrain Gaxiola-Sosa, mainly covered the analog and RF testing and validation processes. Upon entering, it was readily apparent that the facility was well stocked with high-spec test equipment for the range of tests required. Unlike the usual test-lab experience of an “instrumentation cave” tucked away in a basement or a windowless wing of a building, the lab has large windows that let in the Texas sun. The unfortunate side effect is the extra cooling this requires, which produces a constant hum of white noise.

Analog testing

Dan Nakoneczny began by showing us the benches used to test analog peripherals such as comparators, ADCs, voltage regulators, and oscillators. Generally, the team uses sockets to hold down the device under test (DUT), which could be any of Silicon Labs’ SoCs; the various current, voltage, and timing measurements are not massively impacted by the socket, and the sockets allow one test setup to be shared between devices. “We can use the sockets for most of our tests but for other tests, like DC/DC converters, we have to solder the parts down to our boards,” explained Dan. The test bench included oscilloscopes, power supplies, function generators, and a binary counter. “With analog peripherals, you don’t have pins so you’ll have to rely on simulations and have a more indirect way of taking measurements of your system.”

For this lab in particular, the DUTs can range from typical production devices to prototypes. “We’re building platform devices and a lot of this IP will be used in the next device that will come out at the end of the year or in a couple of months, so the validation team is between the product engineering and design team trying to find the small parts-per-million bugs, or issues that a customer might find during high-volume production ten years down the road. We can make changes now before it gets stamped into subsequent designs, where we might have to do 10 different revisions,” said Dan.

The automated test setup shown in Figure 1 includes a pick-and-place robotic arm that uses computer vision to grab and place DUTs in the socket before pressing down the socket and locking the device in place. All measurements go up to the cloud to Silicon Labs’ database, where special tools are used to visualize the data and, for instance, compare it with past devices.

Figure 1 Automated test setup that can be left for a weekend to test 20 to 50 parts.

RF validation

Receiver station

Efrain then guided us through the RF test stations scattered throughout the lab, beginning at the receiver test setups that sat within large Faraday cages providing up to 135 dB of isolation to prevent any interference. The PCB presented in Figure 2 shows the Silicon Labs motherboards, which can accept several daughter cards: “these are developed for each of our products so that the very same infrastructure, connectivity, and flexibility in our lab can be used across multiple platforms. It’s a little challenging to keep them updated all the time, but it makes our life easier,” said Efrain. The RF tests pose the unique challenge of requiring soldered-down DUTs, so a proper test fixture is key, and keeping the same fixtures across the various RF test stations and, as much as possible, across new production devices is a challenge of its own. There are specialized motherboards that can go into the oven for temperature testing from -40°C to 135°C. “We have a bunch of switches and so we can test serially, but we cannot test in parallel because our equipment has a single channel for receiving information.” Efrain stressed that the most critical parameter from this test is receiver sensitivity; the better the sensitivity, the more range the wireless link has. These test setups are also largely automated and can be logged into and controlled remotely, apart from the annual calibration required to ensure there are no test errors due to drift.

Figure 2 Setup for the receiver testing with power supply, a microwave switch system, signal generator, and PXI Express backplane chassis/modules.
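To put the sensitivity-versus-range relationship in rough numbers, the free-space path-loss model is a useful back-of-the-envelope check. The short Python sketch below is illustrative only; the transmit power, antenna gains, and sensitivity figures are assumptions, not Silicon Labs data.

import math

def free_space_range_m(tx_power_dbm, sensitivity_dbm, freq_hz, tx_gain_dbi=0.0, rx_gain_dbi=0.0):
    # Maximum line-of-sight range from the free-space path-loss model (no fading, no margin)
    link_budget_db = tx_power_dbm + tx_gain_dbi + rx_gain_dbi - sensitivity_dbm
    # FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c); solve for d
    fspl_const_db = 20 * math.log10(4 * math.pi * freq_hz / 3.0e8)
    return 10 ** ((link_budget_db - fspl_const_db) / 20)

# Hypothetical 2.4 GHz link: +10 dBm transmit power, isotropic antennas
for sens_dbm in (-95, -101):
    print(f"{sens_dbm} dBm sensitivity -> ~{free_space_range_m(10, sens_dbm, 2.4e9):.0f} m")

Under these idealized assumptions, every 6 dB of extra sensitivity roughly doubles the achievable line-of-sight range, which is why the parameter gets singled out.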

In-band transmit station

The next stop was the test station for in-band transmissions: “we transmit in several protocols where OFDM modulation is one of the most complex. So we want to make sure we can transmit the high data rates and that it is good enough for the receiver to actually get this information.” Efrain reminded us that the quality of a transmit signal depends largely on its error vector magnitude (EVM), making this one of the more critical parameters the station measures; the setup, however, only measures within the ISM bands (e.g., 2.4 GHz and 5 GHz).

Figure 3 Test station for measuring in-band transmissions.
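For readers unfamiliar with the metric, EVM is commonly computed as the RMS magnitude of the error vector between measured and ideal constellation points, normalized to the RMS reference magnitude. The sketch below uses synthetic QPSK symbols purely for illustration; it does not reflect Silicon Labs’ test software.

import numpy as np

rng = np.random.default_rng(0)

# Ideal QPSK reference symbols, and a "measured" set with a small gain error and additive noise
ref = (rng.choice([-1, 1], 1000) + 1j * rng.choice([-1, 1], 1000)) / np.sqrt(2)
meas = 0.98 * ref + rng.normal(0, 0.03, 1000) + 1j * rng.normal(0, 0.03, 1000)

# RMS error vector magnitude, normalized to the RMS reference magnitude
evm = np.sqrt(np.mean(np.abs(meas - ref) ** 2) / np.mean(np.abs(ref) ** 2))
print(f"EVM = {100 * evm:.1f}%  ({20 * np.log10(evm):.1f} dB)")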

Transmitter out-of-band station

For out-of-band testing (Figure 4), test and validation engineers look at the emissions in other bands, including cellular, radar, etc. “Ideally you want to transmit on your channel at a particular frequency alone, but you’re going to have harmonics that exist in frequencies that are a multiple of the fundamental frequency,” explained Efrain, “these cannot be higher than what the FCC allows.” He noted that the nonlinear behavior of fast-switching transistors is often the culprit behind this EMI.

The out-of-band station is used for pre-compliance testing before a part is sent off to an accredited test lab for full compliance testing. “Our equipment allows us to transmit and analyze some of this data (conducted emissions), so the output goes to the switch, the switch multiplexes the signal from the chip being tested, and this goes to the port of the spectrum analyzer where we can do several operations,” Efrain stated. An oscilloscope can also be used in place of the spectrum analyzer to perform other measurements. The power supplies within the setup must be quiet and clean to remove any unnecessary interference from the test instruments themselves. There are also battery emulators within the setup, since many of Silicon Labs’ devices run from batteries.

Efrain continued, “We are sending a signal with a given power, say 1 mW or 0 dBm, where we can go up to 20 dBm. We want to transmit at the highest power possible where one of the key figures is the output power of our power amplifiers; however, if we reach high output powers and we do not pass FCC or ETSI requirements, we cannot sell.” In this station the power of the fundamental is isolated and removed with a notch filter so the engineers can look at what appears at the harmonic frequencies. “If we leave the fundamental there, some energy will leak and the measurement we perform won’t be as accurate,” explained Efrain.

Figure 4 Test rack for conducted emissions testing. 

Radiated emissions testing

That conducted-emissions setup, naturally, cannot perform radiated emissions testing. For that, the Austin facility houses a small chamber designed by ETS-Lindgren, with a robotic arm used to adjust the DUT so it can be tested at various orientations. This, too, is used for pre-compliance testing.

Receiver out-of-band emissions

At this point, we entered yet another Faraday cage, this one much larger, to see how Silicon Labs tests how the receivers of its SoCs perform with interference in different bands. “We have specialized equipment to emulate a real RF environment so we test a particular set of signals that could potentially interfere with our DUT, and we want to make sure they don’t.” The setup shown in Figure 5 hosts many switches so that the engineers can test all the bands and channels of interest.

Figure 5 Test station to measure how out-of-band interference could impact the receivers on the DUT.

Load-pull stations

The load-pull stations in Figure 6 are a newer addition that the validation lab uses to make sure the power amplifiers (PAs) deliver maximum power efficiency. Efrain explained how fabrication can shift the load the DUT presents from the more ideal ~50 Ω to something more reactive: “in these two stations we are pulling the load that the PA is going to see. The change in impedance will mean that the power we are delivering is not the same and we need to identify what conditions will make our power amplifiers not behave properly and bring that back to our design.” The goal of the test is to build a robust product that meets customer expectations: “You can say you promise a certain performance only under ideal conditions, but can you control the output power and do a feedback loop to make sure that what you say is happening all the time?”

Figure 6 Load-pull stations used to find the optimal load impedance at the chip pin for maximum power transfer and PA efficiency. 

Radio regression test system

The small shielded enclosures found in Figure 7 are a benchtop solution for isolation (~80 dB) used by the PHY/MAC team to run its battery of tests. There are four of these boxes carrying test fixtures with 5 different DUTs, all connected to a Keithley S46 microwave switch system configured as a 2:28 multiplexer (MUX). “The team does validation at the PHY and MAC level to identify what we need to change or fix, and to make sure we don’t break anything if we make changes to firmware,” said Efrain, “when you’re working with multiple radio protocols in a single hardware platform, you need to reconfigure your radio to support these different protocols.” The tests are also used to exercise the fixes that Silicon Labs develops for customers that face issues in the field: “once those issues are fixed, they’ll come here and hopefully they won’t break anything else.” The regression stations run 24/7 with daily reports on testing.

Figure 7 Radio regression testing with shielded enclosures to test PHY and MAC protocols of the various radio models used in Silicon Labs SoCs.

Aalyia Shaukat, associate editor at EDN, has worked in the engineering publishing industry for nearly a decade with published works in EE journals and other trade publications. She holds a BSEE from Rochester Institute of Technology. 


80 MHz VFC with prescaler and preaccumulator

Thu, 10/17/2024 - 16:26

In 1986, famed analog innovator Jim Williams published his 100 MHz “King Kong” VFC in “Designs for High Performance Voltage-to-Frequency Converters”. If anyone’s ever done a faster VFC, I haven’t seen it. However, Figure 1 shamelessly borrows a few of Kong’s speed secrets and melds them with some other simple tricks to achieve 80% of the awesome speed-of-Kong. I call it “Kid Kong.”

Figure 1 “Kid Kong” VFC with take-back-half (TBH) pump and ACMOS prescaler can run at 80 MHz.


What lets the Kid work at a Kong-ish max output frequency with considerably less complexity (about half the parts count) than the King’s? It’s partly the self-compensating TBH diode charge pump described in an earlier Design Idea: “Take-back-half precision diode charge pump”. It also gets help from the power-thrifty speed of the AC logic family, which was brand new and just becoming available in 1986. Jim used logic technology that was more mature at the time, mainly MECL.

The (somewhat tachycardia-ish!) heart of Figure 1’s circuit is the super simple Q1, U1a, D5 ramp-reset oscillator. Q1’s collector current discharges the few picofarads of stray capacitance provided by its own collector, Schmitt trigger U1’s input, D5 and (as little as possible, please) the interconnections. U1’s single-digit-nanosecond propagation time allows the oscillation frequency to run from a dead stop (guaranteed by leakage-killing R4) to beyond 80 MHz (but not reliably as high as 100). So, the Speed King’s crown remains secure.

Each cycle, when Q1 ramps U1 pin 1 down to its trigger level, U1 responds with a ~5 ns ramp reset feedback pulse through Schottky D5. This pulls pin 1 back above the positive trigger level and starts the next oscillation cycle. Because the ramp-down rate is (more or less) proportional to Q1’s current, which is (kind of) proportional to A1’s output, oscillation frequency is (vaguely) likewise. The emphasis is on vaguely.

It’s feedback through the TBH pump, summation with the R1 input at integrator A1’s noninverting input, output to Q1 and thence to U1 pin 1 that converts “vaguely” to “accurately”. So, what’s U3 doing?

The TBH pump’s self-compensation allows it to accurately dispense charge at 20 MHz, but 80 MHz would be asking too much. U3’s two-bit ripple-counter factor of 4 prescaling fixes this problem.

U3 also provides an opportunity (note jumper J1) to substitute a high-quality 5.000-V reference for the questionable accuracy of the 5-V logic rail. Figure 2 provides circuitry to do that, with a 250-kHz diode charge pump boosting the rail to about 8 V to be then regulated down to a precision 5.000 V. Max U3 current draw, including pump drive, is about 18 mA at 80 MHz, which luckily the LT1027 reference is rated to handle. Just.

Figure 2 Rail booster and 5.000-V precision voltage reference.

The 16x preaccumulator U2 allows use of microcontroller onboard counter-timer peripherals as slow as 5 MHz to acquire a full-resolution 80 MHz VFC output. It is described in an earlier DI: “Preaccumulator handles VFC outputs that are too fast for a naked CTP to swallow”. Please refer to that for a full explanation.
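A quick sanity check of the divider chain, assuming both the prescaler and the preaccumulator operate on the full-speed VFC output as described above:

f_out_max = 80e6              # maximum VFC output frequency, Hz

f_pump = f_out_max / 4        # after U3's two-bit ripple-counter prescaler
f_counter = f_out_max / 16    # after U2's 16x preaccumulator

print(f"TBH charge pump sees {f_pump / 1e6:.0f} MHz")       # within the pump's ~20 MHz comfort zone
print(f"MCU counter-timer sees {f_counter / 1e6:.0f} MHz")  # slow enough for a 5 MHz peripheral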

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


Drone regulation and electronic augmentation

Wed, 10/16/2024 - 16:50

In one of last month’s posts, I mentioned that, in addition to recently investing in a modern DJI drone (a pair of them, actually, whose identity and details I’ll save for another day), I’d also decided to hold onto (therefore batteries-resuscitate) the first-generation Mavic Air I’d bought back in mid-October 2021:

Why? Here’s a reiteration of what I recently noted:

The Mavic Air was still holding its own feature set-wise, more than six years after its January 2018 introduction. It supports, for example, both front and rear collision avoidance and accompanying auto-navigation to dodge objects in its flight path (APAS, the Advanced Pilot Assistance System), along with a downward-directed camera to aid in takeoff and landing. And its 3-axis gimbal-augmented front camera shoots video at up to 4K resolution at a 30 fps frame rate with a 100 Mbps bitrate.

But there was also this…

Other recent government regulatory action, details of which I’ll save for a dedicated writeup another day, has compelled me to purchase additional hardware in order to continue legally flying the Mavic Air in a variety of locations, along with needing to officially register it with the FAA per its >249g weight.

That “another day” is today. But before diving into the Mavic Air-specific details, I’ll start out with a requirement that’s drone-generic. Effective June 2021, the FAA requires recreational drone pilots to pass a no-cost online certification called The Recreational UAS Safety Test (TRUST). The FAA has a list of partners that administer the test on its behalf; I took mine on the Boy Scouts of America website (Cub Scout and Webelos alumnus here, folks). It’s quite easy, not to mention informative, and you can take it an unlimited number of times until you pass. Upon successful completion, the partner site generates a certificate for you to print out (I also saved it as a PDF for future reference) and carry with you as proof wherever and whenever you fly.

What constitutes a “recreational” drone flyer? Glad you asked. The FAA website has a descriptive page on that topic, too, which first and foremost notes that you need to “fly only for recreational purposes (personal enjoyment).” However, there’s also this qualifier, for example:

Many people assume that a recreational flight simply means not flying for a business or being compensated. But, that’s not always the case. Compensation, or the lack of it, is not what determines if a flight was recreational or not. Before you fly your drone, you need to know which regulations apply to your flight.

Non-recreational drone flying includes things like taking photos to help sell a property or service, roof inspections, or taking pictures of a high school football game for the school’s website. Goodwill can also be considered non-recreational. This would include things like volunteering to use your drone to survey coastlines on behalf of a non-profit organization.

If at all in doubt as to how your flying intentions might be perceived by others (specifically the authorities), I encourage you to read the FAA documentation in detail. As it also notes, “if you’re not sure which rules apply to your flight, fly under Part 107.” Part 107 is the Small UAS (unmanned aircraft systems) Rule, where “small” refers to aircraft weighing less than 55 lbs. Commercial operator certification requires a more involved test, this time at an FAA-approved center at least the first time (renewals can be done online), which costs approximately $175. If you don’t pass, you need to wait at least two weeks before you try (and pay, unless you’ve also paid upfront for prep training at a center that will cover the retake) again.

Regardless of whether you fly recreationally or not, you also often (but not always) need to register your drone(s), at $5 per three-year timespan (per-drone for commercial operators, or as a lump sum for your entire drone fleet for recreational flyers). You’ll receive an ID number which you then need to print out and attach to the drone(s) in a visible location. And, as of mid-September 2023, each drone also needs to (again, often but not always) support broadcast of that ID for remote reception purposes, which is where the “electronic augmentation” phrase in this post’s title comes in.

DJI, for example, firmware-retrofitted many (but not all) of its existing drones with Remote ID broadcast capabilities, along with including Remote ID support in all (relevant; hold that thought for next time) new drones. Unfortunately, my first-generation Mavic Air wasn’t capable of a Remote ID retrofit, or maybe DJI just didn’t bother with it. Instead, I needed to add support myself via a separate Remote ID broadcast module, attached to the drone (often via an included Velcro strip).

When I first started researching Remote ID modules last year, in the lead-up to the mid-September 2023 rule going into effect, they cost well over a hundred dollars, especially for US-sourced offerings. The FAA subsequently delayed enforcement of the rule until mid-March of this year, and module prices have also dropped to below $50, notably courtesy of China-based suppliers’ market entry (trust me, the irony here is not lost on me). I’ve picked up two, from different companies, both with extended warranties (since embedded batteries don’t last forever, don’cha know) and functionally redundant (so I’m not grounded while I wait, if I need to send one in for repair or replacement). They’re from Holy Stone (on sale for $34.99 from Amazon at time of purchase), with dimensions of 1.54” x 1.18” x 0.51”/3.9 x 3 x 1.3 cm and a weight of 13.9 grams (plus Velcro, 14.2 grams total):

And Ruko (promotion-priced at $33.99 from Amazon at time of purchase), with dimensions of 1.3” x 1.1” x 0.5” and a standalone weight of 13.5g (0.48 oz):

I also got a second Holy Stone module (since this seems to be the more popular of the two options) for future-teardown purposes. And a third common, albeit less svelte, candidate comes from Potensic ($33.99 from Amazon as I write this), 3.7 cm x 3.1 cm x 1.6 cm in size and weighing “less than 20g (0.7 oz)”:

Setup video here.

Size and weight (since the module is additive to the drone itself), battery life, recharge time, broadcast distance and GPS accuracy are all factors (among others) that bear consideration when selecting among options. Also, you may have already noticed that all three suppliers mentioned are also drone manufacturers. DJI conversely doesn’t sell standalone Remote ID modules for retrofits of existing drones, but pragmatically, given its market segment share dominance, it’d probably prefer that you just buy a brand-new successor drone instead.

In closing, I’ll elaborate on my earlier repeated “often but not always” qualifier. As alluded to in my earlier Mavic Air battery teardown, drones weighing less than 250 grams (including battery, Remote ID module, etc.) are excluded from the FAA’s registration and Remote ID requirements. In an upcoming writeup, you’ll see how this “loophole” factored into my next-gen drone selection process. And regardless of the drone’s weight, you don’t need to register or Remote ID-enable it if it’s only being flown within the boundaries of a FAA-Recognized Identification Area (FRIA), several of which are within reasonable driving distance of my residence. Conversely, regardless of your registration and Remote ID status, keep in mind that specific municipalities may restrict your ability to fly in some or all locations.

By the way, the FAA DroneZone home page is a good starting point for resources on these and other drone-related topics. And on that note, if it wasn’t already obvious, the information I’ve obtained and am sharing here is United States-specific; other countries, for example, might not offer the sub-250 gram no-registration and/or recreational-flyer exemptions. If you’re not in the US, I strongly encourage you to do your own research based on whatever country you’re currently located in. And with that, I’ll sign off for now. Stay tuned for future posts in this series, and until then, sound off with your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


Why NoC tiling matters in AI-centric SoC designs

Tue, 10/15/2024 - 17:07

At a time when artificial intelligence (AI)-centric system-on-chips (SoCs) are growing in size and complexity, network-on-chip (NoC) tiling hand in hand with mesh topology can support faster development of compute chip designs.

That’s the premise around which Arteris has launched tiling as the next evolutionary step in its NoC IP offerings to facilitate scaling, condense design time, speed testing, and reduce design risk. The Campbell, California-based supplier of IPs is combining NoC tiling with mesh topologies for SoC designs catering to larger AI data volumes and complex algorithms.

Figure 1 Mesh topologies complement NoC tiling to further reduce the overall SoC connectivity execution time by up to 50% versus manually integrated, non-tiled designs. Source: Arteris

SiMa.ai, a developer of machine learning (ML) SoCs, has created an Arm-based, multi-modal, software-centric edge AI platform using this mesh-based NoC IP. The upstart’s AI chips run models ranging from CNNs to multi-modal GenAI and everything in between, with scalable performance per watt.

But before we delve into further details about this new NoC technology for SoC designs, below is a brief recap of what it’s all about and why it has been launched now.

What’s NoC tiling

NoC tiling allows SoC architects to create modular, scalable designs by replicating soft tiles across the chip. And each soft tile represents a self-contained functional unit, enabling faster integration, verification and optimization.

Without NoC tiling in a neural processing unit, each network interface unit (NIU) and transport element inside the NoC is unique, and it must be implemented separately and connected to its processing element individually. That increases complexity and configuration time for the designer, which impacts time to market and makes the verification effort a lot trickier.

Figure 2 NoC tiling organizes NIUs into modular, repeatable blocks to improve scalability, efficiency, and reliability in SoC designs. Source: Arteris

The tiling technique is designed to repeat modular units automatically, eliminating the need to break the design and configure each element. In other words, it divides the design into modular, repeatable units called “tiles”, enabling significant scalability, power efficiency, reduced latency, and faster development without redesigning the entire NoC architecture.

Take the example of a coherent mesh NoC with tiled CPU clusters, each containing up to 32 CPUs (Figure 3). A 5×5 mesh configuration allows 16 CPU clusters access to maximum memory bandwidth. The remaining mesh sockets are used for caches and service networks.

Figure 3 By supporting NoC tiling, mesh interconnect topologies become a common building block in AI-centric SoC designs. Source: Arteris

Mesh topology complements NoC tiling by providing an effective underlying communication infrastructure for regular processing elements. Each AI accelerator is connected to the NoC mesh, allowing seamless data exchange and collaboration in the vision processing workflow.

Otherwise, without NoC tiling, every NIU and transport element is unique and implemented separately, requiring a manual configuration step despite the same processing element in each case. And, with NoC tiling, effort to implement the NIUs—the most logically intense elements in the NoC—is drastically reduced.

Below is a sneak peek at three specific design premises accelerating AI- and ML-based semiconductor designs.

  1. Scalable performance

The number of processing elements often scales non-linearly; it scales linearly at first, until memory bottlenecks are reached. Here, NoC tiling allows designers to define one processing element and its connection point and then scale that arbitrarily until the workload is met, without any redesign effort.

As a result, NoC tiling supported by mesh topology enables AI-centric SoCs to easily scale by 10x+ without changing the basic design. “It enables repeating modular units within the same chip, and that allows architects to easily create scalable and modular designs, enabling faster innovation and more reliable, power-efficient AI chip development,” said Andy Nightingale, VP of product management and marketing at Arteris.

  2. Power reduction

Another advantage is that NoC tiling allows easy partitioning for power reduction, so power management connectivity is replicated from within each individual tile. “Tiling connects into power-saving technology and replicates all that automatically,” Nightingale added.

NoC tiles use dynamic frequency scaling and can be turned off dynamically, cutting power by 20% on average, which is vital for energy-efficient and sustainable AI applications. Here, NoC tile boundaries interface into existing NoC clock and voltage domains as needed, so groups of NoC tiles can be turned off when not needed.

  3. Dynamic reuse

The pre-tested NoC tiles can be reused, cutting the SoC integration time by up to 50% and thus shortening the time to market for AI chips. This pre-configured and pre-verified interconnect feature addresses the growing demand for faster and more frequent innovation cycles in AI chips.

NoC tiling: Why now?

When asked why NoC tiling has arrived now, Nightingale told EDN that while the complexity of AI chips is going up, there are still the same number of chip designers. “Anything we can do to increase the automation and decrease the design risk, especially when you have massively parallel processing in AI chips like TPUs,” he said.

He added that when you delve into the SoC design details, they are embarrassingly parallel with repeat, repeat, and repeat, and that leads to very regular structures. “So, when AI comes along and puts requirements in hardware, chip designer has the choice of working on each individual processing element or taking advantage of technology and say connect everything for me.”

Figure 4 NoC tiling enables a chip designer to define a modular unit just once and then repeat it multiple times within the same SoC design. Source: Arteris

Nightingale concluded by saying that SoC designers have been asking for this feature for a long time. “While other NoC suppliers have tiling on the check box, Arteris is first to bring this stuff out.”


Combating noise and interference in oscilloscopes and digitizers

Tue, 10/15/2024 - 16:31

Even in the best designs, noise and interference sneak in to reduce the signal-to-noise ratio (SNR), obscure desired signals, and impair measurement accuracy and repeatability. Digitizing instruments like oscilloscopes and digitizers incorporate many features to characterize, measure, and reduce the effects of noise on measurements.

 Interfering signals

Every measurement includes the signal of interest and a collection of unwanted signals such as noise, interference, and distortion. Noise and interference are generally unrelated to the signal being measured. Distortion is an interfering signal or signals related to the signal of interest, such as harmonics. 

Noise is a random signal that is described by its statistical characteristics. Interference includes signals that are coupled into the measurement system by processes like crosstalk. Interfering signals are usually periodic in nature. Figure 1 shows an example of an interfering signal containing random and periodic components and some tools for characterizing the signal. The oscilloscope is triggered on the periodic element.

Figure 1 An example of an interfering signal with random and periodic elements. Source: Arthur Pini

 The interfering signal contains both random and periodic components. The periodic component consists of 10 MHz “spikes”. The frequency at level (freq@lvl) measurement parameter (P4 beneath the display grid) reads the frequency of the spikes at approximately 70% of the signal amplitude to avoid noise peaks. Additionally, the mean, peak-to-peak, and rms levels are measured. Digitizing instruments, including oscilloscopes and digitizers, have a variety of tools to measure the characteristics of noise signals like this. They also offer a range of analysis tools to reduce the effects of these unwanted signal elements.

Instrument noise

All digitizing instruments also add noise to the measurement. Generally, instruments are selected where the noise is much lower in level and does not affect the measurement. Based on the measurement application, oscilloscopes with 8-bit or 12-bit resolution and digitizers with 8-bit to 16-bit or higher amplitude resolution can be selected to keep instrument noise within reasonable bounds.

Differential connections

When reducing noise and interfering signals, the digitizing instrument’s input is the place to start, and differential connections that reject common-mode signals are a good first step. Many digitizers and a few oscilloscopes have differential inputs, while oscilloscopes commonly offer differential probes to connect the device under test (DUT) to the instrument.

Differential signaling transmits a signal using two wires driven by complementary signals. Noise and interference common to both conductors (common mode signals) are removed when the voltage difference between the two lines is calculated. The common mode rejection ratio (CMRR) measures the extent to which common mode noise is suppressed. Note that a differential signal also does not require a ground return; in some cases, this further helps minimize the pickup of interfering signals. An example of differential signaling is the controller area network, or CANbus, shown in Figure 2.

Figure 2 The two differential components of the CANbus (left side) and the resultant difference showing a reduction in common mode noise. Source: Arthur Pini

The two CANbus signal components are complementary, and when one is subtracted from the other, the common mode signals, like noise and interference, cancel. Note that the difference between the two components is a voltage swing twice that of the individual signals, providing a 6 dB improvement in SNR.

The differencing operation, either in a differential probe or a difference amplifier, reduces the noise common to both lines, allowing longer cable runs. In addition to CANbus, differential signaling is common in RS-422, RS-485, Ethernet over twisted pair, and other serial data communications links.

Common mode noise and interference can be further reduced in differential signals by using twisted pairs or coaxial transmission lines which provide additional shielding from the source of the interference.
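A minimal numerical sketch of the idea, using synthetic waveforms rather than real CANbus captures: the same hum and noise are added to both complementary lines, and the differencing operation removes them while doubling the wanted swing.

import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1e-3, 10_000)

signal = 0.5 * np.sign(np.sin(2 * np.pi * 5e3 * t))    # wanted 5 kHz square wave, +/-0.5 V
common = 0.2 * np.sin(2 * np.pi * 50 * t) + rng.normal(0, 0.05, t.size)  # hum + noise on both lines

line_p = +signal + common   # non-inverted line
line_n = -signal + common   # complementary (inverted) line

diff = line_p - line_n      # differential receiver output: the common-mode term cancels exactly
print("single-ended swing:", np.ptp(signal), "V   differential swing:", np.ptp(diff), "V")

The doubled swing in the printed result corresponds to the 6 dB SNR improvement mentioned above.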

Digitizing instrument tools to reduce noise and interference

Oscilloscopes and digitizers can perform a variety of measurements and analyses on the interfering signal. Averaging will reduce the amplitude of the random component, and background subtraction can remove the periodic component from the waveform. Figure 3 shows an analysis of the interfering signal shown in Figure 1 using these tools.

Figure 3 Using averaging and background subtraction to separate an interfering signal’s random and periodic elements. Source: Arthur Pini

 The interfering signal appears in the upper left grid. To the immediate right is the Fast Fourier Transform (FFT) of the interfering signal. The vertical spectral lines are related to the periodic component. The periodic narrow pulse train has a fundamental component of 10 MHz, which repeats at all the odd harmonic frequencies at a near-constant amplitude. The random element, which is spectrally flat and has equal energy at all frequencies, appears as the baseline of the FFT spectrum. The top right grid holds the histogram of the interfering signal. The random component dominates the histogram, which appears to have a bell-shaped normal distribution.

Averaging the interfering signal will reduce the random noise component. If the noise component has a Gaussian or normal distribution, the noise amplitude will be reduced by a factor equal to the square root of the number of averages. The average waveform appears in the center-left grid; note the absence of the random component on the baseline. The FFT of the average waveform is in the center grid, second down from the top. Note that the amplitude of the spectral lines is still the same but that the baseline is down to about -80 dBm. The histogram has a much smaller bell-shaped response due to the noise reduction. The range measurement of the histogram reads the amplitude from the maximum peak amplitude to the minimum valley amplitude, i.e., the peak-to-peak amplitude.
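The square-root relationship is easy to verify numerically. The toy example below averages repeated triggered acquisitions of a repetitive waveform buried in Gaussian noise; the residual noise falls as 1/√N.

import numpy as np

rng = np.random.default_rng(2)
n_sweeps, n_points = 256, 2000

clean = np.sin(2 * np.pi * np.arange(n_points) / 200)        # repetitive, trigger-synchronous signal
sweeps = clean + rng.normal(0, 0.5, (n_sweeps, n_points))     # every acquisition adds fresh noise

for n in (1, 16, 256):
    residual = sweeps[:n].mean(axis=0) - clean
    print(f"N = {n:3d}   residual rms = {residual.std():.4f}   expected ~ {0.5 / np.sqrt(n):.4f}")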

Subtracting the averaged background waveform from the interfering waveform as it is acquired will remove most of the periodic waveform. This process is called background subtraction. It works where the background signal is stable, and the oscilloscope can be triggered from it. The resulting waveform appears in the bottom grid on the left. The FFT of this signal is in the bottom center grid. Note that its spectrum is mostly a flat baseline with an amplitude of about -68 dBm, the same level as the baseline in the original FFT. There are some small spectral lines at the harmonic frequencies of the 10 MHz periodic signal that were not canceled by the subtraction operation. They are less than ten percent of the original harmonic amplitude. The histogram of the separated random component has a Gaussian shape. Its range is lower than that of the original histogram due to the absence of the periodic component.

Using background subtraction with a real signal requires that the background is captured and averaged before the signal is applied. The averaged background is then subtracted from the acquired signal. 
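As a rough illustration of that workflow (synthetic data, with arbitrarily chosen values): average the trigger-synchronous background first, then subtract it from a live acquisition that contains the wanted signal.

import numpy as np

rng = np.random.default_rng(3)
n = 5000

spikes = (np.arange(n) % 100 == 0).astype(float)   # periodic interference, synchronous with the trigger
noise = lambda: rng.normal(0, 0.1, n)

# Step 1: with the signal absent, average many triggered sweeps to capture the stable background
background = np.mean([spikes + noise() for _ in range(200)], axis=0)

# Step 2: subtract the averaged background from an acquisition containing the wanted signal
wanted = 0.5 * np.sin(2 * np.pi * np.arange(n) / 500)
acquired = wanted + spikes + noise()
recovered = acquired - background

# The unit-amplitude spikes are essentially gone; only the random noise of the live sweep remains
print("rms error vs. wanted signal:", np.round((recovered - wanted).std(), 3))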

Cleaning up a real signal

Let’s examine reducing noise and interference from an acquired signal. The signal of interest is a 100 kHz square wave, as shown in the top left grid of Figure 4.

Figure 4 Reducing noise and interference from a 100 kHz square wave using averaging and filtering. Source: Arthur Pini

The interference waveform that we have been studying has been added to a 100 kHz square wave. The oscilloscope is triggered on the 100 kHz square wave. The FFT appears in the upper right grid. The frequency spectrum consists of the square wave spectrum, with a spectral line at 100 kHz repeated at all its odd harmonics, their amplitudes decreasing with frequency. The 10 MHz interfering signal contributes spectral lines at 10 MHz and all its odd harmonics, which have a uniform amplitude across the whole span of the FFT. The random component raises the FFT baseline to about -70 dBm.

Averaging the waveform (second grid down on the left) removes the random component but not the periodic one. The FFT of the average signal (second down on the right) shows the 100 kHz and 10 MHz components as before, but due to the reduction in the random component, the baseline of the FFT is down to about -90 dBm. Averaging does not affect the periodic component because it is synchronous with the oscilloscope trigger.

Filtering can reduce noise and interference levels. This oscilloscope includes 20 MHz and 200 MHz analog filters in the input signal path. It also includes six finite impulse response lowpass digital filters known as enhanced resolution (ERES) noise filters. The third grid down on the left shows the signal filtered using an ERES filter. This is a lowpass filter with a -3 dB cutoff frequency of 16 MHz. The signal appears to be quite clean. The effects of the filter can be seen in the FFT of the filtered signal to the right. The low-pass filter suppresses spectral components above 16 MHz. While this works, you must be careful: low-pass filtering also suppresses the harmonics of the desired signal and can affect measurements like those for transition times.

The six bandwidths available with the ERES noise filter vary with the instrument sample rate, limiting their usefulness. This oscilloscope also has an optional digital filter package that provides a greater range of filter types and cutoff characteristics, permitting the optimization of noise and interference reduction.
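As a rough software analogue of this kind of lowpass filtering (the sample rate, cutoff, and amplitudes below are assumptions, not the instrument’s actual ERES parameters), an FIR filter removes the 10 MHz interferer but, as noted above, also strips the square wave’s harmonics above the cutoff and slows its edges:

import numpy as np
from scipy import signal

fs = 500e6                                    # assumed sample rate, Hz
t = np.arange(50_000) / fs

square = np.sign(np.sin(2 * np.pi * 100e3 * t))              # wanted 100 kHz square wave
interferer = 0.3 * np.sign(np.sin(2 * np.pi * 10e6 * t))     # 10 MHz periodic interference
noisy = square + interferer + np.random.default_rng(4).normal(0, 0.2, t.size)

taps = signal.firwin(101, 2e6, fs=fs)          # FIR lowpass, cutoff well below the 10 MHz interferer
filtered = signal.filtfilt(taps, 1.0, noisy)   # zero-phase filtering to avoid shifting the edges

print("rms error vs. clean square: before =", round(np.std(noisy - square), 3),
      " after =", round(np.std(filtered - square), 3))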

By background subtracting the filtered waveform from the acquired waveform, we can see what was removed by the filter (bottom left grid). The FFT (bottom right grid) shows the missing 10 MHz and 100 kHz harmonics.

Minimizing the effects of noise with digitizing instruments

The key techniques for minimizing the effects of noise in measurements with digitizing instruments include differential acquisitions, averaging to reduce broadband noise, background subtraction, and filtering to reduce both noise and periodic signal interference.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.


Silicon Labs on wireless, compute, and security

Mon, 10/14/2024 - 19:33
Keynote on wireless, compute, and security aspects of IoT

Silicon Labs had a strong presence at embedded world North America with a keynote tag-teamed by CEO Matt Johnson and CTO Daniel Cooley, covering the future of embedded devices with a focus on driving innovation in wireless integration, security, and AI (Figure 1). The fabless company has long emphasized the wireless aspect of its hardware, as a leading supplier for all major IoT ecosystems with many multi-protocol SoCs that are thoroughly tested for wireless coexistence before deployment. During the talk, Cooley expanded on some of the direct benefits of wireless devices beyond the established sensing/control applications.

Figure 1: Daniel Cooley showcasing the new Series 3 SoC from Silicon Labs at the embedded world North America keynote.

Wireless

Wireless devices are now being used to configure the product as it moves along the production line. “This isn’t just built-in self-test or secure key injection,” says Cooley, “they actually configure the different wireless protocols as it goes through production to program the model at the end of the line or an OTA firmware update when it’s installed in the field.” He stressed that these updates were not a simple OTA update, “this is changing the fundamental properties of the product itself, many legacy Zigbee installations are going to flip and switch and convert straight into Thread.” Wireless enablement can also allow for remote diagnostic capability: “if the product out in the field is failing, you can quickly figure out what is going on in a non-destructive way without a USB or Ethernet connection.”

Compute

Embedded applications are seeing massive boosts in compute capability to keep up with the growing demands of edge computing. “Computing has got to keep up in every way, its raw MIPs, CoreMark; however you want to measure it, it’s the memory and peripheral access.” More cores are being integrated into SoCs for more processing power and the necessary hardware acceleration, as well as GPIO to connect to more varied peripherals. Cooley stressed the importance of adopting real-time operating systems (RTOS): “You can’t scale IoT applications on bare metal, you’ll certainly have a tough time connecting into the cloud applications since it’s generally not built for bare metal to OS. It’s really got to be OS-to-OS.”

Security

The security aspect of IoT was also stressed: companies need to keep up with evolving security standards as well as new legislation, with security at the transistor level, security patching in the field, firmware updates, and more. “Earlier this year, the FCC approved the US Cyber Trust Mark for consumer IoT. The Mark’s framework was developed by the CSA in close collaboration with many IoT designers and suppliers to create a living label on consumer IoT devices to give customers the confidence that their device is secure from the latest cybersecurity threats.” This is one aspect of many new cybersecurity regulations that have gone into effect in recent years, placing unprecedented compliance demands on organizations. In Europe these include the Radio Equipment Directive (RED) and the Directive on Network and Information Security (NIS); in Singapore, the Cybersecurity Labelling Scheme (CLS); and in the UK, the Product Security and Telecommunications Infrastructure Act (PSTI). “In August, NIST finalized its first set of quantum encryption standards. And while the average cybercriminal won’t have a quantum computer at their disposal, malicious state actors will, and they could use this incredible capability to cause massive disruptions for our space, our industry, and countries as a whole,” says Cooley, “by actively engaging in these efforts and aligning them with our goals, we can drive the innovation faster, build trust, increase security, and ensure a stronger and more connected future.”

Aalyia Shaukat, associate editor at EDN, has worked in the engineering publishing industry for nearly a decade with published works in EE journals and other trade publications. She holds a BSEE from Rochester Institute of Technology. 


The next GaN design frontier: EMI control

Mon, 10/14/2024 - 16:15

A new gallium nitride (GaN) power IC claims to further simplify and speed the development of small form factor, high-power-density applications by offering greater integration and thermal performance. Besides the integration of drive, control and protection, it also incorporates EMI control and loss-less current sensing, all within a high-thermal-performance proprietary DPAK-4L package.

Navitas Semiconductor has unveiled this device, GaNSlim, following the release of its GaNFast and GaNSense devices. The Torrance, California-based supplier is targeting GaNSlim devices at chargers for mobile devices and laptops, TV power supplies, and LED lighting.

Figure 1 GaNSlim is the company’s third-generation device with autonomous EMI control and loss-less sensing. Source: Navitas

“Our GaN focus is on integrated devices that enable high-efficiency, high-performance power conversion with the simplest designs and the shortest possible time-to-market,” said Reyn Zhan, senior manager of technical marketing at Navitas. The GaNSlim devices are rated at 700 V with RDS(ON) ratings from 120 mΩ to 330 mΩ.

Evolution of a GaN device

In an interview with EDN, Llew Vaughan-Edmunds, senior director of product management and marketing at Navitas, chronicled the company’s GaN technology journey. In the late 2010s, when most GaN suppliers were offering discrete devices, Navitas differentiated by integrating drivers, control and protection features alongside discrete GaN.

“The problem is that GaN switch is very fast, so while you can use it to your benefit, when the gate starts to switch that fast, you inevitably see spikes,” said Edmunds. “At the same time, the gate is very sensitive, so you must regulate gate voltage as much as possible.” Otherwise, if a 5-V rated gate sees 7 V, it’s dangerous.

The GaNFast device was created by integrating a gate driver, and it significantly took off in travel adapters. Nearly three years later, in 2021, Navitas realized what OEMs and ODMs wanted. “They wanted sensing and over-temperature protection, and that’s when we released GaNSense,” Edmunds told EDN.

“Now, after several years of launch, we understand what the next requirements are, and this became GaNSlim,” Edmunds added. “Power design engineers want to reduce the heat and temperature, and they want a bigger, thermally enhanced package with the pitch between legs widened.”

Figure 2 GaNSlim, an upgrade to the GaNSense design, incorporates EMI control and loss-less current sensing alongside the gate driver and various protection features. Source: Navitas

Anatomy of GaNSlim

Moreover, as Edmunds noted, power design engineers wanted Navitas to integrate the EMI function into the switch. “What happens with the travel adapters is that many EMI issues have to be worked around because the switch is so fast.”

Figure 3 GaNSlim design comprises three basic building blocks: FET switch, gate driver IC, and thermally enhanced DPAK package. Source: Navitas

There are three basic building blocks of a GaNSlim device. First, the GaNSense Power FET is the GaN switch, a fast one, which enables loss-less current sensing. That, in turn, eliminates the need for external current sensing resistors and optimizes system efficiency and reliability.

Second, the GaNSlim power IC integrates the gate driver and bolsters loss-less current sensing with programmable features. “We have loss-less sensing, meaning we do the sensing inside the IC, bringing a half-percent efficiency benefit,” Edmunds added.

It also incorporates over-temperature protection to ensure system robustness, and its auto sleep-mode increases light and no-load efficiency. Then there is autonomous turn-on/off slew rate control, which maximizes efficiency and power density while reducing external component count.

Third, the 4-pin, 6.6 x 9.6 mm DPAK package facilitates 7°C lower temperature operation versus conventional alternatives while supporting high-power-density designs with ratings up to 500 W.

GaN integration a differentiator

When summarizing the GaNSlim design, Edmunds said that Navitas took the 10-pin GaNSense I/O system and reduced it to three to four I/Os. “We have integrated EMI control inside the switch and made it intelligent, removing a few components and thus lowering the system cost.” That’s how Navitas made GaNSlim simpler and easier to use.

Edmunds added that engineers don’t have to worry about EMI, different I/Os, and how to control them with a micro because that’s all set up. He is also confident that with these integration capabilities and regulated EMI, Navitas is ahead of competition by three to four years.


To press on or hold off? This does both.

Mon, 10/14/2024 - 16:06

Let’s imagine that you need to add a power switch to something that’s battery-powered but processor-free; perhaps it must also be waterproof and thus membrane-sealed. Or perhaps you just want to use a shiny modern push-button rather than a toggle/rocker/slide thingy, which may be cheap and reliable, but would look so last millennium.


Latching bi-stable device

This design idea (DI) shows how to transform a basic momentary push or tact(ile) switch into a latching bi-stable device. It’s shown in Figure 1.

Figure 1 Two transistors form a power-switching latch, which can be set (power on) by a short button-press and then reset (power off) by a longer one.

Q1 and Q2 are cross-coupled to form a latch, Q1 being the actual power switch which is controlled by Q2. Initially, both are off. Pressing Sw1 briefly injects a pulse through C1 into Q2’s gate which turns it on, thus also turning Q1 on to deliver power to both the downstream circuitry and Q2, latching both transistors on.

Holding the button down for around a second allows C2 to charge up through R4 until Q3 starts to conduct, thus shorting the drive to Q2’s gate and breaking the feedback loop, so that Q1 and Q2 both turn off. Opening the switch lets C2 discharge through D1 and R5, ready for the next cycle. When off, the circuit draws only leakage current.

Some components are marked TBD, because while the circuit as a whole can work with supplies anywhere from 3 to 20 V (or more, if Q1 is suitably rated), individual parts or functions may not. Typical values are:

Supply   R2     R4
3 V      0R     100k
6 V      0R     330k
12 V     100k   680k
20 V     300k   1M0

R2 ensures that Q1’s gate-source voltage is enough to turn it on fully without causing its gate-protection diodes to conduct. R4 keeps the “hold-for-off” time close to a second. Other points to watch include Q1 itself. The IRLML6402 has a 20-V drain-source rating, an on-resistance of 50–100 mΩ under our conditions, and a gate-source breakdown of 12 V. It only needs 1.2 V to turn it fully on, when it will easily handle an amp or two.
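The hold-for-off delay is simply an RC charge to Q3’s gate threshold, t = -R4·C2·ln(1 - Vth/Vsupply), assuming C2 charges through R4 toward the supply rail. C2’s value and Q3’s exact threshold are not given in the text, so the numbers below are placeholders, chosen only to show why R4 must scale with the supply to keep the delay near a second.

import math

def hold_off_time(r4_ohms, c2_farads, v_supply, v_threshold):
    # Time for C2, charging through R4 toward v_supply, to reach Q3's turn-on threshold
    return -r4_ohms * c2_farads * math.log(1 - v_threshold / v_supply)

# Assumed C2 = 10 uF and a ~2 V gate threshold for Q3 (placeholders, not the article's values)
print(round(hold_off_time(100e3, 10e-6, 3.0, 2.0), 2), "s at 3 V with R4 = 100k")
print(round(hold_off_time(680e3, 10e-6, 12.0, 2.0), 2), "s at 12 V with R4 = 680k")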

Q2 and Q3 are not critical, though proper logic-level devices might be better than the ZVN3306As. If Sw1 is pressed while the circuit is on, C1 will still deliver a spike to Q2’s gate, briefly driving that to twice the supply voltage. This should be clamped by Q2’s input protection diodes, but if you don’t trust that, fit a catch diode from the bottom of C1 back up to the input rail. Those same protection diodes may also conduct with high supply voltages, the current being limited by R3.

If the switch button becomes jammed down for any reason, the circuit will stay off, though R5 will still draw some current.

Automatic turn-off

As it stands, all this works well with loads from nothing up to that amp or two and with load capacitances up to at least 100 µF. But it might be useful to add something to turn the power off automatically several minutes after the latest button-press, and Figure 2 shows how to do that.

Figure 2 Adding an oscillator/counter can turn the circuit off automatically after a suitable delay.

This adds a CD4060B oscillator/counter to the mix. It’s powered from the output, and oscillates at about 13.7 Hz—at least, my sample did—while the circuit is on. After about 10 minutes, its count reaches 8192 and Q14 goes high, charging C2 through D2 to turn Q3 on, and Q2 and Q1 off. Any extra presses of Sw1 reset it, restarting the timing cycle. The CD4060B is a 3-to-18-V part, which is why the voltage rating of the Figure 2 circuit is lower. (Data sheets claim 20 V is survivable, but I lost one at 19 V while experimenting. Beware! And that explains R8, added to avoid any spikes taking out the reset pin, which is what happened.) Because the load capacitance needs to discharge adequately to avoid the circuit restarting, it should now be no greater than about 10 µF, at least with light loads. I couldn’t find a simple (meaning cheap and reliable) way of draining or even crowbar-ing it at switch-off: I thought that should be easy; it wasn’t.
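The timeout arithmetic checks out: Q14 of a CD4060 goes high after 2^13 oscillator cycles, so at the measured 13.7 Hz the delay works out to roughly ten minutes.

f_osc = 13.7                 # measured oscillator frequency, Hz
cycles_to_q14 = 2 ** 13      # Q14 goes high after 8192 clock cycles
print(cycles_to_q14 / f_osc / 60, "minutes")   # ~10 minutes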

Using counters and logic to control everything would be nice, but even more elaborate unless a microcontroller were handling things. Such an approach would need far less hardware and have many opportunities for extra, interestingly-coded features—but wouldn’t it be cheating?

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.


One-stop advanced packaging solutions for chiplets

Fri, 10/11/2024 - 16:13

The chiplet design movement is gathering steam, and the availability of one-stop advanced packaging solutions is a testament to this semiconductor technology’s advancement toward mass production. Such coordinated solutions for advanced packaging are crucial in the vertically disintegrated world of chiplets.

Take the case of Faraday Technology Corp., an ASIC design service and IP provider now eyeing advanced packaging-coordinated platforms for the vertical disintegration of chiplets. Such platforms streamline the advanced packaging processes by integrating multiple vendors and multi-source chiplets while providing three core services: design, packaging, and production.

In this role, Faraday aims to coordinate the vertically disintegrated vendors of chiplet, high-bandwidth memory (HBM), interposer, and 2.5D/3D packaging while offering chiplets design, testing analysis, production planning, outsourcing procurement, inventory management, and 2.5D/3D advanced packaging services.

On its part, Faraday designs and implements major chiplets, including I/O dies, SoC/compute dies, and interposers. Next, to ensure seamless integration of multi-source dies, Faraday has inked strategic partnerships with fabs and OSATs to support passive/active interposer manufacturing with through-silicon via (TSV) and thus effectively manage 2.5D/3D packaging logistics.

Figure 1 Faraday has introduced an advanced packaging coordinated platform for the vertical disintegration of chiplets.

Faraday’s partners include fabs such as Intel Foundry, Samsung Foundry and UMC as well as several OSATs. These partners help Faraday ensure capacity, yield, quality, reliability, and schedule in production for chiplets with multi-source dies.

One-stop chiplet solutions

Faraday has unveiled a 2.5D packaging platform jointly developed with Kiwimoore, a Shanghai, China-based interconnect solutions provider. The advanced packaging platform—which has successfully entered the mass production stage—incorporates Kiwimoore’s chiplet interconnect and network domain-specific accelerator (NDSA) solutions.

Besides NDSA, Kiwimoore provided various chiplets, including a 3D general-purpose base die and high-speed I/O die. On the other hand, Faraday integrated multi-source chiplets from different semiconductor manufacturers, encompassing compute dies, HBM design, and production.

Figure 2 Kiwimoore provided a general-purpose base die and a high-speed I/O die for this chiplet project.

The two companies collaborated on chiplet SoC/interposer design integration, testing and analysis, outsourced procurement, and production planning services. Mochen Tien, CEO of Kiwimoore, acknowledged that Faraday’s supply chain capabilities ensured stable supply of critical components like interposers and HBM memory.

Such system-level product design integration services allow chipmakers to focus on core die development, shortening design cycles and reducing R&D costs. “Through our close collaboration, we have successfully simplified the chiplet design and packaging processes and quickly integrated chiplets from different suppliers,” said Flash Lin, COO of Faraday.

The emergence of such one-stop solutions with flexible services and business models complements chiplets’ system-level design as well as the broader ecosystem encompassing multi-source chiplets, packaging, and manufacturing. It also reveals the larger technology blueprint in the commercial realizations of chiplet design and packaging.

Related Content


The post One-stop advanced packaging solutions for chiplets appeared first on EDN.

PWM controllers drive FETs and IGBTs

Thu, 10/10/2024 - 23:50

Optimized for AC/DC power supplies in industrial applications, Rohm’s PWM controller ICs support a wide range of power semiconductors. Mass production has begun for four variants: the BD28C55FJ-LB for low-voltage MOSFETs; BD28C54FJ-LB for medium- to high-voltage MOSFETs; BD28C57LFJ-LB for IGBTs; and BD28C57HFJ-LB for SiC MOSFETs.

The parts come in standard SOP-J8 packages (equivalent to JEDEC SOIC8), offering pin-to-pin compatibility with commonly used power supply components to minimize redesign and modification efforts. Each variant includes a self-recovery undervoltage lockout function with voltage hysteresis. According to Rohm, this improves application reliability by reducing threshold voltage error to ±5%, compared to the typical ±10% of standard products.

With an input voltage range of 6.9 V to 28.0 V, the PWM controllers provide a circuit current of 2.0 mA, a maximum startup current of 75 µA, and a maximum duty cycle of 50%. The lineup will be expanded to include products for driving GaN devices and variants that support a maximum duty cycle of 100%.

The BD28C55FJ-LB, BD28C54FJ-LB, BD28C57LFJ-LB, and BD28C57HFJ-LB are now available through Rohm’s authorized distributors. 

Rohm Semiconductor 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post PWM controllers drive FETs and IGBTs appeared first on EDN.

System performs one-pass wafer test up to 3 kV

Thu, 10/10/2024 - 23:50

Keysight’s 4881HV wafer test system enables parametric tests up to 3 kV, accommodating both high and low voltage in a single pass. Its high-voltage switching matrix facilitates this one-pass operation, boosting productivity and efficiency.

The system’s switching matrix scales up to 29 pins and integrates with precision source measure units, allowing flexible measurements from low current down to sub-pA resolution at up to 3 kV on any pin. High-voltage capacitance measurements with up to 1-kV DC bias are also possible. This switching matrix enables a single 4881HV to replace separate high-voltage and low-voltage test systems, increasing efficiency while reducing the required footprint and testing time.

Power semiconductor manufacturers can use the 4881HV to perform process control monitoring and wafer acceptance testing up to 3 kV, meeting the future requirements of automotive and other advanced applications. To safeguard operators and equipment during tests, the system features built-in protection circuitry and machine control, ensuring they are not affected by high-voltage surges. Additionally, it complies with safety regulations, including SEMI S2 standards.

To request a price quote for the 4881HV test system, click the product page link below.

4881HV product page

Keysight Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post System performs one-pass wafer test up to 3 kV appeared first on EDN.

Sink controllers ease shift to USB-C PD

Thu, 10/10/2024 - 23:50

Diodes’ AP33771C and AP33772S sink controllers enable designers to transition from proprietary charging ports, legacy USB ports, and barrel-jack ports to a standard USB Type-C PD 3.1 port. These controllers can be embedded into battery-powered devices and other types of equipment using a USB Type-C socket as a power source.

Both ICs manage DC power requests for devices with USB Type-C connectors, supporting the PD 3.1 extended power range (EPR) of up to 140 W and adjustable voltage supply (AVS) of up to 28 V. The AP33771C provides multiple power profiles for systems without an MCU, featuring eight resistor-settable output voltage levels and eight output current options. In contrast, the AP33772S uses an I2C communications interface for systems equipped with a host MCU.

The sink controllers’ built-in firmware supports LED light indication, cable voltage-drop compensation, and moisture detection. It also offers safety protection schemes for overvoltage, undervoltage, overcurrent, and overtemperature. No programming is required to activate the firmware in the AP33771C, while designers have the option of using I2C commands to configure the AP33772S.

Housed in a 3×3-mm, 14-pin DFN package, the AP33771C is priced at $0.79 each in 1000-unit quantities. The AP33772S, in a 4×4-mm, 24-pin QFN package, costs $0.84 each in like quantities.

AP33771C product page 

AP33772S product page 

Diodes

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Sink controllers ease shift to USB-C PD appeared first on EDN.

Platform advances 800G Ethernet AN/LT validation

Thu, 10/10/2024 - 23:50

Teledyne LeCroy has announced an integrated platform for validating the limits of auto-negotiation and link training (AN/LT) in 800-Gbps Ethernet. As an extension to the existing LinkExpert software, the new functionality leverages the Xena Z800 Freya Ethernet traffic generator and the SierraNet M1288 protocol analyzer to test Ethernet’s AN/LT specifications.

The fully automated test platform simplifies interoperability testing across various Ethernet switches and network interface cards. It tests each equalizer tap to its maximum limit to verify protocol compliance and ensure links re-establish if limits are exceeded. The system also verifies the stability of the SerDes interface and validates that all speeds can be negotiated to support backward compatibility.

The SierraNet M1288 protocol analyzer provides full stack capture, deep packet inspection, and analysis for links up to 800 Gbps. It also offers jamming capabilities for direct error injection on the link at wire speed. The Xena Z800 Freya Ethernet traffic generator can test up to 800G Ethernet using PAM4 112G SerDes, achieving the best possible signal integrity and bit error rate performance.

LinkExpert with the AN/LT test functionality is now shipping as part of the SierraNet Net Protocol Suite software.

SierraNet M1288 product page 

Xena Z800 Freya product page 

Teledyne LeCroy 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Platform advances 800G Ethernet AN/LT validation appeared first on EDN.

Tulip antenna delivers 360° stability for UWB

Thu, 10/10/2024 - 23:50

A tulip-shaped antenna from Kyocera AVX is designed for ultra-wideband (UWB) applications, covering a frequency range of 6.0 GHz to 8.5 GHz. The surface-mount antenna is manufactured using laser direct structuring (LDS) technology, which enables a 3D pattern design. LDS allows the antenna to operate both on-board and on-ground, offering an omnidirectional radiation pattern with consistent 360° phase stability.

The antenna’s enhanced phase stability, constant group delay, and linear polarization are crucial for signal reconstruction, improving the accuracy of low-energy, short-range, and high-bandwidth UWB systems. It can be placed anywhere on a PCB, including the middle of the board and over metal. This design flexibility surpasses that of off-ground antennas, which require ground clearance and are typically positioned along the perimeters of PCBs.

The tulip antenna is 6.40×6.40×5.58 mm and weighs less than 0.1 g. It is compatible with SMT pick-and-place assembly equipment and complies with RoHS and REACH regulations. When installed on a 40×40-mm PCB, the antenna typically exhibits a maximum group delay of 2 ns, a peak gain of 4.3 dBi, CW power handling of 2 W, and an average efficiency of 61%. 

Designated P/N 9002305L0-L01K, the tulip antenna is produced in South Korea and is now available through distributors Mouser and DigiKey.

9002305L0-L01K antenna product page

Kyocera AVX 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Tulip antenna delivers 360° stability for UWB appeared first on EDN.

Analog Devices’ approach to heterogeneous debug

Thu, 10/10/2024 - 16:35
Creating a software-defined version of ADI

Embedded World this year had a clear focus on the massive growth in edge computing applications and the convergence of IoT, AI, security, and the underlying sensor technologies. A stop by the Analog Devices Inc. (ADI) booth reveals quite clearly that the company aims to address the challenges of heterogeneous debugging and security compliance in embedded systems. “For some companies, an intelligent edge refers to the network edge or the cloud edge; for us it means right down to sensing, taking raw data off sensors and converting them into insights,” says Jason Griffin, managing director of software engineering and security solutions at ADI. “So bridging the physical and digital worlds, that’s our sweet spot.” ADI aims to bolster embedded solutions and security software: “as the signal chain becomes more digital, we add on complex security layers.” As an established semiconductor leader, ADI finds that its foundational components now require more software enablement, “so as we move up the stack, moving closer to our customer’s application layer, we’re starting to add an awful lot more software, where our overall goal is to create a software-defined version of ADI and meet customers at their software interface.” The company is focusing its efforts on open-source development: “we’re open sourcing all our tools because we truly believe that software developers should own their own pipeline.”

Enter CodeFusion Studio

This is where CodeFusion Studio comes into play: the software development environment was built to help develop applications on all ADI digital technologies. “In the future, it will include analog too,” notes Jason. “There’s three main components to CodeFusion Studio: the software development kit that includes usage guides and reference guides to get up and running; the modern Visual Studio Code IDE, so customers can go to the Microsoft marketplace and download it to facilitate heterogeneous debug and application development; and a series of configuration and productivity tools where we will continue to expand CodeFusion Studio.” The initial release of this software includes a pin config tool, an ELF file explorer, and a heterogeneous debug tool.

Config tools

Kevin Townsend, embedded systems engineer, offered a deeper dive into the open-source platform, starting with the config tools. “There’s not a ton of differentiation in the config tools themselves; every vendor is going to give you options to configure pin mux, pin config, and generate code to take those config choices and set up your device to solve your business problem.” The config tools themselves are more or less standard: “In reality, you have two problems with pin mux and pin config: you’ve got to capture your config choices, and every tool will do that for you. For example, I could want ADC6 on pin B4, or UART TXD and RXD on C7 and D9. The problem with most of those tools today is that they lock you into a very opinionated sense of what that code should look like. So most vendors will generate code for you like everybody else, but it will be based on vendor-specific choices of RTOSs (real-time operating systems) and SDKs; and if I’m a moderately complex-to-higher-end customer, I don’t want to have anything to do with those. I need to generate code for my own scheduler, my own APIs in-house.”

What CodeFusion Studio’s tool has done is decouple config-choice capture from code generation: “We save all of the config choices for your system into a JSON file which is human- and machine-readable, and rather than just generating opinionated code for you right away, we have an extensible command-line utility that takes this JSON file and will generate code for you based upon the platform that you want.” The choices can include the MSDK (microcontroller SDK), Zephyr 3.7, ThreadX, or an in-house scheduler perhaps used by a larger tech company. “I can take this utility, and we have a plug-in-based architecture where it’s relatively trivial for me to write my own export engine.” This gives people the freedom to generate the code they need.
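To make the decoupling concrete, here is a rough illustration (not ADI’s actual JSON schema or generated output; the helper names are hypothetical) of how captured pin choices might be stored and what one export plug-in could emit for a bare-metal target:

/*
 * Illustrative only. Captured choices, conceptually what the JSON file stores:
 *
 *   { "pins": [ { "pin": "B4", "function": "ADC6"     },
 *               { "pin": "C7", "function": "UART_TXD" },
 *               { "pin": "D9", "function": "UART_RXD" } ] }
 */
enum pin_function { PINMUX_ADC6, PINMUX_UART_TXD, PINMUX_UART_RXD };

extern void pinmux_set(const char *pin, enum pin_function fn); /* hypothetical HAL */

/* One possible output of a bare-metal export plug-in; a Zephyr plug-in would
 * instead emit a devicetree overlay, and an in-house plug-in whatever the
 * customer's own scheduler expects.                                         */
void board_pinmux_init(void)
{
    pinmux_set("B4", PINMUX_ADC6);
    pinmux_set("C7", PINMUX_UART_TXD);
    pinmux_set("D9", PINMUX_UART_RXD);
}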

Figure 1: CodeFusion Studio Config tool demo at the ADI booth at Embedded World 2024.

ELF file explorer

Kevin bemoaned that half of a software developer’s day is spent in meetings while the other half is spent doing productive work, and much of that already-reduced time has to go to debug, profiling, and instrumentation. “That’s kind of the bread and butter of doing software development, but traditionally I don’t feel like embedded has tried to innovate on debug. It’s half of my working day, and yet we’re still using 37-year-old tools like gdb on the command line to debug the system we’re using.”

He says, “If I want to have all my professional profiling tools to understand where all my SRAM or flash is going, I have to buy an expensive proprietary tool.” Kevin strongly feels that making a difference for ADI customers does not involve selling another concrete tool, but rather offering an open platform that doesn’t simply generate code, but generates code that enables customers to get the best possible usage of their resources. “MCUs are a commodity at the end of the day; people are going to buy them for a specific set of peripherals, but it’s a lot of work to get the best possible usage out of them.”

The ELF file explorer is one such example of improving quality of life for developers. The ELF file is analogous to the .exe file for a Windows desktop application: “it’s like an embedded .exe,” says Kevin. “It is kind of the ultimate source of truth for the code that I am running on my device, but it is a black box.” The ELF file explorer attempts to take these opaque binaries and build “windows” into them, so you can see what is going on in the final binary. “There’s nothing in this tool that I couldn’t do on the command line, but I would need 14 different tools, an Excel spreadsheet, and a piece of paper with pencil, and it would take me three hours.” Instead, developers can finish debugging in a fraction of the time, potentially speeding up time-to-market.

“So, for example, I can see the 10 largest symbols inside my image where I’m running out of flash; where is it all going?” In this case, the tool lets the user readily see the 10 largest functions (Figure 2), with the ability to right-click on any of them to go to the symbol source and get a view directly into the source code.

Figure 2: CodeFusion Studio ELF file explorer tool demo.

Heterogeneous debug

The heterogeneous debug tool aims to simplify multi-core debugging, which is quickly becoming a necessity in modern embedded development as designs with two or more cores become commonplace. Kevin Townsend explains, “Almost all the debug tools that exist in the MCU space today are designed for one core at a time; they’re designed to analyze one thread of data on one architecture. You could have a design with an Arm core, a RISC-V core, an Xtensa DSP core, and maybe some proprietary instruction set from a vendor, all on the same chip; and I need to trace data as it moves through the system in a very complex way.” He gives the example of an analog front end whose output goes to a DSP for processing, then to an Arm core for further processing, and finally to a RISC-V core that might control a BLE radio to send the result out to a mobile device.

“It breaks down the ability to debug multiple cores in parallel inside the same IDE, in the same moment in time.” This diverges from the traditional approach of separate IDEs, pipelines, and debuggers, where the developer has to set a breakpoint on one core and then switch over to the next processor’s tools to continue the debug process. That process is inherently cumbersome and fraught with human error and oversight; quite often, different cores are controlled through different JTAG connectors, forcing the developer to manually switch connections as well as alt-tab between tools.

In the heterogeneous debug tool, users with multiple debuggers connected to multiple cores can readily visualize code for all the cores (there is no limit to the number) and set breakpoints on each (Figure 3). Once the offending line of code is found and fixed, the application can be rebuilt with the change and run to ensure that it works.

Figure 3: The heterogeneous debug tool demo showing how a user can debug a system that uses a RISC-V core and an Arm core to play the game 2048.

Trusted Edge Security Architecture

“We have our Trusted Edge Security Architecture, which we’re embedding into our SDK as well as having security tooling within CodeFusion Studio itself; it’s all SDK-driven APIs, so customers can use it pretty easily,” said Jason Griffin. Kevin Townsend adds, “Traditional embedded engineers haven’t had to deal with the complexities of security, but now you have all this legislation coming down the pipeline; there is a pressure that has never existed before for software developers to deliver secure solutions, and they often don’t have the expertise.” The Trusted Edge Security Architecture is a program that offers access to crypto libraries embedded within the software that can be used on an MCU (Figure 4). The secure hardware solutions include tamper-resistant key storage, root of trust (RoT), and more to provide a secure foundation for embedded devices. “How do we give you, out of the box, something that complies with 90% of the requirements in cybersecurity?” says Kevin. “We can’t solve everything, but we really need to raise the bar in the software libraries and software components that we integrate into our chips to really reduce the pressure on software developers.”

Figure 4: ADI Trusted Edge Security Architecture demo.

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.

Related Content


The post Analog Devices’ approach to heterogeneous debug appeared first on EDN.

In-vehicle passenger detection: Wi-Fi sensing a ‘just right’ solution

Thu, 10/10/2024 - 10:28

Every year during extreme weather, infants, toddlers, and disabled adults are sickened or die after being overlooked in vehicles. While the numbers are not huge, each case is a tragedy for a family and community. Accordingly, regulators are moving toward requiring that new vehicles be able to detect the presence of a human left in an otherwise empty vehicle. New regulations are not a question of if, but of when and of how.

This presents vehicle manufacturers with a classic Goldilocks problem. There are three primary techniques for human-presence detection in an enclosed environment, presenting a range of cost points and capabilities.

The first alternative is infrared detection: simply looking for a change in the infrared signature of the back-seat region—a change that might indicate the presence of a warm body or of motion. Infrared technology is, to say the least, mature. And it is inexpensive. But it has proved extraordinarily difficult to derive accurate detection from infrared signatures, especially over a wide range of ambient temperatures and with heat sources moving around outside the vehicle.

In an application where frequent false positives will cause the owner to disable the system, and a steady false negative can cause tragedy, infrared is too little.

Then there are radars, cameras

Radar is the second alternative. Small, low-power radar modules already exist for a variety of industrial and security applications. And short-wavelength radar can be superbly informative—detecting not only the direction and range of objects, but even the shapes of surfaces and subtle motions, such as respiration or even heartbeat. If anything, radar offers the system developer too much data.

Radar is also expensive. At today’s prices it would be impractical to deploy it in any but luxury vehicles. Perhaps if infrared is too little, radar is a bit too much.

A closely related approach uses optical cameras instead of radar transceivers. But again, cameras produce a great flood of data that requires object-recognition processing. Also, they are sensitive to ambient light and outside interference, and they are powerless to detect a human outside their field of view or concealed by, say, a blanket.

Furthermore, the fact that cameras produce recognizable images of places and persons creates a host of privacy issues that must be addressed. So, camera-based approaches are also too much.

Looking for just right

Is there something in between? In principle there is. Nearly all new passenger vehicles today offer some sort of in-vehicle Wi-Fi. That means the interior of the vehicle, and its near surroundings, will be bathed from time to time in Wi-Fi signals, spanning multiple frequency channels.

For its own internal purposes, a modern Wi-Fi transceiver monitors the signal quality on each of its channels. The receiver records what it observes as a set of data called Channel State Information, or CSI. This CSI data comes in the form of a matrix of complex numbers. Each number represents the amplitude and phase on a particular channel at a particular sample moment.

The sampling rate is generally low enough that the receiver continuously collects CSI data without interfering with the normal operation of the Wi-Fi (Figure 1). In principle it should be possible to extract from the CSI data stream an inference on whether or not a human is present in the back seat of a vehicle.

Figure 1 To detect a human presence using Wi-Fi, a receiver records what it observes as a set of data called CSI, which can be done without interfering with the normal operation of the Wi-Fi. Even small changes in the physical environment around the Wi-Fi host and client will result in a change of the amplitude and state information on the various channels. Wi-Fi signals take multiple paths to reach a target, and by looking at CSI at different times and comparing them, we can understand how the environment is changing over time. Source: Synaptics

And since the Wi-Fi system is already in the vehicle, continuously gathering CSI data, the incremental cost to extract the inference could be quite modest. The hardware system would require only adding a second Wi-Fi transceiver at the back of the vehicle to serve as a client on the Wi-Fi channels. This might just be the middle ground we seek.

A difficult puzzle

The problem is that there is no obvious way to extract such an inference from the CSI data. To the human eye, the data stream looks completely opaque (Figure 2). There is no nice, simple stream of bearing, range, and amplitude data. There may not even be the gross changes in signature upon which infrared detectors depend. The data stream looks like white noise. But it is not.

Figure 2 Making accurate inferencing of what the CSI data is sensing in real-world scenarios is a key challenge as much of it looks the same. Using a multi-stage analysis pipeline, the Synaptics team combined spectral analysis, a set of compact, very specialized deep-learning networks, and a post-processing algorithm to continuously process the CSI data stream. Source: Synaptics

Complicating the challenge is the issue of interference. In the real world, the vehicle will not be locked in a laboratory. It will be in a parking lot, with people walking by, perhaps peering at the windows. Given the nature of young humans, if they were to discover that they could set off the alarm, they would attempt to do so by waving, jumping about, or climbing onto the vehicle.

All this activity will be well within the range of the Wi-Fi signals. Making accurate inferences in the presence of this sort of interference, or of intentional baiting, is a compounding problem.

But the problem has proven to be solvable. Recently, researchers at Synaptics have reported impressive results. Using a multi-stage analysis pipeline, the team combined spectral analysis, a set of compact, very specialized deep-learning networks, and a post-processing algorithm to continuously process the CSI data stream. The resulting algorithm is compact enough for implementation in modest-priced system-on-chip (SoC), but it has proved highly accurate.
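Synaptics’ production pipeline combines spectral analysis, specialized deep-learning networks, and post-processing, but a toy example helps show what “processing the CSI stream” means at the lowest level. The C sketch below is purely illustrative and is not the Synaptics algorithm: it converts the complex CSI samples to per-subcarrier amplitudes and flags a disturbance when the amplitude variance over a time window exceeds a threshold.

#include <complex.h>
#include <math.h>
#include <stdbool.h>
#include <stddef.h>

#define SUBCARRIERS  64     /* illustrative channel width        */
#define WINDOW       128    /* CSI snapshots per decision window */

/* csi[t][k] is the complex channel estimate for subcarrier k at sample t. */
bool crude_presence_check(const double complex csi[WINDOW][SUBCARRIERS],
                          double threshold)
{
    double total_variance = 0.0;

    for (size_t k = 0; k < SUBCARRIERS; k++) {
        double mean = 0.0, mean_sq = 0.0;
        for (size_t t = 0; t < WINDOW; t++) {
            double amp = cabs(csi[t][k]);    /* amplitude of this subcarrier */
            mean    += amp;
            mean_sq += amp * amp;
        }
        mean    /= WINDOW;
        mean_sq /= WINDOW;
        total_variance += mean_sq - mean * mean;   /* variance over time     */
    }

    /* A static cabin gives near-constant amplitudes; a breathing or moving
     * occupant perturbs the multipath and raises the variance.             */
    return (total_variance / SUBCARRIERS) > threshold;
}

A real system still has to reject the outside-the-car interference described above, which is where the spectral analysis and trained networks come in.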

Measured results

The Synaptics developers produced CSI data using Wi-Fi devices in an actual car. They performed tests with and without an infant doll and with babies, in both forward- and rear-facing infant seats. The team also tested with children and a live adult, either still or moving about. In addition to tests in isolation, they performed tests with various kinds of interference from humans outside the car, including tests in which the humans attempted to tease the system.

Overall, the system achieved 99% accuracy across the range of tests. In the absence of human interference, the system was 100% accurate, recording no false positives or false negatives at all. Given that a false negative caused by outside interference will almost certainly be transient, the data suggest that the system would be very powerful at saving human passengers from harm.

Using the CSI data streams from existing in-vehicle Wi-Fi devices as a means of detecting human presence is inexpensive enough to deploy in even entry-level cars. Our research indicates that a modestly priced SoC is capable, given the right AI-assisted algorithm, of achieving an excellent error rate, even in the presence of casual or intentional interference from outside the vehicle.

This combination of thrift and accuracy makes CSI-based detection a just-right solution to the Goldilocks problem of in-vehicle human presence detection.

Karthik Shanmuga Vadivel is principal computer vision architect at Synaptics.

Related Content


The post In-vehicle passenger detection: Wi-Fi sensing a ‘just right’ solution appeared first on EDN.

Implementing enhanced wear-leveling on standalone EEPROM

Wed, 10/09/2024 - 16:43
Introduction/Problem

Longer useful life and improved reliability are becoming increasingly desirable product traits. Consumers expect higher-quality, more reliable electronics, appliances, and other devices on a tighter budget. Many of these applications include embedded electronics that contain on-board memory such as Flash or EEPROM. As system designers know, Flash and EEPROM do not have unlimited erase/write endurance, yet these memories are necessary for storing data during operation and when the system is powered off. It has therefore become common to use wear-reduction techniques, which can greatly increase embedded memory longevity. One common method of wear reduction is called wear-leveling.

Wear-leveling

When using EEPROM in a design, it’s crucial to consider its endurance, typically rated at 100,000 cycles for MCU-embedded EEPROM and 1 million cycles for standalone EEPROM at room temperature. Designers must account for this by estimating the number of erase/write cycles over the typical lifetime of the application (sometimes called the mission profile) to determine what size of an EEPROM they need and how to allocate data within the memory.

For instance, in a commercial water metering system with four sensors for different areas of a building, each sensor generates a data packet per usage session, recording water volume, session duration, and timestamps. The data packets stored in the EEPROM are appended with updated data each time a new session occurs until the packet becomes full. Data is stored in the EEPROM until a central server requests a data pull. The system is designed to pull data frequently enough to avoid overwriting existing data within each packet. Assuming a 10-year application lifespan and an average of 400 daily packets per sensor, the total cycles per sensor will reach 1.46 million, surpassing the typical EEPROM endurance rating. To address this, you can create a software routine to spread wear out across the additional blocks (assuming you have excess space). This is called wear-leveling.
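The arithmetic behind that 1.46-million figure is straightforward:

400\ \tfrac{\text{packets}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{year}} \times 10\ \text{years} = 1.46 \times 10^{6}\ \text{erase/write cycles per sensor}.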

So, how is this implemented?

To implement wear-leveling for this application, you can purchase an EEPROM twice as large, allowing you to now allocate 2 blocks for each sensor (for a total of 2 million available cycles per sensor). This provides a buffer of additional cycles if needed (an extra 540 thousand cycles for each sensor in this example).

You will then need some way to know where to write new data to spread the wear. While you could write each block to its 1-million-cycle-limit before proceeding to the next, this approach may lead to premature wear if some sensors generate more data than others. If you spread the wear evenly across the EEPROM, the overall application will last longer. Figure 1 illustrates the example explained above, with four water meters sending data packets (in purple) back to the MCU across the communication bus. The data is stored in blocks within the EEPROM. Each block has a counter in the top left indicating the number of erase-write cycles it has experienced.

Figure 1 Commercial water metering, data packets being stored on EEPROM, EEPROM has twice as much space as required. Source: Microchip Technology

There are two major types of wear-leveling: dynamic and static. Dynamic is more basic and is best for spreading wear over a small space in the EEPROM. It will spread wear over the memory blocks whose data changes most often. It is easier to implement and requires less overhead but can result in uneven wear, which may be problematic as illustrated in Figure 2.

Figure 2 Dynamic wear-leveling will spread wear over the memory blocks whose data changes most often leading to a failure to spread wear evenly. Source: Microchip Technology

Static wear-leveling spreads wear over the entire EEPROM, extending the life of the entire device. It is recommended if the application can use the entire memory as storage (e.g., if you do not need some of the space to store vital, unchanging data) and will produce the highest endurance for the life of the application. However, it is more complex to implement and requires more CPU overhead.

Wear-leveling requires monitoring each memory block’s erase/write cycles and its allocation status, which can itself cause wear in non-volatile memory (NVM). There are many clever ways to handle this, but to keep things simple, let’s assume you store this information in your MCU’s RAM, which does not wear out. RAM loses data on power loss, so you will need to design a circuit around your MCU to detect the beginnings of power loss so that you will have time to transfer current register states to NVM.

The software approach to wear-leveling

In a software approach to wear-leveling, the general idea is to create an algorithm that directs the next write to the block with the least number of writes to spread the wear. In static wear-leveling, each write stores data in the least-used location that is not currently allocated for anything else. It also will swap data to a new, unused location if the number of cycles between the most-used and least-used block is too large. The number of cycles each block has been through is tracked with a counter, and when the counter reaches the maximum endurance rating, that block is assumed to have reached its expected lifetime and is retired.
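A minimal sketch of that block-selection bookkeeping, under the article’s assumption that the counters live in RAM, might look like the following (illustrative only; persistence on power loss and the data-swap step are omitted):

#include <stdint.h>
#include <stdbool.h>

#define NUM_BLOCKS        64
#define ENDURANCE_LIMIT   1000000u   /* datasheet erase/write rating */

struct block_state {
    uint32_t cycles;      /* erase/write count (kept in RAM per the text) */
    bool     allocated;   /* currently holds live data                    */
    bool     retired;     /* reached its endurance limit                  */
};

/* Return the index of the least-worn, free, non-retired block, or -1. */
int select_block_for_write(const struct block_state blocks[NUM_BLOCKS])
{
    int best = -1;
    for (int i = 0; i < NUM_BLOCKS; i++) {
        if (blocks[i].allocated || blocks[i].retired)
            continue;
        if (best < 0 || blocks[i].cycles < blocks[best].cycles)
            best = i;
    }
    return best;
}

/* After a successful write, bump the counter and retire the block if it
 * has reached the rated endurance.                                      */
void account_for_write(struct block_state *b)
{
    if (++b->cycles >= ENDURANCE_LIMIT)
        b->retired = true;
}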

Wear-leveling is an effective method for reducing wear and improving reliability. As seen in Figure 3, it allows the entire EEPROM to reach its maximum specified endurance rating per the datasheet. Even so, there are a few possibilities for improvement. The erase/write count of each block does not represent the actual physical health of the memory; it is only a rough indicator of that block’s remaining life. This means the application will not detect failures that occur before the count reaches its maximum allowable value. The application also cannot make use of 100% of the true life of each memory block.

Figure 3 Wear-leveling extending the life of EEPROM in application, including blocks of memory that have been retired (Red ‘X’s). Source: Microchip Technology

Because there is no way to detect physical wear out, the software will need additional checks if high reliability is required. One method is to read back the block you just wrote and compare it to the original data. This requires time on the bus, CPU overhead, and additional RAM. To detect early life failures, this readback must occur for every write, at least for some amount of time after the lifetime of the application begins. Readbacks to detect cell wear out type failures must occur every write once the number of writes begins to approach the endurance specification. Any time a readback does not occur, the user will not be able to detect any wear out and, hence, corrupted data may be used. The following software flowchart illustrates an example of static wear-leveling, including the readback and comparison necessary to ensure high-reliability.

Figure 4 Software flowchart illustrating static wear-leveling, including readbacks and comparisons of memory to ensure high-reliability. Source: Microchip Technology

The need to readback and compare the memory after each write can create severe limitations in performance and use of system resources. There exist some solutions to this in the market. For example, some EEPROMs include error correction, which can typically correct a single bit error out of every specified number of bytes (e.g., 4 bytes). There are different error correction schemes used in embedded memory, the most common being Hamming codes. Error correction works by including additional bits called parity bits which are calculated from the data stored in the memory. When data is read back, the internal circuit recalculates the parity bits and compares them to the parity bits that were stored. If there is a discrepancy, this indicates that an error has occurred. The pattern of the parity discrepancy can be used to pinpoint the exact location of the error. The system can then automatically correct this single bit error by flipping its value, thus restoring the integrity of the data. This helps extend the life of a memory block. However, many EEPROMs don’t give any indication that this correction operation took place. Therefore, it still doesn’t solve the problem of detecting a failure before the data is lost.

A data-driven solution to wear-leveling software

To detect true physical wear out, certain EEPROMs include a bit flag which can be read when a single-bit error in a block has been detected and corrected. This allows you to readback and check a single status register to see if ECC was invoked during the last operation. This reduces the need for readbacks of entire memory blocks to double-check results (Figure 5). When an error is determined to have occurred within the block, you can assume the block is degraded and can no longer be used, and then retire it. Because of this, you can rely on data-based feedback to know when the memory is actually worn out instead of relying on a blind counter. This essentially eliminates the need for estimating the expected lifetime of memory in your designs. This is great for systems which see vast shifts in their environments over the lifetime of the end application, like dramatic temperature and voltage variations which are common in the manufacturing, automotive and utilities industries. You can now extend the life of the memory cells all the way to true failure, potentially allowing you to use the device even longer than the datasheet endurance specification.

Figure 5 Wear-leveling with an EEPROM with ECC and status bit enables maximization of memory lifespan by running cells to failure, potentially increasing lifespan beyond datasheet endurance specification. Source: Microchip Technology

Microchip Technology, a semiconductor manufacturer with over 30 years of experience producing EEPROM now offers multiple devices which provide a flag to tell the user when error-correction has occurred, in turn alerting the application that a particular block of memory must be retired.

  • I2C EEPROMs: 24CSM01 (1 Mbit), 24CS512 (512 Kbit), 24CS256 (256 Kbit)
  • SPI EEPROMs: 25CSM04 (4 Mbit), 25CS640 (64 Kbit)

This is a data-driven approach to wear-leveling which can further extend the life of the memory beyond what standard wear-leveling can produce. It is also more reliable than classic wear-leveling because it uses actual data instead of arbitrary counts—if one block lasts longer than another, you can continue using that block until cell wear out. This can reduce time taken on the bus, CPU overhead, and required RAM, which in turn can reduce power consumption and improve overall system performance. As shown in Figure 6, the software flow can be updated to accommodate this new status indicator.

Figure 6 Software flowchart illustrating a simplified static wear-leveling routine using an error correction status indicator. Source: Microchip Technology

As illustrated in the flowchart, using an error correction status (ECS) bit eliminates the need to read back data, store it in RAM, and perform a complete comparison with the data just written, freeing up resources and creating a conceptually simpler software flow. A data readback is still required (as the status bit is only evaluated on reads), but the data can be ignored and discarded before simply reading the status bit, eliminating the need for additional RAM and CPU comparison overhead. The number of times the software checks the status bit will vary based on the size of the blocks defined, which in turn depends on the smallest file size the software is handling.
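A sketch of the simplified flow in code might look like the following. The driver hooks are hypothetical placeholders (the actual commands and status-register layout are described in each device’s datasheet); the structure mirrors Figure 6: write the block, read it back so the device evaluates ECC, then check the status bit and retire the block if a correction occurred.

#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical driver hooks; the real commands and status-register layout
 * are device-specific and described in the EEPROM datasheet.             */
extern void eeprom_write_block(uint32_t addr, const uint8_t *data, size_t len);
extern void eeprom_read_block(uint32_t addr, uint8_t *data, size_t len);
extern bool eeprom_ecs_flag_set(void);    /* true if ECC corrected a bit     */
extern void retire_block(uint32_t addr);  /* mark block unusable in the map  */

/* Write a block, then use the ECS bit instead of a full compare to decide
 * whether the block is still healthy. Returns true on success.            */
bool wear_aware_write(uint32_t addr, const uint8_t *data, size_t len,
                      uint8_t *scratch)
{
    eeprom_write_block(addr, data, len);

    /* The status bit is only evaluated on reads, so read the block back;
     * the returned data itself can simply be discarded.                   */
    eeprom_read_block(addr, scratch, len);

    if (eeprom_ecs_flag_set()) {
        retire_block(addr);    /* a bit was corrected: treat block as worn  */
        return false;          /* caller should rewrite to a fresh block    */
    }
    return true;
}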

 The following are some advantages of the ECS bit:

  • Maximize EEPROM block lifespan by running cells to failure
  • Option to remove full block reads to check for data corruption, freeing up time on the communication bus
  • If wear-leveling is not necessary or too burdensome to the application, the ECS bit serves as a quick check of memory health, facilitating the extension of EEPROM block lifespan and helping to avoid tracking erase/write cycles
Reliability improvements with an ECS bit

Error correction implemented with a status indicator is a powerful tool for enhancing reliability and extending device life, especially when used in a wear-leveling scheme. Any improvements in reliability are highly desired in automotive, medical, and other functional safety type applications, and are welcomed by any designer seeking to create the best possible system for their application.

Eric Moser is a senior product marketing engineer for Microchip Technology Inc. and is responsible for guiding the business strategy and marketing of multiple EEPROM and Real Time Clock product lines. Moser has 8 years of experience at Microchip, spending five years as a test engineer in the 8-bit microcontroller group. Before Microchip, Moser worked as an embedded systems engineer in various roles involving automated testbed development, electronic/mechanical prognostics, and unmanned aerial systems. Moser holds a bachelor’s degree in systems engineering from the University of Arizona.

Related Content


The post Implementing enhanced wear-leveling on standalone EEPROM appeared first on EDN.

Improved PRTD circuit is product of EDN DI teamwork

Tue, 10/08/2024 - 15:03

Recently I published a simple platinum resistance temperature detector (PRTD) design idea that was largely inspired by a deviously clever earlier DI by Nick Cornford.

Remarkable and consistently constructive critical commentary of my design immediately followed.

Reader Konstantin Kim suggested that an AP4310A dual op-amp + voltage reference might be a superior substitute for the single amplifier and separate reference I was using. It had the double advantages of lowering both parts count and cost.

Meanwhile VCF pointed out that >0.1°C self-heating error is likely to result from the multi-milliamp excitation necessary for 1 mV/°C PRTD output from a passive bridge design. He suggested active output amplification because of the lower excitation it would make possible. This would make for better accuracy, particularly when measuring temperatures of still air.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows the outcome of some serious consideration and quiet contemplation of those, as they turned out to be, terrific ideas.

Figure 1 Nonlinearity is cancelled by positive feedback to PRTD constant excitation current feedback loop via R8. A2’s 10x gain allows reduced excitation that cuts self-heating error by 100x.

 A1’s built-in 2.5-V precision reference combines with the attached amplifier to form a constant-current excitation feedback loop (more on this to follow). Follow-on amplification allows a tenfold excitation reduction from ~2.5 mA to 250 µA with an associated hundredfold reduction in self-heating from ~1 mW to ~10 µW and a proportionate reduction in the associated measurement error.

The sixfold improvement in expected battery life from the reduced current consumption is nice, too.

The resulting 100 µV/°C PRTD signal is boosted by A2 to the original multimeter-readout-compatible 1 mV/°C. R1 provides a 0°C bridge null adjustment, while R2 calibrates gain at 100°C. Nick’s DI includes a nifty calibration writeup that should work as well here as in his original.

Admittedly the 4310’s general-purpose-grade specifications like its 500-µV typical input offset (equivalent if uncompensated to a 5°C error) might seem to disqualify it for a precision application like this. But when you adjust R1 to null the bridge, you’re simultaneously nulling A2. So, it’s good enough after all.

An unexpected bonus benefit of the dual-amplifier topology was the easy implementation of a second-order Callendar-Van Dusen nonlinearity correction. Positive feedback via R8 to the excitation loop increases bias by 150 ppm/°C. That’s all that’s needed to linearize the 0°C to 100°C response to better than ±0.1°C.
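For reference, the standard IEC 60751 Callendar-Van Dusen expression for a platinum RTD at 0°C and above, whose negative quadratic term is the bow that the R8 feedback compensates, is:

R(T) = R_0\,(1 + A\,T + B\,T^{2}), \qquad A \approx 3.9083 \times 10^{-3}\ /^{\circ}\mathrm{C}, \quad B \approx -5.775 \times 10^{-7}\ /^{\circ}\mathrm{C}^{2}.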

So, cheaper, simpler, superior power efficiency, and more accurate. Cool! Thanks for the suggestions, guys!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Improved PRTD circuit is product of EDN DI teamwork appeared first on EDN.

Disassembling a Cloud-compromised NAS

Mon, 10/07/2024 - 17:34

Back in October 2015, when I was evaluating alternatives to Microsoft’s Window Media Center for receiving, recording, and streaming cable television service around my house, I picked up a factory-refurbished Western Digital My Cloud 2 TByte (single-HDD) network-attached storage (NAS) (later rebranded as the My Cloud Home) for $99:

Introduced in late October 2013 (here’s an initial review from a few months later), the 2 TByte variant originally sold for $150. Roughly two years later came a notably feature-enhanced proprietary O/S update to My Cloud OS 3, along with additional single- and multi-HDD hardware models. I’d bought mine because it was one of the stored-recordings options then supported by SiliconDust’s HDHomeRun PVR software; I was already using the company’s HDHomeRun PRIME CableCard-supportive three-tuner networked receiver. I planned to run HDHomeRun PVR’s server software on a networked PC, with per-TV playback supported via Google Nexus Players, each paired to a micro-USB-to-Ethernet adapter and running the company’s Android client app.

Unfortunately, shortly thereafter came several successive WD Cloud OS remote device hijacks with a common attack vector—the NAS’s connectivity to WD’s “cloud” file sync and backup service—along with a common unfortunate company response—extended delay, in sloth-like reaction to both private alerts sent to the company by security vulnerability firms and public disclosures. One such patch suite belatedly arrived in March 2017; read it and weep.

In December 2021, WD “threw in the towel” on My Cloud OS 3, telling customers to upgrade applicable devices to My Cloud OS 5, as support for My Cloud OS 3 would be ending a few months later. Alas, my particular device wasn’t My Cloud OS 5-compatible; to WD’s credit, “devices that had auto-update enabled received a final firmware update that disabled remote access and outbound traffic to cloud services”, effectively transforming them into local-only NAS devices from that point forward. And the company also sent folks like me a 20%-off coupon for hardware-upgrade purposes; mine arrived on January 17, 2022.

Just in time, it turns out…two days later came the news of patches for yet another set of My Cloud OS 3 vulnerabilities. And, as it also turns out, My Cloud OS 5 users’ troubles alas weren’t over, either. In March of last year, WD’s cloud services were circumvented by a network breach that “locked them out of their data for more than 24 hours and has put company-handled information into the hands of currently unknown hackers.” I’m admittedly glad I didn’t take WD up on its discounted hardware upgrade offer, not that QNAP’s been notably better

Truth be told, I never got around to actualizing my HDHomeRun PVR aspiration; I’m still running Windows Media Center on an out-of-support Windows 7-based networked computer along with several equally geriatric per-TV-located Xbox 360s (although the expiration clock on this particular setup is growing louder by the passing day, for reasons I’ll explain in greater detail in another upcoming planned post). I’ve still got my device, which I’ll never donate to charity due to its now-neutered functionality which’d only bewilder a recipient. Worst case, I’ve got a 2 TByte WD Red HDD that I can repurpose in some other system. And, to satisfy my own curiosity, among other reasons, I’ve also decided to crack open the NAS and see what’s inside.

Here’s an initial suite of overview shots, after I’d first removed the reflection- and glare-enhancing protective clear plastic sheet from the front and sides, and as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (the WD My Cloud Home 2 TByte has dimensions of 7.5 in x 1.9 in x 6.7 in/193.3 mm x 49 mm x 170.6 mm and weighs 2.12 lb./0.92 kg):

Right and left sides:

Top: no integrated fan on this NAS, but plenty of heat-dissipating passive airflow vents:

More vents on the raised bottom, this time presumably for air inflow (heat rises, don’cha know):

Here’s a closeup of the bottom-end label:

And finally, the much more interesting backside, once again vent-abundant:

Here’s a closeup of its sticker:

And another up-close perspective, this time with more intriguing elements for us techie folks:

The wired Ethernet connectivity is Gbit-capable, thereby rationalizing why I’ve held onto the NAS for this long in spite of its dearth of RAID 1 multi-HDD mirroring redundancy. Conversely, the USB connector is useful solely for expanding the internal capacity via a tethered DAS; the WD My Cloud Home cannot itself be used as a DAS to a USB-connected computer, alas.

Before diving in, here’s a look at the included accessories—a length of Ethernet cable and the external power supply—along with a closeup of the latter’s specs:

Initial path-inside suspicion focused on the backside label, but removal was unfruitful; no screw heads were behind it:

Focusing instead on the seams between the sides and the inner frame was more productive, with thanks to the publisher of this particular YouTube video for his calm-demeanor guidance:

Only a bit of collateral damage:

Here’s the first-time exposed inner frame frontside:

Along with the also now-exposed right- and left-side “guts”:

The HDD and its paired PCB assembly surprisingly (at least to me) “floats” on one end:

The other end’s two mounting points are more sturdily secure, but only somewhat (left side views first, then right):

Releasing one end of each clip:

affords liftoff of the insides:

Here’s the aforementioned assembly from multiple perspectives:

Three screws hold the PCB in place:

And I bet you know what comes next:

Slide off the SATA power and data connectors:

and, along with three spacers falling away, the separation between the PCB and HDD is complete:

including the now revealed, and much more interesting, PCB inside:

Additional closeups of the latter expose, I suspect, why this particular model never got the My Cloud OS 5 update:

The system SoC, also found in other WD My Book models, is Mindspeed Technologies’ (now NXP Semiconductor’s) M86261G-12 Comcerto 2000 communication processor, based on a dual-core Arm Cortex-A9 running at 650 MHz. Visit NXP’s product page and you’ll see that this particular chip is end-of-life (and likely has been for some time); alongside the IC’s demise, further software development support likely also ceased. For posterity, a photo of this exact SoC is even coincidentally showcased on Wikipedia’s company page:

Controller of a Western Digital My Cloud 4 TB – ARM Cortex-A9

To the system processor’s right is a Samsung K4B2G1646E 2-Gbit DDR3 SDRAM. To its left is Broadcom’s BCM54612E GbE transceiver, with an associated Delta Ethernet transformer beyond it. And in the M86261G-12’s lower right corner is a Winbond W25X40CL 4-Mbit serial flash memory, presumably housing the aforementioned OS.

In closing, here are some views of the HDD, which, as previously mentioned, is a conventional WD “Red” 2-TByte model (the company’s Red series is tailored for NAS usage in terms of access-profile optimization, power consumption, and other aspects).

The shielding between the HDD and the PCB lifts away easily:

And with that, I’ll close and hand the keyboard over to you for your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post Disassembling a Cloud-compromised NAS appeared first on EDN.
