EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
URL: https://www.edn.com/
Updated: 2 hours 43 min ago

Tulip antenna delivers 360° stability for UWB

Thu, 10/10/2024 - 23:50

A tulip-shaped antenna from Kyocera AVX is designed for ultra-wideband (UWB) applications, covering a frequency range of 6.0 GHz to 8.5 GHz. The surface-mount antenna is manufactured using laser direct structuring (LDS) technology, which enables a 3D pattern design. LDS allows the antenna to operate both on-board and on-ground, offering an omnidirectional radiation pattern with consistent 360° phase stability.

The antenna’s enhanced phase stability, constant group delay, and linear polarization are crucial for signal reconstruction, improving the accuracy of low-energy, short-range, high-bandwidth UWB systems. It can be placed anywhere on a PCB, including the middle of the board and over metal. This design flexibility surpasses that of off-ground antennas, which require ground clearance and are typically positioned along the perimeter of the PCB.

The tulip antenna is 6.40×6.40×5.58 mm and weighs less than 0.1 g. It is compatible with SMT pick-and-place assembly equipment and complies with RoHS and REACH regulations. When installed on a 40×40-mm PCB, the antenna typically exhibits a maximum group delay of 2 ns, a peak gain of 4.3 dBi, CW power handling of 2 W, and an average efficiency of 61%. 

Designated P/N 9002305L0-L01K, the tulip antenna is produced in South Korea and is now available through distributors Mouser and DigiKey.

9002305L0-L01K antenna product page

Kyocera AVX 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Tulip antenna delivers 360° stability for UWB appeared first on EDN.

Analog Devices’ approach to heterogeneous debug

Thu, 10/10/2024 - 16:35
Creating a software-defined version of ADI

Embedded World this year has had a clear focus on the massive growth in the application space of edge computing and the convergence of IoT, AI, security, and underlying sensor technologies. A stop by the Analog Devices Inc. (ADI) booth reveals that the company aims to address the challenges of heterogeneous debugging and security compliance in embedded systems. “For some companies, an intelligent edge refers to the network edge or the cloud edge; for us it means right down to sensing, taking raw data off sensors and converting them into insights,” says Jason Griffin, managing director of software engineering and security solutions at ADI. “So bridging the physical and digital worlds, that’s our sweet spot.”

ADI aims to bolster its embedded solutions and security software because, “as the signal chain becomes more digital, we add on complex security layers.” As an established leader in the semiconductor industry, ADI finds that its foundational components now require more software enablement: “so as we move up the stack, moving closer to our customer’s application layer, we’re starting to add an awful lot more software, where our overall goal is to create a software-defined version of ADI and meet customers at their software interface.” The company is focusing its efforts on open-source development: “we’re open sourcing all our tools because we truly believe that software developers should own their own pipeline.”

Enter CodeFusion Studio

This is where CodeFusion Studio comes into play: the software development environment was built to support application development on all ADI digital technologies. “In the future, it will include analog too,” notes Jason. “There’s three main components to CodeFusion Studio: the software development kit that includes usage guides and reference guides to get up and running; the modern Visual Studio Code IDE, so customers can go to the Microsoft marketplace and download it to facilitate heterogeneous debug and application development; and a series of configuration and productivity tools, where we will continue to expand CodeFusion Studio.” The initial release of this software includes a pin config tool, an ELF file explorer, and a heterogeneous debug tool.

Config tools

Kevin Townsend, embedded systems engineer, offered a deeper dive into the open-source platform, starting with the config tools. “There’s not a ton of differentiation in the config tools themselves; every vendor is going to give you options to configure pin mux, pin config, and generate code to take those config choices and set up your device to solve your business problem.” The config tools themselves are more or less standard. “In reality, you have two problems with pin mux and pin config: you’ve got to capture your config choices, and every tool will do that for you. For example, I could want ADC6 on Pin B4, or UART TXD and RXD on C7 and D9. The problem with most of those tools today is that they lock you into a very opinionated sense of what that code should look like. So most vendors will generate code for you like everybody else, but it will be based on the vendor-specific choices of RTOSs (real-time operating systems) and SDKs; and if I’m a moderately complex-to-higher-end customer, I don’t want to have anything to do with those, I need to generate code for my own scheduler, my own APIs in-house.”

What CodeFusion Studio has done is decouple config-choice capture from code generation. “We save all of the config choices for your system into a JSON file, which is human- and machine-readable, and rather than just generating opinionated code for you right away, we have an extensible command-line utility that takes this JSON file and will generate code for you based upon the platform that you want.” The targets can include MSDK (microcontrollers SDK), Zephyr 3.7, ThreadX, or an in-house scheduler of the sort a larger tech company might use. “I can take this utility, and we have a plug-in based architecture where it’s relatively trivial for me to write my own export engine.” This gives people the freedom to generate the code they need.
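
For illustration, here is a minimal sketch of the kind of code a custom export engine might emit from the saved JSON capture for the pin choices quoted above (ADC6 on B4, UART TXD/RXD on C7/D9). All names here are illustrative assumptions rather than ADI’s actual API; the point is that the same capture could equally target MSDK, Zephyr, or an in-house HAL:

    #include <stddef.h>
    #include <stdint.h>

    /* Pin choices captured in the JSON file: ADC6 on B4, UART TXD on C7,
     * UART RXD on D9. Function codes are made up for the example. */
    typedef struct {
        uint8_t port;      /* 0 = port A, 1 = B, 2 = C, 3 = D */
        uint8_t pin;       /* pin number within the port      */
        uint8_t function;  /* alternate-function selector     */
    } pinmux_entry_t;

    static const pinmux_entry_t board_pinmux[] = {
        { 1, 4, 6  },  /* B4 -> ADC6     */
        { 2, 7, 10 },  /* C7 -> UART_TXD */
        { 3, 9, 11 },  /* D9 -> UART_RXD */
    };

    /* Target-specific hook supplied by whatever platform the export
     * engine targets: MSDK, a Zephyr port, or an in-house scheduler. */
    extern void hal_set_pin_function(uint8_t port, uint8_t pin, uint8_t fn);

    void board_pinmux_init(void)
    {
        for (size_t i = 0; i < sizeof board_pinmux / sizeof board_pinmux[0]; i++)
            hal_set_pin_function(board_pinmux[i].port, board_pinmux[i].pin,
                                 board_pinmux[i].function);
    }

Because the capture file, not the generator, is the source of truth, swapping the code-generation back end never touches the configuration itself.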

Figure 1: CodeFusion Studio Config tool demo at the ADI booth at Embedded World 2024.

ELF file explorer

Kevin bemoaned that half of a software developer’s day is spent in meetings, and much of the remaining productive time goes to debug, profiling, and instrumentation. “That’s kind of the bread and butter of doing software development, but traditionally, I don’t feel like embedded has tried to innovate on debug. It’s half of my working day, and yet we’re still using 37-year-old tools like gdb on the command line to debug the system we’re using.”

He says, “if I want to have all my professional profiling tools to understand where all my SRAM or flash is going, I have to buy an expensive proprietary tool.” Kevin strongly feels that making a difference for ADI customers does not involve selling another concrete tool, but rather an open platform that does not simply generate code, but generates code that enables customers to get the best possible use of their resources. “MCUs are a commodity at the end of the day; people are going to buy them for a specific set of peripherals, but it’s a lot of work to get the best possible usage out of them.”

The ELF file explorer is one example of improving developers’ quality of life. The ELF file is analogous to the .exe file for a Windows desktop application; “it’s like an embedded .exe,” says Kevin. “It is kind of the ultimate source of truth of the code that I am running on my device, but it is a black box.” The ELF file explorer attempts to take these opaque binaries and build “windows” into them to see what is going on in the final image. “There’s nothing in this tool that I couldn’t do on the command line, but I would need 14 different tools, an Excel spreadsheet, and a piece of paper and pencil, and it would take me three hours.” Instead, developers can finalize debugging in a fraction of the time, potentially speeding up time-to-market.

“So for example, I can see the 10 largest symbols inside my image when I’m running out of flash; where is it all going?” In this case, the tool lets the user readily see the 10 largest functions (Figure 2), with the ability to right-click on one to go to the symbol source and get a view directly into the source code.

Figure 2: CodeFusion Studio ELF file explorer tool demo.

Heterogeneous debug

The heterogeneous debug tool is aimed at simplifying multi-core debugging, which is quickly becoming a necessity in modern embedded development environments where designs with two or more cores are commonplace. Kevin Townsend explains: “Almost all the debug tools that exist in the MCU space today are designed for one core at a time; they’re designed to solve one and analyze one thread of data on one architecture. You could have a design with an Arm core, a RISC-V core, an Xtensa DSP core, and maybe some proprietary instruction set from a vendor, all on the same chip; and I need to trace data as it moves through the system in a very complex way.” One example: an analog front end feeds a DSP for processing, then an Arm core for further processing, and finally a RISC-V core that might control a BLE radio to send a signal out to a mobile device.

“It breaks down the ability to debug multiple cores in parallel inside the same IDE, in the same moment in time.” This diverges from the traditional approach of separate IDEs, pipelines, and debuggers, where the developer has to set a breakpoint on one core and switch over to the next processor’s tools to continue the debug process. That process is inherently cumbersome and fraught with human error and oversight; quite often, different cores are controlled through different JTAG connectors, forcing the developer to manually switch physical connections while also alt-tabbing between tools.

In the heterogenous debug tool, users with multiple debuggers connected to multiple cores can readily visualize code for all the cores (there is no limit to the number) and they can set breakpoints on each (Figure 3). Once the offending line of code is found and fixed, the application can be rebuilt with the change and run to ensure that it works. 

Figure 3: The heterogeneous debug tool demo showing how a user can debug a system using a RISC-V and an Arm core to play the game 2048.

Trusted Edge Security Architecture

“We have our Trusted Edge Architecture, which we’re embedding into our SDK as well as having security tooling within CodeFusion Studio itself; it’s all SDK-driven APIs, so customers can use it pretty easily,” said Jason Griffin. Kevin Townsend adds: “Traditional embedded engineers haven’t had to deal with the complexities of security, but now you have all this legislation coming down the pipeline; there is a pressure that has never existed before for software developers to deliver secure solutions, and they often don’t have the expertise.” The Trusted Edge Security Architecture is a program that offers access to crypto libraries embedded within the software that can be used on an MCU (Figure 4). The secure hardware solutions include tamper-resistant key storage, root of trust (RoT), and more to provide a secure foundation for embedded devices. “How do we give you, out of the box, something that complies with 90% of the requirements in cybersecurity?” says Kevin. “We can’t solve everything, but we really need to raise the bar in the software libraries and software components that we integrate into our chips to really reduce the pressure on software developers.”

Figure 4: ADI Trusted Edge Security Architecture demo.

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from the Rochester Institute of Technology and has published works in major EE journals as well as trade publications.

Related Content


The post Analog Devices’ approach to heterogeneous debug appeared first on EDN.

In-vehicle passenger detection: Wi-Fi sensing a ‘just right’ solution

Thu, 10/10/2024 - 10:28

Every year during extreme weather, infants, toddlers, and disabled adults are sickened or die after being overlooked in vehicles. While the numbers are not huge, each case is a tragedy for a family and a community. Accordingly, regulators are moving toward requiring that new vehicles be able to detect the presence of a human left in an otherwise empty vehicle. New regulations are not a question of if, but of when and how.

This presents vehicle manufacturers with a classic Goldilocks problem. There are three primary techniques for human-presence detection in an enclosed environment, presenting a range of cost points and capabilities.

The first alternative is infrared detection: simply looking for a change in the infrared signature of the back-seat region—a change that might indicate the presence of a warm body or of motion. Infrared technology is, to say the least, mature. And it is inexpensive. But it has proved extraordinarily difficult to derive accurate detection from infrared signatures, especially over a wide range of ambient temperatures and with heat sources moving around outside the vehicle.

In an application where frequent false positives will cause the owner to disable the system, and a steady false negative can cause tragedy, infrared is too little.

Then there are radars, cameras

Radar is the second alternative. Small, low-power radar modules already exist for a variety of industrial and security applications. And short-wavelength radar can be superbly informative—detecting not only the direction and range of objects, but even the shapes of surfaces and subtle motions, such as respiration or even heartbeat. If anything, radar offers the system developer too much data.

Radar is also expensive. At today’s prices it would be impractical to deploy it in any but luxury vehicles. Perhaps if infrared is too little, radar is a bit too much.

A closely related approach uses optical cameras instead of radar transceivers. But again, cameras produce a great flood of data that requires object-recognition processing. Also, they are sensitive to ambient light and outside interference, and they are powerless to detect a human outside their field of view or concealed by, say, a blanket.

Furthermore, the fact that cameras produce recognizable images of places and persons creates a host of privacy issues that must be addressed. So, camera-based approaches are also too much.

Looking for just right

Is there something in between? In principle there is. Nearly all new passenger vehicles today offer some sort of in-vehicle Wi-Fi. That means the interior of the vehicle, and its near surroundings, will be bathed from time to time in Wi-Fi signals, spanning multiple frequency channels.

For its own internal purposes, a modern Wi-Fi transceiver monitors the signal quality on each of its channels. The receiver records what it observes as a set of data called Channel State Information, or CSI. This CSI data comes in the form of a matrix of complex numbers. Each number represents the amplitude and phase on a particular channel at a particular sample moment.
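
In generic notation (a standard representation, not tied to any particular vendor’s format), each CSI entry for subcarrier $i$ and antenna pair $k$ at sample time $t$ is a complex value

    $$h_{i,k}(t) = \left|h_{i,k}(t)\right|\, e^{\,j\,\theta_{i,k}(t)}$$

where the amplitude $|h_{i,k}(t)|$ and phase $\theta_{i,k}(t)$ are precisely the quantities whose evolution over time reveals changes in the propagation environment.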

The sampling rate is generally low enough that the receiver continuously collects CSI data without interfering with the normal operation of the Wi-Fi (Figure 1). In principle it should be possible to extract from the CSI data stream an inference on whether or not a human is present in the back seat of a vehicle.

Figure 1 To detect a human presence using Wi-Fi, a receiver records what it observes as a set of data called CSI, which can be done without interfering with the normal operation of the Wi-Fi. Even small changes in the physical environment around the Wi-Fi host and client will result in a change of the amplitude and state information on the various channels. Wi-Fi signals take multiple paths to reach a target, and by looking at CSI at different times and comparing them, we can understand how the environment is changing over time. Source: Synaptics

And since the Wi-Fi system is already in the vehicle, continuously gathering CSI data, the incremental cost to extract the inference could be quite modest. The hardware system would require only adding a second Wi-Fi transceiver at the back of the vehicle to serve as a client on the Wi-Fi channels. This might just be the middle ground we seek.

A difficult puzzle

The problem is that there is no obvious way to extract such an inference from the CSI data. To the human eye, the data stream looks completely opaque (Figure 2). There is no nice, simple stream of bearing, range, and amplitude data. There may not even be the gross changes in signature upon which infrared detectors depend. The data stream looks like white noise. But it is not.

Figure 2 Making accurate inferences about what the CSI data is sensing in real-world scenarios is a key challenge, as much of it looks the same. Using a multi-stage analysis pipeline, the Synaptics team combined spectral analysis, a set of compact, very specialized deep-learning networks, and a post-processing algorithm to continuously process the CSI data stream. Source: Synaptics

Complicating the challenge is the issue of interference. In the real world, the vehicle will not be locked in a laboratory. It will be in a parking lot, with people walking by, perhaps peering at the windows. Given the nature of young humans, if they were to discover that they could set off the alarm, they would attempt to do so by waving, jumping about, or climbing onto the vehicle.

All this activity will be well within the range of the Wi-Fi signals. Making accurate inferences in the presence of this sort of interference, or of intentional baiting, is a compounding problem.

But the problem has proven to be solvable. Recently, researchers at Synaptics have reported impressive results. Using a multi-stage analysis pipeline, the team combined spectral analysis, a set of compact, very specialized deep-learning networks, and a post-processing algorithm to continuously process the CSI data stream. The resulting algorithm is compact enough for implementation in a modestly priced system-on-chip (SoC), yet it has proved highly accurate.

Measured results

The Synaptics developers produced CSI data using Wi-Fi devices in an actual car. They performed tests with and without an infant doll and with babies, in both forward- and rear-facing infant seats. The team also tested with children and a live adult, either still or moving about. In addition to tests in isolation, they performed tests with various kinds of interference from humans outside the car, including tests in which the humans attempted to tease the system.

Overall, the system achieved 99% accuracy across the range of tests. In the absence of human interference, the system was 100% accurate, recording no false positives or false negatives at all. Given that a false negative caused by outside interference will almost certainly be transient, the data suggest that the system would be very powerful at saving human passengers from harm.

Using the CSI data streams from existing in-vehicle Wi-Fi devices as a means of detecting human presence is inexpensive enough to deploy in even entry-level cars. Our research indicates that a modestly priced SoC is capable, given the right AI-assisted algorithm, of achieving an excellent error rate, even in the presence of casual or intentional interference from outside the vehicle.

This combination of thrift and accuracy makes CSI-based detection a just-right solution to the Goldilocks problem of in-vehicle human presence detection.

Karthik Shanmuga Vadivel is principal computer vision architect at Synaptics.

Related Content


The post In-vehicle passenger detection: Wi-Fi sensing a ‘just right’ solution appeared first on EDN.

Implementing enhanced wear-leveling on standalone EEPROM

Wed, 10/09/2024 - 16:43
Introduction/Problem

Longer useful life and improved reliability are becoming more desirable traits in products. Consumers expect higher-quality, more reliable electronics, appliances, and other devices on a tighter budget. Many of these applications include embedded electronics which contain on-board memory like Flash or EEPROM. As system designers know, Flash and EEPROM do not have unlimited erase/write endurance; even so, these memories are necessary for storing data during operation and when the system is powered off. It has therefore become common to use wear-reduction techniques, which can greatly increase embedded memory longevity. One common method of wear reduction is called wear-leveling.

Wear-leveling

When using EEPROM in a design, it’s crucial to consider its endurance, typically rated at 100,000 cycles for MCU-embedded EEPROM and 1 million cycles for standalone EEPROM at room temperature. Designers must account for this by estimating the number of erase/write cycles over the typical lifetime of the application (sometimes called the mission profile) to determine what size of an EEPROM they need and how to allocate data within the memory.

For instance, in a commercial water metering system with four sensors for different areas of a building, each sensor generates a data packet per usage session, recording water volume, session duration, and timestamps. The data packets stored in the EEPROM are appended with updated data each time a new session occurs until the packet becomes full. Data is stored in the EEPROM until a central server requests a data pull. The system is designed to pull data frequently enough to avoid overwriting existing data within each packet. Assuming a 10-year application lifespan and an average of 400 daily packets per sensor, the total cycles per sensor will reach 1.46 million, surpassing the typical EEPROM endurance rating. To address this, you can create a software routine to spread wear out across the additional blocks (assuming you have excess space). This is called wear-leveling.
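
The arithmetic behind that 1.46 million figure is simply the packet rate integrated over the mission profile:

    $$400\ \tfrac{\text{cycles}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{year}} \times 10\ \text{years} = 1.46 \times 10^{6}\ \text{erase/write cycles per sensor}$$

roughly 46% beyond the 1-million-cycle rating of a standalone EEPROM block.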

So, how is this implemented?

To implement wear-leveling for this application, you can purchase an EEPROM twice as large, allowing you to now allocate 2 blocks for each sensor (for a total of 2 million available cycles per sensor). This provides a buffer of additional cycles if needed (an extra 540 thousand cycles for each sensor in this example).

You will then need some way to know where to write new data to spread the wear. While you could write each block to its 1-million-cycle-limit before proceeding to the next, this approach may lead to premature wear if some sensors generate more data than others. If you spread the wear evenly across the EEPROM, the overall application will last longer. Figure 1 illustrates the example explained above, with four water meters sending data packets (in purple) back to the MCU across the communication bus. The data is stored in blocks within the EEPROM. Each block has a counter in the top left indicating the number of erase-write cycles it has experienced.

Figure 1 Commercial water metering, data packets being stored on EEPROM, EEPROM has twice as much space as required. Source: Microchip Technology

There are two major types of wear-leveling: dynamic and static. Dynamic is more basic and is best for spreading wear over a small space in the EEPROM. It will spread wear over the memory blocks whose data changes most often. It is easier to implement and requires less overhead but can result in uneven wear, which may be problematic as illustrated in Figure 2.

Figure 2 Dynamic wear-leveling will spread wear over the memory blocks whose data changes most often leading to a failure to spread wear evenly. Source: Microchip Technology

Static wear-leveling spreads wear over the entire EEPROM, extending the life of the entire device. It is recommended if the application can use the entire memory as storage (e.g., if you do not need some of the space to store vital, unchanging data) and will produce the highest endurance for the life of the application. However, it is more complex to implement and requires more CPU overhead.

Wear-leveling requires monitoring each memory block’s erase/write cycles and its allocation status, which can itself cause wear in non-volatile memory (NVM). There are many clever ways to handle this, but to keep things simple, let’s assume you store this information in your MCU’s RAM, which does not wear out. RAM loses data on power loss, so you will need to design a circuit around your MCU to detect the beginnings of power loss so that you will have time to transfer current register states to NVM.

The software approach to wear-leveling

In a software approach to wear-leveling, the general idea is to create an algorithm that directs the next write to the block with the lowest write count, spreading the wear. In static wear-leveling, each write stores data in the least-used location that is not currently allocated for anything else. It will also swap data to a new, unused location if the gap in cycle counts between the most-used and least-used blocks grows too large. The number of cycles each block has been through is tracked with a counter, and when the counter reaches the maximum endurance rating, that block is assumed to have reached its expected lifetime and is retired.

Wear-leveling is an effective method for reducing wear and improving reliability. As seen in Figure 3, it allows the entire EEPROM to reach its maximum specified endurance rating as per the datasheet. Even so, there are a few possibilities for improvement. The erase/write count of each block does not represent the actual physical health of the memory but rather a rough indicator of the remaining life of that block. This means the application will not detect failures that occur before the count reaches its maximum allowable value. The application also cannot make use of 100% of the true life of each memory block.

Figure 3 Wear-leveling extending the life of EEPROM in application, including blocks of memory that have been retired (Red ‘X’s). Source: Microchip Technology

Because there is no way to detect physical wear-out, the software will need additional checks if high reliability is required. One method is to read back the block you just wrote and compare it to the original data. This costs time on the bus, CPU overhead, and additional RAM. To detect early-life failures, this readback must occur for every write, at least for some period after the application’s lifetime begins. Readbacks to detect cell wear-out failures must occur on every write once the number of writes begins to approach the endurance specification. Any time a readback does not occur, the user will not be able to detect wear-out, and hence corrupted data may be used. The software flowchart in Figure 4 illustrates an example of static wear-leveling, including the readback and comparison necessary to ensure high reliability.
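
In code, the flow of Figure 4 might look something like the following minimal sketch. The eeprom_write()/eeprom_read() calls stand in for a real EEPROM driver, block bookkeeping lives in RAM as described above, and the periodic cold-data swap that full static wear-leveling performs is omitted for brevity:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define NUM_BLOCKS  8u
    #define BLOCK_SIZE  64u
    #define MAX_CYCLES  1000000u            /* datasheet endurance rating */

    typedef struct {
        uint32_t cycles;     /* erase/write count for this block */
        bool     allocated;  /* currently holds live data        */
        bool     retired;    /* assumed to be at end of life     */
    } block_meta_t;

    static block_meta_t meta[NUM_BLOCKS];   /* in RAM; saved to NVM on power loss */

    extern bool eeprom_write(uint16_t block, const uint8_t *data, uint16_t len);
    extern bool eeprom_read(uint16_t block, uint8_t *data, uint16_t len);

    /* Pick the least-worn block that is neither allocated nor retired. */
    static int select_block(void)
    {
        int best = -1;
        for (uint16_t b = 0; b < NUM_BLOCKS; b++) {
            if (meta[b].allocated || meta[b].retired)
                continue;
            if (best < 0 || meta[b].cycles < meta[best].cycles)
                best = (int)b;
        }
        return best;                         /* -1 means memory exhausted */
    }

    /* Write with the readback-and-compare verification of Figure 4. */
    bool wl_write(const uint8_t *data, uint16_t len)
    {
        uint8_t check[BLOCK_SIZE];
        int b = select_block();

        if (b < 0 || len > BLOCK_SIZE)
            return false;

        if (!eeprom_write((uint16_t)b, data, len) ||
            !eeprom_read((uint16_t)b, check, len))
            return false;

        if (memcmp(data, check, len) != 0) { /* early or wear-out failure */
            meta[b].retired = true;
            return wl_write(data, len);      /* retry on the next-best block */
        }

        meta[b].allocated = true;
        if (++meta[b].cycles >= MAX_CYCLES)  /* blind counter-based retirement */
            meta[b].retired = true;
        return true;
    }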

Figure 4 Software flowchart illustrating static wear-leveling, including readbacks and comparisons of memory to ensure high-reliability. Source: Microchip Technology

The need to readback and compare the memory after each write can create severe limitations in performance and use of system resources. There exist some solutions to this in the market. For example, some EEPROMs include error correction, which can typically correct a single bit error out of every specified number of bytes (e.g., 4 bytes). There are different error correction schemes used in embedded memory, the most common being Hamming codes. Error correction works by including additional bits called parity bits which are calculated from the data stored in the memory. When data is read back, the internal circuit recalculates the parity bits and compares them to the parity bits that were stored. If there is a discrepancy, this indicates that an error has occurred. The pattern of the parity discrepancy can be used to pinpoint the exact location of the error. The system can then automatically correct this single bit error by flipping its value, thus restoring the integrity of the data. This helps extend the life of a memory block. However, many EEPROMs don’t give any indication that this correction operation took place. Therefore, it still doesn’t solve the problem of detecting a failure before the data is lost.
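
A toy Hamming(7,4) round trip makes the parity mechanism concrete: encode four data bits, inject a single-bit fault, and let the recalculated parity (the syndrome) point at the flipped bit. Real EEPROM ECC operates on wider words, such as the one-bit-per-4-bytes correction mentioned above, but the principle is identical:

    #include <stdint.h>
    #include <stdio.h>

    /* Codeword bit positions 1..7 (bit 0 unused); parity bits at 1, 2, 4. */
    static uint8_t get(uint8_t cw, int pos) { return (cw >> pos) & 1u; }

    static uint8_t encode(uint8_t data)   /* 4 data bits -> positions 3,5,6,7 */
    {
        uint8_t cw = 0;
        cw |= (uint8_t)(((data >> 0) & 1u) << 3);
        cw |= (uint8_t)(((data >> 1) & 1u) << 5);
        cw |= (uint8_t)(((data >> 2) & 1u) << 6);
        cw |= (uint8_t)(((data >> 3) & 1u) << 7);
        cw |= (uint8_t)((get(cw, 3) ^ get(cw, 5) ^ get(cw, 7)) << 1);  /* p1 */
        cw |= (uint8_t)((get(cw, 3) ^ get(cw, 6) ^ get(cw, 7)) << 2);  /* p2 */
        cw |= (uint8_t)((get(cw, 5) ^ get(cw, 6) ^ get(cw, 7)) << 4);  /* p4 */
        return cw;
    }

    /* Recompute parity; a nonzero result is the 1-based error position. */
    static int syndrome(uint8_t cw)
    {
        return ((get(cw, 1) ^ get(cw, 3) ^ get(cw, 5) ^ get(cw, 7)) << 0)
             | ((get(cw, 2) ^ get(cw, 3) ^ get(cw, 6) ^ get(cw, 7)) << 1)
             | ((get(cw, 4) ^ get(cw, 5) ^ get(cw, 6) ^ get(cw, 7)) << 2);
    }

    int main(void)
    {
        uint8_t cw = encode(0xB);          /* store nibble 0b1011        */
        cw ^= (uint8_t)(1u << 6);          /* single-bit fault at pos 6  */
        int pos = syndrome(cw);            /* parity discrepancy -> 6    */
        if (pos)
            cw ^= (uint8_t)(1u << pos);    /* flip it back: data restored */
        printf("corrected single-bit error at position %d\n", pos);
        return 0;
    }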

A data-driven solution to wear-leveling software

To detect true physical wear-out, certain EEPROMs include a bit flag which can be read when a single-bit error in a block has been detected and corrected. This allows you to read back and check a single status register to see if ECC was invoked during the last operation, reducing the need for readbacks of entire memory blocks to double-check results (Figure 5). When an error is determined to have occurred within a block, you can assume the block is degraded and can no longer be used, and then retire it. Because of this, you can rely on data-based feedback to know when the memory is actually worn out instead of relying on a blind counter. This essentially eliminates the need for estimating the expected lifetime of memory in your designs. It is especially valuable for systems which see vast shifts in their environments over the lifetime of the end application, like the dramatic temperature and voltage variations common in the manufacturing, automotive, and utilities industries. You can now extend the life of the memory cells all the way to true failure, potentially allowing you to use the device even longer than the datasheet endurance specification.

Figure 5 Wear-leveling with an EEPROM with ECC and status bit enables maximization of memory lifespan by running cells to failure, potentially increasing lifespan beyond datasheet endurance specification. Source: Microchip Technology

Microchip Technology, a semiconductor manufacturer with over 30 years of experience producing EEPROM, now offers multiple devices which provide a flag to tell the user when error correction has occurred, in turn alerting the application that a particular block of memory must be retired.

  • I2C EEPROMs: 24CSM01 (1 Mbit), 24CS512 (512 Kbit), 24CS256 (256 Kbit)
  • SPI EEPROMs: 25CSM04 (4 Mbit), 25CS640 (64 Kbit)

This is a data-driven approach to wear-leveling which can extend the life of the memory beyond what standard wear-leveling can produce. It is also more reliable than classic wear-leveling because it uses actual data instead of arbitrary counts: if one block lasts longer than another, you can continue using that block until its cells wear out. This can reduce time on the bus, CPU overhead, and required RAM, which in turn can reduce power consumption and improve overall system performance. As shown in Figure 6, the software flow can be updated to accommodate this new status indicator.

Figure 6 Software flowchart illustrating a simplified static wear-leveling routine using an error correction status indicator. Source: Microchip Technology

As illustrated in the flowchart, using an error correction status (ECS) bit eliminates the need to read back data, store it in RAM, and perform a complete comparison against the data just written, freeing up resources and creating a conceptually simpler software flow. A data readback is still required (the status bit is only evaluated on reads), but the data itself can be discarded before simply reading the status bit, eliminating the need for additional RAM and CPU comparison overhead. The number of times the software checks the status bit will vary with the size of the blocks defined, which in turn depends on the smallest file size the software is handling.
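
Extending the earlier wear-leveling sketch, the verify step collapses to a status check. Here eeprom_read_ecs_flag() is a hypothetical stand-in for the device-specific status-register read (register names and access sequences vary by part, so consult the datasheet):

    /* Reuses BLOCK_SIZE, meta[], select_block(), eeprom_write(), and
     * eeprom_read() from the earlier sketch. */
    extern bool eeprom_read_ecs_flag(void);  /* true if ECC corrected a bit */

    bool wl_write_ecs(const uint8_t *data, uint16_t len)
    {
        uint8_t scratch[BLOCK_SIZE];  /* readback target; contents discarded */
        int b = select_block();

        if (b < 0 || len > BLOCK_SIZE)
            return false;

        if (!eeprom_write((uint16_t)b, data, len) ||
            !eeprom_read((uint16_t)b, scratch, len)) /* ECS set only on reads */
            return false;

        if (eeprom_read_ecs_flag()) {     /* physical wear actually detected */
            meta[b].retired = true;       /* retire on evidence, not a count */
            return wl_write_ecs(data, len);
        }

        meta[b].allocated = true;         /* no memcmp, no blind cycle cap */
        return true;
    }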

 The following are some advantages of the ECS bit:

  • Maximize EEPROM block lifespan by running cells to failure
  • Option to remove full block reads to check for data corruption, freeing up time on the communication bus
  • If wear-leveling is not necessary or too burdensome to the application, the ECS bit serves as a quick check of memory health, facilitating the extension of EEPROM block lifespan and helping to avoid tracking erase/write cycles
Reliability improvements with an ECS bit

Error correction implemented with a status indicator is a powerful tool for enhancing reliability and extending device life, especially when used in a wear-leveling scheme. Any improvements in reliability are highly desired in automotive, medical, and other functional safety type applications, and are welcomed by any designer seeking to create the best possible system for their application.

Eric Moser is a senior product marketing engineer for Microchip Technology Inc. and is responsible for guiding the business strategy and marketing of multiple EEPROM and Real Time Clock product lines. Moser has 8 years of experience at Microchip, spending five years as a test engineer in the 8-bit microcontroller group. Before Microchip, Moser worked as an embedded systems engineer in various roles involving automated testbed development, electronic/mechanical prognostics, and unmanned aerial systems. Moser holds a bachelor’s degree in systems engineering from the University of Arizona.

Related Content


The post Implementing enhanced wear-leveling on standalone EEPROM appeared first on EDN.

Improved PRTD circuit is product of EDN DI teamwork

Tue, 10/08/2024 - 15:03

Recently I published a simple platinum resistance temperature detector (PRTD) design idea that was largely inspired by a deviously clever earlier DI by Nick Cornford.

Remarkable and consistently constructive critical commentary of my design immediately followed.

Reader Konstantin Kim suggested that an AP4310A dual op-amp + voltage reference might be a superior substitute for the single amplifier and separate reference I was using. It had the double advantages of lowering both parts count and cost.

Meanwhile, VCF pointed out that >0.1°C of self-heating error is likely to result from the multi-milliamp excitation necessary for 1 mV/°C PRTD output from a passive bridge design. He suggested active output amplification because of the lower excitation it would make possible. This would make for better accuracy, particularly when measuring the temperature of still air.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows the outcome of some serious consideration and quiet contemplation of those, as they turned out to be, terrific ideas.

Figure 1 Nonlinearity is cancelled by positive feedback to the PRTD constant-excitation-current feedback loop via R8. A2’s 10x gain allows reduced excitation that cuts self-heating error by 100x.

 A1’s built-in 2.5-V precision reference combines with the attached amplifier to form a constant-current excitation feedback loop (more on this to follow). Follow-on amplification allows a tenfold excitation reduction from ~2.5 mA to 250 µA with an associated hundredfold reduction in self-heating from ~1 mW to ~10 µW and a proportionate reduction in the associated measurement error.
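
The hundredfold figure follows directly from Joule heating, because sensor dissipation scales with the square of the excitation current:

    $$P = I^{2}R_{\text{PRTD}}, \qquad \left(\frac{250\ \mu\text{A}}{2.5\ \text{mA}}\right)^{2} = \frac{1}{100}$$

so the roughly 1 mW dissipated in the sensor falls to roughly 10 µW.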

The sixfold improvement in expected battery life from the reduced current consumption is nice, too.

The resulting 100 µV/°C PRTD signal is boosted by A2 to the original multimeter-readout-compatible 1 mV/°C. R1 provides a 0°C bridge null adjustment, while R2 calibrates gain at 100°C. Nick’s DI includes a nifty calibration writeup that should work as well here as in his original.

Admittedly, the 4310’s general-purpose-grade specifications, like its 500-µV typical input offset (equivalent, if uncompensated, to a 5°C error), might seem to disqualify it for a precision application like this. But when you adjust R1 to null the bridge, you’re simultaneously nulling A2. So, it’s good enough after all.

An unexpected bonus of the dual-amplifier topology was the easy implementation of a second-order Callendar-Van Dusen nonlinearity correction. Positive feedback via R8 to the excitation loop increases bias by 150 ppm/°C. That’s all that’s needed to linearize the 0°C to 100°C response to better than ±0.1°C.
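
For context, the Callendar-Van Dusen characteristic above 0°C is the quadratic (the coefficients below are the standard IEC 60751 values, not figures from the original DI)

    $$R(T) = R_{0}\left(1 + AT + BT^{2}\right), \qquad A = 3.9083\times10^{-3}\ ^{\circ}\text{C}^{-1}, \quad B = -5.775\times10^{-7}\ ^{\circ}\text{C}^{-2}$$

Relative to the linear term, the droop grows as $(B/A)\,T \approx -148\ \text{ppm}/^{\circ}\text{C}$, which is why boosting the excitation by roughly 150 ppm/°C is exactly the correction required.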

So, cheaper, simpler, superior power efficiency, and more accurate. Cool! Thanks for the suggestions, guys!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Improved PRTD circuit is product of EDN DI teamwork appeared first on EDN.
