AI algorithms on MCU demo progress in automated driving
Artificial intelligence (AI) algorithms and supporting hardware will be critical in heralding the next stages of automated and ultimately autonomous driving, and a collaboration between Infineon and ZF as part of the EEmotion project demonstrates the viability of this ambitious technology undertaking.
EEmotion successfully integrated AI into the safety-critical functions of the vehicle control system. Source: Infineon Technologies
The EEmotion project aimed to develop an AI algorithm-based control system for automated driving that ensures more precise trajectory control in various driving situations. The project ran from September 2021 to August 2024, was co-funded by the German Federal Ministry for Economic Affairs and Climate Action and had Infineon Technologies AG as the consortium coordinator.
It began by defining the requirements for AI-based functions while aiming to develop AI in control architectures for safety-critical applications. The project also worked on aspects like the development of secure AI-monitored communication, investigation of the simulative development, and taking validation of vehicle dynamics systems into account.
As part of this project, Infineon joined hands with ZF Group to create and implement AI algorithms to develop vehicle control software. These AI algorithms—proven in a test vehicle—controlled and optimized all actuators during automated driving according to the specified driving trajectory.
ZF added AI algorithms to its two existing software solutions, cubiX and Eco Control 4 ACC. The cubiX software makes it possible to control all chassis components in passenger cars and commercial vehicles. Eco Control 4 ACC, a predictive cruise control system, was enhanced with a computationally intensive optimization algorithm and model-predictive control to achieve as much as 8% more range under real driving conditions.
These software solutions with added AI content were implemented on Infineon’s AURIX TC4x microcontroller with integrated parallel processing unit (PPU). This MCU, offering ample computing power, is capable of supporting AI modelling, virtualization, functional safety, cybersecurity and networking functions.
Running ZF’s AI-enhanced software on the AURIX TC4x MCU demonstrated considerably more accurate automated lane changes and an energy-efficiency boost in adaptive cruise control. Achieving such driving-performance improvements on lower-compute devices like MCUs could pave the way for cost-efficient Level 2+ assistance systems.
Related Content
- Redefining Mobility with Software-Defined Vehicles
- Arm Highlights Future of the Software-Defined Vehicle
- AI’s Impact on the Current and Future Automotive Industry
- Unveiling the Transformation of Software-Defined Vehicles
- How Automated Driving Is Transforming the Sensor and Computing Market
The post AI algorithms on MCU demo progress in automated driving appeared first on EDN.
Faraday streamlines chiplet integration
Faraday has unveiled an advanced packaging service that simplifies chiplet integration by coordinating various vendors and chiplet sources. The platform offers three core services—design, packaging, and production—to improve the efficiency of assembling complex semiconductor designs.
In the chiplet era, advanced packaging capacity is increasingly constrained. Faraday’s platform tackles this issue by coordinating vendors for chiplets, high bandwidth memory (HBM), interposers, and 2.5D/3D packaging. It provides a one-stop solution with services including chiplet design, testing, production planning, procurement, inventory management, and advanced packaging. Tailored to diverse client needs, the platform ensures a reliable supply of critical components.
In addition, Faraday specializes in designing and implementing key chiplets, such as I/O dies, SoC/compute dies, and interposers. The company partners with UMC, Samsung, Intel, and OSAT providers to deliver advanced packaging solutions. These include system-level design, power and signal integrity analysis, and thermal dissipation optimization for technologies like Intel’s EMIB, Samsung’s I-Cube, and 2.5D packaging.
Faraday Technology is an ASIC design service and IP provider, certified to ISO 9001 for quality management and ISO 26262 for functional safety in automotive systems. Its silicon IP portfolio includes I/O, memory, ARM-compliant CPUs, and high-speed interfaces like USB, Ethernet, SATA, PCIe, and SerDes.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Faraday streamlines chiplet integration appeared first on EDN.
DSP-free PAM4 chipset supports PCIe 6.0
THine is set to debut a DSP-less optical chipset supporting 64-Gbps PAM4 for PCIe 6.0 at this month’s ECOC 2024 exhibition in Frankfurt, Germany. By eliminating DSPs from optical communication systems in data centers, the chipset reduces power consumption by 60% and lowers latency by 90%.
Current advanced optical communication systems for PCIe often face challenges with high power consumption and signal processing delays, particularly with silicon-photonics lasers and DSP-equipped VCSEL drivers. To address these issues, THine’s chipset for PCIe 6.0 integrates a VCSEL driver and transimpedance amplifier into a DSP-free active optical cable (AOC) solution.
This optical PAM4 64-Gbps chipset leverages THine’s analog technology to eliminate DSPs from optical modules and end-point ASICs, achieving accurate signal recovery and improvements in power efficiency. The company also plans to develop an advanced optical chipset for PCIe 7.0.
A datasheet for the PAM4 PCIe 6.0 chipset was not available at the time of this announcement.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post DSP-free PAM4 chipset supports PCIe 6.0 appeared first on EDN.
Anechoic chamber tests SATCOM antennas
Sharp has built a state-of-the-art anechoic chamber in Japan to measure the performance of flat panel antennas for LEO and MEO satellites. Using the Compact Antenna Test Range (CATR) method, the chamber simulates long-distance communication conditions, such as those encountered in satellite communication, over a short physical distance. It is also capable of accommodating one of Japan’s largest antennas, with an aperture of up to 80 cm.
Designed with high-quality radio wave-absorbing materials on the ceiling, walls, and floor, and equipped with parabolic reflectors, the chamber suppresses unwanted reflections and measures performance over a short distance. While typical chambers require over 60 meters to test an 80-cm aperture antenna, Sharp’s CATR-based setup achieves accurate measurements over approximately 7 meters across a frequency range of 10 GHz to 40 GHz.
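For a sense of where a figure like 60 m comes from, the standard far-field (Fraunhofer) criterion of 2D²/λ can be applied to an 80-cm aperture. The sketch below is a rough estimate only; the actual range requirement depends on the allowable phase error and the applicable measurement standard.

```python
# Far-field (Fraunhofer) distance 2*D^2/lambda for an 80-cm aperture.
# This is the textbook criterion only; the actual range requirement depends
# on the allowable phase error and the applicable measurement standard.
C = 299_792_458.0  # speed of light, m/s

def far_field_distance_m(aperture_m: float, freq_hz: float) -> float:
    return 2 * aperture_m ** 2 * freq_hz / C   # = 2*D^2 / lambda

D = 0.80
for f_ghz in (10, 20, 40):
    print(f"{f_ghz:>2} GHz: {far_field_distance_m(D, f_ghz * 1e9):6.1f} m")
# ~43 m at 10 GHz, ~85 m at 20 GHz, ~171 m at 40 GHz -- versus roughly 7 m
# for the reflector-based CATR approach described above.
```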
In addition to supporting Ku/Ka bands used for satellite communications, the anechoic chamber also accommodates measurements in the upper mid-band (FR3 6 GHz to 24 GHz), a potential frequency range for 6G deployment. The new chamber facility, launching this month, will enable testing and technical verification of various products, including next-generation smartphones.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Anechoic chamber tests SATCOM antennas appeared first on EDN.
ICs enhance automotive proximity detection
Two Hall-effect switch families, the unipolar AH332xQ and omnipolar AH352xQ from Diodes, offer a range of operating sensitivity options. The automotive-compliant ICs are suitable for a wide range of contactless position and proximity detection applications, including seatbelt fastening, door and trunk latching, windshield wipers, and steering wheel locks.
The unipolar AH332xQ switches provide 10 sensitivity options, ranging from a highly sensitive 30 G BOP to a low-sensitivity 275 G BOP. The omnipolar AH352xQ series includes three high-sensitivity options from ±20 G BOP to ±40 G BOP. Both series feature tight operating and release thresholds with sufficient hysteresis for reliable operation, while a low temperature coefficient ensures stable switching points.
These devices support a wide input voltage range of 3 V to 28 V and are AEC-Q100 Grade 0 qualified, with an extended temperature range of -40°C to +150°C. They deliver ESD protection exceeding 8 kV HBM and 1 kV CDM, along with 40-V load-dump capability. Packaging options include SIP-3, SOT23 (Type S), and SC59.
The AH332xQ and AH352xQ Hall-effect switches cost $0.30 each in lots of 3000 units.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post ICs enhance automotive proximity detection appeared first on EDN.
Infineon develops 300-mm GaN technology
Infineon has introduced 300-mm power GaN wafer technology within a scalable, high-volume manufacturing environment. The company notes that 300-mm wafers offer significant technological and efficiency advantages over 200-mm wafers, producing 2.3 times more chips per wafer due to the larger diameter.
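The 2.3x figure follows largely from wafer geometry. The back-of-the-envelope estimate below is a rough sketch only; the die size and the edge-loss approximation are illustrative assumptions, not Infineon’s numbers.

```python
# Back-of-the-envelope gross-die-per-wafer comparison, 200 mm vs 300 mm.
# The die size and the edge-loss approximation are illustrative assumptions.
import math

def gross_die(wafer_d_mm: float, die_area_mm2: float) -> float:
    # Common approximation: wafer area / die area, minus an edge-loss term.
    return (math.pi * (wafer_d_mm / 2) ** 2) / die_area_mm2 \
           - (math.pi * wafer_d_mm) / math.sqrt(2 * die_area_mm2)

die_mm2 = 25.0   # hypothetical power-GaN die
d200, d300 = gross_die(200, die_mm2), gross_die(300, die_mm2)
print(f"200 mm: ~{d200:.0f}, 300 mm: ~{d300:.0f}, ratio {d300 / d200:.2f}")
# Plain area scaling gives (300/200)^2 = 2.25; the smaller relative edge loss
# on the larger wafer nudges the ratio toward the quoted 2.3x.
```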
Infineon manufactured 300-mm GaN wafers on an integrated pilot line in its existing 300-mm silicon production facility in Villach, Austria. The company is drawing on its expertise in 300-mm silicon and 200-mm GaN production and plans to scale GaN capacity according to market demand.
A key advantage of 300-mm GaN technology is its compatibility with existing 300-mm silicon manufacturing equipment, as the production processes for gallium nitride and silicon are quite similar. Once fully scaled, 300-mm GaN production is expected to achieve cost parity with silicon at the RDS(on) level, enabling comparable costs between Si and GaN products.
Infineon will present its 300-mm GaN wafers at the electronica trade show in November 2024 in Munich, Germany.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Infineon develops 300-mm GaN technology appeared first on EDN.
A short design tutorial on Bluetooth Channel Sounding
The highly anticipated Bluetooth 6.0 specification is here, and one of its most notable features is the addition of channel sounding, a two-way ranging technique between two Bluetooth Low Energy (LE) devices. While Bluetooth LE is known for its low power consumption and cost effectiveness, it isn’t an optimum solution for reliable and accurate ranging.
Bluetooth Channel Sounding addresses these shortcomings by improving reliability and accuracy with distance measurement capabilities. That will significantly enhance high-volume applications such as personal item tags and key fobs, where presence detection and proximity sensing are crucial. Channel Sounding can be integrated into Bluetooth devices using a single antenna without requiring significant hardware modifications.
What’s Bluetooth Channel Sounding
Bluetooth Channel Sounding—a new protocol stack designed to enable secure and precise distance measurement between two Bluetooth LE-connected devices—unlocks a world of possibilities for embedded developers. It goes beyond Bluetooth received signal strength indicator (RSSI) ranging, opening the door to a new wave of applications in localization and proximity awareness.
Localization applications like pet and asset trackers utilize locator devices to find the exact position of a tracking device. Next, proximity awareness applications such as smart locks and keyless entry systems utilize enhanced security features to restrict and control access to secure spaces and systems.
Figure 1 Bluetooth Channel Sounding improves accuracy to the sub-meter level and can be used in consumer, commercial and industrial applications. Source: Silicon Labs
So far, Bluetooth RSSI has relied on estimations to determine location, which leads to issues like multipath and obstruction. That, in turn, significantly reduces accuracy. Bluetooth Channel Sounding addresses this by improving accuracy to the sub-meter level. “The Bluetooth SIG’s adoption of Channel Sounding significantly enhances the precision of previous Bluetooth distance measuring techniques and encourages innovation across the Bluetooth device ecosystem,” said Øyvind Strøm, EVP of BU Short Range at Nordic Semiconductor.
Security is of utmost importance to ensure that no unauthorized user can access the network. Channel Sounding incorporates robust security features to protect against tampering and man-in-the-middle (MITM) attacks. That’s crucial in applications like smart door locks, home appliances, and Find My solutions. For instance, Channel Sounding ensures a lock opens only when the authorized device is within a certain distance.
How it works
Bluetooth Channel Sounding uses two proven ranging methods—phase-based ranging (PBR) and round-trip time (RTT)—to deliver true distance awareness between Bluetooth-connected devices. The connected devices use PBR, RTT, or both to exchange ranging data across up to 72 channels within the 2.4-GHz spectrum, using one to four antenna paths between the two connected devices.
PBR uses the phase difference between the transmitted and received signals to calculate the distance between the initiator and reflector devices. An initiator device sends a signal to a reflector device, which returns it, and the process is repeated across multiple frequencies.
Figure 2 PBR delivers precise distance measurements between two Bluetooth devices using the number of wave cycles needed for the signal to go from the transmitter to the receiver. Source: Bluetooth SIG
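To make the PBR idea concrete, the toy sketch below estimates distance from the slope of the round-trip phase versus carrier frequency. It illustrates the principle only, not the Bluetooth SIG’s actual procedure; the 1-MHz channel spacing, the noise-free tones, and the simple unwrap-and-average step are simplifying assumptions.

```python
import numpy as np

C = 299_792_458.0                      # speed of light, m/s
true_distance = 3.2                    # metres (simulated ground truth)
freqs = 2.402e9 + np.arange(72) * 1e6  # 72 channels, 1-MHz spacing, 2.4-GHz band

# Round-trip phase the initiator observes after the reflector returns each tone
phase = (2 * np.pi * freqs * 2 * true_distance / C) % (2 * np.pi)

# Distance from the phase-vs-frequency slope: d = c/(4*pi) * dphi/df
dphi_df = np.mean(np.diff(np.unwrap(phase))) / 1e6   # rad per Hz
print(f"estimated distance: {C * dphi_df / (4 * np.pi):.2f} m")   # ~3.20 m
```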
In RTT, the secondary ranging method, an initiator device sends cryptographically scrambled packets to a reflector device, which returns the packets. Next, the distance between the devices is calculated based on the time the packets traveled back and forth.
Figure 3 RTT uses time of flight (ToF) to estimate the distance between the initiator and the reflector and cross-check the PBR measurement. Source: Bluetooth SIG
RTT can be used to verify and cross-check the PBR measurements. This cross-verification process helps detect anomalies and ensure applications are secure. For instance, it serves as a countermeasure against sophisticated man-in-the-middle attacks.
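A similarly minimal sketch of the RTT arithmetic is shown below; the timestamps and the reflector turnaround delay are illustrative values, and a real implementation must calibrate those delays carefully before the result can meaningfully cross-check PBR.

```python
C = 299_792_458.0  # speed of light, m/s

def rtt_distance(t_round_trip_s: float, t_reflector_turnaround_s: float) -> float:
    """Distance from a measured round-trip time, after removing the
    reflector's (calibrated) turnaround delay."""
    time_of_flight_one_way = (t_round_trip_s - t_reflector_turnaround_s) / 2
    return C * time_of_flight_one_way

# Example: a 3.2-m separation adds ~10.7 ns of flight time in each direction
turnaround = 150e-6                      # illustrative reflector processing delay
round_trip = turnaround + 2 * 10.7e-9    # what the initiator actually measures
print(f"RTT distance: {rtt_distance(round_trip, turnaround):.2f} m")  # ~3.21 m
# In practice this estimate is used to cross-check the finer-grained PBR result.
```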
True location awareness
Channel Sounding is expected to be widely adopted in mobile phones and a broad range of products such as Bluetooth mice, keyboards, and game controllers. Then there are Find My applications—Bluetooth tags attached to personal items such as keys, wallets, backpacks, and luggage—where developers can add true distance awareness to make it easier and quicker for users to locate lost items.
After Bluetooth SIG’s adoption of Channel Sounding as part of Bluetooth 6.0, Nordic Semiconductor has announced support for the technology in its forthcoming nRF54L and nRF54H Series RF chips. Silicon Labs has also announced the integration of this technology in its xG24 wireless chips and antenna hardware solutions.
Channel Sounding technology in Bluetooth 6.0 marks a significant step in the evolution of modern wireless, and its true wireless awareness is expected to unlock new use cases while optimizing existing ones. As Ross Sabolcik, senior VP of the Industrial and Commercial Business Unit at Silicon Labs, puts it, in a world where location awareness is critical, Channel Sounding revolutionizes proximity and location capabilities, propelling Bluetooth technology into a new era.
Related Content
- Bluetooth low energy (BLE) explained
- Inside Bluetooth low-energy technology
- The basics of Bluetooth Low Energy (BLE)
- Waking Up to Thrill of Bluetooth’s 3rd Innovation Wave
- Bluetooth Channel Sounding Improves Distance Estimation Accuracy
The post A short design tutorial on Bluetooth Channel Sounding appeared first on EDN.
It’s September in Apple land, so guess what it’s time for?
A few weeks back, in mid-August to be exact, I pointed out that Google had for the first time in my memory gotten the jump on Apple, pulling in its traditional October launch event by two months in order to launch its latest smartphones, smartwatch, and earbuds ahead of its key competitor. So, it was ironic that the very same day Apple announced its latest smartphones, smartwatch and earbuds, the reviews on Google’s Pixel Watch 3 went public, one day ahead of that device’s ship date. Touché, eh?
Smartwatches weren’t the only thing Apple unveiled today (as I write these words on Tuesday evening, September 9). In the paragraphs that follow, I’ll give commentary on each notable news item in chronologically-announced order, as well as mentioning what didn’t get first-time unveiled (or at least updated) today and therefore what might be coming in the (near?) future.
The Apple Watch Series 10 (and, ok, a new-color Watch Ultra 2)
Thinner. Lighter (due in part to a repackaged S10 SiP SoC, which I’m betting is otherwise identical to its S9 predecessor). A bigger, curved display with improved off-axis brightness. Only accompanied, unfortunately, by the same “all-day” (since when does a day have only 18 hours?) battery life claim as with prior-generation models. That’s the Apple Watch Series 10 in a nutshell. But what I’d like to spend a bit more time on is a first-time feature that Apple spent a fair bit of time on (and which is near and dear to my heart): sleep apnea detection.
Company officials (in their as-usual nowadays pre-recorded pitches) claimed that 80% of sleep apnea goes undetected, without referencing a specific source of this information. But its veracity wouldn’t at all surprise me. That Apple’s supporting this capability is therefore great. However, the method by which the company is claiming to detect breathing interruptions—movement—is admittedly more than a bit baffling to me.
SpO2 (oxygen saturation) sensors already in use in smartwatches as well as dedicated-function pulse oximeters would be a logical place to log such data for analysis; a fingertip sensor was in part how my own sleep apnea was initially confirmed via an at-home sleep study. But Apple remains mired in patent infringement litigation and counterclaim battles with Masimo, a pulse oximeter manufacturer. Another increasingly common approach, employed by devices such as Google’s second-generation Nest Hub (which I’ve personally test-driven) and Amazon’s Halo Rise, leverages anonymity-preserving millimeter wave radar to count the chest rise and fall cadence and any rhythm deviances…but from a device a foot (or few) away from you. How Apple’s discerning a breathing pattern from a device strapped to your wrist without using an SpO2 sensor is beyond me…unless Apple’s leveraging the integrated ultra-wideband transceiver, akin to how Wi-Fi can be used to detect, count, and track the movements of people in a room? And speaking of sensors, Apple’s added water depth and temp sensing for swimmers, too.
And yes, the Apple Watch Ultra 2 now comes in a black patina, too. My wife still loves her first-generation Watch Ultra. And I still love that I got her a brand new-looking first-gen refurb, versus dropping notably more coin on a truly brand-new, marginally upgraded gen-two. ‘Nuff said.
Gen4 AirPods (and, ok, tweaked AirPods Pro and AirPods Max)
Here’s another example of the combo of notably advanced and more modestly evolved offerings, blurring product line distinctions in the process. Two years ago at this time, Apple had announced its third-generation entry-level AirPods, with only slight advancements over their precursors, along with more significant improvements to the second-generation AirPods Pro. Last year at this time, Apple migrated the AirPods Pro case from Lightning to USB-C (presumably to address European Union demands, among other factors). This year, the baseline AirPods got the notable-upgrade attention, migrating to the beefier-featured H2 SoC, moving from Lightning to USB-C too (although penny-pinching Apple dropped cables from the product packaging in the process), and including a more expensive active noise cancelation (ANC) product option.
That said, the overall noise cancellation capabilities of the AirPods Pro are probably still at least a bit better, due to their eartip-inclusive in-canal design (which some folks love, and others detest, versus the simpler in-ear approach). By the end of the year, Apple plans to add clinical-grade hearing aid support to the AirPods Pro 2, which is rumored to be gaining more meaningful third-gen upgrades such as heart rate measurement thereafter. And what about the over-ear AirPods Max? They’ve also migrated from Lightning to USB-C as of this week, along with gaining a refresh of the available color schemes but are otherwise identical to their precursors (which was a relief to this guy, who’d gotten a refurb “space gray” set for his birthday back in May). Funny how significant-discount sales can tip one off to a pending product line refresh (therefore an in-advance flush of existing-product inventory in retail channels), isn’t it?
The iPhone 16 family
It’s no secret at this point that, as this guy forecasted nearly a year ago, an increasing number (and percentage) of folks are holding onto their smartphones longer than they did before. Manufacturers’ responses to this trend are predictable: they specifically encourage folks who are upgrading in a given product cycle to pick higher-end, more expensive variants (“Pro” for both Apple and Google, for example), and in general they raise prices across the product line year-to-year. Note, for example, my commentary on Google’s last-month announcements. Or look at what Apple did a year ago at this time. Only the “Pro” variants of the iPhone 15 got the newest A17 (Pro) SoC; standard iPhone 15s were stuck with the A16 Bionic from the previous year’s iPhone 14 Pro. That processing differentiation, along with a larger RAM allocation on Pros, meant that they’re the only iPhone 15s capable of running upcoming Apple Intelligence (the company’s branded, preferred spell-out of the AI acronym) capabilities.
Speaking of Apple Intelligence, its resource needs have apparently driven a strategy deviance-from-recent-norm for at least this year. Both the conventional and “Pro” variants of the new iPhone 16 come with the latest A18 SoCs, and the chip versions are seemingly quite similar (although Apple never reveals clock speeds, for example); both include six CPU cores (two performance, four efficiency), and the GPU architectures deviate only in core counts (5 for the baseline A18, 6 for the “Pro”, presumably to maximize overall manufacturing yield). That said, Apple claims that as with past generational steps, it’s made iterative optimizations to the 16-core Neural Engine on-chip deep learning inference coprocessor in both SoCs.
Speaking of manufacturing, both A18 variants are fabricated on the same second-generation 3 nm TSMC process that also acts as the foundry source for the M4 SoC announced earlier this year and to date found only in the iPad Pro. As with past development-sharing examples, I’m guessing that there’s a fair bit of architectural commonality between the M4 and A18. Here’s a summary of Apple’s claimed performance improvements for its two A18 variants:
- A18: 30% faster CPU, 40% faster GPU, 17% higher memory bandwidth than A16 Bionic
- A18 Pro: 15% faster CPU, 20% faster GPU, 17% higher memory bandwidth than A17 Pro
Integrated RAM deviances (or not) between standard and “Pro” A18 SoCs (therefore phones based on them) are unknown at this point. And by the way, the non-“Pro” version of the A17 SoC is still MIA. Furthermore, with the iPhone 15 Pro phones (along with the iPhone 13 line) obsoleted as of this week, the broader A17 SoC line may now be deceased.
Both the standard and “Pro” versions of the iPhone 16 add a dedicated function (albeit multi-function) “camera control” button, in addition to carrying forward the “action” button that had replaced the “mute” switch on the iPhone 15 Pro models. Unsurprisingly, “Pro” iPhone 16s also offer enhanced front- and rear-camera allocations as compared to their conventional siblings:
- Rear:
- 48 Mpixel main with 24 mm focal length and ƒ/1.78 aperture
- 48 Mpixel ultrawide with 13 mm focal length and ƒ/2.2 aperture
- 12 Mpixel 5x telephoto with 20 mm focal length and ƒ/2.8 aperture (in contrast, the 5x support was only offered in last year’s iPhone 15 Pro Max variant)
- Front: 12 Mpixel with ƒ/1.9 aperture
Unlike in the past, the Pro and Pro Max have identical camera setups. Speaking of cameras, they now capture 4K 120 fps video, along with spatial audio (the latter courtesy of an integrated four-microphone array). Also unlike in the past, they’re both larger in display sizes than their non-Pro counterparts:
- iPhone 16: 6.1” diagonal
- iPhone 16 Plus: 6.7” diagonal
- iPhone 16 Pro: 6.3” diagonal
- iPhone 16 Pro Max: 6.9” diagonal
No-shows = Next announcements?
Those are the highlights, IMHO. For more, check out the coverage elsewhere, including archived liveblogs. But what didn’t arrive this week, some of which had been rumored beforehand? Well, there’s…
- A true next-gen Watch Ultra (3)
- A next-gen Watch SE (a plastic version had specifically been prognosticated)
- The aforementioned AirPods Pro 3
- Any new iPads (the existing iPad mini 6, which I own, is getting particularly “long in the tooth”)
- A new iPhone SE, or
- Any M4-based Macs, following up on the M4-based iPad Pro from earlier this year
My guess would be that, particularly focusing on that last bullet point, we’ll see at least one more round of announcements before the end of the year. Whether they’ll be press-release-only or clusters in an event (next month, mebbe?) is anyone’s guess at this point. But regardless, you know where you’ll find coverage of them. See you then!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Google’s fall…err…summer launch: One-upping Apple with a sizeable product tranche
- The 2024 WWDC: AI stands for Apple Intelligence, you see…
- Apple’s Spring 2024: In-person announcements no more?
- Apple’s latest product launch event takes flight
- Apple’s 2022 WWDC: Generational evolution and dramatic obsolescence
The post It’s September in Apple land, so guess what it’s time for? appeared first on EDN.
DIY RTD for a DMM
Psychologists tell us that frustration increases drive. I was driven to produce the circuit in this design idea by my increasing frustration with a collection of digital thermometers, all of which claimed accuracy to within 0.1°C but the readings of which were mismatched by anything up to a couple of degrees, showing a serious lack of precision.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Such thermometers generally use thermistor sensors, which have a roughly exponential relationship between temperature and resistance. That is tricky to convert into a usable signal with good linearity over a wide measurement range. Alternatives are thermocouples, with their low voltage outputs, and resistance temperature detectors (RTDs), which use the temperature coefficients of metals, usually platinum, to give decent outputs which are linear with respect to temperature over a wide range.
The final impetus for this project came from the discovery of some long-mislaid PT100 RTD sensors, which enabled something fairly precise: a box to convert the RTD’s resistance into a millivolt signal which could be read directly on a standard digital multimeter (DMM). As usual, I have tried to push a simple design to its limits and to wring every last ounce (or milli-Kelvin) of performance from it.
RTD basic circuit
RTDs are generally used in bridge circuits as shown in Figure 1.
Figure 1 A bridge incorporating an RTD produces a voltage output related to the temperature, but usually with an offset 0°C point.
R1 and R2 feed current through the RTD and R3, whose resistance is around the mean value of the RTD’s. The voltage across the bridge is then nearly proportional to the temperature, but with an offset. Note that weasel word “nearly”! For an exactly linear relationship between sensor resistance and temperature, R1 and R2 would need to be infinite, implying an unhelpfully infinite drive voltage—or they could be replaced with matched current sources, as in Figure 2.
Figure 2 Using matched current sources in the bridge gives a linear relationship between temperature and output, and helps us define a reference point of 0 °C. Choosing the currents carefully gives a bridge output of 1 mV / °C.
PT100 RTDs, which use (precisely doped) platinum wire or film, are defined to have resistances of 100 Ω at 0°C and 138.5 Ω at 100°C. Using identical currents in each leg of the bridge means that if the reference resistor R3 is 100 Ω, the bridge will be perfectly balanced at 0°C, with zero voltage between the output terminals. If the currents in each leg are set to (∆T°/∆R), or (100/38.5) = 2.597…mA, the differential output voltage will change by exactly 1 mV/°C. Measuring that output with a DMM on its millivolt range will then show the temperature directly.
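A quick numeric check of that scaling, using only the values quoted above, looks like this:

```python
# Numeric check of the bridge scaling, using only the values quoted above.
R0, R100 = 100.0, 138.5           # PT100 resistance at 0 degC and 100 degC, ohms
alpha = (R100 - R0) / 100.0       # 0.385 ohm/degC over this span
I_leg = 1e-3 / alpha              # leg current for 1 mV/degC -> 2.597... mA
print(f"leg current: {I_leg * 1e3:.4f} mA")

def bridge_output_mV(temp_C: float) -> float:
    r_rtd = R0 + alpha * temp_C            # linearised PT100
    return I_leg * (r_rtd - R0) * 1e3      # reference leg holds R3 = 100 ohms

for t in (0, 25, 37, 100):
    print(f"{t:>3} degC -> {bridge_output_mV(t):6.1f} mV")
# Prints 0.0, 25.0, 37.0 and 100.0 mV, i.e. a direct degC readout in millivolts.
```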
Practical RTD circuit
Figure 3 shows how to do this for real.
Figure 3 A practical circuit capable of delivering exactly 1 mV/°C.
A1-A/Q1/R5 and A1-B/Q2/R6 form the pair of constant-current sources—or rather sinks, since we have turned the circuit upside down. The common reference for each comes from D2, a precision 1.24-V reference, potted down to about 1.12 V, which is of course the (theoretical) voltage that 2.597…mA produces across 430 Ω. The differential voltage across the outputs is now just what we want: 0 V at 0°C and 100 mV at 100°C. (In a perfectly-designed world, where TCRs stayed constant from absolute zero to 2044 K—the melting point of platinum—we’d be using a current of 2.7315 mA.)
The other odds and ends to the left of the schematic are boring practicalities: a CR2032 3-V coin cell, a push-to-read switch, as well as a series diode and resistor feeding a white LED which just dims out at the minimum usable battery voltage of about 2.7 V, where the second decimal place starts to wander. (That power/low battery indication is adequate for lab use, if rather basic.) Consumption measured around 8 mA.
Calibration is necessary but easy. To set the 0°C point, immerse the RTD in crushed, melting ice and adjust R8 for zero output voltage. Then hang it in the steam above the water in a kettle just off the boil and trim R5 for an output of 100.0 mV. That’s it!
Considering error
It works. It’s simple. It’s correctly set up. What could possibly go wrong?
Firstly, there are the connections to the RTD, with their own resistances adding to that of the sensor. This unit only needed a meter or so of cable as it was purely for lab use. That length of 18 AWG (~1 mm²) wire has a loop resistance of ~90 mΩ, giving an error of ~0.02°C: ignorable, as is the second-order effect of the temperature coefficient of resistance (TCR) of the copper leads themselves. However, many RTD assemblies (as opposed to basic sensor elements) come with three wires, allowing a configuration where this error completely cancels out, assuming that all wires have the same resistance, as shown in Figure 4.
Figure 4 Three-wire connection to the RTD allows cancellation of the cable’s resistance.
Secondly, there is self-heating of the sensor. Most RTD circuits use a 1-mA sensing current, but our ~2.6 mA will dissipate more, around 1 mW. Basic RTD elements are quoted as having thermal resistances of about 20°C/W, so the error may be +~0.02°C, depending on the medium in which the device is immersed and whether that is still or moving. In still air, it could read at least 0.1°C higher than in flowing water. If you are going to use it in air, it’s probably best to set the zero point with the RTD in a cavity surrounded by ice and water rather than immersed in those.
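The self-heating estimate above is easy to verify with the numbers already given:

```python
# Self-heating check for the ~2.6-mA sensing current used here.
I = 2.597e-3                                   # sensing current, A
for label, r in (("at 0 degC (100 ohm)", 100.0), ("at 100 degC (138.5 ohm)", 138.5)):
    p = I ** 2 * r                             # dissipation in the RTD element, W
    print(f"{label}: {p * 1e3:.2f} mW -> ~{p * 20:.3f} degC rise at 20 degC/W")
# Roughly 0.67-0.93 mW, i.e. about +0.01 to +0.02 degC in a well-coupled medium.
```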
Next, there will be offsets and mismatches in the circuit, which will balance as long as the two current sinks have the same errors, which can be calibrated out. Q1 and Q2 should be matched for hFE because their base currents produce slight excess voltages across R5 and R6, and those need to match for the best temperature stability. (This is really finicky. The sensor may be swinging wildly in temperature, but the measuring circuitry should not be. And the LM385-1.2 reference has a very low voltage tempco in the room-temperature region.)
Using MOSFETs for Q1/2 would have been preferable, their gate currents being zero(ish), but the 3-V supply did not allow that, at least with the devices on hand. The finite but high values for the sinks’ compliances can be ignored.
Other errors in the 0° and 100° calibration points are possible. The ice for your calibration bath should ideally be made from distilled or at least de-ionized water. (For the curious differences between waters, see this article, and then ignore its implications for this device.)
Boiling point is trickier. At sea level under standard atmospheric pressure, its precise value is 99.97°C. The drop in pressure with height reduces water’s boiling point by about 1° for every 300 m increase. Check your altimeter and barometer and adjust accordingly. This is a useful tool for doing so.
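Applying the 1°-per-300-m rule of thumb gives a quick idea of how much the 100°C trim point moves with altitude; the sketch below is only as good as that approximation, so use a barometer for anything critical.

```python
# Steam-point estimate versus altitude, using the ~1 degC per 300 m rule of thumb.
def boiling_point_C(altitude_m: float) -> float:
    return 99.97 - altitude_m / 300.0

for alt in (0, 300, 1000, 1600):
    print(f"{alt:>4} m: ~{boiling_point_C(alt):.1f} degC")
# At ~1600 m, for example, trim R5 for roughly 94.6 mV rather than 100.0 mV.
```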
Lastly, there is the DMM with which this will be used. The 10-MΩ input resistance of most meters will introduce only a tiny error, which will be compensated for during calibration. Use the highest-resolution meter you have to set the zero-degree point, but use your target meter for the 100°C point to avoid any meter-calibration issues. Even the cheapest meters (sub-$/£/€5) usually have a 199.9 mV range; why not get one and keep it for thermometry?
Despite all the above fussing and dooming and glooming, only a little care is needed to achieve 0.1°C precision, which is much better than most thermistor-based thermometers can offer. For greater resolution, with another (accurate but imprecise) decimal place, use a 4½-digit meter for the readout. Professional metrologists may quibble over some of the details, but I hope not too violently.
Those MOSFETs for Q1/2 that we met earlier but rejected: with a higher supply voltage and a different op-amp, they can still be used. A TLV2372 (RRIO) would be ideal, but an LM358 works well, as it can sense down to ground and (just) drive up to the positive rail, with input offset currents which are adequately low and fairly constant. In testing, using ZVN3306A MOSFETs, that variant gave stable results with a supply ranging from 4.6 to 30 V. (R2 was increased for the higher voltages.)
Unequal bridge currents
So far, the currents in the two legs of the bridge have been equal, but they need not be, as the reference current defining the 0°C point can be much lower. Increasing R6/7/8 by ten times or so saves a couple of milliamps with no practical downsides that I can spot, especially when using the MOSFETs. The leads to the sensor must then be short, because the lead resistance compensation scheme shown in Figure 4 only works with equal currents. My Mark 2 version, shown in Figure 5, uses this 10:1 current ratio along with other changes to suit the 9-V supply. It too works fine, drawing about 6 mA.
Figure 5 A higher battery voltage allows the use of MOSFETs in the current sinks, while a lower current in the reference arm of the bridge saves some supply current.
It should be possible to feed the differential output through an instrumentation amp (with gain) to an ADC. Note the phrasing, which means I have neither tried nor even considered that approach in detail. This device was developed for lab use, not a process-control environment.
For a true, full-DIY version, around 100 Ω-worth of (very) fine copper wire should make a good sensor, if you have some patience. Copper’s TCR is close to that of doped platinum (Cu: 3.93 ppt/K; Pt: 3.85) so only alterations to R7/8 (to match the actual resistance) and slight re-trimming of R3 (for the copper’s TCR) would be needed. The 100-Ω RTD figure is common but not mandatory. For higher sensor resistances, use lower drive currents (giving less self-heating), adjusting R5 (and perhaps R6/7/8) to suit.
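To gauge how much wire that implies, the rough estimate below uses the textbook resistivity of copper; the gauge choices are illustrative assumptions, not a recommendation.

```python
# Rough length of fine copper wire needed for a ~100-ohm DIY sensor element.
# Gauge choices are illustrative; resistivity is for copper at about 20 degC.
import math

RHO_CU = 1.68e-8   # ohm*m

def wire_length_m(target_ohms: float, diameter_mm: float) -> float:
    area_m2 = math.pi * (diameter_mm * 1e-3 / 2) ** 2
    return target_ohms * area_m2 / RHO_CU

for gauge, dia_mm in (("38 AWG", 0.101), ("40 AWG", 0.080), ("42 AWG", 0.063)):
    print(f"{gauge}: ~{wire_length_m(100.0, dia_mm):.0f} m for 100 ohms")
# Roughly 48 m, 30 m, and 19 m respectively -- hence the note about patience.
```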
Perhaps the coil from that junk-boxed analog meter may yet come in handy?
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- Minimize measurement errors in RTD circuits
- Designing with temperature sensors, part three: RTDs
- RTDs provide differential temperature measurement
- Current source allows measuring three-wire RTD
- ADC Requirements for RTD Temperature Measurement Systems
- Ultra-low distortion oscillator, part 1: how not to do it.
The post DIY RTD for a DMM appeared first on EDN.
New UCIe IP raises the chiplet bandwidth bar to 40 Gbps
A new UCIe IP operating at up to 40 Gbps enables more data to travel efficiently across heterogeneous and homogeneous dies—chiplets—in today’s artificial intelligence (AI)-centric data center systems. It supports organic substrates as well as high-density advanced packaging technologies, allowing designers to explore the packaging options that best fit their requirements.
The 40 G UCIe IP solution from Synopsys includes PHY, controller, and verification IP, which makes it a complete protocol stack. The PHY with a controller on top facilitates a seamless connection between two dies via on-chip interconnect protocols—including AXI, CHI C2C, CXS, PCIe, CXL, and streaming—to allow a die-to-die connection between fabrics. Verification IP comes with Synopsys 3DIC Compiler and all the required design collateral and documentation for automated routing flow, interposer studies, and signal integrity analysis.
The 40G UCIe IP is built on a silicon-proven architecture with interoperability to multiple foundry processes. Source: Synopsys
Synopsys claims its 40G UCIe IP supports 25% more bandwidth than the UCIe specification, delivering a bandwidth density of 12.9 Tbps/mm between heterogeneous and homogeneous dies without impacting energy efficiency or silicon footprint. In other words, while complying with the latest UCIe 2.0 specification, the IP solution exceeds the standard with additional bandwidth efficiency.
“Heterogeneous integration with high-bandwidth die-to-die connectivity gives us the opportunity to deliver new memory chiplets with the efficiency needed for data-intensive AI applications,” said Jongwoo Lee, VP of system LSI IP development team at Samsung Electronics.
Key design features
The 40 G UCIe IP, while supporting both UCIe 1.1 and UCIe 2.0 standards, offers additional capabilities for designers to easily integrate die-to-die connectivity IP and simplify overall chiplet design. Start with a single clock reference that supports 100-MHz reference clocking for all UCIe PHYs, eliminating the need for additional high-frequency system PLLs.
The internal PLL generates all the high-speed peripheral clock (pclk) and lower local clock (lclk) frequencies needed during initialization and regular operation. Moreover, the lower local clock is shared with the controller to further simplify system integration. These capabilities simplify clocking architecture, optimize power, and speed up die-to-die link initialization without needing to load firmware.
Next, signal integrity monitors (SIMs) are integrated into the IP for diagnosis and analysis to ensure multi-die package reliability and quality. These test features embedded in the PHY allow high-coverage tests of the PHY at the wafer level for known good die (KGD) and after package assembly. Automotive chiplet designers can leverage the integrated SIM sensors and test and repair functions to build more reliable dies while addressing the demanding automotive requirements.
Then there are vendor-defined messages that enable the use of existing UCIe sideband channels to send low-speed, low-priority communication between dies without hampering the main data path. So, instead of interrupting the high-bandwidth path with this type of traffic, a die can use the UCIe sideband to send commands such as interrupts and telemetry to the other die.
Finally, hardware-based bring-up speeds initialization without needing to load heavy firmware on the remote chiplet. Otherwise, when a UCIe link bring-up uses heavy firmware to be loaded into the die, a separate path would be required to load the firmware. That’s wasteful and time consuming from a design standpoint.
Such capabilities and higher speeds bode well for the UCIe interconnect, a de facto standard for die-to-die connectivity. The support for advanced packaging can also make chiplets development more affordable.
Related Content
- Startup Tackles Chiplet Design Complexity
- How the Worlds of Chiplets and Packaging Intertwine
- Chiplets advancing one design breakthrough at a time
- Chiplets diary: Three anecdotes recount design progress
- Chiplets diary: Controller IP complies with UCIe 1.1 standard
The post New UCIe IP raises the chiplet bandwidth bar to 40 Gbps appeared first on EDN.
20MHz VFC with take-back-half charge pump
Way back in 1986, famed analog innovator Jim Williams, in “Designs for High Performance Voltage-to-Frequency Converters,” published his “King Kong” 100 MHz VFC. I have never seen its equal. Certainly Figure 1’s little circuit, topping out around 20 MHz, is nowhere close.
Figure 1 Take-back half (TBH) charge pump gives simple VFC reasonable performance at 20 MHz.
Wow the engineering world with your unique design: Design Ideas Submission Guide
However, although left in Kong’s dust with its doors blown off, Figure 1’s VFC is nevertheless several times faster than commercially available VFCs (e.g., the 4-MHz VFC110) while conveniently running on less than 10 mA from a single +5-V supply/reference.
What makes it work at such a high output frequency (without K. Kong’s complexity) is (mainly) the self-compensating TBH diode charge pump described in an earlier design idea: “Take-back-half precision diode charge pump”. We’ll get to that shortly.
Meanwhile, here’s an overview.
A 0-to-1 mA full-scale input metered by R1 is integrated on C1, causing the input amplifier’s output to ramp up, turning on current sink Q1. The sink current ramps down the voltage at Schmitt-trigger U1 pin 1 until its negative trigger level (~1.5 V) is crossed. This starts a cascade of transitions through the three-inverter daisy chain delay line. Pin 2 snaps high, making pin 4 go low, flipping pin 6 high. Propagation through the chain takes about 20 ns. Arrival of the ramp-reset pulse at pin 6 is fed back through D5 to pin 1, pushing it through U1’s positive trigger level. This initiates a complementary wave through the daisies, eventually completing the cycle in ~40 ns.
Oscillator frequency is thus (very roughly) proportional to R1 input current. It’s the job of the pump and op-amp to make it accurately so. The trick for doing this relies on the TBH pump with its two funny looking anti-parallel diode pairs: D1 D2 and D3 D4.
D3 and D4 couple input-balancing negative feedback current to C1 that’s theoretically equal to -100 µA/MHz but in practice is reduced by sundry error terms caused by various diode non-idealities. These include forward voltage drop, reverse recovery time, stray and shunt capacitances, etc.
Meanwhile opposite-polarity D1 and D2 couple positive feedback current to C1 that’s (again theoretically) equal to +50 µA/MHz but is practically reduced by exactly the same troublesome list of nonidealities listed for D3 and D4.
Consequently, when the two opposing currents are summed on C1, the errors terms neatly cancel, leaving only the desired -(100 – error) + (50 – error) = -50 µA/MHz of accurate negative feedback, making:
Fout = 20 MHz × Vin × (1000 / R1)
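A quick check of that relationship, using the -50 µA/MHz net feedback derived above (the Vin and R1 values here are arbitrary examples, not values from Figure 1):

```python
# Check of the transfer function: the TBH pump's net -50 uA/MHz of feedback
# must balance the input current, so Fout = Iin / (50 uA/MHz).
# The Vin and R1 values below are arbitrary examples, not values from Figure 1.
def fout_MHz(vin_V: float, r1_ohm: float) -> float:
    i_in = vin_V / r1_ohm          # input current metered by R1, amps
    return i_in / 50e-6            # 50 uA of net feedback per MHz

print(fout_MHz(vin_V=5.0, r1_ohm=5000.0))   # 1 mA full scale -> 20.0 MHz
print(fout_MHz(vin_V=2.5, r1_ohm=5000.0))   # half scale      -> 10.0 MHz
```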
Please see “Take-back-half precision diode charge pump” for a somewhat less abbreviated derivation.
A few picky design details include these items.
Q1’s base drive resistor was chosen according to the 2N3904 datasheet min/max beta range to be low enough to allow sufficient collector current for a full 20 MHz, but high enough to prevent dragging down D5 and U1 pin 6 excessively and killing oscillation because pin 1’s positive trigger level can’t be reached. This latter condition would potentially cause the converter to latch up.
Leakage-killer R4 prevents U1, D5, and Q1 summed leakage currents from generating zero offset oscillation even when the op-amp has turned Q1 off.
If you can’t find a use for the remaining unused elements of U1, be sure to ground their floating inputs or tie them to +5 V.
Banana, anyone?
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Take-back-half precision diode charge pump
- Temperature controller has “take-back-half” convergence algorithm
- Take-back-half thermostat uses ∆Vbe transistor sensor
- Generate proportional feedback from a bang-bang sensor with a “take back half” controller
- Precision temperature controller has thermal-gradient compensation
- Synthesize variable in-circuit Rs, Ls, and Cs
The post 20MHz VFC with take-back-half charge pump appeared first on EDN.
SiC MOSFETs drive automotive architectures
Navitas has released a portfolio of third-generation automotive-qualified SiC MOSFETs in D2PAK-7L and TOLL surface-mount packages. Leveraging the company’s trench-assisted planar technology, the Gen-3 Fast SiC devices enable high-speed, cool-running operation for EV charging, traction, and DC/DC conversion.
According to Navitas, the Gen-3 Fast MOSFETs achieve up to 25°C lower case temperatures compared to conventional devices, resulting in an operating life that is up to three times longer than alternative SiC products. The new 650-V MOSFETs, with RDS(ON) ratings ranging from 20 mΩ to 55 mΩ, are designed for 400-V EV battery architectures. The 1200-V Gen-3 Fast MOSFETs, offering RDS(ON) values from 18 mΩ to 135 mΩ, are optimized for 800-V systems.
Both the 650-V and 1200-V ranges are AEC-Q101 qualified in the conventional D2PAK-7L (TO-263-7) package. For 400-V EVs, the 650 V-rated TOLL package offers several advantages: a 9% reduction in junction-to-case thermal resistance, a 30% smaller PCB footprint, 50% lower height, and 60% smaller overall size compared to the D2PAK-7L. These improvements enable high power density and fast switching with minimal package inductance of just 2 nH.
The automotive 650-V and 1200-V G3 Fast SiC MOSFETs are now available for purchase. For more information, contact info@navitassemi.com.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post SiC MOSFETs drive automotive architectures appeared first on EDN.
Wideband amplifier meets sub-6 GHz requirements
A wideband RF power amplifier module from Elite RF covers signals from 20 MHz to 6 GHz, delivering a maximum output power of 20 W. Elite RF can also combine these 20-watt amplifiers to create higher-power systems, such as 100-watt models, enabling signal transmission over greater distances.
This wideband amplifier is particularly effective in counter-drone systems, addressing growing security and privacy concerns related to drones’ increasing presence across industries. The amplifier supports key communication frequencies in the ISM band, including 433 MHz, 915 MHz, 2.45 GHz, and 5.8 GHz.
The amplifier module features a compact and rugged design for seamless integration into existing systems. With this new addition to its product line of RF amplifiers, which spans frequencies up to 40 GHz, Elite RF aims to provide one-stop power amplifier solutions to RF system integrators.
For more information or to request a price quote, contact sales@eliterf.com.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Wideband amplifier meets sub-6 GHz requirements appeared first on EDN.
Wireless combo module targets embedded devices
KAGA FEI’s WKI611AA1 Wi-Fi 6 and Bluetooth 5.4 combo module simplifies the development of low-power embedded devices. With a built-in antenna and multiple certifications, it reduces antenna development time and certification costs for applications such as industrial automation, IoT gateways, surveillance cameras, and smart home appliances.
The WKI611AA1 module employs NXP Semiconductor’s IW-611, a highly integrated dual-band 2.4/5-GHz 1×1 Wi-Fi 6 and Bluetooth/Bluetooth Low Energy 5.4 chip. Its Wi-Fi subsystem includes a Wi-Fi MAC, baseband, and direct-conversion radio with an integrated power amplifier, low-noise amplifier, and transmit/receive switch, eliminating the need for an external RF front-end module. The chip’s independent Bluetooth subsystem supports Bluetooth profiles.
Dedicated CPUs and memories for both the Wi-Fi and Bluetooth subsystems enable real-time, independent protocol processing. Interfaces to external host processors include SDIO 3.0 for Wi-Fi and UART for Bluetooth.
The WKI611AA1 module comes in an LGA surface-mount package that is 25.0×15.7×2.1 mm. It operates from a 3.3-V/1.8-V power supply over a temperature range of -40°C to +85°C. Certifications for the combo module include Radio Law MIC (Japan), FCC (USA), and ISED (Canada).
The WKI611AA1 wireless combo module will be available for sampling in December 2024, with mass production slated to begin in May 2025.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Wireless combo module targets embedded devices appeared first on EDN.
PCIe 4.0 SSD optimizes system performance
Powered by a Gen 4×4 NVMe controller, the NV3 PCIe 4.0 SSD from Kingston Digital delivers sequential read and write speeds of up to 6000 MB/s and 5000 MB/s, respectively. The drive offers storage capacities of 500 GB, 1 TB, 2 TB, and 4 TB in a compact single-sided M.2 22×80×2.3-mm form factor.
The NV3 solid-state drive provides high-speed performance for tasks ranging from editing to gaming. Its low power consumption and heat generation boost overall system efficiency. The drive’s compact form factor makes it well-suited for thin laptops and small PCs, integrating seamlessly via M.2 connectors.
Endurance of the NV3 SSDs ranges from 160 TBW for the 500-GB model to 1280 TBW for the 4-TB model. Operating over a temperature range of 0°C to 70°C, the drives also feature an MTBF of 2,000,000 hours.
NV3 drives include a 1-year subscription to Acronis True Image software for disk cloning, along with Kingston SSD Manager for monitoring drive health, tracking disk usage, updating firmware, and securely erasing data.
SSD models with 500-GB, 1-TB, and 2-TB storage capacities are available now, with the 4-TB model launching in Q4 2024.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post PCIe 4.0 SSD optimizes system performance appeared first on EDN.
14-bit scopes boost resolution for general use
The InfiniiVision HD3 series oscilloscopes from Keysight employ a 14-bit ADC, offering four times the vertical resolution of 12-bit scopes. With a noise floor of 50 µV RMS, they also reduce noise by half compared to other general-purpose scopes.
Covering bandwidths between 200 MHz and 1 GHz, the HD3 series enables engineers to detect even the smallest signal anomalies. Its 14-bit resolution, low noise floor, and update rate of 1.3 million waveforms/s ensure fast and precise debugging across all measurements.
The oscilloscopes provide two or four analog channels, along with 16 digital channels. A deep memory of up to 100 million points captures longer time spans at the full sample rate of 3.2 Gsamples/s, enhancing measurement and analysis results. Additionally, the HD3 series introduces Fault Hunter software, which automatically analyzes signal characteristics based on user-definable criteria.
Prices for the InfiniiVision HD3 series start at $8323 for a 2-channel oscilloscope and $9187 for a 4-channel model.
InfiniiVision HD3 product page
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post 14-bit scopes boost resolution for general use appeared first on EDN.
Interconnect underdogs steering chiplet design bandwagon
The chiplets movement is gaining steam, and it’s apparent from how this multi-die silicon premise is dominating the program of the AI Hardware and Edge AI Summit to be held in San Jose, California from 10 to 12 September 2024. The annual summit focuses on deep tech and machine learning ecosystems to explore advancements in artificial intelligence (AI) infrastructure and edge deployments.
At the event, Alphawave Semi’s CTO Tony Chan Carusone will deliver a speech on chiplets and connectivity while showing how AI has emerged as the primary catalyst for the rise of chiplet ecosystems. “The push for custom AI hardware is rapidly evolving, and I will examine how chiplets deliver the flexibility required to create energy-efficient systems-in-package designs that balance cost, power, and performance without starting from scratch,” he said while talking about his presentation at the event.
Figure 1 Chiplets have played a vital role in creating silicon solutions for AI, and that’s extending to 6G communication, data center networking, and high-performance computing (HPC). Source: Alphawave Semi
At the summit, Alphawave Semi will showcase an advanced HBM3 sub-system designed for AI workloads as well as AresCORE, a 3-nm 24-Gbps UCIe IP integrated with TSMC CoWoS advanced packaging. There will also be a live demonstration of die-to-die (D2D) traffic at 24 Gbps per lane.
LG’s chiplet design
Another chiplets-related announcement involves leading consumer electronics manufacturer LG Electronics, which has created a system-in-package (SiP) encompassing chiplets with processors, DDR memory interfaces, AI accelerators, and D2D interconnect. Blue Cheetah Analog Design provided its BlueLynx D2D interconnect subsystem IP for this chiplet-based design.
Figure 2 Chiplet designs demand versatile interconnect solutions that minimize die-to-die latency and support a variety of packaging requirements. Source: Blue Cheetah
BlueLynx D2D interconnect provides customizable physical (PHY) and link layer chiplet interfaces and supports both Universal Chiplet Interconnect Express (UCIe) and Bunch of Wires (BoW) standards. Moreover, the PHY IP solutions can be integrated with on-die buses using popular standards such as AMBA, CHI, AXI, and ACE.
The D2D interconnect IP is available for 16-nm, 12-nm, 7-nm, 6-nm, 5-nm, and 4-nm process nodes across multiple fabs. It also supports both standard and advanced packaging, along with multiple bump pitches, metal stacks, and orientations.
Related Content
- Startup Tackles Chiplet Design Complexity
- How the Worlds of Chiplets and Packaging Intertwine
- Chiplets advancing one design breakthrough at a time
- Chiplets diary: Three anecdotes recount design progress
- Chiplets diary: Controller IP complies with UCIe 1.1 standard
The post Interconnect underdogs steering chiplet design bandwagon appeared first on EDN.
Fuse failures
A fuse placed in series with a current path protects against excessive current flow in that path. Although fuses are rated for clearing in terms of “I²t,” where “I” is in amperes and “t” is time, my personal view is that such ratings are of dubious value from a protective calculation standpoint. To me, a fuse is either a “fast blow” device or a “slow blow” device at whatever amperage applies, and which type of fuse to select isn’t always cut and dried, straightforward, or unambiguously obvious.
Some fuses contain their innards inside glass, which allows you to see the current-carrying element, and some do not. Where glass lets you see that element, an actual fuse blowout can be instructive about the overload condition that led to it and can perhaps lead you to reselect that fuse’s rating or to fix some other problem.
Figure 1 An intact fuse and two blown fuses: one from a moderate current overload and one from a massive current overload.
The middle case in the above figure is a fuse that is probably underrated for whatever application it has been serving. Using a somewhat higher I²t device might be a good idea.
However, the lower case shows a fuse that got hit with a blast of overload current that was way, way, way beyond reason, and something elsewhere in the system in which this fuse played a role had just plain better be corrected.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- What caused this outlet strip’s catastrophic failure?
- Simple blown-fuse indicator sounds an alarm
- Blown fuse has a meltdown
- Metal eFuse teardown
- PWM circuit uses fuse to sense current
- Managing inrush and system protection for electrical systems
- A new level of circuit protection: The e-fuse
The post Fuse failures appeared first on EDN.
Brute force mitigation of PWM Vdd and ground “saturation” errors
An excerpt from Christopher Paul’s “Parsing PWM (DAC) performance: Part 1—Mitigating errors”:
“I was surprised to discover that when an output of a popular µP I’ve been using is configured to be a constant logic low or high and is loaded only by a 10 MΩ-input digital multimeter, the voltage levels are in some cases more than 100 mV from supply voltage VDD and ground…Let’s call this saturation errors.”
Wow the engineering world with your unique design: Design Ideas Submission Guide
The accuracy of PWM DACs depends on several factors, but none is more important than their analog switching elements' ability to reliably and precisely output zero and reference voltage levels in response to the corresponding digital states. Sometimes, however, as Christopher Paul observes in the cited design idea (Part 1 of a 4-part series), they don't. The mechanism behind these deviations isn't entirely clear, but if they could be reliably eradicated, the impact on PWM performance would have to be positive. Figure 1 suggests a (literally) brute-force fix.
Figure 1 U1 is a multi-pole PWM switch (e.g., a 74AC04 hex inverter) in which op-amp A1 forces the switch's zero state to accurately track 0 = zero volts, while op-amp A2 does the same job for 1 = Vdd.
U1 pin 5’s connection to pin 14 drives pin 6 to logic 0, sensed by A1 pin 6. A1 pin 7’s connection to U1 pin 7 forces the pin 6 voltage to exactly zero volts, and thereby forces any U1 output to the same accurate zero level when the associated switch is at logic 0.
Similarly, U1 pin 13’s connection to pin 7 drives pin 12 to logic 1, sensed by A2 pin 2. A2 pin 1’s connection to U1 pin 14 forces the pin 12 voltage to exactly Vdd, and thereby forces any U1 output to the same accurate Vref level when the associated switch is at logic 1.
Thus, any extant “saturation errors” are forced to zero, regardless of the details of where they’re actually coming from.
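To see why this matters numerically, consider the filtered (average) output of a PWM switch with imperfect rails. The sketch below is a minimal illustration, not part of the design idea; the 100-mV rail errors echo the observation quoted above, and the 37% duty cycle is an arbitrary example.

```python
# Minimal sketch: effect of rail "saturation errors" on a PWM DAC's
# filtered output, and what servoing the rails to 0 V and Vdd restores.
VDD = 5.0      # reference/supply voltage, volts
duty = 0.37    # PWM duty cycle (fraction of period spent at logic 1)

def pwm_dc_average(v_low, v_high, d):
    """DC (post-filter) output of a PWM switch with the given rail voltages."""
    return d * v_high + (1.0 - d) * v_low

ideal     = pwm_dc_average(0.0, VDD,       duty)   # perfect rails
saturated = pwm_dc_average(0.1, VDD - 0.1, duty)   # ~100-mV saturation errors
corrected = pwm_dc_average(0.0, VDD,       duty)   # rails servoed by A1 and A2

print(f"ideal     = {ideal:.4f} V")
print(f"saturated = {saturated:.4f} V  (error {saturated - ideal:+.4f} V)")
print(f"corrected = {corrected:.4f} V")
```

Note that the error in this simple model varies with duty cycle (0.1 − 0.2·d volts here), so it cannot be removed by a one-time offset trim, which is one reason servoing the rails themselves is attractive.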
Vdd will typically be ca. 5.00 V, and V+ and V- can come from a single 5-V supply via any of a number of discrete or monolithic rail-boost circuits. Figure 2 is one practical possibility.
Figure 2 A practical source for V+ and V-; set R1 = R2 = 200 kΩ for ∆ = 1 V.
The Figure 2 circuit was originally described in “Efficient digitally regulated bipolar voltage rail booster”.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Parsing PWM (DAC) performance: Part 1—Mitigating errors
- Efficient digitally regulated bipolar voltage rail booster
- Cancel PWM DAC ripple with analog subtraction—revisited
- Fast-settling synchronous-PWM-DAC filter has almost no ripple
- Minimizing passive PWM ripple filter output impedance: How low can you go?
- Fast PWM DAC has no ripple
- LTC Design Note: Accurate, fast settling analog voltages from PWM signals
The post Brute force mitigation of PWM Vdd and ground “saturation” errors appeared first on EDN.
Raspberry Pi Products Now Available at TME
Raspberry Pi, globally recognized for its single-board computers, has revolutionized the education and hobbyist sectors and found applications in industrial equipment. Their versatile products are now included in the TME product catalogue, making advanced technological solutions more accessible.
Diverse Product Range
Raspberry Pi's offerings extend well beyond their renowned single-board computers. Their motherboards feature essential ports like USB, HDMI, and Ethernet, along with SD card slots and GPIO connectors for versatile project integration. The latest Raspberry Pi 5 model introduces a quad-core 64-bit Broadcom BCM2712 processor, up to 8 GB of RAM, and enhanced features such as a PCIe extension port, a power switch, and a real-time clock (RTC).
Innovative Models
Raspberry Pi 5: It is equipped with the quad-core, 64-bit Broadcom BCM2712 SoC, based on the Arm Cortex-A76 architecture and clocked at 2.4 GHz, delivering a 2-3x increase in CPU performance relative to Raspberry Pi 4. Moreover, the computer can be fitted with up to 8 GB of RAM and includes a VideoCore VII GPU supporting the OpenGL and Vulkan technologies.
Raspberry Pi 400: This model integrates the RPi 4 board into a keyboard housing, reminiscent of classic microcomputers. It comes with a mouse, power supply, pre-installed operating system, and a detailed manual, making it particularly appealing to beginners and younger users.
Raspberry Pi Zero: Known for its compact size and energy efficiency, the Raspberry Pi Zero is ideal for mobile devices and IoT projects. Despite its smaller form factor, it includes essential features like an HDMI connector, USB output, SD card slot, CSI port, and a built-in wireless communication module (Zero W variant).
For projects requiring different formats or more powerful processing capabilities, Raspberry Pi offers Compute Modules. These miniaturized versions provide the core motherboard components without additional ports, allowing for custom configurations via high-density connectors. The CM4 variants offer SMD board-to-board connectors for an even lower profile, enhancing their flexibility for various applications.
RP2040 Microcontroller
Recognizing the need for simpler projects, Raspberry Pi developed the RP2040 microcontroller, based on the Arm Cortex-M0+ architecture. This microcontroller, featured in the Raspberry Pi Pico module, includes 264 kB of RAM, supports external memory of up to 16 MB, and integrates various peripherals such as serial bus controllers, ADC converters, and PWM generators. The Pico module, with its small size and ease of use, is ideal for a wide range of applications, as the short example below suggests.
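As a flavor of how quickly those peripherals can be exercised, here is a minimal MicroPython sketch for the Pico; the pin choices (GPIO15 for PWM, GPIO26/ADC0 for the analog input) are arbitrary example assignments, not a recommendation from TME or Raspberry Pi.

```python
# Minimal MicroPython sketch for the Raspberry Pi Pico (RP2040):
# drive one PWM generator and read one ADC channel. Pin choices are
# arbitrary example assignments.
from machine import Pin, PWM, ADC
import time

pwm = PWM(Pin(15))        # PWM output on GPIO15
pwm.freq(1_000)           # 1-kHz carrier
pwm.duty_u16(32768)       # ~50% duty cycle

adc = ADC(26)             # ADC0 on GPIO26

while True:
    raw = adc.read_u16()              # 0..65535
    volts = raw * 3.3 / 65535         # scale to the 3.3-V reference
    print("ADC0 = {:.3f} V".format(volts))
    time.sleep(0.5)
```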
Comprehensive Accessories
Raspberry Pi also offers a wide range of accessories, including power supply modules and enclosures designed to ensure trouble-free operation and practical usability. Enclosures protect the PCB and components while providing access to all necessary ports and connectors. Raspberry Pi also manufactures peripherals like mice and keyboards, as well as the Raspberry Pi Touch Display, a 7-inch screen with a touch panel that connects via DSI and is powered through GPIO.
The inclusion of Raspberry Pi products in the TME catalogue significantly broadens the availability of cutting-edge technology for educators, hobbyists, and industrial designers. With TME’s extensive inventory, the latest Raspberry Pi solutions are now within easy reach, ready to bring your ideas to life.
The post Raspberry Pi Products Now Available at TME appeared first on EDN.