A new embedded software platform meshes analog and digital
Analog and mixed-signal chipmakers are increasingly aiming to integrate analog signal chains with embedded processing platforms to build vertical solutions, and today’s announcement from Analog Devices Inc. reinforces this design trend.
ADI is creating what it calls a software-defined version of itself by providing a base software enablement platform that offers drivers, operating systems, middleware, and libraries built on a robust and secure software supply chain. The Wilmington, Massachusetts-based company calls it CodeFusion Studio.
Figure 1 The embedded software development platform encompasses core technologies like amplifiers, RF, and sensors as well as embedded digital software for processing, algorithms and security along with solution stacks on top. Source: Analog Devices Inc.
“ADI is expanding its digital portfolio, from lower-cost MCUs for precision applications to more advanced heterogeneous compute devices to analog chips with a digital interface,” said Rob Oshana, senior VP of Software and Security Group at ADI. “CodeFusion Studio provides a single, unified development environment for our digital portfolio.”
CodeFusion Studio
CodeFusion Studio—a software development environment tailored for ADI’s analog and digital technologies—is based on Microsoft’s Visual Studio Code. It comprises three core components. First, a software development kit (SDK) includes drivers, OSes, middleware, libraries, and domain-specific reference applications.
Second, an integrated development environment (IDE) facilitates heterogeneous application development, debugging, and optimization. Third, configuration and productivity tools assist in system and core configuration, end-to-end security implementation, technical discovery, and efficient data flow through the system.
Figure 2 Essential features include breakpoints, disassembly, heterogeneous debugging, and RTOS thread awareness. Source: Analog Devices Inc.
Oshana notes that everything in CodeFusion Studio—from the SDK to the IDE and configuration tools—is open source, offering design engineers greater control over their software development pipeline. “Open-source tooling provides developers full ownership of their software development pipeline,” he said.
CodeFusion Studio leverages a modern IDE and command-line interface, encompassing open-source configuration and profiling tools to simplify development on heterogeneous processors. It also makes SDKs easily accessible, including Zephyr® and other community offerings backed by a broad ecosystem of technology plug-ins and providers.
Next, the new software platform supports the Assure Trusted Edge Security Architecture, ADI’s hardware and software security foundation that aims to provide a simple and flexible way to natively implement security inside semiconductor devices. It includes hardware security capabilities within select ADI hardware products and software layers with application programming interfaces (APIs) available within CodeFusion Studio.
Developer Portal
Besides CodeFusion Studio, an embedded software development environment, ADI has also unveiled a Developer Portal, which centralizes code samples, product documentation, and other resources to efficiently work with ADI’s technology and alleviate complexity. The Developer Portal brings together resources including tools, drivers, SDKs, sample code, tutorials, documentation, community news, and updates on design events.
ADI wants developer.analog.com to become the primary place for developers to find the tools and resources they need to create new products and solutions and to stay current with the company’s hardware and software offerings.
Figure 3 The new embedded software development platform offers features like quick project setup as well as clock and configuration tools. Source: Analog Devices Inc.
At a time when embedded software engineering is becoming an increasingly complex challenge, development environments such as CodeFusion Studio, built from the ground up, can help simplify the embedded development experience, especially when it comes from an analog and mixed-signal design house like ADI, one already engaged in algorithm development for signal processing applications.
“We looked for new ways for ease of use by reducing complexity and we didn’t have to worry about old, legacy software offerings,” Oshana said. He added that legacy platforms are often proprietary and fail to provide open, extensible interfaces essential for modern heterogeneous systems.
“Silicon vendors rarely think about consumers and debug. Fantastic that ADI is addressing this,” noted a user after reviewing this new development platform. “I needed this 20 years ago.”
Related Content
- Embedded Basics
- All Things Embedded
- 8 pillars of embedded software
- Mixed-Signal = Analog + Digital, or is there more to it?
- 5 Steps To Designing An Embedded Software Architecture
The post A new embedded software platform meshes analog and digital appeared first on EDN.
Quartz oscillator with shock excitation
The circuit in Figure 1 seems utterly simple but demonstrates unusual behavior. It produces an almost-square wave at odd-integer harmonics of the quartz crystal, including its fundamental frequency.
You can determine the output frequency of the circuit (Fo) simply by varying a resistor’s value.
Figure 1 A simple circuit that produces an almost-square wave at odd-integer harmonics of the quartz crystal.
The circuit uses shock excitation to sustain the resonant oscillation of the quartz. In contrast to well-known oscillators, this circuit exploits feedback from its highly nonlinear output to provide the shock excitation of the quartz resonator, which synchronizes the circuit oscillation.
One potentially strange choice was to use a Schmitt trigger as the active element, though this trigger is far more useful than an ordinary inverter; in this case, it also enables the unusual abilities of the circuit.
The output square wave of a Schmitt trigger contains only components at odd-integer harmonic frequencies (of the form (2k−1)·f).
Hence, filtering out the undesirable harmonics with the RC low-pass filter (see the equivalent circuit in Figure 2) can provide quite good excitation for the quartz. (Here, C is the total capacitance associated with the quartz node: parasitic capacitance plus the capacitances of the trigger input and the quartz itself.)
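For reference, this follows from the Fourier series of a symmetric square wave of amplitude A (about its mean level), which contains only odd harmonics:

$$v(t) = \frac{4A}{\pi}\sum_{k=1}^{\infty}\frac{\sin\!\left(2\pi\,(2k-1)\,f\,t\right)}{2k-1}$$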
Figure 2 The RC low-pass filter equivalent circuit that provides excitation for the quartz resonator.
Assuming the rising threshold Vt1 and the falling threshold Vt0 are symmetrical (the case for the 54HC14), the frequency of a free-running Schmitt-trigger RC oscillator can be found approximately by the equation:
Fofr = 1/(2·R·C·ln2) ≈ 0.72/(R·C)
To make the synchronization possible, this free-run frequency must be slightly less than the target frequency.
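As a quick numerical check, the short C sketch below evaluates the free-run formula against a 100-kHz crystal. The 50-pF total node capacitance is purely an illustrative assumption; in a real build, C is set by board parasitics plus the trigger-input and crystal capacitances.

```c
#include <stdio.h>
#include <math.h>

/* Free-running frequency of the Schmitt-trigger RC oscillator with
   symmetrical thresholds (54HC14): Fofr = 1/(2*R*C*ln2) ~ 0.72/(R*C). */
static double free_run_hz(double r_ohms, double c_farads)
{
    return 1.0 / (2.0 * r_ohms * c_farads * log(2.0));
}

int main(void)
{
    const double c  = 50e-12;  /* assumed total node capacitance (illustrative) */
    const double fq = 100e3;   /* target quartz frequency, Hz */

    for (double r = 150e3; r <= 250e3; r += 50e3) {
        double fofr = free_run_hz(r, c);
        printf("R = %3.0f k: Fofr = %5.1f kHz -> %s\n", r / 1e3, fofr / 1e3,
               fofr < fq ? "can synchronize" : "too fast to lock");
    }
    return 0;
}
```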
Note: if this condition is not met, the circuit can oscillate on a stray combination of sub-harmonics of the quartz, or on some unrelated frequency determined mainly by the RC. The phase noise of such an oscillator also remains an open question.
The circuit may be less useful at higher frequencies, since a higher frequency means a lower value of R and therefore heavier shunting of the resonator by this resistor. Lower values of R also distort our simple model of a square-wave oscillator.
But it is well suited to rather low quartz frequencies; it has been used at frequencies in the range from 32 kHz to 1 or 2 MHz.
For instance, with Fq = 100 kHz, values of R in the range of 150k to 250k correspond to the main frequency (100 kHz), R in the range of 85k down to 40k gives the 3rd harmonic (300 kHz), values in the range of 65k to 75k give the 5th harmonic (500 kHz), and so on. Of course, all these values are given only as a guide, for the case of a 54HC14 and Edd = 5 V.
—Peter Demchenko studied math at the University of Vilnius and has worked in software development.
Related Content
- Crystal-oscillator circuit is ultralow power
- Crystal Oscillator Fundamentals and Operation—Part II
- Crystal Oscillator Fundamentals and Operation—Part III
- Making oscillator selection crystal clear
- Oscillators: How to generate a precise clock source
The post Quartz oscillator with shock excitation appeared first on EDN.
Portable signal generators reach 26 GHz
Two analog signal generators from Keysight enable component and device characterization at frequencies up to 26 GHz. The AP5001A RF signal generator covers 9 kHz to 6.1 GHz, while the AP5002A microwave signal generator spans 9 kHz to 26 GHz. Their compact, lightweight design allows easy transport and efficient use of lab space.
Both generators deliver accurately leveled output power at 1 GHz, ranging from -120 dBm to +17 dBm for the AP5001A and up to +23 dBm for the AP5002A. Each instrument provides an OCXO-stabilized signal with -130 dBc/Hz phase noise (1-GHz carrier, 20-kHz offset) and mHz frequency resolution for precise measurements. The fast switching speed of the AP5001A, as low as 20 µs, accelerates testing and increases throughput.
Keysight’s analog signal generators offer modulation capabilities, including AM, FM, PM, pulse, pulse train, and frequency chirps. They come equipped with an LCD touch screen, remote desktop software, and a carrying handle. The company states that the generators are future-ready, with all frequencies and options available for license upgrades.
Prices for the AP5001A and AP5002A signal generators start at $7357 and $17,850, respectively.
The post Portable signal generators reach 26 GHz appeared first on EDN.
Ideal diode switch elevates USB-C safety
Offering Limited Power Source (LPS) functionality, the AOZ1390DI ideal diode protection switch from AOS improves the efficiency and safety of USB Type-C applications. LPS limits the current and voltage supplied to the load, protecting sensitive components from conditions such as overcurrent and overvoltage.
In multiport ORing or parallel power applications, the LPS(B) pin of the AOZ1390DI can be connected to the Disable(B) pin of one or more AOZ1390DI devices across different ports. The LPS feature acts as a watchdog, disabling the port if another port in the same system is faulty or damaged. The ability to prevent excessive power flow from malfunctioning ports makes the AOZ1390DI well-suited for multiport USB-C Power Delivery (PD).
The AOZ1390DI features Ideal Diode True Reverse Current Blocking (IDTRCB), effectively preventing undesired reverse current from VOUT to VIN. An integrated back-to-back MOSFET provides a typical on-resistance of 18 mΩ and a high Safe Operating Area (SOA). Input operating voltage ranges from 3.3 V to 23 V, with both VIN and VOUT terminals rated for an absolute maximum of 30 V.
The AOZ1390DI ideal diode protection switch is available in two variants. The -01 variant automatically restarts after fault conditions are cleared, while the -02 version latches the power switch off.
Both the AOZ1390DI-01 and AOZ1390DI-02 cost $1.40 each in lots of 1000 units. They are available in production quantities with a standard lead time of 12 weeks.
The post Ideal diode switch elevates USB-C safety appeared first on EDN.
FPGA is optimized for high-bandwidth workloads
The Achronix Speedster AC7t800 FPGA delivers 12 Tbps of fabric bandwidth, making it well-suited for AI/ML, 5G/6G, and data center applications. Manufactured on TSMC’s 7-nm FinFET process, this midrange FPGA features a 2D network-on-chip (2D NoC) that carries the 12-Tbps fabric bandwidth, 864 machine learning processors, and six GDDR6 subsystems (including controller and PHY) that provide 1.5 Tbps of external memory bandwidth. It also supports double-bit error detection and single-bit error correction.
The AC7t800 supplies 711,000 logic elements (LEs), the equivalent of 730,000 system logic cells (LCs). Along with GDDR6 interfaces, the FPGA provides two 400-Gbps Ethernet channels, 16 PCIe Gen5 lanes, and 24 12-Gbps serializer/deserializer channels. The device’s 2D NoC facilitates connections among all interconnects, I/O, memory, internal functional blocks, and the FPGA fabric. According to Achronix, the 2D NoC reduces routing congestion by as much as 40% compared to conventional FPGAs.
Samples of the AC7t800 FPGA are available now.
The post FPGA is optimized for high-bandwidth workloads appeared first on EDN.
Page EEPROM boasts flash-like speed
ST has launched a page EEPROM that provides the speed and density typical of serial flash, combined with the byte-level flexibility of EEPROM. The SPI page EEPROM family offers densities of 8 Mbits, 16 Mbits, and 32 Mbits, significantly increasing storage compared to standard EEPROMs. These devices can be used in wearables, healthcare devices, asset trackers, e-bikes, and other industrial and consumer products.
Embedded smart page management allows byte-level write operations for processes like data logging, while also supporting page/sector/block erase and page programming up to 512 bytes for handling firmware OTA updates. The devices also offer buffer loading, which can program several pages simultaneously. The data-read speed of 320 Mbps is about 16 times faster than standard EEPROM, while write-cycle endurance of 500,000 cycles is several times higher than conventional serial flash.
With peak current control, page EEPROM minimizes power supply noise and prolongs the runtime of battery-operated equipment. According to ST, the write current is below that of many conventional EEPROMs, and there is a deep power-down mode with fast wakeup that reduces the current to below 1 µA.
The M95P08, M95P16, and M95P32 page EEPROMs are in production now, with prices starting at $0.50 for the 8-Mbit M95P08.
The post Page EEPROM boasts flash-like speed appeared first on EDN.
Eval kit promotes LoRaWAN for smart home
Semtech’s single-channel LoRaWAN hub evaluation kit supports smaller-scale network deployments, such as SMB and smart home applications. Designed for low-density networks of up to 50 end devices, the kit is compatible with the LoRaWAN standard and LoRaWAN 1.0.x devices. It uses Wi-Fi for backhaul and can be configured via an embedded webpage.
This turnkey solution provides basic LoRaWAN connectivity and supports several Semtech LoRa sub-GHz transceivers, including the SX1261, SX1262, SX1268, LR1121, and LLCC68. The LRWHUB1EVK1A evaluation kit features a shield adapter board with an Espressif ESP32-S3, a low-power MCU-based SoC with integrated 2.4-GHz Wi-Fi and Bluetooth LE. Additionally, the kit comes with a separate OLED display adapter board. It requires a LoRa radio shield, sold separately.
Global analyst firm Omdia predicts that LoRaWAN will post the greatest annual growth, at 30% over the 2023-2030 forecast period. “It is a comparatively recent technology finding a niche in longer-distance, lower-power applications like irrigation systems and security sensors for property perimeters,” noted Omdia senior research director Edward Wilford. “The cost effectiveness of Semtech’s one-channel hub aligns well with such smaller-scale applications.”
More information about Semtech’s LRWHUB1EVK1A single-channel LoRaWAN hub evaluation kit and reference designs is available on the company’s website.
The post Eval kit promotes LoRaWAN for smart home appeared first on EDN.
Spin memristor mimics brain for energy efficiency in AI
A new neuromorphic element called a “spin-memristor” mimics the energy-efficient operation of the human brain to reduce the power consumption of artificial intelligence (AI) applications to 1/100th of traditional devices. TDK developed this “spin-memristor” as the basic element of a neuromorphic device in collaboration with the French research outfit CEA.
It’s apparent by now that the energy consumed by big data and AI applications will boom, inevitably leading to complexity in the computational processing of vast amounts of data. So, TDK aims to develop a device that electrically simulates the human brain’s synapses: the memristor.
Figure 1 The “spin-memristor” has been demonstrated to function as the basic element of a neuromorphic device. Source: TDK
Here, it’s important to note that conventional memory elements store data as either 0 or 1. On the other hand, a spin-memristor can store data in analog form, just as the brain does. That enables it to perform complex computations with ultra-low power consumption.
While memristors for neuromorphic devices already exist, they face critical challenges, including changes in resistance over time, difficulties in controlling the precise writing of data, and the need for control to ensure data is retained. TDK’s spin-memristor overcomes these issues and provides immunity to environmental influences and long-term data storage while reducing power consumption by cutting leakage current.
Practical applications
After jointly developing spin-memristor with CEA, TDK is partnering with the Center for Innovative Integrated Electronic Systems at Tohoku University to create practical applications for this device. While the tie-up between TDK and CEA has demonstrated that spin-memristors can serve as the basic element of a neuromorphic device, manufacturing them requires the integration of semiconductor and spintronic manufacturing processes.
Spintronics is a technology that utilizes the spin of electrons in addition to, or instead of, their charge. TDK’s AI semiconductor development program, in collaboration with Tohoku University, will work on fusing memristors with spintronics technology.
Figure 2 TDK is collaborating with Tohoku University to develop practical applications for spin-memristors. Source: TDK
It’s worth noting that the integration between semiconductor and spintronic manufacturing processes has already been accomplished in a similar product: MRAM. TDK chose Tohoku University as its partner mainly because it’s a leading academic institution in MRAM research and development.
Related Content
- Will memristors prove irresistible?
- Memristor emulates neural learning
- Knowm’s memristors alive and shipping
- How Memristors Could Help Drive AV Evolution
- Memristor Computer Emulates Brain Functions
The post Spin memristor mimics brain for energy efficiency in AI appeared first on EDN.
Preaccumulator handles VFC outputs that are too fast for a naked CTP to swallow
Analog-to-digital conversion based on the classic combination of a voltage-to-frequency converter (VFC) with a counter has been around for (many) decades, mainly because it has some durable time-proven advantages. VFC digitization is naturally integrating, so high noise rejection is inherent, as is programmable resolution (if you want more bits, just count longer). Unfortunately, high conversion speed is not.
Useful resolution (8 or more bits) at tens-of-microseconds conversion times requires tens-of-megahertz VFC output frequencies. There are existing VFC designs that can flap that fast, e.g., Jim Williams’s awesome 100-MHz King Kong and my own “20 MHz VFC with take-back-half charge pump”. However, these possible solutions only pose another potentially pesky problem: What to use for a counter?
Frequently (no pun intended) the ideal and most cost-effective digital partner for a VFC is the µC’s onboard counter-timer peripheral (CTP), typically providing 16 bits of resolution at zero added parts cost. Unfortunately, the necessity of taking multiple (e.g., four) samples of each cycle of incoming pulses by onboard CTP logic limits maximum count rate to a fraction (typically ¼) of the µC’s internal clock.
Thus, for a 20-MHz internal clock, 5 MHz is the fastest achievable CTP count rate. Sorry, Kong.
Of course, an external hardwired counter peripheral could be implemented that would easily accommodate fast VFCs (okay, maybe Kong not so totally easy), but cost, parts count, and board area make this option quite unattractive.
Shown in Figure 1 is a compromise topology that combines the CTP doing what it does best (providing lots of bits) with a single external 4-bit MSI prescaler/accumulator chip. This extends the peripheral’s speed by up to 16x (hence up to 80 MHz with a 5-MHz CTP top end), at the cost of (at most) four additional general-purpose I/O (GPIO) pins.
Here’s how it works.
Figure 1 100-MHz MSI counter prescales and accumulates VFC LSBs so clunky CTP can cope.
- Five GPIO pins are programmed for interface with the preaccumulator:
  - Four as inputs (IN1 through IN4)
  - One as output (OUT).
- IN4 is also programmed as the input to the selected CTP, which is programmed for 16-bit accumulation.
Each VFC integration cycle comprises the following steps:
1. OUT = 0 to disable counting.
2. A 20-bit initial value (X1) is formed by concatenating the states of the INx bits (as the 4 LSBs) with the 16 bits of the CTP (as the 16 MSBs), i.e., X1 = [cccc cccc cccc cccc iiii].
3. OUT = 1 for the desired integration interval. A practical maximum = 2^20/VFCmax; shorter if lower resolution and/or higher conversion speed is required.
4. OUT = 0 to freeze counting.
5. A 20-bit final value (X2) is formed by concatenating INx with the CTP.
6. The 20-bit conversion result = X2 – X1 modulo 2^20.
Note that if the ratio of max VFC output to max CTP count rate is less than 8x, then only three INx pins need be allocated to the interface (Xx = [ccc cccc cccc cccc ciii]), with IN3 programmed as the CTP input. If less than 4x, then only two (Xx = [cc cccc cccc cccc ccii]). And so forth.
If simpler arithmetic is more important than conserving GPIO pins, then a sixth GPIO pin can be connected to the preaccumulator’s reset input and pulsed low at the onset of conversion to reset the INx bits to zero, along with a similar preload of the CTP bits. This would eliminate the X1 capture (step 2) and reduce the subtraction in step 6 to simply reading X2. A minimal code sketch of the sequence follows.
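Here is a minimal C sketch of the conversion sequence, under the numbering above; read_inx_bits(), read_ctp(), set_out(), and delay_us() are hypothetical stand-ins for the target µC’s actual register accesses.

```c
#include <stdint.h>

/* Hypothetical hardware-access stubs; replace with the target uC's actual
   GPIO and counter-timer peripheral (CTP) register reads and writes. */
extern uint32_t read_inx_bits(void);  /* 4-bit prescaler state, IN1..IN4 */
extern uint32_t read_ctp(void);       /* 16-bit CTP accumulation         */
extern void     set_out(int level);   /* OUT pin: 1 = count, 0 = freeze  */
extern void     delay_us(uint32_t t);

/* Concatenate CTP (16 MSBs) with INx (4 LSBs): X = [cccc ... iiii]. */
static uint32_t read_count20(void)
{
    return ((read_ctp() & 0xFFFFu) << 4) | (read_inx_bits() & 0xFu);
}

/* One VFC integration cycle; returns the 20-bit conversion result. */
uint32_t vfc_convert(uint32_t integration_us)
{
    set_out(0);                    /* step 1: disable counting          */
    uint32_t x1 = read_count20();  /* step 2: capture initial value X1  */
    set_out(1);                    /* step 3: count for the interval    */
    delay_us(integration_us);
    set_out(0);                    /* step 4: freeze counting           */
    uint32_t x2 = read_count20();  /* step 5: capture final value X2    */
    return (x2 - x1) & 0xFFFFFu;   /* step 6: X2 - X1 modulo 2^20       */
}
```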
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- 20MHz VFC with take-back-half charge pump
- Voltage inverter design idea transmogrifies into a 1MHz VFC
- Single supply 200kHz VFC with bipolar differential inputs
- New VFC uses flip-flops as high speed, precision analog switches
The post Preaccumulator handles VFC outputs that are too fast for a naked CTP to swallow appeared first on EDN.
Getting an audio signal with a THD < 0.0002% made easy
If you need a very pure sine wave in the audio range, the circuit in Figure 1 can help. It is a simple deal: a sine wave with a THD of 1%, coming from a function generator, goes through a tracking low-pass filter that attenuates the distortion-causing harmonics by a factor of 7900 (-78 dB) or more. The result is a sine wave with less than 0.0002% (2 ppm) distortion.
Figure 1 A frequency multiplier and a tracking low-pass filter reduce the THD of the sine signal coming from a function generator by 78 dB or more.
This implementation’s advantage comes from the fact that function generators provide two signals, a sine wave and a square wave, with the same frequency. The sine wave goes to the low-pass filter. A voltage divider reduces the amplitude so that it does not exceed the input range of the filter. The switched-capacitor filter needs a clock signal with a frequency 100 times that of the signal being filtered. The microcontroller–oscillator pair generates the clock signal. The microcontroller (µC) measures the frequency of the signal coming from the function generator, multiplies it by a factor of 100, creates a 16-bit control word, and sends it to the oscillator through an SPI interface. The oscillator generates a square-wave signal for the filter. The corner frequency of the filter equals the fundamental frequency of the input voltage.
Figure 2 and Figure 3 demonstrate the circuit operation with a 20-kHz triangular signal. Figure 2 shows the time domain signals from the function generator and at the output of the filter. Figure 3’s images are the spectra of these signals. The filter passes the fundamental frequency and reduces the distortion-making harmonics down to the noise floor. The difference between the fundamental and the floor is about 50 dB, which is normal for the 8-bit ADC of the oscilloscope. More sophisticated (and more expensive) equipment is required to catch the difference of about 80 dB expected for the sine wave. Curious readers can get a sense of this business in references [1-4].
Figure 2 The time domain signals from the function generator and at the output of the filter.
Figure 3 The spectrum of the 20-kHz triangular signal: the fundamental harmonic is 50 dB above the noise floor. More sophisticated equipment is required for the sine wave.
It is worth mentioning that the same approach has been used to filter a square wave signal or a digitally generated sine wave with a very small number of stairs, see references [5-7]. Despite the large attenuation of the filter, the output signal is not a pure sine wave due to the high level of harmonics in the input signal: 43% for the square wave and 11-12% for a “sine” wave made of five stairs. The proposed circuit uses an input signal with 1% distortion (analog function generators) or 0.1% distortion (DDS-based function generators); hence the output signal will be at least 10 or 100 times cleaner than with the previous circuits.
If you decide to make the circuit, measure the period, not the frequency, of the signal coming from the signal generator. The longest measurement time will then be 50 ms instead of seconds.
Also, make sure you fill the period interval with at least 1000 clock pulses; the goal is to get 0.1% accuracy. This means a clock rate of 20 MHz or more for the shortest period of 50 µs. Lower clock rates should be used for longer periods to keep the captured counts from getting too large.
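Folding these recommendations together, a compact C sketch of the µC’s job might look as follows; capture_period_ticks(), spi_send16(), and osc_encode_hz() are hypothetical stand-ins for the timer-capture, SPI, and oscillator-register code of whatever parts are actually chosen.

```c
#include <stdint.h>

/* Hypothetical stubs; replace with the chosen uC's timer-capture and SPI
   driver code and the chosen oscillator's control-word format. */
extern uint32_t capture_period_ticks(void);  /* input period in timer ticks */
extern void     spi_send16(uint16_t word);   /* 16-bit word to oscillator   */
extern uint16_t osc_encode_hz(uint32_t hz);  /* Hz -> device control word   */

#define TIMER_HZ 20000000u  /* 20-MHz capture clock: >=1000 ticks at 20 kHz;
                               prescale it down for much longer periods     */

void update_filter_clock(void)
{
    uint32_t ticks = capture_period_ticks();  /* measure period, not freq  */
    if (ticks < 1000u)                        /* enforce the 0.1% target   */
        return;                               /* (or raise capture clock)  */

    uint32_t f_in  = TIMER_HZ / ticks;        /* fundamental frequency, Hz */
    uint32_t f_clk = 100u * f_in;             /* switched-cap filter clock
                                                 = 100 x corner frequency  */
    spi_send16(osc_encode_hz(f_clk));
}
```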
Finally, keep the sine signal amplitude from 3 to 9 VPP. This is the range where the filter provides minimum distortion and noise.
Jordan Dimitrov is an electrical engineer & PhD with 30 years of experience. Currently he teaches electrical and electronics courses at a Toronto community college.
Related Content
- Measure an amplifier’s THD without external filters
- Ultra-low distortion oscillator, part 2: the real deal
- A simple circuit with an optocoupler creates a “tube” sound
- Power Tips #116: How to reduce THD of a PFC
- Proper function linearizes a hot transistor anemometer with less than 0.2 % error
- RMS stands for: Remember, RMS measurements are slippery
- Precision synchronous detection amplifier facilitates low voltage measurements
References
- Williams J., G. Hoover. Test 18-bit ADCs with an ultrapure sine-wave oscillator. EDN, Aug. 11, 2011, 19-23.
- Janasek V. An ultra low-distortion oscillator with THD below -140 dB. http://www.janascard.cz/aj_Download.html
- ARTA software https://artalabs.hr/index.htm
- TSP #234 – QuantAsylum QA403 24-bit, 0.0001% THD Audio Analyzer Review, Teardown & Experiments https://www.youtube.com/watch?v=BlWzpMkX5QQ
- Horowitz P., W. Hill. The Art of Electronics. 3rd ed., 2015, p. 436.
- Saab A. Locked-sync sine generator covers three decades with low distortion. EDN, Sep 18, 2008.
- Elliot R. Sinewave oscillators, Section 8 – Digital generation. https://sound-au.com/articles/sinewave.htm#s8
The post Getting an audio signal with a THD < 0.0002% made easy appeared first on EDN.
Electronic musical instruments design: what’s inside counts
Digital electronics has profoundly changed musical instrument design. From toy keyboards to performance-grade pianos, synthesizers, and drum sets, to name a few, instruments that once would have been finely crafted wood and metal can today find their voices in CPUs, memory, and data converters.
This does not mean craftsmanship is dead. There is as much skill, experience, and love of music in the intellectual property (IP) inside today’s electronic instruments as in the workshop of a traditional piano maker or luthier. It is just expressed differently. A look inside an instrument will illustrate this point.
A generic architecture
A concert grand piano, an early analog synthesizer, a drum set, and a clarinet could hardly look less alike. Yet, functionally, the digital electronic versions of all these instruments can share a single block diagram and signal flow. Figure 1 displays the block diagram of an ASIC inside electronic musical instruments.
Figure 1 The ASIC for electronic musical instruments is shown with its key building blocks. Source: Faraday Technology Corp.
One or more input devices capture the musician’s intent. This could be just a row of membrane switches for a toy keyboard. A professional piano might have a position sensor on each key and pedal. An electronic version of a clarinet might mean a pressure or velocity transducer and position sensors on the keys. A synthesizer might have a microphone for voice input.
The choice of sensors must both meet cost guidelines and capture what is essential about the musician’s actions at that price point. This must include subtleties, such as a pianist’s attack and graduated use of pedal or a saxophonist’s voicing and modulated use of the keys.
The analog sensor signals will pass through signal-conditioning circuitry and into analog-to-digital conversion. The resulting digital signal streams—which, at this point, represent the musician’s actions, not sounds—will go into a digital subsystem. This subsystem will generally comprise a CPU, usually a digital signal processor, memory, I/O interfaces, and a great deal of software and stored data.
This block not only interprets the incoming sensor data and controls the rest of the instrument but also combines sensor input with sampled or algorithmically generated audio waveforms and shapes these waveforms to produce a digital audio output stream. It is here that the craftsmanship happens.
The digital audio signal may be sent to external devices via an interface such as USB or passed on to digital-to-analog conversion and then to an audio amplifier.
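As a toy illustration of this flow, and emphatically not any vendor’s actual firmware, the fragment below scales a stored waveform by a key-velocity envelope before handing samples to the DAC; every function and table name here is a hypothetical placeholder.

```c
#include <stdint.h>

#define TABLE_LEN 2048

/* Hypothetical hardware/driver stubs for illustration only. */
extern uint16_t adc_read_key_velocity(void);      /* conditioned sensor data */
extern void     dac_write_sample(int16_t sample); /* digital audio output    */
extern const int16_t piano_wavetable[TABLE_LEN];  /* stored sample data      */

/* One audio-rate tick: turn the musician's action into a shaped sample. */
void audio_tick(void)
{
    static uint32_t phase = 0;  /* wavetable read position         */
    static int32_t  env   = 0;  /* simple decaying volume envelope */

    uint16_t vel = adc_read_key_velocity();
    if (vel > 0)                /* a key strike re-triggers the envelope */
        env = vel >> 1;         /* Q15 envelope level, 0..32767          */

    int16_t raw = piano_wavetable[phase++ % TABLE_LEN];
    int16_t out = (int16_t)(((int32_t)raw * env) >> 15);

    env -= env >> 10;           /* roughly exponential decay */
    dac_write_sample(out);
}
```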
A range of solutions
This description fits various instrument types, levels of sophistication and performance, and price points (Figure 2). In principle, the only differences between digital instruments are the input devices and the software. But the reality is more complex than that.
Figure 2 The above chart highlights three keyboard market segments. Source: Faraday Technology Corp.
Both engineering expertise and musical knowledge are used in the design decisions that produce different types of instruments. What kinds of sensors, and where? What type of analog-to-digital converter (ADC) should be employed, how many channels should be used, and what is the sample rate? What will be the tasks for the CPU and DSP, and consequently, how powerful must each be?
What are the necessary resolution, sample rate, noise level, and distortion of the digital-to-analog converter (DAC)? These choices, along with the software design and the vendor’s extensive library of sound samples, will set the instrument’s personality, whether the child’s toy or concert paragon.
Design implementation
The obvious way to implement the electronic portion of these musical instruments is with a discrete data converter, microcontroller, DSP, and memory chips. This approach allows for a fast time to market and will enable designers to select just the right chip for the intended performance level. It also allows the design team to focus most of their effort on the software from which the instrument’s character will emerge.
However, at least three issues exist with using discrete, off-the-shelf ICs for anything less than a premium professional instrument. First, suppose the organization intends to market a range of instruments at different price points. In that case, the discrete approach will lead to a proliferation of bills of materials (BOM) and board designs, complicating supply-chain management. Worse, it will require several software versions, each of which must be maintained and kept coherent.
Second, using discrete chips will make protecting proprietary software IP from theft difficult. All the pins on the chips are exposed to probing, allowing competitors to watch the operation of the digital electronics and even use diagnostic tools to examine memory and code. Further, the choice of ICs in the design will be visible, if not on the package lids, then on inspection of the dies inside.
Third, sophisticated designs may rely on proprietary hardware—especially in the data converters and the DSP core—to achieve the price/performance point intended at the high end of the product family. Duplicating these special hardware features in off-the-shelf chips may not be possible without carrying out massive overdesign.
Taking the ASIC path
These considerations have led some musical instrument design teams to employ a mixed-signal ASIC (Figure 3). An audio ASIC answers each of the three problems of a discrete design while serving as the foundation of digital electronic instrument designs.
Figure 3 The musical instruments ASIC is segmented into 186 MHz (left) and 192 MHz (right) domains. Source: Faraday Technology Corp.
First, the unit cost of an ASIC for these applications will be low enough that the same chip can be used across a broad product line, often without changing the board design. That cost may be lower than the total cost of discrete chips, especially once inventory, assembly, and test costs are included. A modular approach to software design and test design can allow one version of the software and one test bench to serve all the products in the family. This hugely simplifies debug and life-cycle management.
Second, the ASIC’s data paths and circuits are inside the die, safe from all but the most determined examination. The exception would be external code and audio sample memory. However, these can be encrypted, with the ASIC providing hardware-based encryption and decryption, so the software and data crown jewels are never exposed to the outside world in unencrypted form.
Third, suppose the developers have proprietary circuit designs for audio signal paths, a unique DSP architecture, or even a favorite CPU core. In that case, these can be implemented in the ASIC without concern for whether they are available off the shelf for the entire life of the product family.
However, there is an obvious objection to choosing an ASIC: musical instrument designers rarely have entire internal ASIC design teams. They are unlikely to want to assemble such a team for one project. Nor do they have a network of relationships with silicon IP providers, chip foundries, and outsourced assembly and test houses. These relationships turn a chip design into a reliable stream of finished chips. This is where a flexible, full-range ASIC partner comes in.
An ASIC case study
To show the importance of a partner, let’s look at a representative, composite example of an ASIC engagement. Faraday began discussions with a globally known musical instrument manufacturer. In addition to documenting the desired gross architecture, performance, and features, the initial conversation covered many of the points we have just discussed.
This organization was quite sophisticated in digital audio design, with its own DSP algorithms, logic designs for some critical digital functions, and precise specifications for mixed-signal functions. On the other hand, Faraday drew upon its internal IP libraries and extensive network of third-party IP vendors to gather the non-proprietary blocks, including an ARM CPU subsystem, memory and communications interfaces.
Next, Faraday determined that the design could meet the music company’s demanding digital-to-analog converter requirements with available IP, eliminating the need for an external DAC. Further, Faraday worked with the instrument designers to produce a netlist for a DSP core optimized for the music company’s algorithms.
Faraday then took the chip design through the customary ASIC design flow of IP integration, functional verification, synthesis, and mixed-signal integration. At that point, it stepped in to complete the back-end design, conferring with the instrument design team when necessary, and taped out to the foundry the two partners had jointly selected.
About 18 months after the initial engagement, the musical instrument company received working silicon from the assembly and test vendor Faraday had arranged. A flexible engagement such as this can make an ASIC design the realistic best choice for a musical instrument or another such electronic product.
Kevin Kai-Wen Liu is a project manager at Faraday Technology Corp.’s headquarters in Hsinchu, Taiwan.
Related Content
- Arrow and Avnet launch ASIC design services
- Making ASIC power estimates before the design
- Alchip Technologies Offers 3nm ASIC Design Services
- An FPGA-to-ASIC case study for refining smart meter design
- Dentressangle Capital to Acquire ASIC Designer Presto Engineering
The post Electronic musical instruments design: what’s inside counts appeared first on EDN.
Lightning strikes…thrice???!!!
It happened in 2014. A year later, it happened again. And after a nine-year blessed respite, a month back it happened a third time. What am I talking about? Close-proximity lightning (each time it was unknown whether it merely arced cloud-to-cloud overhead or actually hit the ground) that once again clobbered some of my residence’s electronics. On Thursday night, August 8, we scored a direct hit from a west-to-east traversing heavy rain, hail, and wind squall. When the house shook from a thunderclap seemingly directly overhead, I had a bad feeling. And the subsequent immediate cessation of both LAN and WAN connectivity sadly confirmed my suspicions.
As background for those unfamiliar with my past coverage, I’m the third owner of this house, located in the Rocky Mountain foothills just southwest of Golden, CO. The previous owner had, when retrofitting the residence to route coax and Ethernet to various locations in both the ground floor and upper level, gone the easy-and-inexpensive route of attaching the cabling to the house’s exterior, punching through rooms’ walls wherever interior connectivity was desired. Unfortunately, that cabling has also proved to act as an effective electromagnetic pulse (EMP) reception antenna whenever sufficient-intensity (strength and/or proximity) lightning is present.
This time, a few things—one of our TVs that initially no longer “saw” any of its active HDMI inputs and the exercise treadmill whose motor stalled—were temporarily stunned until after I power-cycled them, after which time they thankfully returned to normal operation. Alas, other gear’s demise was more definitive. Once again, several multi-port Ethernet switches (non-coincidentally on the ends of those exterior-attached network cable spans) got fried, along with a CableCard receiver and a MoCA transceiver (both associated with exterior-routing coax). My three-bay QNAP NAS also expired, presumably the result of its connection to one of the dead multi-port Ethernet switches. All this stuff will be (morbidly) showcased in teardowns to come.
Today, however, I’ll focus on the costliest victim, the control subsystem for the hot tub on the back deck. In the earlier 2014 and 2015 lightning incidents, we’d still been using the home’s original spa, which dated from the 1980s and was initially located inside the residence. The previously mentioned second owner subsequently moved it outside (the original “hot tub room” is now my office). The geriatric hot tub ran great but eventually leaked so badly that in 2019 we went ahead and replaced it. In retrospect, I remember having a conversation with the technician at the time about how its discrete transistor-and-relay-dominated electronics would likely have enabled it to run forever, had its physical integrity not been compromised, which led to its eventual demise.
After the storm calmed, on a hunch I went outside and lifted the hot tub cover. The control panel installed on the hot tub rim interestingly was still illuminated. But the panel itself was dead; the display was blank, and the control buttons were all inoperable. Curiously, the hot tub pump (and presumably other subsystems) also seemed to still run fine; I could hear the motor kick on as normal in response to each power activation cycle, for example. But not being able to adjust the temperature and pump speed, not to mention alter the filter cycle settings (and the clock settings they’re based on) was an obvious non-starter.
Unfortunately, the manufacturer’s three-year warranty had expired two years earlier. More generally, production on this particular control panel had ironically ended roughly coincident with when I bought the hot tub back in 2019, and my technician was no longer able to source a replacement. This meant that, although there was a chance that only the comparatively inexpensive control panel had gone bad, I was going to have to replace the entire “pod” kit that included (among other things) a newer model control panel. Here’s what the old “pod” looked like after my technician pulled it out and before he hauled it away for potential spare-parts scavenging purposes; according to him, the blue cylindrical structure on top is the water heater:
And here are some closeups. The large square IC at the center of the last one, for example, is likely the digital control processor. Unfortunately, its markings have either been intentionally obscured or were otherwise too faint for me to be able to discern:
I unfortunately don’t have any comparative pictures of the original hot tub’s electronics, but trust me, they were way more “analog”. Feel free to chime in with your thoughts in the comments as to the comparative reliability of “oldie but goodie” vs “shiny new” circuitry…
Now for the removed old control panel:
And its installed and operational successor:
So, what happened here? As setup for my theorizing, here are a few more old-panel photos:
Originally, there were actually two cables connected to the panel. One, not shown here, was a simple two-wire harness that, I hypothesize, ran power from the “pod” circuit board to the panel’s LEDs for illumination purposes. As I mentioned earlier, it seemingly survived the storm just fine. The one shown here, on the other hand, is a multi-wire cluster that terminates at and connects to the circuit board via the connector shown in the second-photo closeup.
This particular cable was, I believe, the Achilles heel. Its signals are presumably low-voltage, low-current digital in nature. Remember my earlier mention of Ethernet cables (for example) acting as EMP reception antennae, with disastrous equipment consequences? I’m guessing the same thing happened here, via this foot-or-so long multi-wire harness. Did the EMP only fry the control panel’s electronics, versus also damaging the “pod” board circuitry? Perhaps. By analogy, in some cases over these three (to date…another heavy-thunder storm is ironically brewing as I type these words) lightning-damage episodes, the Ethernet switches on both ends of a particular outdoor cable run have died, while in other cases, only one switch has expired. Regardless, given the replacement parts-(non)availability circumstances, it’s a moot point.
I’ve got more to tell, including the already-mentioned teardowns, plus (for example):
- How I resurrected my network storage, in the process bolstering my file backup scheme
- Options (some of which I’ve tried, with varying degrees of success, and documented) for dispensing with the outdoors-routed Ethernet and coax cables, and
- Residence-wide surge protection schemes
For now, however, I’ll wrap up this post’s topic focus with an as-usual invitation for readers’ thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Lightning strike becomes EMP weapon
- Devices fall victim to lightning strike, again
- Ground strikes and lightning protection of buried cables
- Teardown: Lightning strike explodes a switch’s IC
- Teardown: Ethernet and EMP take out TV tuner
- Teardown: MoCA adapter succumbs to lightning strike
The post Lightning strikes…thrice???!!! appeared first on EDN.
Power Tips #133: Measuring the total leakage inductance in a TLVR to optimize performance
A trans-inductor voltage regulator (TLVR) modifies the conventional multiphase buck converter, accelerating the converter’s output current slew-rate capability to approach the fast load slew rate of the high-speed processor or application-specific integrated circuit’s core voltage rail. The output inductors each get a secondary winding; these windings are connected in series to create a secondary loop that accelerates the response to load changes. This improvement in load transient performance comes at the cost of increased steady-state ripple and its resulting power loss, however. The problem is that it is very hard to estimate the actual overall inductance in the secondary loop, which is a primary driver of performance, as layout and printed circuit board (PCB) construction can significantly affect it. In this power tip, I will show a simple measurement that you can use to estimate the actual leakage inductance in the TLVR secondary loop and optimize performance.
Figure 1 is a simplified schematic of the multiphase buck converter without and with the TLVR circuit.
Figure 1 Simplified multiphase buck converter and TLVR schematics. Source: Texas Instruments
Note the added secondary loop in the TLVR connecting all of the secondaries of the output inductors with the compensating inductor value, Lc, and parasitic elements shown. The sum of all of these inductances is the total secondary-loop inductance, or Ltsl. Ltsl determines TLVR performance, as both the added output current slew rate and high-frequency ripple current from the TLVR loop are inversely proportional to it. Because of the unpredictability of the parasitic inductances, when the TLVR was first introduced, it included a fixed Lc in the secondary loop.
The existing approach sets Lc to “swamp out” the parasitic inductances, assuming that they are much less than Lc. But there is a scope measurement across Lc that either will verify this assumption, or if not, provide the information you need to estimate the Ltsl. You can then adjust Lc to better match the target overall leakage for best slew-rate capability and ripple current performance, and in some cases omit it.
The key TLVR performance metric is the output current slew-down capability ΔI/Δt in amperes per microsecond (A/µs), with some recent applications asking for as much as 5,000 A/µs. Slew-up capability is just as important, but with VIN (12 V typically) generally much greater than VOUT (0.7 V to 1.8 V typically), the slew-up rate capability will generally be much greater, and potentially excessive. Limiting how many phases you can turn on at the same time will usually reduce excessive slew-up capability.
The equations in Table 1 show that the load slew-rate acceleration is inversely proportional to Ltsl. Table 2 shows that the high-frequency TLVR currents are also inversely proportional to Ltsl.
| Quantity | Definitions | Equation |
|---|---|---|
| Buck slew down ΔI/Δt | L is the value of the discrete output inductor at each stage | ΔI/Δt = –Ntotal × VOUT/L |
| TLVR slew down ΔI/Δt | Lm is the value of the magnetizing inductance at each stage (assuming that Ltsl = Lc [1]) | ΔI/Δt = –(Ntotal × VOUT/Lm + Ntotal² × VOUT/Ltsl) |
| Ltsl | LLeakage is defined as the leakage inductance of each output inductor | Ltsl = Lc + Ntotal × LLeakage + LPCB parasitics |

Table 1 Buck and TLVR slew-down ΔI/Δt equations. Source: Texas Instruments
| Quantity | Definitions | Equation |
|---|---|---|
| Time period where all phases are off (TOFF) | Fsw is the switching frequency of each phase | TOFF = 1/(Nphases × Fsw) – VOUT/(VIN × Fsw) |
| High-frequency p-p current ripple (ΔILtsl) | In the secondary loop and in each power stage | ΔILtsl = Ntotal × VOUT × TOFF/Ltsl |
| Root-mean-square (RMS) value of this current | | ΔILtsl(RMS) = ΔILtsl/(2√3) |

Table 2 TLVR high-frequency currents in the secondary winding and all phases when VOUT × Nphases < VIN. Source: Texas Instruments
Below in Table 3 are the expected voltages across Lc when VOUT × Nphases < VIN assuming Ltsl ≈ Lc, along with the recalculation of Ltsl when smaller voltages are seen.

| Quantity | Notes | Equation |
|---|---|---|
| Voltage across Ltsl (and Lc if Ltsl ≈ Lc) when one phase is on | Assuming the polarity of Lc as shown in Figure 1 | VLtsl(on) = –((Ntotal/Nphases) × VIN – Ntotal × VOUT) |
| Voltage across Ltsl (and Lc if Ltsl ≈ Lc) when all phases are off | Assuming the polarity of Lc as shown in Figure 1 | VLtsl(off) = Ntotal × VOUT |
| RMS of the waveform | D = Nphases × VOUT/VIN is the fraction of time a phase is on | VLtslrms = √(VLtsl(on)² × D + VLtsl(off)² × (1 – D)) |
| Estimating Ltsl when the actual waveform is smaller than the expected waveform | Use calculated VLtslrms and measured VLcrms | Ltsl ≈ Lc × VLtslrms/VLcrms |

Table 3 Expected voltage waveform across Lc when VOUT × Nphases < VIN assuming Ltsl ≈ Lc, and recalculation of Ltsl when smaller voltages are seen. Source: Texas Instruments
Now it’s time to introduce a design example, starting with the requirements and overall approach, as shown in Table 4.
| Overall design | Value | Per TLVR loop | Value |
|---|---|---|---|
| VIN | 12 V | TLVR loops | 2 loops interleaved |
| VOUT | 1.0 V | Each loop | >2,500 A/µs |
| Maximum IOUT | 1,000 A | Stages Ntotal | 16 |
| Power stages | 32 | Phases Nphases | 8 |
| Phases | 16 | Lm | 120 nH |
| Stages/phase | 2 | Target Ltsl | 100 nH |
| Fsw each phase | 570 kHz | Ripple frequency | 4.56 MHz |
| Maximum load step | 500 A | Ripple p-p/RMS | 11.7 A/3.4 A |
| Load slew rate | 5,000 A/µs | VLtsl on/off | –8 V/+16 V |
| | | VLtslrms | 11.3 VRMS |

Table 4 Design requirements and overall approach. Source: Texas Instruments
This 32-stage design uses two TLVR loops, each at the near-5-MHz sawtooth frequency but 180 degrees out of phase, in order to achieve good but imperfect cancellation of the sawtooth waveforms in the output capacitors. Without TLVR, even with 32 phases and inductors at only 70 nH, the fastest slew-down rate would be 460 A/µs. Based on the equations in Table 1, the TLVR slew-down capability would be -5,387 A/µs. Getting this >5,000-A/µs slew-rate capability requires accepting a high-frequency ripple current in each phase of 3.4 ARMS.
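As a cross-check, the short C program below plugs the per-loop Table 4 values into the slew-rate and ripple relations of Tables 1 and 2; it reproduces the -5,387-A/µs and 11.7-A/3.4-A figures quoted above.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Per-loop values from Table 4. */
    double vin = 12.0, vout = 1.0;
    double n_total = 16.0, n_phases = 8.0;   /* stages, phases per loop      */
    double lm = 120e-9, ltsl = 100e-9;       /* magnetizing and loop L, H    */
    double fsw = 570e3;                      /* per-phase switching freq, Hz */
    int    loops = 2;

    /* Table 1: TLVR slew-down magnitude, both loops combined. */
    double slew_a_per_us = loops
        * (n_total * vout / lm + n_total * n_total * vout / ltsl) / 1e6;

    /* Table 2: all-phases-off time and high-frequency ripple. */
    double toff   = 1.0 / (n_phases * fsw) - vout / (vin * fsw);
    double di_pp  = n_total * vout * toff / ltsl;   /* peak-to-peak, A */
    double di_rms = di_pp / (2.0 * sqrt(3.0));      /* RMS, A          */

    /* Expect ~5387 A/us, ~11.7 A p-p, ~3.4 A RMS. */
    printf("slew-down %.0f A/us, ripple %.1f A p-p / %.1f A RMS\n",
           slew_a_per_us, di_pp, di_rms);
    return 0;
}
```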
I tested a board built up with the assumption that Ltsl ≈ Lc and used the 100-nH target Ltsl value for Lc. Figure 2 shows the layout of one of the two TLVR loops.
Figure 2 The layout of a 16-power-stage TLVR loop. Source: Texas Instruments
But is the 100-nH Lc really the true Ltsl of this 16-stage loop? See the large secondary loop between “start” and “end” in Figure 2. Measuring the actual voltage waveform across Lc (L36 here) when all 16 stages and eight phases are active sheds light on this assumption. If Ltsl ≈ Lc and using the formulas from Table 3, you should expect a square wave going between +8 V and -16 V at eight times the per-phase switching frequency. The RMS value of this waveform should be 11.3 V.
Figure 3 shows what I actually measured.
Figure 3 Measured voltage waveform across an eight-phase/16-stage compensating inductor with expected TLVR waveform if Ltsl ≈ Lc, shown in black. Source: Texas Instruments
Both the actual L36 waveform (pink) versus the expected total leakage waveform (black) and the RMS value (5.02 V versus 11.3 V) point to Lc being only one-half of Ltsl, and to the fact that there is another 100 nH from inductor leakages and PCB traces in the secondary loop. Comparing the actual versus expected RMS values instead of peak values reduces the confusion introduced by the parasitic ringing evident on the measured waveform.
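The resulting Table 3 estimate is a one-liner; the values in this sketch are the calculated 11.3 VRMS against the measured 5.02 VRMS.

```c
#include <stdio.h>

/* Table 3, last row: scale Lc by the ratio of the expected RMS voltage
   across Ltsl to the RMS voltage actually measured across Lc. */
static double estimate_ltsl(double lc, double v_ltsl_rms, double v_lc_rms)
{
    return lc * (v_ltsl_rms / v_lc_rms);
}

int main(void)
{
    /* 100 nH * (11.3 / 5.02) ~= 225 nH: roughly twice Lc, i.e., about
       another 100 nH of leakage and PCB inductance in the loop. */
    printf("Ltsl ~= %.0f nH\n", estimate_ltsl(100e-9, 11.3, 5.02) * 1e9);
    return 0;
}
```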
With the total inductance in the secondary loop at 200 nH, the output current slew-down capability is reduced to -2,827 A/µs for the 32-stage design. For the 5,000 A/µs load slew-rate application, shorting out the actual Lc reduced the total secondary inductance back to 100 nH. For applications with a maximum load slew rate less than 3,000 A/µs, leaving the compensating inductors in place will reduce circulating high-frequency currents by half and reduce losses from these currents by 75%.
Obtaining leakage inductance
Knowing the actual leakage inductance in your TLVR loop will put you in the best position to get your output current slew rate while minimizing added losses caused by the TLVR loop. Discovering that one simple measurement will give you the necessary information is one example of what my colleagues and I pursue at Texas Instruments in the interests of power-management optimization.
Josh Mandelcorn has been on Texas Instruments’ Power Design Services team for almost two decades, focused on designing power solutions for automotive and communications/enterprise applications. He has designed high-current multiphase converters to power core and memory rails of processors handling large, rapid load changes with stringent voltage undershoot/overshoot requirements. He previously designed off-line AC/DC converters in the 250-W to 2-kW range with a focus on emissions compliance. He is listed as either an author or co-author on 17 US patents related to power conversion. He received a BSEE degree from Carnegie Mellon University, Pittsburgh, Pennsylvania.
Related Content
- Power Tips #131: Planar transformer size and efficiency optimization algorithm for a 1 kW high-density LLC power module
- Power Tips #132: A low-cost and high-accuracy e-meter solution
- Power Tips #118: Using interleaved ground planes to improve noise filtering from isolated power supplies
- Power Tips #126: Hot plugging DC/DC converters safely
References
- Schurmann, Matthew, and Mohamed Ahmed. “Introduction to the Trans-inductor Voltage Regulator (TLVR).” Texas Instruments Power Supply Design Seminar SEM2600, literature No. SLUP413. 2024-2025.
The post Power Tips #133: Measuring the total leakage inductance in a TLVR to optimize performance appeared first on EDN.
EDA’s big three compare AI notes with TSMC
The premise of artificial intelligence (AI) transforming the semiconductor industry is steadily taking shape, and two critical venues to gauge the actual progress are leading EDA houses and silicon foundries. The three major EDA toolmakers—Cadence, Synopsys, and Siemens EDA—have recently telegraphed their close collaboration on AI-driven design flows for TSMC’s advanced chip manufacturing nodes.
For a start, semiconductor fabs must have accurate lithography models for optical proximity correction in advanced manufacturing nodes. Huiming Bu, VP of Global Semiconductor R&D and Albany Operations at IBM Research, acknowledges that utilizing artificial intelligence and machine learning accelerates the development of highly accurate models that yield the best results during silicon fabrication.
On the design side, AI-powered EDA software is helping optimize complex IC designs while facilitating migration toward 2D/3D multi-die architectures. “Increased complexity, engineering resource constraints and tighter delivery windows were challenges crying out for a full AI-driven EDA software stack from architectural exploration to design and manufacturing,” said Shankar Krishnamoorthy, GM of Synopsys EDA Group.
Below is a brief recap of EDA toolmakers’ current liaison with TSMC centered on AI-driven design flows for advanced process nodes.
Start with Cadence Design Systems, working closely with TSMC on Cadence.AI, a chips-to-systems AI platform that spans all aspects of design and verification while facilitating digital and analog design automation using AI tools. The two companies are also collaborating on the Cadence Joint Enterprise Data and AI (JedAI) Platform, which employs generative AI for design debug and analytics.
Figure 1 The JedAI platform for generative AI applications provides workflow automation, model training, data analytics, and large language model (LLM) services. Source: Cadence
Synopsys also has its own AI-driven EDA suite, Synopsys.ai, for the design, verification, testing, and manufacturing of advanced digital and analog chips. Synopsys.ai includes DSO.ai, an AI application for optimizing layout implementation workflows, and VSO.ai, an AI-driven verification solution.
The company’s CEO Sassine Ghazi told the Synopsys User Group (SNUG) conference audience that Synopsys.ai has achieved hundreds of tape-outs to date and is delivering more than a 10% boost in performance, power, area (PPA), double-digit improvements in verification coverage, and 4x faster analog circuit optimization when compared to optimization without the use of AI.
Figure 2 Synopsys.ai offers AI-driven workflow optimization and data analytics solutions meshed with generative AI capabilities. Source: Synopsys
Like Cadence and Synopsys, Siemens EDA is extending its AI-centric collaboration with leading fabs like Intel Foundry and TSMC. Its new Solido Simulation Suite features AI-accelerated simulators for IC design and verification. The company has also unveiled Catapult AI NN software for High-Level Synthesis (HLS) of neural network accelerators integrated into application-specific integrated circuits (ASICs) and system-on-chips (SoCs).
Figure 3 Solido Simulation Suite integrates AI-accelerated SPICE, Fast SPICE, and mixed-signal simulators to help engineers accelerate critical design and verification tasks. Source: Siemens EDA
AI in the semiconductor industry is still in its infancy, and these efforts to create AI-optimized design flows mark baby steps for infusing AI into the electronics design realm. However, the timing seems right, given how advanced nodes are desperately seeking intelligent solutions to bolster yields and silicon defect coverage.
Related Content
- How to Make Generative AI Greener
- Optimizing Electronics Design With AI Co-Pilots
- Adapting the Microcontroller for AI in the Endpoint
- How generative AI puts the magic back in hardware design
- 4 basic steps in implementing an AI-driven design workflow
The post EDA’s big three compare AI notes with TSMC appeared first on EDN.
Cracking the case of a smartphone and its unfairly crack-accused case
By the time you read this, you likely will have already seen my coverage of Google's August product announcement event, which (as I write this) is a bit more than a week away and where the Pixel 9 series is forecast to be introduced. However, as regular readers may recall, I'm still toting two Pixel 7s as my smartphone "daily drivers", along with a Pixel 6a as my "vice-phone":
following a longstanding just-in-case spare strategy that, as you'll soon see, finally came in handy!
Smartphones' usage patterns make them particularly prone to being dropped, whether it's onto hard surfaces or into fluids (such as…err…what's in a toilet bowl). Mechanical robustness is therefore critical to long operating life, a particularly important requirement considering the importance of their stored data and apps to their owners, coupled with their high prices. All of which has made their dependence on glass materials a longstanding curiosity to me.
The screen's an obvious one you really can't avoid, at least until smart glasses go mainstream someday (theoretically), thereby obviating smartphones' reason for existing. Instead, smartphone designers are stuck with relying on structurally reinforced glass compounds, such as Corning's Gorilla Glass series, that claim to minimize the chances of a crack. Per Wikipedia:
Gorilla Glass…is a brand of chemically strengthened glass now in its ninth generation. Designed to be thin, light, and damage-resistant, its surface strength and crack-resistance are achieved through immersion in a hot potassium-salt ion-exchange bath.
But glass smartphone backs have always been a bit bizarre to me, no matter that I grok their conceptual benefits, both absolute and relative to alternative materials. Corning this time:
As leading device manufacturers unveil their latest models, many are making a shift to incorporate more advanced glass into their designs, and not just on the front as a protective cover glass. One place where more glass is appearing is on the back of new mobile consumer electronic devices. This trend is particularly exciting because glass offers benefits that other materials, like plastic and metal, just can’t offer.
With glass on the back as well as the front, the newest smartphones are meeting aesthetic and design milestones, including more elegant form factors. But, they offer additional performance benefits. The superior physical and electromagnetic properties of glass make it particularly well-suited to enable new capabilities device makers are incorporating into their designs.
So what are the benefits of having an all-glass smartphone?
- Improved Reception: Glass is ideal for the antenna performance of your phone, unlike aluminum and other materials. Metal materials like aluminum lack radio frequency (RF) transparency, meaning that the antenna embedded in your device has a harder time finding a signal. New all-glass phone designs mean more bars in more locations, leading to faster data transmission.
- Better Wireless Charging: Today's newest phones are moving to wireless charging, and for the same reasons listed above, metal can interfere with wireless charging technology. Glass, particularly thin, tough glass like Corning® Gorilla® Glass, is used as an alternative to metal on the back of phones so consumers get a faster charge without plugging in the device to a traditional charger.
- New Levels of Customization: Since the backs of phones don't have the same optical transparency requirements as the front display, glass can provide designers with new possibilities for customization. Corning's true-color glass ceramics — or Vibrant Gorilla Glass — offers superior scratch resistance compared to plastic and unlocks a full palette of color options with premium-quality, photo-realistic images.
And finally, there are the cameras, whose lenses’ outer elements aren’t typically directly exposed to the outside world. Instead, there’s an intermediary transparent (duh) protective cover that’s most commonly glass-fabricated as well. Here, for example, is the front camera of one of my Pixel 7s; note that I’ve also got a tempered glass screen protector on it:
And here’s the back, for now using a case-less “stock” photo:
Until recently, there was only one camera on the back of smartphones, akin to that on the front. But as the Pixel 7 exemplifies, things have gotten a bit more complicated nowadays. Left to right in the photo, integrated within a common “bar” that juts out from the back panel, you’ll see:
- The standard primary lens, with 82° field of view (FoV)
- A spectral/flicker sensor, and above it, dual side-by-side autofocus sensors (neither of which are clearly visible in the “stock” image; hold that thought)
- An ultrawide lens with 114° FoV
- A microphone input, and
- The LED flash
The Pixel 7 Pro adds a third 5x telephoto camera, to the right of its now-125° FoV ultrawide lens. And while the Pixel 7a looks similar to the Pixel 7 (albeit with a plastic-vs-glass back), its ultrawide camera is once again focal length-tweaked, this time to deliver a 120° FoV.
The cases I use on both of my Pixel 7s are Limitless models from Mous (Aramid Fibre and Black Leather, to be precise), and although they’re a fair bit more expensive than the no-names on Amazon, they’re pretty slick. For one thing, they’re quite rugged (again, hold that thought):
and they also have MagSafe-compatible magnets built into them:
Further to Mous's protection claims, if you revisit the earlier photo of the front camera, you'll notice a "lip" that extends above the screen. That "lip" goes all the way around the front, creating an "air gap" designed to prevent the screen from directly impacting the ground if the phone lands flat…as long as, for example, the ground isn't covered by rocks or other objects thick enough to surmount that gap.
What about that rear “camera bar”? Glad you asked. Here’s what both of my Pixel 7s, an “Obsidian” one on AT&T for personal use and a “Snow” one on Verizon for work (which I’ve also included for enhanced color-contrast viewing purposes), look like from an angle when encased:
Again, note the “lip”. Here’s the former, and the specific subject of today’s tale, straight on:
So…what happened? In late June, I happened to glance at the back of my AT&T-enabled Pixel 7 and saw what looked like an impact crater centered on top of the ultrawide camera lens, with cracks emanating from there all the way to the primary camera to its left. Unfortunately, it only occurred to me in retrospect that I should have snapped a photo of it (from another camera, obviously), but the damage looked similar to a photo posted by Kyriakos Ktorides on X/Twitter:
My immediate reaction was that I must have dropped it, with a pebble or the like impacting and cracking the cameras' common glass cover. But after racking my brain, I couldn't recall any time that I'd dropped the phone, landing on its back or in any other orientation. Theoretically, I suppose I could have popped it with the tip of a same-pocket key, but that also seemed unlikely.
So I hit up Google and was quickly (albeit vaguely) reminded of coverage I'd previously seen on this seemingly fairly widespread issue. Initially, all the reports I came across consistently mentioned the ultrawide camera lens location as the damage origination point, so I thought that perhaps optical zoom lens back-and-forth movement had impacted the glass, iteratively weakening it to the point where it finally fractured. But then I remembered that, with a few exceptions, smartphones' lenses don't actually have movable elements (aside from focus, that is). Instead, they interpolate between the viewpoints of multiple cameras, each with a fixed-focal-length lens, to generate the optical zoom-like effect. Plus, as my research eventually revealed, cracks seen by others didn't solely originate from the ultrawide camera area, anyway.
I’d also read on Reddit and elsewhere that Google had been telling folks that, in addition to phone-drop artifacts, this cracking might be caused by using the phone in conjunction with temperature-change extremes (cold climes, to be precise). However, we’re talking about mid-summer here, folks. My wife and I had also just returned from a trip, but given that airplane cabins are pressurized, I don’t think that pressure changes were to blame (although we did go between 7,500’ elevation in Colorado and less than 1,000’ in Indiana, so…yeah, no…)
Abundant reporting online suggests that spontaneous cracking, independent of mishandling or any environmental or other external factor, is the root-cause conclusion in situations such as mine. Thankfully, I ended up being doubly blessed. For one thing, I was relieved to learn that I had less than a month left on my factory warranty (I'd also purchased a third-party extended warranty on the phone, but its coverage for situations like this was unknown). And for another, although Google initially balked at covering these particular repairs, instead blaming user mishandling, the company ultimately relented and was doing them for free under warranty.
Two days after filing my claim with Google online on Saturday, June 29, I had a free-overnight-shipping box and FedEx label in my hands. Two days later, on July 3 (the day before the long holiday weekend) Google had already received and inspected my phone, confirming that its necessary repairs were warranty-covered. The following Tuesday, July 9th, it was back in my hands, courtesy of another one-day FedEx shipment and despite two intermediary weekends and a holiday. As documented, both the front and rear camera modules ended up being replaced in addition to the glass rear camera array cover. It’s seemingly good as new, and while it was away, I pressed my Pixel 6a into service in its stead, backing up the Pixel 7 then restoring the backup to the Pixel 6a beforehand, and reversing the process upon the Pixel 7’s return.
In closing, while the title of this piece refers to “cracking the case”, to date I admittedly remain a bit baffled as to exactly why the rear camera array cover spontaneously shattered. That said, Ars Technica coverage I came across in my research contained an interesting quote:
These specialized smartphone glass panels increase scratch resistance by building stress into the glass. We don’t know the manufacturer of Google’s camera glass, but a Corning engineer explains the general process in this Scientific American article, saying, “There’s a layer of compressive stress, then a layer of central tension, where the glass wants to press out, then another layer of compressive stress.” If you mess something up in your glass formula and these layers aren’t in a perfect balance, one day the glass will just go “pop” and you’ll get these outward mini explosions.
Here’s more background info from the Ars Technica piece:
We've seen this exact problem several times before in the world of smartphones. Samsung was hit with this issue in 2016 on the Galaxy S7 and again in 2021 on the Galaxy S20, both of which kicked off class-action lawsuits.
Further, the Google situation isn't restricted to the Pixel 7; user reports suggest that the Pixel 7 Pro and Pixel 7a are similarly afflicted. Nor did the company seemingly fix the problem in the successor Pixel 8 generation of products; here's just one of numerous case-study examples of cracking issues (and yes, Google once again seems to be initially balking at owning up to covering the repairs under warranty). Fortunately, my other (Verizon-enabled "Snow") Pixel 7 hasn't exhibited the same behavior, at least yet; its factory warranty also expired in mid-July, but its Preferred Care extended warranty coverage is from Google, so hope springs eternal.
Could folks who dropped their phones try to scam Google into repairing them for free, too? Perhaps. Google’s initial reticence is therefore at least somewhat understandable. But quoting a phrase I’ve also used in plenty of prior writeups, where there’s smoke there’s usually fire, and there seems to be a lot of smoke here. I hope Google sorts this situation out for its Pixel 9 and future smartphone families. And if any of you have glass-composition expertise, I’d love to hear your root-cause theories in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Playin’ with Google’s Pixel 7
- Google’s fall…err…summer launch: One-upping Apple with a sizeable product tranche
- The Pixel Watch: An Apple alternative with Google’s (and Fitbit’s) personal touch
- Pixel smartphone progressions: Latest offerings and personal transitions
- The 2024 Google I/O: It’s (pretty much) all about AI progress, if you didn’t already guess
- The Google Chromecast with Google TV: Realizing a resurrection opportunity
- Google’s Chromecast with Google TV: Car accessory similarity, and a post-teardown resurrection opportunity?
The post Cracking the case of a smartphone and its unfairly crack-accused case appeared first on EDN.
EMC: How to write a good benchtop test report
Before we delve into a step-by-step guide to good engineering practice, two thoughts to keep in mind. First, there are too many subpar test reports from engineers, who often do excellent work but aren’t given enough time to document it properly. As a result, when teams sit down to review the report weeks or even days later, important details are missing, and the test engineers have simply forgotten some of the more subtle points.
Second, there is a lack of solid guidelines for this process. A quick search on Google doesn't turn up much on the subject. These two factors led to the creation of this article.
This article won’t cover how to write an accredited EMC test report, which requires detailed documentation of things like test conditions, measurement uncertainty, and so on. Instead, it’s aimed at design engineers and test engineers who are doing hands-on work in their own labs.
Of course, managers are more than welcome to read along too. And if you’re a manager who finds this article helpful, feel free to share it with your team.
To kick things off, let’s start by looking at an example of what not to do.
Figure 1 The screenshot shows a benchtop EMC report that the author collected a while ago. Source: Min Zhang
Figure 1 shows a screenshot from a test report. Notice anything wrong?
First, there’s no setup picture. Anyone familiar with EMC/RF testing knows that even a minor setup error can lead to significantly different results. If a system is incorrectly tested as a failure, the result could lead to costly over-design. On the other hand, if a system is wrongly marked as a pass, finding out it fails at the accredited EMC lab will not only come as a surprise but will also add extra costs and impact your product’s time to market.
What else? The test conditions are not clearly defined. For example, what does the engineer mean by “extended the shield with extra foil”? It might have made sense at the time but trust me—when the same engineer revisits the report a month later, they probably won’t remember the specifics. Moreover, we’d want to know if this “extra foil” was terminated and, if so, how. Unfortunately, no supporting photo was provided.
Lastly, the engineer made a statement without offering an explanation or supporting evidence. When stating “noise was common mode rather than differential mode,” there were no test results to back this up. In EMC/RF testing, assumptions are not enough. A simple RF current probe clamped on the cable bundle could have provided the necessary proof for this statement.
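As an illustration of the kind of evidence that would have sufficed, below is a minimal Python sketch of the magnitude-only arithmetic, using hypothetical probe readings: one with the RF current probe clamped around the whole two-conductor bundle (differential-mode currents cancel, leaving twice the common-mode current) and one clamped around a single conductor (which carries both components). The readings and the in-phase addition are illustrative simplifications; a real separation would account for phase.

```python
import math

def dbua_to_ua(dbua: float) -> float:
    """Convert dBµA to linear µA."""
    return 10 ** (dbua / 20)

def ua_to_dbua(ua: float) -> float:
    """Convert linear µA back to dBµA."""
    return 20 * math.log10(ua)

# Hypothetical readings at the frequency of interest (dBµA).
bundle_reading = 46.0  # probe around both conductors: DM cancels, reads 2*I_CM
single_reading = 43.0  # probe around one conductor: reads |I_CM + I_DM|

i_cm = dbua_to_ua(bundle_reading) / 2
# Magnitude-only estimate, assuming the two components add in phase.
i_dm = abs(dbua_to_ua(single_reading) - i_cm)

print(f"Common-mode:       {ua_to_dbua(i_cm):.1f} dBuA")
print(f"Differential-mode: {ua_to_dbua(i_dm):.1f} dBuA")
mode = "common mode" if i_cm > i_dm else "differential mode"
print(f"Dominant mechanism appears to be {mode}.")
```

Documenting the two clamp positions and the resulting numbers in the report would turn the bare assertion into verifiable evidence.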
So, how should we write a high-quality test report? Based on years of troubleshooting EMI issues, here is our step-by-step guide to writing an effective engineering report.
Step 1: Sharpening the tools
Before diving into the setup, it’s crucial to ensure your equipment is ready for the task—much like how a carpenter sharpens the tools before starting a project. While we’re not literally “sharpening” anything, the idea is the same: before any RF measurement, always check if your equipment is up to the job. Let me share an example to emphasize the importance of this step.
In one case, a client’s expensive receiver had a damaged RF front-end, and they were unaware of it. This is a classic case of what I call “engineers’ bias”—the belief that high-priced equipment is inherently reliable. Engineers often place full confidence in costly instruments, but even these can fail.
If your spectrum analyzer has a tracking generator, you can easily check for this issue. Simply connect the TG output to the spectrum analyzer input using a coaxial cable and perform a TG scan. You should see a flat, straight line across the whole frequency range at the supplied TG power level (often between -20 dBm and 0 dBm). This is shown in Figure 2, and a scripted version of the same check is sketched after the figure.
Figure 2 Here is a way to check the RF front-end of a spectrum analyzer. Source: Min Zhang
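For those who script their bench checks, here is a rough pyvisa-based sketch of how such a TG flatness scan might be automated. The VISA address and SCPI commands below are placeholders; command mnemonics differ between vendors, so consult your instrument's programming manual before use.

```python
import pyvisa

rm = pyvisa.ResourceManager()
sa = rm.open_resource("TCPIP0::192.168.1.50::INSTR")  # placeholder address

sa.write("*RST")                 # start from a known state
sa.write("FREQ:STAR 9 kHz")      # placeholder span: the instrument's full range
sa.write("FREQ:STOP 3 GHz")
sa.write("OUTP:STAT ON")         # enable the tracking generator
sa.write("SOUR:POW -10")         # TG level within the -20 to 0 dBm window
sa.write("INIT:IMM;*WAI")        # single sweep, wait until done

# Read the trace back as comma-separated amplitude values.
trace = [float(v) for v in sa.query("TRAC? TRACE1").split(",")]

# A healthy front-end yields a flat line at the TG level; large ripple
# or sag over part of the band hints at front-end damage.
ripple_db = max(trace) - min(trace)
print(f"Trace ripple: {ripple_db:.1f} dB",
      "-> looks flat" if ripple_db < 3 else "-> investigate front-end")
```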
The same principle applies to other test equipment. For instance, if an RF current probe is accidentally dropped, its transfer impedance may be affected. In such a case, you should recalibrate the probe before proceeding with the test. Likewise, line impedance stabilization networks (LISNs) should be regularly checked to ensure their impedance conforms to the relevant standards.
It’s also important to document the last calibration, characterization, or inspection date for all equipment used in the test. If an incident occurs (like dropping the current probe), make sure to record it in the report. This guarantees traceability. While you can continue with the test, keep in mind that such events increase measurement uncertainty.
Step 2: Test set-up
You must clearly show detailed photos of the test set-up. This should include an overall view of the test arrangement, as well as close-up shots of specific details, such as the bonding wire, how it’s bonded, and whether a continuity check was performed on the connections.
For conducted emissions/immunities and transient tests, include images showing the bonding of the test equipment, cable layout, and details of the device under test (DUT) bonding, particularly if a bonding wire is used to connect to the test ground plane.
For radiated emissions tests, assuming you're conducting them in your own lab, make sure to include pictures that show the antenna set-up. Note that radiated immunity testing can interfere with the surrounding electromagnetic environment, so you should not perform it without a shielded tent.
Do we need to include a system diagram of the test setup?
The short answer is yes, preferably. While drawing a system diagram may take more time than simply snapping a photo, it's still important to include a simple diagram. Popular tools for creating system diagrams include Microsoft Visio and PowerPoint. Figure 3 shows a system diagram drawn using Keynote on a MacBook. If you're more artistically inclined, feel free to use other drawing tools—some engineers prefer this approach.
Figure 3 The system diagram shows a test set-up for CISPR 25 conducted emissions. Source: Min Zhang
Figure 4 shows the actual test set-up used for the conducted emission test, providing an overall view of the test arrangement along with a list of the equipment used.
Figure 4 The actual test set-up, listing all the equipment used. Source: Min Zhang
Other key information
Your report should also include details about the power supply settings, such as voltage and current. If there’s any supporting equipment for the DUT, make sure to capture this in both the photos and the system diagram while documenting the operational status in the report.
Environmental conditions such as room temperature and humidity are generally not required for in-house tests, but if you’re conducting electrostatic discharge (ESD) investigations, it’s important to document these factors, as humidity can affect the test results.
Always test and measure the ambient EM noise before starting any benchtop EMC test, and document these results thoroughly in the test report. Typically, a benchtop power supply can introduce internal noise, which may be picked up by the LISN during conducted emission tests.
Additionally, LED lights and nearby equipment often generate EM noise, which can easily couple to the DUT’s cable leads and impact the emission readings. When working without a shielded environment—which is often the case for design engineers testing and troubleshooting on the bench—the best practice is to benchmark the ambient noise. This can be done using the spectrum analyzer itself or by using software to save the ambient noise data for comparison in future studies.
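As a concrete version of that habit, here is a minimal sketch of saving an ambient trace to a timestamped CSV file for later comparison. The frequency and level arrays are placeholder data standing in for a real analyzer readout or software export, and the file name and note text are illustrative.

```python
import csv
from datetime import datetime

def save_trace(path, freqs_hz, levels_dbuv, note=""):
    """Store a trace with a timestamp and free-text note for traceability."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["# saved", datetime.now().isoformat(), note])
        writer.writerow(["freq_hz", "level_dbuv"])
        writer.writerows(zip(freqs_hz, levels_dbuv))

# Placeholder data standing in for a real ambient sweep.
freqs_hz = [150e3, 500e3, 1e6, 5e6, 30e6]
ambient_dbuv = [22.1, 18.4, 15.0, 12.3, 9.8]

save_trace("ambient_bench.csv", freqs_hz, ambient_dbuv,
           note="bench PSU idle, LED lights on, no DUT")
```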
Step 3: Obtaining test results
It’s always a good idea to save results directly from the equipment or through a connected computer (assuming the necessary software is installed), rather than relying on a photo of the screen. This approach offers several advantages.
First and foremost, modern equipment software typically provides far more information than what’s visible on the screen, such as the date, time, and sampling rate (for an oscilloscope, for example). Additionally, saving data digitally avoids potential issues like reflections that may occur in photos.
Another benefit is that multiple traces can later be processed for comparison purposes. Some software even allows you to document extensive details, such as test conditions and operation modes, making the report more comprehensive and traceable.
An example is shown in Figure 5.
Figure 5 Here is an example of test results generated by the Tekbox EMCview software. Source: Min Zhang
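Once traces live in files rather than screen photos, this kind of comparison becomes a few lines of code. The sketch below, with placeholder data and an illustrative 6 dB rule of thumb (not a standard requirement), flags frequencies where the ambient noise sits too close to the DUT measurement for the reading to be trusted.

```python
# Placeholder traces standing in for saved ambient and DUT sweeps.
freqs_hz = [150e3, 500e3, 1e6, 5e6, 30e6]
ambient_dbuv = [22.1, 18.4, 15.0, 12.3, 9.8]
dut_dbuv = [35.6, 21.0, 30.2, 14.1, 25.7]

MARGIN_DB = 6.0  # illustrative margin below which ambient may dominate

for f, amb, dut in zip(freqs_hz, ambient_dbuv, dut_dbuv):
    status = ("attributable to DUT" if dut - amb >= MARGIN_DB
              else "ambient-limited, re-check")
    print(f"{f / 1e6:8.3f} MHz: DUT {dut:5.1f} dBuV, "
          f"ambient {amb:5.1f} dBuV -> {status}")
```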
Step 4: Analyzing test results
For a junior engineer, analyzing test results can seem daunting, but we encourage you to give it your best effort. To begin, focus on identifying the failure mode—sometimes it might be a resonance in the spectrum scan, or a narrowband signal failure. It’s important to provide some form of explanation. A good example is shown in Figure 6.
Figure 6 It's important to provide some form of explanation while analyzing the test results. Source: Min Zhang
In this radiated emission result, two issues are evident. First, there’s a broadband noise profile in the 50 to 80 MHz frequency range, and second, there are narrowband noise characteristics between 100 and 200 MHz. Additionally, a single narrowband spurious signal appears at 222 MHz.
In this case, we highlighted these areas of interest and provided explanations for each. As always, if you suspect a specific culprit is causing the noise, prove it by providing further results—this significantly enhances the value of your analysis, as demonstrated in Figure 6.
What if you don’t fully understand what’s happening? At the test stage, at the very least, offer a few potential explanations. You can say, “We believe it could be one of the following reasons,” and list some possibilities. You can also mention that further testing or simulation may be needed to pinpoint the root cause. This is important because when a team of engineers reviews the report together, other team members often contribute valuable insights and suggestions.
Step 5: Troubleshooting and fixing
If the test report includes troubleshooting and fixes, the solutions must be clearly stated and supported with sufficient evidence. This should include photos, test results, and a clear rationale for the fix. For example, an engineer might say, “The power cable connected to the motor proved to be the main radiating mechanism, and a ferrite sleeve on the mains cable solved the problem.”
However, this approach is problematic for the reasons we’ve discussed earlier. A more effective statement would be:
“The motor power cable was identified as the main source of radiated emissions, as disconnecting the cable significantly reduced the noise between 50 and 80 MHz. We then applied XXX (part number) ferrite cores to the motor cable, placing it near the motor connector, and ensured the ferrite cores were close to the vehicle chassis (the location is crucial). As shown in Figure 7, this resulted in improved performance. See the comparison of the before and after results in Figure 7.”
Figure 7 This is how the troubleshooting section of a report should look. Source: Min Zhang
By stating the troubleshooting results in this manner, you provide far more confidence in the solution.
Step 6: Summary and conclusion
We believe a good report should also include suggestions, recommendations, or actions that need to be taken. Engineers may propose design changes, but it’s important to list the potential risks associated with those changes. This highlights that EMC engineering often involves compromise. While engineers may make solid suggestions, they must also consider other factors such as thermal or mechanical design, which might complicate implementation.
It’s also essential to consider alternative fixes. During troubleshooting, you are often limited by the tools at hand, and the solution you find may not be the most cost-effective. This is especially relevant for volume manufacturers, where even small cost differences can have a significant impact.
By this point, we’ve provided readers with a solid guide to writing a benchtop EMC test report. The principles outlined here are applicable across many areas of engineering. We welcome your suggestions and feedback.
Dr. Min Zhang is the founder and principal EMC consultant at Mach One Design, a UK-based engineering firm specializing in EMC consulting, troubleshooting, and training. He currently chairs the IEEE EMC Chapter for the UK and Ireland branch. Zhang can be reached at min.zhang@mach1design.co.uk.
Related Content
- EMC EMI RFI ESD
- Find EMI sources with an oscilloscope
- EMC design contest to focus on signal hunting
- An introduction to troubleshooting EMI problems
- EMI emissions testing: peak, quasi-peak, and average measurements
The post EMC: How to write a good benchtop test report appeared first on EDN.
Software eases embedded GUI development
The complimentary Microchip Graphics Suite (MGS) lets designers create sophisticated embedded GUIs for 32-bit MCUs and MPUs. Designed to integrate easily with Microchip's broad portfolio of 32-bit devices, MGS works with multiple development platforms, including MPLAB Harmony v3 and Linux environments. Additionally, the suite helps improve reusability across projects and simplify design complexities.
Compositional tools in MGS include a WYSIWYG drawing screen and tools for layer management and widget editing. Also included is a simulator for hardware-free prototyping. By using the MPLAB Code Configurator (MCC), the simulator builds the MCC-generated C code in either web or native mode. In web mode, the tool creates an HTML file that can run on most web browsers with simulated touch interactivity. In native mode, the simulator enables debugging of the GUI on a Windows desktop computer.
MGS can build GUIs for both resource-constrained devices with lower memory and system requirements and high-performance devices featuring tablet-sized touchscreens and high-fidelity video playback. It supports displays ranging from monochrome OLEDs to 1080p 16.7M color TFTs, with interfaces like MIPI DSI, LVDS, RGB, SPI, and HDMI, as well as touchscreens with 2D/3D gesture support.
The Microchip Graphics Suite is available now and free to download here.
The post Software eases embedded GUI development appeared first on EDN.
SiC MOSFETs target EV traction inverters
ST has launched its fourth-generation STPOWER SiC MOSFETs, offering smaller size and greater efficiency for future EV traction inverters. Set to be available in 750-V and 1200-V classes, these devices will enhance energy efficiency and performance in 400-V and 800-V bus traction inverters, bringing SiC advantages to mid-size and compact EVs.
According to ST, Gen 4 SiC MOSFETs feature significantly lower RDS(on) compared to previous generations, reducing conduction losses. They support faster switching speeds, which lead to lower switching losses. Gen 4 technology also offers greater durability under dynamic reverse bias conditions, surpassing the AQG 324 automotive standard. Additionally, the die size of Gen 4 devices is 12% to 15% smaller than Gen 3 at the same RDS(on) at 25°C, enabling more compact power converter designs.
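To see why the lower on-resistance matters, a quick back-of-the-envelope conduction-loss comparison follows; the current and resistance values are hypothetical illustrations, not ST specifications.

```python
# Illustrative conduction-loss comparison: P_cond = I_rms^2 * R_DS(on).
i_rms = 300.0    # A, assumed phase current in a traction inverter leg
r_gen3 = 2.0e-3  # ohm, assumed previous-generation on-resistance, hot
r_gen4 = 1.6e-3  # ohm, assumed 20% lower Gen 4 on-resistance, hot

p_gen3 = i_rms**2 * r_gen3  # 180 W dissipated per device
p_gen4 = i_rms**2 * r_gen4  # 144 W dissipated per device
print(f"Gen 3: {p_gen3:.0f} W, Gen 4: {p_gen4:.0f} W "
      f"({100 * (1 - p_gen4 / p_gen3):.0f}% lower conduction loss)")
```

Even a modest on-resistance reduction compounds across the six switches of a three-phase inverter, which is where the efficiency and die-size claims become meaningful.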
ST has completed qualification of its fourth-generation SiC platform for the 750-V class and expects to qualify the 1200-V class by the first quarter of 2025. Commercial availability of 750-V and 1200-V devices will follow, enabling designers to target applications from standard AC-line voltages to high-voltage EV batteries and chargers.
The post SiC MOSFETs target EV traction inverters appeared first on EDN.
VNA boasts four independent RF sources
Part of Keysight’s PNA-X vector network analyzer portfolio, the NA520xA integrates up to four RF sources, two combiners, and two low-noise receivers. Its independent RF sources with pulse modulators and source filters eliminate the need for external sources, simplifying complex component characterization in a single test setup.
The NA520xA series comprises three configurable models, with frequency ranges of 10 MHz to 26.5 GHz, 43.5 GHz, and 50 GHz. Key benefits of the NA520xA PNA-X include:
- Expanded Measurement Versatility – Two low-noise receivers and signal combiners enable noise figure and intermodulation distortion measurements in two directions, without external switches.
- Advanced Receiver Architecture – Eight wideband, high dynamic range, pre-selected receivers enable faster S-parameter and spectrum analysis measurements, as well as high-resolution pulsed RF measurements.
- Flexible Design Verification Setup – Precision network analysis on a wide variety of complex active devices leveraging direct receiver access and accessible front-panel loops.
Price quotes for the NA520xA VNA can be requested online. To learn more about the PNA-X family of network analyzers, click here.
The post VNA boasts four independent RF sources appeared first on EDN.
Arduino Cloud launches on AWS Marketplace
Amazon Web Services (AWS) customers can now access Arduino Cloud directly through the AWS Marketplace, streamlining IoT project development. Designed to simplify the management of IoT initiatives for professionals and businesses, Arduino Cloud integrates easily within the AWS ecosystem. With AWS Marketplace offering thousands of software listings from independent vendors, users can quickly find, test, purchase, and deploy software that runs on AWS.
With the Arduino Cloud all-in-one IoT platform, developers and enterprises can leverage a scalable, reliable, and accessible solution to build IoT applications quickly. It enables field data availability at scale for processing and control. Arduino Cloud also supports diverse IoT deployments with extensive edge and hardware options, backed by a strong developer community.
Arduino Cloud offers two plans to cater to different organizational needs. The Prototyping Plan is designed for prototyping, testing, and managing proof of concepts (PoCs) and pilot deployments, with an entitlement for up to 20 devices. The Enterprise Plan allows users to build, monitor, and control fleets of devices at scale, accommodating up to 500 devices.
Arduino Cloud is now generally available on AWS Marketplace. For more information, visit the Arduino Cloud AWS Marketplace page.
The post Arduino Cloud launches on AWS Marketplace appeared first on EDN.