EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
URL: https://www.edn.com/
Updated: 1 hour 32 min ago

A brief history and technical background of heat shrink tubing

Tue, 01/07/2025 - 14:50

Heat shrink tubing, rarely referred to simply as “HST” even in our acronym-intensive world, is made of cross-linked polymers and is primarily used to cover and protect wire splices. EDN and Planet Analog contributor Bill Schweber provides a sneak peek of this important but often underrated technology in his latest blog.

Read the full story at EDN’s sister publication, Planet Analog.


CES 2025: Moving towards software-defined vehicles

Mon, 01/06/2025 - 17:20
Major CES 2025 theme: SDVs

Software-defined vehicles (SDVs) are a big theme at CES this year, shifting vehicles from hardware-centric upgrades to over-the-air (OTA) software upgrades. To do this, vehicle subsystems must rely on a more or less generic processing platform that can perform a wide variety of functions to serve the various aspects of a car. As shown in Figure 1, TI’s approach is to shift from a “domain” architecture to a “zonal” one, where ECUs that were once custom-tailored to specific domains (e.g., powertrain, ADAS, infotainment, body electronics and lighting, passive safety) are now organized by location, or zone, to reduce weighty wire harnessing and improve processing speeds.

Figure 1 Traditional domain versus zone architecture. Source: Texas Instruments

TI’s radar sensor, audio processors, Class-D amplifier

TI’s automotive innovations are currently focused on powertrain systems, ADAS, in-vehicle infotainment (IVI), and body electronics and lighting. The recent announcements fall under ADAS, with the AWRL6844 radar sensor, and under IVI, with the AM275 and AM62D processors and a class-D audio amplifier.

ADAS: passenger safety solution

The AWRL6844 radar sensor uses 60-GHz millimeter-wave (mm-wave) sensing with a 4×4 antenna array and edge AI models running on an on-chip TI-specific accelerator and DSP to support several in-vehicle safety measures, including occupancy monitoring for seat belt reminders, child presence detection, and intrusion detection (Figure 2). Presently, OEMs resort to a combination of in-seat weight sensors, two UWB sensors for front-row and back-row child presence detection, and an ultrasonic intrusion module to cover the same safety measures; direct sensing instead tracks human activity such as respiration, heartbeat, and movement. The technology is designed to assist OEMs in meeting evolving regulatory safety requirements such as Euro New Car Assessment Programme (NCAP) Advanced, which rewards manufacturers for implementing advanced safety technologies as a complement to its established star rating system. Yariv Raveh, vice president and business unit manager for radar, stated: “In 2025 the Euro NCAP requirement for child presence detection will only award points for a direct sensing system and in the near future, the in-cabin sensing system must accurately distinguish between a child and an adult in order to provide a good user experience.”

Figure 2 A block diagram of TI’s AWRL6844 radar sensor and the three vehicle modes that the sensor can assist with (seat belt reminder, child presence detection, and intrusion detection). Source: Texas Instruments

IVI: Premium audio solution 

Features of the new AM275x-Q1 and AM62D-Q1 processors include two vector-based C7x DSP cores, multiple Arm cores, on-chip memory, an NPU accelerator, and audio networking with Ethernet AVB. The differences between the processors are highlighted in Figure 3. “Tier 1 suppliers must select the appropriate processing components to meet all of their customer needs across their fleets. So, our answer is to provide two different architectures to give engineers the flexibility to choose across the range of use cases, all using the same audio processing family where engineers can design standalone and integrated premium audio systems across a range of performance levels with minimal additional hardware and software investment,” said Sonia Ghelani, TI’s product line manager for signal processing MCUs. The company is actively working with customers to incorporate AI into the audio signal chain for unique solutions in applications such as active noise cancellation (ANC) and road noise cancellation (RNC).

Figure 3 The AM275x DDR-less MCU and the AM62D DDR-based processor for premium audio in IVI applications. Source: Texas Instruments

IVI: Class-D audio amplifier

The TAS6754-Q1 class-D amplifier (Figure 4) is meant to help engineers implement TI’s “1L” modulation scheme, a technology that lowers the inductor count per audio channel to one (hence the name “1L”). Modern vehicles can embed well over 20 speakers and, in an effort to reduce size, weight, and cost, class-D amplifiers are being used for their higher power efficiency and lower thermal dissipation. However, these amplifiers generally require two LC filters per audio channel to attenuate high-frequency noise. “1L maintains class-D performance while reducing component count and cost, allowing the premium audio system to grow in terms of speakers and mics,” added Sonia Ghelani.

Figure 4 Sample vehicle speaker and mic distribution as well as a sample block diagram of an audio signal chain including TI’s class-D amplifier. Source: Texas Instruments

Blurring the lines between IVI and ADAS

One major discussion during the press briefing involved the industry trend of integrating ADAS and IVI functions on a single SoC. “So today we see that they’re in two separate boards, however, more and more we’re seeing that they end up being in the same board,” said Mark Ng, TI’s director of automotive systems. Sonia Ghelani added an example of the overlap between ADAS and IVI functions: “These chimes and seat belt reminders are ADAS requirements that fall into the audio domain. As we move into a world of software-defined cars with more zonal architectures, you’ll continue to see an overlap between the two.” She continued, “For TI it’s important that we understand exactly what the customer is trying to build so that we don’t silo these systems in one bucket or another, but rather understand what problems the customer is trying to solve.”

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.


A two transistor sine wave oscillator

Mon, 01/06/2025 - 16:59

Figure 1 shows a variation on a sine wave oscillator; it uses just two transistors and a single variable resistor to set the frequency.

Figure 1 Just a couple of components are needed for a simple tunable sine wave oscillator.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The section around Q1 is a multiple-feedback bandpass filter (MFBF). The usual embodiment of this type of filter is shown in Figure 2.

Figure 2 A standard implementation of a MFBF.

The formulas for this filter can be found in almost any textbook (where C = C1 = C2):
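For reference, a commonly cited textbook form of these expressions for the Figure 2 topology (an assumption here, since naming conventions vary between texts; R1 is the input resistor, R2 the feedback resistor, and R3 the resistor to ground) is:

$$f_0 = \frac{1}{2\pi C}\sqrt{\frac{R_1 + R_3}{R_1 R_2 R_3}}, \qquad A_v = -\frac{R_2}{2R_1}, \qquad Q = \pi f_0 R_2 C$$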

Please note that the center frequency, among other parameters, is determined by the resistance of R3. The gain of the filter is determined by the ratio of R2 to R1 such that Av = -R2/(2·R1). Usually this filter is implemented around an operational amplifier, but it can also be implemented around an inverting transistor amplifier. However, because of the limited open-loop gain of the latter, the gain will be lacking at the higher frequencies.

The section around Q2 is an inverting amplifier, with an unloaded gain set by R8/R7. D1 and D2 together with R8 form a clipper to make sure that the signal offered to the MFBF is of constant level.

At the center frequency of the filter, the phase shift is 180°. Together with the 180° phase shift of Q2, there is a total 360° phase shift at this frequency. The loop gain is >1 thanks to the ample gain of Q2. Thus, the Barkhausen criterion is met.
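Stated compactly, with amplifier gain A and feedback factor β around the loop, oscillation builds up when:

$$|\beta A| \ge 1 \quad \text{and} \quad \angle(\beta A) = n \cdot 360^\circ, \; n = 0, 1, 2, \ldots$$

Here the D1/D2 clipper bounds the amplitude once oscillation is established.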

The relatively soft clipping of D1 and D2, together with the filtering of Q1, limits the amount of harmonics in the output signal. The passive components around Q1 determine the center frequency.

With the current values, the frequency can be set between 498 Hz and 1230 Hz by changing R3 between 1 kΩ and 6 kΩ. At the same time, the output amplitude changes from 1.28 Vpp to 0.68 Vpp. The output shows around 1% distortion (Figure 3).

Figure 3 The scope image shows the oscillator output at circa 1 kHz.

A variation in the supply voltage from 9 V to 12 V causes a frequency variation of only 2 Hz and a variation of output amplitude from 0.80 Vpp to 0.86 Vpp.

Cor van Rij blew his first fuse at 10 under the close supervision of his father, who promptly forbade him to ever work on the house mains again. He built his first regenerative receiver at the age of 12, and his boyhood bedroom was decorated with all sorts of antennas, while a huge collection of disassembled radios took up every horizontal plane. He studied electronics and graduated cum laude. He worked as a data design engineer and engineering manager in the telecom industry, and has been working for almost 20 years as a principal electrical design engineer, specializing in analog and RF electronics and embedded firmware. Every day is a new discovery!


CES 2025’s sensor design highlights

Mon, 01/06/2025 - 15:55

Sensing solutions—a vital ingredient in automotive, consumer, and industrial applications—are prominent features in the offerings displayed at CES 2025, held in Las Vegas, Nevada from 7 to 10 January. That encompasses sensing solutions packed into system-on-chip (SoC) devices as well as hardware components meshed with sensor fusion algorithms.

But the most exciting foray in this year’s sensor parade at CES 2025 relates to how artificial intelligence (AI) content is incorporated into sensing designs.

Read the full story published at EDN’s sister publication, Planet Analog.

 


Is Imagination Technologies for sale again?

Mon, 01/06/2025 - 07:14

Graphics chip designer Imagination Technologies is up for grabs again. A Bloomberg report claims that Canyon Bridge Capital Partners, the private equity firm with ties to Chinese state investors, has hired Lazard Inc. to seek a buyer for the Hertfordshire, U.K.-based chip designer.

Imagination, once a promising graphics technology outfit, could never recover after the Apple fiasco and the perception of Chinese ownership. According to media reports, Apple, which owned an 8.1% stake in Imagination, considered buying the British chip designer in 2016. However, after failing to agree on Imagination’s valuation, Apple left the negotiating table and announced that it would start developing its own graphics IP.

Apple contributed nearly half of Imagination’s sales, and its departure sent shock waves through the British chip company at the time. The company’s stock fell by 70%, and in 2017, Canyon Bridge, backed by state-owned China Reform, acquired Imagination for $686 million. Soon after, Imagination began shedding its non-core businesses; for instance, it sold its connectivity business Ensigma, comprising Wi-Fi and Bluetooth silicon, to Nordic Semiconductor.

Next came the issue of China gaining access to key semiconductor technology. The effort to appoint new board members and Imagination’s proposed listing in Shanghai proved hot potatoes, leading to intervention from U.K. regulators to ensure that Imagination remained a U.K.-headquartered business. The company has been in distress since then.

Figure 1 Imagination holds more than 3,500 patents covering graphics and related technologies.

Its CEO, Simon Beresford-Wylie, has denied a recent Daily Telegraph report that he’s stepping down. He also rejected some reports about the company engaging in illicit transfers of technology to China. Earlier, in November 2023, Reuters reported that Imagination was laying off 20% of its staff.

With this backdrop, let’s go back to Imagination being on the selling block. The Bloomberg report named Alphabet Inc.’s Google, MediaTek, Renesas, and Texas Instruments as Imagination’s key clients, but no suitors have been reported in the trade media yet.

Imagination’s owners are pinning their hopes on two major factors. First, they draw hope from Nvidia’s runaway success in the graphics realm, though Nvidia’s GPUs are targeted at entirely different markets such as data centers and scientific computing. Imagination, on the other hand, mainly offers graphics solutions for lower-power markets such as automotive, PC cards, drones, robotics, and smartphones.

Second, like Nvidia, Imagination aims to bolster its standing by incorporating artificial intelligence (AI) content in its graphics IP offerings. The British chip firm plans to turn its graphics IP into AI accelerators for low-power training and inference applications.

Figure 2 Imagination is aiming to bring graphics-centric AI to battery-powered devices like drones and smartphones.

Imagination, founded in 1985, has come a long way in its 40-year-long technology journey. Once seen as a jewel in Britain’s technology crown, it’s now facing the paradox of a struggling company with a highly promising technology. Perhaps its new owner could address that paradox and put the graphics design house in order.


Clapp versus Colpitts

Fri, 01/03/2025 - 17:14

Edwin Henry Colpitts (January 19, 1872 – March 6, 1949)
James Kilton Clapp (December 03, 1897 – 1965)

The two persons above are the geniuses who gave us two classic oscillator circuits as shown in Figure 1.

Figure 1 The two classic oscillator circuits: Colpitts (left) and Clapp (right).

We’ve looked at these two oscillators individually before in “The Colpitts oscillator” and “Clapp oscillator”.

However, a side-by-side examination of the two oscillators is additional time well spent.

The Clapp oscillator was devised as an improvement over the Colpitts oscillator by virtue of adding one capacitor, C3, in the above image.

The amplifier “A” is nominally at a gain value of unity, but as a matter of practicality, the gain value is slightly lower than that because the amplifier is really a “follower”. If made with a vacuum tube, then “A” is a cathode follower. If made with a bipolar transistor, then “A” is an emitter follower. If made with a field effect transistor, then “A” is a source follower. The concept itself remains the same.

Each oscillator works because the RLC network develops a voltage step-up at the frequency of oscillation. The “R” is not an incorporated component though. The “R” (R1 or R2) simply represents an output impedance of the follower. The 10 ohms that we see here is purely an arbitrary value guess on my part. The other components are also of arbitrary value choices, but they are convenient values for illustrating just how these little beasties work.

We use SPICE simulations to examine the transfer functions of the two RLC networks as shown in Figure 2.

Figure 2 Colpitts versus Clapp SPICE simulations using the transfer functions of the two RLC networks.

Each RLC network has a peak in its frequency response which will result in oscillation at that peak frequency. However, the peak of the Clapp circuit is much sharper and narrower than that of the Colpitts circuit. This narrowing has the beneficial effect of suppressing spectral noise centered around the oscillation frequency.

Note in the examples above that the oscillation peaks differ by 0.16% and that the reactance of the L1 inductor and the reactance of the L2 C3 pair differ by 1.12%. That’s just a matter of my having chosen some convenient numbers with the intent of having the two curves match in that regard at the same peak frequency. (I almost succeeded.)

The Clapp oscillator has several advantages over the Colpitts oscillator. The transfer function peak of the Clapp circuit is narrower than that of the Colpitts which tends to yield an oscillator output with less spurious off-frequency energy meaning a “cleaner” signal.

Another advantage of the Clapp circuit is that capacitors C4 and C5 can be made very large as the L2 C3 combination is made to look like a very small inductance value at the oscillation frequency. The larger C4 and C5 values mean that any variations of those capacitance values brought about by variations of the input capacitance of the “A” stage have a minimal effect on the oscillation frequency.

That’s because frequency control of the Clapp circuit is primarily set by the series resonance of the L2 C3 pair rather than the parallel resonance of L1 versus the C1 C2 pair in the Colpitts circuit. If the “A” input capacitance tends to vary for this reason or that, the Clapp circuit is far less prone to an unwanted frequency shift as shown in Figure 3.

Figure 3 A Clapp versus Colpitts frequency shift comparison showing how the Clapp circuit (right) is far less prone to this unwanted shift in frequency.
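For reference, the ideal resonance relations behind this comparison (a standard approximation, assuming lossless components, a unity-gain follower, and the designators of Figure 1) are:

$$f_{\text{Colpitts}} \approx \frac{1}{2\pi\sqrt{L_1 \dfrac{C_1 C_2}{C_1 + C_2}}}, \qquad f_{\text{Clapp}} \approx \frac{1}{2\pi\sqrt{L_2 C_{\text{eff}}}}, \quad \frac{1}{C_{\text{eff}}} = \frac{1}{C_3} + \frac{1}{C_4} + \frac{1}{C_5}$$

With C4 and C5 made much larger than C3, the effective capacitance approaches C3, which is why the Clapp frequency is set almost entirely by the series L2-C3 pair and is largely immune to variations in the follower’s input capacitance.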

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).


Industrial MCU packs EtherCAT controller

Thu, 01/02/2025 - 22:51

GigaDevice has introduced the GD32H75E 32-bit MCU, featuring an integrated GDSCN832 EtherCAT subdevice controller, which is also available as a standalone device. Both components target industrial automation applications, including servo control, variable frequency drives, industrial PLCs, and communication modules.

Powered by an Arm Cortex-M7 core running at up to 600 MHz, the GD32H75E microcontroller includes a DSP hardware accelerator, double-precision floating-point unit, hardware trigonometric accelerator, and filter algorithm accelerator. It also comes with 1024 KB of SRAM, up to 3840 KB of flash memory with security protection, and a 64-KB cache to enhance CPU efficiency and real-time performance.

The MCU’s integrated EtherCAT subdevice controller, licensed from Beckhoff Automation, manages EtherCAT communication, acting as an interface between the EtherCAT fieldbus and the sub-application. It includes two internal PHY ports and an external MII. With 64-bit distributed clock support, it enables synchronization with other EtherCAT devices, achieving DC synchronization accuracy to within 1 µs.

The GD32H75E MCU is available in two variants: one with two internal Ethernet PHYs and another that supports bypass mode, both housed in BGA240 packages. Samples and development boards are available now, with mass production planned for Q2 2025.

GD32H75E product page

GigaDevice 


Wireless audio SoC integrates AI processing

Thu, 01/02/2025 - 22:51

Airoha Technology’s AB1595 Bluetooth audio chip features a 6-core architecture and a built-in AI hardware accelerator. It consolidates functions typically spread across multiple chips into a single SoC and achieves Microsoft Teams Open Office certification.

The AB1595 uses AI algorithms and input from up to 10 microphones to improve speech clarity by reducing background noise. This combination allows it to accurately distinguish between the user’s voice and environmental sounds, achieving professional-grade speech quality. In noisy environments like offices and cafes, it enhances voice noise suppression from 10 dB up to 40 dB, optimizing speech quality and elevating consumer headsets to professional teleconference standards.

Real-time adaptive active noise cancellation (ANC) in the AB1595 boosts environmental noise attenuation across a wide frequency range. It detects the user’s wearing condition (e.g., fit or leakage) and adjusts compensation accordingly. Internal filters automatically adapt to both the fit and surrounding noise, balancing effective noise cancellation with comfort for a superior wearing and listening experience.

Airoha reports that the AB1595 has been adopted by customers, with products expected to be available in Q1 2025. A datasheet was not available at the time of this announcement. Contact Airoha Technology here.

Airoha Technology 


85-V LED driver handles multiple topologies

Thu, 01/02/2025 - 22:51

Designed for automotive LED lighting systems, Diodes’ AL8866Q driver supports buck, boost, buck-boost, and single-ended primary-inductance converter (SEPIC) topologies. This DC-switching LED driver-controller operates over an input voltage range of 4.7 V to 85 V, accommodating 12-V, 24-V, and 48-V battery power rails. It is suitable for applications such as daytime running lights, high/low beams, fog lights, turn signals, and brake lights.

The AL8866Q employs a 400-kHz fixed-frequency peak current-mode control architecture. Spread spectrum frequency modulation enhances EMI performance and aids compliance with the CISPR 25 Class 5 standard.

The device enables analog or PWM dimming via its DIM pin. A 1% reference tolerance ensures better brightness control and matching between lamps. With an analog dimming range of 1% to 100%, the AL8866Q maintains ±12% output current accuracy at 20% dimming. Alternatively, PWM dimming, ranging from 0.1 kHz to 1 kHz, provides a 100:1 dynamic range.

An integrated soft-start function gradually increases the inductor and switch current, minimizing potential overvoltage and overcurrent at the output. The driver also includes an open-drain fault output to signal various fault conditions.

Prices for the AEC-Q100 Grade 1 qualified AL8866Q driver start at $0.48 each in lots of 1000 units.

AL8866Q product page

Diodes


PCIe Gen4 SSD delivers 6200 MB/s

Thu, 01/02/2025 - 22:51

The P400 V4 from Patriot Memory is a PCIe Gen 4 x4 M.2 SSD, offering read speeds up to 6200 MB/s and write speeds up to 5200 MB/s. Optimized for PC and PS5 compatibility, it provides gamers and content creators with high-speed performance and enhanced thermal management. Its compact M.2 2280 form factor makes it well-suited for space-constrained systems, including thin laptops and small form-factor PCs.

With a read speed of 6200 MB/s, the P400 V4 achieves a total bytes written (TBW) rating of 1280 TB. Available in storage capacities ranging from 500 GB to 4 TB, the drive features SmartECC technology for improved reliability. To maintain consistent peak performance during intensive operations, the P400 V4 incorporates a graphene heatshield that helps prevent thermal throttling and efficiently manages thermal output.

The P400 V4’s PCIe Gen 4 x4 controller is NVMe 2.0 compliant, offering improved performance and support for the latest features. The SSD comes with a 5-year warranty and supports Windows 7, 8.0, 8.1, 10, and 11 (drivers may be required for older versions).

P400 V4 product page

Patriot Memory 


The advent of co-packaged optics (CPO) in 2025

Thu, 01/02/2025 - 14:29

Co-packaged optics (CPO)—the silicon photonics technology promising to transform modern data centers and high-performance networks by addressing critical challenges like bandwidth density, energy efficiency, and scalability—is finally entering the commercial arena in 2025.

According to a report published in Economic Daily News, TSMC has successfully integrated CPO with advanced semiconductor packaging technologies, and sample deliveries are expected in early 2025. Next, TSMC is projected to enter mass production in the second half of 2025 with 1.6T optical transmission offerings.

Figure 1 CPO facilitates a shift from electrical to optical transmission to address the interconnect limitations such as signal interference and overheating. Source: TrendForce

The report reveals that TSMC has successfully trialled a key CPO technology—micro ring modulator (MRM)—at its 3-nm process node in close collaboration with Broadcom. That’s a significant leap from electrical to optical signal transmission for computing tasks.

The report also indicates that Nvidia plans to adopt CPO technology, starting with its GB300 chips, which are set for release in the second half of 2025. Moreover, Nvidia plans to incorporate CPO in its subsequent Rubin architecture to address the limitations of NVLink, the company’s in-house high-speed interconnect technology.

What’s CPO

CPO is a crucial technology for artificial intelligence (AI) and high-performance computing (HPC) applications. It enhances a chip’s interconnect bandwidth and energy efficiency by integrating optics and electronics within a single package, which significantly shortens electrical link lengths.

Here, optical links offer multiple advantages over traditional electrical transmission; they lower signal degradation over distance, reduce susceptibility to crosstalk, and offer significantly higher bandwidth. That makes CPO an ideal fit for data-intensive AI and HPC applications.

Furthermore, CPO offers significant power savings compared to traditional pluggable optics, which struggle with power efficiency at higher data rates. The early implementations show 30% to 50% reductions in power consumption, claims an IDTechEx study titled “Co-Packaged Optics (CPO): Evaluating Different Packaging Technologies.”

This integration of optics with silicon—enabled by advancements in chiplet-based technology and 3D-IC packaging—also reduces signal degradation and power loss and pushes data rates to 1.6T and beyond.

Figure 2 Optical interconnect technology has been gaining traction due to the growing need for higher data throughput and improved power efficiency. Source: IDTechEx

Heterogeneous integration, a key ingredient in CPO, enables the fusion of the optical engine (OE) with switch ASICs or XPUs on a single package substrate. Here, the optical engine includes both photonic ICs and electronic ICs. Packaging in CPO generally employs two approaches: the first involves the packaging of the optical engine itself, and the second focuses on the system-level integration of the optical engine with ICs such as ASICs or XPUs.

A new optical computing era

TSMC’s approach involves integrating CPO modules with advanced packaging technologies such as chip-on-wafer-on-substrate (CoWoS) or system-on-integrated-chips (SoIC). It eliminates traditional copper interconnects’ speed limitations and puts TSMC at the forefront of a new optical computing era.

However, challenges such as low yield rates in CPO module production might lead TSMC to outsource some optical-engine packaging orders to other advanced packaging companies. This shows that the complex packaging process encompassing CPO fabric will inevitably require a lot of fine-tuning before commercial realization.

Still, it’s a breakthrough that highlights a tipping point for AI and HPC performance, wrote Jeffrey Cooper in his LinkedIn post. Cooper, a former sourcing lead for ASML, also sees a growing need for cross-discipline expertise in photonics and semiconductor packaging.


PWM power DAC incorporates an LM317

Wed, 01/01/2025 - 16:57

Instead of the conventional approach of backing up a DAC with an amplifier to boost output, this design idea charts a less-traveled path to power. It integrates an LM317 positive regulator with a simple 8-bit PWM DAC topology to obtain a robust 11-V, 1.5-A capability. It thus preserves simplicity while exploiting the built-in fault protection features (thermal and overload) of that time-proven Bob Dobkin masterpiece. Its output is proportional to the guaranteed 2% precision of the LM317’s internal voltage reference, making it securely independent of the vagaries of both the 5-V logic supply rail and the incoming raw DC supply.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 diagrams how it works.

Figure 1 LM317 regulator melds with HC4053 CMOS switch to make a 16-W PWM power DAC.

CMOS SPDT switches U1b and U1c accept a 10-kHz PWM signal to generate a 0-V to 9.75-V “ADJ” control signal for the U2 regulator via the feedback network R1, R2, and R3. The incoming PWM signal is AC-coupled so that U1 can “float” on U2’s output. U1c provides an inverse of the PWM signal, implementing active ripple cancellation as described in “Cancel PWM DAC ripple with analog subtraction.” Note that R1||R2 = R3 to optimize ripple subtraction and DAC accuracy.

This feedback arrangement does, however, make the output voltage a nonlinear function of PWM duty factor (DF) as given by:

Vout = 1.25 / (1 – DF·(1 – R1/(R1 + R2)))
     = 1.25 / (1 – 0.885·DF)

This is graphed in Figure 2. 

Figure 2 The Vout (1.25 V to 11 V) versus PWM DF (0 to 1) where Vout = 1.25 / (1 – 0.885*DF).

Figure 3 plots the inverse of Figure 2, yielding the PWM DF required for any given Vout.

Figure 3 The inverse of Figure 2 where PWM DF = (1 – 1.25/Vout)/0.885.

The corresponding 8-bit PWM setting works out to: Dbyte = 255 × (1 – 1.25/Vout) / 0.885
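As a quick sanity check, here is a minimal Python sketch of these two relationships, assuming the Figure 1 values (R1/(R1 + R2) = 0.115, i.e., roughly an 11-V full scale); the function and variable names are illustrative only:

# Minimal sketch of the Figure 1 transfer function; names are illustrative.
# Assumes R1/(R1 + R2) = 0.115, so Vout = 1.25 / (1 - 0.885*DF).

VREF = 1.25        # LM317 internal reference voltage, in volts
K = 0.885          # 1 - R1/(R1 + R2)

def vout_from_df(df: float) -> float:
    """Output voltage (V) for a PWM duty factor 0..1 (the Figure 2 curve)."""
    return VREF / (1 - K * df)

def dbyte_for_vout(vout: float) -> int:
    """8-bit PWM setting for a desired output voltage (the Figure 3 inverse)."""
    df = (1 - VREF / vout) / K
    return max(0, min(255, round(255 * df)))

if __name__ == "__main__":
    for v in (1.25, 5.0, 11.0):
        print(f"Vout = {v:5.2f} V -> Dbyte = {dbyte_for_vout(v)}")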

Vfullscale = 1.25 / (R1/(R1 + R2)), so design choices other than 11 V are available. 11 V is the maximum consistent with the HC4053’s ratings, but up to 20 V is feasible if the metal-gate CD4053B is substituted for U1. Don’t forget, however, the requirement that R3 = R1||R2.

The supply rail V+ can be anything from a minimum of Vfullscale + 3 V, to accommodate U2’s minimum headroom (dropout) requirement, up to the LM317’s absolute-maximum 40-V limit. DAC accuracy will be unaffected, thanks to this chip’s excellent PSRR, although efficiency may of course suffer.

U2 should be heatsunk as dictated by its heat dissipation: the required output current multiplied by the V+ to Vout differential. Up to double-digit watts is possible at high currents and low Vout.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


2024: A year’s worth of interconnected themes galore

Wed, 01/01/2025 - 16:54

As any of you who’ve already seen my precursor “2025 Look Ahead” piece may remember, we’ve intentionally flipped the ordering of my two end-of-year writeups once again this year. This time, I’ll be looking back over 2024: for historical perspective, here are my prior retrospectives for 2019, 2021, 2022 and 2023 (we skipped 2020).

As I’ve done in past years, I thought I’d start by scoring the topics I wrote about a year ago in forecasting the year to come:

  • Increasingly unpredictable geopolitical tensions
  • The 2024 United States election
  • Windows (and Linux) on Arm
  • Declining smartphone demand, and
  • Internal and external interface evolutions

Maybe I’m just biased, but I think I nailed ’em all, albeit with varying degrees of impactfulness. To clarify, by the way, it’s not that whether the second one would happen was difficult to predict; the outcome, which I discussed a month back, is what was unclear at the time. In the sections that follow, I’m going to elaborate on one of the above themes, as well as discuss other topics that didn’t make my year-ago forecast but ended up being particularly notable (IMHO, of course).

Battery transformations

I’ve admittedly written quite a lot about lithium-based batteries and the devices they fuel over the past year, as I suspect I’ll also be doing in the year(s) to come. Why? My introductory sentence to a recent teardown of a “vape” device answers that question, I think:

The ever-increasing prevalence of lithium-based batteries in various shapes, sizes and capacities is creating a so-called “virtuous circle”, leading to lower unit costs and higher unit volumes which encourage increasing usage (both in brand new applications and existing ones, the latter as a replacement for precursor battery technologies), translating into even lower unit costs and higher unit volumes that…round and round it goes.

Call me simple-minded (as some of you already may have done a time or few over the years!) but I consistently consult the same list of characteristics and tradeoffs among them when evaluating various battery technologies…a list that was admittedly around half its eventual length when I first scribbled it on a piece of scrap paper a few days ago, until I kept thinking of more things to add in the process of keyboard-transcribing it (thereby eventually encouraging me to delete the “concise” adjective I’d originally used to describe it)!

  • Volume manufacturing availability, translating to cost (as I allude to in the earlier quote)
  • Form factor implementation flexibility (or not)
  • The required dimensions and weight for a given amount of charge-storage capacity
  • Both peak and sustained power output
  • The environmental impacts of raw materials procurement, battery manufacturing, and eventual disposal (or, ideally, recycling)
  • Speaking of “environmental”, the usable operating temperature range, along with tolerance to other environment variables such as humidity, shock and vibration
  • And recharge speed (both to “100% full” and to application-meaningful percentages of that total), along with the number of recharge cycles the battery can endure until it no longer can hold enough anode electrons to be application-usable in a practical sense.

Although plenty of lithium battery-based laptops, smartphones and the like are sold today, a notable “driver” of incremental usage growth in the first half of this decade (and beyond) has been various mobility systems—battery-powered drones (and, likely in the future, eVTOLs), automobiles and other vehicles, untethered robots, and watercraft (several examples of which I’ll further elaborate on later in this writeup, for a different reason). Here, the design challenges are quite interconnected and otherwise complex, as I discussed back in October 2021:

Li-ion battery technology is pretty mature at this point, as is electric motor technology, so in the absence of a fundamental high-volume technology breakthrough in the future, to get longer flight time, you need to include bigger batteries…which leads to what I find most fundamentally fascinating about drones and their flying kin: the fundamental balancing act of trading off various contending design factors that is unique to the craft of engineering (versus, for example, pure R&D or science). Look at what I’ve just said. Everyone wants to be able to fly their drone as long as possible, before needing to land and swap out battery packs. But in order to do so, that means that the drone manufacturer needs to include larger battery cells, and more of them.

Added bulk admittedly has the side benefit of making the drone more tolerant of wind gusts, for example, but fundamentally, the heavier the drone the beefier the motors need to be in order to lift it off the ground and fly it for meaningful altitudes, distances, and durations. Beefier motors burn more juice, which begs for more batteries, which make the drone even heavier…see the quagmire? And unlike with earth-tethered electricity-powered devices, you can’t just “pull over to the side of the road” if the batteries die on you.

Now toss in the added “twist” that everyone also wants their drone to be as intelligent as possible so it doesn’t end up lost or tangled in branches, and so it can automatically follow whatever’s being videoed. All those image and other sensors, along with the intelligence (and memory, and..) to process the data coming off them, burns juice, too. And don’t forget about the wireless connectivity between the drone and the user—minimally used for remote control and analytics feedback to the user…How do you balance all of those contending factors to come up with an optimum implementation for your target market?

Although the previous excerpt was specifically about drones, many of the points I raised are also relevant at least to a degree in the other mobility applications I mentioned. That said, an electric car’s powerplant size and weight constraints aren’t quite as acute as an airborne system’s might be, for example. This application-defined characteristics variability, both in an absolute sense and relative to others on my earlier list, helps explain why, as Wikipedia points out, “there are at least 12 different chemistries of Li-ion batteries” (with more to come). To wit, developers are testing out a diversity of both anode and cathode materials (and combinations of them), increasingly aided by AI (which I’ll also talk more about later in this piece) in the process, along with striving to migrate away from “wet” electrolytes, which among other things are flammable and prone to leakage, toward safer solid-state approaches.

Another emerging volume growth application, as I highlighted throughout the year, are battery generators, most frequently showcased by me in their compact portable variants. Here, while form factor and weight remain important, since the devices need to be hauled around by their owners, they’re stationary while in use. Extrapolate further and you end up with even larger home battery-backup banks that never get moved once installed. And extrapolate even further, to a significant degree in fact, and you’re now talking about backup power units for hospitals, for example, or even electrical grid storage for entire communities or regions. One compelling use case is to smooth out the inherent availability variability of renewable energy sources such as solar and wind, among other reasons to “feed” the seemingly insatiable appetites of AI workload-processing data centers in a “green”-as-possible manner.  And in all these stationary-backup scenarios, installation space is comparatively abundant and weight is also of lesser concern; the primary selection criteria are factors such as cost, invulnerability, and longevity.

As such, non-lithium-based technologies will likely become increasingly prominent in the years to come. Sodium-ion batteries (courtesy of, in part, sodium’s familial proximity to lithium in the Periodic Table of Elements) are particularly near-term promising; you can already buy them on Amazon! The first US-based sodium-ion “gigafactory” was recently announced, as was the US Department of Energy’s planned $3 billion in funding for new sodium-ion (and other) battery R&D projects. Iron-based batteries such as the mysteriously named (but not so mysterious once you learn how they work) iron-air technology tout raw materials abundance (how often do you come across rust, after all?) translating into low cost. Vanadium-based “flow” batteries also hold notable promise. And there’s one other grid-scale energy storage candidate with an interesting twist: old EV batteries. They may no longer be sufficiently robust to reliably power a moving vehicle, but stationary backup systems still provide a resurrecting life-extension opportunity.

For ongoing information on this topic, in addition to my and colleagues’ periodic coverage, market research firm IDTechEx regularly publishes blog posts on various battery technology developments which I also commend to your inspection. I have no connection with the firm aside from being a contented consumer of their ongoing information output!

Drones as armaments

As a kid, I was intrigued by the history of warfare. Not (at all) the maiming, killing and other destruction aspects, mind you, instead the equipment and its underlying technologies, their use in conflicts, and their evolutions over time. Three related trends that I repeatedly noticed were:

  1. Technologies being introduced in one conflict and subsequently optimized (or in other cases disbanded) based on those initial experiences, with the “success stories” then achieving widespread use in subsequent conflicts
  2. The oft-profound advantages that adopters of new successful warfare technologies (and equipment and techniques based on them) gained over less-advanced adversaries who were still employing prior-generation approaches, and
  3. That new technology and equipment breakthroughs often rapidly obsoleted prior-generation warfare methods

Re point #1, off the top of my head, there’s (with upfront apologies for any United States centricity in the examples that follow):

  • Chemical warfare, considered (and briefly experimented with) during the US Civil War, with widespread adoption beginning in World War I (WWI)
  • Airplanes and tanks, introduced in WWI and extensively leveraged in WWII (and beyond)
  • Radar (airplanes), sonar (submarines) and other electronic surveillance, initially used in WWII with broader implementation in subsequent wars and other conflicts
  • And RF and other electronics-based communications methods, including cryptography (and cracking), once again initiated in WWII

And to closely related points #2 and #3, two WWII examples come to mind:

  • I still vividly recall reading as a kid about how the Polish army strove, armed with nothing but horse cavalry, to defend against invading German armored brigades, although the veracity of at least some aspects of this propaganda-tainted story are now in dispute.
  • And then there was France’s Maginot Line, a costly “line of concrete fortifications, obstacles and weapon installations built by France in the 1930s” ostensibly to deter post-WWI aggression by Germany. It was “impervious to most forms of attack” across the two countries’ shared border, but the Germans instead “invaded through the Low Countries in 1940, passing it to the north”. As Wikipedia further explains, “The line, which was supposed to be fully extended further towards the west to avoid such an occurrence, was finally scaled back in response to demands from Belgium. Indeed, Belgium feared it would be sacrificed in the event of another German invasion. The line has since become a metaphor for expensive efforts that offer a false sense of security.”

I repeatedly think of case studies like these as I read about how the Ukrainian armed forces are, both in the air and sea, now using innovative, often consumer electronics-sourced approaches to defend against invading Russia and its (initially, at least) legacy warfare techniques. Airborne drones (more generally: UAVs, or unmanned aerial vehicles) have been used for surveillance purposes since at least the Vietnam War as alternatives to satellites, balloons, manned aircraft and the like. And beginning with aircraft such as the mid-1990s Predator, UAVs were also able to carry and fire missiles and other munitions. But such platforms were not only large and costly, but also remotely controlled, not autonomous to any notable degree. And they weren’t in and of themselves weapons.

That’s all changed in Ukraine (and elsewhere, for that matter) in the modern era. In part hamstrung by its allies’ constraints on what missiles and other weapons it was given access to and how and where they could be used, Ukraine has broadened drones’ usage beyond surveillance into innate weaponry, loading them up with explosives and often flying them hundreds of miles for subsequent detonation, including all the way to Moscow. Initially, Ukraine retrofitted consumer drones sourced from elsewhere, but it now manufactures its own UAVs in high volumes. Compared to their Predator precursors, they’re compact, lightweight, low cost and rugged. They’re increasingly autonomous, in part to counteract Russian jamming of wireless control signals coming from their remote operators. They can even act as flamethrowers. And as the image shown at the beginning of this section suggests, they not only fly but also float, a key factor in Ukraine’s to-date success both in preventing a Russian blockade of the Black Sea and in attacking Russia’s fleet based in Crimea.

AI (again, and again, and…)

AI has rapidly grown beyond its technology-coverage origins and into the daily clickbait headlines and chyrons of even mainstream media outlets. So it’s probably no surprise that this particular TLA (with “T” standing for “two” this time, versus the usual) is a regular presence in both my end-of-year and next-year-forecast writeups, along with plenty of ongoing additional AI coverage in-between each year’s content endpoints. A month ago, for example, I strove to convince you that multimodal AI would be ascendant in the year(s) to come. Twelve months ago, I noted the increasing importance of multimodal models’ large language model (LLM) precursors over the prior year, and the month(-ish) before that, I’d forecasted that generative AI would be a big deal in 2023 and beyond. Lather, rinse and repeat.

What about the past twelve months; what are the highlights? I could easily “write a book” on just this topic (as I admittedly almost already did earlier re “Battery Transformations”). But with the 3,000-word count threshold looming, and always mindful of Aalyia’s wrath (I kid…maybe…), I’ll strive to practice restraint in what follows. I’m not, for example, going to dwell on OpenAI’s start-of-year management chaos and ongoing key-employee-shedding, nor on copyright-infringement lawsuits brought against it and its competitors by various content-rights owners…or for that matter, on lawsuits brought against it and its competitors (and partners) by other competitors. Instead, here’s some of what else caught my eye over the past year:

  • Deep learning models are becoming more bloated with the passage of time, despite floating point-to-integer conversion, quantization, sparsity and other techniques for trimming their size. Among other issues, this makes it increasingly infeasible to run them natively (and solely) on edge devices such as smartphones, security cameras and (yikes!) autonomous vehicles. Imagine (a theoretical case study, mind you) being unable to avoid a collision because your car’s deep learning model is too dinky to cover all possible edge and corner cases and a cloud-housed supplement couldn’t respond in time due to server processing and network latency-and-bandwidth induced delays…
  • As the models themselves grow, the amount of processing horsepower (not to mention consumed power) and time needed to train them increases as well…exponentially so.
  • Resource demands for deep learning inference are also skyrocketing, especially as the trained models referenced become more multimodal and otherwise complex, not to mention the new data the inference process is tasked with analyzing.
  • And semiconductor supplier NVIDIA today remains the primary source of processing silicon for training, along with (to a lesser but still notable market segment share degree) inference. To the company’s credit, decades after kicking off its advocacy of general-purpose graphics processing (GPGPU) applications, its longstanding time, money and headcount investments have borne big-time fruit for the company. That said, competitors (encouraged by customers aspiring for favorable multi-source availability and pricing outcomes) continue their pursuit of the “Green Team”.
  • To my earlier “consumed power” comments, along with my even earlier “seemingly insatiable appetites of AI workload-processing data centers” comments, and as my colleague (and former boss) Bill Schweber also recently noted, “AI-driven datacenter energy demand could expand 160 percent over the next two years, leaving 40 percent of existing facilities operationally constrained by power availability,” to quote recent coverage in The Register. In response to this looming and troubling situation, in the last few days alone I’ve come across news regarding Amazon (“Amazon AI Data Centers To Double as Carbon Capture Machines”) and Meta (“Meta wants to use nuclear power for its data centers”). Plenty of other recent examples exist. But will they arrive in time? And will they only accelerate today’s already worrying global warming pace in the process?
  • But, in spite of all of this spiraling “heavy lifting”, researchers continue to conclude that AI still doesn’t have a coherent understanding of the world, not to mention that the ROI on ongoing investments in what AI can do may be starting to level off (at least to some observers, albeit not a universally held opinion).
  • One final opinion; deep learning models are seemingly already becoming commodities, a trend aided in part by increasingly capable “open” options (although just what “open” means has no shortage of associated controversy). If I’m someone like Amazon, Apple, Google, Meta or Microsoft, whose deep learning investments reap returns in associated AI-based services and whose models are “buried” within these services, this trend isn’t so problematic. Conversely, however, for someone whose core business is in developing and licensing models to others, the long-term prognosis may be less optimistic, no matter how rosy (albeit unprofitably so) things may currently seem to be. Heck, even AMD and NVIDIA are releasing open model suites of their own nowadays…
Auld Lang Syne

I’m writing this in early December 2024. You’ll presumably be reading it sometime in January 2025. I’ll split the difference and wrap up by first wishing you all a Happy New Year! 😉

As usual, I originally planned to cover a number of additional topics in this piece.

But (also) as usual I ended up with more things that I wanted to write about than I had a reasonable wordcount budget to do so. Having now passed through 3,000 words, I’m going to restrain myself and wrap up, saving the additional topics (as well as updates on the ones I’ve explored here) for dedicated blog posts to come in the coming year(s). Let me know your thoughts on my top-topic selections, as well as what your list would have looked like, in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


Ternary gain-switching 101 (or 10202, in base 3)

Tue, 12/31/2024 - 16:54

This design idea is centered on the humble on/off/on toggle switch, which is great for selecting something/nothing/something else, but can be frustrating when three active options are needed. One possibility is to use the contacts to connect extra, parallel resistors across a permanent one (for example), but the effect is something like low/high/medium, which just looks wrong.

That word “active” is the clue to making the otherwise idle center position do some proper work, like helping to control an op-amp stage’s gain, as shown in Figure 1.

Figure 1 An on/off/on switch gives three gain settings in a non-inverting amplifier stage and does so in a rational order.

Wow the engineering world with your unique design: Design Ideas Submission Guide

I’ve used this principle many times, but can’t recall having seen it in published circuits, and think it’s novel, though it may be so commonplace as to be invisible. It’s certainly obvious when you think about it.

A practical application

That’s the basic idea, but it’s always more satisfying to convert such ideas into something useful. Figure 2 illustrates just that: an audio gain-box whose amplification is switched in a ternary sequence to give precise 1-dB steps from 0 to +26 dB. As built, it makes a useful bit of lab kit.

Figure 2 Ternary switching over three stages gives 0–26 dB gain in precise 1-dB steps.
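To see why the ternary scheme covers every step exactly once, here is a minimal Python enumeration sketch; it assumes per-stage gain weights of 1, 3, and 9 dB, with each on/off/on switch contributing 0×, 1×, or 2× its stage’s weight (the weighting principle, not the exact resistor networks of Figure 2, is the point here):

from itertools import product

# Assumed per-stage gain weights in dB; each on/off/on switch
# contributes 0x, 1x, or 2x its weight -- one base-3 digit per stage.
WEIGHTS_DB = (1, 3, 9)

gains = sorted(
    sum(digit * weight for digit, weight in zip(digits, WEIGHTS_DB))
    for digits in product((0, 1, 2), repeat=len(WEIGHTS_DB))
)

# 27 switch combinations map one-to-one onto 0..+26 dB in 1-dB steps.
assert gains == list(range(27))
print(gains)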

Three gain stages are concatenated, each having its own switch. C1 and C2 isolate any DC, and R1 and R12 are “anti-click” resistors, ensuring that there’s no stray voltage on the input or output when something gets plugged in. A1d is the usual rail-splitter, allowing use on a single, isolated supply.

The op-amps are shown as common-or-garden TL074/084s. For lower noise and distortion, (a pair of) LM4562s would be better, though they take a lot more current. With a 5-V supply, the MCP6024 is a good choice. For stereo use, just duplicate almost everything and use double-pole switches.

All resistor values are E12/24 for convenience. The resistor combinations shown are much closer to the ideal, calculated values than the assumed 1% tolerance of actual parts, and give a better match than E96s would in the same positions.

Other variations on the theme

The circuit of Figure 2 could also be built for DC use but would then need low-offset op-amps, especially in the last stage. (Omit C1, C2, and other I/O oddments, obviously.)

Figure 1 showed the non-inverting version, and Figure 3 now employs the idea in an inverting configuration. Beware of noise pick-up at the virtual-earth point, the op-amp’s inverting input.

Figure 3 An inverting amplifier stage using the same switching principle.

The same scheme can also be used to make an attenuator, and a basic stage is sketched in Figure 4. Its input resistance changes depending on the switch setting, so an input buffer is probably necessary; buffering between stages and of the output certainly is.

Figure 4 A single attenuation stage with three switchable levels.

Conclusion: back to binary basics

You’ve probably been wondering, “What’s wrong with binary switching?” Not a lot, except that it uses more op-amps and more switches while being rather obvious and hence less fun.

Anyway, here (Figure 5) is a good basic circuit to do just that.

Figure 5 Binary switching of gain from 0 to +31 dB, using power-of-2 steps. Again, the theoretical resistor values are much closer to the ideal than their actual 1% tolerances.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.


A Bluetooth receiver, an identity deceiver

Mon, 12/30/2024 - 18:06

In mid-October 2015, EDN ran my teardown of Logitech’s Bluetooth Audio Adapter (a receiver, to be precise) based on a CSR (now Qualcomm) BlueCore CSR8630 Single Chip Audio ROM.

The CSR module covers the bulk of the bottom half of the PCB topside, with most of the top half devoted to discretes and such for implementing the audio line-level output amp and the like:

A couple of weeks later, in a follow-up blog post, I mentioned (and briefly compared) a bunch of other Bluetooth adapters I’d come across. Some acted as both receivers and transmitters, for example, while others embedded batteries for portable usage. They implemented varying Bluetooth profiles and specification levels, and some even supported aptX and other optional audio codecs. Among them were three different Aukey models; here’s what I said about them:

I recently saw Aukey’s BR-C1 on sale for $12.99, for example (both black and white color scheme options are available), while the BR-C2 was recently selling for $1 less, and the even fuller-featured BT-C2 was recently special-priced at $24.99.

Logitech’s device is AC-powered via an included “wall wart” intermediary and therefore appropriate for adding Bluetooth input-source capabilities to an A/V receiver, as discussed in my teardown. Aukey’s products conversely contain built-in rechargeable batteries and are therefore primarily intended for mobile use, such as converting a conventional pair of headphones into wireless units. Recharging of the Aukey devices’ batteries occurs via an included micro-USB cable and not-included 5V USB-output power source.

All of the Aukey products can also act as hands-free adapters, by virtue of their built-in microphones. The BR-C1 and BR-C2’s analog audio connections are output-only, thereby classifying them as Bluetooth receivers; the more expensive BT-C2 is both a Bluetooth transmitter and receiver (albeit not both at the same time). But the Bluetooth link between all of them and a wirelessly tethered device is bi-directional, enabling not only speakerphone integration with a vehicle audio subsystem or set of headphones (via analog outputs) but also two-way connectivity to a smartphone (via Bluetooth).

The fundamental difference between the BR-C1 and BR-C2, as far as I can tell, is the form factor; the BR-C1 is 2.17×2.17×0.67 inches in size, while the BR-C2 is 2×1×0.45 inches. All other specs, including play and standby time, seem to be identical. None of Aukey’s devices offer dual RCA jacks as an output option; they’re 3.5 mm TRS-only. However, as my teardown writeup notes, the inclusion of a TRS-to-dual RCA adapter cable in each product’s kit makes multiple integrated output options a seemingly unnecessary functional redundancy.

As time passed, my memory of the specifics of that latter piece admittedly faded, although I’ve re-quoted the following excerpt a few times in comparing a key point made then with other conceptually reminiscent product categories: LED light bulbs, LCDs, and USB-C-based devices:

Such diversity within what’s seemingly a mature and “vanilla” product category is what prompted me to put cyber-pen to cyber-paper for this particular post. The surprising variety I encountered even during my brief period of research is reflective of the creativity inherent to you, the engineers who design these and countless other products. Kudos to you all!

Fast forward to early December 2023, when I saw an Aukey Bluetooth audio adapter intended specifically for in-vehicle use (therefore battery powered, and with an embedded microphone for hands-free telephony), although usable elsewhere too. It was advertised at bargains site SideDeal (a sibling site to same-company Meh, who I’ve also mentioned before) for $12.99.

No specific model number was documented on the promo page, only some features and specs:

Features

  • Wireless Audio Stream
    • The Bluetooth 5 receiver allows you to wirelessly stream audio from your Bluetooth enabled devices to your existing wired home or car stereo system, speakers, or headphones
  • Long Playtime
    • Built-in rechargeable battery supports 18 hours of continuous playback and 1000 hours of standby time
  • Dual Device Link
    • Connect two Bluetooth devices simultaneously; freely enjoy music or answer phone calls from either of the two paired devices
  • Easy Use
    • Navigate your music on the receiver with built-in controls which can also be used to manage hands-free calls or access voice assistant

 Specifications

  • Type: Receiver
  • Connectivity: 3.5mm
  • Bluetooth standard: Bluetooth v5.0
  • Color: Black
  • To fit: Audio Receivers
  • Ports: 3.5 mm Jack

I bit. I bought three, actually; one each for my and my wife’s vehicles, and a third for teardown purposes. When they arrived, I put the third boxed one on the shelf.

Fast forward nearly a year later, to the beginning of November 2024 (and a couple of weeks prior to when I’m writing these words now), when I pulled the box back off the shelf and prepared for dissection. I noticed the model number, BR-C1, stamped on the bottom of the box but didn’t think anything more of it until I remembered and re-read that blog post published almost exactly nine years earlier, which had mentioned the exact same device:

(I’ve saved you from the boring shots of the blank cardboard box sides)

Impressive product longevity, eh? Hold that thought. Let’s dive in:

The left half of the box contents comprises three cables: USB-A to micro-USB for recharging, plus 3.5 mm (aka, 1/8”) TRS to 3.5 mm, and 3.5 mm to dual RCA for audio output connections:

And a couple of pieces of documentation (a PDF of the user manual is available here):

On the right, of course, is our patient (my images, this time, versus the earlier stock photos), as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

The other three device sides, like the earlier box sides, are bland, so I’ve not included images of them. You’re welcome.

Note, among other things, the FCC ID, 2AFHP-BR-C1. Again, hold that thought. By the way, it’s 2AFHP-BR-C1, not the 2AFHPBR-C1 stamped on the underside, which as it turns out is a different device, albeit, judging from the photos, also an automobile interior-tailored product.

From past experience, I’ve learned that the underside of a rubber “foot” is often a fruitful path inside a device, so once again I rolled the dice:

Bingo: my luck continues to hold out!

With all four screws removed (or at least sufficiently loosened; due to all that lingering adhesive, I couldn’t get two of them completely out of the holes), the bottom popped right off:

And the first thing I saw staring back at me was the 3.7-V, 300 mAh Li-polymer “pouch” cell. Why they went with this battery form factor and formulation versus the more common Li-ion “can” is unclear; there was plenty of room in the design for the battery, and flexibility wasn’t necessary:

In pulling the PCB out of the remaining top half of the case:

revealing, among other things, the electret microphone above it:

I inadvertently turned the device on, wherein it immediately went into blue-blinking-LED standby mode (I fortuitously quick-snapped the first still photo while the LED was illuminated; the video below it shows the full blink cadence):

Why standby, versus the alternating red/blue pairing-ready sequence that, per the user manual (not to mention common sense), it was supposed to power up in the first time? I suspect that since this was a refurbished (not brand-new) device, it had previously been paired to something by the prior owner, and the factory didn’t fully reset it before shipping it back out to me. A long-press of the topside button got the device into the desired Bluetooth pairing mode:

And another long-press powered the PCB completely off again:

The previously seen bottom side of the PCB was bare (the glued-on battery doesn’t count, in my book) and, as usual for low cost, low profit margin consumer electronics devices like this one, the PCB topside isn’t very component-rich, either. In the upper right is the 3.5 mm audio output jack; to its left, and in the upper left, is the micro-USB charging connector, with the solder sites for the microphone wiring harness between them. Below them is the system’s multi-function power/mode switch. At left is the three-wire battery connector. Slightly below and to its right (and near the center) is the main system processor, Realtek’s RTL8763BFR Bluetooth dual mode audio SoC with integrated DAC, ADC (for the already-seen mic), DSP and both ROM and RAM.

To the right of the Realtek RTL8763BFR is its companion 40-MHz oscillator, with a total of three multicolor LEDs in a column both above and below it. In contrast, you may have previously noted five light holes in the top of the device; the diffusion sticker in the earlier image of the inside of the top half of the chassis “bridges the gaps”. Below and to the left of the Realtek RTL8763BFR is the HT4832 audio power amplifier, which drives the aforementioned 3.5 mm audio output jack. The HT4832 comes from one of the most awesome-named companies I’ve yet come across: Jiaxing Heroic Electronic Technology. And at the bottom of the PCB, perhaps obviously, is the embedded Bluetooth antenna.

After putting the device back together, it seemingly still worked fine; here’s what the LEDs look like displaying the pairing cadence from the outside:

All in all, a seemingly straightforward teardown, right? So, then, what’s with the “Identity Deceiver” mention in this writeup’s title? Well, before finishing up, I as-usual hit up the FCC certification documentation, final-action dated January 29, 2018, to see if I’d overlooked anything notable…but the included photos showed a completely different device inside. This time, the bottom side of the PCB was covered with components. And one of them, the design’s area-dominant IC, was from ISSC Technologies, not Realtek. See for yourself.

Confused, I hit up Google to see if anyone else had done a teardown of the Aukey BR-C1. I found one, in video form, published on October 30, 2015. It shows the same design version as in the FCC documentation:

The Aukey BR-C1 product review from the same YouTube creator, published a week-plus earlier, is also worth a view, by the way:

Fortuitously, the YouTube thumbnail image for the first video showcases the previously mentioned ISSC Technologies chip:

It’s the IS1681S, a Bluetooth 3.0+EDR multimedia SOC. Here’s a datasheet. ISSC Technologies was acquired by Microchip Technology in mid-2014 and the IS1681S presumably was EOL’d sometime afterward, thereby prompting Aukey’s redesign around Realtek silicon. But how was Aukey able to take the redesign to production without seeking FCC recertification? I welcome insights on this, or anything else you found notable about this teardown, in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post A Bluetooth receiver, an identity deceiver appeared first on EDN.

Software-defined vehicle (SDV): A technology to watch in 2025

Mon, 12/30/2024 - 16:58

Software-defined vehicle (SDV) technology has been a prominent highlight in the quickly evolving automotive industry. But how much of it is hype, and where is the real and tangible value? CES 2025 in Las Vegas will be an important venue to gauge the actual progress this technology has made in bringing code to the road.

Elektrobit will demonstrate its cloud-based virtual development, prototyping, testing, and validation platform for digital cockpits and in-vehicle infotainment (IVI) at the show. The company’s SDV solutions encompass AMD’s automotive-grade hardware, Google’s Android Automotive and Gemini AI, Epic Games’ Unreal Engine for 3D rendering, and Here navigation.

Figure 1 SDV is promising future-proof cockpit agnostic of hardware and software. Source: Elektrobit

Moreover, at CES 2025, Sony Honda Mobility will showcase its AFEELA prototype for electric vehicles (EVs), which employs Elektrobit’s digital cockpit built around a software-defined approach. Elektrobit’s other partners demonstrating their SDV solutions at the show include AWS, Cognizant, dSPACE, Siemens, and Sonatus.

SDV’s 2024 diary

Earlier, in April 2024, leading automotive chipmaker Infineon joined hands with embedded software specialist Green Hills to jointly develop SDV architectures for EV drivetrains. Infineon would combine its microcontroller-based processing platform AURIX TC4x with safety-certified real-time operating system (RTOS) µ-velOSity from Green Hills.

Figure 2 Real-time automotive systems are crucial in SDV architectures. Source: Infineon Technologies

Green Hills has already ported its µ-velOSity RTOS to the AURIX TC4x microcontrollers. The outcome of this collaboration will be safety-critical real-time automotive systems capable of serving SDV designs and features.

Next, Siemens EDA has partnered with Arm and AWS to accelerate the creation of virtual cars in the cloud. The toolmaker has announced the availability of its PAVE360-based solution for automotive digital twin on AWS cloud services.

Figure 3 The digital twin solution on the AWS platform aims to create a virtual car in the cloud. Source: Siemens EDA

“The automotive industry is facing disruption from multiple directions, but the greatest potential for growth and new revenue streams is the adoption of the software-defined vehicle,” said Mike Ellow, executive VP of EDA Global Sales, Services and Customer Support at Siemens Digital Industries Software. “The hyper-competitive SDV industry is under immense pressure to quickly react to consumer expectations for new features.”

That’s driving the co-development of parallel hardware and software and the move toward the holistic digital twin, he added. Dipti Vachani, senior VP and GM of Automotive Line of Business at Arm, went a step further, saying that the software-defined vehicle is a matter of survival for the automotive industry.

Hype or reality

The above recap of 2024 activities shows that a lot is happening in the SDV design space. A recent IDTechEx report titled “Software-Defined Vehicles, Connected Cars, and AI in Cars 2024-2034: Markets, Trends, and Forecasts” claims that the cellular connectivity within SDVs can provide access to Internet of Things (IoT) features such as over-the-air (OTA) updates, personalization, and entertainment options.

It also explains how artificial intelligence (AI) within an SDV solution can work as a digital assistant to communicate and respond to the driver and make interaction more engaging using AI-based visual characters appearing on the dashboard. BMW is already offering a selection of SDV features, including driving assistants and traffic camera information.

Figure 4 SDV is promising new revenue streams for car OEMs. Source: IDTechEx

At CES 2025, automotive OEMs, Tier 1’s, chip vendors, and software suppliers are expected to present their technology roadmaps for SDV products. This will offer good visibility on how ready the present SDV technology is for the cars of today and tomorrow.

Related Content


The post Software-defined vehicle (SDV): A technology to watch in 2025 appeared first on EDN.

2024: The year when MCUs became AI-enabled

Fri, 12/27/2024 - 15:04

Artificial intelligence (AI) and machine learning (ML) technologies, once synonymous with large-scale data centers and powerful GPUs, are steadily moving toward the network edge via resource-limited devices like microcontrollers (MCUs). Energy-efficient MCU workloads are being melded with AI power to leverage audio processing, computer vision, sound analysis, and other algorithms in a variety of embedded applications.

Take the case of STMicroelectronics and its STM32N6 microcontroller, which features a neural processing unit (NPU) for embedded inference. It’s ST’s most powerful MCU and carries out tasks like segmentation, classification, and recognition. Alongside this MCU, ST offers software and tools to lower the barrier to entry for developers taking advantage of AI-accelerated performance with real-time operating systems (RTOSes).

Figure 1 The Neural-ART accelerator in STM32N6 claims to deliver 600 times more ML performance than a high-end STM32 MCU today. Source: STMicroelectronics

Infineon, another leading MCU supplier, has also incorporated a hardware accelerator in its PSOC family of MCUs. Its NNlite neural network accelerator aims to facilitate new consumer, industrial, and Internet of Things (IoT) applications with ML-based wake-up, vision-based position detection, and face/object recognition.

Next, Texas Instruments, which calls its AI-enabled MCUs real-time microcontrollers, has integrated an NPU inside its C2000 devices to enable fault detection with high accuracy and low latency. This will allow embedded applications to make accurate, intelligent decisions in real-time to perform functions like arc fault detection in solar and energy storage systems and motor-bearing fault detection for predictive maintenance.

Figure 2 C2000 MCUs integrate edge AI hardware accelerators to facilitate smarter real-time control. Source: Texas Instruments

The models that run on these AI-enabled MCUs learn and adapt to different environments through training. That, in turn, helps systems achieve greater than 99% fault detection accuracy to enable more informed decision-making at the edge. The availability of pre-trained models further lowers the barrier to entry for running AI applications on low-cost MCUs.

Moreover, the use of a hardware accelerator inside an MCU offloads the burden of inferencing from the main processor, leaving more clock cycles to service embedded applications. This marks the beginning of a long journey for AI hardware-accelerated MCUs; for a start, it will thrust MCUs into applications that previously required MPUs, which in the embedded design realm are not fully capable of handling real-time control tasks anyway.

Figure 3 The AI-enabled MCUs replacing MPUs in several embedded system designs could be a major disruption in the semiconductor industry. Source: STMicroelectronics

AI is clearly the next big thing in the evolution of MCUs, but AI-optimized MCUs have a long way to go. For instance, software tools and their ease of use will go hand in hand with these AI-enabled MCUs; they will help developers evaluate the embeddability of AI models for MCUs. Developers should also be able to test AI models running on an MCU in just a few clicks.

The AI party in the MCU space started in 2024, and 2025 is very likely to witness more advances for MCUs running lightweight AI models.

Related Content


The post 2024: The year when MCUs became AI-enabled appeared first on EDN.

Wide-creepage switcher improves vehicle safety

Fri, 12/27/2024 - 02:07

A wide-creepage package option for Power Integrations’ InnoSwitch3-AQ flyback switcher IC enhances safety and reliability in automotive applications. According to the company, the increased primary-to-primary creepage and clearance distance of 5.1 mm between the drain and source pins of the InSOP-28G package eliminates the need for conformal coating, making the IC compliant with the IEC 60664-1 reinforced isolation standard in 800-V vehicles.

The new 1700-V CV/CC InnoSwitch3-AQ devices feature an integrated SiC primary switch delivering up to 80 W of output power. They also include a multimode QR/CCM flyback controller, secondary-side sensing, and a FluxLink safety-rated feedback mechanism. This high level of integration reduces component count by half, simplifying power supply implementation. The wider drain pin enhances durability, making the ICs well-suited for high-shock and vibration environments, such as eAxle drive units.

These latest members of the InnoSwitch3-AQ family start up with as little as 30 V on the drain without external circuitry, critical for functional safety. Devices achieve greater than 90% efficiency and consume less than 15 mW at no-load. Target automotive applications include battery management systems, µDC/DC converters, control circuits, and emergency power supplies in the main traction inverter.

Prices for the 1700 V-rated InnoSwitch3-AQ switching power supply ICs start at $6 each in lots of 10,000 units. Samples are available now, with full production in 1Q 2025.

InnoSwitch3-AQ product page

Power Integrations 



The post Wide-creepage switcher improves vehicle safety appeared first on EDN.

R&S boosts GMSL testing for automotive systems

Fri, 12/27/2024 - 02:07

Rohde & Schwarz expands testing for automotive systems that employ Analog Devices’ Gigabit Multimedia Serial Link (GMSL) technology. Designed to enhance high-speed video links in applications like In-Vehicle Infotainment (IVI) and Advanced Driver Assistance Systems (ADAS), GMSL offers a simple, scalable SerDes solution. The R&S and ADI partnership aims to assist automotive developers and manufacturers in creating and deploying GMSL-based systems.

Physical Medium Attachment (PMA) testing, compliant with GMSL requirements, is now fully integrated into R&S oscilloscope firmware, along with a suite of signal integrity tools. These include LiveEye for real-time signal monitoring, advanced jitter and noise analysis, and built-in eye masks for forward and reverse channels.

To verify narrowband crosstalk, the offering includes built-in spectrum analysis on the R&S RTP oscilloscope. In addition, cable, connector, and channel characterization can be performed using R&S vector network analyzers.

R&S will demonstrate the application at next month’s CES 2025 trade show.

Rohde & Schwarz



The post R&S boosts GMSL testing for automotive systems appeared first on EDN.

Gen3 UCIe IP elevates chiplet link speeds

Fri, 12/27/2024 - 02:07

Alphawave Semi’s Gen3 UCIe Die-to-Die (D2D) IP subsystem enables chiplet interconnect rates up to 64 Gbps. Building on the successful tapeout of its Gen2 36-Gbps UCIe IP on TSMC’s 3-nm process, the Gen3 subsystem supports both high-yield, low-cost organic substrates and advanced packaging technologies.

At 64 Gbps, the Gen3 IP delivers over 20 Tbps/mm in bandwidth density with ultra-low power and latency. The configurable subsystem supports multiple protocols, including AXI-4, AXI-S, CXS, CHI, and CHI-C2C, enabling high-performance connectivity across disaggregated systems in HPC, data center, and AI applications.

The design complies with the latest UCIe specification and features a scalable architecture with advanced testability, including live per-lane health monitoring. UCIe D2D interconnects support a variety of chiplet connectivity scenarios, including low-latency, coherent links between compute chiplets and I/O chiplets, as well as reliable optical I/O connections.

“Our successful tapeout of the Gen2 UCIe IP at 36 Gbps on 3-nm technology builds on our pioneering silicon-proven 3-nm UCIe IP with CoWoS packaging,” said Mohit Gupta, senior VP & GM, Custom Silicon & IP, Alphawave Semi. “This achievement sets the stage for our Gen3 UCIe IP at 64 Gbps, which is on target to deliver high performance, 20-Tbps/mm throughput functionality to our customers who need the maximization of shoreline density for critical AI bandwidth needs in 2025.”

Alphawave Semi 



The post Gen3 UCIe IP elevates chiplet link speeds appeared first on EDN.
