iToF sensor provides on-chip depth processing

An indirect time-of-flight sensor, the AF0130 from onsemi offers long-distance measurements and 3D imaging of fast-moving objects. It features a depth processing ASIC beneath its pixel area, which rapidly calculates depth, confidence, and intensity maps from laser modulated exposures.
The AF0130, part of the Hyperlux ID sensor family, combines global shutter and iToF technology for precise, high-speed depth sensing. It measures phase shifts in reflected VCSEL light, capturing four light phases in one exposure for enhanced accuracy. A global shutter reduces ambient IR noise, while onboard depth processing and memory enable real-time results without external memory or a high-performance processor.
onsemi states that the AF0130 enables depth sensing up to 30 meters—four times the range of standard iToF sensors. The 1.2-Mpixel CMOS sensor features 3.5-µm BSI pixels in a 1/3.2-in. optical format. A variant, the AF0131, delivers the same performance but excludes on-chip depth processing for manufacturers preferring off-chip depth calculation.
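As background to how such a sensor computes depth, the textbook four-phase iToF demodulation can be sketched as follows. This is generic math under assumed sampling conventions, not a description of the AF0130's proprietary on-chip pipeline:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth_m(a0, a90, a180, a270, f_mod_hz):
    """Depth from four phase samples of the reflected modulated light.
    This is the standard 4-phase demodulation; sign conventions vary by
    sensor, and the AF0130's internal processing is not publicly detailed."""
    phase = math.atan2(a270 - a90, a0 - a180) % (2 * math.pi)
    # The round trip covers 2*d, and one modulation period spans c/f_mod:
    return C * phase / (4 * math.pi * f_mod_hz)

# A quarter-cycle phase shift at 10-MHz modulation corresponds to ~3.75 m:
print(itof_depth_m(100, 50, 100, 150, 10e6))
```

Note that a single modulation frequency has an unambiguous range of only c/(2·f_mod), i.e., 15 m at 10 MHz, which is why long-range sensors typically use lower modulation frequencies or multi-frequency disambiguation.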
Availability for the AF0130 and AF0131 sensors was not provided at the time of this announcement.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post iToF sensor provides on-chip depth processing appeared first on EDN.
Buck regulator boosts transient response and stability

Kinetic’s KTB4800 2.4-MHz, 3-A buck regulator delivers fast transient response with precise switching frequency. Its OptiComp adaptive on-time PWM control scheme maintains a nearly constant switching frequency despite input and output voltage variations.
Compared to typical current-mode PWM schemes, OptiComp enables quick response to line and load transients while ensuring excellent stability and wide bandwidth. This reduces output voltage droop and overshoot for dynamic loads, even with minimal output capacitance.
The KTB4800 buck regulator supports a range of applications, including CPU and GPU cores, DSPs, DDR memory, I/O power, and sensor/analog supplies. Its output voltage is I²C-programmable from 0.6 V to 3.345 V. The regulator features soft-start and dynamic voltage scaling (DVS) with multiple programmable ramp rates, along with selectable forced-PWM and auto-skip modes for light-load efficiency.
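As a rough illustration of the I²C programmability, assume a hypothetical 5-mV LSB; that step size fits the stated 0.6-V-to-3.345-V span exactly, but it is an inference, not a datasheet fact. The code-to-voltage mapping would then look like:

```python
# Hypothetical sketch of the KTB4800's output programming. The 5-mV LSB
# and 0..549 code range are my inference from the published span
# (0.6 V to 3.345 V), not taken from the datasheet.
VOUT_MIN, LSB, CODE_MAX = 0.6, 0.005, 549

def vout_from_code(code: int) -> float:
    if not 0 <= code <= CODE_MAX:
        raise ValueError("code out of range")
    return VOUT_MIN + code * LSB

print(vout_from_code(0), vout_from_code(CODE_MAX))
```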
The KTB4800 OptiComp switching regulator is available now for order from Mouser Electronics and other distributors.
Gate driver photocoupler simplifies SiC MOSFET control

Housed in a small SO8L package, Toshiba’s TLP5814H gate driver photocoupler provides an active Miller clamp for driving SiC MOSFETs. Its built-in clamp circuit directs Miller current from the gate to ground, preventing short circuits without requiring a negative voltage. This enhances system safety while reducing external circuitry for a more compact design.
The TLP5814H delivers a peak output current of +6.8 A/-4.8 A, with the Miller clamp providing a typical channel resistance of 0.69 Ω and a peak sinking current of +6.8 A. Its -40°C to +125°C operating range is achieved by enhancing the infrared LED’s optical output and optimizing the photodetector design for better optical coupling efficiency. This makes the device well-suited for industrial equipment with strict thermal requirements, such as PV inverters and uninterruptible power supplies.
The TLP5814H’s compact 5.85×10×2.1-mm package enhances layout flexibility while providing an 8.0-mm creepage distance for high-insulation applications.
Toshiba has begun volume shipments.
32-bit MCUs pack FPU and fast analog

Microchip’s PIC32A 32-bit MCUs feature an FPU coprocessor that performs both 32-bit and 64-bit operations for math-intensive tasks. Operating at 200 MHz, they also integrate high-speed analog peripherals to minimize external component requirements.
Two 12-bit ADCs, with conversion rates up to 40 Msamples/s, are complemented by three 5-ns analog comparators and 12-bit pulse density modulation DACs. The MCUs also include three rail-to-rail 100-MHz op amps with a slew rate of 100 V/µs. These features enable cost-effective edge sensing and control, making the PIC32A series well-suited for automotive, industrial, consumer, AI/ML, and medical applications.
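As a quick sanity check of those op-amp numbers (standard slew-rate math, not a Microchip spec), a 100-V/µs slew rate supports full-amplitude sine outputs up to roughly:

```python
import math

# A sine Vpk*sin(2*pi*f*t) has a maximum slope of 2*pi*f*Vpk, which must
# stay below the op amp's slew rate to avoid slew-induced distortion.
SLEW = 100e6  # 100 V/us expressed in V/s

def full_power_bandwidth_hz(v_peak: float) -> float:
    return SLEW / (2 * math.pi * v_peak)

print(full_power_bandwidth_hz(1.0) / 1e6)  # ~15.9 MHz for a 1-V-peak sine
```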
To ensure safe software execution in embedded control systems, the PIC32A MCUs offer a range of hardware safety and security features. These include ECC on flash and RAM, Memory Built-In Self-Test (MBIST), I/O integrity monitors, fail-safe clock monitor, immutable secure boot, and flash access control.
Prices for the PIC32A microcontrollers start at less than $1 each in volume quantities.
Peak power point

Imagine that you have a voltage source in series with some source resistance feeding power to a variable load. The relationship between load voltage, load current, and cell current can be drawn as follows in Figure 1.
Figure 1 The load voltage versus cell and load current for a circuit where the voltage source is in series with some source resistance feeding power to a variable load.
Multiplying load voltage by load current, we can examine power delivery to the load versus load resistance; the result is a curve that looks like an upside-down soup bowl (Figure 2).
Figure 2 Power to the load (load voltage*load current) versus cell and load current.
For some specific source resistance value, we can plot a horizontal line on our graph (Figure 3).
Figure 3 Adding a specific numerical value for the source resistance.
If we next add a curve to plot the varying load resistance value (Figure 4), we find that the point of maximum power delivery to the load corresponds to equality between the load resistance and the source resistance. Of course, this is expected to be so, but we should also note that the equality of interest is really between the load resistance and the dynamic value of the source resistance, as opposed to that part's static resistance.
Figure 4 Discovery of the peak power point by finding where the load resistance equals the source resistance.
This last remark may seem trivial, but as we shall now show, it is NOT trivial at all.
From Linear Technology (a name of fond memory today) at this now inoperative URL, we had the following sketch of a photovoltaic (PV) assembly’s characteristics shown in Figure 5.
Figure 5 Solec S-70C PV panel power curve while facing the sun.
Graphically extracting some numbers from the current versus voltage curve and fitting a descriptive equation to those numbers, we find the following in Figure 6.
Figure 6 A numerical representation of the PV device shown in Figure 5.
Again, we multiply the load voltage times the cell and load current to see the curve of the power delivery to the load and we also draw the dynamic resistance of the photovoltaic device (Figure 7).
Figure 7 Current, power, and dynamic resistance curves for the Solec S-70C PV device; the dynamic resistance of the PV here is no longer the static horizontal line we saw in Figure 3.
Note now that the dynamic resistance of the photovoltaic device is not a horizontal line. The dynamic resistance of the photovoltaic device is now a variable. We also note that the power curve is no longer symmetrical but has instead taken a lean over to the viewer’s right.
Identifying the point of maximum power to the load or identifying the peak power point, we see the following in Figure 8.
Figure 8 Discovery of the peak power point for the Solec S-70C PV device.
We find that the peak power point is located where the load resistance equals the dynamic source resistance of the PV device.
If you want to get as much power delivery as possible out of a PV device, the load resistance needs to match the dynamic source impedance of that device.
Please note that in order to make these sketches more viewable, the vertical axis presentation of resistance is not linear in Ohms but has been made proportional to log (1+Ohms).
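The matching condition can be verified numerically. The sketch below uses an illustrative single-diode PV model with made-up parameters (not the Solec S-70C's actual curve) and shows that the load resistance at the peak power point equals the source's dynamic resistance:

```python
import math

# Illustrative single-diode PV model (made-up parameters, not the
# Solec S-70C): I(V) = Isc - I0*(exp(V/Vt) - 1).
Isc, I0, Vt = 4.4, 1e-9, 1.0

def current(v):
    return Isc - I0 * math.expm1(v / Vt)

# Sweep the load voltage and track the peak of P = V*I:
best_v, best_p = 0.0, 0.0
v = 0.0
while current(v) > 0:
    p = v * current(v)
    if p > best_p:
        best_v, best_p = v, p
    v += 0.001

r_load = best_v / current(best_v)          # static load resistance at the peak
r_dyn = Vt / (I0 * math.exp(best_v / Vt))  # dynamic source resistance, -dV/dI

print(round(r_load, 2), round(r_dyn, 2))  # the two match at the peak power point
```

Setting d(V·I)/dV = 0 gives V/I = -dV/dI directly, which is exactly the load-equals-dynamic-source-resistance condition the figures illustrate.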
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Increase efficiency of street-light solar panels using a maximum peak-power tracker
- How to design a Li-Ion battery charger to get maximum power from a solar panel
- Solar day-lamp with active MPPT and no ballast resistors
- Solar-mains hybrid lamp
- Solar-array controller needs no multiplier to maximize power
Intel’s new CEO: What you need to know

Lip-Bu Tan, who has exposure to both chip design and chip manufacturing worlds due to his CEO stint at EDA powerhouse Cadence, is taking the reins of Intel after Pat Gelsinger was forced out by the Intel board a few months ago. Sally Ward-Foxton takes a closer look in her EE Times piece at what led to Tan’s appointment. She argues that his former leadership roles make him a suitable person to lead Intel, currently torn between its shrinking position in CPU design and its ambitious foray into the foundry business.
Read the full story at EDN’s sister publication, EE Times.
Related Content
- Intel Foundry Embraces ARM: Start of the End?
- Intel: Gelsinger’s foundry gamble enters crunch
- Pat Gelsinger: Return of the engineer CEO that wasn’t
- Intel CEO’s Departure Leaves Top U.S. Chipmaker Adrift
- Intel Halts Products, Slows Roadmap in Years-Long Turnaround
100-MHz VFC with TBH current pump

Famous analog designer and author Jim Williams published an awesome design in 1986 for a 100-MHz voltage to frequency converter. He named this high-climber (picture it on the roof of the Empire State building swatting biplanes out of the air) King Kong! He followed Kong in 2005 with a significantly updated successor, “1-Hz to 100-MHz VFC features 160-dB dynamic range.”
I was fascinated by both of these impressive designs because they were way faster than any other VFC I’d ever seen! Another two decades passed before I decided to try for a 9-digit VFC of my own.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Here’s the result (Figure 1).
Figure 1 This simple VFC borrows some of Williams’ pioneering speed ideas and combines them with a few tricks of my own to reach the high altitude of a 100-MHz full scale frequency.
Q1, D5, and Schmitt trigger U1 make a speedy but sloppy VFC, which is then accurized by the feedback loop comprising prescaler U3, take-back-half (TBH) charge pump D1-D4, and integrator A1. The preaccumulator U2 interfaces the 100-MHz count rate to moderate-speed (~6.25-MHz) counter-timer peripherals without losing resolution.
The core of Figure 1’s circuit is a very simple Q1, U1, D5 ramp-reset oscillator. Q1’s collector current discharges the few picofarads of stray capacitance provided by its own collector, Schmitt trigger U1’s input, D5, and their interconnections (as short and direct as possible, please!). U1’s sub-five-nanosecond propagation delay allows the oscillation to run from a dead stop (possible due to leakage draining R4) to beyond 100 MHz.
During each cycle, when Q1 ramps U1 pin1 down to its trigger level, U1 responds with a ~5 ns ramp reset feedback pulse through Schottky D5. This pulls pin 1 back above the positive trigger level and starts the next oscillation cycle. Because the ramp-down rate is (more or less) proportional to Q1’s base current, which is approximately proportional to integrator A1’s output voltage, oscillation frequency is likewise. The caveat is “approximately”.
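A back-of-envelope check of why this can run so fast (my numbers, not from the article): a ramp-reset oscillator runs at roughly f = I/(C·ΔV), so when only a few picofarads of stray capacitance are being discharged, quite modest currents reach 100 MHz:

```python
# Ramp-reset oscillator frequency estimate, f = I / (C * dV).
# The capacitance and hysteresis values below are assumptions for
# illustration, not measured values from the DI.
C_STRAY = 3e-12  # assumed few-pF node capacitance
DV = 2.0         # assumed Schmitt trigger hysteresis span, volts

def required_current_a(f_hz: float) -> float:
    """Collector current Q1 must sink to sustain oscillation at f_hz."""
    return f_hz * C_STRAY * DV

print(required_current_a(100e6) * 1e3)  # milliamps at full scale
```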
The feedback through the TBH pump, summation with the R1 input at integrator A1's noninverting input, and the output to Q1 and thence to U1 pin 1 convert "approximately" into "accurately". One item that lets this VFC work in Kong's frequency domain, but with a considerably simpler parts count, is the self-compensating TBH diode charge pump described in an earlier design idea (DI): "Take-back-half precision diode charge pump."
So, what’s U3 doing?
The TBH pump's self-compensation allows it to accurately dispense charge at 25 MHz or so, but 100 MHz would definitely be asking too much. U3's two-bit prescaler addresses this problem. U3 also provides an opportunity (note jumper J1) to substitute a high-quality 5.000-V reference for the likely lesser accuracy of the generic 5-V rail.
Figure 2 shows a 250-kHz diode charge pump boosting the 5-V rail to about 8 V, which is then regulated down to a precision 5.000 V by U4. U3's current demand, including pump drive, is about 23 mA at 100 MHz; U4 isn't rated for quite that heavy a load, so buddy resistor R6 takes up the slack.
Figure 2 A 250-kHz diode charge pump rail booster brings the rail to about 8 V, which U4 then regulates down to a precision 5.000-V reference.
The 16x preaccumulator U2 allows use of moderate performance counter-timer peripherals as slow as 6.25 MHz to acquire the full-scale 100-MHz VFC output. That idea is described in an earlier DI: “Preaccumulator handles VFC outputs that are too fast for a naked CTP to swallow.”
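The counting arithmetic can be sketched as follows. This is my illustration of the idea, assuming the counter-timer also reads the preaccumulator's residual bits so no pulses are lost:

```python
# Sketch of recovering the VFC frequency from a slow counter-timer
# reading behind a divide-by-16 preaccumulator (my illustration of the
# scheme; the DI's hardware details differ).
PRESCALE = 16  # U2 divides the 100-MHz VFC output by 16

def vfc_frequency_hz(counter_counts: int, preacc_residual: int,
                     gate_time_s: float) -> float:
    """Total VFC pulses are 16 per counter count, plus whatever is still
    held in the preaccumulator, so resolution is preserved."""
    return (counter_counts * PRESCALE + preacc_residual) / gate_time_s

# Full scale: the counter itself only ever sees 100 MHz / 16 = 6.25 MHz.
print(vfc_frequency_hz(6_250_000, 0, 1.0))
```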
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- 1-Hz to 100-MHz VFC features 160-dB dynamic range
- Take-Back-Half precision diode charge pump
- Preaccumulator handles VFC outputs that are too fast for a naked CTP to swallow
- 80 MHz VFC with prescaler and preaccumulator
- 20MHz VFC with take-back-half charge pump
How to tackle DRAM’s power conundrum

While DRAM designers strive for incremental improvements in performance, power, bit density, and capacity with each successive node, AI-driven data centers are putting a lot of pressure on memory makers to make further advances in power efficiency. Gary Hilson provides a sneak peek of how Micron—one of the three big DRAM producers—is reducing power consumption by employing high-K metal gate CMOS technology paired with design optimizations.
Read the full story at EDN’s sister publication, EE Times.
Related Content
- LPDDR5 DRAM ready for Level 5 autonomy
- DRAM: the field for material and process innovation
- Emerging Memories May Never Go Beyond Niche Applications
- DRAM for energy- and area-efficient analog in-memory computing
- DRAM basics and its quest for thermal stability by optimizing peripheral transistors
A pitch-linear VCO, part 2: taking it further

Editor’s Note: This DI is a two-part series.
Part 1 shows how to make an oscillator with a pitch that is proportional to a control voltage.
Part 2 shows how to modify the circuit for use with higher supply voltages, implement it using discrete parts, and modify it to closely approximate a sine wave.
In Part 1, we saw how to make an oscillator whose pitch, as opposed to frequency, can be made proportional to a control voltage. In this second part, we’ll look at some alternative ways of arranging things for other possible applications.
Wow the engineering world with your unique design: Design Ideas Submission Guide
To start with, Figure 1 shows a revised version of the basic circuit, built with B-series CMOS to allow rail voltages of up to 18 or 20 V rather than the nominal 5 V of the original.
Figure 1 A variant on Part 1’s Figure 2, allowing operation with a supply of up to 20 V.
Apart from U2's change from a 74HC74 to a CD/HEF4013B, the main difference is in U1. With a 12-V rail, TL062/072/082s and even LM358s and MC1458s all worked well, as did an LM393 comparator with an output pull-up resistor. The control voltage's span increases with supply voltage, but remains at ~±20% of Vs. Note that because we're only sensing within that central portion, the restricted input ranges of those devices were not a problem.
Something that was a problem, even with the original 5-V MCP6002, was a frequent inability to begin oscillating. Unlike the 74HC74, a 4013 has active-high R and S inputs, so U1a’s polarity must be flipped. It tends to start up with its output high, which effectively locks U2a into an all-1s condition, forcing Q1 permanently on. That explains the need for R5/C5/Q2. If (when!) the sticky condition occurs, Q2 will turn on, shorting C2 so that Q1 can turn off and oscillation commence. A reverse diode across R5 proved unnecessary at the low frequencies involved.
This could also be built using the extra constant-current sink, shown in Part 1’s Figure 4, but then U1 would need to have rail-to-rail inputs.
A version that lacks any logic
This is an extension of the first version that I tried, which was built without logic ICs. It's neat and works, but U1a could only output pulses, which needed stretching to be useful. (Using a flip-flop guaranteed the duty cycle, while the spare section, used as a monostable, generated much better-defined reset pulses.) The circuit shown in Figure 2 works around this and can be built for pretty much any rail voltage you choose, as long as U1 and the MOSFETs are chosen appropriately.
Figure 2 This all-discrete version (apart from the op-amps) uses a second section to produce an output having a duty cycle close to 50%.
U1b’s circuitry is a duplicate of U1a’s but with half the time-constant. It’s reset in the same way and its control voltage is the same, so its output pulses have half the width of a full cycle, giving a square wave (or nearly so). Ideally, Q1 and Q3 should be matched, with C3 exactly half of C1 rather than the practical 47n shown. R7 is only necessary if the rail voltage exceeds the gate-source limits for Q1/3. (ZVP3306As are rated at 20 V max.)
Purity comes from overclocking a twisted ring
The final variation (see Figure 3) goes back to using logic and has a reasonably sinusoidal output, should you need that.
Figure 3 Here the oscillator runs 16 times faster than the output frequency. Dividing the pulse rate down using a twisted-ring counter with resistors on its 8 outputs gives a stepped approximation to a sine wave.
The oscillator itself runs at 16 times the output frequency. The pulse-generating monostable multivibrator (MSMV) now uses a pair of cross-coupled gates, and not only feeds Q1 but also clocks an 8-bit shift register (implemented here as two 4-bit ones), whose final output is inverted and fed back to its D input. That’s known as a twisted-ring or Johnson counter and is a sort of digital Möbius band. As the signal is shifted past each Q output, it has 8 high bits followed by 8 low ones, repeated indefinitely. U2c not only performs the inversion but also delivers a brief, solid high to U3a’s D input at start-up to initialize the register.
U2 and U3 are shown as high-voltage CMOS parts to allow for operation at much more than 5 V. Again, U1 would then need changing, perhaps to a rail-to-rail input (RRI) part if the extra current source is added. 74HC132s and 74HC4015s (or ’HC164s) work fine at ~5 V.
The Q outputs feed a common point through resistors selected to give an output which, though stepped, is close to a sine wave, as Figure 4 should make clear. R4 sets the output level and C4 provides some filtering. (Different sets of resistors can give different tone colors. For example, if they are all equal, the output, though still stepped, will be a good triangle wave.)
Figure 4 Waveforms illustrating the operation of Figure 3’s circuit when it’s delivering ~500 Hz.
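The weighting trick can be sketched numerically. This is my own idealized model (weights chosen as differences of cosine samples, not Figure 3's actual resistor values): an 8-stage twisted-ring counter clocked 16 times per output cycle, whose weighted sum lands exactly on samples of a shifted sine wave.

```python
import math

STAGES = 8

def johnson_states(stages=STAGES):
    """Yield the 16 successive output vectors of a twisted-ring counter."""
    q = [0] * stages
    for _ in range(2 * stages):
        yield list(q)
        q = [1 - q[-1]] + q[:-1]  # shift in the inverted last output

# Step weights w_i = cos(i*pi/8) - cos((i+1)*pi/8): all positive, so each
# maps onto one resistor per Q output (R_i proportional to 1/w_i).
w = [math.cos(i * math.pi / STAGES) - math.cos((i + 1) * math.pi / STAGES)
     for i in range(STAGES)]

staircase = [sum(wi * qi for wi, qi in zip(w, q)) for q in johnson_states()]
ideal = [1 - math.cos(math.pi * n / STAGES) for n in range(2 * STAGES)]
print(max(abs(a - b) for a, b in zip(staircase, ideal)))  # essentially zero
```

Because exactly one output toggles per clock, each step adds or removes one weight, and the sum telescopes onto 1 - cos(πn/8) at every state; the residual step energy at the 15th and 17th harmonics is what C4/R4 (and Figure 5's filter) then remove.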
The steps correspond to the 15th and 17th harmonics, which, though somewhat filtered by C4/R4, are still at ~-45 dB. To reduce them, add a simple two-pole Sallen–Key filter, like that in Figure 5, which also shows the filtered spectrum for an output of around 500 Hz.
Figure 5 A suitable output filter for adding to Figure 3, and the resulting spectrum.
The 2nd and 3rd harmonics are still at around -60 dB, but the others are now well below -70 dB, so we can claim around -57 dB or 0.16% THD, which will be worse at 250 Hz and better at 2 kHz. This approach won’t work too well if you want the full 4–5-octave span (extra current sink) unless the filter is made tunable: perhaps a couple of resistive opto-isolators combined with R14/15, driven by another voltage-controlled current source?
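The THD arithmetic checks out if we combine the residual harmonics in RMS terms; lumping everything beyond the 2nd and 3rd at an assumed -70 dB (my simplification of "well below -70 dB"):

```python
import math

# THD is the RMS sum of the harmonic amplitudes relative to the fundamental.
def thd_from_db(levels_db):
    return math.sqrt(sum(10 ** (db / 10) for db in levels_db))

# 2nd and 3rd at about -60 dB; lump the rest as three terms at -70 dB:
thd = thd_from_db([-60, -60, -70, -70, -70])
print(100 * thd, 20 * math.log10(thd))  # roughly 0.15% and -56 dB
```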
All that is interesting, but rather pointless. After all, the main purpose of this design idea was to make useful audible tones, not precision sine waves, which sound boring anyway. But a secondary purpose should be to push things as far as possible, while having fun experimenting!
A musical coda
Given a pitch-linear tone source, it seemed silly not to try to make some kind of musical thingy using a tappable linear resistance. A couple of feet, or about 10 kΩ's worth, of Teledeltos chart paper (which I always knew would come in handy, as the saying goes) wrapped round a length of plastic pipe, with a smooth, shiny croc clip for the tap or slider (plus a 330k pull-down), worked quite well, allowing tunes to be picked out as on a Stylophone or an air guitar. Electro-punk lives! Though it's not so much "Eat your heart out, Jimi Hendrix" as "Get those earplugs in".
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- A pitch-linear VCO, part 1: Getting it going
- VCO using the TL431 reference
- Ultra-low distortion oscillator, part 1: how not to do it.
- How to control your impulses—part 1
- Squashed triangles: sines, but with teeth?
- Simple 5-component oscillator works below 0.8V
- A two transistor sine wave oscillator
How controllers tackle storage challenges in AI security cameras

Visual security systems have evolved enormously since the days of infrared motion detectors and laser tripwires. Today, high-definition cameras stream video into local vision-processing systems. These AI-enabled surveillance cameras detect motion, isolate and identify objects, capture faces, expressions, and gestures, and may even infer the intent of people in their field of view. They record interesting videos and forward any significant events to a central security console.
Integrating AI capabilities transforms security cameras into intelligent tools to detect threats and enhance surveillance proactively. Intent inference, for example, allows security cameras to quickly predict suspicious behavior patterns in crowds, retail stores, and industrial facilities. Case in point: AI-enabled cameras can detect unattended packages, license plates, and people in real time and report them to security personnel.
According to a report from Grandview Research, due to the evolving use of AI technology and growing security concerns, the market for AI-enabled security cameras is projected to grow at a CAGR of over 18% between 2024 and 2032. This CAGR would propel the market from $7.55 billion in 2023 to $34.2 billion in 2032.
The need for compute power
Increasing sophistication demands growing computing power. While that antique motion sensor needed little more than a capacitor and a diode, real-time object and facial detection require a digital signal processor (DSP). Advanced inferences such as expression or gesture recognition need edge AI: compact, low-power neural-network accelerators.
Inferring intent may be a job for a small-language model with tens or hundreds of millions of parameters, demanding a significantly more powerful inference engine. Less obviously, this growth in functionality has profound implications for the security camera’s local non-volatile storage subsystem. Storage, capacity, performance, reliability, and security have all become essential issues.
Storage’s new role
In most embedded systems, the storage subsystem’s role is simple. It provides a non-volatile place to keep code and parameters. When the embedded system is initialized, the information is transferred to DRAM. In this use model, reading happens only on initialization and is not particularly speed sensitive. Writing occurs only when parameters are changed or code is updated and is, again, not performance sensitive.
The use case for advanced security cameras is entirely different. The storage subsystem will hold voluminous code for various tasks, the massive parameter files for neural network models, and the continuously streaming compressed video from the camera.
To manage energy consumption, designers may shut down some processors and much of the DRAM until the camera detects motion. This means the system will load code and parameter files on demand—in a hurry—just as it begins to stream video into storage. So, both latency and transfer rate are essential.
In some vast neural network models, the storage subsystem may also have to hold working data, such as the intermediate values stored in the network’s layers or parameters for layers not currently being processed. This will result in data being paged in and out of storage and parameters being loaded during execution—a very different use model from static code storage.
Storage meeting new needs
Except in scale, the storage use model in these advanced security cameras resembles what goes on in an AI-tuned data center more than a typical embedded-system model. This difference will impose new demands on the camera's storage subsystem hardware and firmware.
The primary needs are increased capacity and speed. This responsibility falls first upon the NAND flash chips themselves. Storage designers use the latest multi-level and quad-level, stacked-cell NAND technology to get the capacity for these applications. And, of course, they choose chips with the highest speeds and lowest latencies.
However, fast NAND flash chips with terabit capacity can only meet the needs of security-camera applications if the storage controller can exploit their speed and capacity and provide the complex management and error correction these advanced chips require.
Let’s look at the storage controller, then. The controller must support the read-and-write data rates the NAND chips can sustain. And it must handle the vast address spaces of these chips. But that is just the beginning.
Storage controller’s tasks
Error correction in NAND flash technology is vital. Soft error rates and the deterioration of the chips over time make it necessary to have powerful error correction code (ECC) algorithms to recover data reliably. Just how important it is, however, depends on the application. A wrong pixel or two in a recorded video may be inconsequential. Neural network models can be remarkably tolerant of minor errors.
But a bad bit in executable code can turn off a camera and force a reboot. A wrong most significant bit (MSB) in a parameter at a critical point in a neural network model, especially for small-language models, can result in an incorrect inference. So, a mission-critical security camera needs powerful, end-to-end error correction. The data arriving at the system DRAM must be precisely what was initially sent to the storage subsystem.
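To see why the most significant bit matters so much, consider a signed 8-bit quantized weight, a common neural-network storage format; the encoding here is just an illustration:

```python
# Flipping the MSB of a two's-complement int8 value shifts it by 256,
# so a small positive weight becomes a large negative one.
def flip_msb_int8(w: int) -> int:
    raw = w & 0xFF            # two's-complement byte for -128..127
    flipped = raw ^ 0x80      # single-bit error in the MSB
    return flipped - 256 if flipped & 0x80 else flipped

print(flip_msb_int8(3))  # a weight of +3 becomes -125
```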
This requirement becomes particularly interesting for advanced NAND flash chips. Each type of chip—each vendor's process, number of logic levels per cell, and number of cells in a stack—will have its own error syndromes. Ideally, the controller's ECC algorithms will be designed for the specific NAND chips.
Aging is another issue—flash cells wear out with continued reading and writing. However, as we have seen, security cameras may almost continuously read and write storage during the camera’s lifetime. That is the worst use case for ultra-dense flash chips.
To make matters more complex, cameras are often mounted in inaccessible locations and frequently concealed, so frequent service is expensive and sometimes counterproductive (Figure 1). The video they record may be vital for safety or law-enforcement authorities long after it is recorded, so degradation over time would be a problem.
Figure 1 Managing flash cell endurance is an essential issue since cameras are often mounted in inaccessible locations. Source: Silicon Motion
The controller’s ability to distribute wear evenly across the chips, scrub the memory for errors, and apply redundant array of independent disks (RAID)-like techniques to correct the mistakes translates into system reliability and lower total cost of ownership.
The storage controller must also be forearmed against physical threats and cyberattacks. Provisions should be made for fast checkpoint capture, read/write locking of the flash array, and a quick, secure erase facility in case of power loss or physical damage. To blunt cyberattacks, the storage subsystem must have a secure boot process, access control, and encryption.
A design example
To appreciate the level of detail involved in this storage application, we can focus on just one feature: the hybrid zone. Some cells in a multi-level or quad-level NAND array can be configured to store only a single bit of data instead of two, three, or four bits. These regions of single-level cells are called hybrid zones. They have significantly shorter read and write latencies than cells storing multiple bits.
The storage controller can use this feature in many ways. It can store code here for fast loading, such as boot code. It can store parameters for a neural network model that must be paged into DRAM on demand. For security, the controller can use a hybrid zone to isolate sensitive data from the access method used in the rest of the storage array. Or the controller can reserve a hybrid zone for a fast dump of DRAM contents in case of system failure.
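The routing decision behind those uses can be sketched as a simple policy. This is a hypothetical illustration, not Silicon Motion's firmware API; the data-class names are my own:

```python
# Hypothetical controller policy: route latency- and security-critical
# data classes to the SLC hybrid zone, bulk video to the MLC/TLC array.
HYBRID_ZONE_CLASSES = {"boot_code", "paged_nn_params", "secure_keys", "crash_dump"}

def target_zone(data_class: str) -> str:
    if data_class in HYBRID_ZONE_CLASSES:
        return "slc_hybrid_zone"       # fast, single-bit-per-cell region
    return "mlc_tlc_main_array"        # dense multi-bit region

print(target_zone("boot_code"), target_zone("video_stream"))
```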
Figure 2 Here is how the FerriSSD controller offers a hybrid zone, the unique capability of partitioning a single NAND die into separate single-level cells (SLC) and multi-level cells/3D triple-level cells (MLC/TLC zones). Source: Silicon Motion
The hybrid zone's flexibility ultimately supports diverse storage needs in multi-functional security systems, from high-speed data access for real-time applications, such as authentication, to secure storage for critical archived footage.
Selecting storage for security cameras
Advanced AI security cameras require a robust storage solution for mission-critical AI video surveillance applications. Below is an example of how a storage controller delivers enterprise-grade data integrity and reliability using ECC technology.
Figure 3 This is how a storage controller optimizes the choice of ECC algorithms. Source: Silicon Motion
The storage needs of advanced security cameras go far beyond the simple code and parameter storage of conventional embedded systems. They increasingly resemble the requirements of cloud storage systems and require SSD controllers with error correction, reliability, and security features.
This similarity also places great importance on the controller vendor’s experience—in power-conscious edge environments, high-end AI cloud environments, and intimate relationships with NAND flash vendors.
Lancelot Hu is director of product marketing for embedded and automotive storage at Silicon Motion.
Related Content
- Hybrid Camera Targets Self-Driving Car Safety
- HDDs vs SSDs: It’s all about the random speeds
- AI makes data storage more effective for analytics
- Magneto-optical disk ups camera storage capacity
- Bringing HD camera technology to the surveillance market
The post How controllers tackle storage challenges in AI security cameras appeared first on EDN.
Dead Lead-acid Batteries: Desulfation-resurrection opportunities?

Back in November 2023, I told you about how my 2006 Jeep Wrangler Unlimited Rubicon:
had failed (more accurately, not completed) its initial emissions testing the year before (October 2022) because it hadn’t been driven substantively in the prior two years and its onboard diagnostic system therefore hadn’t completed a self-evaluation prior to the emissions test attempt. Thankfully, after driving the vehicle around for a while, aided by mechanics’ insights, online info and data sourced from my OBD-II scanner, the last stubborn self-test (“oxygen sensor heater”) ran and completed successfully, as did my subsequent second emissions test attempt.
The battery, which I’d just replaced two years earlier in September 2020, had been disconnected for the in-between two-year period, not that keeping it connected would have notably affected the complications-rife outcome; even with the onboard diagnostic system powered up, the vehicle still needed to be driven in order for self-evaluation tests to run. This time, I vowed, I’d be better. I’d go down to the outdoor storage lot, where the Jeep was parked, every few weeks and start and drive it some. And purely for convenience reasons, I kept the battery connected this time, so I wouldn’t need to pop the hood both before and after each driving iteration.
I bet you know what happened next, don’t you? How does that saying go? “The road to hell is paved with good intentions.” Weeks turned into months, months turned into years, and two years later (October 2024, to be exact), I ended up with not only a Jeep whose onboard diagnostics system tests had expired again, but one whose battery looked like this:
Here it is in the cart at Costco, after my removal of it from the engine compartment and right before I replaced it with another brand-new successor:
I immediately replaced it primarily for expediency reasons; it’s somewhat inconvenient to get to the storage lot (which is why my prior aspirations had come to naught), and given that I already knew I had some driving to do before the Jeep would pass emissions (not to mention that my deadline for passing was drawing near), I didn’t want to waste time trying to revive this one. But I was nagged afterward by curiosity: could I have revived it? I decided to do some research, and although in my case the answer was likely still no (given just how drained it was, and for how long it’d been in this degraded condition), I learned a few things that I thought I’d pass along.
First off: what causes a (sealed, in my particular case) lead-acid (SLA) battery to fail in the first place? Numerous reasons exist, but for the purposes of this post, I’m going to focus on just one: sulfation. With as-usual upfront thanks to Wikipedia for the concise but comprehensive summary that follows:
Lead–acid batteries lose the ability to accept a charge when discharged for too long due to sulfation, the crystallization of lead sulfate. They generate electricity through a double sulfate chemical reaction. Lead and lead dioxide, the active materials on the battery’s plates, react with sulfuric acid in the electrolyte to form lead sulfate. The lead sulfate first forms in a finely divided, amorphous state and easily reverts to lead, lead dioxide, and sulfuric acid when the battery recharges. As batteries cycle through numerous discharges and charges, some lead sulfate does not recombine into electrolyte and slowly converts into a stable crystalline form that no longer dissolves on recharging. Thus, not all the lead is returned to the battery plates, and the amount of usable active material necessary for electricity generation declines over time.
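For reference, the overall double-sulfate reaction that the excerpt describes can be written as the standard textbook equation (discharging reads left to right; charging reverses it):

$$\mathrm{Pb} + \mathrm{PbO_2} + 2\,\mathrm{H_2SO_4} \;\rightleftharpoons\; 2\,\mathrm{PbSO_4} + 2\,\mathrm{H_2O}$$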
And specific to my rarely used vehicle situation:
Sulfation occurs in lead–acid batteries when they are subjected to insufficient charging during normal operation. It also occurs when lead–acid batteries are left unused with an incomplete charge for an extended time. It impedes recharging; sulfate deposits ultimately expand, cracking the plates and destroying the battery. Eventually, so much of the battery plate area is unable to supply current that the battery capacity is greatly reduced. In addition, the sulfate portion (of the lead sulfate) is not returned to the electrolyte as sulfuric acid. It is believed that large crystals physically block the electrolyte from entering the pores of the plates. A white coating on the plates may be visible in batteries with clear cases or after dismantling the battery. Batteries that are sulfated show a high internal resistance and can deliver only a small fraction of normal discharge current. Sulfation also affects the charging cycle, resulting in longer charging times, less-efficient and incomplete charging, and higher battery temperatures.
Okay, but what if I had just kept the battery disconnected, as I’d been doing previously? That should have been enough to prevent sulfation-related degradation, since there’d then be no resulting current flow through the battery, right? Nope:
Batteries also have a small amount of internal resistance that will discharge the battery even when it is disconnected. If a battery is left disconnected, any internal charge will drain away slowly and eventually reach the critical point. From then on the film will develop and thicken. This is the reason batteries will be found to charge poorly or not at all if left in storage for a long period of time.
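The disconnected-storage scenario in that excerpt is easy to put rough numbers on. Here is a minimal sketch; the ~4%-per-month self-discharge rate and the 60% state-of-charge threshold (below which, hypothetically, sulfation hardens) are illustrative assumptions, not measured figures:

```python
# Rough sketch: months until a disconnected lead-acid battery self-discharges
# to a sulfation-prone state of charge. The 4%/month self-discharge rate and
# the 60% "critical" threshold are illustrative assumptions, not measured data.

def months_until_critical(rate_per_month=0.04, critical_soc=0.60):
    soc = 1.0  # start fully charged
    months = 0
    while soc > critical_soc:
        soc *= (1.0 - rate_per_month)  # exponential self-discharge
        months += 1
    return months

print(months_until_critical())  # roughly a year at these assumed numbers
```

At these assumed rates, a battery left on a shelf crosses the threshold in about a year, which squares with the common experience of finding a long-stored battery that charges poorly or not at all.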
I also found this bit, on both how battery chargers operate and how sulfation adversely affects the process, interesting:
Conventional battery chargers use a one-, two-, or three-stage process to recharge the battery, with a switched-mode power supply including more stages in order to fill the battery more rapidly and completely. Common to almost all chargers, including non-switched models, is the middle stage, normally known as “absorption”. In this mode the charger holds a steady voltage slightly above that of a full battery, in order to push current into the cells. As the battery fills, its internal voltage rises towards the fixed voltage being supplied to it, and the rate of current flow slows. Eventually the charger will turn off when the current drops below a pre-set threshold.
A sulfated battery has higher electrical resistance than an unsulfated battery of identical construction. As related by Ohm’s law, current is the ratio of voltage to resistance, so a sulfated battery will have lower current flow. As the charging process continues, such a battery will reach the charger’s preset cut-off more rapidly, long before it has had time to accept a complete charge. In this case the battery charger indicates the charge cycle is complete, but the battery actually holds very little energy. To the user, it appears that the battery is dying.
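The Ohm’s-law argument above can be sketched numerically. In this toy model (every number, from the voltages to the linear EMF-versus-charge slope, is an illustrative assumption rather than a real battery parameter), a fixed absorption voltage is applied and charging stops once current falls below the cutoff; a badly sulfated battery trips the cutoff almost immediately, having accepted nearly nothing:

```python
# Toy model of the absorption-stage behavior described above: the charger
# holds a fixed voltage, current tapers as the battery's internal EMF rises,
# and charging stops when current drops below a preset cutoff. All numbers
# (voltages, resistances, EMF-vs-charge slope) are illustrative assumptions.

def absorption_charge(r_internal, v_charger=14.4, v_start=12.0,
                      cutoff_a=0.5, dt_hours=0.1):
    """Return amp-hours accepted before the charger's current cutoff trips."""
    emf, accepted_ah = v_start, 0.0
    while True:
        current = (v_charger - emf) / r_internal  # Ohm's law: I = V / R
        if current < cutoff_a:
            return accepted_ah  # charger declares "done"
        accepted_ah += current * dt_hours
        emf += 0.02 * current * dt_hours  # crude model: EMF rises with charge

print(absorption_charge(r_internal=0.02))  # healthy: accepts a full charge
print(absorption_charge(r_internal=5.0))   # badly sulfated: cutoff trips at once
```

In the high-resistance case the very first current reading is already below the cutoff, so the charger indicates a complete charge while the battery holds essentially no added energy, which is exactly the failure mode the excerpt describes.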
My longstanding-use battery charger is a DieHard model 28.71222:
It’s fairly old-school in design, although “modern” enough that a front-panel switch lets the owner choose between conventional SLA and newer absorbed glass mat (AGM) battery technologies from a charging-process standpoint (speaking of which, while researching this piece I also learned that old-school vehicles like mine are often, albeit not always, able to use both legacy SLA and newer AGM batteries). And it conveniently supports not only 10-A charging but also 2-A “trickle” (i.e., “maintain”) and 50-A “engine start” modes.
That said, we’re storing the Volkswagen Eurovan Camper in the garage nowadays, with my Volvo perpetually parked in the driveway instead (and the Jeep still “down the hill” at the storage lot). I recently did some shopping for a more modern “trickle” charger for the van’s battery, and in the process discovered that newer chargers are not only much more compact than my ancient “beast” but also offer integrated desulfation support (claimed, at least). Before you get too excited, there’s this Wikipedia qualifier to start:
Sulfation can be avoided if the battery is fully recharged immediately after a discharge cycle. There are no known independently-verified ways to reverse sulfation. There are commercial products claiming to achieve desulfation through various techniques such as pulse charging, but there are no peer-reviewed publications verifying their claims. Sulfation prevention remains the best course of action, by periodically fully charging the lead–acid batteries.
With that said, there’s this excerpt from the linked-to “Battery regenerator” Wikipedia entry:
The lead sulfate layer can be dissolved back into solution by applying much higher voltages. Normally, running high voltage into a battery will cause it to rapidly heat and potentially cause thermal runaway, which may cause it to explode. Some battery conditioners use short pulses of high voltage, too short to cause significant heating, but long enough to reverse the crystallization process.
Any metal structure, such as a battery, will have some parasitic inductance and some parasitic capacitance. These will resonate with each other, and something the size of a battery will usually resonate at a few megahertz. This process is sometimes called “ringing”. However, the electrochemical processes found in batteries have time constants on the order of seconds and will not be affected by megahertz frequencies. There are some websites which advertise “battery desulfators” running at megahertz frequencies.
Depending on the size of the battery, the desulfation process can take from 48 hours to weeks to complete. During this period the battery is also trickle charged to continue reducing the amount of lead sulfate in solution.
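A back-of-the-envelope check of the “too short to cause significant heating” claim: peak dissipation during each pulse is substantial, but a sub-1% duty cycle keeps the average tiny. The pulse current, width, repetition rate, and internal resistance used here are all assumed, illustrative values:

```python
# Average heating from short desulfation pulses: peak I^2*R dissipation is
# substantial, but at a sub-1% duty cycle the time-averaged power is tiny.
# All parameters are illustrative assumptions, not measured charger specs.

def pulse_heating(i_pulse=10.0, r_internal=0.05, pulse_s=50e-6, rate_hz=200):
    p_peak = i_pulse ** 2 * r_internal  # watts dissipated during each pulse
    duty = pulse_s * rate_hz            # fraction of time the pulse is "on"
    return p_peak, p_peak * duty        # (peak watts, average watts)

peak_w, avg_w = pulse_heating()
print(f"peak ~{peak_w:.1f} W, average ~{avg_w:.2f} W")
```

A few watts peak becomes only tens of milliwatts averaged over time, too little to warm the battery appreciably, which is the whole point of pulsing rather than applying a sustained overvoltage.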
Courtesy of a recent Amazon Prime Big Deal Days promotion, I ended up picking up three different charger models at discounted prices, with the intention of tearing down at least one in the future in comparative contrast to my buzzing DieHard beast. For trickle-only charging purposes, I got two ~$20 1A 6V/12V GENIUS 1s from NOCO, a well-known brand:
Among its feature set bullet points are these:
- Charge dead batteries – Charges batteries as low as 1-volt. Or use the all-new force mode that allows you to take control and manually begin charging dead batteries down to zero volts.
- Restore your battery – An advanced battery repair mode uses slow pulse reconditioner technology to detect battery sulfation and acid stratification to restore lost battery performance for stronger engine starts and extended battery life.
Then there were two from NEXPEAK, a lesser known but still highly rated (on Amazon, at least) brand, the ~$21 6A 12V model NC101:
- [HIGH-EFFICIENCY PULSE REPAIR] battery charger automotive detects battery sulfation and acid stratification, take newest pulse repair function to restore lost battery performance for stronger engine starts and extended battery life. NOTE: can not activate or charging totally dead batteries.
And the also-$21 10A 12V/24V NC201 PRO:
with similarly worded desulfation-support prose:
- [HIGH-EFFICIENCY PULSE REPAIR]Automatically detects battery sulfation and acid stratification, take newest pulse repair function to restore lost battery performance for stronger engine starts and extended battery life. Note: can not activate or charging totally dead batteries.
In fact, with this model and as the front panel graphic shows, the default recharging sequence always begins with a desulfation step.
Do the desulfation claims bear out in real life? Read through the Amazon user comments for the NC101 and NC201 PRO and you’ll likely come away with a mixed conclusion. Cynically speaking, perhaps, the hype is reminiscent of the “peak” cranking amp claims of lithium battery-based battery jump starters. And I also wonder for what percentage of the positive reviewers the battery resurrection ended up being only partial and temporary. That said, I suppose it’s better than nothing, especially considering how cost-effective these chargers are nowadays.
All that said, my ultimate aspiration is to not need to resurrect my Jeep’s battery at all. To wit: given that, as previously noted, “I don’t have AC outlet access for [editor note: conventional] trickle chargers” at the outdoor storage facility, I also picked up a portable solar panel with integrated trickle charger for ~$18 during that same promotion (two, actually, in case I end up moving the van back down there, too):
which, next time I’m down there, I intend to mate to an SAE extension cable I also bought:
bungee-strap the solar panel to the Jeep’s windshield (or maybe the hood, depending on vehicle and sun orientations), on top of the car cover intermediary, and route the charging cable from underneath the vehicle to the battery in the engine compartment above. I’ll report back my results in a future post. Until then, I welcome your comments on what I’ve written so far!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Vehicle emissions: Issues and workarounds for various monitoring conditions
- The PowerStation PSX3: A portable multifunction vehicular powerhouse with a beefy battery
- SLA batteries: More system form factors and lithium-based successors
- Modern UPSs: Their creative control schemes and power sources
- Energizer’s PowerSource Pro Battery Generator: Not bad, but you can do better
The post Dead Lead-acid Batteries: Desulfation-resurrection opportunities? appeared first on EDN.
How to reinvent analog design in the age of AI

Where does analog design stand in the rapidly growing artificial intelligence (AI) world? While neuromorphic designs have been around since the 1980s, can they reinvent themselves with building blocks like field-programmable analog arrays (FPAAs)? Are there appropriate design tools for analog to make a foray into the AI space? Georgia Tech’s Dr. Jennifer Hasler, known for her work on FPAAs, joins other engineering experts to discuss ways of accelerating analog design in the age of AI.
Read the full transcript of this discussion or listen to the podcast at EDN’s sister publication, EE Times.
Related Content
- Inside the walls of FPAA maker Anadigm
- Field Programmable Analog and Gallium Arsenide
- Field Programmable Analog Arrays Get Larger and Cost Less
- Field-Programmable Qubit Arrays: The Quantum Analog of FPGAs
- Lowered Price for Field Programmable Analog Array (FPAA) Development Kit
The post How to reinvent analog design in the age of AI appeared first on EDN.
The downside of overdesign, Part 2: A Dunn history

Editor’s Note: This is a two-part series. Part 1 can be found here.
My father, John Edward Dunn, was a Foreman in the New York City Department of Bridges. His shop was in Brooklyn on Kent Avenue adjacent to the Brooklyn Navy Yard. His first assistant at that shop was a man named Connie Rank. Dad’s responsibilities were to oversee the maintenance and repairs of all of the smaller bridges in Brooklyn, Staten Island, and parts of Queens. The Mill Basin Bridge was one of his.
Dad was on call 24/7 in response to any bridge emergencies. At any time of day or night a phone call would come in and he would have to respond. When calls came in at 2 AM or 3 AM or whenever, the whole household would be awakened. Dad would answer the call and I would hear “Yeah. Okay, I’m on my way.” Then I’d hear Dad dialing a call where I’d hear “Connie? Yeah. See you there,” and that would be that. The routine was that familiar. Nothing further needed to be said. He wouldn’t get home again until at least 5:30 PM the following day for having responded to whatever emergency had occurred and then having worked a full day afterward without interruption.
Many of those emergencies were at the Mill Basin Bridge. One of them made the front page of a city newspaper, which ran a full-page photo of the bridge taken from ground level, showing all kinds of emergency vehicles on the scene with their lights gleaming against the dark sky. Dad showed me that paper and asked, “Do you see that little dot here?” I said “Yes,” and he said, “That little dot is me.” He knew where he had been standing.
Following one accident, perhaps it was the accident above, Dad apparently saved someone’s life. He was honored for that by Mayor Robert F. Wagner. Neither I at the age of twelve nor my sister at nine were ever told the details of the event, but it led to Dad shaking hands with the Mayor at New York City Hall.
John Dunn’s late father, John Edward Dunn, shaking hands with NYC mayor Robert F. Wagner circa 1956 to receive an award for his brave work saving a life as a foreman with the NYC Department of Bridges.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- The downside of overdesign
- Loud noise makers and worker safety
- RF power amplifier safety
- Fractured wires and automotive safety
- The headlights and turn signal design blunder
The post The downside of overdesign, Part 2: A Dunn history appeared first on EDN.
Wireless MCUs deliver richer functionality

STM32WBA6 2.4-GHz wireless MCUs from ST offer increased memory and digital system interfaces for high-end applications in smart home, health, factory, and agriculture. Based on an energy-efficient Arm Cortex-M33 core running up to 100 MHz, the devices provide up to twice the flash and RAM of the previous STM32WBA5 series for application code and data storage.

With up to 2 MB of flash and 512 KB of RAM on-chip, the STM32WBA6 MCUs are able to support more advanced applications. Digital peripherals include high-speed USB, three SPI ports, four I2C ports, three USARTs, and one LPUART. By integrating the processing core, peripherals, and wireless subsystems, the MCUs streamline designs and reduce assembly size.
The STM32WBA6 wireless subsystem supports Bluetooth LE, Zigbee, Thread, and Matter, enabling concurrent communication across multiple protocols. It also enhances performance, with sensitivity increased to -100 dBm for more reliable connectivity up to the maximum specified range.
The STM32WBA6 wireless MCUs are in production and available now, priced from $2.50 each in lots of 10,000 units.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Wireless MCUs deliver richer functionality appeared first on EDN.
650-V GaN HEMT resides in TOLL package

ROHM Semiconductor introduced the GNP2070TD-Z, a 650-V enhancement-mode GaN HEMT in a TO-leadless (TOLL) package. With dimensions of 11.68×9.9×2.4 mm, this compact package enhances heat dissipation, supports high current, and enables strong switching performance.
The GNP2070TD-Z integrates second-generation GaN-on-Si technology, achieving an RDS(on) of 70 mΩ and a Qg of 5.2 nC. With a VDS of 650 V and an IDS of 27 A, the transistor is well-suited for power supplies, AC adapters, PV inverters, and energy storage systems.
For this launch, ROHM has outsourced package manufacturing to ATX Semiconductor, with TSMC handling front-end processes and ATX managing back-end processes. ROHM also plans to collaborate with ATX on automotive-grade GaN devices.
The EcoGaN HEMTs will be available starting in March from DigiKey, Mouser, and Farnell.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post 650-V GaN HEMT resides in TOLL package appeared first on EDN.
Marvell’s 2-nm silicon boosts AI infrastructure

Marvell Technology has demonstrated its first 2-nm silicon IP, enhancing the performance and efficiency of AI and cloud infrastructure. Built on TSMC’s 2-nm process, the working silicon is a key component of Marvell’s platform for developing next-generation custom AI accelerators, CPUs, and switches.
The company’s strategy focuses on developing a comprehensive semiconductor IP portfolio, including electrical and optical SerDes, die-to-die interconnects for 2D and 3D devices, advanced packaging technologies, silicon photonics, custom HBM compute architecture, on-chip SRAM, SoC fabrics, and compute fabric interfaces like PCIe Gen 7.
Additionally, the portfolio includes high-speed 3D I/O for vertically stacking die inside chiplets. This simultaneous bidirectional I/O operates at speeds up to 6.4 Gbps. By shifting from conventional unidirectional I/O to bidirectional I/O, designers can double the bandwidth and/or reduce the number of connections by 50%.
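The doubling claim is simple arithmetic; in this quick sketch only the 6.4-Gbps per-connection rate comes from the announcement, while the 100-connection count is an arbitrary illustration:

```python
# Simultaneous bidirectional I/O moves data both ways on each connection at
# once, doubling aggregate bandwidth for a given pin count (or halving the
# pin count for a given bandwidth). The 6.4 Gbps rate is from the
# announcement; the 100-connection example is an arbitrary illustration.

RATE_GBPS = 6.4  # per connection, per direction

def aggregate_gbps(n_connections, bidirectional):
    directions = 2 if bidirectional else 1
    return n_connections * RATE_GBPS * directions

print(aggregate_gbps(100, bidirectional=False))  # unidirectional baseline
print(aggregate_gbps(100, bidirectional=True))   # bidirectional: doubled
```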
“Our longstanding collaboration with TSMC plays a pivotal role in helping Marvell develop complex silicon solutions with industry-leading performance, transistor density, and efficiency,” said Sandeep Bharathi, chief development officer at Marvell.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Marvell’s 2-nm silicon boosts AI infrastructure appeared first on EDN.
Armv9 platform advances AI at the edge

Arm’s edge AI platform features the Cortex-A320 CPU and Ethos-U85 NPU, enabling on-device execution of models exceeding 1 billion parameters. The Armv9 platform enhances efficiency, performance, and security for IoT, while unlocking new edge AI applications through support for both large and small language models.
Built on the Armv9 architecture, the Cortex-A320 delivers 10× higher ML performance and 30% better scalar performance than its predecessor, the Cortex-A35. It also achieves an 8× ML performance gain over the Cortex-M85-based platform launched last year. Additionally, Armv9.2 offers advanced security features like pointer authentication, branch target identification, and memory tagging extension.
The Cortex-A320 pairs with the Ethos-U85 AI accelerator, which supports transformer-based models at the edge and scales from 128 to 2048 MAC units. To streamline edge AI development, Arm’s Kleidi for IoT compute libraries enhance AI and ML performance on Arm-based CPUs with seamless ML framework integration. For example, Kleidi boosts Cortex-A320 performance by up to 70% when running Microsoft’s TinyStories dataset on Llama.cpp.
To learn more about the Armv9 edge AI platform, click on the product page links below.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Armv9 platform advances AI at the edge appeared first on EDN.
32-bit MCUs offer multiple sensing capabilities

Infineon’s PSOC 4 series microcontrollers now integrate capacitive, inductive, and liquid level sensing in a single device. The PSOC 4000T, powered by a 32-bit, 48-MHz Arm Cortex-M0+ processor, combines CAPSENSE capacitive sensing with Multi-Sense inductive sensing and non-invasive, non-contact liquid sensing.
Infineon says Multi-Sense inductive sensing offers greater noise immunity and durability than existing methods. Its differential, ratio-metric architecture supports new HMI and sensing applications, including touch-over-metal, force touch, and proximity sensing.
The PSOC 4000T’s liquid sensing uses an AI/ML-based algorithm that Infineon says is more cost-effective and accurate than mechanical sensors and standard capacitive solutions. It resists environmental factors like temperature and humidity and detects liquid levels with up to 10-bit resolution. It also rejects foam and residue and operates across varying air gaps between the sensor and container.
The fifth-generation CAPSENSE technology enables hover touch sensing, allowing interaction without direct button contact. Its always-on capability reduces power consumption by 10× while delivering 10× higher signal-to-noise ratio than Infineon’s previous devices.
The PSOC 4000T with CAPSENSE and Multi-Sense is available now. A second device, the PSOC 4100T Plus, offering more memory and I/Os, will gain Multi-Sense support in 2Q 2025.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post 32-bit MCUs offer multiple sensing capabilities appeared first on EDN.
Apple’s spring 2025 part II: Computers, tablets, and a new chip, too

Another week, another suite of products announced via press release only. With the exception of the yearly (and mid-year) in-person WWDC, will we ever see Apple do another live event?
I digress. Some of what Apple’s rolled out (so far…it’s only Wednesday night as I write these words) this week was accurately prognosticated at the end of my last-week coverage. Some of it was near-spot-on forecasted, albeit with an unexpected (and still baffling, a day later) twist. And two of the system unveilings were a complete surprise, at least from a timing standpoint. At all the new systems’ cores were processor updates (core…processor…get it?). And speaking of which, there’s a new one of those, as well. In chronological order, starting with Tuesday’s news…
The iPad Air(s)
Apple had migrated the iPad Air tablet from the M1 to the M2 SoC less than a year ago, at the same time expanding the product suite to include both 11” and 13” form factors. So when Tim Cook teased that “There’s something in the Air” on Monday, M3-based iPad Airs were not what I expected. But…whatevah… By the way, careful perusers of the press release might have already noticed that all the performance-improvement claims mentioned there were versus the 2022 M1-based model, not last year’s M2. That selective emphasis wasn’t an accident, folks.
And of course, there’s a new accompanying keyboard; heaven forbid Apple forego any available opportunity for obsolescence-by-design forced updates to its devoted customer base, yes? Sigh.
The iPad
This one didn’t even justify a press release of its own; instead, Apple tacked a paragraph and photo onto the end of the iPad Air announcement. Predictably, there were performance-improvement claims in that paragraph, and once again Apple jumped two product generations in making them, comparing against the September 2021 9th-generation A13 Bionic-based iPad versus the year-later (but still 2.5 years old) 10th-generation offering running the A14 Bionic SoC. And the doubled-up internal storage is nice. But here’s the surprising-to-me (and pretty much everyone else whose coverage I read) twist; the new 11th-gen iPad is based on the A16 SoC.
“What’s the big deal, Dipert?” you might understandably be asking at this point. The big deal is that the A16 is not Apple Intelligence-compatible. On the one hand, I get it; the iPad is the lowest-priced offering in Apple’s tablet portfolio, so to maintain shareholder-friendly profit margins, the bill-of-materials cost must be similarly suppressed. But given how increasingly fiscal-reliant Apple is on the services segment of its business, I’m still shocked that Apple didn’t instead put the A17 Pro, already found in the latest iPad mini, into the new iPad too, along with enough RAM to enable AI capabilities. Maybe the company just wants to upsell everyone to the iPad Air and Pro instead? If so, I’ve got an intentionally terse response: “good luck with that”.
The MacBook Air(s)
This is what everyone thought Tim Cook was alluding to with Monday’s “There’s something in the Air” tease, suggested in advance by dwindling inventory of existing M3-based products. And one day later than the iPad Air, they belatedly got their wish. That said, with the exception of a new sky blue scheme (No more Space Gray? You gotta be kidding me!), all the changes are on the inside. The M4 SoC (this time exclusively with a 10-core CPU, albeit in both 8-and-10-core GPU variants) is more energy-efficient than its M3 forebear; we’ve already discussed this. But Apple was even more comparison-silly this time, benchmarking against the three-generations-old (and more than four years old) M1 MacBook Air, as well as even more geriatric x86-based variants (Really, Apple? Isn’t it time to stop kicking Intel?). About the most notable thing I can say, aside from the price cut, is that akin to its M4 Mac mini sibling, the M4 MacBook Air now supports up to two external displays in addition to the integrated LCD, without any software-based (therefore CPU-burdening) DisplayLink hacks. Oh, and the front camera is improved. Yay.
The Mac Studio
Speaking of the Mac mini, let’s close by mentioning its bigger (albeit not biggest) brother, the Mac Studio. Until earlier today (again, as I write these words on Wednesday evening) the most powerful Mac Studios, introduced at the 2023 WWDC, were based on M2 SoC variants: the 12 CPU core and 30-or-38 GPU core M2 Max; and dual-die (interposer-connected) 24 CPU core and 60-or-76 GPU core M2 Ultra. They were follow-ups to 2022’s M1 Max (an example of which I own) and M1 Ultra premiere Mac Studio products. So, we were clearly (over)due for next-gen offerings. But, although the M1 versions were introduced in March, M2 successors arrived the following June. So, I’d placed my bets on the (likely June) 2025 WWDC for the next-gen launch timing.
Shows you how much (or more accurately, how little) I know…Apple instead decided on a 2022-era early-March re-do this time. And skipping past the M3 Max, the new “lower-end” (I chuckle to even type those words, and you’ll see why in a second) version of the Mac Studio is based on the 14-or-16 CPU core, 32-or-40 GPU core, and 16 neural processing core M4 Max SoC also found in the latest high-end MacBook Pros.
The M3 Ultra SoC
But, at least for now (and maybe never?) there’s no M4 Ultra processor. Instead, Apple revisited the M3 architecture to come up with the M3 Ultra, its latest high-end SoC for the Mac Studio family. It holds 28-or-32 CPU cores, 60-or-80 GPU cores, and 32 neural processing cores, all prior-gen. I’m guessing the target market will still be satisfied with the available “muscle”, in spite of the generational back-step. And it’s more than just an interposer-connected dual-die M3 Max pairing. It also upgrades Thunderbolt capabilities to v5, previously found only on higher-end M4 SoC variants, and the max RAM to 512 GBytes (the M3 Max only supports 128 GBytes max…see what I did there?).
Maybe we’ll see a next-gen Mac Pro at WWDC, then? And maybe it (or if not, which of its other product line siblings) will be the first system implementation of the next-gen M5 SoC? Stand by. Until then, let me know your thoughts on this week’s announcements in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The Apple iPhone 16e: No more fiscally friendly “SE” for thee (or me)
- Apple’s Spring 2024: In-person announcements no more?
- The 2024 WWDC: AI stands for Apple Intelligence, you see…
- Apple’s fall 2024 announcements: SoC and memory upgrade abundance
- 2024: A technology forecast for the year ahead
The post Apple’s spring 2025 part II: Computers, tablets, and a new chip, too appeared first on EDN.
This is how an electronic system design platform works

A new design platform streamlines electronics system development from component selection to software development by integrating hardware, software, and lifecycle data into a single digital environment. Renesas 365 is built around Altium 365, a design suite that provides seamless access to component sources and intelligence while connecting all stakeholders throughout the creation process.
Embedded system developers often struggle due to manual component searches, fragmented documentation, and siloed design teams. Renesas 365 addresses these challenges by connecting Altium’s cloud-connected system design platform with Renesas’ components for embedded compute, connectivity, analog, and power applications.
Renesas 365, built around Altium’s system design platform, streamlines development from component selection to lifecycle management. Source: Renesas
Renesas CEO Hidetoshi Shibata calls it a first-of-its-kind solution. "It's the next step in the digital transformation of electronics, bridging the gap between silicon and system development." Renesas is working with Altium, the company it acquired last year, to redefine how electronics systems are designed, developed, and sustained—from silicon selection to full system realization—in a connected world.
Here is how Renesas 365 works in five steps.
- Silicon: Renesas 365 ensures that every silicon component is application-ready, optimized for software-defined products, and seamlessly integrated with the broader system.
- Discover: Powered by Altium, this part enables engineers to find both individual components and complete solutions from Renesas' portfolio for faster, more accurate system design.
- Develop: Also powered by Altium, this part provides a cloud-based development environment that ensures real-time collaboration across hardware, software, and mechanical teams.
- Lifecycle: Powered by Altium as well, this part establishes persistent digital traceability to facilitate over-the-air (OTA) updates and to ensure compliance and security from concept to deployment.
- Software: This part provides developers with artificial intelligence (AI)-ready development tools to ensure that the software is optimized for their applications.
The final part of the Renesas 365 offering demonstrates how a unified software framework spanning low- to high-compute performance can help developers create software-defined systems. For instance, these development tools enable real-time, low-power AI inference at the edge. They can also track compliance and automate OTA updates to ensure secure lifecycle management.
This cloud-connected system design platform can aid developers in everything from component selection to embedded software development to OTA updates. Meanwhile, it ensures that existing workflows remain uninterrupted and supports everything from custom AI models to advanced real-time operating system (RTOS) implementations.
Renesas will demonstrate this system design platform live at embedded world 2025, which will be held from 11 to 13 March in Nuremberg, Germany. The company’s booth 5-371 will be dedicated to presentations and interactive demonstrations of the Renesas 365 solution.
Related Content
- PCB design basics
- PCB Design Considerations and Tools
- PCB Design Basics: Example design flow
- Renesas acquires Altium as part of its digitalization strategy
- What does Renesas’ acquisition of PCB toolmaker Altium mean?
The post This is how an electronic system design platform works appeared first on EDN.