EDN Network

Voice of the Engineer

Photocell makes true-zero output of the op-amp

Tue, 11/14/2023 - 15:56

While choosing an op-amp buffer for a new high-resolution single-supply DAC, I considered adding a source of negative supply because the buffer op-amp had to provide a true zero voltage at its output.

Wow the engineering world with your unique design: Design Ideas Submission Guide

For instance, a typical rail-to-rail output op-amp can’t provide a true zero voltage; it can only guarantee getting within several millivolts of zero at its output, while a high-resolution DAC can have a resolution in the tens of microvolts. The application required a true zero output, hence the problem.

Clearly, some negative supply source was needed to increase the “headroom” around zero. (I use the term “headroom” loosely; since we are dealing not with the upper, positive supply but with the lower one, a better word would be “footroom”.)

The initial intention was to reuse a Cuk-configuration circuit, like the older circuit published in EDN, but with an output voltage of only about -1 V and a low (less than 2 mA) output current.

While exploring alternatives, the idea arose to use a photocell instead of an ordinary voltage converter. It resulted in the circuit in Figure 1.

Figure 1 Using photocells instead of a voltage converter to help provide a true zero voltage on the output of an op-amp buffer for a high-resolution single-supply DAC.

The solution has dimensions comparable to the circuit based on the Cuk configuration, albeit with lower efficiency. But since the superfluous power doesn’t exceed 0.1 W, this may be of no importance.

Such a solution has important advantages:  

  • It’s far simpler.
  • It produces very low electrical noise—a fact of great significance when you are dealing with low-level analog signals. (In this circuit the output noise was less than 1 mV even without output capacitor C1.)
  • Over-voltages on its output are excluded (while the Cuk converter can produce such an over-voltage if any problem with its feedback occurs).
  • Its essentially perfect isolation should also be noted, although it is not important in our case.

Since the external outline of the gadget is determined by the photocell, the tiny AM-1417 photocell (from Toshiba) was used. Its dimensions are only 34 x 14 x 2 mm, and its 4 sections—each illuminated by its own LED, hence 4 LEDs—produce about 3 V without any load.

The 4 LEDs are quite ordinary bright-red types (L-513HURC, 1800 mcd over a 15° viewing angle) because a silicon photocell has its maximum efficiency in this region of the spectrum.

Red LEDs are also preferable for a +5 V supply since their low forward voltage allows the efficiency to be doubled very simply, by stacking them in pairs with the same current through both.

The circuit produces 490 to 520 mV into a 2 kΩ load with 20 mA of current through the LEDs. This is more than enough for several micropower op-amps such as the AD8603/AD8607.
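For a rough sense of the power budget, here’s a minimal Python sketch; the LED forward voltage, the series-pair arrangement, and the total supply current are illustrative assumptions rather than values taken from the circuit.

```python
# Rough power-budget check for the LED/photocell converter described above.
# Assumptions (not stated in the article): the four red LEDs are wired as two
# series pairs, each LED drops ~1.9 V, and the total current drawn from the
# +5 V rail is roughly the 20 mA quoted for the LEDs.

V_SUPPLY = 5.0    # V, single-supply rail
I_LED = 0.020     # A, assumed total LED current
V_OUT = 0.5       # V, photocell output (490...520 mV measured into 2 k)
R_LOAD = 2000.0   # ohm, load used in the article

p_in = V_SUPPLY * I_LED        # power drawn from the +5 V rail
p_out = V_OUT ** 2 / R_LOAD    # power delivered to the 2 k load
p_waste = p_in - p_out         # "superfluous" power lost as heat and light

print(f"Input power:  {p_in * 1e3:.0f} mW")     # ~100 mW
print(f"Output power: {p_out * 1e6:.0f} uW")    # ~125 uW
print(f"Wasted power: {p_waste * 1e3:.0f} mW")  # consistent with the <0.1 W claim
print(f"Efficiency:   {100 * p_out / p_in:.2f} %")
```

Under these assumptions the numbers line up with the statement above that the superfluous power stays below 0.1 W.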

The output voltage of the photocell can be varied by changing the current through the LEDs.

The photocell is a current—not voltage—source, so the capacitor C1 is required to reduce the output impedance of the circuit. Diode D1 enables a path for sinking current and protects this electrolytic capacitor if the negative voltage for some reason disappears.

As I mentioned, the output power is quite enough for a precision micropower op-amp such as the AD8603. If you need more power, you can use a higher current through the LEDs, a more efficient LED/photocell pair, or simply connect more such circuits in parallel.

Peter Demchenko studied math at the University of Vilnius and has worked in software development.

Related Content


The post Photocell makes true-zero output of the op-amp appeared first on EDN.

Dissecting a malcontent (and moist?) microwave oven

Mon, 11/13/2023 - 18:43

Back in February 2015, my wife and I purchased Whirlpool’s WMC30516AB 1.6 cubic foot microwave oven from Amazon:

It lasted a bit more than four years. Toward the end, various segments of the four-digit, seven-segment per-digit display would flake out, then return, and random buttons on the front panel would also stop working only to revive later. Eventually, the resurrections ceased and, although the remainder of the microwave oven presumably still functioned fine, there no longer was any way to control it (notably via the no-longer-responsive “Start” button).

So, we hauled it to the dump and replaced it in July 2019 with…another Whirlpool (what can I say, my wife likes the brand, or should I say liked it), the familial WMC30516HV, this time for more than twice the price of its predecessor at Home Depot:

Stop me if you’ve heard this before: a bit more than four years later, and around a month ago, the replacement unit failed in exactly the same way. And if you search online for reviews of either model, you’ll find that plenty of other owners’ units have suffered the same fate.

We’ve subsequently purchased a Panasonic 2.2 cubic foot NN-SN966S in like-new claimed condition from Amazon’s Warehouse sub-site:

I’m happy to report that it was as advertised; I don’t think it had even been taken out of the box by the original purchaser (list price: twice what we paid) prior to being returned for resale. But given that we had two related-family Whirlpools die exactly the same way at near-exactly the same time, the engineer in me suspected a fundamental design flaw somewhere. So before taking this one to the dump…you guessed it…I decided to take it apart, in part because more generally I’ve always been curious about what’s inside one of these things.

My upfront suspicion was that moisture accumulation resulting from poor ventilation flow through the unit’s interior while in use was backflowing into the electronics area of the system and eventually causing something(s) on the PCB to fail. Part of this, I admit, might be our “fault”. I’m such a tightwad that if my wife doesn’t finish a Starbucks drink, I toss it in the fridge and heat it up and drink it the next morning as part of my daily caffeine intake. Further to that point, brewed coffee that we don’t finish goes into a carafe for me to later reheat and consume, too. Then there’s corn on the cob, soups, and plenty of other moisture-rich food and drinks that regularly find their way to the microwave for cooking and otherwise heating up…

But I’m not willing to shoulder all, or even most, to be blunt, of the blame. For one thing, I don’t think our usage pattern is all that atypical. And for further evidence of a potential fundamental design flaw, check out this case study example review of the WMC30516AB, complete with submitted photo and titled “Major steaming problem, and no help at all from Whirlpool”, that I found on Amazon in the process of finalizing this writeup:

This Microwave had a major problem with steaming up, even with small cook loads like a few slices of microwave bacon. This soon led to obvious streaking and spotting on the inside of the (non-cleanable) viewing window. Whirlpool customer service was the worst part. They insisted that this steaming situation is “normal performance,” though I’ve never seen another microwave steam like this. I requested a service call to evaluate the Microwave but Whirlpool refused.

Come to think of it, I’d noticed seemingly excessive interior condensation accumulation with our two units, too.

Enough preparatory chatter; let’s get to the teardown. Here’s our patient (for the bulk of this project, I’ve moved from my usual office desk to the workbench directly below it in the furnace room for perhaps-obvious available-space reasons, although the lighting’s not as stellar there):

Note that, unlike with my new Panasonic, there are no outgoing airflow vents (either passive or active) on the left side. Hold that thought:

Speaking of airflow, the back’s where the bulk of the action is:

Air is forced into the unit by a fan behind the ventilation hole mesh at left. It flows through the electronics, from there passively transitioning (theoretically, at least) into the main cooking cavity of the microwave oven, then again passively out vent holes on the opposite upper side and back corner of the interior (upper right side and back right corner, from this photo’s perspective). And how does the air exit the microwave oven? Through those passive vents you see at top and on the right edge in the photo, all at the back and on the right (again, from this rearward perspective, at least) half of the unit, mostly making a 90° turn in the process, again, versus directly out the opposite side with the Panasonic approach.

Before continuing, a couple of close-up sticker shots:

Now for the right side; those aren’t actual air vents, by the way, only cosmetic metal “trenches”:

Top:

And finally, the bottom:

Note that there are functional passive air vents here, too, but their locations are curious. They’re predominantly on the air-outflow half of the microwave oven, but since the air will be heated (albeit moist, therefore heavier than when it entered) at this point, and since hot air rises, not falls, I question just how functional they really are.

Back to the front; let’s now pop open the door:

The inside of the door is conventional for a microwave oven, as far as I can tell from my limited, elementary experience with these devices (and shielded, of course, for obvious reasons):

Again, the airflow direction through the interior is right to left from this front-view perspective. That metal plate on the right side is a cover for the waveguide, called a mica plate:

A couple more sticker closeups before continuing:

Before diving in, I decided to satisfy my curiosity and see if the microwave oven’s several-week sojourn sitting downstairs unplugged and awaiting “surgery” had left it reborn, as had happened (temporarily, at least) in the past. Nope:

The “8” front control panel button still worked, so you can tell which segments (of which digit) failed:

But many of the other numerical and functional buttons remained non-responsive…again, including the all-important “start”. Oh well.

Onward. You may have already noticed the large Torx head screw at the bottom of the right side of the unit, and the four additional ones around the edges of the back side. Let’s get those off:

With them removed, the unified right-top-left panel slides right off the back:

From boring-to-exciting (IMHO) order, here’s the now-exposed left side:

Top side:

Complete with warning (hah!) sticker closeup:

And the right side, where all the electronics action happens:

Your eye will likely be immediately drawn to the cavity magnetron at the center, behind which (not shown) is the aforementioned waveguide:

That metal shroud to the right draws in ambient air from the outside to keep it cool. Speaking of which, this doodad perched above it:

is, I’m assuming, a temperature sensor to ascertain whether the magnetron is overheating due to, for example, using the microwave oven with nothing inside it or with metallic contents.

Below the magnetron is a hefty transformer:

And to its right is an equally formidable capacitor:

In the upper right of the earlier overview shot is a small PCB:

Presumably, particularly given the diminutive size of the onboard fuse, it does AC/DC conversion for only a subset of the entire system circuitry.

And at far right is the fan:

Now let’s move to the left side of that earlier overview shot. First off, here’s the light bulb, which shines through the passive air inflow vents to illuminate the interior:

To its left and below are three components whose purpose wasn’t immediately obvious to me:

until I purposelessly pressed the latch to open the microwave door and noticed that they’d also transformed in response:

These are, I believe, triple-redundancy switches intended to ensure that the magnetron only operates when the door is closed.

Last, but not least, let’s look at the main system PCB at far left, which is the upfront intended showcase (not to mention the presumed Achilles’ heel) of this project:

Here’s a slightly tighter zoom-in:

First step: unhook the various bits of cabling connecting it to the rest of the system:

Two screws are immediately visible along the left edge. But removing them:

didn’t free the PCB from its captivity:

Looking again, I found another one hidden among the connectors, capacitors, and such on the right side of the PCB:

That’s more like it:

Left behind, among other things, is the oddly-varying-contact-color ribbon cable that originally routed between the PCB and the front control panel:

And here’s one more wiring mention; referring back to earlier airflow comments, I at first thought that the two wires heading underneath might be going to a ventilation fan, intended to pull cool air into the body cavity from the outside to the underside:

But after pulling off the bottom panel to expose the otherwise unmemorable under-interior to view, I realized that they were instead connected to (duh on me) the glass turntable motor:

Now back upstairs, where the lighting’s better, for the rest of the PCB analysis:

Let’s stick with this latter side for the first closeup shot set. Here’s that faulty-segments display:

and the exposed portions of this side of the PCB, dominated by solder points and traces:

Did you notice, though, what looks like one corner of an IC sticking out from under the display, further exposed after slipping off the surrounding gasket?

Let’s see what some side views reveal:

Yep, there’s definitely a large lead count chip underneath. Fortunately, by unclipping two of the plastic “legs” from the bracket surrounding the display, I was able to swing it out of the way, revealing both its underside and the remainder of this side of the PCB:

The glossy finish atop the IC makes it very difficult to read (let alone photograph) the product markings, so you’ll need to take me at my word that it’s an M9S8AC16CG microcontroller, containing an 8-bit S08 CPU, 16 KBytes of flash memory and 512 bytes of SRAM, and still with its original Freescale Semiconductor vendor logo stamp atop it (the company, and therefore the product line, was later merged into NXP Semiconductors).

Let’s now flip the PCB over to its other side, starting with some side views. Check out, for example, that circular “beep” piezo transducer near the middle:

And, wrapping up, a couple of full-on closeups, starting with the top half:

The two ICs you see at left are an I-core AiP24C02 2 Kbit EEPROM (what an EEPROM is doing in a microwave oven is beyond me, unless it’s used for operating dataset fine-tuning on the assembly line, or something like that) and, below it, an unknown-supplier LM358 dual 30V 700-kHz op amp.

Now for the other (lower) half:

The clutch of ICs in the lower right corner comprises two chips oriented 180° relative to each other and strangely stamped:

1730, preceded by an upside-down 7 in a larger font size
817 C
WK

and below and to the left of them, Power Integrations’ LNK364 AC/DC converter.

No obvious failure culprit emerges from my visual inspection of the PCB; see anything, readers? It kills me that the likely moisture- or heat-induced (another potential side effect of poor ventilation, of course) failure of a single inexpensive component on this board is likely what caused the demise of the entire expensive microwave oven, but that’s our “modern disposable society” for you, I guess…Sadly, even if I could fix it, I’d be reluctant to pass it on to someone else without a plethora of upfront qualifiers, because it’d likely only be a matter of time before the unit died again, due to its innate shortcomings.

With that, I turn it over to you for your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post Dissecting a malcontent (and moist?) microwave oven appeared first on EDN.

MCUs provide ML acceleration at the edge

Fri, 11/10/2023 - 20:40

PSoC Edge, a series of MCUs from Infineon, leverages hardware-assisted machine learning (ML) acceleration for responsive compute and control applications. The microcontrollers are based on an AI-capable Arm Cortex-M55 processor with Helium technology that boosts ML/DSP performance. They also employ an Arm Ethos-U55 micro neural processing unit, an Arm Cortex-M33 processor, and Infineon’s NNLite, a low-power hardware accelerator for neural networks used in ML and AI applications.

PSoC Edge devices enable end products to be more intelligent and intuitively usable by improving human-machine interaction and adding contextual awareness. Always-on sensing and response make them well-suited for IoT and industrial use cases, such as smart home, security, wearables, and robotics. On-chip memories include nonvolatile resistive random-access memory (RRAM) and can be expanded through the MCU’s high-speed, secured external memory support.

Infineon’s ModusToolbox software offers a collection of development tools, libraries, and embedded runtime assets. Additionally, the software tools integrate with Imagimob Studio, an edge AI development platform that enables end-to-end ML development.

The PSoC Edge family is available for early access customers now. For more information or to request participation in the early access program, click on the product page link below.

PSoC Edge product page

Infineon Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post MCUs provide ML acceleration at the edge appeared first on EDN.

AI tool minimizes EM/IR impact on IC designs

Fri, 11/10/2023 - 20:39

The Cadence Voltus InsightAI platform uses generative AI to identify the root cause of electromigration (EM) and voltage (IR) drop early in the design process. It not only automatically identifies EM/IR violations, but also implements the most efficient fix to improve power, performance, and area (PPA). According to Cadence, Voltus InsightAI allows chip designers to fix up to 95% of violations prior to signoff and accelerate EM/IR closure.

Voltus InsightAI employs machine learning methods for fast incremental IR analysis to enhance on-chip and chiplet power integrity. Its IR inferencing engine uses proprietary neural networks to build models of the power grid, while incremental IR analysis provides instant feedback on the impact of design changes. Deep learning allows Voltus InsightAI to discover the root cause of IR drop problems and identify aggressors, victims, and resistance bottlenecks.

Generative AI algorithms perform timing and DRC-aware fixes of IR drop using methods like placement, grid reinforcement, routing, and engineering change orders. Voltus InsightAI selects the best fix based on the root cause of the problem, driving better utilization and improved PPA.

Voltus InsightAI seamlessly integrates with Cadence digital tools for complete IR design closure, including the Innovus implementation system, Tempus timing solution, Voltus IC power integrity solution, and Pegasus verification system.

Voltus InsightAI product page

Cadence Design Systems 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post AI tool minimizes EM/IR impact on IC designs appeared first on EDN.

Silicon capacitors slash mounting area

Fri, 11/10/2023 - 20:39

Housed in tiny DFN0402-2 packages, silicon capacitors in Rohm’s BTD1RVFL series conserve space in smartphones and wearables. With dimensions of 0.4×0.2×0.185 mm, Rohm claims the devices are the industry’s smallest in a mass-produced surface-mount package. Further, the 0402 mounting area is just 0.08 mm², approximately 55% smaller than the 0603 size package.

The silicon capacitors use a trench structure to increase the capacitance per unit area of the substrate. Miniaturization technology allows processing in 1-µm increments, which eliminates chipping during external formation and improves dimensional tolerances to within ±10 µm.

The first two devices in the lineup, the BTD1RVFL102 and BTD1RVFL471, have a rated voltage of 3.6 V and capacitance of 1000 pF and 470 pF, respectively. Capacitance tolerance is ±15%, and the parts operate over a temperature range of -55°C to +150°C. A built-in TVS protection element ensures high ESD resistance of ±8 kV. Five additional devices with capacitance ratings of 680 pF, 330 pF, 220 pF, 150 pF, and 100 pF are under development.

The BTD1RVFL102 and BTD1RVFL471 silicon capacitors are available now. Rohm is working on a second series for release in 2024 that will offer improved high-frequency characteristics for communication applications.

BTD1RVFL102 product page

BTD1RVFL471 product page

Rohm Semiconductor 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Silicon capacitors slash mounting area appeared first on EDN.

Ethernet PHYs offer flexible speed options

Fri, 11/10/2023 - 20:39

Single-port Ethernet transceivers from MaxLinear, the MxL86110 and MxL86111, operate at speeds of up to 1 Gbps to meet both consumer and industrial needs. The copper PHYs can be used in such applications as gateways, routers, industrial PCs, media converters, and SGMII to RGMII bridges.

To adapt to different network environments, the devices provide a selection of Ethernet speeds and interfaces. Speeds include 10Base-Te, 100Base-TX, and 1000Base-T over twisted pair interfaces. The parts also support both half-duplex and full-duplex modes for 10Base-Te and 100Base-TX, as well as full-duplex mode for 1000Base-T. MAC interface options include RGMII only (MxL86110) and RGMII or SGMII (MxL86111).

Compact packages—5×5-mm QFN40 for the MxL86110 and 6×6-mm QFN56 for the MxL86111—allow for lower power consumption and efficient design integration. Green power features like Energy Efficient Ethernet (EEE), Wake on LAN, and no-link detection reduce power consumption during idle times. The devices can be operated from a single 3.3-V power supply using the integrated DC/DC converter.

Available immediately, both the MxL86110 and MxL86111 PHYs come in commercial (C) and industrial (I) variants, with respective operating temperature ranges of 0°C to +70°C and -40°C to +85°C.

MxL86110 product page

MxL86111 product page

MaxLinear

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Ethernet PHYs offer flexible speed options appeared first on EDN.

Image sensor delivers 16:10 aspect ratio

Fri, 11/10/2023 - 20:38

Omnivision developed its OV05C10 5.2-Mpixel color image sensor specifically for 16:10 aspect ratio laptops, tablets, and IoT devices. The backside-illuminated (BSI) CMOS sensor employs a 1/4.7-in. optical format and features 1.12-µm pixels, staggered high dynamic range (HDR), and 1024 bytes of embedded one-time programmable memory.

With 5.2-Mpixel (2888×1808) resolution, the OV05C10 has plenty of pixels for video conferencing. Auto framing automatically adjusts the camera’s field of view to keep the person speaking at the center of the image, while cropping out distracting backgrounds. The sensor also supports AI features like human presence detection, which helps extend the battery life of portable devices.

The OV05C10 image sensor is built on the company’s PureCell Plus pixel technology for improved sensitivity, angular response, and full-well capacity. PureCell Plus also reduces noise for better signal-to-noise ratio and higher dynamic range. Dual-exposure staggered HDR timing at 5.2 Mpixels and 60 frames/s enhances image quality in both bright and dark environments.

The OV05C10 image sensor is sampling now, with mass delivery in February 2024.

OV05C10 product page

Omnivision

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Image sensor delivers 16:10 aspect ratio appeared first on EDN.

Completely wireless measurement of the voltage of objects

Fri, 11/10/2023 - 12:33

Imagine a satellite, floating in space. How would you measure its voltage? Yes, it probably has a non-zero voltage, a consequence of geospatial electric fields and ion and electron currents.

The question above is discomfiting for several reasons. We’re used to the idea that a voltage is the potential difference between points A and B; if the satellite is point A, where’s point B? We can possibly imagine a tremendously long wire attaching the satellite to a voltmeter, but where would you put the other terminal of the voltmeter? What would the measurement even mean? Is there an absolute voltage, independent of a reference point?

It turns out that there are some simple answers to these questions, rooted in basic electrostatics. In this article, we’ll unpack the physics, and then show some surprisingly pragmatic sensors and circuits for measuring the voltages of objects, without attaching any wires at all.

 

Physics of wireless voltage measurement

All conductive objects have some capacitance, which we can separate into self-capacitance, and capacitance with respect to other conductors. For isolated objects, the self-capacitance dominates; for a conductive sphere this is the textbook expression C = 4πεε0R, where ε0 is the permittivity of free space, ε the relative permittivity, and R the radius of the sphere. Human beings have a self-capacitance in the range 100-300 pF, which gives us enough energy storage to blow up CMOS chips, or ignite chemical fires, if we are electrostatically charged enough.
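To put the self-capacitance formula and the human-body figures in perspective, here’s a minimal Python sketch; the sphere radius and body voltage are illustrative assumptions, not measurements.

```python
import math

EPS0 = 8.854e-12  # F/m, permittivity of free space

def sphere_self_capacitance(radius_m, eps_r=1.0):
    """Self-capacitance of an isolated conductive sphere: C = 4*pi*eps*eps0*R."""
    return 4 * math.pi * eps_r * EPS0 * radius_m

# Illustrative values (assumptions): a ~1 m radius sphere lands near the
# 100-300 pF range quoted for the human body.
c_body = sphere_self_capacitance(1.0)   # ~111 pF
v_body = 5000.0                          # V, a plausible triboelectric body voltage
energy = 0.5 * c_body * v_body ** 2      # stored energy, E = C*V^2 / 2

print(f"Self-capacitance: {c_body * 1e12:.0f} pF")
print(f"Energy at {v_body:.0f} V: {energy * 1e3:.2f} mJ")
```

Even a few kilovolts on a capacitance of this size stores on the order of a millijoule, which is more than enough to damage a CMOS input.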

When conductive objects acquire charge q, their voltage changes: V = q/C. This answers one of the questions above—the voltage of the satellite is its total charge divided by its capacitance.

How can we measure the charge? Gauss’ law tells us that for a given charge density σ on a conductive surface, there is a corresponding electric field E perpendicular to the surface:

E = (σ / εε0) n̂

where n̂ is the unit vector normal to the surface; hereafter we’ll call the field magnitude E. The charge density σ depends on the total charge q, which is spread over the effective surface area A of the object:

σ = q/A

Our process for measuring the voltage emerges from the following:

  1. Measure the field magnitude E perpendicular to the surface
  2. This gives us the charge density
  3. Extrapolate over the whole surface A to get q
  4. Divide q by the (measured or estimated) self-capacitance C to get V

So, our satellite measurement problem can be solved by making an electric field measurement E perpendicular to the surface of the satellite and calculating V from that.
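Here’s a minimal Python sketch of that four-step procedure; the field reading, effective surface area, and self-capacitance are placeholder values chosen only to illustrate the arithmetic.

```python
EPS0 = 8.854e-12  # F/m, permittivity of free space

def voltage_from_field(e_field, area, capacitance, eps_r=1.0):
    """Estimate a floating object's voltage from a surface-normal E-field reading.

    Follows the steps in the text:
      1. E is measured perpendicular to the surface.
      2. Charge density from Gauss' law: sigma = eps_r * eps0 * E.
      3. Total charge: q = sigma * A.
      4. Voltage: V = q / C.
    """
    sigma = eps_r * EPS0 * e_field
    q = sigma * area
    return q / capacitance

# Placeholder example values (assumptions, not measurements):
e_measured = 50e3    # V/m, field magnitude at the surface
area = 4.0           # m^2, effective surface area of the object
self_cap = 150e-12   # F, measured or estimated self-capacitance

print(f"Estimated voltage: {voltage_from_field(e_measured, area, self_cap):.0f} V")
```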

But where is the reference for this voltage measurement? A clue is in the relationship V = q/C. For a satellite in space, C is effectively the self-capacitance, which is often described as capacitance relative to a hypothetical hollow conducting sphere of infinite radius; that sphere is effectively our voltage reference.

In the case where the object’s capacitance is increased by a mutual capacitance to a nearby conductor, that nearby conductor will affect the local charge density σ, the E-field magnitude, and the V reference accordingly.

Pragmatic voltage measurements on floating objects

Making good DC electric field measurements is surprisingly difficult. Notable 19th Century scientists such as Kelvin, Coulomb and Peltier all developed “electrometers” with varying degrees of success. Kelvin also invented an electrostatic generator—the “Kelvin water dropper”—which is worth a few minutes reading on Wikipedia for its sheer ingenuity.

Some reasonably good electromechanical sensors emerged in the 20th Century, but progress stalled there. We have fantastic MEMS sensors for acceleration, magnetic fields, pressure—almost every physical variable; but there are no commercial silicon sensors for precise DC electric field measurement.

The reason for this is a fundamental problem of packaging. Silicon sensors must be packaged to protect the chip from contamination, oxidation, and mechanical damage. If we package a DC E-field sensor in the usual plastic material, the material cover acquires and holds static charge from everyday contamination such as dust and airborne moisture, which usually carry ionic charge. And the contaminant charge affects the local E-field. If we package the sensor in a conductive material, the conductor shorts the field we are trying to measure.

This is an unsolved problem. The best sensors we have right now are all conventional electromechanical devices, in which an electrode is alternately covered or exposed to the field by a conductive shutter. This turns the DC field into an AC signal, and eliminates a lot of offset, drift and contamination sources because most of these create a DC signal that is not modulated by the shutter movement. The AC signal is then demodulated in phase with the shutter movement.

The two standard sensor types are the tuning fork, in which the electrode is mounted behind shutters mounted on the tines of a vibrating tuning fork; and the electric field mill, which has a rotating shutter on an electric motor, which alternately exposes or covers the sense electrode.

Figure 1 shows a miniature electric field mill, integrated onto the PCB of a sensing unit.

Figure 1 The shutter can clearly be seen above differentially connected cloverleaf-shaped sense electrodes. The object at right of the arrangement is a photodetector, which provides the position of the shutter to the demodulation circuits. The four leaves of the clover are connected into two pairs diagonally, so that each pair is alternately exposed or covered by the shutter. Source: Iona Tech

The actual signal detected is the movement of charge on and off the sensor plates, as the inducing field comes and goes. The plates are connected to an electrometer-grade op-amp for differential amplification, and then to a phase-sensitive demodulator (the demodulation can be done in a microcontroller).
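As noted, the demodulation can be done in a microcontroller. The sketch below illustrates generic synchronous (lock-in style) demodulation only; the sample rate, shutter frequency, and signal values are assumptions, not details of the actual sensing unit.

```python
import math

def synchronous_demodulate(samples, shutter_phase):
    """Recover the chopped signal's amplitude from samples of the amplified
    electrode voltage, given the shutter phase (from the photodetector).

    Multiplying by an in-phase reference and averaging rejects components
    that are not synchronous with the shutter (offset, drift, contamination).
    """
    mixed = [s * math.cos(p) for s, p in zip(samples, shutter_phase)]
    return 2.0 * sum(mixed) / len(mixed)

# Tiny synthetic example: a 100 Hz shutter chopping a constant field,
# plus a large DC offset that the demodulation should reject.
fs = 10_000.0                       # Hz, assumed sample rate
f_shutter = 100.0                   # Hz, assumed shutter (chopping) frequency
n = 1000                            # 0.1 s record = 10 full shutter cycles
phases = [2 * math.pi * f_shutter * i / fs for i in range(n)]
signal = [0.25 * math.cos(p) + 0.8 for p in phases]  # 0.25 "field" term + 0.8 offset

print(f"Recovered amplitude: {synchronous_demodulate(signal, phases):.3f}")  # ~0.250
```

The deliberately added 0.8 offset in this example averages out, which is exactly why the chopping approach rejects offset, drift, and contamination signals.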

The field mill sensor requires careful packaging, as nearby conductors will distort the electric field, and nearby insulators will collect static charge and ionic contamination, also distorting the electric field. In practice, we have found that covering the sensor with a carefully designed conductive grid will protect it, without affecting measurements more than can be accounted for in calibration.

Wireless control of ESD

The satellite example is fun, but what could we actually use this method for? At Iona Tech, we’re focused on the problem of preventing electrostatic discharge (ESD) damage in electronics manufacturing. ESD has been a problem since the adoption of CMOS technology in the 1980s, and the methods for tackling it mostly date back to that era—essentially, ground everything in sight. Ground the workers, ground the workstations, ground the machinery, and ground the components.

Because those methods occasionally fail—wrist tethers break, conductive floors lose their conductivity, and brush contacts fail—grounding is backed up with humidifiers and ionizers to neutralize static buildup at source. The biggest technical advance in the last few decades has been constant monitors for wrist straps, which measure the impedance load on the strap, and infer whether it’s being worn and operating correctly, or not.

These methods don’t work for everybody. They don’t work in situations where mobility is important—and for some reason, engineers in particular can simply not sit still. They are always moving from one workstation to another, or fetching more coffee, and if required to tether before and after each movement, they get annoyed. As one ESD manager told us, “The laws of physics apparently don’t apply to engineers.”

Other problem situations include aerospace manufacturing, where integrating a large satellite becomes a tether-spaghetti nightmare. Then there is the automotive industry, which assembles highly computerized cars on moving conveyor belts, around which tethers are a major occupational safety hazard.

Iona Tech’s CEO Daan Stevenson learned this the hard way. He was developing prototype autonomous UAV ground stations for a now-defunct UAV startup in Denver, Colorado. Working outdoors in the high and dry air of the Rocky Mountain foothills, he was continuously frying circuit boards; and he had only bad options for grounding, and none at all for humidification or ionization.

He asked the question: what if we just measured whether people were becoming charged up, and set off an alarm if they were? With a wearable body voltmeter, we could have complete mobility and assurance of ESD safety, without tethers or any of the other paraphernalia.

This idle question led to a five-year quest for truly wireless control of ESD, which included a grant to study the problem from the National Science Foundation. The solution came in the form of the miniature wearable electric field mill shown below. These have been integrated into an Industrial Internet of Things (IIoT) module, called a StatIQ Band, that is worn on a strap on the upper arm, as shown in the image below.

Figure 2 The technician is wearing an ESD smock, which provides a conductive plane for measurement, similar to direct skin contact. Source: Iona Tech

Figure 2 shows a wearable body voltage monitor (an Iona Tech StatIQ Band) being worn on the arm of an electronics assembly technician. The device would not work well over a static charging garment material such as polyester. The electric field mill can be seen, protected by a minimal gold-plated cover.

Once you can measure your body voltage in a simple and convenient way, all kinds of applications become possible. You can evaluate everything in your working environment for the effect it has on charge generation—carpets, apparel, chairs, literally everything you come into contact with.

You can evaluate the effects of flooring and ionization. Having an alarm go off before you touch the electronics you are working on is a great way to avoid damage, but also creates a Pavlovian response, so that you automatically touch a grounded point before picking up the soldering iron or scope probe.

Figure 3 shows an example of measurements comparing a floating on-body voltage measurement with a conventional wired measurement using a charge plate monitor (a 3M 711 Electrostatic Voltmeter).

Figure 3 The subject was walking on a conductive floor, and then on a regular carpeted floor. The measurements indicate the difference in triboelectric charging in the two cases as well as illustrate the accuracy of the floating voltage measurement.

A further possibility with this type of device is to detect ESD events. When your body discharges into a circuit, its voltage drops dramatically. Detecting this high dv/dt event, and measuring its magnitude, gives us a direct measure of the energy absorbed by the discharge sink. This can be useful in making pass/fail/rework decisions for any electronics being handled when an ESD discharge occurs.
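To illustrate, here’s a minimal Python sketch that applies the capacitor-energy relation to the detected voltage drop; the body capacitance and the before/after voltages are illustrative assumptions.

```python
def discharge_energy(c_body, v_before, v_after):
    """Energy released when the body voltage collapses during an ESD event:
    delta_E = 0.5 * C * (V_before^2 - V_after^2)."""
    return 0.5 * c_body * (v_before ** 2 - v_after ** 2)

# Illustrative values (assumptions, not measured data):
c_body = 150e-12    # F, typical human self-capacitance
v_before = 2000.0   # V, body voltage just before the high-dv/dt event
v_after = 200.0     # V, body voltage just after it

e_released = discharge_energy(c_body, v_before, v_after)
print(f"Energy released by the discharge: {e_released * 1e6:.0f} uJ")  # ~300 uJ
```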

Other applications

Outside of electronics ESD, there are many domains in which detecting static voltage buildup is of real value. Fuel and chemical fires are often triggered by ESD, and the consequences are much more severe than just a failed PCB. And then there is lightning, the biggest ESD event of all. We have often sat in our laboratory with a StatIQ Band propped in the window, watching the Rocky Mountain thunderstorms rolling in, and betting on the timing of the next blast based on measured field strength.

This technique is not only useful for human body voltage measurement. There are many cases where knowing the static buildup on a vehicle or aircraft is of importance. For example, fuel trucks have to pay careful attention to grounding and bonding when transferring fuel; the opportunity to generate voltage alarms or interlock the fuel pump if dangerous voltages are detected is significant.

In addition, being able to create a record of traceability of the charge or voltage history of an electronic package has great value in industries like aerospace manufacturing, where a single PCB can be worth $100,000 and may pass through the hands of several contractors while being produced and installed.

Measuring the voltage of a floating object seems like an impossibility, until the electrostatic theory is unpacked, and the necessary sensor is made available. When implemented, it opens up a world of interesting possibilities in instrumentation.

Jonathan Tapson is the chief technology officer (CTO) at Iona Tech.

Related Content


The post Completely wireless measurement of the voltage of objects appeared first on EDN.

Op-amp wipes out DPOT wiper resistance

Thu, 11/09/2023 - 18:06

An interesting variation on the theme of digital to analog converters (DACs) is the digital potentiometer (DPOT). In addition to being able to output a voltage proportional to a digital value when used as a multiplying DAC (MDAC), the DPOT can also work as a digitally programmed resistor (rheostat). When used this way, an important parameter that sometimes limits DPOT accuracy is wiper resistance: Rw.

Of course, being solid-state devices, DPOTs, unlike electromechanical potentiometers, don’t have a physical resistance element with an actual wiper running around on it. What their “Rw” really comprises is the ON resistances of the array of FET switches that select the desired tap on the internal resistor ladder (2^6 + 1 = 65 taps for a 6-bit pot, 2^7 + 1 = 129 for 7 bits, 2^8 + 1 = 257 for 8 bits, etc.). Rw effectively appears in series with the selected resistance so that if:

Rab = total resistor ladder resistance = 5k (typical) for the exemplar DPOT (MCP4161-502)
N = ladder tap selection setting (0 <= N <= 256 for the 8-bit exemplar)

Wow the engineering world with your unique design: Design Ideas Submission Guide

 Then in an ideal world (where all Rw = 0), the resulting resistance would be simply:

Raw = Rab (N / 256)

 Unfortunately, in the world of real DPOTs, Rw > 0. Consequently:

Raw = Rab (N / 256) + Rw

For the exemplar 8-bit 5k DPOT, Rw = 75 Ω (typical, 160 Ω max), setting a:

minimum (N = 0) Raw = Rw = 75 Ω, or 75/5000 × 256 = ~4 LSB (typical), 8 LSB (max)

Being unable to set Raw < 75 Ω for N = 0 may already be problematic for many applications, but the ill effects of Rw > 0 extend to other values of N. For example, the Rab ladder tempco of the MCP41 series is an excellent 50 ppm/°C (typical), but Rw’s tempco is orders of magnitude worse at ~3000 ppm/°C. Therefore, Rw dominates the net tempco for any N < 230.
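Here’s a short Python sketch of these relationships for illustration; the values follow the typical MCP4161-502 figures quoted above, and the net-tempco estimate is a simple resistance-weighted average rather than a datasheet specification.

```python
RAB = 5000.0      # ohm, total ladder resistance (typical)
RW = 75.0         # ohm, wiper resistance (typical)
STEPS = 256       # 8-bit DPOT
TC_RAB = 50e-6    # per degC, ladder tempco (typical)
TC_RW = 3000e-6   # per degC, approximate wiper tempco

def raw_ideal(n):
    """Ideal A-to-W resistance with Rw = 0."""
    return RAB * n / STEPS

def raw_real(n):
    """Actual A-to-W resistance including the wiper resistance."""
    return RAB * n / STEPS + RW

def net_tempco(n):
    """Estimate of the net tempco, weighting each term by its share of Raw."""
    r_ladder = RAB * n / STEPS
    return (r_ladder * TC_RAB + RW * TC_RW) / (r_ladder + RW)

for n in (0, 1, 64, 128, 230, 255):
    print(f"N={n:3d}: ideal={raw_ideal(n):7.1f} ohm  "
          f"real={raw_real(n):7.1f} ohm  "
          f"net tempco={net_tempco(n) * 1e6:5.0f} ppm/degC")
```

Running it shows the wiper and ladder tempco contributions crossing over near N = 230, consistent with the claim above.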

Suffice to say, cancellation of Rw would make worthwhile improvements to DPOT performance in many precision applications. Figure 1’s topology does this. Here’s how it works.

Figure 1 Op-amp A1 actively drives digital pot U1 wiper terminal P0W to force Vpob = Vb while drawing negligible current through resistance Rwb, thus cancelling the effect of Rw.

The connections of A1’s (+) input to reference voltage Vb (typically, but not necessarily, ground), its (-) input to U1’s P0B pin, and its output to U1’s P0W pin establish a feedback loop that forces Vpob = Vb independently of Rw. This, as advertised, wipes out Rw effects.

Compensation capacitor C1 probably isn’t absolutely necessary for the part selection shown in Figure 1 for A1 and U1, but if a faster A1 amplifier or a higher-Rab DPOT were chosen, it probably would be.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content


The post Op-amp wipes out DPOT wiper resistance appeared first on EDN.

An FPGA-to-ASIC case study for refining smart meter design

Thu, 11/09/2023 - 09:07

Many embedded system designs are first implemented using FPGAs. This may be for quicker prototyping or to provide a platform for software development. Sometimes, the FPGAs will remain in the design after production begins. But usually, the plan is to convert the FPGA (or FPGAs) to an ASIC for volume manufacturing.

It is easy to think of this conversion as nearly automatic. Just recompile the verified FPGA RTL code using ASIC libraries, verify the resulting netlist, and send the files to a back-end design shop. But to get the best results, the process may not be that simple—especially if there is an opportunity to consolidate multiple chips into the ASIC or if mixed-signal functions are required.

Recently, Faraday Technology participated in such an FPGA-to-ASIC conversion project for a smart electric meter. The design illustrates many of the important nuances of the conversion process. And it shows the importance of finding the right conversion partner.

 

Smart meter design

At first glance, an electric meter is simple. It monitors the power-line voltage and current at the entry to the customer’s premises and records the cumulative energy delivered, usually in kW-hours in the United States. Traditionally, this task was done by a rather clever electric motor driving a mechanical counter.

But a smart meter is different. Eliminating the electromechanical moving parts, the smart meter samples the voltage and current and accumulates their product digitally. It also provides a means of remote reading, eliminating the human walking from house to house with a pencil and clipboard. In the design we are discussing, there are other features, such as cutting off power to the customer premises and reporting a fault to the network control center if the meter detects an abnormal voltage or current event.
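To make the “accumulates their product digitally” step concrete, here’s a minimal Python sketch; the sample rate and the purely resistive test load are assumptions for illustration, not details of the client’s firmware.

```python
import math

def accumulate_energy_kwh(voltage_samples, current_samples, sample_rate_hz):
    """Accumulate energy from simultaneously sampled voltage and current.

    Instantaneous power is v * i; summing it over time gives energy in
    watt-seconds, which is then converted to the familiar kWh register value.
    """
    dt = 1.0 / sample_rate_hz
    energy_ws = sum(v * i for v, i in zip(voltage_samples, current_samples)) * dt
    return energy_ws / 3.6e6   # 1 kWh = 3.6e6 watt-seconds

# Synthetic example (assumptions): one second of a purely resistive
# 120 V RMS / 10 A RMS load, sampled at 4 kHz.
fs = 4000
t = [i / fs for i in range(fs)]
v = [120 * math.sqrt(2) * math.sin(2 * math.pi * 60 * x) for x in t]
i = [10 * math.sqrt(2) * math.sin(2 * math.pi * 60 * x) for x in t]

print(f"Energy in 1 s: {accumulate_energy_kwh(v, i, fs):.6f} kWh")  # ~0.000333 kWh (1200 W)
```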

Since most of the functionality of the smart meter is in the software, the hardware is relatively simple and mostly related to I/O (Figure 1). In the client’s prototype, the hardware included a host processor, an FPGA that acted as an I/O hub, various sensors, displays that are mostly connected through serial interfaces, and a subsystem for power-line communications (PLC) back to the network control center. The latter element turned out to be a vital part of the design.

Figure 1 The client’s initial design included an FPGA and an external host CPU. Achieving EMI and ESD compliance was a concern throughout the design. Source: Faraday Technology

The PLC interface requires a unique media access controller (MAC), physical-layer interface (PHY), and analog front-end (AFE). In the prototype design, all these functions were implemented in the FPGA. However, the PLC interface also required an external line driver and power-line coupler.

The main goal of the FPGA-to-ASIC conversion was to reduce cost, partly by eliminating some external components. However, the need to meet electrostatic discharge (ESD) and electromagnetic interference (EMI) immunity specifications for the challenging use environment—a real struggle for the FPGA-based AFE—also weighed heavily on the design team.

The conversion process

Far from being automatic, the FPGA-to-ASIC conversion process involved significant interaction between the design services company and the client. The result was an ARM-based, mixed-signal ASIC design that reduced the chip count (Figure 2).

Figure 2 The ASIC design included an internal CPU, elaborate multiplexing of external control signals, and the analog front-end for the power-line communications interface. Source: Faraday Technology

Faraday began by evaluating the smart-meter architecture. It was decided that in addition to implementing the FPGA’s existing functions as an I/O hub and PLC interface, the ASIC could also take on the functions of the host processor. This led to a reasonably conventional architecture based on an ARM Cortex-M4F CPU core, an AHB-Lite system bus, and an ARM peripheral bus.

The system bus connected interfaces to internal memory, plus major subsystems. That included a gigabit MAC, DMA controller, and, since data integrity and security are crucial to this application, a CRC controller and an AES crypto engine.

The plethora of I/O pins connect to the peripheral bus through multiplexing. The design included internal ROM and SRAM for the CPU, internal eFuses with their controller, plus interfaces for external SDRAM and serial flash.

Gathering IP

Faraday and the client worked together on selecting and configuring the ASIC IP. This allowed simply replacing many of the blocks in the FPGA with either Faraday, ARM, or third-party ASIC IP blocks. The remaining HDL logic was translated to the ASIC, and Faraday replaced the FPGA phase-locked loops, SRAM, and I/O instances with equivalent ASIC IP.

The PLC interface was vital to the conversion process, particularly integrating the analog front-end into the ASIC. This was a challenging analog design, as no off-the-shelf IP existed that could clearly meet the stringent ESD and EMI requirements. So, it was decided that a Faraday analog design team would create a new design for the block.

Faraday performed the final integration of the SoC and clock distribution and basic functional verification to ensure all the pieces worked correctly and talked to each other. The client did application-driven verification and software integration. Faraday then did the physical design and sign-off verification.

Once again, during verification, the PLC interface was a special case. The only way to ensure ESD/EMI compliance was to put a physical chip on a test bench. Rather than gamble on an entire production run, it was decided to do a test chip of the AFE and put it on a shuttle run for a quick fab turn-around.

Faraday and the client agreed that if the test chip passed compliance tests, the client would pay for the test chip. Faraday would correct the design if it failed and provide a free shuttle for the corrected test chip. Nevertheless, the initial test chip passed, greatly relieving the client.

A full-fledged collaboration

Faraday now supplies the finished, assembled, and tested SoC to the client in volume.

This smart-meter design illustrates that FPGA-to-ASIC conversion is not a push-button process or a send-files-and-forget contract relationship. Especially when there are unique functional blocks, special electrical requirements, or the opportunity to consolidate multiple chips into one ASIC, the process becomes a close collaboration between the client and the design services company.

The first step toward success is to carefully evaluate the design service company’s technology, IP access, experience, and willingness to work one-on-one with the client design team. Each of these factors will contribute to the outcome of the relationship, as it did with this smart meter.

Barry Lai heads the System Development and Chip Design Department at Faraday Technology, an ASIC design service and IP provider.

Related Content


The post An FPGA-to-ASIC case study for refining smart meter design appeared first on EDN.

Switches controlled by the duration of the input control signal

Wed, 11/08/2023 - 18:00

Among switches for various kinds of loads, a switch controlled by the duration of its input signal is of particular interest. For example, feeding a short pulse to the input will turn on the load, and a longer pulse will turn it off. Such a pulse can also be generated by pressing a control button.

Figure 1 shows an example implementation of a switch operating on this principle. A short input signal switches the state of the device, and a voltage approximately equal to the supply voltage (for example, 10 V) appears at its output. A memory element is formed by gates U1.1 and U1.2 of the CD4001 chip. If a longer pulse is applied to the input, capacitor C1 charges through resistor R2; transistor Q1 (a 2N7000) then turns on, shunting the input of U1.1 and toggling the U1.1/U1.2 memory element. The output voltage drops to zero and the load turns off.

Figure 1 A switch controlled by the duration of the input control signal.
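A rough feel for the pulse-duration threshold set by R2 and C1 can be had from the following Python sketch; the component values and the 2N7000 gate-threshold voltage are illustrative assumptions, since the circuit values aren’t specified here.

```python
import math

def pulse_threshold_s(r2_ohm, c1_farad, v_pulse, v_gate_th):
    """Pulse duration needed for C1, charging through R2, to reach the MOSFET
    gate threshold: t = R * C * ln(V_pulse / (V_pulse - V_th))."""
    return r2_ohm * c1_farad * math.log(v_pulse / (v_pulse - v_gate_th))

# Illustrative values (assumptions, not from the article):
r2 = 100e3       # ohm
c1 = 1e-6        # F
v_pulse = 10.0   # V, input pulse amplitude (roughly the supply voltage)
v_th = 2.1       # V, typical 2N7000 gate threshold

t = pulse_threshold_s(r2, c1, v_pulse, v_th)
print(f"Pulses shorter than ~{t * 1e3:.0f} ms only set the latch (load on); "
      f"longer pulses charge C1 past Vth and reset it (load off).")
```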

Wow the engineering world with your unique design: Design Ideas Submission Guide

The second version of such a device is shown in Figure 2. It works on the same principle but uses, as the memory element, a voltage follower built on element U1.1 of the CD4050 chip.

Figure 2 A variant of the switch circuit that uses a voltage follower built on element U1.1 of the CD4050 chip.

Michael A. Shustov is a doctor of technical sciences, candidate of chemical sciences and the author of over 800 printed works in the field of electronics, chemistry, physics, geology, medicine, and history.

 Related Content


The post Switches controlled by the duration of the input control signal appeared first on EDN.

The next embedded frontier: machine learning enabled MCUs

Tue, 11/07/2023 - 11:41

A new microcontroller claims to offer hardware-assisted machine learning (ML) acceleration for the Internet of Things (IoT) and industrial applications such as smart home, security surveillance, wearables, and robotics. That’s expected to significantly lower the barrier in human-machine interaction and add contextual awareness to end applications.

Figure 1 The ML-enabled MCU supports extensive HMI, situational awareness, and autonomous operation. Source: Infineon

Infineon’s high-end MCU with ML compute acceleration—PSoC Edge—is targeting a new space of responsive compute and control applications. Steve Tateosian, senior VP of microcontrollers at Infineon, calls it a game changer in terms of compute performance on the hardware side. “It will lead to significant performance improvements when running neural network applications.”

He added that advanced ML applications have traditionally been done in the cloud. “With PSoC Edge, tasks like natural language processing can be carried out locally on the device,” Tateosian said. “Developers will also get code and ML tool support for their applications.”

Tool enablement and software support infrastructure are crucial for ML-enabled MCUs. So, Infineon has integrated the end-to-end ML tool suite from Imagimob, the Stockholm, Sweden-based startup that Infineon acquired earlier this year, into its ModusToolbox software platform. ModusToolbox provides a collection of development tools, libraries, and embedded runtime assets for embedded system developers.

Figure 2 The ML toolchain supports a wide range of use cases, including consumer IoT, industrial, smart home, and wearable designs. Source: Infineon

The PSoC Edge MCUs are based on a high-performance Arm Cortex-M55 processor complemented by Arm’s Helium technology for enhanced DSP and ML capabilities to accelerate neural network processing. Cortex-M55 is paired with Arm Ethos-U55, an NPU specifically designed to accelerate ML inference in area-constrained embedded and IoT devices.

Moreover, Cortex-M33 is paired with Infineon’s ultra-low power NNLite, a proprietary hardware accelerator intended to accelerate the neural networks used in ML applications. And there is ample on-chip memory, including non-volatile RRAM as well as high-speed, secured external memory support.

This mix of scalable compute power and memory is supported by an ecosystem of software and tools, which, according to Infineon, will deliver support for end-to-end ML development, from data entry to model deployment.

Related Content


The post The next embedded frontier: machine learning enabled MCUs appeared first on EDN.

The Microsoft Surface Pro 5 succession: Selections, motivations, and initial impressions

Mon, 11/06/2023 - 20:18

As I’ve already alluded to in recent coverage of Microsoft’s late-September product launch event, of power-vs-energy tradeoffs exemplified by SSDs, and of hardware obsolescence forced by software-based usage lock-outs, the Microsoft Surface Pro 5 (SP5 for short, also referred to as the Surface Pro 2017) system I’ve been using since early 2020 is in the process of being superseded:

In part, I’m being (overly, admittedly) proactive (in contrast, apparently, to plenty of others). The Windows 10 O/S is scheduled to “sunset” in two years (as I write these words in early October) and the SP5 isn’t supported by the Windows 11 successor. As I’ve mentioned before, I regularly donate my prior-generation tech hardware to local charities for others’ reuse, and I prefer to do so while the gear’s still of meaningful (specifically, security-safe) use by its recipients, i.e., while it’s still actively being supported with BIOS, driver, O/S and other software updates. Both of my SP5s (I’ve got a spare sitting on the shelf in case the primary breaks down, since I use the SP5 for work) still qualify under that criterion.

That said, as I mentioned last month:

The SP5 is getting a bit “long in the tooth” at this point, anyway; more frequently than I’d prefer, for example, it overheats and automatically downclocks to unusable-performance levels until it cools back down. And its limited system memory (8 GBytes) and storage (256 GBytes), both non-user-upgradeable, are increasingly constraining (although everything’s relative).

Also introduced last month (with further elaboration promised, specifically in this post) was the fact that I’ve decided on a two-pronged update path (once again, in both cases, comprising both a primary and spare system). To wit, I picked up both a pair of Surface Pro 7+ (SP7+) computers:

and two Surface Pro 8s (SP8s):

Why on earth did I make these seeming redundant purchases? The short candid answer is that:

  • I couldn’t definitively decide on one or the other, and
  • I’m sufficiently fiscal resource-blessed that I don’t have to “settle” for one or the other.

Why I couldn’t choose between them, along with what I didn’t choose instead, is what I hope to explain in the remainder of this writeup. First off, I wanted my successor system(s) to include built-in cellular data connectivity, like that in my SP5. I don’t leverage cellular service much, in part because the SP5 isn’t my primary on-the-go laptop; I’m mostly an Apple guy. But when Wi-Fi is unavailable (or has sketchy security) but I still need to jump on the Internet, cellular sure is convenient. This requirement meant that standard Surface Pro 7x and 8 versions weren’t options; I had to go with LTE-inclusive “For Business” variants (as well as, more generally, with the SP7+ vs the SP7, since the latter doesn’t come in an integrated-cellular option).

This LTE-inclusive qualifier also meant, for example, that the highest-end SP7+ based on Intel’s Core i7 CPU wasn’t an available option to me, since that particular CPU is only offered with Wi-Fi-only systems. Similarly, I couldn’t select a version of either the “For Business” SP7+ or SP8 with 32 GBytes of system memory; again, for unknown reasons, that feature set option is also only available for Wi-Fi-only systems. And it also excluded any x86 CPU-based models of the even newer Surface Pro 9, since integrated cellular (5G in this case) is included only with Arm CPU-based system “flavors” (which are largely redundant with my existing Surface Pro X).

Here’s what I ended up with (all sourced from eBay merchants, and all with 16 GBytes of RAM and 256 GByte SSDs):

SP7+ (both primary and backup, $545 used each, including Type Cover keyboards):

SP8 (primary, $949 open-box with factory warranty until July 2024):

SP8 (backup, $849 used with 1-year Allstate warranty):

Note that the SP7+ and SP8 systems are all based on 11th-generation Intel mobile CPUs. This is one key difference (of several) between the SP7+ and precursor SP7 systems, which are based on 10th-generation Intel processors. Both the SP7+s and SP8s came standard with the “Pro” version of Windows 11, befitting their “For Business” status.

A key difference between the SP7+ (along with the SP8) and both the SP7 and prior-generation Surface Pros, including my SP5, is that the newer systems support user-replaceable and capacity-upgradeable SSDs, as those of you who read last month’s piece already know. Although the lack of user-upgradeable memory—not to mention of a 32 GByte option—in both system generations I bought is unfortunate, it’s tempered by the fact that they both include twice the DRAM of my SP5. The 256 GByte SSDs that came with all four systems are even less of a concern, due to their user accessibility, which has already enabled me to update both generations’ primary systems to 1 TByte storage capacities.

Speaking of memory and storage, by the way, some of you who’ve also been shopping for Surface Pros might be wondering why I seemingly spent so much on mine. The answer to your question lies in the fine print; make sure you’re comparing apples to apples (and speaking of which, what I’m about to say also applies to Apples). Entry-level Core i3 configurations are less expensive, even brand new, sometimes even including Type Cover keyboard accessories, as are those with only 8 GBytes of RAM and/or 128 GByte SSDs (not to mention Wi-Fi-only models). But for already-explained reasons, those low-end variants weren’t suitable for my purposes. And anyway, it’s all relative; a still-available SP8 kitted out just like my primary system currently sells brand new for $1649.99 direct from Microsoft.

Now, what are the differences between the SP7+ and SP8, and why wasn’t I able to definitively choose a winner between them? For one thing, only the Core i7-based variants of the SP7+ come with fans to assist in thermal management; mine are fanless. Conversely, all SP8 models (including both of mine) come with fans. This differentiation is a mixed blessing, frankly; on the one hand, with the SP7+ I won’t ever need to listen to rapidly spinning motor-fed fan blades. On the other hand, as previously noted with the SP5, there’ve been plenty of times when I yearned for a fan that would keep the CPU cool enough to head off speed-killing auto-downclocking.

The broader divergence between the two alternatives involves their displays and related form factor deviances. For one thing, as I noted last month, while Microsoft had swapped out the upper right-side DisplayPort connector on the SP6 and predecessors (including my SP5) with a more function-flexible USB-C in the SP7x generations, the company went several steps further in this regard with the SP8. First off, Microsoft replaced the second (lower) right-side connector, formerly USB-A, with a second USB-C port. Both ports also now support not only USB 3.1 Gen 2 10 Gbps bandwidth but also 40 Gbps Thunderbolt 4. And, while on the subject of side-located ports and such, it also bears mentioning that with the SP8, Microsoft relocated the formerly top-side situated power switch and volume toggle assembly to the right and left sides, respectively. That said, on the SP9 both USB-C/TB4 ports move to the left side, too.

With the SP8, Microsoft also swapped out the longstanding 2736×1824 pixel 12.3” LCD for a 13” 2880×1920 pixel resolution successor, though you won’t necessarily notice the change unless you turn both computers on, when the shrunk-down bezels (leading to ~11% greater usable display area) of the new LCD will be obvious. The SP8’s display also delivers refresh rates up to 120 Hz, and it supports the haptics-augmented second-generation Surface Slim Pen. That all said, these and other design changes compelled Microsoft to change the system physical specs:

  • SP7+: 11.5 x 7.9 x 0.33 inches (292 x 201 x 8.5 mm), 1.75 lb (796 g)
  • SP8: 11.3 x 8.2 x 0.37 inches (287 x 208 x 9.3 mm), 1.96 lb (891 g)

These seemingly minuscule variances (only 0.2 inches shorter, 0.3 inches wider and 0.04 inches thicker!) still mean that my Kensington Dock:

which deftly converts the Surface Pro into a pseudo desktop computer and worked with my SP4, works with my SP5 and will work with my SP7+ units…won’t fit the SP8.

The same form factor shifts adversely affect my existing Brydge 12.3 Pro+ keyboard, which transforms everything up to and including the SP7+, but not the SP8 (for which I was compelled to purchase a successor), into a “true” laptop:

The bottom line: had I not found two like-new SP7+ systems complete with normally-$120-standalone Type Covers for a shade over $500 each, I probably would have bitten the bullet and gone straight to the SP8, obsoleting my existing accessories in the process. Thankfully, I didn’t need to do that, and I’ll hopefully be able to get at least a few more years out of them via this upgrade two-step. The SP7+ will eventually fall by the wayside for any number of possible reasons, and I’ll then transition to the SP8 full-time. That said, I certainly hope that what compels the SP7+ retirement isn’t yet another round of Microsoft software-induced premature obsolescence. Quoting from Betanews’ coverage of the supposedly soon-upcoming Windows 12 (which, by the way, is also rumored to be subscription-based like Office 365…sigh…):

Microsoft may be saying nothing about the release of Windows 12, but that’s not stopping news slipping out about the successor to Windows 11. And thanks to information from Intel, it seems that 2024 is when we can expect to see a new Windows release.

 The leak comes courtesy of David Zinsner — Chief Financial Officer at Intel — who confidently referred to “the Windows refresh” which he says is due to land next year. While his comments are not solid confirmation of the launch of Windows 12, it is a credible addition to the ever-growing pile of Windows rumors.

 “We actually think 2024 is going to be a pretty good year for client, in particular because of the Windows refresh. We still think that the install base is pretty old, and does require a refresh. We think next year may be the start of that given the Windows catalyst.”

 While this is a long way from being confirmation of Microsoft’s Windows 12 plans, it is very unlikely that Intel is referring to a minor update to Windows 11. The phrase “Windows refresh” is vague, but it strongly implies a major upgrade is on the way. With Intel being a significant partner for Microsoft, the company will be aware of many — if not all — of the plans for the operating system, lending validity to what’s been said.

So let me get this straight. Windows 11 was introduced in June 2021 and released to production in October of that same year…only two years ago. The Windows 10 predecessor, released to production in July 2015, is scheduled to go end-of-life two years from now, driving plenty of otherwise perfectly good hardware into extinction in the process. But Microsoft’s supposedly already planning to release Windows 11’s successor next year, leading to Windows 11’s own inevitable sooner-or-later lost-support demise, apparently one (judging from Intel’s comments) even more aggressively hardware-obsoleting than the one we’re dealing with now? Maybe it’s time for me to give Linux yet another try…

Reader thoughts are as-always welcomed in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post The Microsoft Surface Pro 5 succession: Selections, motivations, and initial impressions appeared first on EDN.

The importance of phase-coherent RF signals

Mon, 11/06/2023 - 19:18

As the number of higher-throughput applications grows, so does the need for a wider bandwidth and network coverage in wireless systems. Given limited spectrum allocation, wireless communication engineers must look for ways to improve spectral efficiency and the signal-to-noise ratio (SNR) of systems. Multiple-input multiple-output (MIMO) and beamforming can help RF designers achieve diversity, multiplexing, and antenna gain to improve spectral efficiency and SNR.

Testing multi-antenna systems requires a test system capable of providing multiple signals and a constant phase relationship between the signals. This article provides an overview of phase coherence and why it matters. It also offers tactics for generating phase-coherent signals.  

What is phase coherence?

Two signals are coherent if they have a constant relative phase at all times. Figure 1a illustrates two non-coherent signals with phase variances, and Figure 1b shows coherent signals with a fixed phase offset. When present together, signals will combine constructively or destructively, depending on their relative phase.
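As a rough illustration of that combining behavior (my arithmetic, not taken from the figure), consider two equal-amplitude tones with a relative phase φ:

A·cos(ωt) + A·cos(ωt + φ) = 2A·cos(φ/2)·cos(ωt + φ/2)

With φ = 0°, the amplitude doubles to 2A (fully constructive); with φ = 180°, it cancels to zero (fully destructive); intermediate phase offsets give everything in between.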

In cases where a multichannel component such as a phased-array antenna is characterized, precise control of the phase angle relationship between the channels is needed (Figure 1c). For digitally modulated signals, phase coherence indicates both timing synchronization between baseband generators and phase coherence between RF carriers (Figure 1d). Similarly, radar pulses require precise timing of the pulse bursts to simulate the appropriate spatial delays (Figure 1e).

 Figure 1 Phase relationships between two signals including (a) non-coherent, (b) coherent, (c) controllable phase relationships, (d) configurable modulation, and (e) triggerable pulses. Source: Keysight

Why phase coherence matters

Most wireless systems, whether in commercial applications or aerospace and defense, use multi-antenna techniques at the receiver, transmitter, or both to improve overall system performance. These techniques include spatial diversity, spatial multiplexing, and beamforming. Engineers use multi-antenna techniques to achieve diversity, multiplexing, or antenna gains. Through these gains, wireless systems can increase a receiver’s data throughput and SNR. 

Spatial diversity

When multipath signals arrive at a receiver, they combine constructively or destructively, depending on their relative phase. The quality and reliability of a wireless link can be improved by using two or more antennas. This can be accomplished with channel switching, signal weighting, time delay, or transmit diversity.

In any case, the goal of spatial diversity is to provide multiple paths for a radio signal to reach a receiver’s antenna. Figure 2 illustrates that not all methods require multiple antennas at the receiving side.

Figure 2 Spatial diversity techniques for receiver diversity and transmitter diversity including (a) channel switching, (b) signal weighting, (c) time delay, and (d) transmit diversity. Source: Keysight
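As a sketch of the signal-weighting idea in Figure 2b, the short Python fragment below applies maximal-ratio-combining-style weights (the conjugate of each branch's channel estimate) to two noisy received copies of the same symbol stream; the channel gains, noise level, and variable names are illustrative assumptions, not values from the article.

import numpy as np

rng = np.random.default_rng(0)
s = np.exp(1j * 2 * np.pi * rng.random(1000))                      # unit-amplitude transmitted symbols
h = np.array([0.9 * np.exp(1j * 0.4), 0.5 * np.exp(-1j * 1.1)])    # assumed gains of the two receive branches
noise = 0.1 * (rng.standard_normal((2, 1000)) + 1j * rng.standard_normal((2, 1000)))
r = h[:, None] * s + noise                                          # received copies on the two antennas

w = np.conj(h)                                                      # weight each branch by its conjugate channel estimate
combined = (w[:, None] * r).sum(axis=0) / np.sum(np.abs(h) ** 2)

print("per-branch error power:", np.mean(np.abs(r / h[:, None] - s) ** 2, axis=1))
print("combined error power:  ", np.mean(np.abs(combined - s) ** 2))

The combined error power comes out lower than either branch alone, which is the diversity gain the text describes.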

Spatial multiplexing


Spatial multiplexing is a transmission technique for a MIMO system. The system splits transmitted data into multiple encoded data streams. It transmits all data streams simultaneously over the same radio channel through different antennas. To recover the original data at the receiver, MIMO systems use computationally inverse channel property estimation algorithms.

Figure 3 represents a 2×2 (two transmitters and two receivers) MIMO diagram where two symbols (b1 and b2) transmit simultaneously for double the data throughput.

Figure 3 A 2×2 MIMO system diagram where two symbols (b1 and b2) transmit simultaneously for double the data throughput. Source: Keysight

A simple formula appears in Equation 1:
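The formula itself appears as an image in the original article; for the 2×2 case of Figure 3, a plausible reconstruction is the standard narrowband MIMO relationship (noise omitted):

r1 = h11·s1 + h12·s2
r2 = h21·s1 + h22·s2

or, in matrix form, r = H·s,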

where r is the received signal, s is the source signal, and h is the wireless channel response.

The receiver can perform channel estimation (the h matrix above) using training sequence algorithms. Transmit signals (s1 and s2 ) can be recovered through signal processing using the formula in Equation 2:
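Equation 2 also appears as an image in the original; assuming the estimated channel matrix H is invertible, it plausibly reads s = H⁻¹·r, i.e., the transmitted symbols (s1, s2) are estimated by applying the inverse of the estimated channel matrix to the received signals (r1, r2).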

The calculation in Equation 2 uses timing-aligned signals and a common local oscillator (LO) to upconvert and downconvert multichannel signals. This technique increases test challenges for simulating multichannel RF signals and the channel matrix, as most commercial signal generators have an individual baseband generator and LO. To simulate the MIMO multipath signals for spatial multiplexing performance tests, multiple signal generators and channel simulators are needed. They emulate the multipath scenarios and inject AWGN to emulate the desired SNR.

Antenna array—beamforming

An antenna array is a set of antenna elements used to transmit or receive signals. Coherently driven antennas with the appropriate phase delay between antenna elements can form signal beams. The uniform wave front allows a group of low-directivity antenna elements to behave like a highly directional antenna. The phase delays between the channels ultimately decide the antenna pattern, as seen in Figure 4.

Figure 4 A phased array of antennas forms a beam by adjusting the phase between coherent antennas. Source: Keysight

When the number of antenna elements at half-wavelength separation is increased, the antenna beamwidth gets narrower. By applying a phase shift (for example, 90 degrees) to the signal at each successive antenna, the direction of the beam can be changed. As the phase shifts between elements change by different amounts, the beam can be steered across a range of directions. To simulate such multichannel signals, precise control of the phase difference between the channels is needed for both transmitter and receiver tests.
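A minimal numerical sketch of this steering behavior, in Python, assuming an 8-element uniform linear array at half-wavelength spacing (the element count and the 90-degree phase step are illustrative assumptions, not from the article):

import numpy as np

n_elements = 8
d = 0.5                                # element spacing in wavelengths (half-wavelength)
phase_step_deg = 90.0                  # progressive phase shift from one element to the next

theta = np.radians(np.linspace(-90, 90, 721))          # scan angles
n = np.arange(n_elements)
element_phase = np.radians(phase_step_deg) * n
# Array factor: coherent sum of the element contributions at each angle
af = np.abs(np.exp(1j * (2 * np.pi * d * n[:, None] * np.sin(theta) - element_phase[:, None])).sum(axis=0))

print(f"beam peak near {np.degrees(theta[np.argmax(af)]):.1f} degrees")   # about +30 degrees for this phase step

Changing phase_step_deg moves the peak, which is the beam-steering effect described above.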

Generating multiple phase-coherent signals

Testing multi-antenna systems such as spatial diversity, spatial multiplexing, and antenna arrays requires a test system capable of providing multiple signals with stable phase relationships between them. However, a commercial signal generator has an independent synthesizer to upconvert an intermediate frequency (IF) signal to an RF signal. To simulate the multichannel test signals, the phase between test signals must be coherent and controllable. Let us explore different tactics to generate multichannel signals and assess the pros and cons of these tactics.

Independent and shared local oscillators

The easiest way to achieve a degree of phase stability between signal generators is to use two generators with synchronized baseband generators, a common trigger signal, and a shared 10 MHz frequency reference. Sharing a single, independent LO among the signal generators prevents the phase drift that arises when each instrument runs its own phase-locked loop; it also helps with phase noise. Phase drift and phase error can be further improved by using high-quality, stable references and instruments with low phase noise.

Alternatively, if an independent LO is unavailable, multiple machines can share the internal LO of one signal generator. Figure 5 represents two Keysight MXG N5182B vector signal generators that are set up for a phase-coherent test system. The system takes the LO of the top signal generator, splits it, and uses it as the LO input (see red lines) for both signal generators. With this configuration, the RF paths of the two signal generators are fully coherent. The fully coherent configuration appears on the left side of Figure 5, while the right side shows that the phase difference between the two signal generators is less than 1 degree.

Figure 5 Setup for two phase-coherent RF channels with a common LO. The fully coherent configuration (left) and the phase difference between the two signal generators is less than 1 degree (right). Source: Keysight

When using a shared LO, some static time and phase skew between instrument channels will still be encountered. Cable lengths and connectors cause static time and phase variations. The delays and phase shifts skew the phase relationship between the channels. Correction of these offsets is needed to ensure that the measured differences come from the device under test and not from the test system.
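As a back-of-the-envelope example of how little mismatch it takes (my numbers, not Keysight's): a cable-length difference ΔL introduces a time skew of ΔL divided by the propagation velocity, and a phase skew of 360° × f × Δt.

c = 3.0e8               # speed of light, m/s
velocity_factor = 0.7   # assumed coax velocity factor
delta_length = 0.01     # assumed 1 cm cable-length mismatch, m
freq = 2.4e9            # assumed carrier frequency, Hz

delta_t = delta_length / (velocity_factor * c)
print(f"{360.0 * freq * delta_t:.1f} degrees of static phase skew")   # roughly 41 degrees

A centimeter of extra cable at 2.4 GHz already costs tens of degrees of static phase offset, which is why these offsets must be measured and corrected out.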

Direct digital synthesis

Direct digital synthesis (DDS) produces an analog waveform by generating a time-varying signal in digital form and then performing a digital-to-analog conversion. The DDS architecture provides an optimal path to low phase noise, fast frequency switching speed, and extremely fine frequency tuning resolution.

DDS maintains a fixed phase relationship between its outputs at each frequency. Synchronization requires initial clock alignment using a common reference clock, and phase alignment is achieved by a synchronous reset of the phase accumulator. This reset can be applied on every frequency update, producing a fixed and repeatable phase relationship for each channel.
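A minimal sketch of the phase-accumulator idea in Python (register width, clock, and output frequency are illustrative assumptions): two channels that share a clock and are reset together produce identical, and therefore phase-coherent, outputs.

import numpy as np

ACC_BITS = 32
f_clk = 100e6                                    # assumed reference clock, Hz
f_out = 1e6                                      # assumed output frequency, Hz
ftw = int(round(f_out * 2**ACC_BITS / f_clk))    # frequency tuning word

def dds_samples(n_samples, ftw, phase_acc=0):
    # phase_acc wraps modulo 2**ACC_BITS; a synchronous reset corresponds to phase_acc = 0
    out = np.empty(n_samples)
    for i in range(n_samples):
        out[i] = np.sin(2 * np.pi * phase_acc / 2**ACC_BITS)
        phase_acc = (phase_acc + ftw) % 2**ACC_BITS
    return out

ch_a = dds_samples(1000, ftw)   # both channels start from a synchronous reset
ch_b = dds_samples(1000, ftw)
print("max sample difference:", np.max(np.abs(ch_a - ch_b)))   # 0.0, i.e., fully coherent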

Generating phase-coherent and phase-stable signals

As multi-antenna technology matures and the demand for diversity, multiplexing, and antenna gains grows, test systems require tightly aligned channels for accurate tests. When performing a characterization test, the operational environment must be accurately re-created. To accomplish this, signals must be created in such a way that they will coherently combine to simulate their real-world behavior.

There are different tactics for generating phase-coherent or phase-stable signals for various multi-antenna test applications and requirements. Always strive to minimize errors that various tactics may cause. In addition, ensure that test instruments are phase coherent and phase controllable for the test applications, such as beamforming and phased-array antennas.

TJ Cartwright is a product marketing manager focused on analog and digital RF signal generators at Keysight Technologies. He has spent time in markets for medical, pro audio and video, a variety of wireless communication protocols, semiconductor design, and several industrial applications. He is currently expanding into a deeper knowledge base in GNSS, 5G NR, and Quantum.

Related Content


The post The importance of phase-coherent RF signals appeared first on EDN.

Will 1.4-nm help Samsung catch up with TSMC, IFS?

Mon, 11/06/2023 - 14:10

Samsung, playing a distant second to TSMC for quite some time, has vowed to launch the 1.4-nm chip manufacturing node by 2027, leapfrogging both TSMC and Intel Foundry Services (IFS) by a wide margin. The South Korean electronics conglomerate is also confident about producing the 2-nm chips in 2025 as planned.

Both 1.4-nm and 2-nm chips will be fabricated using the gate-all-around (GAA) technology that Samsung pioneered on its 3-nm chips released this year. Archrivals TSMC and IFS will transition from fin field-effect transistors (FinFETs) to GAA transistors at their 2-nm nodes, due for commercial launch in 2025 and 2024, respectively.

Samsung unveiled its semiconductor fabrication roadmap at the annual Samsung Foundry Forum (SFF) 2023.

In another major design overhaul, Samsung plans to incorporate an additional nanosheet in the 1.4-nm node, increasing the number of nanosheets from three to four. With more nanosheets per transistor, 1.4-nm chips will bolster switching capabilities as well as operational speed. Moreover, more nanosheets will lead to better control of the current flow, which in turn, generates less heat and reduces leakage current.

The GAA transistors address the FinFET limitations by achieving higher speed in smaller transistors than FinFETs allow. While the GAA transistor architecture is 90% similar to FinFET, the remaining 10% difference comes from stacking horizontal nanosheets on top of one another. Nanosheet transistors provide a larger drive current for a given footprint compared to FinFET technology, and this high drive current is obtained by stacking nanosheets.

As mentioned above, Samsung was the first to implement the GAA transistors, which it calls multi-bridge-channel field-effect transistors (MBCFETs). However, the South Korean electronics giant has been losing ground to TSMC and IFS in a relentless march toward smaller process nodes. Now, the GAA breakthrough at the 1.4-nm process node provides it with much-needed breathing space in the nanoscale roadmap.

Samsung’s commitment to mass-produce 1.4 nm chips is around four years away, and a lot could happen during this time. Still, it shows that Samsung is fully in the game and vying for parity with TSMC in the mega-fab contest.

Related Content


The post Will 1.4-nm help Samsung catch up with TSMC, IFS? appeared first on EDN.

The first MCU with an Arm Cortex-M85 processor finally arrives

Fri, 11/03/2023 - 16:23

More than a year after Arm unveiled Cortex-M85, its fastest core for standalone microcontrollers (MCUs) and MCU-like subsystems, Renesas has become the first supplier to incorporate this superscalar Cortex-M processor in its RA8 MCUs. Renesas calls it the world’s most powerful MCU, delivering 6.39 CoreMark/MHz, a level of performance previously possible only with microprocessors (MPUs).

The RA8 Series MCUs also deploy Helium, Arm’s vector extension that provides up to 4x performance boost for digital signal processor (DSP) and machine learning (ML) implementations versus MCUs based on the Arm Cortex-M7 processor. That enables design engineers to eliminate an additional DSP in their systems for certain applications.

The Cortex-M85 processor includes Arm TrustZone, which enables isolation and secure/non-secure partitioning of memory, peripherals, and code. Next, the RA8 Series MCUs incorporate Renesas Security IP (RSIP-E51A), which provides cryptographic accelerators while supporting secure boot. Other security features include immutable storage for a strong hardware root-of-trust, secure authenticated debug, secure factory programming, and tamper protection.

Renesas has already begun shipping the first devices in the series: the RA8M1 Group. These are general-purpose MCUs that address diverse compute-intensive applications in industrial automation, home appliances, smart home, consumer, building/home automation, medical, and artificial intelligence (AI) applications. Mantra Softech, a supplier of biometric solutions, has employed the MCUs in its fingerprint scanner.

Figure 1 RA8M1 is the first MCU to incorporate an Arm Cortex-M85 processor. Source: Renesas

Arm Cortex-M85 processor and Helium technology set a new bar for MCU performance and enhanced DSP and AI/ML capabilities. For instance, MCUs like RA8 can enable edge and endpoint devices to implement natural language processing in voice AI while using Helium to accelerate neural network processing.

Below is a quick look at Cortex-M85 and the supporting Helium instruction augmentation designed specifically for accelerating AI/ML workloads.

Anatomy of Cortex-M85

Arm Cortex-M85—the highest performing Cortex-M processor—claims a 30% boost in scalar compute over Cortex-M7 to unlock new Internet of Things (IoT) and embedded applications. It’s the first Cortex-M processor to deliver over 6 CoreMarks/MHz performance, and that’s a significant scalar performance uplift.

The M85, like the M7, is a dual-issue 32-bit design with a longer pipeline than other M-series models. It also incorporates on-chip memory caches protected with error-correction-code (ECC) support. Moreover, the addition of a CoreLink DMA-350 direct-memory-access (DMA) controller facilitates its tightly coupled memory (TCM) capability, which is useful in AI/ML and signal-processing applications.

Figure 2 The Cortex-M85 processor delivers over 6 CoreMarks/MHz, a level of compute performance that so far requires an MPU. Source: Arm

Besides the scalar performance uplift, the M85 adds Helium vector-processing extensions, which the Cortex-M55 also supports but which run much faster on this new processor core. In fact, the M85 delivers 20% more AI throughput than the M55.

Then there is compatibility with Arm’s Virtual Hardware platform, designed to give software developers a starting point before they get hold of physical chips. It also provides additional insights into the processor operation that are not available in physical hardware.

More M85 MCUs on the way?

Design engineers and embedded system developers have been keenly waiting for Cortex-M85 to arrive in MCUs and see how its number-crunching and vectorization features work. They also seem keen to see how M85 performs in a real-time audio context.

Renesas, the first chipmaker to incorporate M85 in MCUs, is confident that the fastest Cortex-M processor will cater to the growing AI opportunities in the embedded and IoT space without compromising security. And while Renesas has the first-mover advantage, other MCU suppliers are expected to make their M85 announcements soon.

Related Content


The post The first MCU with an Arm Cortex-M85 processor finally arrives appeared first on EDN.

MCUs harness the power of Arm Cortex-M85 core

Thu, 11/02/2023 - 19:27

Renesas reports its RA8 series of MCUs is the first to employ a 480-MHz Arm Cortex-M85 processor boasting a performance rating of 6.39 CoreMark/MHz. In addition to achieving over 3000 CoreMark points, the RA8 provides fully deterministic, low-latency, real-time operation.
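(The two figures are consistent: 6.39 CoreMark/MHz × 480 MHz ≈ 3,067 CoreMark, hence the better-than-3000-point score.)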

RA8 series MCUs deploy Arm Helium technology, an M-Profile vector extension that boosts performance for machine learning and digital signal processing applications. This increase can be as much as 4 times that of MCUs based on a Cortex-M7 processor and can even eliminate the need for an additional DSP in some applications.

The devices’ integrated memory comprises up to 2 Mbytes of code flash, 12 kbytes of data flash (100,000 program/erase cycles), and 1 Mbyte of SRAM, including 128 kbytes of tightly coupled memory (TCM). Connectivity interfaces include SCI, SPI, CAN-FD, Ethernet, USB 2.0 (FS/HS), 16-bit camera, I2C, and I3C. Additionally, RA8 MCUs leverage Arm TrustZone technology and Renesas Security IP to provide cryptographic acceleration, secure boot, and immutable storage.

The first devices in the RA8 series, the RA8M1 group, are supported by the company’s Flexible Software Package (FSP) for embedded system development. FSP furnishes production-ready drivers, Azure RTOS, FreeRTOS, and other middleware stacks. It also eases the migration of existing designs to the RA8 series.

RA8M1 group MCUs are available now, along with the FSP software. Samples and evaluation kits can be ordered on the Renesas website or through its distributors.

RA8M1 product page

Renesas Electronics 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post MCUs harness the power of Arm Cortex-M85 core appeared first on EDN.

AI SoC targets advanced UCC terminals

Thu, 11/02/2023 - 19:27

The DVF120 SoC from Synaptics integrates all of the major features required for Unified Communication and Collaboration (UCC) terminals and devices. Along with a quad-core Arm Cortex-A55 processor and AI-based audio/voice processing, the chip provides enterprise-grade security and native support for collaboration platforms such as Microsoft Teams, Zoom, and Cisco WebEx. It is also supported by a field-hardened Linux and Android software development kit.

To meet the demand for embedded AI processing, the DVF120 offers a dual-core GPU and out-of-the-box AI models that enable background noise reduction and acoustic echo cancellation. The SoC also works with Synaptics’ AI framework, which allows developers to create and deploy applications such as voice authentication and auto-generation of meeting summaries.

Security features include Arm TrustZone technology with secure boot, a dedicated security processor, a trusted execution environment (TEE), a hardware security module (HSM), secure storage, and on-chip anti-fuse OTP. Other features of the DVF120 include DDR4/DDR3/DDR3L, NAND, and eMMC 5.1 controllers; 3D graphics; dual-display capability; gigabit networking; and multiple connectivity options.

The DVF120 SoC is sampling now and is slated for mass production in Q1 2024.

DVF120 product page 

Synaptics

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post AI SoC targets advanced UCC terminals appeared first on EDN.

GaN switcher IC handles 1250 V

Thu, 11/02/2023 - 19:27

Joining Power Integrations’ InnoSwitch3-EP family is a GaN power supply IC that integrates a 1250-V PowiGaN primary-side switch. The InnoSwitch3-EP series of off-line CV/CC QR flyback switcher ICs offers a variety of switch options, including 725-V silicon, 1700-V silicon carbide, and PowiGaN 750-V, 900-V, and now 1250-V varieties. In addition to the high-voltage power switch, these devices provide primary and secondary controllers, a synchronous rectification driver, and FluxLink safety-isolated feedback.

Switching losses for the 1250-V PowiGaN technology are less than a third of those seen in equivalent silicon devices at the same voltage. The resulting power conversion efficiency, as high as 93%, enables compact flyback power supplies that can deliver up to 85 W without a heatsink.
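As a rough sanity check (my arithmetic, not Power Integrations’): at 93% efficiency and 85 W output, the input power is about 85/0.93 ≈ 91.4 W, leaving roughly 6.4 W dissipated across the entire supply, which is consistent with the no-heatsink claim.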

Designers using the InnoSwitch3-EP 1250-V ICs can specify an operating peak voltage of 1000 V, which allows for industry-standard 80% derating from the 1250-V absolute maximum. This headroom is useful in both industrial applications and power grid environments, where robustness is an essential defense against grid instability.

Prices for InnoSwitch3-EP 1250-V devices (INN3629C-H606) in InSOP-24D packages start at $3 each in lots of 10,000 units. Samples are available now, with volume-shipment lead times of 16 weeks.

InnoSwitch3-EP product page

 Power Integrations

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post GaN switcher IC handles 1250 V appeared first on EDN.

GaN FET package improves thermal performance

Thu, 11/02/2023 - 19:26

Transphorm continues to expand its GaN device packaging options with a TO-leaded topside-cooled (TOLT) housing for its 650-V SuperGaN FET. According to the company, the TP65H070G4RS is the first topside-cooled surface-mount GaN device in the JEDEC standard TOLT package.

With its reduced heat path, the TOLT package minimizes heat dissipation through the PCB. The thermal performance of TOLT is similar to that of a thermally robust TO-247 through-hole package, while offering the added benefit of efficient manufacturing processes enabled by SMD-based printed circuit board assembly.

The TP65H070G4RS is a normally off D-mode device that combines a high-voltage GaN HEMT and a low-voltage silicon MOSFET. In addition to improved thermals, the part offers low gate charge, output capacitance, crossover loss, reverse recovery charge, and dynamic resistance. It provides a typical on-resistance of 72 mΩ, total gate charge of 9 nC, and a temperature coefficient of resistance that is 20% lower than normally off E-mode GaN devices.

The TP65H070G4RS SuperGaN device in the TOLT package is currently available to sample. To submit a sample request, click here.

TP65H070G4RS datasheet

Transphorm

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post GaN FET package improves thermal performance appeared first on EDN.
