Preconfigured ICs enable secure key storage
Microchip’s CryptoAuthentication ICs for secure key provisioning are preconfigured to reduce development time and accelerate prototyping. As part of the TrustFLEX platform, the ECC204, SHA104, and SHA105 offer hardware-based secure key storage to protect against unauthorized access.
The chips come preconfigured with defined use cases, customizable cryptographic keys, and code examples to simplify development. Microchip expects these devices to lower the barrier to secure key provisioning, making them particularly suitable for high-volume, cost-sensitive applications.
ECC20x and SHA10x devices achieve a High Joint Interpretation Library (JIL) score for secure key storage. They are also NIST-certified under ESV and CAVP, complying with the Federal Information Processing Standard (FIPS). These secure ICs enable trusted authentication to protect the confidentiality, integrity, and authenticity of data and communications across various systems and applications.
The ECC20x and SHA10x ICs are supported by the Trust Platform Design Suite, which offers secure credential transfer for integration with Microchip’s key provisioning service. Devices are also compatible with the MPLAB X IDE and CryptoAuthentication library.
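To make the use model concrete, here’s a minimal Python sketch of the symmetric challenge-response scheme that SHA-based authentication ICs such as the SHA104/SHA105 implement in hardware. It is conceptual only: the function names and key handling are hypothetical, not Microchip’s CryptoAuthentication API, and in the real devices the provisioned secret never leaves the chip.

```python
import hashlib
import hmac
import os

# Hypothetical illustration of SHA-based challenge-response authentication.
# In the actual ICs the secret is provisioned at the factory and the digest
# is computed inside the part; nothing here is Microchip's API.
SHARED_KEY = os.urandom(32)  # stand-in for a provisioned secret key

def client_response(challenge: bytes, key: bytes = SHARED_KEY) -> bytes:
    # The accessory-side IC digests the host's challenge with its stored key.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def host_verify(challenge: bytes, response: bytes, key: bytes = SHARED_KEY) -> bool:
    # The host-side IC recomputes the digest and compares in constant time.
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(32)
print(host_verify(challenge, client_response(challenge)))  # True
```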
Prices for the ECC204 start at $0.52 each in lots of 2000 units. Prices for the SHA104 and SHA105 start at $0.50 each in like quantities. More information is available on Microchip’s Trust Platform page.
Powering outdoor IP cameras
The Internet Protocol (IP) camera market is projected to surpass $39 billion by 2033, growing at a compound annual growth rate (CAGR) of 11.5%, according to Precedence Research. Driving this growth is the expansion of cameras being deployed outdoors in homes, commercial and residential buildings, and smart city initiatives (Figure 1). IP cameras are quickly replacing traditional closed-circuit television (CCTV) cameras, which commonly use coaxial cables and require a second power cable to the device. When deploying cameras outdoors, routing power to the device is more complex and can become costly.
Figure 1 IP security cameras deployed in smart city applications, one sector that is driving the steady growth of IP cameras.
IP vs CCTV
IP cameras have many benefits over traditional CCTV cameras. First, they are connected to a network via a single Ethernet cable. Second, that cable can deliver both data and power to the device through a technology called power over Ethernet (PoE). Since PoE is categorized as Class 2 power—low-voltage, low-current power that presents no fire or shock hazard according to the National Electrical Code—it does not require a licensed electrician to install, thus saving time and installation costs.
PoE is acknowledged as an energy-efficient technology. PoE selectively activates and supplies power solely when a device requests it, delivering the precise quantity of power required. It automatically shuts off upon fault detection or when the device no longer requires power. PoE midspans, equipped with management capabilities, enable the scheduling of power availability.
The evolution of PoE tech
PoE is an IEEE® standard that can provide up to 90 W of power. It was pioneered in the late 1990s by PowerDsine, which has since become Microchip Technology’s PoE business unit. PowerDsine worked with the earliest IP phone manufacturers to deliver voice, data, and power over a single Ethernet cable. In 1998, it introduced the first PoE ICs: the power sourcing equipment (PSE) IC, which puts power onto an Ethernet cable, and the powered device (PD) IC, which takes power off the Ethernet cable for the device. Since no Ethernet switches offered PoE in 1998, PowerDsine introduced the first midspan, also known as an injector, the following year. A midspan connects to an Ethernet switch that does not provide power and injects power onto a second Ethernet cable running to the device.
Many IP camera manufacturers offer PoE midspans as a power option for deployments where the camera’s network may not offer PoE power. The midspan is analogous to a laptop’s external power supply: the camera maker simply bundles a PoE midspan as the camera’s “power supply”.
Today, PoE midspans remain a popular choice despite the option to deploy power via PoE switches. Only 20% of existing networks provide PoE power, and for deployments over existing networks that do not offer power, PoE midspans are the fastest and most cost-effective way to add it. Even if the network does contain a PoE Ethernet switch, there are limitations. Every switch has a maximum available PoE power, known as a power budget: the amount of power available across the switch’s multiple ports. Most switch power budgets are not large enough to provide full power on all ports. Therefore, to supplement power on ports that cannot deliver it, the PoE midspan continues to be an excellent option.
Outdoor IP camera power requirements
The first outdoor IP cameras were simple in design and had minimal power requirements. The four basic types of IP cameras are dome, bullet, turret, and fisheye. These are fixed focal-length lenses mounted onto a ceiling or wall with few functions. Such cameras usually require no more than the PoE power supplied by IEEE 802.3af (15.4 W at the source) or IEEE 802.3at (30 W at the source).
Today, there are advanced outdoor IP cameras that require more power. Motors have been introduced into cameras to enable features such as variable focal lengths and pan, tilt, and zoom (PTZ) capabilities (Figure 2). These features are often triggered by motion, light, or sound sensors reacting to different events.
Figure 2 An advanced pan, tilt and zoom weatherproof IP camera with night vision.
Other PoE cameras offer special features for operating in adverse weather conditions, such as lens defoggers. Some offer LED lighting or infrared night vision to monitor areas of low light, as well as onboard storage in case the network fails or is taken down. An emerging popular feature is two-way audio, so people monitoring can not only see the video feed but also listen and communicate via the camera.
All these features require more than the 25.5 W available to the device under IEEE 802.3at. As shown in Table 1, the IEEE PoE standards define eight classes of power that fall into four “types”, offering up to 90 W at the source.
Table 1 IEEE® 802.3af/at/bt PoE source and device power standards.
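For readers without the table handy, the class and power figures defined by 802.3af/at/bt can be summarized in a few lines of Python; the wattages below are the standards’ nominal values, and the lookup helper is just an illustrative convenience, not part of any standard:

```python
# IEEE 802.3af/at/bt power classes.
# PSE = power sourcing equipment output; PD = power available at the device.
POE_CLASSES = {
    # class: (type, PSE watts, PD watts)
    0: ("Type 1 (802.3af)", 15.4, 12.95),
    1: ("Type 1 (802.3af)",  4.0,  3.84),
    2: ("Type 1 (802.3af)",  7.0,  6.49),
    3: ("Type 1 (802.3af)", 15.4, 12.95),
    4: ("Type 2 (802.3at)", 30.0, 25.5),
    5: ("Type 3 (802.3bt)", 45.0, 40.0),
    6: ("Type 3 (802.3bt)", 60.0, 51.0),
    7: ("Type 4 (802.3bt)", 75.0, 62.0),
    8: ("Type 4 (802.3bt)", 90.0, 71.3),
}

def min_class_for(pd_watts):
    """Smallest PoE class whose device-side power covers the requirement."""
    for cls in sorted(POE_CLASSES):
        if POE_CLASSES[cls][2] >= pd_watts:
            return cls
    raise ValueError("load exceeds 802.3bt Class 8")

print(min_class_for(35))  # e.g., a PTZ camera needing 35 W -> Class 5
```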
Powering outdoor IP cameras
IP cameras that are mounted on the walls of buildings may be supplied power from an indoor PoE source. For far-reaching outdoor applications, a separate power supply for the camera is necessary because of Ethernet’s 100-meter cable-reach limitation. In that case, it is important to select the right PoE midspan or switch for the outdoor environment.
Many choose to purchase an indoor, or industrial, PoE midspan or switch and place it in an outdoor enclosure commonly known as a NEMA box to power their outdoor PoE cameras. The National Electrical Manufacturers Association (NEMA) defines standards for electrical enclosures in various environments. These enclosures are excellent for a range of applications; however, deploying PoE midspans or switches in them usually results in failure, because the temperature inside an outdoor enclosure can reach twice the ambient outdoor temperature. Extreme heat degrades key components such as capacitors and can directly lead to failure. Installing ventilation and fans to cool the units can compromise the weatherproof seals, allowing dust, moisture, and water into the unit, which can also lead to failure. Moreover, in extremely cold environments, a midspan or switch deployed in a NEMA enclosure will quite possibly fail, as such units are typically not rated to operate below certain temperatures.
In addition, deploying in a NEMA enclosure also requires proper grounding and surge protection to prevent failure from outdoor events such as lightning strikes (Figure 4). Without proper grounding and surge protection, not only can the midspan fail, but it could also damage the IP camera. To prevent power failures and avoid damaging the cameras, the best solution is to find a PoE midspan or switch that is designed for the outdoor environment.
Figure 4 Surge protection must be designed into the NEMA enclosure to ensure the unit is adequately protected against failures from nearby lightning strikes.
What to look for in an outdoor PoE midspan
A considerable number of PoE midspan manufacturers offer indoor devices, while few specialize in outdoor units. Ingress protection (IP) ratings are the industry standard for assessing how dust- and water-proof a component is. An IP rating is a two-digit code in which the first digit signifies the level of dust resistance and the second the level of water resistance. A rating of IP6x signifies that the device is entirely shielded from dust particles. For water resistance, IP65 protects against low-pressure water jets, IP66 resists strong water jets, and IP67 allows immersion up to 1 meter for 30 minutes. Many vendors offer an IP65 rating for outdoor midspans and switches; however, an IP66 or IP67 rating is recommended to effectively protect the unit from exposure.
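As a concrete illustration of that recommendation, here is a small hypothetical Python helper that screens candidate units by their two-digit IP code (the helper and its default thresholds are mine, not an industry tool):

```python
import re

def ip_rating_ok(rating, min_dust=6, min_water=6):
    """Check an Ingress Protection code such as 'IP66' against minimums."""
    m = re.fullmatch(r"IP(\d)(\d)", rating.upper())
    if not m:
        raise ValueError(f"not a two-digit IP code: {rating}")
    dust, water = int(m.group(1)), int(m.group(2))
    return dust >= min_dust and water >= min_water

for r in ("IP65", "IP66", "IP67"):
    print(r, ip_rating_ok(r))  # IP65 fails the IP66-minimum recommendation
```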
It is also important to choose PoE midspans with built-in surge protection to shield them from transient events such as lightning strikes. IEEE 802.3 standards address such events through isolation: the conductors of an Ethernet interface must be isolated through grounding or other circuitry. However, this approach typically only protects up to 2 kV. Look for units designed to meet industry-defined outdoor protection standards, such as GR-1089-Core Issue 6 and ITU-T K.21 enhanced surge-level protection, which provide up to 6 kV of protection on both data and AC lines.
What to look for in an outdoor PoE switch
Although outdoor PoE midspans are an excellent solution for outdoor IP cameras, oftentimes an outdoor PoE switch is necessary instead. This may be due to the distance from the base network, or because multiple devices need to be managed. The same base criteria for an outdoor PoE midspan also apply to a switch, where it is important to:
- Avoid placing an indoor or industrial switch in a NEMA enclosure, as such deployments have a high incidence of failure.
- Make certain that the unit has a minimum IP66 rating.
- Ensure the unit has built-in surge protection.
On top of these features, several other characteristics are important for outdoor PoE switches. Since the switch will be deployed outdoors in a public area, always make certain it is fully sealed and tamper-proof (Figure 5). Some outdoor switches must be opened to be configured, which makes them security risks. Sealed units can be remotely managed, whereas those that need to be opened cannot be reconfigured once they are installed.
Figure 5 An outdoor switch that is fully sealed can be remotely managed with security features and two fiber ports.
Since the switch is deployed in public, it is important to have the highest level of hardware and software security to avoid hacking. Such features as user authentication, HTTPS, encryption, certificate management, and distributed denial of service (DDoS) are basic features to ensure that the unit will not be compromised.
Additionally, it is critical to look for a unit with at least one fiber-optic port. Fiber links can carry data over significantly longer distances than copper Ethernet cabling, enabling transmission from several kilometers away. Units with two fiber ports can receive data from long distances and also forward it onward, creating a daisy chain that eventually returns to the base switch. If the unit has automatic backup failover, it can continue to function and communicate with the base even if one of the fiber links is broken.
Future requirements for outdoor IP cameras
Most of the advanced features available today in outdoor IP cameras require higher PoE power levels. New features being added to outdoor IP cameras will not only demand more PoE power but will also require faster data rates. Most PoE midspans and switches today support data rates up to 1 Gbps. As more companies incorporate advanced capabilities such as edge computing with AI for facial recognition, outdoor PoE midspans and switches will need to support higher data rates such as 10 Gbps.
Today, a few manufacturers of outdoor PoE midspans have models that can support data rates up to 10 Gbps. As these features get added to improve the functionality of outdoor PoE cameras, expect to see even faster outdoor PoE midspans and switches.
By Alan Jay Zwiren, senior marketing manager of Microchip Technology’s Power over Ethernet business unit
Related Content
- Will PoE++ be helpful, a headache, or both?
- Has PoE for lighting finally arrived?
- Teardown: Powerline networking plus PoE
8-bit PWM + 8-bit Dpot = 16-bit hybrid DAC
Pulse width modulation (PWM) is a terrific basis for digital to analog conversion. Credit goes to features like simplicity and (theoretically) perfect differential and integral linearity. Unfortunately, PWM’s need for ripple filtering tends to make it slow, especially if high resolution (upward of 8 bits) is required.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1 offers a workaround for PWM’s lethargy by using it to implement only the most significant 8 bits of a high resolution (16-bit) DAC while a different technology (digital potentiometer) provides the low order 8. The two outputs are then passively summed in a simple 256:1 ratio resistor voltage divider. The payoff is 256 times faster settling (than if PWM were used for a full 16-bit count), combined with 16-bit resolution, monotonicity, linearity (both INL and DNL) and microvolt zero stability. The circuit lives off just a few mA drawn from a single 5-V rail while incorporating a pretty good voltage reference. And it’s cheap.
Here’s how it works.
Figure 1 PWM most significant byte (msbyte) combines with Dpot least significant byte (lsbyte) to provide 16-bit resolution, monotonicity, and linearity.
Incoming 3 to 5v logic, 8-bit resolution PWM is inverted and level-shifted by R5C7 and high-speed AC inverter U1 to become an accurate 0 to 2.50v square wave thanks to the LM4040 voltage reference and the inherent properties of CMOS logic when used as precision analog switches. The waveform is un-inverted and buffered by the other five elements of U1 to become a low impedance (~5 Ω) high quality 0 to 100% duty cycle PWM output. U1’s excellent transition symmetry (Tphl and Tplh propagation times differ by less than 100 ps) helps promote accuracy and linearity while the positive feedback through R5 creates a latching action that accommodates static 0% (0v) and 100% (2.5v) duty cycle states.
Active low-pass analog-ripple-subtraction filtering occurs via the R1C1 + R2C2 network as described in “Cancel PWM DAC ripple with analog subtraction”. The 4.99 kΩ x 0.1 µF = 499 µs RC time-constant shown is appropriate for 16-bit (96 dB) ripple attenuation if we assume a 256/32 MHz = 8 µs PWM period. The capacitances will of course need proportional adjustment for different PWM clock frequencies.
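A quick sketch of that proportional adjustment, using the component values quoted above (the helper itself is merely an illustrative convenience):

```python
# Scaling the ripple-filter capacitors with PWM clock: the 499 us time
# constant quoted above suits a 256-count, 32 MHz (8 us period) PWM, and
# C must scale in proportion to the PWM period.
R = 4.99e3        # ohms (R1, R2 in Figure 1)
C_REF = 0.1e-6    # farads, for the 32 MHz reference clock
F_CLK_REF = 32e6  # Hz

def scaled_cap(f_clk_hz):
    """Capacitance that keeps RC proportional to the PWM period."""
    return C_REF * (F_CLK_REF / f_clk_hz)

for f in (8e6, 16e6, 32e6):
    tau_us = R * scaled_cap(f) * 1e6
    print(f"{f/1e6:.0f} MHz clock: C = {scaled_cap(f)*1e9:.0f} nF, tau = {tau_us:.0f} us")
```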
Meanwhile, 1k Dpot U2 provides an SPI-controlled, 8-bit resolution, 0 to 2.5v lsbyte contribution that’s summed with U1’s PWM output in a 256:1 ratio by the R2R3 voltage divider. The R2:R3 ratio should be accurate and stable to better than 0.5%. R3 is so much higher than the 2.5k (max) variable impedance provided by the pot that its contribution to nonlinearity stays less than ±½ lsb.
Meanwhile wiper resistance effects are so small as to be completely academic.
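Pulling the pieces together, the transfer function implied by Figure 1 can be sketched in a few lines of Python. The constants follow the article’s values; the function is purely a behavioral model of the summation, not anything running in the circuit:

```python
# Hybrid 16-bit DAC: PWM supplies the most significant byte, the Dpot the
# least significant byte; the R2:R3 divider sums them in a 256:1 ratio.
VREF = 2.5  # volts, from the LM4040 reference

def hybrid_dac(code16):
    msb = (code16 >> 8) & 0xFF   # 8-bit PWM duty
    lsb = code16 & 0xFF          # 8-bit Dpot setting
    v_pwm = VREF * msb / 256     # averaged PWM output
    v_pot = VREF * lsb / 256     # Dpot output
    return v_pwm + v_pot / 256   # passive 256:1 summation

print(hybrid_dac(0xFFFF))  # ~2.49996 V full scale
print(hybrid_dac(0x0001))  # one LSB = 2.5/65536, about 38 uV
```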
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Dpot pseudolog + log lookup table = actual logarithmic gain
- Keep Dpot pseudologarithmic gain control on a leash
- Synthesize precision Dpot resistances that aren’t in the catalog
- Reducing error of digital potentiometers
- Adjust op-amp gain from -30 dB to +60 dB with one linear pot
- Op-amp wipes out DPOT wiper resistance
Disassembling a premium webcam
Back in late April, EDN published my teardown of an entry-level webcam, Avaya’s Huddle HC010, at the time selling for $14.99 (but having been priced a few years earlier, in the midst of pandemic-induced home office equipment shortages, for nearly 10x that amount). In the intro to that piece, I briefly mentioned other, higher-end webcams, one of which was BenQ’s ideaCam S1 Plus and Pro series.
Here’s a stock photo of the $169.99 “Plus” variant, whose internals we’ll be examining today:
For $30 more, the “Pro” version comes with a separate wireless remote control (and USB receiver) for the company’s computer-based EnSpire (which BenQ also refers to in some places as Enspire) software suite:
Some upfront qualifiers:
- Unlike some of its comparably-priced peers sold by other companies, the ideaCam S1 series does not support interpolated-pixel digital zoom capabilities, including the ability to “follow” the user’s face as he or she moves around in the frame and thereby present a consistently-centered image to viewers (which Apple, for example, calls “Center Stage”).
- Instead, BenQ includes a magnetically attached 15x multiplier “zoom” supplemental lens, which the company claims is also “macro”-capable. Not yet sold (at least in the U.S.), as far as I know, but implied by the user manual, is an ideaCam S1 standard version, which dispenses with both the “Plus” supplemental lens and the “Pro” remote control.
- The ideaCam S1 series’ market uniqueness derives from a flexible magnet-enhanced mount, which enables you to attach (and even lock down) the webcam in a “normal” on-display orientation, completely detach it to show something in the vicinity of the computer to your audience, and in-between rotate the webcam near-90° down at the desktop in front of you. In the latter case, the aforementioned EnSpire software driver auto-rotates and keystone-corrects the captured image as well as tweaking autofocus so that what’s seen by others looks as close as possible to what’s actually in front of you.
- BenQ calls the ideaCam S1 a “4K” camera, which is close but not quite right. “4K”, at least from a display standpoint, references a 3840×2160 (8,294,400 total) pixel image. The ideaCam S1 captures still images with 3264×2448 (7,990,272 total) pixels. And its video resolution options, in both cases limited to 30 (not 60) fps frame rates, are 3264×1836 (5,992,704 total) pixels in 16:9 ratio mode and 3264×2448 pixels (the same as with still images) in 4:3 ratio mode; the quick arithmetic after this list makes the comparison concrete.
- The webcams are based on an 8 Mpixel Sony CMOS image sensor. (It admittedly took me a few tries to realize what the “COMS” reference on BenQ’s web page meant.) Low-light performance is surprisingly subpar, per multiple reviewers’ comments, even when the integrated ring light is in use. Here’s BenQ’s feedback when I inquired about this quirk: “ideaCam is a webcam designed primarily for capturing objects, so it works best in well-lit environments.”
- I get why BenQ made the ideaCam S1 series natively USB-A-interfaced, given the sizeable installed base of computers that offer at least one USB-A port. That said, I’m admittedly surprised that BenQ didn’t also include an inexpensive USB-A to USB-C adapter in the box for use with the increasingly common laptop PCs and the like that are USB-C-only.
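Since the resolution comparison above is pure arithmetic, a few lines of Python make the “4K, but not quite” point concrete:

```python
# Pixel counts: the 4K display standard vs. the ideaCam S1's capture modes.
modes = {
    "4K UHD display": (3840, 2160),
    "ideaCam still / 4:3 video": (3264, 2448),
    "ideaCam 16:9 video": (3264, 1836),
}
for name, (w, h) in modes.items():
    print(f"{name}: {w * h:,} pixels")
# 8,294,400 vs 7,990,272 vs 5,992,704 -- close to, but short of, true 4K
```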
Upfront thoughts now concluded, let’s get to the tearing down, beginning with the obligatory outer-box shots (after I removed the shiny, reflection-inducing shrink-wrap, that is):
Flip open the box:
And underneath the top flap you’ll first find a plethora of paper (you can alternatively find the quick-start guide in digital form here, along with the digital-only full user manual):
Underneath it (and a thin sheet of protective black foam):
are, clockwise beginning from left, the main webcam assembly, the privacy cover, the “macro zoom” supplemental lens, and the mounting bracket, all cushioned by more foam:
In front of the foam is the bulk of the webcam’s permanently connected USB cable, enclosed within a white cardboard sleeve:
Here are the various constituent pieces out of the box:
Two views of the mounting bracket, which also integrates a ¼” screw hole for a not-included optional tripod or other stand:
Now for the webcam itself. Front view first; the ring light shines through the frosted white circumference when on. Also note the hole for the single microphone input in the lower right corner of the “lens” (curiously, this design doesn’t seem to leverage a traditional multi-microphone array for ambient noise cancellation purposes, instead per product documentation relying on “AI processing”) and the barely visible activity LED “hole” below the lens:
Here’s what it looks like with the supplemental lens installed (note to potential customers: there’s a near-invisible clear piece of protective plastic at the rear of the supplemental lens that, unless first removed, will result in poor image results when the supplemental lens is in use):
And here’s the privacy cover installed:
The magnet that holds both it and the supplemental lens in place is located within the common primary lens assembly to which they both adhere:
The two-switch assembly at the top toggles the ring light on and off and, in conjunction with the EnSpire software suite, freezes the captured image:
At bottom is the magnet-augmented rectangular hole into which you insert the mounting bracket (also note the permanently attached USB cable coming out of the webcam):
And last but not least (or maybe least after all…it’s pretty bland) is the BenQ-branded backside:
Time to dive inside. Next to the USB cable entry point is a tiny Phillips screw whose removal would seemingly be a logical starting point:
That’s what I’m talking about:
Next, let’s get the multi-wire harness for the USB cable outta there:
Two more screws to go (the first one had already been removed in conjunction with disconnecting what I assume is the USB cable’s ground strap):
And…nothing’s budging yet. Let’s try those three additional screws visible deeper inside:
Getting them out was a bit dodgy because every time I unscrewed one, it immediately went airborne and adhered itself to the magnet at the bottom bracket hole…but I managed…
Hmmm…still no meaningful disassembly progress, however. Time to turn the webcam around and turn our attention to the front assembly:
That’s more like it!
Even with the screws removed, it had still been tenuously held in place by the four-pin connector that mated the PCB to the two-switch topside assembly:
Front and back standalone views of the chassis now absent the front assembly:
And now what you really care about; the first unobstructed view of the system PCB’s backside:
As you may have already inferred, there’s a gyro IC (likely MEMS-based) in the webcam that determines (and communicates to system software) whether it’s in its “normal” or downward orientation. Fortunately, BenQ provides an exploded-view video that shows where it’s located:
Specifically, assuming the video is accurate in pinpointing what it calls the “Webcam flip sensor,” it’s the tiny five-lead chip labeled U12 on the PCB and marked FT8DSN, below and to the left of the left-side PCB hole. To the right of the flip sensor and toward the center of the PCB is a larger IC whose identity I unfortunately can’t discern. It’s marked as follows:
IG1600
2109AAD
TP1X841
0570011
Ideas, anyone? And while we’re at it, does anyone know the identity of the tall rectangular eight-lead chip at far right, above the USB wiring harness connector, and marked as follows (accompanied by a yellow paint “dot” in its lower left corner)?
GD
N1C0
UF8096
Flip the front assembly over:
and with the retaining screws now removed, the cover portion lifts right off:
The clear plastic middle region is purely protective, as far as I can tell, with no meaningful optical properties of its own that I can ascertain:
Note the holes for the microphone input, in the black piece’s lower left region, and the activity LED, below and to its right (and at the black piece’s bottom). And around the perimeter is the frosted white opaque plastic thru which the ring light LEDs diffuse-shine when illuminated.
Speaking of which:
Items of particular note include the lens assembly at center (with the aforementioned 8 Mpixel Sony CMOS image sensor unseen behind it), the system processor to its left (a Sunplus Innovation Technology SPCA2680A, not found on the manufacturer’s website, although note the presumably related SPCA2688), and the surprisingly large MEMS mic to the lens’s lower right. Along with, of course, the activity LED below the lens and the six-LED ring around the perimeter.
I’m going to stop at this point, in the hopes that if I’m careful with my reassembly, I might actually be able to return the ideaCam S1 Plus to its original fully functional condition…
Success! It still works! Over to you for your thoughts in the comments.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Disclosing the results of a webcam closeup
- The slammed webcam: An impromptu teardown erases my frown
- Prosumer and professional cameras: High quality video, but a connectivity vulnerability
- Digital camera design, part 2: Motion considerations for frame rate, exposure time, and shuttering
How to control your impulses—part 2
Editor’s note: The first part of this two-part design idea (DI) shows how modifications to an oscillator can produce a useful and unusual pulse generator; this second and final part extends that to step functions.
In the first part of this DI, we saw how to gate an oscillator to generate well-behaved impulses. Now we find out how to extend that idea to producing well-behaved step functions, or nicely smoothed square waves.
The ideal here is the Heaviside or unit step function, which has values of 0 or 1 with an infinitely sharp transition between them. Just as the Dirac delta impulse, which we met in Part 1, is the extreme case of a normal distribution or bell curve, the Heaviside is the limit of the logistic function (which I gather logisticians use about as often as plumbers do bathtub curves).
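For reference, that limiting relationship can be stated in one line: as the logistic curve’s rate constant k grows, its transition steepens toward the unit step:

H(x) = lim(k→∞) 1 / (1 + e^(−kx))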
Wow the engineering world with your unique design: Design Ideas Submission Guide
Square wave with smooth edges
Anyone working with audio kit will have employed square-wave testing with that infinity tamed by an RC time-constant, which is good enough for everyday use, but another approach is to replace that still-sharp step with a portion of a cosine wave. Taking the circuit from Part 1 and adding some more gating means that instead of generating a full raised-cosine pulse for every trigger input, we get a half cycle at each transition, with alternating polarities. The result: a square wave at half the frequency of the trigger and with smooth edges. The revised circuit is in Figure 1.
Figure 1 Extra logic added to the original circuit now gives half a cosine on each trigger pulse, with alternating polarities, generating a square wave with smoothed edges.
In pulse or oscillator modes, U1b delivers a reset to U2 whenever A1b’s output goes high, which gives a full cycle of the raised cosine. In the square wave mode, U2 is reset whenever A1b changes, irrespective of polarity, at the half-cycle point. U1b and U3b/c act as a gated EXOR with delays through one leg to generate the reset pulse. Some waveforms are shown in Figure 2; compare these with those in Figure 2 of Part 1. As before, A2 is jammed when the oscillator mode is selected, forcing continuous, sine-wave operation.
Figure 2 Some waveforms from the circuit in Figure 1.
A single, positive-going transition is shown in Figure 3, with our target curve for comparison. These are both theoretical plots, but the actual output is very close to the cosine.
Figure 3 The target step-function is a logistic curve; a segment of a cosine is shown for comparison.
In Part 1, we tried to get closer to a normal distribution curve by some extra squashing of our tri-wave. This worked up to a point but was clunkily over-elaborate, partly owing to the waveform’s lack of symmetry. We now have a symmetrical function to aim at, which should be easier to emulate.
Building our target curve
The spare section of mux U1 together with three new resistors offers a neat solution, and the circuit fragment in Figure 4 shows how.
Figure 4 Adding the components in red gives a much better fit to our target curve. The tri-wave amplitude is increased and can now be squashed even more.
Putting 47k (R14) in series with D3/4 increases the trip points’ levels, so that the tri-wave now spans ~4.3 V rather than ~1.1 V. The increased drive to D5/6 through R7 results in the diodes not so much squashing the triangle into a (co)sine as crushing it into something much squarer though with greater amplitude. R24 and R25, connected across D7/8, pot the voltage across the diodes down so that the peaks—which are now gentle curves—are cropped by A2b’s (rail-to-rail) output. (The resistive loading of D7/8 slightly softens their response, which also helps.)
U1c does two jobs. When pulses or a continuous sine wave are to be generated, it shorts out R14 and opens R24, giving our standard operating conditions, but in square-wave mode, R14 is left in circuit while R24 is grounded, as needed for the extra tri-wave amplitude and crushing.
The waveforms now look like Figure 5 (note the change of scale for trace C) while a single, actual edge is shown in Figure 6 with a theoretical, ideal step for comparison—and the match is now very good.
Figure 5 Waveforms after adding the mods shown in Figure 4.
Figure 6 Comparison of the target curve with part of the trace D in Figure 5.
There is some fudging involved here, the two curves in Figure 6 having been adjusted for the same slope at the half-height point. Because R24/R25 reduce the amplitude of the signal across the diodes by nearly 20%, the slope will also be that much shallower than for the cosine version, which is not a practical problem.
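For the curious, the slope-matching described above is easy to reproduce numerically. The sketch below is my own, using the textbook raised-cosine and logistic expressions rather than anything measured from the circuit; it equalizes the two curves’ slopes at half-height and reports the residual mismatch:

```python
import numpy as np

# Raised-cosine edge: s(t) = (1 - cos(pi*t/T))/2 on 0..T; midpoint slope pi/(2T).
# Logistic: f(t) = 1/(1 + exp(-4k(t - T/2))); midpoint slope k.
# Matching slopes at half-height therefore requires k = pi/(2T).
T = 1.0
k = np.pi / (2 * T)
t = np.linspace(-0.5 * T, 1.5 * T, 401)
cosine = (1 - np.cos(np.pi * np.clip(t, 0, T) / T)) / 2  # flat outside 0..T
logistic = 1 / (1 + np.exp(-4 * k * (t - T / 2)))
print(f"max deviation: {np.abs(cosine - logistic).max():.3f}")  # ~0.04
```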
The final circuit
To turn all this into a functional piece of kit ready for doing some audio testing, we need to add some extras:
- A rail-splitter to define the central, common rail
- Level-control pot with an output buffer
- Simple oscillator to produce the trigger pulses, with an input so that an external TTL signal can override the internal one
- A switch to select the mode.
Putting all these together, we reach the full and reasonably final circuit of Figure 7. Multiple ranges can easily be accommodated by adding the extras detailed in Part 1, Figure 5. The modified pulse-shaping circuit shown in Part 1, Figure 6 could also be added, but may be more fiddly than it is worth.
Figure 7 The full circuit, which now produces square waves with well-shaped edges as well as pulses and continuous sine waves.
The absence of pin numbers is deliberate, because their inclusion would imply an optimized layout. Be careful to keep the logic signals away from analog ones, especially at and around the earthy end of R24, which can pick up switching spikes when open-circuited. U1’s E-not (pin 6) and VEE (pin 7) must be at 0 V.
While this approach to generating nicely-formed pulses is perhaps more interesting than accurate, it does show that crunching up triangles with diodes is not limited to generating sine(ish) waves, which was the starting-point for this idea. For anything more complex, an AWG is probably a better solution, if less fun.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- How to control your impulses—part 1
- Squashed triangles: sines, but with teeth?
- Dual RRIO op amp makes buffered and adjustable triangles and square waves
- Arbitrary waveform generator waveform creation using equations
- 555 triangle generator with adjustable frequency, waveshape, and amplitude; and more!
- Adjustable triangle/sawtooth wave generator using 555 timer
How scanning acoustic microscopy (SAM) aids hybrid bonding test
Hybrid bonding—a significant advancement in chip packaging technology—is becoming vital in heterogeneous integration, which enables semiconductor companies to merge multiple chiplets with diverse functions, process nodes, and sizes into a unified package. It vertically links die-to-wafer or wafer-to-wafer via closely spaced copper pads, bonding the dielectric and metal bond pads simultaneously in a single bonding step.
However, the enhanced reliability and mechanical strength of its interconnects compared to traditional bump-based interconnections don’t come without challenges. For instance, to successfully transition to high-volume manufacturing with high yields, it requires advanced metrology tools that can quickly identify defects such as cracks and voids within the bonded layers.
PVA TePla OKOS, a Virginia-based manufacturer of industrial ultrasonic non-destructive testing (NDT) systems, claims to have a solution based on scanning acoustic microscopy (SAM). A non-invasive and non-destructive ultrasonic testing method, SAM is quickly becoming the preferred technique for testing and failure analysis involving stacked dies or wafers, according to Hari Polu, president of PVA TePla OKOS.
SAM utilizes ultrasound waves to non-destructively examine internal structures, interfaces, and surfaces of opaque substrates. The resulting acoustic signatures can be constructed into 3D images that are analyzed to detect and characterize device flaws such as cracks, delamination, inclusions, and voids in bonding interfaces. The images can also be used to evaluate soldering and other interface connections.
Figure 1 SAM is becoming a preferred technique for testing and failure analysis involving stacked dies or wafers. Source: PVA TePla OKOS
SAM—an industry standard for inspection of semiconductor components to identify defects such as voids, cracks, and delamination—has been adapted to facilitate 100% inspection of hybrid bonded packages, says Polu.
How it works
In hybrid bonding, various steps must be reliably performed to ensure quality. The process starts with manufacturing the wafers or dies in a semiconductor fab before the chips are bonded together. The next key steps include the preparation and creation of the pre-bonding layers, the bonding process itself, the post-bond anneal, and the associated inspection and metrology at each of these steps.
However, in conventional SAM techniques, wafers are held horizontally in a chuck and processed in a water medium. That, in turn, could lead to water ingress, which could cause significant issues in the next step of assembly. On the other hand, by re-designing the chuck in a vertical orientation, engineers can use gravity to eliminate any concern over water ingress while also using other water management technologies.
Here, SAM directs focused sound from a transducer at a small point on a target object. The sound hitting the object is either scattered, absorbed, reflected, or transmitted. As a result, the presence of a boundary or object and its distance can be determined by detecting the direction of scattered pulses as well as the time of flight. Next, samples are scanned point by point and line by line to produce an image.
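As a back-of-the-envelope illustration of that time-of-flight principle (my own arithmetic, not PVA TePla OKOS’s algorithm), reflector depth follows from the round-trip echo time and an assumed acoustic velocity:

```python
# Pulse-echo depth estimate. The longitudinal sound velocity in silicon is
# taken as roughly 8,433 m/s -- an assumed textbook value, not a spec.
V_SILICON = 8433.0  # m/s

def echo_depth_um(time_of_flight_ns):
    """Round-trip echo time -> reflector depth below the surface, microns."""
    return V_SILICON * (time_of_flight_ns * 1e-9) / 2 * 1e6

print(f"{echo_depth_um(24):.0f} um")  # a 24 ns echo -> ~100 um deep
```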
Figure 2 SAM stands ready to deliver 100% non-destructive inspection of vertically stacked and bonded die-to-wafer or wafer-to-wafer packages to help facilitate the adoption of hybrid bonding. Source: PVA TePla OKOS
It’s important to note that scanning modes range from single-layer views to tray scans and cross-sections and that multi-layer scans can include up to 50 independent layers. The process can extract depth-specific information and apply it to create 2D and 3D images. Then, the images are analyzed to detect and characterize flaws like cracks, delamination, and voids.
The AI boost
Polu is confident that advancements in artificial intelligence (AI)-based analysis of the data collected from SAM inspection of wafer-to-wafer hybrid bonding will further automate quality assurance and increase fab production. “Innovations in the design of wafer chucks, array transducers, and AI-based analysis of inspection data are converging to provide a more robust SAM solution for fabs involved in hybrid bonding,” he said.
So, when fabs take advantage of this higher level of failure detection and analysis, the production yield and overall reliability of high-performance chips improve significantly. “Every fab will eventually move toward this level of failure analysis because of the level of detection and precision required for hybrid bonding,” Polu concluded.
That is especially true when the stakes are higher than ever: one bad wafer, die, or interconnection can cause the entire package to be discarded down the line.
Related Content
- EAG Adds IC Analytical Tools
- The Importance of 3D IC Ecosystem Collaboration
- CEA-Leti Presents TSVs that Promise Smarter Cameras
- Applied Materials, IME Extend Hybrid Bonding Research
- Intel and FMD’s Roadmap for 3D Heterogeneous Integration
Rad-hard SBC enables on-orbit computing
Moog’s Cascade single-board computer supports multiple payloads and spacecraft bus processing needs within a single radiation-hardened unit. Cascade was created through an R&D partnership with Microchip Technology, as part of NASA’s early-engagement ecosystem for its next-gen High-Performance Spaceflight Computing (HPSC) processor.
The SBC is based on Microchip’s PIC64-HPSC, a radiation-hardened microprocessor with 10 64-bit RISC-V cores. In addition to advanced computing power, the processor provides an Ethernet TSN Layer 2 switch for data communications, fault tolerance and correction, secure boot, and multiple levels of encryption.
Available with or without an enclosure, Cascade is an extended 3U SpaceVPX board that conforms to the Space Standards Open Systems Architecture (Space SOSA) standard for maximum interoperability. The rad-hard SBC can withstand a total ionizing dose (TID) of 50 krad without shielding and has a single-event latchup (SEL) tolerance of 78 MeV·cm²/mg after bootup.
More information about the Cascade SBC is available on Moog’s product page.
Molex shrinks busbar current sensors
Percept current sensors from Molex employ a coreless differential Hall-effect design and proprietary packaging to slash both size and weight. The sensor-in-busbar configuration allows for simple plug-and-play installation in automotive and industrial current sensing applications, such as inverters, motor drives and EV chargers.
Percept integrates an Infineon coreless magnetic current sensor in a Molex package to create a component that is 86% lighter and up to half the size of competing current sensors. The design also suppresses stray magnetic fields and reduces sensitivity and offset errors.
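The stray-field suppression follows directly from the differential arrangement: a uniform external field adds equally to both Hall elements and cancels in their difference. A toy numerical illustration (the sensitivity figure is assumed for the example, not a Percept specification):

```python
# Differential Hall sensing: the busbar's field has opposite sign at the two
# elements, while a uniform stray field is common to both and cancels.
SENSITIVITY = 0.05  # mT per ampere at each element (illustrative value)

def current_from_fields(b1_mT, b2_mT):
    """Recover current from the two field readings; stray field drops out."""
    return (b1_mT - b2_mT) / (2 * SENSITIVITY)

b_stray = 1.5                       # uniform interference field, mT
i_true = 800.0                      # amperes through the busbar
b1 = +SENSITIVITY * i_true + b_stray
b2 = -SENSITIVITY * i_true + b_stray
print(current_from_fields(b1, b2))  # 800.0, despite the stray field
```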
Automotive and industrial-grade Percept sensors are available in current ranges from ±450 A to ±1600 A, with ±2% accuracy over temperature. They offer bidirectional sensing with options for full-differential, semi-differential, and single-ended output modes. AEC-Q100 Grade 1-qualified devices operate across a temperature range of -40°C to +125°C.
Sensors for industrial applications are expected to be available in October 2024, with the automotive product approval process scheduled for the first half of 2025. Limited engineering samples for industrial applications are available now.
20-A models join buck converter lineup
TDK-Lambda expands its i7A series of non-isolated step-down DC/DC converters with seven 500-W models that provide 20 A of output current. The converters occupy a standard 1/16th brick footprint and use a standardized pin configuration.
With an input voltage range of 28 V to 60 V, the new converters offer a trimmable output of 3.3 V to 32 V and achieve up to 96% efficiency. This high efficiency reduces internal losses and allows operation in ambient temperatures ranging from -40°C to +125°C. Additionally, an adjustable current limit option helps manage stress on the converter and load during overcurrent conditions, enabling precise adjustment based on system needs.
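That efficiency figure pins down the thermal design problem; a few lines show the internal loss implied by the quoted numbers:

```python
# Converter dissipation from output power and efficiency:
# at 500 W out and 96% efficiency, the brick itself sheds about 21 W.
def dissipation(p_out_w, efficiency):
    p_in = p_out_w / efficiency
    return p_in - p_out_w

print(f"{dissipation(500, 0.96):.1f} W")  # ~20.8 W of internal loss
```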
The 20-A i7A models are available in three 34×36.8-mm mechanical configurations: low-profile open frame, integrated baseplate for conduction cooling, and integrated heatsink for convection or forced air cooling.
Samples and price quotes for the i7A series step-down converters can be requested on TDK-Lambda’s product page.
Discrete GPU elevates in-vehicle AI
A discrete graphics processing unit (dGPU), the Arc A760A from Intel delivers high-fidelity graphics and AI-driven cockpit capabilities in high-end vehicles. According to Intel, the dGPU supports smooth and immersive AAA gaming and responsive, context-aware AI assistants.
The Arc A760A marks Intel’s entry into automotive discrete GPUs, complementing its existing portfolio of AI-enhanced, software-defined vehicle (SDV) SoCs with integrated GPUs. Together, these devices form an open and flexible platform that scales across vehicle trim levels. Automakers can start with Intel SDV SoCs and later add the dGPU to handle larger compute workloads and expand AI capabilities.
Enhanced personalization is enabled by AI algorithms that learn driver preferences, adapting cockpit settings without the need for voice commands. Automotive OEMs can transform the vehicle into a mobile office and entertainment hub with immersive 4K displays, multiscreen setups, and advanced 3D interfaces.
Intel expects the Arc A760A dGPU to be commercially deployed in vehicles as soon as 2025. Read the fact sheet here.
Raspberry Pi SBC touts RISC-V cores
The Raspberry Pi Pico 2 single-board computer is powered by the RP2350 MCU, featuring two Arm cores or optional RISC-V cores. This $5 computer board also boasts higher clock speeds, twice the memory, enhanced security, and upgraded interfacing compared to its predecessor, the Pico 1.
Designed by Raspberry Pi, the RP2350 MCU uses a dual-core, dual-architecture design with a pair of Arm Cortex-M33 cores and a pair of Hazard3 RISC-V cores. Users can select between the cores via software or by programming the on-chip OTP memory. Both the Arm and RISC-V cores run at clock speeds of up to 150 MHz.
Pico 2 offers 520 kbytes of on-chip SRAM and 4 Mbytes of onboard flash. A second-generation programmable I/O (PIO) subsystem provides 12 PIO state machines for flexible, CPU-free interfacing.
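For flavor, here is the classic PIO square-wave example from MicroPython’s rp2 module as published for the earlier RP2040-based Pico; I’m assuming the same API carries over to the RP2350, which I haven’t verified here. It runs under MicroPython on the board, not desktop CPython:

```python
# MicroPython (rp2 port) only: a PIO state machine toggles the LED pin
# entirely on its own, with zero CPU involvement once started.
import rp2
from machine import Pin

@rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
def square_wave():
    set(pins, 1) [31]   # drive high, then stall 31 extra cycles
    set(pins, 0) [31]   # drive low, then stall 31 extra cycles

# 2 kHz instruction clock / 64 cycles per period -> ~31 Hz square wave
sm = rp2.StateMachine(0, square_wave, freq=2000, set_base=Pin(25))
sm.active(1)
```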
The security architecture of the Pico 2 is built around Arm TrustZone for Cortex-M and includes signed boot support, 8 kbytes of on-chip antifuse OTP memory, SHA-256 acceleration, and a true random number generator. Global bus filtering is based on Arm or RISC-V security/privilege levels.
Preorders of the Pico 2 are being accepted now through Raspberry Pi’s approved resellers. Even though Pico 2 does not offer Wi-Fi or Bluetooth connectivity, Raspberry Pi expects to ship a wireless-enabled version before the end of the year.
RISC-V migration to mainstream one startup at a time
As noted by Kleiner Perkins partner Mamoon Hamid, the migration to RISC-V is in full flight. Kleiner Perkins, along with Fidelity and Mayfield, is a backer of RISC-V upstart Akeana, which has officially launched after exiting stealth mode.
Akeana marked this occasion by unveiling RISC-V IPs for microcontrollers, Android clusters, artificial intelligence (AI) vector cores and subsystems, and compute clusters for networking and data centers. Its 100 Series configurable processors come with 32-bit RISC-V cores and support applications spanning from MCUs to edge gateways.
Akeana’s 1000 Series processor line includes 64-bit RISC-V cores and an MMU to support rich operating systems as well as in-order or out-of-order pipelines, multi-threading, vector extension, hypervisor extension and other extensions that are part of recent and upcoming RISC-V profiles.
Next, its 5000 Series features 64-bit RISC-V cores optimized for demanding applications in data centers and cloud infrastructure. These processors are compatible with the Akeana 1000 Series but offer much higher single-thread performance.
Three RISC-V processors come alongside an SoC IP suite. Source: Akeana
Akeana feels especially confident in data center processors, having assembled the same team that designed Marvell’s ThunderX2 server chips. “Our team has a proven track record of designing world-class server chips, and we are now applying that expertise to the broader semiconductor market as we formally go to market,” said Rabin Sugumar, Akeana CEO.
Besides RISC-V processors, Akeana offers a collection of IP blocks needed to create processor system-on-chips (SoCs). That includes coherent cluster cache, I/O MMU, and interrupt controller IPs. The company also provides scalable mesh and coherence hub IP compatible with AMBA CHI to build large coherent compute subsystems for data centers and other use cases.
Akeana, another RISC-V startup challenging the semiconductor industry status quo, has officially launched three years after its founding, having raised over $100 million from A-list investors like Kleiner Perkins, Mayfield, and Fidelity.
Related Content
- Navigating the RISC-V Revolution in Europe
- Amidst export restrictions, RISC-V continues to advance
- Accelerating RISC-V development with network-on-chip IP
- RISC-V venture in Germany to accelerate design ecosystem
- RISC-V as you like it: the ‘first’ fully configurable 64-bit processor core
Google’s fall…err…summer launch: One-upping Apple with a sizeable product tranche
Within last month’s hands-on (or is that on-wrist?) coverage of Google’s first-generation Pixel Watch, I alluded to:
…rumors of the third-generation offering already beginning to circulate…
Later in that writeup, I further elaborated:
…the upcoming Pixel Watch 3, per leaked images, will not only be thicker but also come in a larger-face variant.
I (fed by the rumor mill, in my defense) was mostly right, as it turns out. The “upcoming” Pixel Watch 3 was released today (as I write these words on Tuesday, August 13). It does come in both legacy 41 mm and new, larger 45 mm flavors. And, in a twist I hadn’t foreseen, the bezel real estate is decreased by 16%, freeing up even more usable space with both screen sizes (also claimed to be twice as bright as before). But they’re not thicker than before; they’ve got the same 12.3 mm depth as that of the second-gen precursor. And anyway, I’m getting ahead of myself, as today’s live event (including live demos, full of jabs at competitor Apple’s seemingly scripted, pre-recorded preferences in recent years):
was preceded by several notable preparatory press release-only announcements last week.
4th-generation Nest learning thermostat
I’ve got a Lennox “smart” system, so don’t have personal experience with Nest (now Google Nest) gear, but I know a lot of folks who swear by it, so for them the latest-generation addition will likely be exciting. Aside from various cosmetic and other aesthetic tweaks, it’s AI-centric, which won’t surprise anyone who saw the Google I/O keynote (or my coverage of it):
With the Nest Learning Thermostat we introduced the concept of intelligence to help you save energy and money. It keeps you comfortable while you’re home and switches to an energy-efficient temperature when you’re away. And the Nest Learning Thermostat (4th gen) is our smartest, most advanced thermostat yet.
It uses AI to automatically make micro-adjustments based on your patterns to keep you comfortable while saving both energy and money. Now, AI can more quickly and accurately create your personalized, energy-saving temperature schedules. With Smart Schedule, the thermostat learns which temperatures you choose most often or changes in behavior based on motion detected in your home — like coming home earlier — and automatically adjusts your temperature schedule to match. These energy-saving suggestions can be implemented automatically, or you can accept or reject them in the Google Home app so you’re always in control.
The thermostat also analyzes how the weather outdoors will affect the temperature inside. For example, if it’s a sunny winter day and your home gets warmer on its own, it will pause heating. Or, on a humid day, the indoor temperature may feel warmer than intended, so the thermostat will adjust accordingly.
TV Streamer
This one was admittedly something of a surprise, albeit less so in retrospect. Google is end-of-lifing its entire Chromecast product line, replacing it with the high-end TV Streamer, which not only handles traditional audio and video reception-and-output tasks but also, for example, does double-duty as a “smart home” hub for Google Home and Matter devices. The reason why it wasn’t a complete surprise was that, as I’d mentioned before, the existing Chromecast with Google TV hardware was getting a bit long in the tooth, understandable given that the original 4K variant dated from September 2020, with the lower-priced FHD version only two years newer.
With its Chromecast line, Google has always strived to deliver not only an easy means of streaming Internet-sourced content to (and displaying it on) a traditional “dumb” TV but also a way to upgrade the conventional buggy, sloth-like “smart” TV experience. As “smart” TVs have beefed up their hardware and associated software over the past four years, however, the gap between them and the Chromecast with Google TV narrowed and, in some cases, collapsed and even flipped. That said, I still wonder why the company decided to make a clean break from the longstanding Chromecast brand equity investment versus, say, calling it the “Chromecast 2”.
The other factor, I’m guessing, has at least something to do with comments I made in my recent teardown of a Walmart Android TV-based onn. UHD streaming device:
Walmart? Why?… I suspect it has at least something to do with the Android TV-then-Google TV commodity software foundation on which Google’s own Chromecast with Google TV series along with the TiVo box I tore down for March 2024 publication (for example) are also based, which also allows for generic hardware. Combine that with a widespread distribution network:
Today, Walmart operates more than 10,500 stores and clubs in 19 countries and eCommerce websites.
And a compelling (translation: impulse purchase candidate) price point ($30 at intro, vs $20 more for the comparable-resolution 4K variant of Google’s own Chromecast with Google TV). And you’ve got, I suspect Walmart executives were thinking, a winner on your hands.
Competing against a foundation-software partner who’s focused on volume at the expense of per-unit profit (even willing to sell “loss leaders” in some cases, to get customers in stores and on the website in the hopes that they’ll also buy other, more lucrative items while they’re there) is a tough business for Google to be in, I suspect. Therefore, the pivot to the high end, letting its partners handle the volume market while being content with the high-profit segment. This is a “pivot” that you’ll also see evidence of in products the company announced this week. To wit…
The Pixel 9 smartphone series
Now’s as good a time as any to discuss the “elephant in the room”. Doesn’t Apple generally (but not always) release new iPhones (and other things) every September? And doesn’t Google generally counter with its own smartphone announcements (not counting “a” variants) roughly one month later? Indeed. But this time, Mountain View-headquartered Google apparently decided to get the jump on its Cupertino-based Silicon Valley competitor (who I anticipate will once again unveil new iPhones next month; as always, stay tuned for my coverage!).
Therefore the “One-upping Apple” phrase in this post’s title, and the already-mentioned repeated snark from Google regarding live-vs-pre-recorded events (and demos at such), along with plenty of other examples. That said, Google wasn’t above mimicking its competitor, either. Check out the first-time (versus historically curved) flat edges in the above video. Where have you seen them (plenty of times) before? Hmmm? That said, Pixels’ back panel camera bar (which I’ll cover more in an upcoming post) currently remains a Google-only characteristic.
Another Apple-reminiscent form factor evolutionary adaptation also bears mentioning. Through the Pixel 4 family generation, Google sold both standard and large screen “XL” smartphone variants. The “XL” option disappeared with the Pixel 5. In its stead, starting with the Pixel 6, a “Pro” version arrived…a larger screen size than the standard, as in the “XL” past, but also accompanied by a more elaborate multi-camera arrangement, among other enhancements.
And now with the Pixel 9 generation, there are two “Pro” versions, both standard (6.3” diagonal, the same size as the conventional Pixel 9, which has grown a bit from the 6.2” Pixel 8 predecessor) and resurrected large screen “XL” (6.8” diagonal). Remember my earlier comments about media streamers: how Google was seemingly doing a “pivot to the high end, letting its partners handle the volume market while being content with the high-profit segment”? Sound familiar? This is also reminiscent of how Apple dropped its small-screen “mini” variant after only one generation (the iPhone 13).
Even putting aside standard-vs-“Pro” and standard-vs-large screen product line proliferation prioritization by Google, the overall product line pricing has also increased. The Pixel 7 phones that I’m currently using, for example, started at $599, with the year-later (and year-ago) Pixel 8 starting at $699; the newly unveiled Pixel 9 successor begins at $799. That said, in fairness, you now get 50% more RAM (12 vs 8 GBytes). Further to that point, especially given that the associated software suite is increasingly AI-enhanced (yes, Google Assistant is being replaced by Gemini, including the voice-based and Alexa- and Siri-reminiscent Gemini Live), Google isn’t making the same mistake it initially did with the Pixel 8 line.
At intro, only the 12 GByte RAM-inclusive “Pro” version of the Pixel 8 was claimed capable of supporting Google’s various Gemini deep learning models; the company later rolled out “Nano” Gemini variants that could be shoehorned into the Pixel 8’s 8 GBytes. This time, both the Pixel 9 (again, 12 GBytes) and Pixel 9 Pro/Pro XL (16 GBytes) are good to go. And I suspect Apple’s going to be similarly memory-inclusive from an AI (branded Apple Intelligence, in this case) standpoint with its upcoming iPhone 16 product line, given that its current-generation AI support is comparably restrictive, limited to only the highest-end iPhone 15 Pro and Pro Max.
Accompanying the new-generation phones is, unsurprisingly, a new-generation SoC powering them: the Tensor G4. As usual, beyond Google’s nebulous claim that “It’s our most efficient chip yet,” we’ll need to wait for Geekbench leaks, followed by actual hands-on testing results, to assess performance, power consumption and other metrics, both in an absolute sense and relative to precursor generations and competitors’ alternatives. They all (as with Apple’s iPhones for several years now) come with two years of gratis satellite SOS service, which is nice. And they all also, after three generations’ worth of oft-frustrating optical under-display fingerprint sensor usage (oh, how I miss the back panel fingerprint sensors of the Pixel 5 and precursors!), switch to a hopefully more reliable ultrasonic sensor approach (already successfully in use in Samsung Galaxy devices, which is an encouraging sign).
That said, the displays themselves differ between standard and “Pro” models: the Pixel 9 has a 6.3-inch OLED with 2,424×1,080-pixel resolution (422 pixels per inch, i.e., ppi, density) and 60-120 Hz variable refresh rate, while its same-size Pixel 9 Pro sibling totes a 2,856×1,280-pixel resolution (495 ppi density) and its low-temperature polycrystalline oxide (LTPO) OLED affords an even broader 1-120 Hz variable refresh rate range to optimize battery life. The Pixel 9 Pro XL’s display is also LTPO OLED in nature, this time with a 2,992×1,344-pixel resolution (486 ppi density). And where the phones also differ, among other things (some already mentioned) and speaking of AI enhancements, is in their front and rear camera allotments and specifications. With the Pixel 9, you get two rear cameras—50 Mpixel main and 48 Mpixel ultrawide—along with a 10.5 Mpixel front-facing. The Pixel 9 Pro and Pro XL add a third rear camera, a 48 Mpixel telephoto with 5x optical zoom, as well as bumping up the front camera to 42 Mpixel resolution. And for examples of some of the new and enhanced AI-enabled computational photography capabilities, check out this coverage, along with a first-look video from Becca, at The Verge:
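As a quick sanity check on those pixel-density numbers: ppi is just the diagonal pixel count divided by the diagonal size in inches. Here’s a minimal Python check (my arithmetic, not Google’s; the small deviations from the quoted figures presumably reflect rounding in the published display dimensions):

import math

def ppi(width_px, height_px, diagonal_in):
    # Pixel density = diagonal pixel count / diagonal length in inches
    return math.hypot(width_px, height_px) / diagonal_in

print(round(ppi(2424, 1080, 6.3)))  # Pixel 9: 421 (422 quoted)
print(round(ppi(2856, 1280, 6.3)))  # Pixel 9 Pro: 497 (495 quoted)
print(round(ppi(2992, 1344, 6.8)))  # Pixel 9 Pro XL: 482 (486 quoted)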
Video Boost’s cloud-processed smoothing of transitions between lenses, an issue whose root cause I also discuss in the aforementioned upcoming blog post, is very cool.
In closing (at least for this section), a few words on the Pixel 9 Pro Fold, the successor to last year’s initial Pixel Fold. Read through The Verge’s recently published long-term usage report on the first-generation device (or Engadget’s version of the same theme, for that matter), and you’ll see that one key hoped-for improvement with its successor was increased display brightness. Well, Google delivered here, claiming that it’s “80% brighter than Pixel Fold”.
The Pixel 9 Pro Fold also inherits other Pixel 9 series improvements, such as to the SoC and camera subsystem. After some initial glitches, Google seems to have solved the Pixel Fold’s screen reliability issue, a key characteristic that I assume will carry forward to the second generation. And the company’s also currently offering a generous first-generation trade-in offer, although you’ll still be shelling out $1,000+ for the second-gen upgrade. That all said, as I read through the coverage of both generations of foldable devices, I can’t help but wonder what could have also been, had Google and Microsoft more effectively worked together to harness the Surface Duo’s hardware potential with equally robust (and long-supported) software. Sigh.
Smart watches
I already teased the new Pixel Watch 3 variants at the beginning of the piece, specifically with respect to their dimensions and display characteristics. Interestingly, they run the same SoC as that found in their second-generation predecessors, Qualcomm’s Snapdragon SW5100, and have the same RAM allocation, 2 GBytes. The new Loss of Pulse Detection capability is compelling, specifically its multi-sensor implementation that strives to prevent “false positives”:
Loss of Pulse Detection combines signals from Pixel Watch 3’s sensors, AI and signal-processing algorithms to detect loss of pulse events, with a thoughtful design — built from user research — to limit false alarms. The feature uses signals from the Pixel Watch 3’s existing Heart Rate sensor, which uses green light to check for a user’s pulse.
If the feature detects signs of pulselessness, infrared and red lights also turn on, looking for additional signs of a pulse, while the motion sensor starts to look for movement. An AI-based algorithm brings together the pulse and movement signals to confirm a loss of pulse event, and if so, triggers a check-in to see if you respond.
The check-in asks if you’re OK while also looking for motion. If you don’t respond and no motion is detected, it escalates to an audio alarm and countdown. If you don’t respond to the countdown, the LTE watch or phone your watch is connected to automatically places a call to emergency services, and shares an automated message that no pulse is detected along with your location.
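Paraphrased as code, that escalation flow reads something like the following hypothetical Python sketch; this is my own loose paraphrase of Google’s description, definitely not actual product code, and every callable name here is invented:

def loss_of_pulse_check(green_pulse, ir_red_pulse, motion, user_responds):
    # Hypothetical paraphrase of the described flow; arguments are callables returning True/False
    if green_pulse():                        # green-LED PPG still sees a pulse
        return "keep monitoring"
    if ir_red_pulse() or motion():           # IR/red LEDs and the motion sensor veto false positives
        return "keep monitoring"
    if user_responds("Check-in: are you OK?"):
        return "keep monitoring"
    if user_responds("Audible alarm and countdown"):
        return "keep monitoring"
    return "call emergency services"         # share location plus the automated no-pulse message

print(loss_of_pulse_check(
    green_pulse=lambda: False,
    ir_red_pulse=lambda: False,
    motion=lambda: False,
    user_responds=lambda prompt: False))     # -> "call emergency services"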
And unlike the Pixel 9 smartphones, which will still be “stuck” on Android 14 at initial shipment (Android 15 is still in beta, obviously), the new watches will ship with Wear OS 5, whose claimed power consumption improvements I’m anxious to see in action on my first-gen Pixel Watch, too. Speaking of which, that “free two years of Google Fi-served wireless service for LTE watches” short-term promotion that I’d told you I snagged? It’s now broadly available.
I should note that Google launched a new smartwatch last week, too, the kid-tailored Fitbit Ace LTE. But what about broader-audience new Fitbit smartwatches? Apparently, the Pixel Watch family is the solitary path going forward here; in the future, Google will be refocusing the Fitbit brand specifically on lower-end, activity-tracker-only devices.
Earbuds
Last (but not least), what’s up with Google and earbuds? At the beginning of last year, within a teardown of the first-generation Pixel Buds Pro, I also gave a short historical summary of Google’s to-date activity in this particular product space. Well, two years after the initial Pixel Buds Pro series rolled out, the second-generation successors are here. They check all the predictable improvement-claim boxes:
- “Twice as much noise cancellation”
- “24% lighter and 27% smaller”
- “increase[d]…battery life, despite the smaller and lighter design. When Active Noise Cancellation is enabled, you get up to 8 hours”
And, for the first time, they’re powered by Google-designed silicon, the Tensor A1 SoC (once again reminiscent of Apple’s in-house supply chain strategy). That said, I was happily surprised to see that the “wings” designed to help keep the earbuds in place when in use have returned, albeit thankfully in more subdued form than that in the standard Pixel Buds implementation:
Although I’m blessed to own several sets of earbuds from multiple suppliers, I inevitably find myself repeatedly grabbing my first-gen Pixel Buds Pros in all but critical-music-listening situations. However, even when all I’m doing is weekend vacuuming, they don’t stay firmly in place. The second-gen “wings” should help here. Hear? (abundant apologies for the bad pun).
Having just passed through 2,500 words, I’m going to wrap up at this point and pass the cyber-pen over to you all for your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Playin’ with Google’s Pixel 7
- Pixel smartphone progressions: Latest offerings and personal transitions
- The 2024 Google I/O: It’s (pretty much) all about AI progress, if you didn’t already guess
- The Google Chromecast with Google TV: Realizing a resurrection opportunity
- Google’s Chromecast with Google TV: Car accessory similarity, and a post-teardown resurrection opportunity?
- The Pixel Watch: An Apple alternative with Google’s (and Fitbit’s) personal touch
- Walmart’s onn. UHD streaming device: Android TV at a compelling price
The post Google’s fall…err…summer launch: One-upping Apple with a sizeable product tranche appeared first on EDN.
Dpot pseudolog + log lookup table = actual logarithmic gain
The Microchip MCP41xxx digital potentiometer data sheet includes (on page 15, its Figure 4-4) an interesting application circuit: a Dpot-controlled amplifier with pseudologarithmic gain settings. However, as the Microchip text explains, the gains implemented by this circuit start changing radically as the control setting of the pot approaches 0 or 256. As Microchip puts it: “As the wiper approaches either terminal, the step size in the gain calculation increases dramatically. This circuit is recommended for gains between 0.1 and 10 V/V.”
That’s good advice. Unfortunately, following it would effectively throw away some 48 of the 256 8-bit pot settings, amounting to a loss of nearly 20% of available resolution. The simple modification shown in Figure 1 gets rid of that limitation.
Figure 1 Two fixed resistors are added to bound the gain range to the recommended limits while keeping full 8-bit resolution.
Wow the engineering world with your unique design: Design Ideas Submission Guide
This results in the red gain-vs-code curve of Figure 2.
Figure 2 Somewhat improved pseudologarithmic gain curve from the simple modification shown in Figure 1.
However, despite this improvement, the key term remains pseudologarithmic. It still isn’t a real log function and, in fact, isn’t quantitatively even that close, deviating by almost a factor of two in places. Can we do better? Yes!
The simple (software) trick is to prepare a 257-byte logarithmic lookup table that translates the 0.1 to 10.0 gain range settings to the Dpot codes needed to logarithmically generate those gains.
Let’s call the table index variable J. Then for a 257-byte table of (abs) gains G from 0.1 to 10.0 inclusive,
J(G) = 128 × log10(abs(G)) + 128, rounded to the nearest integer
…examples…
J(0.1) = 0,
J(0.5) = 89,
J(1.0) = 128,
J(10.0) = 256,
etc.
Inspection of the gain expression in Figure 1 reveals that the Dpot decimal code N required for (abs) gain G is:
N(G) = (284.4G – 28.4)/(G + 1), rounded to the nearest integer
…thus…
N(.1) = (28.44 – 28.4)/(.1 + 1) = 0.04/1.1 ≈ 0,
N(.5) = (142.2 – 28.4)/(.5 + 1) = 113.8/1.5 ≈ 76,
N(1.0) = (284.4 – 28.4)/(1 + 1) = 256/2 = 128,
N(10.0) = (2844 – 28.4)/(10 + 1) = 2815.6/11 ≈ 256,
etc.
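To make the table-building concrete, here is a minimal Python sketch (mine, not the author’s) that generates the full 257-entry table and reproduces the worked examples above:

import math

def dpot_code(gain):
    # N(G) from the Figure 1 gain expression, rounded to the nearest integer
    return round((284.4 * gain - 28.4) / (gain + 1))

# 257-entry table indexed by J; entry J holds the code for G = 10^(J/128 - 1)
table = [dpot_code(10 ** (j / 128 - 1)) for j in range(257)]

for g in (0.1, 0.5, 1.0, 10.0):
    j = round(128 * math.log10(g) + 128)              # J(G) as defined above
    print(f"G = {g}: J = {j}, N = {dpot_code(g)}")    # (0,0), (89,76), (128,128), (256,256)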
Figure 3 summarizes the resulting relationship between G, J, and N.
Figure 3 The Dpot settings [N(J)] versus log table indices [J(G)], summarizing the relationship between G, J, and N.
The table of log gains can be found in this Excel sheet. The net result, with as good log conformity as 8 bits will allow, is exhibited as Figure 4’s lovely green line.
Figure 4 The absolute gain [Gabs = 10^(J/128 – 1)] versus decimal code (J).
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Keep Dpot pseudologarithmic gain control on a leash
- Synthesize precision Dpot resistances that aren’t in the catalog
- Reducing error of digital potentiometers
- Adjust op-amp gain from -30 dB to +60 dB with one linear pot
- Op-amp wipes out DPOT wiper resistance
The post Dpot pseudolog + log lookup table = actual logarithmic gain appeared first on EDN.
How to control your impulses—part 1
Editor’s note: The first part of this two-part design idea (DI) shows how modifications to an oscillator can produce a useful and unusual pulse generator. The second part will extend this to step function generation.
The principle behind testing the impulse response of circuits is simple: hit them with a sharp pulse and see what happens. As usual, Wikipedia has an article detailing the process. This notes that the ideal pulse—a unit impulse, or Dirac delta—is infinitely high and infinitely narrow with an area beneath it of unity, so it’s infinitely tricky to generate, which is just as well, considering the effects one would have on everything from protection diodes to slew rates. Fortunately, it’s just an extreme case of the normal or Gaussian distribution, or bell curve, which is a tad easier to generate, or at least emulate, as this DI shows.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In the real world, the best testing impulses come from arbitrary waveform generators. An older technique is to filter narrow rectangular pulses, but if you change the pulse width, the filter’s characteristics must also be varied to maintain the pulse shape. The approach detailed here avoids that problem by generating raised cosine pulses (not to be confused with raised-cosine filters), which are close enough to the ideal to be interesting. But let’s be honest: simple rectangles, slightly slugged to avoid those slew-rate problems, are normally quite adequate.
Producing our pulses
We make our pulses by taking the core of a squashed-triangle sine-wave oscillator and adding some logic and gating so that when triggered, it produces single cycles which rise from a baseline to their peak and then fall back again, following a cosine curve. The schematic in Figure 1 shows the essentials.
Figure 1 A simple oscillator with some added logic generates single pulses when triggered.
How the oscillator works
The oscillator’s core is almost identical to the original, though it looks different, having been redrawn. Its basic form is that of an integrator-with-Schmitt, where C1 is charged up through resistors R2 and R3 until its voltage reaches a positive threshold defined by D3, which flips A1b’s polarity, so that C1 starts to discharge towards D4’s negative threshold. D1/D2 provide bootstrapping to give linear charge/discharge ramps while compensating for variations in D3/D4’s forward voltages with temperature (and supply voltage, though that should not worry us here). The resulting triangle wave on A2’s output is fed through R7 into D5/D6 which squash it into a reasonable (co)sine wave (<0.5% THD). The diode pairs’ forward voltages need to be matched to maintain symmetry and so minimize even-harmonic distortion. A4 amplifies the signal across D5/6 so that the pulse just spans the supply rails, thermistor Th1 giving adequate compensation for temperature changes.
If A2’s output were connected directly to R1’s input, the circuit would oscillate freely—and we’ll allow it to later on—but for now we need it to start at its lowest point, make one full cycle, and then stop.
In the resting condition, U2a is clear and A1b’s output is high, producing a positive reference voltage across D3. (That’s positive with respect to the common, half-supply internal rail.) That voltage is inverted by A2a and applied through U1a to R1, so that there is negative feedback round the circuit, which stabilizes at the negative reference. (Using a ‘4053 for U1 may seem wasteful, but the other sections of it will come in handy in Part 2.)
When U2a’s D input sees a (positive-going) trigger, its outputs change state. This way, U1a connects R1 to A1b’s (still high) output, starting the cycle; the feedback is now positive. After a full cycle, A1b’s output goes high again, triggering U2b and resetting U2a, thus stopping the cycle and restoring the circuit to its resting state. The relevant waveforms are shown in Figure 2.
Figure 2 Some waveforms from the circuit in Figure 1.
Comparing raised cosines with ideal normal-distribution pulses is instructive, and Figure 3 shows both. While the curves match reasonably well over most of their height, the bottom third or so is somewhat wanting, though it can be improved on with some extra complexity—but that’s for later.
Figure 3 A comparison between an ideal normal-distribution curve and a raised cosine, including the output from Figure 1.
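To quantify that mismatch, here is a short Python/NumPy sketch (mine, not part of the original circuit work) comparing a raised cosine against a Gaussian of equal width at half-height; the worst-case deviation of roughly 0.08 of full scale does indeed fall in the lower third of the curve:

import numpy as np

T = 1.0                                                # pulse duration at the baseline
t = np.linspace(0.0, T, 1001)
raised_cos = 0.5 * (1 - np.cos(2 * np.pi * t / T))     # baseline -> peak -> baseline

# Gaussian matched at half-height: the raised cosine's FWHM is T/2
sigma = (T / 2) / (2 * np.sqrt(2 * np.log(2)))
gauss = np.exp(-((t - T / 2) ** 2) / (2 * sigma ** 2))

print(f"max deviation: {np.max(np.abs(raised_cos - gauss)):.3f}")  # ~0.08, in the bottom third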
As previously mentioned, and apparent from the schematic, the circuit works as a simple oscillator if U2a’s operation is disabled by inhibiting its trigger input and jamming its preset input low to force its Q high and Q̄ low. U1a now connects A1b’s output to R1, and the circuit runs freely. Apart from being useful as a feature, this helps us to set it up.
Trimming the oscillatorA few trims, in the oscillator mode, are needed to get the best results.
- R3 must be set to give equal tri-wave amplitudes at the maximum and minimum settings of R2, or distortion will vary with frequency (or pulse width). Set R2 to max (lowest frequency) and R3 to min (towards the right on the schematic), then measure the amplitude at A1’s output. Now set R2 to min and adjust R3 to give the same amplitude as before. (Thanks to Steve Woodward for the idea behind this.)
- R7 defines the drive to the squashing diodes D5/6 and thus the distortion. Using a ‘scope’s FFT is preferable: adjust R7 to minimize the third and fifth harmonics. (The seventh remains fairly constant.) Failing that, set R7 so that the voltage across the diodes is precisely 2/3 of the tri-wave’s value. As a last resort, a 30k fixed resistor may be close enough, as it was in my build.
- Set the output level using R9. The waveform should run from rail to rail, just shaving the tips of the residual pips (which are mainly responsible for those seventh harmonics) from the peaks. Don’t overdo it, or the third and fifth harmonics will start to increase. This depends on using RRO op-amps for at least A1b and A2b and carefully split rails for symmetry.
Once trimmed as an oscillator, it’s good to go as a pulse generator, which relies on exactly the same settings, so that each pulse will be a single cycle of a cosine wave, offset by half its amplitude.
The schematic in Figure 1 gives the bare bones of the circuit, which will be fleshed out in Part 2. The op-amps used are Microchip MCP6022s, which are dual, 5-V, 10-MHz CMOS RRIO devices with <500 µV input offsets. Power is at 5 V, with the central “common” rail derived from another op-amp used as a rail-splitter, shown in Figure 4 together with a suitable output buffer.
Figure 4 A simple rail-splitter to derive the 2.5-V “common” rail, and an output level control and buffer with both AC- and DC-coupled outputs.
C1 can be switched to give several ranges, allowing use from way over 20 kHz (for 25 µs pulses, measured at half their height) down to as low as you like. R3 then also needs to be switched; see Figure 5 for a three-range version. (The lowest range probably won’t need an HF trim.) While the tri-wave performance is good to around 1 MHz, the squashing diodes’ capacitance starts to introduce waveform distortion well before that, at least for the 1N4148 or the like.
Figure 5 For multi-range use, timing capacitor C1 is switched. To trim the HF response for each range, R3 must also vary.
Improving the pulse shape
Now for that extra complexity to improve the pulse shape. In very crude terms, the top half of the desired pulse looks (co)sinusoidal but the bottom more exponential, and that part must be squashed even further if we want a better fit. We can do that by bridging D6 with a series pair of Schottky diodes, D7 and D8. The waveform’s resulting asymmetry needs offsetting, necessitating a slightly higher gain and different temperature compensation in the buffer stage A2b. These mods are shown in Figure 6.
Figure 6 Bridging D6 with a pair of Schottky diodes gives a better fit to the desired curve, though the gain and offset need adjusting.
In this mode, R16 sets the offset and R9A the gain. The three sections of U3 will:
- Switch Schottkys D7/8 into circuit
- Select the gain- and offset-determining components according to the mode
- Short out R8 to place the thermistor directly across R12 and optimize the temperature compensation of the pulse’s lower half
Figure 7 shows the modified pulse shape. Different diodes or combinations thereof could well improve the fit, but this seems close enough.
Figure 7 The improved pulse shape resulting from Figure 6.
To set this up, adjust R16 and R9A (which interact; sorry about that) so that the bottom of the waveform is at 0 V while the peaks are at a little less than 5 V. Because the top and bottom halves of each pulse rely on different diodes, their tempcos will be slightly different. The 0-V baseline is now stable, but the peak height will increase slightly with temperature.
To be continued…
By now, we’ve probably passed the point at which it’s simpler, cheaper, and more accurate to reach for a microcontroller (Arduino? RPi?) and add a DAC—or just use a PWM output, at these low frequencies—equip it with look-up tables (probably calculated and formatted using Python, rather like the reference curves in these Figures) and then worry about how to get continuous control of the repetition rate and pulse width. Or even just buy a cheap AWG, which is cheating, though practical.
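For a flavor of that look-up-table step, here is a minimal Python sketch (mine; the 8-bit DAC/PWM scaling is an assumption for illustration) that generates one raised-cosine pulse table:

import math

N = 256   # table length: one entry per timer tick across the pulse
lut = [round(127.5 * (1 - math.cos(2 * math.pi * i / (N - 1)))) for i in range(N)]
print(lut[0], max(lut), lut[-1])   # 0 255 0: baseline -> peak -> baseline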
But all that is a different kind of fun, and we have not yet finished with this approach. Part 2 will show how to add more tweaks so that we can also generate well-behaved step-functions.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- Squashed triangles: sines, but with teeth?
- Dual RRIO op amp makes buffered and adjustable triangles and square waves
- Arbitrary waveform generator waveform creation using equations
- 555 triangle generator with adjustable frequency, waveshape, and amplitude; and more!
- Adjustable triangle/sawtooth wave generator using 555 timer
- Voltage-controlled triangle wave generator
The post How to control your impulses—part 1 appeared first on EDN.
Chiplets diary: Controller IP complies with UCIe 1.1 standard
While physical layer (PHY) interconnect IP has been making headlines since the emergence of the Universal Chiplet Interconnect Express (UCIe) specification, a Korean design house has announced the availability of controller IP that complies with the UCIe 1.1 standard.
The PHY part in UCIe encompasses link initialization, training, power management states, lane mapping, lane reversal, and scrambling. On the other hand, UCIe’s controller part includes the die-to-die adapter layer and the protocol layer.
Openedges Technology calls it OUC, short for Openedges UCIe controller. Openedges, a supplier of memory subsystem IP, is based in Seoul, South Korea. Its controller IP extends on-chip AXI interconnections to die-to-die links, delivering multi-die connectivity across diverse applications.
The chiplet controller IP employs flits (flow control units) to manage reliability and latency, preventing overflow at the receiver buffer. It also ensures seamless communication by synchronizing AXI parameters with its link partner, accommodating different AXI configurations through padding and cropping as per the default operation rules defined in AXI.
The highly configurable UCIe controller IP facilitates die-to-die interconnect and protocol connections. Source: Openedges Technology
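As a mental model of that receiver-buffer protection, here is a generic credit-based flow-control sketch in Python; it is purely illustrative, and the actual UCIe adapter mechanics are defined by the specification, not by this code:

def phy_transmit(flit):
    print("sent", flit)                  # placeholder for the physical-layer hand-off

class CreditedLink:
    # Generic credit-based flit flow control: one credit per free receiver-buffer slot
    def __init__(self, rx_buffer_slots):
        self.credits = rx_buffer_slots

    def try_send(self, flit):
        if self.credits == 0:
            return False                 # hold the flit: the receiver buffer would overflow
        self.credits -= 1
        phy_transmit(flit)
        return True

    def on_credit_return(self, n=1):
        self.credits += n                # receiver has drained n flits from its buffer

link = CreditedLink(rx_buffer_slots=4)
link.try_send("flit0")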
In short, the new controller IP effortlessly integrates with the company’s on-chip interconnect IP. That synergy simplifies multi-chiplet interconnects while facilitating efficient bandwidth transfer capabilities.
Related Content
- TSMC, Arm Show 3DIC Made of Chiplets
- Chiplets Get a Formal Standard with UCIe 1.0
- How the Worlds of Chiplets and Packaging Intertwine
- Cadence and Arm launch ADAS chiplet development platform
- Imec’s Van den hove: Moving to Chiplets to Extend Moore’s Law
The post Chiplets diary: Controller IP complies with UCIe 1.1 standard appeared first on EDN.
A new era in electrochemical sensing technology
At the forefront of scientific exploration, electrochemical sensing is an indispensable and adaptable tool that impacts a diverse range of industries. From life and environmental science to industrial material and food processing, the ability to quantify chemicals can provide greater insight, elevating safety, efficiency and awareness.
In this era of advanced interconnected technology, the significance of low power and highly accurate electrochemical sensors cannot be overstated. In our homes, connected devices allow us to monitor the quality of our air, water, and soil for our plants.
Across the industry, there is even greater demand. Smart medical devices, including wearables, move healthcare into the 21st century by providing real-time continuous monitoring of patient vital signs both inside and outside of clinical facilities, improving insight and increasing quality of care.
Similarly, the expanse of Industry 4.0 in manufacturing and industrial automation has seen many sectors deploy extensive networks of sensing nodes in order to improve their efficiency and safety. Sensors can monitor toxic gasses created during various industrial processes and enable feedback systems in industrial equipment. In food processing, the detection of spoilage and allergenic substances is essential—electrochemical sensors can help to automate pre-cooking taste verification, reporting pH levels and detecting histamines.
Whether it’s monitoring glucose levels in diabetic patients, assessing environmental pollutants, ensuring food safety, or characterizing materials at the atomic level, electrochemical sensors play a pivotal role in advancing scientific knowledge and improving our quality of life.
This article will explore the principles that support electrochemical sensing, the requirements for effective sensor performance, and how an analog front-end (AFE) device can serve as a bridge for current measurement and analysis, and it will delve into specific examples of how these sensors are utilized in medical, environmental, food, and material science applications.
Electrochemical sensor requirements
The typical setup for an electrochemical sensor in electronic engineering involves a three-electrode system, an arrangement seen across many other sensor types (Figure 1).
Figure 1 Two diagrams indicate the construction of a typical electrochemical sensor. Source: onsemi
Within the sensor, there is a substrate surface material which acts as a protective layer for the sensing electrode. This material’s primary function is to regulate the quantity of molecules that can access the electrode surface and filter out any undesirable particles that may impact the accuracy of the sensor.
At the core of the sensor are three main parts. The working electrode (WE) is where the electrochemical reaction takes place. As particles impact the WE, a reaction occurs, creating either a loss or gain of electrons, leading to electron flow and the production of current. Maintaining a constant potential at the WE is vital, as it enables accurate measurement of the current generated by redox reactions (Figure 1).
The counter electrode (CE) supplies sufficient current to balance out the redox reactions happening at the WE, creating a complementary pair, while the reference electrode (RE) measures the potential of the WE and provides feedback to establish the CE voltage.
Figure 2 The circuit diagram highlights an electrochemical sensor design. Source: onsemi
The high-side resistance in an electrochemical sensor (Figure 2) is an undesired factor that should be minimized, which can be achieved by positioning the RE near the WE. The current flowing through the lower-side resistance indicates the output of the electrochemical measurement and is therefore used to derive the sensor’s output voltage.
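In code, that measurement chain reduces to Ohm’s law plus the sensor’s rated sensitivity. Here is a minimal Python sketch with hypothetical numbers (the 10-kΩ load, 1.24-V reference, and 70-nA/ppm sensitivity are illustrative values, not from any particular datasheet):

def sensor_current_ua(v_out, v_ref, r_load_ohms):
    # Working-electrode current, from the voltage across the low-side resistor
    return (v_out - v_ref) / r_load_ohms * 1e6

def concentration_ppm(i_ua, sensitivity_na_per_ppm):
    # Amperometric sensors: output current is proportional to concentration
    return i_ua * 1000 / sensitivity_na_per_ppm

i_ua = sensor_current_ua(v_out=1.275, v_ref=1.240, r_load_ohms=10_000)   # 3.5 uA
print(f"{concentration_ppm(i_ua, sensitivity_na_per_ppm=70):.0f} ppm")   # 50 ppm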
Whether an electrochemical sensor is being used in consumer, healthcare, or industrial applications, there are several key technical requirements set by designers that sensors must meet. Factors like high accuracy and low noise go without saying, but alongside this, electrochemical sensors must allow for simple calibration to help cater for the wide range of applications—as packaging or usage may influence calibration, either immediately or over time.
Moreover, with many electrochemical sensors being deployed in portable or low-power solutions, such as wearable medical technology or industrial technology nodes, there are a number of packaging requirements that must be addressed.
Engineers require solutions that feature low-power operation, thus supporting battery-powered applications, and that are miniaturized and flexible, allowing for various sensor configurations and easy system integration. Intelligent pre-processing is another important feature on many engineers’ radars, as it can enable more sophisticated calibration and noise filtering, supporting more accurate data delivery.
Common sensor applications
Electrochemical sensors are extensively utilized for several purposes in life science and healthcare, including detecting blood alcohol levels and facilitating continuous glucose monitoring (CGM)—a critical component in the management of diabetes, a chronic illness that affects 1 in 11 people worldwide. The CGM device market is projected to grow at a compound annual growth rate (CAGR) of 9% from 2023 to 2032.
Targeting the latest clinical and portable medical devices, a miniaturized AFE is employed for highly accurate measurement of electrochemical currents. The combination of ultra-low-power consumption, flexible configuration, and small size makes it a compelling solution wherever an electrochemical sensor is used.
Beyond medical sciences, electrochemical sensors are ideal for detecting toxic gasses in industrial applications, or for measuring pollution and air quality in environmental applications. They employ a chemical reaction between the target gas and an electrode, generating an electrical current proportional to the specified gas concentration.
The 20-mm electrochemical sensors are widespread and are available for several toxic gasses, including carbon monoxide, hydrogen sulfide, and oxides of nitrogen and sulfur, and allow for simple ‘drop-in’ replacement. These sensors are utilized in a diverse array of applications, spanning from air quality sensors in urban settings to smart agricultural applications for monitoring plant growth.
Similarly, electrochemical sensors such as potentiostat or corrosion sensors are crucial in environments such as laboratories, mining operations, and material production. They serve as important tools for providing feedback within production systems and managing hazardous substances, ensuring the safety of the operation.
In search of increased yield and production efficiency, food production has also turned to electrochemical sensors. Here, both handheld portable devices and larger automations are deployed for food quality control, ensuring taste and identifying spoilage, allergens or hazardous chemicals.
Sensor design blueprint
Sensors based on electrochemical measurements are readily available. From healthcare and glucose monitoring to broader environmental applications, these sensors provide a complete solution that is designed to increase reliability, accuracy and improve the user experience of wearables and portable medical devices.
These solutions can, for instance, pair an AFE for continuous electrochemical measurement with a Bluetooth Low Energy 5.2-enabled microcontroller. Such integrations play a crucial role in making devices smaller and ensuring long-lasting functionality—a vital factor for battery-powered solutions.
The solution, built around the CEM102 AFE and RSL15 microcontroller, is complemented with development support, firmware, and software, including iOS and Android demo applications (Figure 3).
Figure 3 Example screens from the demo applications for iOS and Android platforms. Source: onsemi
There is also a CEM102 evaluation board complete with sample code for setting up and conducting measurements with CEM102, making it easier to begin system development. This combined offering is designed to streamline development and promote greater integration and innovation for the next generation of amperometric sensor technologies.
During operation, the CEM102’s function is to connect the sensor network to the digital processing. It is responsible for conditioning the sensor by applying the necessary signals to the electrodes and ensuring accurate measurement from the sensor network, while the RSL15 connects the sensor to wireless Bluetooth LE networks (Figure 4).
Figure 4 Here is how the CEM102 + RSL15 combo facilitates a wireless electrochemical sensing solution. Source: onsemi
Advancing scientific research
The precise measurement provided by electrochemical sensors is a critical enabler for advancing scientific knowledge. For example, by carefully examining factors such as glucose levels, researchers can obtain valuable insight into chronic illnesses like diabetes. This knowledge can enhance our understanding and expedite innovation, ultimately benefiting a significant portion of the global population.
In the ever-evolving world of electronics, companies require pioneering solutions that not only redefine expectations but also allow for shorter time to market and increased flexibility to provide scope for new applications. From remote healthcare to environmental monitoring and industrial safety, electrochemical sensors fulfill a diverse range of applications and have a significant impact on society.
And the potential of this versatility extends far beyond current applications. Through manufacturing support and collaboration, electrochemical sensors can contribute to advancing research and enhancing comprehension in the medical field and beyond.
The ongoing development of smart technology, along with complementary technologies such as artificial intelligence and machine learning, will drive the growing influence of electrochemical sensors on our lives, resulting in the emergence of new innovations and the effective resolution of many longstanding global challenges.
Hideo Kondo is product marketing engineer at onsemi’s Analog Mixed-Signal Group.
Related Content
- Alchimer improves electrochemical coating process
- NEXX licenses Alchimer’s electrochemical coating process
- AFE facilitates electrochemical and impedance measurement
- ADI: impedance & potentiostat AFE for biological and chemical sensing
- Configurable sensor AFE solution aims to simplify sensor systems designs, speeds time-to-market
The post A new era in electrochemical sensing technology appeared first on EDN.
Ground strikes and lightning protection of buried cables
There was a recent lightning incident where fifty people were hurt while standing on wet soil at the moment of a nearby lightning strike that caused an electrical current to flow through the ground. Seven people were hospitalized but fortunately, there were no fatalities.
The incident raises a point that I have seen made as to whether overhead power lines are more prone or less prone to lightning strike damage than buried power lines.
The issue is not as simple as some would have you believe.
Consider the following image in Figure 1.
Figure 1 An elevated power line and a lightning strike where the power line is isolated from the wet soil current. Source: John Dunn
Apart from a direct strike to the power line itself (I once saw that very thing happen but that’s a separate story), an overhead power line is pretty much isolated and protected from the wet soil’s current paths.
However, if the power line is buried, the wet soil’s current paths can impinge on the power line in much the same way as the lightning currents impinged on those fifty people (see Figure 2).
Figure 2 A buried power line and a lightning strike where the power line is subjected to the wet soil current. Source: John Dunn
It has been suggested from time to time that power line burial is a guaranteed way to protect any power line from a lightning event. That may or may not be true depending on many circumstances, but power line burial is NOT an absolute panacea, not by any means.
Soil composition, the presence or absence of nearby structures, the presence or absence of water mains, various lightning arrestor arrangements, dollar expenditures for excavation efforts, and so forth must all be assessed by experts, which I most definitely am NOT.
In the midst of many buildings—many tens of stories tall—within the borough of Manhattan, New York City, many power lines are located below ground, underneath all sorts of concrete and asphalt. In the borough of Queens, however, where I grew up (Rego Park to be precise), overhead power lines are found all over the place.
There are no simple answers and no clear-cut conclusions. Rather, this essay’s purpose is merely to dispel any simplistic thinking about the issue.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Lightning rod ball
- Teardown: Zapped weather station
- No floating nodes
- Why do you never see birds on high-tension power lines?
- Birds on power lines, another look
- A tale about loose cables and power lines
- Shock hazard: filtering on input power lines
- Misplaced insulator proves fatal
The post Ground strikes and lightning protection of buried cables appeared first on EDN.
ADAS and autonomous vehicles with distributed aperture radar
The automotive landscape is evolving, and vehicles are increasingly defined by advanced driver-assistance systems (ADAS) and autonomous driving technologies. Moreover, radar is becoming increasingly popular for ADAS applications, offering multiple benefits over rival technologies such as cameras and LiDAR.
It’s a lot more affordable, and it also operates more efficiently in challenging conditions, such as in the dark, when it’s raining or snowing, or even when sensors are covered in dirt. As such, radar sensors have become a workhorse for today’s ADAS features such as adaptive cruise control (ACC) and automatic emergency braking (AEB).
However, improved radar performance is still needed to ensure reliability, safety, and convenience of ADAS functions. For example, the ability to distinguish between objects like roadside infrastructure and stationary people or animals, or to detect lost cargo on the road, are essential to enable autonomous driving features. Radar sensors must provide sufficient resolution and accuracy to precisely detect and localize these objects at long range, allowing sufficient reaction time for a safe and reliable operation.
A radar’s performance is strongly influenced by its size. A bigger sensor has a larger radar aperture, which typically offers a higher angular resolution. This delivers multiple benefits and is essential for the precise detection and localization of objects in next-generation safety systems.
Radar solutions for vehicles are limited by size restrictions and mounting constraints, however. Bigger sensors are often difficult to integrate into vehicles, and the advent of electric vehicles has resulted in front grills increasingly being replaced with other design elements, creating new constraints for the all-important front radar.
With its modular approach, distributed aperture radar (DAR) can play a key role in navigating such design and integration challenges. DAR builds on traditional radar technology, combining multiple standard sensors to create a solution that’s greater than the sum of its parts in terms of performance.
Figure 1 DAR combines multiple standard sensors to create a more viable radar solution. Source: NXP
The challenges DAR is addressing
To understand DAR, it’s worth looking at the challenges the technology needs to overcome. Traditional medium-range radar (MRR) sensors feature 12-16 virtual antenna channels. This technology has evolved into high-resolution radars, which provide enhanced performance by integrating far more channels onto a sensor, with the latest production-ready sensors featuring 192 virtual channels.
The next generation of high-resolution sensors might offer 256 virtual channels with innovative antenna designs and software algorithms for substantial performance gains. Alternative massive MIMO (M-MIMO) solutions are about to hit the market packing over 1,000 channels.
Simply integrating thousands of channels is incredibly hardware-intensive and power-hungry. Each channel consumes power and requires more chip and board area, contributing to additional costs. As the number of channels increases, the sensor becomes more and more expensive, while the aperture size remains limited by the physical realities of manufacturing and vehicle integration considerations. Meanwhile, the large size and power consumption of an M-MIMO radar make it difficult to integrate with the vehicle’s front bumper.
Combining multiple radars to increase performance
DAR combines two or three MRR sensors, operated coherently together to provide enhanced radar resolution. The use of two physically displaced sensors creates a large virtual aperture enabling enhanced azimuth resolution of 0.5 degrees or lower, which helps to separate closely spaced objects.
Figure 2 DAR enhances performance by integrating far more channels onto a sensor. Source: NXP
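The aperture-to-resolution link is roughly the classic λ/D beamwidth rule. Here is a back-of-envelope Python sketch for a 77-GHz automotive radar (a crude approximation that ignores array weighting and processing details; the 10-cm and 45-cm apertures are illustrative assumptions):

import math

def angular_resolution_deg(aperture_m, freq_hz=77e9):
    wavelength_m = 3e8 / freq_hz             # ~3.9 mm at 77 GHz
    return math.degrees(wavelength_m / aperture_m)

print(f"{angular_resolution_deg(0.10):.1f} deg")   # one ~10-cm sensor: ~2.2 deg
print(f"{angular_resolution_deg(0.45):.1f} deg")   # sensors displaced ~45 cm: ~0.5 deg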
The image can be further improved using three sensors, enhancing elevation resolution to less than 1 degree. The higher-resolution radar helps the vehicle navigate complex driving scenarios while recognizing debris and other potential hazards on the road.
The signals from the sensors, based on an RFCMOS radar chip, are fused coherently to produce a significantly richer point cloud than has historically been practical. The fused signal is processed using a radar processor, which is specially developed to support distributed architectures.
Figure 3 Zendar is a software-driven DAR technology. Source: NXP
Zendar is a DAR technology whose system software is being developed for deployment in automobiles. The performance improvement is software-driven, enabling automakers to leverage low-cost, standard radar sensors yet attain performance that’s comparable to or better than the top-of-the-line high-resolution radar counterparts.
How DAR compares to M-MIMO radars
M-MIMO is an alternative high-resolution radar solution that embraces the more traditional radar design paradigm, which is to use more hardware and more channels when building a radar system. M-MIMO radars feature between 1,000 and 2,000 channels, which is many multiples more than the current generation of high-resolution sensors. This helps to deliver increased point density, and the ability to sense data from concurrent sensor transmissions.
The resolution and accuracy performance of radar are limited by the aperture size of the sensor; however, M-MIMO radars with 1,500 channels have apertures that are comparable in size to high-resolution radar sensors with 192 channels. The aperture itself is limited by the sensor size, which is capped by manufacturing and packaging constraints, along with size and weight specifications.
As a result, even though M-MIMO solutions can offer more channels, DAR systems can outperform M-MIMO radars on angular resolution and accuracy performance because their aperture is not limited by sensor size. This offers significant additional integration flexibility for OEMs.
M-MIMO solutions are expensive because they use highly specialized and complex hardware to improve radar performance. The cost of M-MIMO systems and their inherently unscalable hardware-centric design make them impractical for everything but niche high-end vehicles.
Such solutions are also power-hungry due to significantly increased hardware channels and processing requirements, which drive expensive cooling measures to manage the radar’s thermal design, in turn creating additional design and integration challenges.
More efficient, cost-effective solution
DAR has the potential to revolutionize ADAS and autonomous driving accessibility by using simple, efficient, and considerably more affordable hardware that makes it easy for OEMs to scale ADAS functionality across vehicle ranges.
Coherent combining of distributed radar is the only radar design approach where aperture size is not constrained by hardware, enabling an angular resolution lower than 0.5 degrees at significantly lower power dissipation. This is simply not possible in a large single sensor with thousands of antennas, and it’s particularly relevant considering OEM challenges with the proliferation of electric vehicles and the evolution of car design.
DAR’s high resolution helps it to differentiate between roadside infrastructure, objects, and stationary people or animals. It provides a higher probability of detection for debris on the road, which is essential for avoiding accidents, and it’s capable of detecting cars up to 350 m away—a substantial increase in detection range compared to current-generation radar solutions.
Figure 4 DAR’s high resolution provides a higher probability of detection for debris on the road. Source: NXP
Leveraging the significant detection range extension enabled by an RFCMOS radar chip, DAR also provides the ability to separate two very low radar cross section (RCS) objects such as cyclists, beyond 240 m, while conventional solutions start to fail around 100 m.
Simpler two-sensor DAR solutions can be used to enable more effective ACC and AEB systems for mainstream vehicles, with safety improvements helping OEMs to pass increasingly stringent NCAP requirements.
Perhaps most importantly for OEMs, DAR is a particularly cost-effective solution. The component sensors benefit from economies of scale, and OEMs can achieve higher autonomy levels by simply adding another sensor to the system, rather than resorting to complex hardware such as LiDAR or high-channel-count radar.
Because the technology relies on existing sensors, it’s also much more mature. Current ADAS systems are not fully reliable—they can disengage suddenly or find themselves unable to handle driving situations that require high-resolution radar to safely understand, plan and respond. This means drivers must be on standby to react and take over control of the vehicle at a moment’s notice. The improvements offered by DAR will enable ADAS systems to be more capable, more reliable, and demand less human intervention.
Changing the future of driving
DAR’s effectiveness and reliability will help carmakers deliver enhanced ADAS and autonomous driving solutions that are more reliable than current offerings. With DAR, carmakers will be able to develop driving automation that is both safer and provides more comfortable experiences for drivers and their passengers.
For a new technology, DAR is already particularly robust as it relies on the mainstream radar sensors which have already been used in millions of cars over the past few years. As for the future, ADAS using DAR will become more trusted in the market as these systems provide comprehensive and comfortable assisted driving experiences at more affordable prices.
Karthik Ramesh is marketing director at NXP Semiconductors.
Related Content
- Radar Basics: Range, Pulse, Frequency, and More
- Is Digital Radar the Answer to ADAS Interference?
- Cameras, Radars, LiDARs: Sensing the Road Ahead
- Challenges in designing automotive radar systems
- Automated Driving Is Transforming the Sensor and Computing Market
- Implementing digital processing for automotive radar using SoC FPGAs
The post ADAS and autonomous vehicles with distributed aperture radar appeared first on EDN.
Client DIMM chipset reaches 7200 MT/s
A memory interface chipset from Rambus enables DDR5 client CSODIMMs and CUDIMMs to operate at data rates of up to 7200 MT/s. This product offering includes a DDR5 client clock driver (CKD) and a serial presence detect (SPD) hub, bringing server-like performance to the client market.
The DDR5 client clock driver, part number DR5CKD1GC0, buffers the clock between the host controller and the DRAMs on DDR5 CUDIMMs and CSODIMMs. It receives up to four differential input clock pairs and supplies up to four differential output clock pairs. The device can operate in single PLL, dual PLL, and PLL bypass modes, supporting clock frequencies from 1600 MHz to 3600 MHz (DDR5-3200 to DDR5-7200). An I2C/I3C sideband bus interface allows device configuration and status monitoring.
Equipped with an internal temperature sensor, the SPD5118-G1B SPD hub senses and reports important data for system configuration and thermal management. The SPD hub contains 1024 bytes of nonvolatile memory arranged as 16 blocks of 64 bytes per block. Each block can be optionally write-protected via software command.
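Two of those numbers invite quick arithmetic: DDR transfers data twice per clock, and the hub’s 1024-byte NVM decomposes into block/offset pairs. Here is a small Python illustration (the address-to-block mapping is simply the arithmetic implied by the 16 × 64-byte layout described above, not a register-level description):

def data_rate_mts(clock_mhz):
    # DDR: two transfers per clock, so 1600 MHz -> 3200 MT/s and 3600 MHz -> 7200 MT/s
    return 2 * clock_mhz

def spd_block(addr):
    # 1024 bytes as 16 blocks x 64 bytes: returns (block 0-15, offset 0-63)
    assert 0 <= addr < 1024
    return divmod(addr, 64)

print(data_rate_mts(3600))   # 7200
print(spd_block(700))        # (10, 60)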
The DR5CKD1GC0 client clock driver is now sampling, while the SPD5118-G1B SPD hub is already in production. To learn more about the DDR5 client DIMM chipset, click here.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Client DIMM chipset reaches 7200 MT/s appeared first on EDN.