EDN Network
The CEO office transitions at Microchip and Wolfspeed
As we near the end of the year, two CEOs at prominent semiconductor firms are leaving, and in both cases the chairman of the board is stepping in as interim CEO. What's common to both companies is the quest for a turnaround in the rapidly evolving semiconductor market.
First, Ganesh Moorthy, president and CEO, is leaving Microchip, and chairman Steve Sanghi is taking back the top job at the Chandler, Arizona-based semiconductor firm. While the announcement states that Moorthy is retiring after his nearly three-year stint in the corner office, the fact that Sanghi is back at the helm immediately doesn't exactly signal a smooth transition.
Figure 1 Before joining Microchip, Moorthy was CEO of Cybercilium, the company he co-founded in Tempe, Arizona.
Sanghi, who will remain chairman, is taking charge as interim president and CEO. Moorthy joined Microchip as VP of advanced microcontrollers and automotive division in 2001, and he was appointed chief operating officer before being elevated to the CEO job in 2021. He had served at Intel for 19 years before his stints at Cybercilium and Microchip.
Microchip has been confronting an inventory glut and sales slump for some time, and its shares are down 28% in 2024. Sanghi's statement on taking charge as CEO clearly points toward an aim to return to revenue growth and profitability.
Then there is the news about Wolfspeed's CEO change, and it's more startling and less subtle. The Wolfspeed board has ousted CEO Gregg Lowe without cause, and as at Microchip, chairman of the board Thomas Werner is taking over as interim CEO until Wolfspeed finds Lowe's replacement.
Lowe, who spearheaded Freescale’s sale to NXP in 2015 as CEO, took the helm of Cree in 2017 and transformed it from an LED lighting company to a silicon carbide (SiC) IDM. During this transformation under Lowe, the company acquired a new name: Wolfspeed. Also, during this time, Infineon made a failed attempt to acquire Wolfspeed.
However, the Durham, North Carolina-based chipmaker seems to have failed to translate its enviable position as a pure-play SiC company into success in this high-growth market, and that probably sums up Lowe's ouster. It's apparent from Werner's statement announcing the CEO transition: "Wolfspeed is materially undervalued relative to its strategic value, and I will focus on driving the company's priorities to explore options to unlock value."
Figure 2 Lowe sold off Cree’s LED lighting business and turned the sole focus on SiC under the Wolfspeed brand.
For a start, Wolfspeed has been struggling in the transition from 150-mm to 200-mm SiC wafers. It has also been facing slowing orders from the electric vehicle (EV), industrial, and renewable energy markets. The company recently dropped plans to build a SiC fab in Ensdorf, Germany.
These two CEO office transitions don't come as a surprise to semiconductor industry watchers. And they surely won't be the last as we are about to enter 2025. The semiconductor industry is highly competitive, and the stakes are even higher when you are a vertically integrated chip outfit.
Related Content
- Microchip, Micrel CEOs Duel Over Deal
- CEO interview: Microchip’s Steve Sanghi
- Wolfspeed Set to Invest $5 Billion in SiC Expansion
- Wolfspeed to Build 200-mm SiC Wafer Fab in Germany
- CEO Sanghi bets Microchip’s future on power, connectivity
The post The CEO office transitions at Microchip and Wolfspeed appeared first on EDN.
In-situ software calibration of the flying capacitor PGINASH
A recent design idea, “Negative time-constant and PWM program a versatile ADC front end,” offered a pretty peculiar ADC front end (see Figure 1). It comprises a programmable gain (PG) instrumentation amplifier (INA). It uses PWM control of a flying capacitor to implement a 110-dB CMRR, high impedance differential input and negative time-constant exponential amplification with more than 100 discrete programmable gain steps. It’s then topped off with a built-in sample and hold (S&H). Hence PGINASH. Catchy. Ahem.
Figure 1 PGINASH: An unconventional ADC front end with INA inputs, programmable gain, and sample and hold.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Due to A1c’s gain of (R3 / R2 + 1) = 2, during the PWM = 1 gain accumulation phase the connection established from U1c’s output through U2a and R1 to C creates positive feedback that makes the voltage captured on C multiply exponentially with a (negative) time-constant Tc of (nominally):
Tc = R1 × (C + Cstray)
= 14.3 kΩ × (0.001 µF + (8 pF (from U2a) + 1 pF (from U1c)))
= 14.3 kΩ × 1009 pF = 14.43 µs
= 10 µs / ln(2)
G = gain increment of 2^0.1 = 1.0718 = 0.6021 dB per µs of accumulation time T
G10 = G^10 = 2.0 = 6.021 dB per 10 µs of T
This combines with A1c's fixed gain of two to total:
Nominal net gain = 2 × 2^(T/10 µs)
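As a quick sanity check, the nominal time-constant and gain arithmetic above can be reproduced in a few lines of Python (component values are taken from the text; the helper name is mine):

```python
import math

# Sanity check of the nominal time-constant and gain numbers quoted above.
R1 = 14.3e3                 # ohms
C = 0.001e-6                # flying capacitor, farads
C_stray = 8e-12 + 1e-12     # U2a switch + U1c input strays, farads

Tc = R1 * (C + C_stray)     # negative time-constant
G = 2 ** 0.1                # gain increment per µs of accumulation
G10 = G ** 10               # gain per 10 µs of accumulation

def net_gain(T_us):
    """Nominal net gain: A1c's fixed gain of 2 times the exponential term."""
    return 2 * 2 ** (T_us / 10)

print(round(Tc * 1e6, 2))            # 14.43 µs, i.e., 10 µs / ln(2)
print(round(G, 4), round(G10, 3))    # 1.0718 2.0
print(net_gain(20))                  # 8.0
```

The 14.43-µs result confirms that the strays nudge the time constant to almost exactly 10 µs/ln(2), which is what makes the "gain doubles every 10 µs" bookkeeping work out.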
Of course, the keyword here is "nominally." Both R1 and C will have nonzero tolerances, perhaps as poor as ±1%, and ditto for R2 and R3. Moreover, further time-constant (and therefore gain) error can arise from U2's switch-to-switch ON-resistance mismatches. The net bad news, pessimistically assuming worst-case mutual reinforcement of all the time-constant component tolerances, is that A1c's gain may vary by ±2% and G by as much as ±3%. This is far from adequate for precision data acquisition! What to do?
The following sequence is suggested as a simple software-based in-circuit calibration method. It uses a connected ADC and requires just two calibration voltages to be manually connected to the INA inputs as calibration progresses, to combat the various causes of front-end error.
Gain error
The first calibration voltage (Vcal) is used to explicitly measure the as-built gain factors. Here's how it works:
Vcal = Vfs/Vheadroom
where
Vfs = ADC full-scale Vin
Vheadroom = (2 × 1.02) × (2 × 1.04)^2 ≈ 8.8
e.g., if Vfs = 5 V, Vcal = 0.57 V
Vcal's absolute accuracy isn't particularly important; ±1% is plenty adequate. But it should be stable to better than 1 LSB during the calibration process. Connect Vcal to the INA inputs, then take two ADC conversions: D1 with gain accumulation time T = 10 µs and D2 with T = 20 µs. Thus, if 2x = the as-built A1c gain and G = the as-built exponential gain, the ADC will read:
D1 = ADC(2x *G10*Vcal)
D2 = ADC(2x*G10*G10*Vcal)
Averaging a number (perhaps 16) acquisitions of each value is probably a good idea for best accuracy. The next step is some arithmetic:
D2/D1 = (2x*G10*G10*Vcal)/(2x*G10*Vcal) = G10
D1/ (G10*Vcal) = (2x*G10*Vcal)/(G10*Vcal) = 2x
G = (G10)^0.1
That wasn't so bad, was it? Now, if we want to set (almost) any desired conversion gain Y, we just need to compute a gain accumulation interval of:
T(µs) = log(Y/2x)/log(G)
Note that if this math yields T < 1 µs, we'll need to bump Y for some extra time (and gain) to allow for capacitor "flight" and signal acquisition.
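The whole two-conversion gain calibration can be sketched in Python. The as-built gains and the ideal, noiseless ADC below are hypothetical stand-ins for real hardware readings, just to show that the arithmetic recovers G10, 2x, and the accumulation time T:

```python
import math

# Hypothetical as-built values (invented for illustration) that the
# calibration is supposed to recover; an ideal, noiseless ADC is assumed.
A1c_gain_true = 2.03    # "2x": A1c's nominal gain of 2, off by tolerance
G10_true = 1.95         # as-built exponential gain per 10 µs of T

Vcal = 0.57             # calibration voltage for Vfs = 5 V (5 / 8.8)

# Two conversions: T = 10 µs (one G10 factor) and T = 20 µs (two factors).
D1 = A1c_gain_true * G10_true * Vcal
D2 = A1c_gain_true * G10_true ** 2 * Vcal

# The calibration arithmetic from the text.
G10 = D2 / D1               # as-built gain per 10 µs
two_x = D1 / (G10 * Vcal)   # as-built A1c gain
G = G10 ** 0.1              # as-built gain per µs

def accumulation_time_us(Y):
    """Gain-accumulation interval T for a desired net conversion gain Y."""
    return math.log(Y / two_x) / math.log(G)

print(round(G10, 3), round(two_x, 3))               # 1.95 2.03
print(round(accumulation_time_us(two_x * G10), 1))  # 10.0
```

Averaging (say) 16 real acquisitions into each of D1 and D2 before doing this division, as the text suggests, would carry over unchanged.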
Input offset error
There is, however, another error source we haven't covered: U1 input offsets. Although the TLV9164's typical offset is only 200 µV, the max can range as high as 1.2 mV. If uncorrected, the three input amplifiers' offsets could sum to 3.6 mV. This would render the upper gain range of our amplifier of little value. To fix it, we need another input voltage reference (Vzero), some more arithmetic, and another ADC conversion to measure the offset Voff and allow software subtraction. We'll use lots of gain to get plenty of resolution. Vzero should ideally be accurate and stable to <10 µV to take full advantage of the TLV9164's excellent 0.25 µV/°C drift spec.
Let Vzero = 4.00mV
N = log(Vfs/(0.008 V × 2x))/log(G)
D3 = ADC(2x*GN*(Vzero + Voff))
Voff = D3/(2x*GN) – Vzero
And there you have it. To accurately massage any raw ADC result into the actual Vin input that produced it, write:
Vin = (ADC(Vin)/(2x*GN)) – Voff
But avoid GN > Vfs /(2x*Voff). Otherwise A1c and the ADC may be driven into saturation by amplified offset. Also, things may (okay, will) get noisy.
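A matching sketch of the offset step, again with hypothetical as-built values standing in for real conversions, shows how Voff falls out of the arithmetic and how a raw reading is then corrected:

```python
import math

# Hypothetical numbers for illustration: gain factors as recovered by the
# gain-calibration step, plus a "true" offset the software cannot see.
two_x, G10 = 2.0, 2.0
G = G10 ** 0.1
Voff_true = 0.0012    # 1.2 mV, the TLV9164 worst-case single-amp offset
Vzero = 0.004         # the 4.00-mV reference
Vfs = 5.0             # ADC full scale

# Accumulation steps N per the text's formula; the 0.008-V factor
# leaves headroom above Vzero.
N = math.log(Vfs / (0.008 * two_x)) / math.log(G)

# One high-gain conversion (ideal ADC assumed), then extract the offset.
D3 = two_x * G ** N * (Vzero + Voff_true)
Voff = D3 / (two_x * G ** N) - Vzero

def corrected_vin(adc_reading, accum_steps):
    """Massage a raw ADC result into the actual Vin that produced it."""
    return adc_reading / (two_x * G ** accum_steps) - Voff

print(round(Voff * 1e3, 2))                  # 1.2 (mV)
print(round(corrected_vin(D3, N) * 1e3, 2))  # 4.0 (mV): Vzero, offset removed
```

The final correction line is exactly the Vin = (ADC(Vin)/(2x*GN)) – Voff expression above, applied in software.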
Okay. But what about…
Leakage current error
The leakage current conundrum comes from the fact that the negative time-constant current from U1c through R1 isn't the only source of gain-phase charge for C. Unfortunately, leakage currents from U2's X pin and U1's noninverting input also contribute a mischievous share. U1's contribution is a negligible 10 pA or so, but U2's can be large enough to become problematic.
The burning question is: How much do HC4053 switches really leak? Reeeeeally? Datasheets are of surprisingly little help, with the answer seeming to span a literally million-to-one range, from pA to µA.
Figure 2 quantifies the result for some plausible 100 pA to 1 µA numbers.
Figure 2 The input-referred, current-equivalent voltage offsets.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Negative time-constant and PWM program a versatile ADC front end
- Simulating the front-end of your ADC
- Parsing PWM (DAC) performance: Part 1—Mitigating errors
- PWM DAC settles in one period of the pulse train
Speeding AI SoC development with NoC-enabled tiling
Employing network-on-chip (NoC) technology in system-on-chip (SoC) designs has been proven to reduce routing congestion and lower power consumption. Now, a new NoC-enabled tiling methodology helps speed development, facilitates scaling, participates in power reduction technology and contributes to increased design reuse for SoCs targeting artificial intelligence (AI) applications.
For these discussions, we will assume that AI encompasses use cases such as machine learning (ML) and inferencing.
Soft and hard tiles
One challenge in engineering is that the same term may be used to refer to different things. The term “tile,” for example, has multiple meanings. Some people equate tiles with chiplets, which are small, independent silicon dies, all presented on a common silicon or organic substrate or interposer. Chiplets may be thought of as “hard tiles.”
By comparison, many SoCs, including those intended for AI applications, employ arrays of processing elements (PEs), which can be considered “soft tiles.” For example, refer to the generic SoC depicted in Figure 1.
Figure 1 High-level block diagram shows SoC containing a neural processing unit (NPU). Source: Arteris
In addition to a processor cluster comprising multiple general-purpose central processing units (CPUs), along with several other intellectual property (IP) blocks, the SoC may also contain specialized processors or hardware accelerators. These units include an image signal processor (ISP), a graphics processing unit (GPU) and a neural processing unit (NPU), designed for high-performance, low-power AI processing.
In turn, the NPU comprises an array of identical PEs. In the not-so-distant past, these PEs were typically realized as relatively simple multiply-accumulate (MAC) functions, where MAC refers to a multiplication followed by an addition. By comparison, today’s SoCs often contain PEs with multiple IPs connected via an internal NoC.
Implementing soft tiling by hand
In the common SoC scenario we are considering here, NoCs may be employed at multiple levels in the design hierarchy. For example, a NoC can be used at the top level to connect the processor cluster, ISP, GPU, NPU and other IPs. NoCs may be implemented in various topologies, including ring, star, tree, mesh and more. Even at the top level of the SoC hierarchy, some devices may employ multiple NoCs.
As has already been noted, each PE in the NPU may consist of multiple IPs connected using an internal NoC. Furthermore, all the PEs in the NPU can be connected using a NoC, typically implemented as a mesh topology.
The traditional hand-crafted approach to implementing the NPU starts by creating a single PE. In addition to its AI accelerator logic, the PE will also contain one or more network interface units (NIUs) to connect the PE to the main mesh NoC. This is illustrated in Figure 2a.
Figure 2 This is how designers implement soft tiling by hand. Source: Arteris
If we assume that the NPU specification calls for a 4×4 array of PEs, the designer will replicate the PE 16 times using a cut-and-paste methodology (Figure 2b). Next, NoC tools will be used to auto-generate the NoC (Figure 2c). During this process, the NoC generator automatically assigns unique identifiers (IDs) to each of the NoC’s switching elements. However, the NIUs in the PEs will still have identical IDs; that is, the default ID from the PE’s creation.
For the NoC to transfer data from source nodes to destination nodes, the NIU in each PE must have a unique ID. This requires the designer to hand-modify each PE instance to provide it with its own ID. In addition to being time-consuming, this process is prone to error, which can impact downstream testing and verification.
This hand-crafted tiling technique poses several challenges. For example, changes to the PE specification are often made early in the process. For each change, the designer has two options: (a) manually replicate the change across all PE instances in the array, or (b) modify only the original PE and then repeat the entire hand-crafted soft tiling process. Both options are time consuming and error prone.
Also, performing soft tiling by hand is not conducive to scaling. If it becomes necessary to replace the original 4×4 array with an 8×8 version, such as for a derivative product, the process becomes increasingly cumbersome and problematic.
NoC-enabled tiling
The phrase “NoC-enabled tiling” refers to an emerging trend in SoC design. This evolutionary approach uses proven, robust NoC IP to facilitate scaling, condense design time, speed testing and reduce design risk.
NoC-enabled tiling commences with the designer creating a single PE as before. In this case, however, the NoC tools can be used to automatically replicate the PEs, generate the NoC and configure the NIUs in the PEs, all in a matter of seconds. The designer only needs to specify the required dimensions of the array.
Figure 3 This is how NoC-enabled tiling is carried out. Source: Arteris
In addition to dramatically speeding the process of generating the array, this “correct by construction” approach removes any chance of human-induced errors. It also enables the design team to quickly and easily accommodate change requests to the PE early in the SoC development process. Furthermore, it greatly facilitates scaling and design reuse, including the creation of derivative designs.
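As a toy illustration of this "correct by construction" idea, the replication and unique-ID assignment described above can be sketched in a few lines. The PE template, field names, and ID scheme here are invented for illustration and do not reflect any vendor's actual tool or netlist format:

```python
# A toy sketch of what a NoC generator automates: replicating one PE
# description across a cols × rows grid and giving each instance's NIU a
# unique ID derived from its grid position, with no hand edits.

def generate_tile_array(pe_template, cols, rows):
    tiles = []
    for y in range(rows):
        for x in range(cols):
            tile = dict(pe_template)        # replicate the soft tile
            tile["niu_id"] = y * cols + x   # unique NIU ID, by construction
            tile["mesh_coord"] = (x, y)     # position in the mesh NoC
            tiles.append(tile)
    return tiles

pe = {"name": "npu_pe", "niu_id": 0}        # hand-built template PE
array_4x4 = generate_tile_array(pe, 4, 4)   # the 4×4 NPU from the text

ids = [t["niu_id"] for t in array_4x4]
print(len(array_4x4))             # 16
print(len(set(ids)) == len(ids))  # True: every NIU ID is unique
```

Scaling to a derivative 8×8 array is then a one-argument change, `generate_tile_array(pe, 8, 8)`, which is the essence of why the automated flow scales where cut-and-paste does not.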
An evolving market
Based on an analysis of AI SoC designs currently under development by their customers, the Arteris team has determined the relative use of soft tiling in key verticals and horizontals for AI today. This is illustrated in Figure 4, where the areas of the circles reflect the relative number of application use cases.
Figure 4 NoC-enabled tiling is shown in key verticals and horizontals for AI today. Source: Arteris
Designing multi-billion-transistor SoCs is time-consuming and involves many challenges. Some SoC devices, such as those intended for AI applications, may include functions like NPUs that comprise arrays of PEs. Here, NoC-enabled tiling is an emerging trend and it’s supported only by leading NoC IPs and tools.
Andy Nightingale, VP of product management and marketing at Arteris, has over 37 years of experience in the high-tech industry, including 23 years in various engineering and product management positions at Arm.
Related Content
- SoC Interconnect: Don’t DIY!
- What is the future for Network-on-Chip?
- SoC design: When is a network-on-chip (NoC) not enough
- Network-on-chip (NoC) interconnect topologies explained
- Why verification matters in network-on-chip (NoC) design
A simple, purely analog -130dB THD sine wave generator
The recent Design Idea, “Getting an audio signal with a THD < 0.0002% made easy,” discloses a low-THD sine generator, which led me to dust off a design that I had published in audioXpress magazine [1] (see Figure 1).
Figure 1 The “Simple Sineman” circuit [1] is based on a simpler version of the circuit having approximately -80dB THD [2].
Requirements for an analog oscillator
Before getting into the details of how this circuit works, it's worth recalling certain requirements for an analog oscillator: a feedback circuit which, at the oscillation frequency fosc Hz, has a loop gain magnitude of unity and a phase shift of either 0 or a multiple of 360 degrees. One means of implementing this is to place a notch filter in the feedback loop of an op amp. You might be forgiven for thinking that fosc is at the filter's notch frequency fnotch Hz. But obviously, infinite attenuation is not consistent with a unity-gain loop. Not so obviously, the op amp's internal compensation network adds a -90° phase shift to its inherent -180° inverting input-to-output phase shift. What is then needed for oscillation is a filter which, at fosc, exhibits both a -90° phase shift and an attenuation of Aosc, where Aosc is the op amp gain magnitude at fosc. But how can we find a filter capable of meeting such precise constraints?
The “dual-T” notch filter
The innovative “dual-T” notch filter in Figure 1 saves the day. It's made up of C1, C2, C3, R1, R2, R3A, and R3B. I had a need for a 2400-Hz oscillator and so chose the values shown. One way to place a notch at fnotch Hz is to use the following process and equations:
Choose a value C for C1, C2 and C3 (1)
and set R1 and R2 equal to 1 / (2π* fnotch*C*√3) (2)
set R3 = R3A + R3B equal to 12 / (2π* fnotch*C*√3) (3)
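Equations (1) through (3) are easy to evaluate. A short Python sketch (the function name is mine) works the 2400-Hz case, assuming the 10-nF capacitor value used in the Figure 2 simulation:

```python
import math

# Equations (1)-(3): dual-T component values for a notch at fnotch,
# evaluated for the article's 2400-Hz oscillator with C = 10 nF
# (the capacitor value used in the Figure 2 simulation).
def dual_t_values(fnotch_hz, C_farads):
    r12 = 1 / (2 * math.pi * fnotch_hz * C_farads * math.sqrt(3))  # R1 = R2
    r3 = 12 / (2 * math.pi * fnotch_hz * C_farads * math.sqrt(3))  # R3A + R3B
    return r12, r3

R12, R3 = dual_t_values(2400, 10e-9)
print(round(R12), round(R3))   # ≈ 3829 and ≈ 45944 ohms
```

Note that R3 comes out at roughly 45 kΩ, consistent with the ≈45k value referred to later in the DC-bias discussion.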
An analysis of this filter type shows that there is always a value of R3 which produces an infinite attenuation notch regardless of the variations of the other component values due to tolerances. Since there is clearly no attenuation at DC, this means that any attenuation from none to infinity can be had at some frequency. The analysis also shows that there is always some frequency below fnotch at which the phase shift is -90°. The appropriate value of R3 causes that phase shift to coincide with the necessary attenuation of Aosc at fosc. Figure 2 gives a feeling for some phase and gain magnitude responses of the filter as R3B is varied. Table 1 relates the oscillation and notch frequencies and values of R3 for a -90° phase shift at various attenuations Aosc.
Figure 2 Responses of the dual-T notch filter. To simulate practical variances from the ideal, capacitor values were randomly selected to be within 1% of 10 nF, and R1 and R2 to be within 0.1% of ideal values for a 2400-Hz fosc. A value of R3 that produced a 130 dB notch depth was calculated, and results are shown with it and with several slightly larger R3 values. -90° phase shifts with attenuations from 65 to 130 dB are evident for various R3 values.
| Attenuation, dB | 1 – fosc/fnotch | NOtol |
|---|---|---|
| -90 | 0.01% | -0.01% |
| -80 | 0.03% | -0.02% |
| -70 | 0.11% | -0.05% |
| -60 | 0.35% | -0.18% |
| -50 | 1.07% | -0.56% |

Table 1 Variations in the oscillation frequency with respect to the notch frequency, and in R3 values (NOtol), for a -90° phase shift at various Aosc attenuations.
Knowing the values of fosc and Aosc, the value of fnotch can be calculated from Table 1. From this, the values of the capacitors and the resistors R1 and R2 can be calculated from equations (1) and (2). With 0.1% resistors for R1 and R2 and 1% capacitors, fnotch will be kept within a tolerance product Stol = (1.01 × 1.001 − 1) ≈ ±1.1% of the intended value. Note that regardless of component tolerances, there is always the option of adding a pot in series with either R1 or R2. The aggregate value of that pot plus resistor should have a range of Stol centered at the equation (2) value. The values and tolerances of R3A and R3B should be selected so that R3 can be adjusted to within Stol – NOtol (see Table 1) of the equation (3) value.
It’s worth noting that with the better-known twin-T notch filter [3], I was unable to meet the phase and attenuation requirements simultaneously by varying only a single resistor value. Even if this were possible, the capacitors in the dual T are conveniently identical, while the twin-T’s requirement of a value ratio of 2 limits capacitor choices. This is also a good time to mention that polystyrene capacitors offer the lowest harmonic distortion [4], with non-metalized polypropylene being a secondary choice.
Establishing oscillation amplitude
Of course, the elephant in the room is what I haven't yet mentioned—the requirement for establishing an oscillation amplitude. One way of doing this is to parallel the R3 resistor plus pot with a non-linear resistor whose value varies inversely with signal level. Unfortunately, any such non-linearity increases harmonic distortion. So it makes sense to choose a non-linear component designed specifically for low-harmonic-distortion audio applications. The NE570 (an improved version of the SA571 seen in Figure 1) is a low-harmonic-distortion compressor/expandor IC intended for audio applications [5]. A block diagram of the part appears in Figure 3.
Figure 3 A block diagram of the function of the SA571 and NE570 compandor IC, courtesy of ON Semiconductor.
As can be seen, the part has a “delta G” cell whose current gain is controlled by the capacitively filtered output of the rectifier. The capacitively-coupled inputs to both functions are connected in Figure 1 through resistors I've added to reduce the functions' operating levels. These are driven by the output of the LME49720 op amp U2A. (The op amp provided with the SA571/NE570 is of the 741 type and should not be used in extremely low THD applications. Its output and one end of the 20K resistor R3 can be left unconnected. Its inverting input is connected to that of U2A.) Note the 1.8-V reference, which is the unavoidable DC operating voltage of the delta G cell and both inputs of U2A.
The SA571/NE570 are dual parts, and use is made of the secondary unit. Its rectifier capacitor pin is grounded to disable its delta G cell, whose input is left floating. The uncommitted side of its R3 is connected to its op amp output to produce a stable 3-VDC source. This source drives the Figure 1 R10 pot to supply a current to the THD trim pin. R10 is adjusted to null out the small amount of 2nd-harmonic distortion produced by the delta G cell (and possibly by U2A). I powered the circuit from batteries for portability and added the LEDs to keep fresh 9-V batteries from exceeding the ±18-V maximum power supply ratings of the op amp. The SA571's 30k resistor connecting the op amp inverting inputs to ground is unavoidable. With Figure 1's R3, it biases that op amp's output to approximately 4.5 V ((≈45k/30k + 1) × 1.8 V). This level can be reduced by connecting a resistor from the 3-V source to U2A's inverting input (not provided in the Figure 1 circuit). With or without this additional resistor, remember to keep a proper DC bias across output electrolytic capacitor C5.
The added passive components at the NE570 inputs are chosen to allow R3 to be adjusted for a 3 Vrms output from U2A, the level at which its datasheet indicates that that op amp exhibits the lowest THD.
Measuring distortion
To measure distortion, I attenuated the oscillator output's fundamental by running the signal through a second dual-T filter with a pot in series with each resistor. By laboriously tweaking each pot in turn, I was able to attenuate the fundamental by 70 dB. The filtered output was applied to an SR770 spectrum analyzer, which can accurately measure signals within an 80-dB dynamic range. Tweaking the THD pot to minimize the 2nd-harmonic level, I measured the levels of the oscillator harmonics and applied corrections for the filter attenuations at each frequency (see Table 2). I then took the rms of the levels corrected for the attenuations of the second dual-T filter and arrived at a THD more than 130 dB below the oscillator fundamental.
| Harmonic Number | Filter Attenuation, dB |
|---|---|
| 2 | 11.81 |
| 3 | 6.54 |
| 4 | 5.14 |
| 5 | 3.78 |
| 7 | 2.78 |
| 9 | 2.49 |
Table 2 Attenuation of higher harmonics by a dual-T filter tuned as described in the text to maximize attenuation of the oscillator fundamental.
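The bookkeeping behind that THD figure can be sketched in Python. The analyzer-relative harmonic levels below are invented purely for illustration (the article does not list them); the filter attenuations are from Table 2, and the fundamental is notched by 70 dB as described:

```python
import math

# Sketch of the THD bookkeeping. Each harmonic level measured at the
# analyzer (relative to the 70-dB-attenuated fundamental) is corrected for
# the second dual-T filter's attenuation at that harmonic (Table 2), then
# re-referenced to the unattenuated oscillator fundamental, then rms-summed.
filter_atten_db = {2: 11.81, 3: 6.54, 4: 5.14, 5: 3.78, 7: 2.78, 9: 2.49}
measured_db = {2: -76, 3: -72, 4: -78, 5: -77, 7: -80, 9: -82}  # hypothetical

fundamental_notch_db = 70.0
total_power = 0.0
for h, level in measured_db.items():
    # Undo the filter's attenuation of the harmonic and re-reference the
    # level to the unattenuated fundamental.
    corrected_db = level + filter_atten_db[h] - fundamental_notch_db
    total_power += 10 ** (corrected_db / 10)   # sum as powers for the rms

thd_db = 10 * math.log10(total_power)
print(round(thd_db, 1))   # ≈ -131.0 dBc for these made-up levels
```

The power-sum-then-log step is the rms combination described in the text, expressed in dB.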
The NE570 and LME49720 datasheets and parts are available online and through DigiKey. Small quantities of the NE570 for experimenters can be had from numerous eBay vendors.
I believe that it’s tough to beat the combination of simplicity and performance afforded by this design and welcome comments from anyone who builds and tests it.
Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.
Related Content
- Getting an audio signal with a THD < 0.0002% made easy
- Measure an amplifier’s THD without external filters
- Ultra-low distortion oscillator, part 2: the real deal
- A simple circuit with an optocoupler creates a “tube” sound
- How to control your impulses—part 2
References
- Paul, C., “The Simple Sineman,” audioXpress, November 2013, p. 52.
- Jung, W., “Gain Control IC for Audio Signal Processing,” Ham Radio, 1977, http://waltjung.org/PDFs/Gain_Control_IC_for_Audio_Signal_Processing_HR_0777.pdf
- Notch filter calculator, https://learningaboutelectronics.com/Articles/Notch-filter-calculator.php#answer1
- https://www.tedss.com/LearnMore/Polystyrene-Film-Capacitors offers a wide array of polystyrene capacitors.
- ON Semiconductor, NE570 datasheet, https://www.onsemi.com/pdf/datasheet/ne570-d.pdf
Startups eye opportunities amid analog industry consolidation
An ongoing semiconductor industry consolidation has been somewhat challenging for companies that consume analog chips in their designs. However, these conditions have created opportunities for startups to fill the gap, meeting design engineers' needs on a smaller scale and keeping the analog semiconductor industry vibrant.
Many companies using semiconductors need highly specific options, and it does not make sense for them to approach some of the industry's largest suppliers. Many of those have minimum order requirements and are usually unwilling to provide the level of customization required.
Here, startups like Orca Semiconductor are willing to serve designers seeking specialized hardware. Catering to that market is increasingly important, particularly as many OEMs explore ways to commercialize innovative new devices. If a proposed product seems too far-fetched, large semiconductor companies may not want to associate with it. Semiconductor executives in these large outfits may determine that it’s safer to continue working with well-established customers.
Orca’s first commercial product is an advanced power management integrated circuit (PMIC) for wearable devices. The component also has a battery-preserving feature that reduces current draw during inactive periods.
ASSP business model
The company’s CEO Andrew Baker clarified that Orca’s business model does not revolve around making custom chips in the traditional sense. Instead, it will make analog-based application-specific standard products (ASSPs) rather than focusing on custom silicon. The company’s leaders believe this will keep the business agile and free it from the slow decision-making processes that generally stifle innovation at larger analog chipmakers.
Figure 1 Orca Semi has recently unveiled an IO-Link transceiver for smart factory environments.
OEMs needing analog semiconductors rely on chipmakers whose resources are set up and ready to serve them. However, the industry's broader consolidation has made it more challenging for some would-be customers to find analog companies that will work with them.
Here, outfits like Spirit Electronics demonstrate what can happen when designers have more available options. Although not a startup, this business specializes in analog and mixed-signal ICs.
It provides design engineers with an alternative to the long lead times that other outlets often have. One thing that sets this company apart is that it manages its foundry services under one roof rather than outsourcing. That strategy gives it more control and allows the business to meet emerging needs.
Analog in machine learning
Another notable company in the analog design space is Aspinity. The company’s business model addresses the growing need for always-on devices, such as those that continually listen for inputs and respond accordingly.
Since such devices process all analog sounds and not just particular command words or other cues, they can be incredibly power-intensive. However, a notable characteristic of Aspinity’s components is how well they conserve energy.
Figure 2 Aspinity’s AML100 chip runs machine learning completely within the analog domain.
The company first gained attention by releasing an analog chip for machine learning. It draws less than 20 microamps of current when determining the data's relevancy. It also shrinks that information's size more than 100 times, freeing up memory space on the respective device.
These examples show that startups and smaller companies have emerged to meet a need driven by the analog semiconductor industry’s consolidation. Not all electronic outfits can approach larger analog chipmakers with their orders, but the above-mentioned businesses are well-positioned to assist.
Related Content
- Aspinity Expands into Audio Event Detection
- AI Startup Aspinity Launches Low-Power Analog Chip
- Analog startup eyes ASSPs for wearables, smart factory
- Audio chip moves machine learning from digital to analog
- IO-Link transceiver bolsters smart factory productivity, intelligence
Current shunt probes feature RF isolation
TICP Series IsoVu current probes from Tektronix provide complete galvanic RF isolation between the measurement system and DUT. This isolation eliminates ground loops and significantly reduces common mode noise. The probes ensure high precision and safety when measuring fast-changing, shunt-based current in both low- and high-voltage systems.
The series includes three models with bandwidths of 1 GHz, 500 MHz, and 250 MHz. According to Tektronix, TICP current probes deliver over 30 times the common-mode rejection of conventional differential voltage probes, achieving 140 dB CMRR at DC and up to 90 dB at 1 MHz.
With minimal noise contribution, the 50-Ω probe input in a 1X configuration provides ultra-low noise levels of less than 4.7 nV/√Hz, or under 150 µV at 1 GHz. TICP probes use a TekVPI interface and work seamlessly with Tektronix 4, 5, and 6 series MSO oscilloscopes.
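The two quoted noise numbers are consistent with each other: integrating a flat 4.7 nV/√Hz density over a 1-GHz brick-wall bandwidth (an idealized check, not the vendor's measurement method) gives:

```python
import math

# Sanity check: a flat 4.7 nV/√Hz noise density integrated over a 1-GHz
# brick-wall bandwidth, vrms = density × √bandwidth.
density = 4.7e-9     # V/√Hz
bandwidth = 1e9      # Hz
vrms = density * math.sqrt(bandwidth)
print(round(vrms * 1e6, 1))   # 148.6 µV, consistent with "under 150 µV"
```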
The TICP Series IsoVu probes are now available for order and will start shipping this month.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
Application processors optimize industrial control
NXP’s i.MX 94 family of application processors combines communications, safety, and real-time control functions into a single SoC. They include a 2.5-Gbps Ethernet time-sensitive networking (TSN) switch, enabling configurable, secure communications with protocol support for both industrial and automotive applications. These processors are suited for industrial control, programmable logic controllers, telematics, industrial and automotive gateways, and building and energy control.
The 64-bit processors employ up to four Arm Cortex-A55 cores for Linux operation, complemented by two Cortex-M33 cores and two Cortex-M7 cores for enhanced real-time processing. This multicore architecture delivers low latency across both application and real-time domains. The devices also include an eIQ Neutron neural processing unit and a functional safety island to ensure compliance with IEC 61508 SIL2 and ISO 26262 ASIL-B standards.
To safeguard against quantum computing attacks, the i.MX 94 processors support post-quantum public key cryptography. An integrated EdgeLock secure enclave enables the system to configure and restore equipment to a trusted state at any time. It provides robust security features, including secure boot, secure debug, and secure updates, all leveraging post-quantum cryptography without compromising performance.
The i.MX 94 family of application processors is expected to begin sampling in Q1 2025.
The post Application processors optimize industrial control appeared first on EDN.
Microchip launches broad IGBT 7 portfolio
IGBT Trench 7 devices from Microchip offer power system designers a wide selection of current and voltage ranges, topologies, and package types. Promising increased power capability, reduced power losses, and compact sizes, the IGBTs are key components in such applications as renewable energy systems, uninterruptible power supplies, commercial and agricultural vehicles, and More Electric Aircraft (MEA).
IGBT 7 modules support voltages from 1200 V to 1700 V and currents from 50 A to 900 A. Packaging options include standard D3 and D4 62-mm types, as well as SP6C, SP1F, and SP6LI. They are also available in various configurations and topologies, including three-level NPC, three-phase bridge, boost and buck choppers, dual-common source, full-bridge, phase leg, single switch, and T-type.
These power components provide lower on-state voltage, improved antiparallel diode performance, and higher current capacity, reducing power losses and enhancing efficiency. With low-inductance packages and high overload capability at a junction temperature of +175°C, they are useful for rugged aviation and defense applications. When used for motor control, they ensure smooth switching, improving reliability, reducing EMI, and minimizing voltage spikes.
IGBT Trench 7 devices are available now in production quantities.
The post Microchip launches broad IGBT 7 portfolio appeared first on EDN.
Portable test gear enables mmWave signal analysis
Select Keysight FieldFox handheld analyzers now cover frequencies up to 170 GHz for mmWave signal analysis. In a collaboration with Virginia Diodes Inc. (VDI), Keysight’s A- and B-Series analyzers (18 GHz and up) can pair with VDI’s PSAX frequency extenders to reach sub-THz frequencies.
Precise mmWave measurements are essential for testing wireless communications and radar systems, particularly in 5G, 6G, aerospace, defense, and automotive radar applications. Because mmWave signals are sensitive to obstacles, weather, and interference, understanding their propagation characteristics helps engineers design more efficient networks and radar systems.
FieldFox with PSAX allows users to capture accurate mmWave measurements in a lightweight, portable package. It supports in-band signal analysis through selectable spectrum analyzer, IQ analyzer, and real-time spectrum analyzer modes, achieving typical sensitivity of -155 dBm/Hz.
The PSAX module connects directly to the RF ports on the FieldFox analyzer. Its adjustable IF connector aligns with the LO and IF port spacings on all FieldFox models. VDI also offers the PSGX module, which, when paired with a FieldFox equipped with Option 357, enables mmWave signal generation up to 170 GHz.
The post Portable test gear enables mmWave signal analysis appeared first on EDN.
Renesas expands line of programmable mixed-signal ICs
Renesas has launched the AnalogPAK series of programmable mixed-signal ICs, including a 14-bit SAR ADC with a programmable gain amplifier. According to the company, this industry-first device combines a rich set of digital and analog features to support measurement, data processing, logic control, and data output.
AnalogPAK devices, a subset of the GreenPAK family, are NVM-programmable ICs that enable designers to integrate multiple system functions. ICs in both groups minimize component count, board space, and power and can replace standard mixed-signal products and discrete circuits. They also provide reliable hardware supervisory functions for SoCs and microcontrollers.
The SLG47011 multichannel SAR ADC offers user-defined power-saving modes for all macrocells. Designers can switch off some blocks in sleep mode to reduce power consumption to the microamp level. Key features include:
- VDD range of 1.71 V to 3.6 V
- SAR ADC: up to 14-bit, up to 2.35 Msps in 8-bit mode
- PGA: six amplifier configurations, rail-to-rail I/O, 1x to 64x gain
- DAC: 12-bit, 333 ksps
- Hardware math block for multiplication, addition, subtraction, and division
- 4096-word memory table block
- Oscillators: 2/10 kHz and 20/40 MHz
- Analog temperature sensor
- Configurable counter/delay blocks
- I2C and SPI communication interfaces
- Available in a 16-pin, 2.0×2.0×0.55-mm QFN package
In addition to the SLG47011, Renesas announced three other AnalogPAK devices. The compact SLG47001 and SLG47003 enable precise, cost-effective measurement systems for applications like gas sensors, power meters, servers, and wearables. The SLG47004-A is an automotive Grade 1 qualified device for infotainment, navigation, chassis and body electronics, and automotive display clusters.
The AnalogPAK devices are available now from Renesas and authorized distributors.
The post Renesas expands line of programmable mixed-signal ICs appeared first on EDN.
RS232 meets VFC
In the early days of small (e.g., personal) computers, incorporating one or two (or more) RS232 serial ports as general-purpose I/O adaptors was common practice. This “vintage” standard has since been largely replaced (after all, it is 64 years old) by faster and more power-thrifty serial interface technologies (e.g., USB, I2C, SPI). Nevertheless, RS232 hardware is still widely and inexpensively available, and its bipolar signaling levels remain robustly resistant to noise and cable-length effects. Another useful feature is the bipolar supply voltages (usually ±6 V) generated by typical RS232 adaptors. These can be conveniently tapped via standard RS232 output signals (e.g., RTS and TXD) and used to power attached analog and digital circuitry.
Wow the engineering world with your unique design: Design Ideas Submission Guide
This design idea (DI) does exactly that by using asynchronous RS232 to power and count pulses from a simple 10 kHz voltage-to-frequency converter (VFC). Getting only one bit of info from each 10-bit serial character may seem inefficient (because it is), but in this case it’s a convenient ploy to add a simple analog input that can be located remotely from the computer with less fear of noise pickup.
See Figure 1 for the mind meld of RS232 with VFC.
Figure 1 A 10-kHz VFC works with and is powered by a generic RS232 port.
Much of the core of Figure 1 was previously described in “Voltage inverter design idea transmogrifies into a 1MHz VFC.”
One difference, other than the 100x lower max frequency, between that older DI and this one is the use of a metal gate CMOS device (CD4053B) for U1 instead of a silicon gate (HC4053) U1. That change is made necessary by the higher operating voltage (12 V versus 5 V) used here. Other design elements remain (roughly) similar.
Input current (Vin/R1) charges C3, causing transconductance amplifier Q1/Q2 to sink an increasing current from Schmitt-trigger oscillator cap C1. This raises the U1c oscillator frequency and, with it, the current pumped by U1a, U1b, and C2. Because the pump current has negative polarity, it closes a feedback loop that continuously forces the pump current to equal the input current.
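The implied balance condition can be written out explicitly (my reconstruction, assuming the charge pump transfers a packet of charge C2·VREF on every oscillator cycle, with VREF = 5 V supplied by U2):

```latex
\frac{V_{in}}{R_1} = F \cdot C_2 \cdot V_{REF}
\qquad \Rightarrow \qquad
F = \frac{V_{in}}{R_1\,C_2\,V_{REF}}
```

so the output frequency is directly proportional to Vin, with R1 setting the full-scale factor.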
Note that R1 can be chosen to implement almost any desired Vin full-scale factor.
D3 provides the ramp reset pulse that initiates each oscillator cycle and also sets the duration of the RS232 ST start pulse to ~10 µs as illustrated in Figure 2. Note that this combination of time constants and baud rate gives ~11% overrange headroom.
Figure 2 Each VFC pulse generates a properly formatted, but empty, RS232 character.
The ratio of R5/R3 is chosen to balance Q2/Q1 collector currents when Vin and Fpump equal zero, thus minimizing Vin zero offset. Consequently, linearity and zero offset errors are less than 1% of full-scale.
However, this leaves open the possibility of unacceptable scale factor error if the +6 logic power rail isn’t accurate enough, which it’s very unlikely to be. If we want a precision voltage reference that’s independent of +6 V instability, the inexpensive accurate 5 V provided by U2, C5, and R7 will fill the bill.
However, if the application involves conversion of a ratiometric signal proportional to +6 V, such as that provided by a resistive sensor (e.g., a thermistor), then U2 and friends should be omitted, U1 pin 2 connected to -6 V, and C2 reduced to 1.6 nF.
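The resulting scale factor (again my reconstruction, with the pump swing now set by the 6-V rail and C2 = 1.6 nF) is:

```latex
F = \frac{V_{in}}{R_1 \cdot 1.6\,\text{nF} \cdot 6\,\text{V}}
```

which tracks the ±6-V supply, so a resistive sensor excited from the same rail converts ratiometrically.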
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Voltage inverter design idea transmogrifies into a 1MHz VFC
- A simple, accurate, and efficient charge pump voltage inverter for $1 (in singles)
- Single supply 200kHz VFC with bipolar differential inputs
- New VFC uses flip-flops as high speed, precision analog switches
- Inexpensive VFC features good linearity and dynamic range
- Turn negative regulator “upside-down” to create bipolar supply from single source
The post RS232 meets VFC appeared first on EDN.
Applying AI to RF design
Engineered systems have long relied on fundamental discoveries in physics and mathematics (e.g., Maxwell’s equations, quantum mechanics, and information theory), applying these first principles to achieve a particular goal. However, engineered systems are rapidly growing in complexity and size; the functionality of their subcomponents may be nonlinear, and starting from first principles becomes restrictive. MathWorks has steadily laid a foundation in modeling and simulation with MATLAB/Simulink for over four decades and now assists designers with these complex, multivariate systems using AI.
Houman Zarrinkoub, MathWorks principal product manager for wireless communications, discussed with EDN the growing role AI plays in the design of next generation wireless systems.
MATLAB’s toolboxes for wireless design
“So you’re building a wireless system and, at a basic level, you have a transmission back and forth between, for example, a base station and a cell phone,” said Houman. “This is known as a link.”
To begin, Houman explains that, at a very basic level, engineers are building the two subsystems (transmitter and receiver) that “talk” to each other over this link. There are the digital components that sample, quantize, and encode the data, and the RF components that generate the RF signal, upconvert, downconvert, mix, amplify, filter, and so on. MATLAB has an array of established toolboxes, such as the 5G Toolbox, LTE Toolbox, WLAN Toolbox, and Satellite Communications Toolbox, that already assist with the design, simulation, and verification of all types of wireless signals, from 5G NR and LTE to DVB-S2/S2X/RCS2 and GPS waveforms. This extends to subcomponents with tools including (but not limited to) the RF Toolbox, Antenna Toolbox, and Phased Array System Toolbox.
Now with AI, two main design approaches are used leveraging the Deep Learning Toolbox, Reinforcement Learning Toolbox, and Machine Learning Toolbox.
AI workflow
The workflow includes four basic steps that are further highlighted in Figure 1.
- Data generation
- AI training
- Integration, simulation, and testing
- Deployment and implementation
These basic steps are necessary for implementing any deep learning model in an application, but how does it assist with RF and wireless design?
Figure 1 MATLAB workflow for implementing AI in wireless system design. Source: MathWorks
Data generation: Making a representative dataset
It goes without saying that data generation is necessary to properly train the neural network. For wireless systems, data can either be captured from a real system with an antenna or generated synthetically on the computer.
The robustness of this data is critical. “The keyword is making a representative dataset; if we’re designing for a wireless system that’s operating at 5 GHz and we have data at 2.4 GHz, it’s useless.” To ensure the system is well-designed, the data must be varied, covering signal performance in both normal operating conditions and more extreme ones. “You usually don’t have data for outliers that are 2 or 3 standard deviations from the mean, but if you don’t have this data your system will fail when things shift out of the comfort zone,” explains Houman.
Houman expands on this by saying it is best for designers to have the best of both worlds and use both a real world, hardware-generated dataset as well as the synthetic dataset to include some of those outliers. “With hardware, there are severe limitations where you don’t have time to create all that data. So, we have the Wireless Waveform Generator App that allows you to generate, verify, and analyze your synthetic data so that you can augment your dataset for training.” As shown in Figure 2, the app allows designers to select waveform types and introduce impairments for more real world signal scenarios.
Figure 2 Wireless Waveform Generator application allows users to generate wireless signals and introduce impairments. Source: MathWorks
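Outside the MATLAB environment, the same idea of synthetic, impairment-augmented training data can be sketched in a few lines of generic Python (an illustration of the concept, not the Wireless Waveform Generator itself):

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_qpsk(n_symbols, snr_db, freq_offset=0.0):
    """Generate a QPSK burst with AWGN and a carrier frequency offset."""
    bits = rng.integers(0, 2, size=2 * n_symbols)
    # Map bit pairs to unit-energy QPSK constellation points
    symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)
    # Impairment 1: carrier frequency offset (cycles per symbol)
    n = np.arange(n_symbols)
    symbols = symbols * np.exp(2j * np.pi * freq_offset * n)
    # Impairment 2: additive white Gaussian noise at the requested SNR
    noise_power = 10 ** (-snr_db / 10)
    noise = np.sqrt(noise_power / 2) * (rng.standard_normal(n_symbols)
                                        + 1j * rng.standard_normal(n_symbols))
    return symbols + noise

# Build a small dataset spanning normal and harsher "outlier" conditions
dataset = [synthetic_qpsk(256, snr_db, freq_offset=fo)
           for snr_db in (20, 10, 0)        # include low-SNR outliers
           for fo in (0.0, 0.01)]           # with and without frequency offset
```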
Transfer learning: Signal discrimination
Then, AI training is performed either to train a model built from scratch or to fine-tune an established model (e.g., AlexNet, GoogleNet) for your particular task; the latter is known as transfer learning. As shown in Figure 3, pretrained networks can be reused in a particular wireless application by adding new layers that allow the model to be more fine-tuned toward the specific dataset. “You turn the wireless signal, and in a one-to-one manner, transform it into an image,” said Houman when discussing how this concept was used for wireless design.
Figure 3 Pretrained networks can be reused in a particular wireless application by adding new layers that allow the model to be more fine-tuned towards the specific dataset. Source: MathWorks
“Every wireless signal is IQ samples; we can transform them into an image by taking a spectrogram, which is a presentation of the signal in time and frequency,” said Houman. “We have applied this concept to wireless to discriminate between friend or foe, or between 5G and 4G signals.” Figure 4 shows the test of a trained system that used an established semantic segmentation network (e.g., ResNet-18, MobileNetv2, and ResNet-50). The test used over-the-air (OTA) signal captures with software-defined radio (SDR). Houman elaborated, “So you send a signal and you classify, and based on that classification, you have multiple binary decisions. For example, if it’s 4G, do this; if it’s 5G, do this; if it’s none of the above, do this. So the system is optimized by the reliable classification of the type of signal the system is encountering.”
Figure 4 Labeled spectrogram outputted by a trained wireless system to discriminate between LTE and 5G signals. Source: MathWorks
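The IQ-to-spectrogram transformation Houman describes can be reproduced with standard signal-processing tools; here is a generic Python sketch (not MathWorks code, using a stand-in chirp rather than a real capture):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1e6  # sample rate of the IQ capture (assumed)
t = np.arange(int(1e5)) / fs
# Stand-in IQ signal: a chirp, just to have structure in time and frequency
iq = np.exp(2j * np.pi * (1e4 * t + 2e6 * t**2))

# Complex input yields a two-sided spectrogram: a frequency x time "image"
f, times, Sxx = spectrogram(iq, fs=fs, nperseg=256, return_onesided=False)
image = 10 * np.log10(np.abs(Sxx) + 1e-12)  # dB scale, as typically fed to a CNN
```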
Building deep learning models from scratch
Supervised learning: Modulation classification with a built CNN
Modulation classification can also be accomplished with the Deep Learning Toolbox, where users generate synthetic, channel-impaired waveforms for a dataset. This dataset is used to train a convolutional neural network (CNN) and tested with hardware such as SDR with OTA signals (Figure 5).
Figure 5 Output confusion matrix of a CNN trained to classify signals by modulation type with test data using SDR. Source: MathWorks
“With signal discrimination, you’re using more classical classification, so you don’t need to do a lot of work developing those trained networks. However, since modulation and encoding are not visible on the spectrogram, most people will then choose to develop their models from scratch,” said Houman. “In this approach, designers will use MATLAB with Python and implement classical building blocks such as the rectified linear unit (ReLU) to build out layers in their neural network.” He continues, “Ultimately a neural network is built on components; you either connect them in parallel or serially, and you have a network. Each network element has a gain, and training will adjust the gain of each network element until you converge on the right answer.” He mentions that, while this is a less direct path to obtaining the modulation type, systems that combine these approaches gain a much deeper understanding of the signals they encounter and can make much more informed decisions.
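Houman’s picture of a network as connected gain elements maps directly onto code. Here is a toy NumPy sketch (not MATLAB’s Deep Learning Toolbox) of one hidden ReLU layer whose weights, the “gains,” are adjusted by training until the output converges:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: learn y = |x| from samples, using one hidden ReLU layer
x = rng.uniform(-1, 1, size=(256, 1))
y = np.abs(x)

W1 = rng.standard_normal((1, 16)) * 0.5   # layer "gains", adjusted by training
W2 = rng.standard_normal((16, 1)) * 0.5
lr = 0.1

for _ in range(2000):
    h = np.maximum(x @ W1, 0)             # ReLU building block
    pred = h @ W2
    err = pred - y
    # Backpropagate: nudge each gain in proportion to its error contribution
    W2 -= lr * h.T @ err / len(x)
    W1 -= lr * x.T @ ((err @ W2.T) * (h > 0)) / len(x)

mse = float(np.mean((np.maximum(x @ W1, 0) @ W2 - y) ** 2))
```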
Beam selection and DPD with NNs
Using the same principles, neural networks (NNs) can be customized within the MATLAB environment to solve inherently nonlinear problems, such as applying digital predistortion (DPD) to offset the nonlinearities of power amplifiers (PAs). “DPD is a classical example of a nonlinear problem. In wireless communications, you send a signal, and the signal deteriorates in strength as it leaves the source. Now, you have to amplify the signal so that it can be received, but no amplifier is linear, or has constant gain across its bandwidth.” DPD deals with the inevitable signal distortions that occur when a PA operates in its compression region by observing the PA’s output and feeding it back to alter the input signal so that the PA output is closer to ideal. “So the problem is inherently nonlinear, and many solutions have been proposed, but AI comes along and produces superior performance over other solutions for this amplification process,” said Houman. The MATLAB approach trains a fully connected NN as the inverse of the PA and uses it for DPD (NN-DPD); the NN-DPD is then tested using a real PA and compared with a cross-term memory polynomial DPD.
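The predistortion idea can be illustrated with a deliberately simplified memoryless model (a sketch only; the NN-DPD described above handles memory effects and complex IQ gain): characterize a compressive PA, fit an inverse mapping, and pre-apply it.

```python
import numpy as np

def pa(x):
    """Toy compressive power amplifier: gain of 2 with cubic compression."""
    return 2 * x - 0.3 * x**3

# Observe PA output over the drive range, then fit the inverse mapping
drive = np.linspace(0, 1.2, 200)
out = pa(drive)
inverse_coeffs = np.polyfit(out, drive, deg=7)  # predistorter: output -> required drive

def predistort(target):
    return np.polyval(inverse_coeffs, target)

# With predistortion, the cascade predistort -> PA approximates ideal linear gain
targets = np.linspace(0.1, 1.5, 50)
linearized = pa(predistort(targets))
max_err = float(np.max(np.abs(linearized - targets)))
```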
Houman goes on to describe another application for NN-based wireless design (Figure 6), “Deep learning also has a lot of applications in 5G and 6G where it combines sensing and communications. We have a lot of deep learning examples where different algorithms are used to position and localize users so you can send data that is dedicated to the user.” The use case that was mentioned in particular related to integrated sensing and communication (ISAC), “When I was young and programming 2G and 3G systems, the philosophy of communication was that I would send the signal in all directions, and if your receiver got that information, good for it; it can now decode the transmission. If the receiver couldn’t do that, tough luck,” said Houman, “With 5G and especially 6G, the expectations have risen, you have to have knowledge of where your users are and beamform towards them. If your beamwidth is too big, you lose energy. But, if your beamwidth is too narrow, if your users move their head, you miss them. So you have to constantly adapt.” In this solution, instead of using GPS signals, lidar, or roadside camera images, the base station essentially becomes the GPS locator; sending signals to locate users and based upon the returned signal, sends communications.
Figure 6 The training phase and testing phase of a beam management solution that uses the 3D coordinates of the receiver. Source: MathWorks
Unsupervised learning: The autoencoder path for FEC
Alternatively, engineers can follow the autoencoder path to help build a system from the ground up. These deep learning networks consist of an encoder and a decoder and are trained to replicate their input data to, for instance, remove noise and detect anomalies in signal data. The benefit of this approach is that it is unsupervised and does not require labeled input data for training.
“One of the major aspects of 5G and 6G is forward error correction (FEC) where, when I send something to you, whether it’s voice or video, whether or not the channel is clean or noisy, the receiver should be able to handle it,” said Houman. FEC is a technique that adds redundant data to a message to minimize the number of errors in the received information for a given channel (Figure 7). “With the wireless autoencoder, you can automatically add redundancy and redo modulation and channel coding based on estimations of the channel condition, all unsupervised.”
Figure 7 A wireless autoencoder system ultimately restricts the encoded symbols to an effective coding rate for the channel. Source: MathWorks
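FEC’s core trade, extra redundant symbols in exchange for error resilience, can be shown with the simplest possible code, 3x repetition (purely illustrative; 5G NR actually uses LDPC and polar codes):

```python
import numpy as np

rng = np.random.default_rng(2)

def encode(bits):
    """Repetition-3 FEC: each bit is sent three times (coding rate 1/3)."""
    return np.repeat(bits, 3)

def decode(received):
    """Majority vote over each group of three corrects any single bit flip."""
    return (received.reshape(-1, 3).sum(axis=1) >= 2).astype(int)

message = rng.integers(0, 2, size=100)
codeword = encode(message)

# Noisy channel: flip 5% of transmitted bits at random
flips = rng.random(codeword.size) < 0.05
received = codeword ^ flips.astype(int)

recovered = decode(received)
bit_errors = int(np.sum(recovered != message))  # a decoded bit fails only if 2 of 3 copies flip
```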
Reinforcement learning: Cybersecurity and cognitive radar
“With deep learning and machine learning, the process of giving inputs and receiving an output is all performed offline,” explained Houman. “With deep learning, you’ve come up with a solution and you simply apply that solution in a real system.” He goes on to explain how reinforcement learning, by contrast, must be applied to a real system from the start. “Give me the data and I will update that brain constantly.”
Customers in the defense industry will leverage Reinforcement Learning Toolbox to, for example, assess all the vulnerabilities of their 5G systems and update their cybersecurity accordingly. “Based upon the vulnerability, they will devise techniques to overcome the accessibility of the unfriendly agent to the system.” Other applications might include cognitive radar where cognitive spectrum management (CSM) would use reinforcement learning to analyze patterns in the spectrum in real-time and predict future spectrum usage based upon previous and real-time data.
Integration, simulation, and testing
As shown in many of these examples, the key to the third step in the workflow is to create a unique dataset to test the effectiveness of the wireless system. “If you use the same dataset to train and test, you’re cheating! Of course it will match. You have to take a set that’s never been seen during training but is still viable and representative and use that for testing,” explains Houman. “That way, there is confidence that different environments can be handled by the system with the training we did in the first step of data gathering and training.” The Wireless Waveform Generator App is meant to assist with both these stages.
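The held-out-test discipline Houman insists on amounts to a few lines in any language; a generic Python sketch:

```python
import numpy as np

rng = np.random.default_rng(3)

# 600 labeled examples; hold out a split that is never seen during training
n = 600
features = rng.standard_normal((n, 8))
labels = rng.integers(0, 3, size=n)

idx = rng.permutation(n)
split = int(0.8 * n)
train_idx, test_idx = idx[:split], idx[split:]  # disjoint by construction

# Train only on train_idx; report performance only on test_idx
train_set = (features[train_idx], labels[train_idx])
test_set = (features[test_idx], labels[test_idx])
```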
Deployment and implementation
The MathWorks approach to deployment works with engineers at the language level, with a vendor-agnostic approach to hardware. “We have a lot of products that turn popular languages into MATLAB code, to train and test the algorithm, and then turn that back into the code that will go into the hardware. For FPGAs and ASICs, for example, the language is Verilog or VHDL. We have a tool called HDL Coder that will take the MATLAB and Simulink model and turn that into low-level VHDL code to go into any hardware.”
Addressing the downsides of AI with the digital twin
The natural conclusion of the interview was understanding the “catch” of using AI to improve wireless systems. “AI takes the input, trains the model, and produces an output. In that process, it merges all the system components into one. All those gains, they change together, so it becomes an opaque system and you lose insight into how the system is working,” said Houman. While this process has considerable benefit, troubleshooting issues can be much more challenging than debugging with solutions that leverage the traditional, iterative approach, where isolating problems might be simpler. “So, in MathWorks, we are working on creating a digital twin of every engineered system, be it a car, an airplane, a spacecraft, or a base station.” Houman describes this as striking a balance between the traditional engineered system approach and an AI-based engineering solution, “Any engineer can compare their design to the all-encompassing digital twin and quickly identify where their problem is. That way, we have the optimization of AI, plus the explainability of model-based systems. You build a system completely in your computer before one molecule goes into the real world.”
Aalyia Shaukat, associate editor at EDN, has worked in the engineering publishing industry for over 8 years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.
Related Content
- Artificial intelligence for wireless networks
- The next embedded frontier: machine learning enabled MCUs
- Use digital predistortion with envelope tracking
- RF predistortion straightens out your signals
- Millimeter wave beamforming and antenna design
The post Applying AI to RF design appeared first on EDN.
Shift in electronic systems design reshaping EDA tools integration
A new systems design toolset aims to create a unified user experience for board designers while adding cloud connectivity and artificial intelligence (AI) capabilities, which will enable engineers to adapt to rapidly changing design and manufacturing environments.
To create this highly integrated and multidisciplinary tool, Siemens EDA has combined its Xpedition system design software with Hyperlynx verification software and PADS Professional software for integrated PCB design. The solution also includes Siemens’ Teamcenter software for product lifecycle management and NX software for product engineering.
Figure 1 The new systems design toolset enhances integration by combining multiple design tools. Source: Siemens EDA
Evolution of systems design
Systems design—spanning from IC design and manufacturing to IC packaging and board design to embedded software—has been constantly evolving over the past decades, and so have toolsets that serve these vital tenets of electronics.
Take IC design, for instance, which is now carried out by multiple specialized outfits. Then there are PCBs, which unified early on, with large vendors offering front-to-back solutions. PCB design then entered an era of multidiscipline design, needing more design automation. Finally, we entered the modern era, which encompasses cloud computing and AI.
Siemens EDA’s next-generation electronic systems design software takes an integrated and multidisciplinary approach to cater to this changing landscape. David Wiens, project manager for Xpedition at Siemens EDA, told EDN that this solution took five years to develop through extensive beta cycles with designers to validate generational shifts in technologies. It’s built on five pillars: Intuition, AI, cloud, integration, and security.
Figure 2 The next-generation electronic system design aims to deliver an intuitive, AI-enhanced, cloud-connected, integrated, and secure solution to empower engineers and organizations in today’s dynamic environment. Source: Siemens EDA
But before explaining these fundamental tenets of electronic systems design, he told EDN what drove this initiative in the first place.
- Workforce in transition
A lot of engineers are retiring, and their expertise is going with them, creating a large gap for young engineers. Then there is this notion that companies haven’t been hiring for a decade or so and that there is a shortage of new engineers. “The highly intuitive tools in this systems design solution aim to overcome talent shortages and enable engineers to quickly adapt with minimal learning curves,” Wiens said.
- Mass electrification
Mass electrification leads to a higher number of design starts, faster design cycles, and increased product complexity. “This new toolset adds predictive engineering and new support assistance using AI to streamline and optimize the design workflows,” said Wiens.
- Geopolitical and supply chain volatility
Wiens said that the COVID era introduced some supply chain challenges while some challenges existed before that due to geopolitical tensions. “COVID just magnified them.”
The new electronic systems design solution aims to address these challenges head-on by providing a seamless flow of data and information throughout the product lifecycle using digital threads. It facilitates a unified user experience that combines cloud connectivity and AI capabilities to drive innovation in electronic systems design.
Below is a closer look at the key building blocks of this unified solution and how it can help engineers to tackle challenges head-on.
- Intuitive
The new toolset boosts productivity with a modern user experience; design engineers can start with a simple user interface and switch to a more complex one later. “We have taken technologies from multiple acquisitions and heritages,” said Wiens. “Each of those had a unique user experience, which made it difficult for engineers to move from one environment to the next.” So, Siemens unified them under a common platform, which allows engineers to move seamlessly from one tool to another.
Figure 3 The new toolset allows engineers to seamlessly move from one tool to another without rework. Source: Siemens EDA
- AI infusion
AI infusion accelerates design optimization and automation. For instance, with predictive AI, design engineers can leverage simulation engines from a broader Siemens portfolio. “The goal is to expand engineering resources without necessarily expanding the human capital and compute power,” Wiens said.
Figure 4 The infusion of AI improves design process efficiency and leverages the knowledge of experienced engineers in the systems design environment. Source: Siemens EDA
Here, features like chat assistance systems allow engineers to ask natural language questions. “We have a natural language datasheet query, which returns the results in natural language, making it much simpler to research components,” he added.
- Cloud connected
While cloud-connected tools enable engineers to collaborate seamlessly across the ecosystem, PCB tools remain largely desktop-based. In small- to mid-sized enterprises, some engineers are shifting to cloud-based tools, but large enterprises have resisted moving to the cloud due to perceived gaps in security and performance.
Figure 5 Cloud connectivity facilitates collaboration across the value chain and provides access to specialized services and resources. Source: Siemens EDA
“Our desktop tools are primary offerings in a simulation environment, but we can perform managed cloud deployment for design engineers,” said Wiens. “When designers are collaborating with outside engineering teams, they often struggle collaborating with partners. We offer a common viewing environment residing in the cloud.”
- Integration
Integration helps break down silos between different teams and tools in systems design. Otherwise, design engineers must spend a lot of time in rework to create the full model when moving from one design tool to another. The same thing happens between design and manufacturing cycles; engineers must rebuild the model in the manufacturing phase.
The new systems design toolset leverages digital threads across multiple domains. “We have enhanced integration with this release to optimize the flow between tools so engineers can control the ins and outs of data,” Wiens said.
- Security
Siemens, which maintains partnerships with leading cloud providers to ensure robust security measures, manages access control based on user role, permission, and location in this systems design toolset. The next-generation systems design offers rigid data access restrictions that can be configured and geo-located.
“It provides engineers with visibility on how data is managed at any stage in design,” said Wiens. “It also ensures the protection of critical design IP.” More importantly, security aspects like monitoring and reporting behavior and anomalies lower the entry barriers for tools being placed in cloud environments.
Need for highly integrated toolsets
The electronics design landscape is constantly changing, and complexity is on the rise. This calls for more integrated solutions that make collaboration between engineering teams easier and safer. These new toolsets must also take advantage of new technologies like AI and cloud computing.
That is how toolsets can adapt to the changing realities of the evolving electronics design landscape, from organizational flexibility to time to productivity.
Related Content
- It’s Time for AI in PCB Design
- Board Systems Design and Verification
- Optimizing Electronics Design With AI Co-Pilots
- ‘Cloud Computing Is Changing Everything About Electronic Design’
- How highly skilled engineers manage complexity of embedded systems design
The post Shift in electronic systems design reshaping EDA tools integration appeared first on EDN.
Taking a peek inside an infrared thermometer
Back in September, within the introduction to my teardown of a pulse oximeter, I wrote:
One upside, for lack of a better word, to my health setback [editor note: a recent, and to the best of my knowledge first-time, COVID infection over the July 4th holidays] is that it finally prompted me to put into motion a longstanding plan to do a few pandemic-themed teardowns.
That pulse oximeter piece was the kickoff to the series; this one, a dissection of an infrared thermometer, is the second (and the wrap-up, unless I subsequently think of something else!). These devices gained pervasive use during the peak period of the COVID-19 pandemic, courtesy of their non-contact subject measurement capabilities. As Wikipedia puts it:
At times of epidemics of diseases causing fever…infrared thermometers have been used to check arriving travelers for fever without causing harmful transmissions among the tested. In 2020 when [the] COVID-19 pandemic hit the world, infrared thermometers were used to measure people’s temperature and deny them entry to potential transmission sites if they showed signs of fever. Public health authorities such as the FDA in United States published rules to assure accuracy and consistency among the infrared thermometers.
And how do they work? Wikipedia again, with an introductory summary:
An infrared thermometer is a thermometer which infers temperature from a portion of the thermal radiation sometimes called black-body radiation emitted by the object being measured. They are sometimes called laser thermometers as a laser is used to help aim the thermometer, or non-contact thermometers or temperature guns, to describe the device’s ability to measure temperature from a distance. By knowing the amount of infrared energy emitted by the object and its emissivity, the object’s temperature can often be determined within a certain range of its actual temperature. Infrared thermometers are a subset of devices known as “thermal radiation thermometers”.
Sometimes, especially near ambient temperatures, readings may be subject to error due to the reflection of radiation from a hotter body—even the person holding the instrument—rather than radiated by the object being measured, and to an incorrectly assumed emissivity. The design essentially consists of a lens to focus the infrared thermal radiation on to a detector, which converts the radiant power to an electrical signal that can be displayed in units of temperature after being compensated for ambient temperature. This permits temperature measurement from a distance without contact with the object to be measured. A non-contact infrared thermometer is useful for measuring temperature under circumstances where thermocouples or other probe-type sensors cannot be used or do not produce accurate data for a variety of reasons.
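The emissivity compensation described above follows from the Stefan–Boltzmann law: the detector sees flux emitted by the object plus ambient radiation reflected off it, and the firmware inverts that relationship. Here is a minimal sketch of that ideal total-radiation model; real instruments sense only a spectral band and rely on calibration tables, so this is illustrative, not a production algorithm.

```python
# Minimal emissivity-compensated pyrometry model (ideal total radiation).
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiant_flux(t_obj_k: float, t_amb_k: float, emissivity: float) -> float:
    """Flux reaching the detector: emitted by the object plus ambient
    radiation reflected off it (reflectivity ~ 1 - emissivity)."""
    return (emissivity * SIGMA * t_obj_k**4
            + (1.0 - emissivity) * SIGMA * t_amb_k**4)

def object_temperature(flux: float, t_amb_k: float, emissivity: float) -> float:
    """Invert the flux equation to recover the object temperature (K)."""
    t4 = (flux / SIGMA - (1.0 - emissivity) * t_amb_k**4) / emissivity
    return t4 ** 0.25

# Round trip: a 310 K (~37 degC) forehead viewed from a 293 K room,
# assuming skin emissivity of ~0.98.
flux = radiant_flux(310.0, 293.0, 0.98)
print(round(object_temperature(flux, 293.0, 0.98), 1))  # 310.0
```

The same math shows why an incorrectly assumed emissivity skews the reading: with a lower true emissivity, more of the measured flux is reflected ambient radiation rather than object emission.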
Today’s victim, like my replacement for the precursor pulse oximeter teardown subject, came to me via a May 2024 Meh promotion. A two-pack had set me back only $10, believe it or not (I wonder what they would have cost me in 2020?). One entered our home health care gear stable, while the other will be disassembled here. I’ll start with some stock photos:
Now for some as-usual teardown-opening box shots:
Speaking of opening:
The contents include our patient (of course), a set of AA batteries (which I’ll press into reuse service elsewhere):
and a couple of slivers of literature:
Now for the star of the show, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes (the Meh product page claims that the infrared thermometer is “small” and “lightweight” but isn’t any more specific than that). Front:
They really don’t think that sticker’s going to deter me, do they?
Back:
A closeup of the “LCD Backlit display with 32 record memory”, with a translucent usage-caution sticker from-factory stuck on top of it:
Right (as defined from the user’s perspective) side, showcasing the three UI control buttons:
Left:
revealing the product name (Safe-Mate LX-26E, also sold under the Visiomed brand name) and operating range (2-5 cm). The label also taught me something new; the batteries commonly referred to as “AAs” are officially known as “LR6s”:
Top:
Another sticker closeup:
And bottom, showcasing the aforementioned-batteries compartment “door”:
Flipping it open reveals a promising screw-head pathway inside:
although my initial attempts at separating the left and right halves were only partially successful:
That said, they did prompt the battery-compartment door to fall out:
I decided to pause my unhelpful curses and search for other screw heads. Nothing here:
or here:
Here either, although I did gain a fuller look at the switches (complete with intriguing connections-to-insides traces) and their rubberized cover:
A-ha!
That’s more like it (complete with a trigger fly-away):
I was now able to remove the cap surrounding the infrared receiver module:
Followed by the module itself, along with the PCB it was (at the moment) connected to:
Some standalone shots of the module and its now-separated ribbon cable:
And of the other now-disconnected ribbon cable, this one leading to the trifecta of switches on the outside:
Here’s the front of the PCB, both in with-battery-compartment overview:
and closeup perspectives, the latter more clearly revealing its constituent components, such as the trigger switch toward the bottom, an IC from Chipsea Technologies labeled “2012p1a” toward the top, and another labeled:
CHIPSEA
18M88-LQ
2020C1A
at the top (reader insights into the identities of either/both of these ICs are greatly appreciated):
And here’s the piezo buzzer-dominant, comparatively bland (at least at first glance) backside:
which became much more interesting after I lifted away the “LCD Backlit display with 32 record memory”, revealing a more complex-PCB underside than I’d originally expected:
That’s all I’ve got for today. What did you find surprising, interesting and/or potentially underwhelming about the design? Let me (and your fellow readers) know in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Peering inside a Pulse Oximeter
- Thermometer measures temperature without contact
- Teardown: A smartwatch with an athletic tradition
- Teardown: Inside the art of pulse oximetry
- Teardown: Fitness band hardware
- Teardown: A fitness tracker that drives chip demand
- Teardown: Misfit Shine 2 and the art of power management
The post Taking a peek inside an infrared thermometer appeared first on EDN.
Simple 5-component oscillator works below 0.8V
Often, one needs a simple low voltage sinusoidal oscillator with good amplitude and frequency stability and low harmonic distortion; here, the Peltz oscillator becomes a viable candidate. Please see the Peltz oscillator Analog Devices Wiki page here and a discussion on my Peltz oscillator here.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Shown in Figure 1, the Peltz oscillator requires only two transistors, one capacitor, one inductor and one resistor. In this configuration, the output voltage is a ground referenced, direct coupled, low distortion sinewave, swinging above and below ground at ~1 Vbe, while operating from a low negative supply voltage (AAA battery).
Figure 1 Basic configuration of a Peltz oscillator with a low component count yielding a low distortion sinewave output.
The oscillation frequency is set by the parallel LC resonance: f = 1/(2π√(LC)).
A simplified analysis shows the minimum negative supply voltage (Vee) is:
Where Vt is the Thermal Voltage (kT/q), Z is the total impedance “seen” at the parallel resonant LC network, Vbe is the base emitter voltage of Q1 [Vt*ln(Ic/Is)], and Is is the transistor saturation current.
Here’s an example with a pair of 2N3904s, a 470 µH inductor, 0.22 µF capacitor, and a 510 Ω bias resistor, powered from a single AAA cell (the oscillator actually works at ~0.7 VDC), producing a stable, low noise ~16 kHz sinewave as shown in Figure 2, Figure 3, and Figure 4.
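Plugging the component values above into the resonance formula confirms the observed output:

```python
# Sanity check of the ~16 kHz figure: the Peltz oscillator runs near the
# parallel LC resonance, f = 1 / (2*pi*sqrt(L*C)).
import math

L = 470e-6   # 470 uH inductor
C = 0.22e-6  # 0.22 uF capacitor

f = 1.0 / (2.0 * math.pi * math.sqrt(L * C))
print(f"{f / 1e3:.1f} kHz")  # ~15.7 kHz, in line with the measured ~16 kHz
```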
Figure 2 Peltz oscillator output with a clean 16 kHz sinewave.
Figure 3 Spectral view of sinewave showing fundamental as well as 2nd and 3rd harmonics.
Figure 4 Zoomed in view of ~16 kHz sinewave.
Note that the output frequency, peak-to-peak amplitude, and overall waveform quality are not bad for a 5-element oscillator!
Michael A Wyatt is a life member of IEEE and has continued to enjoy electronics ever since his childhood. Mike has had a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, and ViaSat, before (semi-)retiring with Wyatt Labs. During his career he accumulated 32 US patents and published a number of EDN articles, including Best Idea of the Year in 1989.
Related Content
- Ultra-low distortion oscillator, part 1: how not to do it.
- Ultra-low distortion oscillator, part 2: the real deal
- Oscillators: How to generate a precise clock source
- Oscillator has voltage-controlled duty cycle
- The Colpitts oscillator
- Add one resistor to give bipolar LM555 oscillator a 50:50 duty cycle
The post Simple 5-component oscillator works below 0.8V appeared first on EDN.
The whole-house LAN: Achilles-heel alternatives, tradeoffs, and plans
I mentioned recently that for the third time in roughly a decade, a subset of the electronics suite in my residence had gotten zapped by a close-proximity lightning storm. Although this follow-up writeup, one of a planned series, was already proposed to (and approved by) Aalyia at the time, subsequent earlier-post comments exchanges with a couple of readers were equal parts informative and validating on this one’s topical relevance.
First off, here’s what reader Thinking_J had to say:
Only 3 times in a 10-year span, in the area SW of Colorado Springs?
Brian, you appear to be lucky.
My response:
Southwest of Golden (and Denver, for that matter), not Colorado Springs, but yes, the broader area is active each year’s “monsoon season”:
https://climate.colostate.edu/co_nam.html
The “monsoon season” I was referencing historically runs from mid-June through the end of September. Storms normally fire up beginning mid-afternoon and can continue overnight and into the next morning. As an example of what they look like, I grabbed a precipitation-plot screenshot during a subsequent storm this year; I live in Genesee, explicitly noted on the map:
Wild, huh?
Then there were the in-depth thoughts of reader “bdcst”, in a posting only the first half of which I’ve republished here for brevity (that said, I encourage you to read the post in its entirety at the original-published location):
Hi Brian,
Several things come to mind. First is, if you think it was EMP, then how will moving your copper indoors make a difference unless you live in a Faraday cage shielded home? The best way to prevent lightning induced surges from entering your equipment via your network connection, is to go to a fiber drop from your ISP, cable or telecom carrier. You could also change over to shielded CAT-6 Ethernet cable.
At my broadcast tower sites, it’s the incoming copper, from the tower, or telephone system or from the power line itself that brings lighting induced current indoors. Even with decent suppressors on all incoming copper, the only way to dissipate most of the differential voltage from the large current spikes is with near zero ohms bonding between every piece of equipment and to a single very low impedance earth ground point. All metal surfaces in my buildings are grounded by large diameter flexible copper wire, even the metal entrance door is bonded to it bypassing the resistance of its hinges.
When I built my home at the end of a long rural power line, I experienced odd failures during electrical storms. I built my own power line suppressor with the largest GE MOV’s I could find. That eliminated my lightning issues. Of course, surge suppressors must have very low resistance path to ground to be effective. If you can’t get a fiber drop for your data, then do install several layers of Ethernet suppressors between the incoming line and your home. And do install at least a small AC line suppressor in place of a two-pole circuit breaker in your main panel, preferably at the top of the panel where the main circuit breaker resides.
My response, several aspects of which I’ll elaborate on in this writeup:
Thanks everso for your detailed comments and suggestions. Unfortunately, fiber broadband isn’t an option here; I actually feel fortunate (given its rural status) to have Gbit coax courtesy of Comcast:
https://www.edn.com/a-quest-for-faster-upstream-bandwidth/
Regarding internal-vs-external wired Ethernet spans, I don’t know why, but the only times I’ve had Ethernet-connected devices fry (excluding coax and HDMI, which also have been problematic in the past) are related to those (multi-port switches, to be precise) on one or both ends of an external-traversed Ethernet span. Fully internal Ethernet connections appear to be immune. The home has cedar siding and of course there’s also insulation in the walls and ceiling, so perhaps that (along with incremental air gaps) in sum provides sufficient protection?
Your question regarding Ethernet suppressors ties nicely into one of the themes of an upcoming planned blog post. I’ve done only rudimentary research so far, but from what I’ve uncovered to date, they tend to be either:
- Inexpensive but basically ineffective or
- Incredibly expensive, but then again, replacement plasma TVs and such are pricey too (http://www.edn.com/electronics-blogs/brians-brain/4435969/lightning-strike-becomes-emp-weapon-)
Plus, I’m always concerned about bandwidth degradation that may result from the added intermediary circuitry (same goes for coax). Any specific suggestions you have would be greatly appreciated.
Thanks again for writing!
Before continuing, an overview of my home network will be first-time informative for some and act as a memory-refresher to long-time readers for whom I’ve already touched on various aspects. Mine’s a two-story home, with the furnace room, roughly in the middle of the lower level, acting as the networking nexus. Comcast-served coax enters there from the outside and, after routing through my cable modem and router, feeds into an eight-port GbE switch. From there, I’ve been predominantly leveraging Ethernet runs originally laid by the prior owner.
In one direction, Cat 5 (I’m assuming, given its age, versus a newer generation) first routes through the interstitial space between the two levels of the house to the far wall of the family room next to the furnace room, connecting to another 8-port GbE switch. At that point, another Ethernet span exits the house, is tacked to the cedar wood exterior and runs to the upper-level living room at one end of the house, where it re-enters and connects to another 8-port GbE switch. In the opposite direction, another Cat 5 span exits the house at the furnace room and routes outside to the upper-level master bedroom at the other end of the house, where it re-enters and connects to a five-port GbE switch. Although the internal-only Ethernet is seemingly comprised of conventional unshielded cable, judging from its flexibility, I was reminded via examination in prep for tackling this writeup that the external wiring is definitely shielded, not that this did me any protective good (unsurprisingly, sadly, given that externally-routed shielded coax cable spans from room to room have similarly still proven vulnerable in the past).
Normally, there are four Wi-Fi nodes in operation, in a mesh configuration comprised of Google Nest Wifi routers:
- The router, in the furnace room downstairs
- A mesh point in the master bedroom upstairs at one end of the house
- Another in the living room upstairs at the other end of the house
- And one more downstairs, in an office directly below the living room
Why routers in the latter three cases, versus less expensive access points? In the Google Nest Wifi generation, unlike the Google OnHub and Google Wifi precursors (as well as the Google Nest Wifi Pro successor, ironically), access points are only wirelessly accessible; they don’t offer Ethernet connectivity as an option for, among other things, creating a wired “mesh” backbone (you’ll soon see why such a backbone is desirable). Plus, Google Nest Wifi routers’ Wi-Fi subsystems are more robust: AC2200 MU-MIMO with 4×4 on 5 GHz and 2×2 on 2.4 GHz, versus only AC1200 MU-MIMO 2×2 on both bands for the Google Nest Wifi Point. And the Point’s inclusion of a speaker is a don’t-care (more accurately: a detriment) to me.
Since we bought the house, I’ve augmented the already-existing Ethernet wiring with two other notable spans, both internal-only. One runs from the furnace room to my office directly above it (I did end up replacing the original incomplete-cable addition with a fully GbE-compliant successor). The other goes through the wall between the family room and the earlier-mentioned office beyond it (and below the living room), providing it with robust Wi-Fi coverage. As you’ll soon see, this particular AP ended up being a key (albeit imperfect) player in my current monsoon-season workaround.
Speaking of workarounds, what are my solution options, given that the outdoor-routed Ethernet cable is already shielded? Perhaps the easiest option would be to try installing Ethernet surge protectors at each end of the two outdoors-dominant spans. Here, for example are some that sell for $9.99 a pair at Amazon (and were discounted to $7.99 a pair during the recent Prime Fall Days promotion; I actually placed an order but then canceled it after I read the fine print):
As the inset image shows and the following teardown image (conveniently supplied by the manufacturer) further details, they basically just consist of a bunch of diodes:
This one’s twice as expensive, albeit still quite inexpensive, and adds an earth ground strap:
Again, nothing but diodes (the cluster of four on each end are M7s; I can’t read the markings on the middle two), though:
Problem #1: diving into the fine print (hence my earlier-mentioned order cancellation), you’ll find that they only support passing 100 Mbit Ethernet through, not GbE. Problem #2: judging from the user comments published on both products, they don’t seem to work, at least at the atmospheric-electricity intensities my residence sees.
Ok, then, if my observation passes muster that internal-only Ethernet spans, even unshielded ones, are seemingly EMI-immune, why not run replacement cabling from the furnace room to both upper-level ends of the house through the interstitial space between the levels, as well as between the inner and outer walls? That may indeed be what I end up biting the bullet and doing, but the necessary navigation around (and/or through) enroute joists, ductwork and other obstacles is not something that I’m relishing, fiscally or otherwise. In-advance is always preferable to after-the-fact when it comes to such things, after all! Ironically, right before sitting down to start writing this post, I skimmed through the final print edition of Sound & Vision magazine, which included a great writeup by home installer (and long-time column contributor) John Sciacca. There’s a fundamentally solid reason why he wrote the following wise words!
A few of my biggest tips: Prewire for everything (a wire you aren’t using today might be a lifesaver tomorrow!), leave a conduit if possible…
What about MoCA (coax-based networking) or powerline networking? No thanks. As I’ve already mentioned, the existing external-routed coax wiring has proven vulnerable to close-proximity lightning, too. If I’m going to run internally routed cable instead, I’ll just do Ethernet. And after several decades’ worth of dealing with powerline’s unfulfilled promise due to its struggles to traverse multiple circuit breakers and phases, including at this house (which has two breaker boxes, believe it or not, the original one in the garage and a newer supplement in the furnace room), along with injected noise from furnaces, air conditioning units, hair dryers, innumerable wall warts and the like, I’ve frankly collected more than enough scars already. But speaking of breaker boxes, by the way, I’ve already implemented one of the earlier documented suggestions from reader “bdcst”, courtesy of an electrician visit a few years back:
The final option, which I did try (with interesting results), involved disconnecting both ends of the exterior-routed Cat 5 spans and instead relying solely on wireless backbones for the mesh access points upstairs at both ends of the house. As setup for the results to come, I’ll first share what the wired-only connectivity looks like between the furnace room and my office directly above it. I’m still relying predominantly on my legacy, now-obsolete (per Windows 8’s demise) Windows Media Center-based cable TV-distribution scheme, which has a convenient built-in Network Tuner facility accessible via any of the Xbox 360s acting as Windows Media Extenders:
In preparation for my external-Ethernet severing experiment, to maximize the robustness of the resultant wireless backbone connectivity to both ends of the house, I installed a fifth Google Nest Wifi router-as-access point in the office. It indeed resulted in reasonably robust, albeit more erratic, bandwidth between the router and the access point in the living room, first as reported in the Google Home app:
and then by Windows Media Center’s Network Tuner:
I occasionally experienced brief A/V dropouts and freezes with this specific configuration. More notably, the Windows Media Center UI was more sluggish than before, especially in its response to remote control button presses (fast-forward and -rewind attempts were particularly maddening). Most disconcerting, however, was the fact that my wife’s iPhone now frequently lost network connectivity after she traversed from one level of the house to the other, until she toggled it into and then back out of Airplane Mode.
One of the downsides of mesh networks is that, because all nodes broadcast the exact same SSID (in various Google Wifi product families’ case), or the same multi-SSID suite for other mesh setups that use different names for the 2.4 GHz, 5 GHz, and 6 GHz beacons, it’s difficult (especially with Google’s elementary Home utility) to figure out exactly what node you’re connected to at any point in time. I hypothesized that her iPhone was stubbornly clinging to the now-unusable Wi-Fi node she was using before versus switching to the now-stronger signal of a different node in her destination location. Regardless, once I re-disconnected the additional access point in my office, her phone’s robust roaming behavior returned:
But as the above screenshot alludes to, I ended up with other problems in exchange. Note, specifically, the now-weak backbone connectivity reported by the living room node (although, curiously, connectivity between the master bedroom and furnace room remained solid even now over Wi-Fi). The mesh access point in the living room was, I suspect, now wirelessly connected to the one in the office below it, ironically a shorter node-to-node distance than before, but passing through the interstitial space between the levels. And directly between the two nodes in that interstitial space is a big hunk of metal ductwork. Note, too, that the Google Nest Wifi system is based on Wi-Fi 5 (802.11ac) technology, and that the wireless backbone is specifically implemented using the 5 GHz band, which is higher-bandwidth than its 2.4 GHz counterpart but also inherently shorter-range. The result was predictable:
The experiment wasn’t a total waste, though. On a hunch, I tried using the Xfinity Stream app on my Roku to view Comcast-sourced content instead. The delivery mechanism here is completely different: streamed over the Internet and originating from Comcast’s server, versus solely over my LAN from the mini PC source (in all cases, whether live, time-shifted or fully pre-recorded, originating at my Comcast coax TV feed via a SiliconDust HDHomeRun Prime CableCARD intermediary). I wasn’t direct-connecting to premises Wi-Fi from the Roku; instead, I kept it wired Ethernet-connected to the multi-port switch as before, leveraging the now-wireless-backbone-connected access point also connected to the switch there instead. And, as a pleasant surprise to me, I consistently received solid streaming delivery.
What’s changed? Let’s look first at the video codec leveraged. The WTV “wrapper” (container) format now in use by Windows Media Center supersedes the DVR-MS precursor with expanded support for both legacy MPEG-2 and newer MPEG-4 video. And indeed, although a perusal of a recent recorded-show file via Windows Explorer’s File Properties option was fruitless (the audio and video codec sections were blank), pulling the file into VLC Media Player and examining it there proved more enlightening. There were two embedded audio tracks, one English and the other Spanish, both Dolby AC3-encoded. And the video was encoded using H.264, i.e., MPEG-4 AVC (Part 10). Interestingly, again according to VLC, it was formatted at 1280×720 pixel resolution and a 59.94 fps frame rate. And the bitrate varied over time, confirming VBR encoding, with input and demuxed stream bitrates both spiking to >8,000 kb/sec peaks.
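For anyone who wants to script this kind of codec inspection rather than clicking through VLC, ffprobe (part of FFmpeg) can emit stream metadata as JSON, e.g., `ffprobe -v error -show_streams -of json recording.wtv`. The snippet below parses a hand-written stand-in for that output (the JSON sample is illustrative, not captured from my actual file):

```python
# Parse ffprobe-style JSON stream metadata. The sample below is a
# hand-written stand-in modeled on the VLC findings described above.
import json

sample_ffprobe_output = """
{
  "streams": [
    {"codec_type": "video", "codec_name": "h264",
     "width": 1280, "height": 720, "avg_frame_rate": "60000/1001"},
    {"codec_type": "audio", "codec_name": "ac3", "tags": {"language": "eng"}},
    {"codec_type": "audio", "codec_name": "ac3", "tags": {"language": "spa"}}
  ]
}
"""

streams = json.loads(sample_ffprobe_output)["streams"]
for s in streams:
    if s["codec_type"] == "video":
        # Frame rates are reported as rationals, e.g. 60000/1001 = 59.94 fps
        num, den = map(int, s["avg_frame_rate"].split("/"))
        print(f"video: {s['codec_name']} {s['width']}x{s['height']} "
              f"@ {num / den:.2f} fps")
    else:
        lang = s.get("tags", {}).get("language", "und")
        print(f"audio: {s['codec_name']} ({lang})")
```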
The good news here, from a Windows Media Center standpoint, is two-fold: it’s not still using archaic MPEG-2 as I’d feared beforehand might have been the case, and the MPEG-4 profile in use is reasonably advanced. The bad news, however, is that it’s only using AVC, and at a high frame rate (therefore bitrate) to boot. Conversely, Roku players also support the more advanced HEVC and VP9 video codec formats (alas, I have no idea what’s being used in this case). And, because the content is streamed directly from Comcast’s server, the Roku and server can communicate to adaptively adjust resolution, frame rate, compression level and other bitrate-related variables, maximizing playback quality as WAN and LAN bandwidth dynamically vary.
For now, given that monsoon season is (supposedly, at least) over until next summer, I’ve reconnected the external Cat 5 spans. And it’s nice to know that when the “thunderbolt and lightning, very, very frightening” return, I can always temporarily sever the external Ethernet again, relying on my Rokus’ Xfinity Stream apps instead. That said, I also plan to eventually try out newer Wi-Fi technology, to further test the hypothesis that “wires beat wireless every time”. Nearing 3,000 words, I’ll save more details on that for another post to come. And until then, I as-always welcome your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Lightning strikes…thrice???!!!
- Empty powerline networking promises
- Lightning strike becomes EMP weapon
- Devices fall victim to lightning strike, again
- Ground strikes and lightning protection of buried cables
- Teardown: Lightning strike explodes a switch’s IC
- Teardown: Ethernet and EMP take out TV tuner
The post The whole-house LAN: Achilles-heel alternatives, tradeoffs, and plans appeared first on EDN.
Integrating digital isolators in smart home devices
Smart home devices are becoming increasingly popular with many households adopting smart thermostats, lighting systems, security systems, and home entertainment systems. These devices provide automation and wireless control of household functions, allowing users to monitor and control their homes from a mobile app or digital interface.
But despite the advantages of smart home devices, users also face an increased risk of electrical malfunctions that may result in electric shock, fire, or direct damage to the device. This article discusses the importance of integrating digital isolators in smart home devices to ensure safety and reliability.
Definition of a digital isolator
A digital isolator is an electronic device that provides electrical isolation between two circuits while allowing digital signals to pass between the circuits. By using electromagnetic or capacitive coupling, the digital isolator transmits data across the isolation barrier without requiring a direct electrical connection.
Digital isolators are often used in applications where electrical isolation is necessary to protect sensitive circuitry from high voltages, noise, or other hazards. They can be used in power supplies, motor control, medical devices, industrial automation, and other applications where safety and reliability are critical. Figure 1 shows a capacitive isolation diagram.
Figure 1 The capacitive isolation diagram includes the top electrode, bottom electrode, and wire bonds. Source: Monolithic Power Systems
Understanding isolation rating
The required isolation voltage is an important consideration when choosing a digital isolator, since it impacts the total solution cost. Isolators generally have one of two isolation classifications: basic isolation or reinforced isolation.
- Basic isolation: This provides sufficient insulation material to protect a person or device from electrical harm; however, the risk of electrical malfunctions is still present if the isolation barrier is broken. Some devices use two layers of basic isolation as a protective measure in the case of the first layer breaking; this is called double isolation.
- Reinforced isolation: This is equivalent to dual basic isolation and is implemented by strengthening the isolation barrier to decrease the chances of the barrier breaking compared to basic isolation.
Figure 2 shows the three types of isolation: basic isolation, double isolation, and reinforced isolation.
Figure 2 The three types of isolation are basic isolation, double isolation, and reinforced isolation. Source: Monolithic Power Systems
Here, creepage distance is the shortest distance between two conductive elements on opposite sides of the isolation barrier and is measured along the isolation surface. Clearance distance is a common parameter that is similar to creepage distance but is measured along a direct path through the air.
As a result, creepage distance is always equal to or greater than the clearance distance, but both are heavily dependent on the IC’s package structure. Parameters such as pin-to-pin distance and body width have a strong correlation with the isolation voltage for isolated components. Wider pin-to-pin spacing and wider package bodies support higher isolation voltages, but they also take up more board space and increase the overall system cost.
Depending on the system design and isolation voltage requirements, different isolation ratings are available, typically corresponding to the package type. Small outline integrated circuit (SOIC) packages often have 1.27-mm pin-to-pin spacing and are available in narrow body (3.9-mm package width) or wide body (7.5-mm package width) formats.
The wide-body package is commonly used to meet reinforced 5-kVRMS requirements, while the narrow-body package is used in applications where the maximum withstand isolation voltage is 3 kVRMS. In some cases, extra-wide-body packages with >14.5-mm creepage are used in certain 800-V+ systems to meet the creepage and clearance requirements.
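As a rough illustration, the package ratings above can be folded into a simple selection helper. The thresholds below are the example figures quoted in this article; a real design must follow the specific isolator’s datasheet and the applicable safety standard.

```python
# Illustrative SOIC package selection from a required isolation voltage.
# Thresholds are the example figures cited in this article, not a
# substitute for a specific isolator's datasheet.

def select_soic_package(required_isolation_kvrms: float,
                        system_bus_voltage: float = 0.0) -> str:
    """Return a rough SOIC package class for a required isolation voltage."""
    if system_bus_voltage > 800:
        # 800 V+ systems may need >14.5 mm creepage
        return "extra-wide body (>14.5 mm creepage)"
    if required_isolation_kvrms <= 3.0:
        return "narrow body (3.9 mm, up to 3 kVRMS)"
    if required_isolation_kvrms <= 5.0:
        return "wide body (7.5 mm, reinforced 5 kVRMS)"
    return "extra-wide body (>14.5 mm creepage)"

print(select_soic_package(2.5))           # narrow body
print(select_soic_package(5.0))           # wide body
print(select_soic_package(3.0, 900.0))    # extra-wide body
```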
Figure 3 shows the clearance and creepage distances in an SOIC package.
Figure 3 Varying clearance and creepage distances are used in SOIC packages to meet design requirements. Source: Monolithic Power Systems
Safety regulations for digital isolators
Safety certifications such as UL 1577, VDE, CSA, and CQC play a pivotal role in ensuring the reliability and safety of digital isolators within various electronic systems. These certifications are described below:
- UL 1577: This certification, established by Underwriters Laboratories, sets stringent standards to evaluate the insulation and isolation performance of digital isolators. Factors including voltage isolation, leakage current, and insulation resistance are examined to ensure compliance with safety requirements.
- VDE: This certification is predominantly recognized in Europe and verifies the quality and safety of electrical products, including digital isolators, through rigorous testing methodologies. VDE certification indicates that the isolators meet the specified safety criteria and conform to European safety standards, ensuring their reliability and functionality in diverse applications.
- Canadian Standards Association (CSA): This certification guarantees that digital isolators adhere to Canadian safety regulations and standards, ensuring their reliability and safety in electronic systems deployed across Canada.
- China Quality Certification (CQC): This certification verifies compliance with Chinese national safety standards such as GB 4943.1-2022, which covers conformity assessment and quality control for audio/video, information, and communication technology equipment.
Together, these certifications give manufacturers, engineers, and consumers confidence that digital isolators have undergone comprehensive testing and comply with stringent safety measures, contributing to the safety and reliability of the electronic devices and systems that use them across global markets.
Features of digital isolators vs. optocouplers
Traditionally, the isolated transfer of digital signals has been carried out using optocouplers. These devices harness light to transfer signals through the isolation barrier, using an LED and a photosensitive device, typically a phototransistor. The signal on one side of the isolation barrier turns the LED on and off.
When the photons emitted by the LED impact the phototransistor’s base-collector junction, a current is formed in the base and becomes amplified by the transistor’s current gain, transmitting the same digital signal on the opposite side of the isolation barrier.
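The signal chain described above is often summarized in optocoupler datasheets by the current transfer ratio (CTR), the ratio of output collector current to LED forward current. A minimal numeric sketch (the example values are illustrative):

```python
# Rough optocoupler model from the description above: LED forward
# current produces a photocurrent at the phototransistor's base, which
# the transistor amplifies by its current gain. Datasheets capture the
# end-to-end ratio as the current transfer ratio (CTR).

def collector_current_ma(led_forward_ma: float, ctr: float) -> float:
    """Output collector current for a given LED drive current and CTR."""
    return led_forward_ma * ctr

# e.g., 10 mA of LED drive with a CTR of 0.5 yields 5 mA at the output
print(collector_current_ma(10.0, 0.5))  # 5.0
```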
Digital isolators offer four key advantages over optocouplers in smart home devices:
- Low power consumption: Digital isolators do not need to drive a light source, and instead use more efficient channels to transfer the signal. This makes digital isolators ideal for battery-powered devices such as smart thermostats and security sensors.
- High-speed data transmission: Phototransistors have long response times, which limits the bandwidth of optocouplers. Digital isolators, on the other hand, can transfer signals much faster, enabling fast and reliable communication between smart home devices and control systems.
- Low electromagnetic interference (EMI): EMI can interfere with electronic devices in the home. By adopting capacitive isolation technology, digital isolators are more immune to EMI.
- Wide operating temperature range: This makes digital isolators suitable for a variety of robust environments, including outdoor applications.
Types of digital isolation
There are two types of digital isolation: magnetic isolation, which relies on a transformer to transmit signals, and capacitive isolation, which uses a capacitor whose dielectric forms the isolation barrier. In both cases, the barrier blocks direct current flow and isolates the input and output circuits.
Capacitive isolation is the most commonly used method due to several advantages.
- Higher data rates: Capacitive isolation supports higher data rates than magnetic isolation, making it suitable for applications that require fast and reliable communication.
- Lower power consumption: Compared to magnetic isolation or optical isolation, capacitive isolation typically consumes less power, making it a more energy-efficient choice for battery-powered devices.
- Smaller size: Capacitive isolators are typically smaller than magnetic isolators or optical isolators, which eases their integration into small electronic devices.
- Lower cost: Capacitive isolators are typically less expensive than optical isolators, which rely on expensive optoelectronic components like LEDs and photodiodes.
- Higher immunity to EMI: Compared to magnetic isolation, capacitive isolation is less susceptible to EMI, making it a more reliable choice in noisy environments.
Figure 4 compares traditional optical isolation with magnetic and capacitive isolation.
Figure 4 Capacitive isolation offers key advantages over optical isolation and magnetic isolation. Source: Monolithic Power Systems
The type of digital isolation used depends on the application specifications, such as the required data rate, temperature range, or the level of electrical noise in the environment. Figure 5 shows a block diagram of a smart refrigerator, which requires three digital isolators.
Figure 5 The block diagram of a smart refrigerator that requires three digital isolators. Source: Monolithic Power Systems
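The tradeoffs above can be sketched as a toy selection helper. The rules and cutoff values below are invented for illustration only; real selection also weighs the voltage rating, required certifications, and lifetime.

```python
# Toy helper mapping application requirements to an isolation technology,
# following the tradeoffs summarized in this article. The rules and
# numeric cutoffs are illustrative, not normative.

def suggest_isolation(data_rate_mbps: float,
                      battery_powered: bool,
                      high_emi: bool) -> str:
    if battery_powered or high_emi or data_rate_mbps > 25:
        # Capacitive isolators lead on speed, power, size, cost, and EMI
        return "capacitive"
    if data_rate_mbps > 10:
        return "magnetic"
    # Optocouplers remain common in slower or legacy designs
    return "optical (optocoupler)"

print(suggest_isolation(50.0, False, False))  # capacitive
print(suggest_isolation(15.0, False, False))  # magnetic
print(suggest_isolation(1.0, False, False))   # optical (optocoupler)
```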
Applications of digital isolators in smart home devices
Providing electrical isolation between the control system and appliance circuitry is crucial to ensure user safety and to protect smart home devices from outside interference or hacking. Examples of smart home devices that integrate digital isolators include smart lighting systems, smart security systems, smart thermostats, and smart home entertainment systems, described in further detail below.
Smart lighting systems
In smart lighting systems, digital isolators provide isolation between the control system and the high-voltage lighting circuitry. This prevents the user from coming into contact with high-voltage electrical signals.
Smart security systems
In smart home security systems, digital isolators provide isolation between the control system and the sensors or cameras. Isolating the sensitive control circuitry from the outside world addresses concerns regarding outside interference to the security system.
Smart thermostats
In smart thermostats, digital isolators provide isolation between the control system and the heating or cooling circuits. This minimizes damage to the control system from high-voltage or high-current signals in the heating or cooling circuits.
Smart home entertainment systems
In smart home entertainment systems like smart speakers, digital isolators provide isolation between the control system and the audio or video circuits. This achieves high-quality playback by preventing interference or noise in the audio or video signals.
George Chen is product marketing manager at Monolithic Power Systems (MPS).
Tomas Hudson is applications engineer at Monolithic Power Systems (MPS).
Related Content
- Think Your Home Is Smart? Think Again
- How to design with capacitive digital isolators
- Shocking protection with reinforced digital isolators
- Smart home: 4 things you should know about Matter
- Digital Isolation: What Every Automotive Designer Needs to Know
The post Integrating digital isolators in smart home devices appeared first on EDN.
Online tool programs smart sensors for AIoT
ST’s web-based tool, AIoT Craft, simplifies the development and provisioning of node-to-cloud AIoT projects that use the machine-learning core (MLC) of ST’s smart MEMS sensors. Intended for both beginners and seasoned developers, AIoT Craft helps program these sensors to run inference operations.
The MLC enables decision-tree learning models to run directly in the sensor. Operating autonomously without host system involvement, the MLC handles tasks that require AI skills, such as classification and pattern detection.
To ease the creation of decision-tree models, AIoT Craft includes AutoML, which automatically selects optimal attributes, filters, and window size for sensor datasets. This framework also trains the decision tree to run on the MLC and generates the configuration file to deploy the trained model. To provision the IoT project, the gateway can be programmed with the Data Sufficiency Module, which intelligently filters the data points transmitted to the cloud.
As part of the ST Edge AI Suite, AIoT Craft offers customizable example code for in-sensor AI and sensor-to-cloud solutions. Decision tree algorithms can be tested on a ready-to-use evaluation board connected to the gateway and cloud.
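As a generic illustration of the kind of decision-tree inference an in-sensor MLC performs (not ST’s actual toolchain or generated code), a tiny fixed-threshold tree over window features might look like this:

```python
# Minimal sketch of in-sensor-style decision-tree inference: compute
# window features from raw accelerometer magnitudes, then classify with
# fixed thresholds. Features and thresholds are invented for
# illustration; AIoT Craft generates the real tree and its sensor
# configuration automatically.
import statistics

def window_features(samples):
    """Mean and variance over one window of accelerometer magnitudes (g)."""
    return statistics.mean(samples), statistics.pvariance(samples)

def classify(samples):
    """Tiny two-node decision tree: stationary vs. walking vs. running."""
    mean, var = window_features(samples)
    if var < 0.05:          # almost no motion energy
        return "stationary"
    return "walking" if mean < 1.5 else "running"

print(classify([1.0, 1.01, 0.99, 1.0]))   # stationary
print(classify([1.0, 1.8, 0.6, 1.4]))     # walking
print(classify([1.6, 2.4, 1.2, 2.0]))     # running
```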
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
Cortex-M85 MCUs empower cost-sensitive designs
Renesas has added new devices to its RA8 series of MCUs, combining the same Arm Cortex-M85 core with a streamlined feature set to reduce costs. The RA8E1 and RA8E2 MCU groups are well suited for high-volume applications, including industrial and home automation, mid-end graphics, and consumer products. Both groups employ Arm’s Helium vector extension to boost ML and AI workloads, as well as Arm TrustZone for enhanced security.
The RA8E1 group’s Cortex-M85 core runs at 360 MHz. These microcontrollers provide 1 Mbyte of flash, 544 kbytes of SRAM, and 1 kbyte of standby SRAM. Peripherals include Ethernet, octal SPI, I2C, USB FS, CAN FD, 12-bit ADC, and 12-bit DAC. RA8E1 MCUs come in 100-pin and 144-pin LQFPs.
MCUs in the RA8E2 group boost clock speed to 480 MHz and increase SRAM to 672 kbytes. They also add a 16-bit external memory interface. RA8E2 MCUs are offered in BGA-224 packages.
The RA8E1 and RA8E2 MCUs are available now. Samples can be ordered on the Renesas website or through its distributor network.
GaN flyback switcher handles 1700 V
With a breakdown voltage of 1700 V, Power Integrations’ IMX2353F GaN switcher easily supports a nominal input voltage of 1000 VDC in a flyback configuration. It also achieves over 90% efficiency, while supplying up to 70 W from three independently regulated outputs.
The IMX2353F, part of the InnoMux-2 family of power supply ICs, is fabricated using the company’s PowiGaN technology. Its high voltage rating makes it possible for GaN devices to replace costly SiC transistors in applications like automotive chargers, solar inverters, three-phase meters, and other industrial power systems.
Like other InnoMux-2 devices, the IMX2353F provides both primary and secondary-side controllers, zero voltage switching without an active clamp, and FluxLink, a safety-rated feedback mechanism. Each of the switcher IC’s three regulated outputs is accurate to within 1%. By independently regulating and protecting each output, the IMX2353F eliminates multiple downstream conversion stages. The device has a switching frequency of 100 kHz and operates over a temperature range of -40°C to +150°C.
Prices for the IMX2353F start at $4.90 each in lots of 10,000 units. Samples and evaluation boards are available from Power Integrations and its authorized distributors.