EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
URL: https://www.edn.com
Updated: 2 hours 46 min ago

SiC MOSFETs afford higher power density

15 hours 1 min ago

Based on a wide-bandgap material, ON Semiconductor’s 650-V SiC MOSFETs offer improved switching, reliability, and thermals. By replacing existing silicon switching technologies with these silicon carbide devices, designers achieve increased efficiency at the system level, enhanced power density, reduced EMI, and decreased system size and weight.

According to the manufacturer, this generation of SiC MOSFETs employs a novel active cell design combined with advanced thin-wafer technology, enabling best-in-class figure of merit for a 650-V breakdown voltage. The NVBG015N065SC1, NTBG015N065SC1, NVH4L015N065SC1, and NTH4L015N065SC1 N-channel MOSFETs have one of the lowest RDS(on) ratings (12 mΩ) in the market in D2PAK-7 and TO-247 packages.
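To put that 12-mΩ figure in context, conduction loss grows with the square of the RMS current. The sketch below is purely illustrative: the 20-A operating point and the 40-mΩ silicon comparison value are hypothetical, not taken from any datasheet.

```python
def conduction_loss_w(i_rms_a: float, rds_on_ohm: float) -> float:
    """Conduction loss in a MOSFET channel: P = I_rms^2 * RDS(on)."""
    return i_rms_a ** 2 * rds_on_ohm

# At a hypothetical 20 A RMS load current:
p_sic = conduction_loss_w(20, 0.012)  # 12 mOhm SiC device, ~4.8 W
p_si = conduction_loss_w(20, 0.040)   # hypothetical 40 mOhm Si device, ~16 W
print(p_sic, p_si)
```

Since conduction loss scales linearly with RDS(on), cutting the on-resistance translates directly into lower dissipation, which is where much of the claimed system-level efficiency and power-density gain comes from.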

An internal gate resistor eliminates the need to slow down devices artificially with external gate resistors. And high surge, avalanche, and short-circuit robustness all contribute to enhanced MOSFET ruggedness. The AEC-Q101-qualified devices are well-suited for such applications as electric vehicles, onboard chargers, solar inverters, server power supply units, and uninterruptible power supplies.

Budgetary pricing of the MOSFETs is between $13.66 and $13.99 each.

650-V SiC MOSFET product page

ON Semiconductor

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.



Reference design optimizes USB power delivery

Fri, 03/05/2021 - 17:21

Compliant with USB power delivery 3.0, the STEVAL-USBPD27S from STMicroelectronics is a 27-W AC/DC adapter reference design with PPS support. It accelerates the design of efficient power adapters with zero-power operation when no cable is connected. USB PPS, a supplemental programmable power supply specification to USB PD, helps save power by reducing device-charging times and heat dissipation.

photo of the STEVAL-USBPD27S reference design with a yellow background

The STEVAL-USBPD27S combines the STM32G071 microcontroller, which includes an on-chip USB Type-C PD controller, with the STCH03 PWM controller and TCPP01-M12 USB Type-C protection IC. Designers can quickly build fast-charging USB power adapters that meet stringent European CoC Version 5 Tier-2 and US DOE Level VI requirements for minimum four-point average efficiency in active mode and standby power below 40 mW.

As an MCU-based reference design, the STEVAL-USBPD27S offers the flexibility to implement additional customized application layers and to incorporate ongoing improvements as the USB PD standard evolves. Delivered as a turnkey evaluation module in a compact 59×35×21-mm outline, the STEVAL-USBPD27S achieves a power density of 10.2 W/in3.
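The quoted power density is easy to sanity-check against the published outline; the snippet below just converts the 59×35×21-mm volume to cubic inches and divides it into the 27-W rating:

```python
MM3_PER_IN3 = 25.4 ** 3  # 16,387.064 mm^3 per cubic inch

def power_density_w_per_in3(watts: float, l_mm: float, w_mm: float, h_mm: float) -> float:
    """Power density from rated output power and enclosure dimensions."""
    volume_in3 = (l_mm * w_mm * h_mm) / MM3_PER_IN3
    return watts / volume_in3

# 27 W in a 59 x 35 x 21 mm outline:
print(round(power_density_w_per_in3(27, 59, 35, 21), 1))  # -> 10.2
```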

The STEVAL-USBPD27S reference design costs $95.

STEVAL-USBPD27S product page





Functional safety: toolkit tie-ups automate software and verification analysis

Fri, 03/05/2021 - 17:01

Companies providing tools for software developers and quality assurance teams are joining hands with functional safety platforms to automate software analysis and verification, and one such announcement came at the Embedded World 2021 Digital show.

LDRA, which helps developers test safety- and security-critical applications much earlier in the development process, has announced its tool suite’s integration into the Synopsys DesignWare ARC MetaWare Development Toolkit for safety. As a result, automotive software developers implementing functional safety on Synopsys DesignWare ARC processors can perform system-level testing either on the host platform or on the actual target.

LDRA’s test and lifecycle solution for Synopsys functional safety processors and development kits provides designers with lifecycle traceability as well as static and dynamic analysis of source code. “Our toolkit’s integration into the Synopsys functional safety development toolkit helps automotive industry software developers from requirements to verification,” said Jim McElroy, VP of sales and marketing at LDRA Technology.

image of a development toolkit and a screenshot of its compiler

LDRA works with development toolkits and their compilers and IDEs to test safety- and security-critical applications. Source: LDRA

LDRA has signed a similar collaboration pact with PTC to streamline software engineering and quality analysis in PTC’s product lifecycle management framework, Windchill RV&S. “This integration enables software developers to create safety-critical applications, test applications, analyze code, and present that information up in the Windchill product,” McElroy said.

The LDRA tool suite and Windchill RV&S product integration is enabled by the LDRA TBmanager, which employs a rule-based approach for traceability and verification activities. The combined solution automates the software analysis and verification and provides bidirectional traceability from requirements to source code.

The markets that Windchill RV&S serves include automotive, medical, and some industrial, defense, and aeronautics applications. “LDRA toolkit facilitates automation, efficiency and productivity, so Windchill users can test their software, collect results, and put it back in the framework,” McElroy added.

These toolkit tie-ups are particularly important in today’s collaborative world, where information lives in the cloud and needs to be shared. It is also striking how globally collaborative software development has become.

Majeed Ahmad, Editor-in-Chief of EDN, has covered the electronics design industry for more than two decades.



AC/DC supplies fit tight budgets

Fri, 03/05/2021 - 01:34

The LCS series of power supplies from XP Power offers a variety of output voltages at power levels of 35, 50, 75, and 100 W. These economical convection-cooled units meet Class B conducted and radiated emission limits, permitting low-cost system integration.

XP Power PR photo of the LCS power supplies

The power supplies accept an input voltage range of 85 to 264 VAC to allow global use in a wide range of ITE and industrial applications. Such applications include auxiliary power sources, security installations, lighting control, and smart home control systems.

The 35-W supply provides a single regulated output of 5, 12, 15, or 24V. The 50-W, 75-W, and 100-W supplies furnish a single regulated output of 5, 12, 15, 24, 36, or 48V. To accommodate nonstandard system voltages, the output voltage of each supply is adjustable to within ±10%. Operating over a temperature range of -30°C to +70°C, the power supplies deliver full rated power to +50°C and up to 60% power at +70°C.
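The derating above can be modeled with a straight-line interpolation between the two published points; note that the linear shape between +50°C and +70°C is an assumption on my part, not something XP Power specifies here.

```python
def available_power_w(p_rated_w: float, t_ambient_c: float) -> float:
    """Assumed linear derating: 100% of rated power up to +50 C,
    falling to 60% at +70 C (the two points given in the article)."""
    if t_ambient_c <= 50:
        return p_rated_w
    if t_ambient_c >= 70:
        return 0.6 * p_rated_w
    # Linear interpolation between (+50 C, 100%) and (+70 C, 60%)
    fraction = 1.0 - 0.4 * (t_ambient_c - 50) / 20.0
    return p_rated_w * fraction

print(available_power_w(100, 60))  # 100-W unit at +60 C -> 80.0
```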

Standard features include a power-on LED, low standby power consumption of 0.3 W, output short-circuit protection, and both overcurrent and overtemperature protection. Input-to-output isolation is 4000 VAC, while input-to-ground isolation is 2000 VAC and output-to-ground isolation is 1250 VAC. Earth leakage current is a maximum of 0.75 mA. All of the LCS power supplies have an aluminum chassis with a vented galvanized steel cover and integrated connector terminals. Conformal coating is optional.

Prices for the LCS series of AC/DC power supplies start at $8.28 each in lots of 500 units.

LCS35 series product page 

LCS50 series product page 

LCS75 product page 

LCS100 product page 

XP Power 




Teardown: What caused these CFL bulbs to fail?

Thu, 03/04/2021 - 17:28

I recently wondered, after replacing my second one, how the internals of compact fluorescent (CFL) bulbs compare and contrast with those of an LED light bulb. I anticipated I’d find a plethora of passive analog and power components (along with a comparative dearth of digital components) versus their LED counterparts, but there’s only one way to find out, right?

My various LED bulb teardowns over the past several years have consistently been among my most popular. We’ve covered standard, Zigbee-controlled, Wi-Fi augmented, and Bluetooth-enhanced versions, so we have plenty of information for comparison.

Today’s victims are both 13W (60W incandescent-equivalent) T2 form factor CFL bulbs, with claimed 825 lumen output, “generic” (i.e., not brand-name), and in equally-generic packaging:

photo of CFL bulb package

Here are some overview shots for the first one:

photo of a CFL bulb

photo of the other side of a CFL bulb

And here’s the second of the two CFL bulbs, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

photo of the second CFL bulb laying on a table

photo of the other side of the second CFL bulb laying on a table with a penny for scale

Here’s a series of closeups of the product markings:

photo of the markings on a CFL bulb base

photo of the markings on a CFL bulb base

photo of the markings on a CFL bulb base

photo of the markings on a CFL bulb base

And of the passive airflow vents at both ends:

photo of a CFL bulb's bottom vents

photo of a CFL bulb's top vents

Time to dive in. Although the two halves of the base are presumably pressed together, judging from the seam, their union is likely augmented by adhesive. The glass tube attached on the end makes twisting them apart even more complicated from a potential-injury perspective; I therefore initiated the process with a shallow hacksaw cut:

photo of a CFL bulb in a vice with a hacksaw for cutting it open

photo of the saw cut made into the CFL bulb base

Subsequently inserting (and twisting) a wide flat-head screwdriver completed the deed:

photo of the CFL bulb cracked open

photo of the CFL bulb's components exposed

photo of the CFL bulb cracked open showing the PCB

In that last image you can see the two wires that connect the PCB to the electrical contacts in the base. They seemed to be sturdily soldered at both ends, so I just snipped ‘em, leaving the topside of the PCB (i.e., the “ballast”) fully exposed to view:

photo of the empty CFL base

photo of the exposed CFL bulb PCB

Here are three straight-on shots of various orientations:

photo of the top of the CFL bulb PCB

photo of the top of the CFL bulb PCB

photo of the top of the CFL bulb PCB

Obvious constituent components (even to my binary IC-biased eyes) include the toroidal inductor and transformer, along with the discrete transistors and several different-sized capacitors and other inductors (what more/else of note do you notice, my more analog- and power-attuned readers?). Note, too, the two wire pairs, presumably headed to both ends of the coiled fluorescent tube. Let’s find out:

photo of the wiring connecting the CFL bulb and PCB

photo of the wiring cut between the CFL bulb and PCB

One of the four wires was pretty short, so I left it connected:

photo of the PCB pulled away from the CFL bulb

The PCB backside is pretty unremarkable, unless solder points and a mix of standard and bidirectional diodes are your thing:

photo of the bottom of the CFL bulb's PCB

Now let’s look at the second CFL bulb:

photo of the second CFL bulb cracked open

Here’s another series of shots of the ballast:

photo of the top of the second CFL PCB

photo of the top of the second CFL PCB

photo of the top of the second CFL PCB

photo of the top of the second CFL PCB

This time I got all four wires headed to the fluorescent tube snipped off (although I snipped the top off one tube end in the process):

a photo of the bottom of the CFL tubes

That led to a clearer straight-on shot of the PCB backside:

photo of the bottom of the second CFL's PCB

So, circling back to this writeup’s title, what caused these CFL bulbs to fail? The potential-cause list is long, and these bulbs have been sitting on my teardown pile for a while now, so my memory’s a bit fuzzy. But I don’t recall a pop, puff of smoke, or other evidence of ballast-component failure, nor do I see any bulging capacitors, singed circuitry, a discolored base, or the like. Instead, take a look back at the overview photos (specifically the blackened segments of the fluorescent tubes near the base), and you’ll likely conclude (as I have) that the root cause in both cases is more traditional in nature: degradation of the electrode filaments.

With that, I’ll turn the fluorescent bulb’s illumination in your direction for your thoughts in the comments!

Brian Dipert is Editor-in-Chief of the Embedded Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.



5G modem-RF system achieves 10-Gbps speed

Wed, 03/03/2021 - 17:05

The Snapdragon X65 modem-to-antenna system from Qualcomm is a 10-Gbps 5G and 3GPP Release 16 platform with an upgradeable architecture for rapid rollout. The QTM545 5G mmWave antenna module pairs with the Snapdragon X65, bringing higher transmit power compared to previous generations, while maintaining the same tiny footprint. An extended millimeter-wave (mmWave) range results in higher connectivity reliability and throughput speeds.

annotated illustration for the Snapdragon X65 5G modem-RF system showing chips on a board

Along with its 10-Gbps peak download speed, the Snapdragon X65 allows for enhancements, expandability, and customization across 5G segments. It enables rollout of new Release 16 features across smartphones, mobile broadband, fixed wireless access, industrial IoT, and 5G private networks via software upgrades. With its 4-nm baseband chip and a number of power-saving technologies, Snapdragon X65 achieves high power efficiency and all-day battery life.

The QTM545 5G antenna module supports all global mmWave frequencies, including the new n259 (41 GHz) band. It is designed to work with Snapdragon X65’s power-saving techniques and Smart Transmit 2.0 technology. Smart Transmit 2.0 takes advantage of modem-to-antenna system awareness to increase upload data speeds and enhance coverage for both mmWave and sub-6-GHz bands, while continuing to meet RF emissions requirements.

The Snapdragon X65 modem-RF system is currently sampling to OEMs and targeting commercial device launches late in 2021.

Snapdragon X65 product page

QTM545 antenna product page

Qualcomm Technologies




FFRZVS circuits enable sleek USB power adapters

Wed, 03/03/2021 - 01:05

USB has the potential to address the demand for simplifying the design of miniature power adapters by combining interface signals and power in a single cable. Now, in its fourth generation since its introduction in 1996, the Universal Serial Bus (USB) has standardized computer connectivity, replacing interfaces such as serial ports and parallel ports, and has become the cable of choice for charging a wide range of portable devices.

The gigabit speeds specified in USB 3.0 make this bus a strong contender to replace all types of cables associated with PCs and laptops, including displays, external disk drives, printers, and scanners. Most importantly, with the USB power delivery (USB PD) specification, USB also has the potential to eliminate the power cable.

Developed in 2012 to create an interoperable charging standard for all USB devices, USB PD is now on its third revision. It has evolved to charging applications capable of powering hard drives, printers, and similar devices with power levels up to 100 W and voltages between 5 and 20V. The USB cables may soon be the only thing needed to power our laptops, as well as to connect a wide range of peripherals to them. By breaking the previous 7.5 W power limit at 5V, USB PD also creates the possibility of even faster charging of smartphone batteries.

Power supply miniaturization

The design of miniaturized power supplies requires engineers to find a balance between style and efficiency, while ensuring compliance with a multitude of electrical standards and safety requirements. Switch mode power supplies (SMPS) have become common solutions for devices like laptops as their high efficiency significantly reduces power consumption and consequent heat dissipation, enabling them to be packaged into smaller enclosures.

As SMPS design has matured, the quasi-resonant switching converter (QRC) operating in discontinuous conduction mode (DCM) has become the topology of choice for high-density AC-DC designs, as it eases development challenges and offers a more stable loop control than alternatives. A quasi-resonant switching converter, also known as variable frequency or valley switching flyback, makes use of parasitic resonance characteristics to control the turn-on voltage of the switching MOSFET, thereby reducing switching losses (Figure 1).

circuit diagram for a quasi-resonant switching converter and a graph of resonant oscillations

Figure 1 The basic operation of a quasi-resonant switching converter shows how it uses parasitic resonance characteristics to control the turn-on voltage of the switching MOSFET. Source: Infineon

Figure 1B shows the resonant oscillations in the Vds waveform, caused by the parasitic leakage inductance (Lleak) and drain capacitance (CD). The resonance produces “valley” points in Vds, and in a quasi-resonant (QR) or valley switching flyback, the controller is configured to turn on the MOSFET at the minimum of a valley point. The controller can be programmed to turn on at different valley points; when it always turns on at the first valley, the converter is known as a free-running QR flyback. In this mode, the resonant, and hence switching, frequency varies with the load, with minimum frequency at higher loads.
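As a back-of-envelope sketch of the valley timing (the component values below are illustrative, not from any particular design): once the secondary current reaches zero, Vds rings at the resonant frequency set by the magnetizing inductance and the effective drain capacitance, and the first valley arrives half a resonant period later.

```python
import math

def first_valley_delay_s(l_magnetizing_h: float, c_drain_f: float) -> float:
    """Half the resonant period: t = pi * sqrt(L * C)."""
    return math.pi * math.sqrt(l_magnetizing_h * c_drain_f)

# Illustrative values: 500 uH magnetizing inductance, 100 pF drain capacitance
delay = first_valley_delay_s(500e-6, 100e-12)
print(f"{delay * 1e9:.0f} ns")  # ~702 ns after the secondary current hits zero
```

A QR controller waits roughly this long after detecting zero current (typically via the auxiliary winding) before turning the MOSFET back on.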

Despite their advantages, QRC designs require further optimization to achieve the densities demanded by USB-level miniaturization. Although the MOSFET operates in virtually zero-voltage switching (ZVS) mode at low line, this is not the case at high line, leading to significant switching loss. The basic QRC design can be modified to help reduce these losses; a slow reverse-recovery diode can push some of the dissipated energy back into the bulk capacitor or output (Figure 1A). It is important to note that while this approach reduces losses at high-line input, it leads to higher losses at low line.

Design considerations such as the use of MOSFETs with higher output capacitance (COSS) as well as low Rds(ON) devices can also help limit conduction and leakage losses. However, the relationship between load and switching frequency poses a further difficulty for QRC topologies: suboptimal transformer utilization at peak power levels. This phenomenon is responsible for common-mode noise interference issues in touchscreen applications.

As pressure on power adapter size increases, manufacturers have begun to simplify production by incorporating windings into the PCB. This requires switching frequencies higher than 100 kHz to minimize copper losses. In these circumstances, forced-frequency resonant zero-voltage switching (FFRZVS) designs offer an optimal solution.

In these designs, switching is implemented at the zero-voltage point in the primary transformer. This reduces the turn-on losses of the power switch, cuts heat dissipation, and increases efficiency. The high-frequency operation of FFRZVS circuits allows the size of the magnetic components to be reduced, enabling high-density and compact power supplies.

FFRZVS principles of operation

The significant reductions in turn-on losses and higher efficiencies of FFRZVS are achieved by making small changes to the basic QR design. These changes are illustrated in Figure 2, which shows a FFRZVS reference design based on the Infineon XDPS21071 FFRZVS DCM controller.

schematic for the FFRZVS reference design

Figure 2 The FFRZVS reference design is built around a flyback controller IC. Source: Infineon

An additional zero-voltage winding is added to the primary side of the circuit, alongside the primary winding and the auxiliary winding used for zero-crossing detection. This zero-voltage winding, ZWVS, together with a capacitor, the power switch QZVS, a low-side gate driver, and RG1, enables a self-controlled ZVS cycle.

5 graphs show the operational sequence of the FFRZVS design

Figure 3 The diagram shows the operational sequence of the FFRZVS design. Source: Infineon

When the primary MOSFET, QM, turns off at t0, there is a short blanking period before the synchronous rectifier, QSR, is turned on, causing current to flow in the secondary winding, WS. When this magnetizing current falls to zero, QSR is turned off, at t1, and the resonant circuit on the primary-side winding causes an oscillation around the voltage, Vbulk. At this point, t2, the ZVS MOSFET is turned on, bringing the additional winding, ZVS, into play. Switching on ZVS at the resonant peak of the primary MOSFET, where the magnetizing current is zero, results in a negative magnetizing current build-up.

Once this current reaches its peak at t3, the ZVS MOSFET is switched off again, causing the magnetizing current to reverse direction. This discharges the equivalent capacitance, resulting in the drain voltage of the primary MOSFET reaching a minimum at t4, where it turns on. Turning on at this point, where its drain-source voltage is minimum, leads to significant reduction in the turn-on losses, approaching those of true ZVS.
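An energy-balance sketch shows how much negative magnetizing current the ZVS interval must build (the values below are illustrative, not taken from the design): for the drain to reach zero, the energy stored in the magnetizing inductance has to at least match the energy held in the equivalent switch-node capacitance.

```python
import math

def min_negative_current_a(v_drain_v: float, c_eq_f: float, l_m_h: float) -> float:
    """ZVS condition: 0.5*Lm*i^2 >= 0.5*Ceq*V^2  ->  i >= V*sqrt(Ceq/Lm)."""
    return v_drain_v * math.sqrt(c_eq_f / l_m_h)

# Illustrative: 400 V on the drain, 200 pF equivalent capacitance, 500 uH Lm
print(round(min_negative_current_a(400, 200e-12, 500e-6), 3))  # ~0.253 A
```

Because only a fraction of an amp of negative current is needed, the ZVS winding and its switch can be small relative to the main power path.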

The above description highlights the important role that the controller plays in the implementation of FFRZVS, dictating the precision of the timing sequence, based on its measurement of the output voltage.

FFRZVS for USB PD adapters

A reference design for a USB PD adapter based on this operating principle is shown in Figure 4. The fixed-frequency ZVS controller used is specifically designed to target high-density power adapter applications. It is capable of operating in a number of modes, including discontinuous conduction mode (DCM), ZVS, frequency-reduction mode (FRM), and burst mode (BM), ensuring efficiency across different line and load conditions.

three photos of a FFRZVS USB PD adapter reference design

Figure 4 The reference design for a USB PD adapter is based on the FFRZVS operating principle. Source: Infineon

The digital and analog peripherals support various signal sampling and conditioning techniques, as required for flyback operation. A built-in high-voltage start-up cell makes the IC power supply much more efficient and flexible during no-load operation and high-voltage circuitry provides voltage monitoring as well as brown-in and brown-out protection.

Mode switching and timing control are handled by a nano-DSP with configurable, non-volatile OTP memory. This programmable capability enables a simplified PCB layout and reduced bill-of-materials.

two graphs show reference design efficiency

Figure 5 The reference design achieves efficiencies greater than 90%. Source: Infineon

The reference design also achieves a power density of 15 W/in3 in a 55 (L) × 25 (W) × 25 (H) mm form factor (Figure 5). The adapter has been proven to withstand worst-case 560-V peaks on the primary side. In terms of heat generation, component temperatures do not exceed 100°C.

Emission tests conducted on this adapter have confirmed its compliance with EN 55022 (CISPR 22) class B test standards, thanks to the configurable frequency jittering, which helps improve the EMI signature at heavy loads and maximum switching frequencies. The adapter also fully meets standby power requirements; the 45 W design draws less than 30 mW of standby power at all AC input voltages. The design can also be easily augmented to support power output levels up to 65 W.

Wang Zan is senior staff engineer at Infineon Technologies.



Smart camera platform excels at triggered imaging

Tue, 03/02/2021 - 19:04

ON Semiconductor’s RSL10 Smart Shot camera brings AI-based image recognition to ultra-low-power IoT endpoints, such as surveillance cameras and smart homes. A companion smartphone application serves as the gateway to cloud-based, AI-enabled object recognition services.

PR image of the RSL10 Smart Shot camera board, a user holding a smartphone, and a web cam, with a greenhouse background

The platform combines the RSL10 SIP Bluetooth Low Energy SoC and the ARX3A0 CMOS image sensor, a compact module for developing cameras with 360-fps mono imaging. Teamed with motion and environmental sensors, as well as power and battery management, the platform enables a design that can be used to autonomously capture images and identify objects within them.

Using the RSL10 Smart Shot Camera, developers can create an endpoint that automatically sends an image to the cloud for analysis when triggered by various elements, including time or environmental changes such as motion, humidity, or temperature. Image data is transferred to the cloud through a gateway connected over Bluetooth Low Energy using the RSL10 SIP. Triggers are configured using the companion app, also over Bluetooth Low Energy.  Low-power components employed in the platform allow it to operate for extended periods of time from a single primary or secondary cell. 

The RSL10 Smart Shot camera is available now through ON Semiconductor sales representatives and authorized distributors.

RSL10 Smart Shot Camera product page

ON Semiconductor




Re-configurable Ka-band satellite communication without RF frequency conversion

Tue, 03/02/2021 - 17:51

Operators of telecommunication satellites want to be able to offer their customers flexible data and broadcast services anywhere in the world, anytime. Rapidly changing global events like breaking news, the continuous monitoring of aircraft, or the different needs of global time zones, place real-time, daily, or seasonal demands on the coverage, shape, size, and power of the signals transmitted by satellites, as well as the bandwidth and capacity of communication channels contained within these.

The current approach to satellite design requires the specification of the receiver and transmitter to be changed for almost every new application due to mission-specific and individual customer RF needs. This adds unnecessary non-recurring re-design and re-qualification cost and effort to key programs; operators complain that payloads are prohibitively expensive to develop and take too long to deliver. Today, the global satellite industry is handicapped by the inflexibility, complexity, power consumption, mass, size, and cost of traditional RF frequency conversion. For key suppliers of geostationary Earth orbit (GEO) telecommunication satellites offering up to 50 channels, analogue super-heterodyne convertors add over 40% to the total cost of the payload.

Satellite manufacturers competing for large global tenders want to offer operators flexible communication services, adaptable to real-time user needs and changing link requirements. While there have been significant advances in avionics, with payloads offering customers larger bandwidths and higher throughputs of data, the design of the transponder has in general remained the same for decades. OEMs are limited by current transceiver technology and are motivated to deliver operators increased revenues and efficiency by improving mission flexibility through in-orbit hardware re-configurability. Some primes now include additional hardware that is switched in and out when needed. This approach has resulted in payloads whose mass, power consumption, cost, and inefficiency have all increased with the level of flexibility required by spacecraft owners. Figure 1 illustrates the architecture of a multi-channel telecommunication transponder with on-board digital processing.

three block diagrams of telecommunication transponder architectures

Figure 1 The diagram shows the architecture of a traditional digital satellite payload.

Wideband space-grade ADCs were launched 10 years ago, offering the ability to directly digitise L and S-band carriers. For satellites communicating at these frequencies, bandpass under-sampling techniques allowed receivers to directly digitise the RF uplink, removing the need for traditional super-heterodyne down-convertors. This resulted in transponders that were physically smaller, lighter in mass, less power consuming, and lower cost.
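The arithmetic behind bandpass under-sampling is simple aliasing (the carrier and sample rate below are chosen purely for illustration): sampled below its own frequency, an L-band carrier folds down to a predictable offset in the first Nyquist zone, so no analogue down-converter is needed.

```python
def alias_frequency_hz(f_carrier_hz: float, f_sample_hz: float) -> float:
    """Frequency to which a carrier folds when under-sampled at f_sample."""
    f = f_carrier_hz % f_sample_hz
    return f if f <= f_sample_hz / 2 else f_sample_hz - f

# Illustrative: a 1575.42-MHz L-band carrier sampled at 400 Msps
print(alias_frequency_hz(1575.42e6, 400e6) / 1e6)  # -> 24.58 (MHz)
```

The only requirement is that the carrier's bandwidth fits within a single Nyquist zone and the ADC's analogue input bandwidth extends up to the carrier itself.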

Around the same time, the first wideband space-grade DAC was also launched, offering the ability to directly up-convert digital baseband to C-band. The use of return-to-zero analogue outputs reduced the sinc roll-off in the higher Nyquist zones, allowing access to the images at these frequencies. For UHF, L, S, and C-band satellites, the EV12DS130 MUX-DAC enabled transmitters without the need for traditional RF up-convertors, delivering transponders that were physically smaller, lighter in mass, less power consuming, and lower cost (Figure 2).

block diagram of direct converting a digital payload

Figure 2 Wideband space-grade ADCs and DACs enable direct-converting of a digital payload.
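The benefit of the return-to-zero output can be sketched numerically (the sample rate and image frequency below are illustrative): a conventional NRZ DAC output follows sinc(f/fs) and nulls out at fs, while a 50%-duty RZ output follows 0.5·sinc(f/(2fs)), leaving usable amplitude for images near and above fs.

```python
import math

def nrz_gain(f_hz: float, fs_hz: float) -> float:
    """NRZ DAC amplitude envelope: |sinc(f/fs)|."""
    x = f_hz / fs_hz
    return 1.0 if x == 0 else abs(math.sin(math.pi * x) / (math.pi * x))

def rz_gain(f_hz: float, fs_hz: float) -> float:
    """50%-duty RZ DAC amplitude envelope: 0.5*|sinc(f/(2*fs))|."""
    x = f_hz / (2 * fs_hz)
    return 0.5 if x == 0 else 0.5 * abs(math.sin(math.pi * x) / (math.pi * x))

fs = 3e9          # illustrative 3-Gsps DAC
f_image = 2.9e9   # image in the second Nyquist zone, close to fs
print(round(nrz_gain(f_image, fs), 3), round(rz_gain(f_image, fs), 3))  # 0.034 0.329
```

Near fs the RZ envelope delivers roughly ten times the amplitude of NRZ in this sketch, which is why RZ-mode DACs can synthesize carriers directly in higher Nyquist zones.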

Not only did the EV10AS180A and the EV12DS130 eliminate the need for traditional RF frequency conversion, they allowed satellite communication to exploit the advantages of software-defined radio (SDR) offering operators new levels of flexibility, e.g. the ability to change RF frequency plans in-orbit in response to real-time user needs. For transponder manufacturers, SDR allowed them to reduce non-recurring engineering (NRE) and recurring costs by selling a single, generic, multi-mission payload that could be re-used by communication, Earth-observation, navigation, and IoT/M2M applications.

Traditional satellite communication at L and S-band became congested and to avail of larger information bandwidths, operators moved to Ku, K, and Ka-band. To support these higher frequencies, the first wideband, space-grade ADCs and DACs were used to reduce the number of overall RF frequency conversion stages by directly digitising and re-constructing respectively IF carriers (Figure 3).

Figure 3 This is the current architecture of a K-band digital payload.

To support the move to Ka-band, Teledyne e2v started research in 2019 investigating the potential of a novel K-band (18 to 27 GHz) ADC, realised using a 24 GHz front-end, track and hold amplifier and a quad ADC interleaving the four digitiser cores. A prototype was developed and testing revealed that optimising INL calibration for higher frequencies, as opposed to baseband operation, as well as minimising the offset mismatch between individual ADCs, could maximise dynamic K-band performance (Figure 4).

Figure 4 The photo shows the proof-of-concept K-band ADC and the graph shows the measured performance. Source: Teledyne e2v

The ultimate goal of the research is to develop the first Ka-band ADC and DAC for satellite communication to eliminate traditional analogue frequency conversion. This will provide operators increased flexibility in-orbit and real-time RF agility. Further R&D in 2020 discovered there were limits to the performance that could be achieved from the first prototype and to increase signal-to-noise ratio (SNR), spurious free dynamic range (SFDR), and the frequency from K to Ka-band, some fundamental changes would be required.

For the last five decades, Moore’s Law has driven the semiconductor industry, increasing performance and reducing power consumption with each smaller geometry. SDR at L, S, and C-band using direct-converting ADCs and DACs was made possible by exploiting the faster speed and lower power-dissipation benefits of CMOS scaling. However, below 28 nm, Fmax drops from a peak of 360 GHz due to process parasitics, and the latest ultra deep-submicron nodes are simply too small to support the development of Ka-band mixed-signal converters. Furthermore, fabrication costs at these geometries are astronomical and not commercially viable for the space industry with its relatively low volumes. By contrast, Fmax for 90 nm SiGe heterojunction bipolar transistors (HBTs) is currently 600 GHz.

To increase dynamic performance in the higher Nyquist zones and to move from K to Ka-band, a different form factor to that used by the proof-of-concept ADC is required. System-in-package (SiP) allows for significant RF miniaturisation by allowing multiple disparate die to be placed on a single common substrate. Package parasitics at microwave frequencies, particularly for wire-bonded leaded devices, and the choice of materials limit performance at Ka-band. Traditional RF MMICs use LTCC substrates and the research showed that moving to faster organic substrates improves operation at higher frequencies.

In 2020, a second prototype was developed combining two CMOS, interleaved, quad ADCs and a SiGe 30 GHz track and hold amplifier. Flip-chip die with lower parasitics at higher frequencies were mounted onto a low-dielectric constant organic substrate and placed in a compact 33×19 mm SiP, as shown in Figure 5. Improved performance was measured at K-band.

Figure 5 The second prototype of the K-band ADC showed improved performance. Source: Teledyne e2v

Following the research carried out in 2019 and 2020, Teledyne e2v plans to release samples of the first Ka-band ADC for space applications in the second half of 2021. The SiP product will include a 40 GHz, front-end, track and hold amplifier to allow direct sampling of Ka-band carriers.

To complement the development of the Ka-band ADC, a 12-bit, 12 GSPS, 25 GHz DAC will also be offered to enable software-defined microwave (SDM) satellite communication. The EV12DD700 quadruples the sampling frequency, the re-constructed bandwidth, and the range of frequencies that a baseband digital input can be directly up-converted to compared to the original, space-grade SDR DAC, the EV12DS130. The new EV12DD700 contains a novel 2RF mode allowing access to the images in the higher Nyquist zones at K-band.

This dual device also offers ×4, ×8, and ×16 interpolation ratios to reduce the input data rate as well as programmable, digital anti-sinc filters to flatten the output response from both channels in the frequency domain. Real and complex I/Q data can be re-constructed and each DAC has independent adjustment of gain, interpolation factor, and digital up conversion (DUC) local-oscillator frequency. An integrated DDS can generate a ramp, a CW tone, or a chirp signal, and fast frequency hopping is also supported to secure and protect the downlink. Separate from the DACs’ return-to-zero up-converting modes, the use of DUC can translate a baseband input with reduced instantaneous bandwidth to the higher Nyquist zones using fewer serial links.

Figure 6 The photo shows the EV12DD700 DAC and the graph shows its direct up-converting modes. Source: Teledyne e2v

To support satellite communication, particularly beamforming applications, both the ADC and DAC contain features that synchronise gain and phase delay across multiple channels to guarantee deterministic latency and processing. After power-up, a SYNC input pulse resets all the dividers within the clock paths of both devices to ensure the circuits restart deterministically. A SYNCO output connects to another device for multi-device locking.

The digital interfaces of the ADC and the DAC are realised using 12 Gbps high-speed serial links and the ESIstream protocol. This is based on 14b/16b encoding with each frame containing scrambled data to ensure timing transitions as well as two bits of overhead: one for disparity to control dc balancing and the other as a toggling synchronisation monitor. When combined with the above ADC/DAC SYNC and SYNCO signals, the links support multi-device synchronisation and deterministic latency. Free ESIstream IP is available for space-grade FPGAs!
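Given only the figures stated here, 12 Gbps lanes and 16-bit frames carrying 14 payload bits, the usable data rate per serial link works out as:

```python
# Each ESIstream frame transmits 16 bits: 14 scrambled payload bits plus
# one disparity bit (DC balance) and one toggling synchronisation bit.
line_rate_bps = 12e9
efficiency = 14 / 16                      # 87.5% of the line rate is payload
payload_bps = line_rate_bps * efficiency
print(payload_bps / 1e9)                  # 10.5 (Gbps of payload per lane)
```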


For the first time, the prospect of Ka-band ADCs and DACs offer the potential to extend SDR to SDM for satellite communication. This will allow operators to change RF frequency plans and transponder operation in-orbit, in response to real-time user needs and link requirements. Technology-demonstrator satellites will be able to offer telecommunication, Earth-observation, IoT, and navigation services, as well as de-risking new multi-mission concepts, by re-configuring the specification and functionality of a single payload.

RF agility and resilience will allow operators to maximise the return from their expensive spacecraft assets in response to changing communication and market needs. The ability to re-configure and re-use the same transponder hardware is highly disruptive, will reduce NRE and recurring costs, will prolong the mission lifetime of hardware, and will lower the overall price to access satellite communication. The use of Ka-band ADCs and DACs will deliver major SWaP benefits to RF payloads!

The ability to change a payload’s RF uplink/downlink carrier frequencies, instantaneous processed bandwidth, waveform and modulation types, and the fundamental service offered by re-configuring an FPGA in-orbit, represents a game-changing advance for satellite communication. ‘SoftSats’ will enable many new mission types and transponder architectures, and I’d like to understand how you would exploit this unique technology for future applications. For example, will you still locate your transceivers within the main payload? Would you consider positioning Ka-band ADCs and DACs at the receive and transmitting antennae, directly processing the uplink and downlink carriers respectively, and then connect to on-board digital processors using high-speed electrical or optical links as illustrated in Figure 7?

Figure 7 This diagram shows a distributed satellite receiver architecture. Source: Teledyne e2v

First samples of the Ka-band ADC and DAC will become available this year with procurement and qualification options, as well as radiation-hardness data, to be released shortly after.

To offer the space industry further integration and on-board processing benefits, SiPs will also be offered combining microwave ADCs and DACs with qualified FPGAs in a compact form factor (Figure 8). The first product will baseline Xilinx’s XQRKU060 device as illustrated below, with additional space-grade FPGAs planned as part of the overall roadmap.

Figure 8 The planned product concept combines RF ADCs and DACs with Xilinx’s XQRKU060 FPGA.

Until next month, the first person to tell me the difference between the DAC’s RF and 2RF modes will win a Courses for Rocket Scientists World Tour t-shirt. Congratulations to Lorenzo from Italy, the first to answer the riddle from my previous post.

Dr. Rajan Bedi is the CEO and founder of Spacechips, which designs and builds a range of advanced, L to Ku-band, ultra-high-throughput on-board processors and transponders for telecommunication, Earth-observation, navigation, Internet, and M2M/IoT satellites. Spacechips’ Design Consultancy Services develop bespoke satellite and spacecraft sub-systems, as well as advising customers how to use and select the right components, and how to design, test, assemble, and manufacture space electronics. 


The post Re-configurable Ka-band satellite communication without RF frequency conversion appeared first on EDN.

Understanding FFT vertical scaling

Mon, 03/01/2021 - 17:39

The fast Fourier Transform (FFT), added to an oscilloscope or digitizer, permits measuring the frequency domain spectrum of the acquired signals. This provides a different and usually helpful perspective; signals can be viewed as plots of amplitude or phase versus frequency (Figure 1).

Figure 1 The amplitude FFT of a 100-MHz sine wave is a single spectral line located at 100 MHz. The amplitude of the impulse is 150 mV, matching the peak amplitude of the input sine wave.

There are a number of factors that affect the FFT vertical readouts, including the choice of output type, FFT processing issues, signal duration, and non-FFT instrument characteristics. This article will focus on these issues.

FFT vertical output formats

The FFT calculation is based on the discrete Fourier transform (DFT) as described by the equation:

X(k) = Σ (n = 0 to N−1) x(n) · e^(−j2πkn/N)

X(k) = frequency domain points
x(n) = time domain samples
n = index of time samples
k = index of frequency points
N = number of input samples in the record

The physical interpretation of the Fourier transform is that the input samples, x(n), are tested by multiplication with a series of sinusoids represented by the complex exponential, e^(−j2πkn/N), in the equation. At each test frequency the product is averaged. The result is nonzero only if the input has energy at the test frequency.
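The DFT sum can be evaluated directly. This pure-Python sketch (illustrative only; real instruments use the much faster FFT algorithm) shows that a tone placed exactly on a test frequency produces energy only in that bin and its conjugate image:

```python
import cmath, math

def dft(x):
    """Direct evaluation of X(k) = sum over n of x(n)*exp(-j*2*pi*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# A cosine with exactly 3 cycles in a 16-point record: energy appears
# only at bins k = 3 and k = N - 3 (the conjugate image), each |X| = N/2.
N = 16
x = [math.cos(2 * math.pi * 3 * n / N) for n in range(N)]
X = dft(x)
print([round(abs(v), 6) for v in X[:5]])  # [0.0, 0.0, 0.0, 8.0, 0.0]
```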

This is a mathematically complex equation, and each output point has two elements. The computation normally computes the real and imaginary parts of each output point. The outputs can be displayed in a number of formats based on the calculated real and imaginary FFT output components.

The FFT native output data types, real (R) and imaginary (I), are both expressed in units of volts, all other formats are derived from them:

Linear magnitude (volts) = (R² + I²)^½

Magnitude squared (volts²) = R² + I²

Phase (radians) = tan⁻¹(I/R)

Power spectrum (dBm) = 10·log₁₀[(R² + I²)/1 mW]

Power spectral density (dBm/Hz) = 10·log₁₀[(R² + I²)/(1 mW · Δf · ENBW)]

R = real part of the FFT
I = imaginary part of the FFT
Δf = resolution bandwidth
ENBW = effective noise bandwidth, weighting dependent

Different oscilloscope suppliers may offer various combinations of these FFT output formats and may scale them differently. In this example, based on Teledyne LeCroy’s Maui Studio application, which simulates a number of their oscilloscopes, the FFT is available in these five formats along with the native real and imaginary format.

The linear magnitude format, which is displayed in Figure 1, is calculated as a root mean square value, but is scaled to read spectral amplitude as peak volts so that the peak amplitude of the spectral line matches the peak amplitude of the input sine wave.

The power spectrum format scale is logarithmic, and the values are in units of decibels relative to a milliwatt (dBm). The power spectral density normalizes the power spectrum value to the effective resolution bandwidth so that its value does not change with changes in analysis bandwidth. It is expressed in units of dBm/Hz.
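A sketch of how the display formats are derived from the native real/imaginary outputs, mirroring the formulas above (so R² + I², in volts², is referenced directly to 1 mW; the Δf and ENBW defaults here are arbitrary placeholders):

```python
import math

def fft_formats(R, I, delta_f=200e3, enbw=1.0, ref_w=1e-3):
    """Derive the display formats from the FFT's real/imag parts (volts)."""
    mag2 = R * R + I * I
    return {
        "linear_magnitude_V": math.sqrt(mag2),
        "magnitude_squared_V2": mag2,
        "phase_rad": math.atan2(I, R),
        "power_spectrum_dBm": 10 * math.log10(mag2 / ref_w),
        "psd_dBm_per_Hz": 10 * math.log10(mag2 / (ref_w * delta_f * enbw)),
    }

out = fft_formats(R=0.09, I=0.12)           # a 150 mV spectral line
print(round(out["linear_magnitude_V"], 3))  # 0.15
```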

FFT processing factors affecting the FFT vertical output

The frequency spectrum produced by the FFT is discrete; it has valid amplitude data only at N uniformly spaced frequency values, indexed by k. The discrete nature of the FFT output can cause some confusion in interpreting the spectral amplitude. You might think of the FFT output as being the result of passing the input through a bank of bandpass filters with center frequencies offset by a fixed frequency increment, which is sometimes described as the FFT bin width or resolution bandwidth. If the input signal frequency falls in the center of one of these bandpass filters, the full output amplitude is shown at the output. If the input signal frequency falls between two adjacent bandpass filter center frequencies, the amplitude is lower (Figure 2).

The resolution bandwidth of the FFT is the reciprocal of the time duration of the input signal. In our example, the input time signal has a duration of 5 μs and the resolution bandwidth is 200 kHz. Setting the input frequency to exactly 50 MHz centers the signal in the resolution bandwidth and the amplitude of the spectral peak is 150 mV.

Figure 2 An expanded view of the FFT output shows the amplitude response as the input frequency is changed in increments of one half the resolution bandwidth.

Changing the input frequency to 50.1 MHz places the input signal between the two filters at 50.0 MHz and 50.2 MHz. Energy is split between the two filters and the peak amplitude falls to 95.6 mV, a loss of 3.9 dB. Stepping the frequency by increments of 100 kHz, the FFT output amplitude is seen to rise and fall. This is called the “picket fence” effect or “scallop” loss and it occurs in all FFT calculations.
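The picket-fence effect is easy to reproduce. This sketch (arbitrary 500-point record at 1 MSPS, rectangular weighting) compares a tone centred in a bin with one placed midway between bins; the peak drops by roughly the 3.9 dB quoted above, with a small deviation caused by leakage from the negative-frequency image:

```python
import cmath, math

def peak_magnitude(freq_hz, fs_hz, n):
    """Largest bin magnitude (peak-volt scaling) of a 1 V-peak sine, no window."""
    x = [math.sin(2 * math.pi * freq_hz * k / fs_hz) for k in range(n)]
    return max(abs(sum(x[k] * cmath.exp(-2j * cmath.pi * b * k / n)
                       for k in range(n))) * 2 / n
               for b in range(1, n // 2))

fs, n = 1.0e6, 500                        # 2 kHz resolution bandwidth
on_bin = peak_magnitude(50e3, fs, n)      # tone centred in a bin
off_bin = peak_magnitude(51e3, fs, n)     # tone midway between bins
print(round(20 * math.log10(off_bin / on_bin), 2))  # close to -3.9 dB
```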

Another issue that occurs when the input frequency varies is more easily seen looking at the FFT baselines in Figure 3. In addition to the lower peak amplitude, moving the input frequency between cells spreads and raises the spectrum baseline. When the frequency is at 50.0 MHz, the start and stop points of the input waveform, shown in the upper left grid in yellow, are at the same level, nominally zero volts. When the input frequency is 50.1 MHz, shown in the lower left grid in red, the start and stop points are at different levels.

The FFT calculation is a circular one with the last point looped back to the first point, so a change in the amplitude values looks like a discontinuity. This is a form of angle modulation which spreads the spectrum due to modulation sidebands and results in the baseline of the spectrum being raised in frequency cells adjacent to that of the excited cell; this is called spectral leakage. Any signals in the adjacent cells combine with the leakage component, changing the amplitude in that cell. This causes the greatest error when the signals in adjacent cells have small amplitudes.

Figure 3 If the input frequency is not cell centered, the first and last point of the time record have different amplitudes and energy spreads or leaks into adjacent cells, altering the amplitude in those cells.

Both of these effects are countered by amplitude modulating the signal input so that the end points are forced to zero amplitude. This process is called weighting and the modulating waveshape is called a weighting window. The shape of the window function determines the spectral response, including the shape of the spectral line and the amplitude of any sidebands. The characteristics of commonly-used weighting functions are shown in Table 1.

Table 1 The characteristics of common FFT weighting (window) functions

Window type        | Highest sidelobe (dB) | Scallop loss (dB) | Effective noise bandwidth (cells) | Coherent gain (dB)
-------------------|-----------------------|-------------------|-----------------------------------|-------------------
Rectangular (none) | -13                   | 3.92              | 1.00                              | 0.0
Von Hann (Hanning) | -32                   | 1.42              | 1.50                              | -6.02
Hamming            | -43                   | 1.78              | 1.37                              | -5.35
Flat top           | -44                   | 0.01              | 3.43                              | -11.05
Blackman Harris    | -67                   | 1.13              | 1.71                              | -7.53

The table summarizes the ability of each window to minimize sidelobes and scallop loss. Note that the effective noise bandwidth (ENBW) broadens the width of the FFT filter cells. The broader the cell, the less scallop loss. The 3.9 dB loss shown in Figure 2, where the rectangular weighting was used, can be reduced to as low as 0.01 dB by using the flat top weighting.

Note also that applying weighting decreases the sidelobe amplitudes due to spectral leakage. The coherent gain is the change in amplitude when the weighting function is applied. Most oscilloscope suppliers compensate for this attenuation so that changing the selected weighting function does not change the displayed signal amplitude.

Figure 4 shows the effect that the window functions produce on the spectral lines for the same input signal.

Figure 4 The selection of the weighting window affects the shape of the FFT cell frequency response. Narrower windows yield better frequency resolution, while broader windows reduce scallop loss and spectral leakage.

The spectral lines broaden as indicated by the ENBW. The broader responses decrease scallop loss, which makes sense since signals in adjacent cells will overlap at higher amplitudes for broader responses thereby minimizing scallop loss. The weighting functions also affect the amplitude of the sidelobes. With no weighting, the highest sidelobe is -13 dB below the spectral peak. The weighting functions reduce this with the Blackman-Harris weighting function reducing it to -67 dB.

The selection of a window function depends on the user’s needs. If you are measuring transients that are smaller than the acquisition window, then a window function should not be used as the amplitude of the spectrum peak will change based on the transient’s location in the acquisition window. In that case, the rectangular window (no weighting) is the best choice. The narrower window responses provide better frequency resolution, while the broader responses (Blackman Harris or flat top) produce more accurate amplitude measurements. If you need both then a good compromise is Von Hann or Hamming weighting. Most oscilloscopes use Von Hann or Hanning weighting as their default weighting window.
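The coherent-gain compensation and scallop-loss figures in Table 1 can be checked with a short sketch (the record length and tone frequencies are arbitrary; the windows are the periodic rectangular and Von Hann definitions):

```python
import cmath, math

def windowed_peak(freq_hz, fs_hz, n, window):
    """Peak bin magnitude of a 1 V-peak sine after windowing,
    compensated for the window's coherent gain."""
    w = [window(k, n) for k in range(n)]
    cg = sum(w) / n                               # coherent gain
    x = [math.sin(2 * math.pi * freq_hz * k / fs_hz) * w[k] / cg
         for k in range(n)]
    return max(abs(sum(x[k] * cmath.exp(-2j * cmath.pi * b * k / n)
                       for k in range(n))) * 2 / n
               for b in range(1, n // 2))

rect = lambda k, n: 1.0                           # no weighting
hann = lambda k, n: 0.5 - 0.5 * math.cos(2 * math.pi * k / n)

fs, n = 1.0e6, 256
results = {}
for name, win in (("rect", rect), ("hann", hann)):
    results[name] = windowed_peak(64.5 * fs / n, fs, n, win)  # mid-bin tone
    print(name, round(20 * math.log10(results[name]), 2))     # scallop loss, dB
```

With the coherent gain divided out, an on-bin tone reads full amplitude with either window, while the mid-bin tone shows roughly the 3.92 dB (rectangular) and 1.42 dB (Von Hann) scallop losses of Table 1.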

Frequency response and amplitude flatness

Another issue that affects the FFT vertical output levels is the frequency response and amplitude flatness of the oscilloscope or digitizer front end. Remember that the signal amplitude will be attenuated by 1 or 3 dB at the instrument’s bandwidth, depending on the manufacturer’s specification of the bandwidth. Additionally, most suppliers have a specification for the flatness of the frequency response. This is usually on the order of 0.25 to 1.0 dB. The flatness is generally repeatable for a specific setup and can be corrected. Any probes used may also affect the instrument’s frequency response flatness.

Effect of signal duration on FFT peak amplitude

If the input signal’s duration is less than the full input record length, it will also affect the amplitude of the FFT. Keeping in mind that the linear magnitude of the FFT is basically an rms calculation, it is expected that the amplitude will be proportional to the input signal’s duty cycle relative to the input record length. Figure 5 shows the FFT peak amplitude response to signals with six different durations.

Figure 5 The effect of signal duration on the FFT peak amplitude response is shown here.

The M1 trace in the upper left grid shows the 150-mV peak input signal filling the 500 ns input record length; this is the reference signal. Below that grid is the FFT of the signal showing a peak amplitude of 150 mV. Trace M3, the third grid down in the left-hand column, shows the signal duration reduced to 400 ns or 80% of the available record length. Below that trace is its FFT with a peak amplitude of 120 mV. The signal duration is 80% of the input record length and the peak FFT response is 80% of the full duration signal. The signal duration is decreased in steps of 60, 40, 20, and 10% of the input record length and the peak FFT response follows linearly.

The FFT vertical or amplitude response is affected by a number of factors which should be kept in mind when using an FFT. It is proportional to the input signal level. Variations in the input level caused by frequency response variations in the input signal chain result in variations in the FFT amplitude response. The frequency of the input signal can cause variations in the FFT amplitude produced by scallop loss and spectral leakage when the signal is not centered in an FFT frequency cell. This effect is frequency dependent and can be ameliorated by the use of weighting. Finally, the FFT amplitude response is affected by the signal duration relative to the input record length.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.


The post Understanding FFT vertical scaling appeared first on EDN.

Noise-suppression sheets facilitate EMI-controlled designs

Fri, 02/26/2021 - 22:28

Any electronic product must pass applicable electromagnetic compatibility (EMC) tests before it can be placed in its intended market. Accepting that prevention is better than cure, it’s usually ideal to design for compliance from an early stage of development. Various approaches can be taken, from applying known best practices to using an EMC simulator, where available, and doing EMC pre-tests in-house or with a specialist partner.

Despite the best-laid plans, however, mandatory testing at an approved test house can present surprises. A solution is needed quickly; any significant redesign at such a late stage of development is expensive and causes delays. Typical approaches include placing additional low-pass filters, often using ferrite beads, at known trouble spots to reduce conducted interference or to introduce shielding to block radiated emissions and protect sensitive components.

As an alternative, composite magnetic materials are available as flexible sheets that can be trimmed and formed to block EMI signals at a specific location. These noise-suppression sheet (NSS) materials are available in various permeability ratings, which allow designers to attenuate interference in a given frequency band by selecting an appropriate value and suitable thickness. The material can be cut to an appropriate size and applied as a shield using self-adhesive backing.

The NSS materials are relatively easy to use and bypass the custom design and fabrication challenges typically encountered to produce a metal shield that must be bonded or screwed into place during final assembly. Perhaps less well known is that NSS can also replace ferrite beads and cores on wires such as power lines: the sheet is simply wrapped around the cable and secured with a convenient, production-friendly heatshrink sleeve (Figure 1).

Figure 1 Here is how engineers can apply NSS to attenuate EMI from components and cables. Source: KEMET

However, placing NSS at trouble spots as a tactical response when interference rears its head is just one of the many ways these materials can be used. NSS can also be a strategic ally when designed into the product at an early stage. In addition to helping ensure EMC compliance, it can also be used to enhance various aspects of the system’s performance, such as energy efficiency and electrostatic discharges (ESD) protection.

So, examining the structure and properties of NSS can help designers understand its versatility and support for a wide range of applications.

NSS composition and properties

NSS is a composite magnetic material made by blending micron-sized magnetic material powders in a polymer base (Figure 2).

Figure 2 NSS is a composite magnetic material. Source: KEMET

The material has complex permeability (μ) comprising two components, μ′ and μ″. The value of μ′ determines how well the material can support magnetic flux, whereas μ″ expresses the noise-absorption effectiveness.

Mathematically, it’s expressed as follows:

μ = μ′ – jμ″

Here, μ′ and μ″ are analogous to inductive and resistive properties, respectively. As the signal frequency increases, μ′ reaches a threshold and begins to fall rapidly, while μ″ rises (Figure 3).

Figure 3 The NSS materials have complex permeability ratings. Source: KEMET

By carefully controlling these properties, KEMET has created the Flex Suppressor NSS family, which features characteristics for attenuating noise signals and sustaining magnetic flux in various frequency bands from 1 MHz to 40 GHz (Figure 4). They are used in applications from consumer electronics and automotive infotainment to super-high frequency (SHF) equipment such as 5G infrastructure.

Figure 4 The NSS materials can attenuate unwanted noise in various frequency ranges. Source: KEMET

NSS for noise absorption

Acknowledging that design for EMC should be considered properly at the beginning of any electronic design project, NSS can be considered as part of the solution from the outset. Moreover, in addition to preventing unwanted interactions with nearby equipment, it’s also important to prevent the system from interfering with itself.

Any system can contain numerous interference sources, such as reflection of signals from the inside of the casing or openings like the screen or speaker aperture, and noise radiated from ICs or cables. In a multi-board assembly, preventing crosstalk between substrates is also important. Placing filters in-circuit at multiple points, and introducing shielding to handle various noise signals can complicate the design and add to the bill of materials. Alternatively, applying one or several individual pieces of NSS can be faster and simpler. No board real-estate, grounding, or soldered components such as L-C filters are required.

Figure 5 shows NSS used to mitigate de-sensing of receiver circuitry in wireless devices, such as mobiles, IoT nodes, gateways, and remote controls, to ensure reliable communication and optimal range. In this way, effective use of NSS can lower RF transmitter power requirements and ease receiver design, delivering advantages such as low power consumption, long battery life, and small size.

Figure 5 The use of NSS mitigates de-sensing of an RF receiver. Source: KEMET

Moreover, NSS can be applied as shown in Figure 6 to protect circuitry against ESD that can cause system components such as controllers and line drivers to malfunction.

Figure 6 NSS can also be used for protection against ESD. Source: KEMET

Optimizing μ for flux shaping

NSS formulas such as the Flex Suppressor EFW series have been optimized to boost electromagnetic coupling between transmitters and receivers (Figure 7). Designers can thus enhance the performance of wireless power transfer (WPT) systems to ensure faster charging and increased energy efficiency resulting in a lower cost of ownership.

Figure 7 Careful placement of NSS can boost WPT efficiency. Source: KEMET

Flex Suppressor can also be used effectively in RFID systems to improve coupling of the reader’s electromagnetic energy to activate nearby tags. Figure 8 shows how placing the NSS directly behind the reader’s antenna marshals the radiated energy that would otherwise be lost to strengthen the field in front of the antenna.

Figure 8 The NSS material tuned for 13.56 MHz can optimize RFID reader performance. Source: KEMET

As an example of the effect produced, using NSS material optimized for the 13.56-MHz frequency standardized in the ISO/IEC 14443 RFID specification, the distance from which the reader can activate the tag can be increased by 85 mm, from 45 to 130 mm, nearly tripling the read range.

As demonstrated in the above design examples, NSS materials can be used effectively in many ways to realize device integration. Much more than simply an emergency add-on in the event of EMC failure, it can effectively support best practices in design for EMC and various signal integrity roles to improve system performance, particularly in power-conscious wireless devices.

By taking advantage of flux-shaping properties, designers can also utilize NSS to boost WPT efficiency and maximize RFID reader performance, ultimately delivering products to market that are compact, elegant, satisfying to use, and economical to own.

Patrik Kalbermatten is senior manager handling distribution promotion for magnetic, sensor, and actuator product management at KEMET Electronics Corp.


The post Noise-suppression sheets facilitate EMI-controlled designs appeared first on EDN.

The math behind the Smith chart

Fri, 02/26/2021 - 18:29

All kinds of parameters can be described using complex numbers of the form real + j * imaginary. The numerical values of real and imaginary can vary over very wide ranges. Trying to graph them directly can be difficult or just plain impractical, but we did do that once in this blog on transmission line velocity factor.

The problem lends itself to the good work of Phillip H. Smith (1905-1987) and T. Mizuhashi, who published a similar chart in 1937. The Smith chart is a graph-based method of simplifying complex math. Instead of plotting the real and imaginary numbers directly on x-y coordinates, a new parameter called “gamma” is derived and the real and imaginary parts of that new gamma are themselves plotted on x-y coordinates instead. Why is it called gamma? William Shakespeare’s line seems appropriate: “A rose by any other name would smell as sweet.”

The governing equations are as follows. The algebra isn’t hard so please trace it through.

Γ = (Z − Z0)/(Z + Z0), where Z = R + jX is the impedance being plotted and Z0 is the reference (characteristic) impedance. This is the algebra of the Smith chart.

We graphically plot the real and imaginary parts of gamma. If we hold a constant value of R and allow X to vary, we get one set of curves that look like circles tangent to each other at the far right. If we hold a constant value of X and allow R to vary, we get a second set of curves above and below the horizontal axis which seem to emanate from that point of tangency we just spoke of.

Varying R or varying X causes both the real and the imaginary parts of gamma to vary. It all looks like the following.

These two annotated views of the Smith chart show the results of varying R or varying X.

The outermost circle shown here for R = 0 is not an absolute limit. We can extend this plot to negative values of R as well, but then the outermost circle diameter can get really big.
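The constant-R curves really are circles, which can be verified numerically from the definition of gamma (the Z0 = 50 Ω reference here is an arbitrary choice): in the gamma plane, normalised resistance r maps onto a circle centred at (r/(r+1), 0) with radius 1/(r+1).

```python
def gamma(R, X, Z0=50.0):
    """Reflection coefficient for impedance Z = R + jX against reference Z0."""
    z = complex(R, X) / Z0        # normalised impedance
    return (z - 1) / (z + 1)

# For R = 50 ohms (r = 1) the contour is centred at 0.5 with radius 0.5;
# every value of X lands the same distance from that centre.
r = 1.0
centre, radius = complex(r / (r + 1), 0.0), 1 / (r + 1)
for X in (-200.0, -20.0, 0.0, 20.0, 200.0):
    print(round(abs(gamma(50.0, X) - centre), 6))  # 0.5 every time
```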

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

Related articles:


googletag.cmd.push(function() { googletag.display('div-gpt-ad-inread'); });
googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); });

The post The math behind the Smith chart appeared first on EDN.

Inventor of life-saving devices Garrett Morgan is born, March 4, 1877

Птн, 02/26/2021 - 02:38

photo of inventor Garrett MorganGarrett Morgan was an inventor who saw a problem and set out to solve it. He spent much of his life in Ohio, where he had a number of successful businesses and received patents that led to the gas masks and traffic signals that still keep people safe today.

Morgan was born in Paris, Kentucky in 1877 and grew up working on his family’s farm. His parents were former enslaved people, and his father was the enslaved son of Confederate Colonel John Hunt Morgan.

As a teenager, Morgan moved to Ohio and began working at the Roots and McBride Company, where he taught himself to repair sewing machines. In 1907, he opened a repair shop, followed by a garment shop that he started with his wife in 1909. He began designing machines and innovations, like a zigzag stitching device for manually-operating sewing machines. In 1913 he also incorporated the G.A. Morgan Hair Refining Company, which sold hair-straightening products that Morgan invented.

In 1914, Morgan received two patents for breathing devices intended for use by firefighters, engineers, chemists, and workers who might be exposed to noxious fumes. The design consisted of a protective canvas hood with air tubes that could supply fresh air and remove smoke or gases. Morgan wrote in his patent application that “the object of the invention is to provide a portable attachment which will enable a fireman to enter a house filled with thick suffocating gases and smoke and to breathe freely for some time therein and thereby enable him to perform his duties of saving life and valuables without danger to himself of suffocation.”

patent drawing for Morgan breathing device

patent drawing for the Morgan breathing device

The safety hood proved its worth during the Lake Erie crib disaster in 1916. During construction of water intake tunnels for Cleveland’s water system, a natural gas explosion filled a tunnel beneath Lake Erie with carbon monoxide. Morgan and other rescuers used his devices to enter the shaft and rescue miners.

photo of Garrett Morgan rescuing miners using his gas mask inventionSource: Western Reserve Historical Society

After more successful demonstrations, Morgan sold safety hoods to fire and police departments, mining companies, and the US Navy and the design was the basis for gas masks used by the US Army in WWI.

He was inspired to create another influential invention after witnessing a crash between a car and a horse-drawn carriage in Cleveland. At the time, the city used manually-operated traffic signals at the intersections of major streets, but they only had two signals, which made the transition between signals dangerous.

Morgan’s traffic signal patent, granted in 1923, outlined a crank-operated, three-position mechanical design mounted on a T-shaped pole. It added a “stop in all directions” signal not seen in previous devices, which gave drivers time to clear the intersection and allowed pedestrians to cross safely.

patent drawings for the Morgan traffic signal

The first G.A. Morgan Safety System was installed in Willoughby, Ohio, and after he sold the patent to General Electric for $40,000, three-position traffic signals were used across the country.

The success of his businesses allowed him to start a newspaper, the Cleveland Call, in 1920. According to History.com, it was “one of the most important Black newspapers in the nation.”

His health was impacted by early gas mask testing, and he developed glaucoma that left him nearly blind by the 1950s. Morgan died in 1963 in Cleveland.

He was recognized by the US government for his traffic signal invention, was inducted into the National Inventors Hall of Fame in 2005, and the US Department of Transportation started the Garrett A. Morgan Technology and Transportation Futures Program in 1997.

Related articles:

Also on this day in tech history
On March 4, 1986, Soviet space probe Vega 1 began returning images of Halley’s Comet.

For more moments in tech history, see this blog. EDN strives to be historically accurate with these postings. Should you see an error, please notify us.

googletag.cmd.push(function() { googletag.display('div-gpt-ad-inread'); });
googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); });

The post Inventor of life-saving devices Garrett Morgan is born, March 4, 1877 appeared first on EDN.

Windows on Arm: All of Apple’s challenges with none of its charm

Чтв, 02/25/2021 - 22:05

What I’m about to say may shock those of you who’ve read my stuff before and are therefore already well aware of my longstanding love affair with all things tech; I returned a Christmas present the other day (don’t worry, I’ve already determined how to otherwise spend the funds). I’ll give you a minute to pick yourselves up off the floor.

Here’s the background: Within my mid-November coverage of Apple’s first systems based on the new Arm-based M1 SoC, derived from but not identical to the A14, I wrote:

I’m sure I’ll have more to say about Apple’s current status and future outlook in upcoming blog posts, along with the similar situation that Microsoft is experiencing with its Arm-based Surface Pro X product line.

The Surface Pro X is the focus of this particular post. Here’s a selection of stock photos of it in its Matte Black color scheme (a Platinum option is now also offered), both standalone and accompanied by its optional keyboard and pen accessories:

PR photo of the Microsoft Surface Pro X using its stand

PR photo of the back of the Microsoft Surface Pro X using its stand

PR photo of the side of the Microsoft Surface Pro X using its stand

PR photo of the Microsoft Surface Pro X with keyboard accessory

PR photo of the Microsoft Surface Pro X with its pen accessory

PR photo of the side of the Microsoft Surface Pro X using its stand and keyboardSource: Microsoft

As I’ve written about before, referencing both the 4th and 5th (2017) editions, I’ve got a fair bit of recent experience with modern Microsoft Surface Pro models built using Intel x86 processors. And my Surface familiarity actually stretches back to early 2014, when I picked up both Arm-based (Surface RT) and x86-based (first-generation Surface Pro) devices for evaluation and (in the latter case, at least) ongoing use. The Arm-based Surface RT device was my teardown victim back in November, in fact.

Reviews of the first-generation Surface Pro X (introduced in October 2019) and second-generation tweak (unveiled one year later) praise its thin-and-light form factor, its thin-bezel sharp display, and its speedy performance on Arm-native code. But that’s pretty much where the kudos end. Pricing is quite high; 8 GByte system memory-equipped first-generation models begin at $999.99, with a $500 (!!!) adder for an additional 8 GBytes of RAM. And no, the equally-pricey keyboard and pen aren’t included, either. The Arm-native code suite is to this day essentially restricted to Microsoft’s own Chrome-based Edge browser and Office suite. And 32-bit x86 applications run via built-in O/S emulation (for which 16 GBytes of RAM will likely be helpful), albeit with requisite performance and power consumption (i.e., battery life) impacts.

Unfortunately, subsequent to the Windows 10 announcement in mid-2015, a notable percentage of x86 applications have migrated to 64-bit versions, for which 32-bit-only emulation won’t suffice. Still, Microsoft announced at the end of September that emulation of 64-bit applications would appear in no later than two months. And I found a claimed “like new” system, at a historically-reputable online retailer, for a notable discount from the brand-new price. So I asked my wife to get it for me for Christmas, and she generously obliged.

November came and went without a 64-bit emulation release, alas. It did show up two weeks later, but its application compatibility is reportedly underwhelming (I’m being charitable with my wording) and more generally, operating system iterations have even backstepped on prior 32-bit emulated app support. And did I mention that the online retailer neglected to mention that my “like new” system arrived absent an AC adapter? So we sent it back within the full-refund time period; to its credit, the retailer picked up the return-shipping tab, too.

The dearth of native Windows-on-Arm applications more than one year after the initial Surface Pro X platform release, particularly from third-party developers, is at least somewhat surprising to me, especially in contrast to Apple’s situation. Microsoft’s fundamental problem is two-fold:

  • Windows-on-Arm is an adjunct to the core Windows-on-x86 business, for Microsoft and third-party developers alike, and
  • Microsoft’s initial Windows RT effort pretty much fell flat on its face, putting a “taint” on subsequent Arm-based projects.

Compare this with Apple’s experience thus far. The company announced its move-to-Arm intentions in June, following up with first systems five months later. Apple’s own applications are already Arm-native, of course, but a raft of third-party developers (including Microsoft itself) have also already unveiled Arm-native software versions. And for lingering legacy x86-only code, Rosetta 2’s translation capabilities have proven to be even more robust (both compatibility- and performance-wise) than I’d prognosticated.

To that point, even after Microsoft’s 64-bit emulation software gets fixed, it and its Windows-on-Arm partners’ hardware will seemingly still be hampered. The various Surface Pro X models, for example, run on the SQ1 and speed-bumped SQ2 processors, which although claimed “custom” are, as far as I can tell, little more than relabeled Qualcomm 8cx SoCs (other Windows-on-Arm OEMs’ systems are also based on various Qualcomm ICs). The 8cx combines four high-performance Kryo 495 Gold (Cortex-A76) “big” cores running at up to 2.84 GHz, along with four low-power Kryo 495 Silver (Cortex-A55) “little” cores operating at up to 1.8 GHz.

This all sounds good until you learn that the Apple M1 SoC’s four high-performance cores clock at up to 3.2 GHz, with the four low-power cores at greater than 2 GHz. The internal cache sizes are also notably larger with the M1 versus the 8cx (and Intel x86, for that matter) counterpart. And there’s one further twist that hasn’t gotten much media notice (yet, at least). When running x86 code, the M1 switches to an optimized memory consistency model total store ordering mode. Reflective of the fundamental gap between the two processors, a “bare metal” virtualized version of Windows for Arm runs notably faster on Apple’s hardware than it does natively on Microsoft’s own.

Presumably in response, Microsoft is reportedly finally making use of the Arm architecture license it took more than a decade ago to develop more optimized Arm-based chips. That’ll take time to reach fruition, of course; Apple’s been rolling its own silicon since the A4 in 2010. And it’s not like Microsoft is particularly known (yet, at least) for its chip-design expertise.

That all said, I realize that I’m still doing a bit of an apples(pun intended)-to-oranges comparison here. Apple’s so-far announced M1-based systems, the revamped MacBook Air and MacBook Pro, are traditional notebook computer form factors, whereas the Surface Pro X is a touchscreen interface-augmented “hybrid” that can operate either as an iPad-reminiscent tablet or (by attaching the keyboard) as a laptop. The Surface Pro X, unlike either the MacBook Air or MacBook Pro, also has built-in LTE cellular data connectivity, but so too does my Surface Pro 5. At the end of the day, I wasn’t willing to wait around for a more robust native-plus-emulated future that may or may not end up coming to pass. So I sent the Surface Pro X back.

Thoughts on anything I’ve shared in this piece, readers? Sound off in the comments!

Brian Dipert is Editor-in-Chief of the Embedded Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related articles:


googletag.cmd.push(function() { googletag.display('div-gpt-ad-inread'); });
googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); });

The post Windows on Arm: All of Apple’s challenges with none of its charm appeared first on EDN.

Protecting circuit breakers in a boom wrecker truck

Чтв, 02/25/2021 - 21:52

illustration of an engineer investigating a fire caused by a circuit breaker that a technician has just put out with a fire extinguisherIn the 1980s, I was tasked with figuring out why a million-dollar boom wrecker truck had burned up. The origin of the fire had already been determined; it began in one of the tool boxes. I retrieved the specifications for the wiring harness and switch panel, which was built by a subcontractor.

A 5 A automatically-resetting thermal circuit breaker feeding #18 AWG wire shouldn’t be a problem. Next, I retrieved a switch panel from the stockroom and discovered problem #1: none of the circuit breakers had any markings on them. This switch panel had six circuit breakers rated from 5 to 50 A. How in the world does the subcontractor know which breaker is which? Suspecting incorrect parts, I made a load bank, got a rheostat and ammeter, and went to work.

Sure enough, problem #2 was that the breaker protecting the tool box wiring tripped only after holding 50 A for a few minutes. With both the problem and the obvious solution in hand, I call the supplier.

Now problem #3 rears its ugly head; the supplier knows this circuit breaker is rated at 50 A. All the circuit breakers are rated at 50 A. For more than a decade, the supplier has been building circuit breakers exactly the same way. Dozens of boom wreckers mounted on very expensive trucks, often totaling over a million dollars for the two, have #18 AWG tool box lighting protected with a 50-A circuit breaker. And not just any circuit breaker, but the one that automatically resets itself when it’s cooled down. Okay, now we have a big problem.

Do you have a memorable experience solving an engineering problem at work or in your spare time? Tell us your Tale.

Time for problem #4—the supplier flat out refuses to change the circuit breakers to the values on the drawing to which he is supposedly building these switch panels because “then they might trip.” Yes, because they are supposed to trip. That is literally the only reason they are there. Tripping is their sole purpose in life. However, nothing I say gets through, because this man has built these switch panels for us for over a decade “without a single problem.”

Temporarily stymied but not yet out of the game, I escalate this to my boss, the head of engineering. He and the purchasing agent get the supplier on the phone. No success. They escalate to the owner of the company, a highly-competent engineer himself. Nope, nothing doing. Finally, they throw up their hands and give in.

read more tales from the cubeI revise the design and write the service bulletin to add in-line fuses behind the four circuit breakers that weren’t designed to be 50 A. That’s right, we now had properly-sized in-line fuses to protect the wire right behind 50-A circuit breakers whose job was, apparently, to protect the supplier.

Sometimes inertia is stronger than common sense or good engineering practice.

Since leaving the wrecker industry, Mike Whitfield has spent 27 years working as an electrical designer and is part owner of an AEC engineering company doing electrical design for buildings.

Related articles:

googletag.cmd.push(function() { googletag.display('div-gpt-ad-inread'); });
googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); });

The post Protecting circuit breakers in a boom wrecker truck appeared first on EDN.

Test probe touts 2-kV differential range

Срд, 02/24/2021 - 20:29

With its 400-MHz bandwidth, the HVD3220 differential probe from Teledyne LeCroy permits in-circuit GaN and SiC system testing. The high-voltage probe carries a 1500-VDC CAT III rating, as well as a 2000-V (DC + peak AC) CAT I rating.

A member of the HVD3000 series of high-voltage differential probes, the HVD3220 enables wide-bandgap power conversion testing at DC bus voltages ranging from 500 to 1500V. It offers a guaranteed 2000-V peak differential voltage range, along with high common-mode rejection ratio (CMRR) across a broad frequency range with a CMRR of 65 dB at 1 MHz. High CMRR improves measurement capability in noisy, high common-mode environments commonly found in power electronics.

The HVD3220 probe boasts gain accuracy of up to 0.35%, while AutoZero capability ensures further measurement precision by allowing small offset drifts to be calibrated out of the measurement. High offset capability of 1500V is available when the probe is used with an HDO series oscilloscope. A ProBus interface provides power and communication to the probe, eliminating the need for a separate power supply or batteries. Attenuation is automatically selected based on oscilloscope gain range (V/div) setting.

Outfitted with a 2-meter cable, the HVD3220 high-voltage differential probe costs $5,250. Tip accessories are included.

HVD3220 product page

Teledyne LeCroy

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.

Related articles:

googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); });

The post Test probe touts 2-kV differential range appeared first on EDN.

Triaxial accelerometer offers wideband sensing

Срд, 02/24/2021 - 20:04

With a frequency response up to 15 kHz, the 830M1 triaxial accelerometer from TE Connectivity aids in machine health monitoring and predictive maintenance. Piezoelectric sensing crystals in the 830M1 ensure accurate performance, long-term stability, and minimal long-term drift.

Housed in a leadless chip carrier package, the 830M1 measures acceleration in three axes (X, Y, and Z), allowing a compact single-device installation versus three individual sensors. TE’s accelerometer comes in multiple configurations with dynamic ranges from ±25 to ±2000 g and sensitivity from 0.63 to 50 mV/g.

photo of the AspenCore guide to sensors in automotive book


A new book, AspenCore Guide to Sensors in Automotive: Making Cars See and Think Ahead, will help you make sense of the sensor labyrinth in modern vehicles. It’s available now at the EE Times bookstore.


The 830M1 triaxial accelerometer can be used for both low-speed and high-speed rotating machinery, sensing conditions ranging from imbalance to bearing defects and wear. Its fully hermetic LCC package enables operation in harsh industrial environments.

830M1 product page

TE Connectivity 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.

Related articles:

googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); });

The post Triaxial accelerometer offers wideband sensing appeared first on EDN.

Planning the design cycle for adding an antenna to a wireless device

Срд, 02/24/2021 - 19:16

The design cycle is a little different when the product is wireless and requires an antenna. Antennas change the design process, because it needs to be located with care, in the best position on the PCB, and it’s important to consider its relationship with some of the other components in the design.

Ideally, the designer should arrange the RF elements of the design before considering the other components. In this article, we look at the additional stages that an antenna adds to the design cycle, starting with the selection of the most suitable antenna.

Antenna selection

The most popular embedded antennas are the surface-mounted device (SMD) group. They are popular chiefly because of their efficient use of board space, but also because they can achieve excellent performance within a device. These antennas are tiny; they can be as small as 1-mm across and are reflowed directly onto the host PCB during the board assembly process. Generally, they are manufactured from high-grade dielectric laminate substrates.

The designer might also consider using a ready-made antenna module. These contain the antenna in a small package with other components, ready to drop into the design. The chief advantage of choosing an antenna module is the fact that it can be popped into the design, and the RF circuitry is provided ready-made.

Flexible printed circuit (FPC) antennas can be an interesting option where the component layout of the design is restricted in available board space, or where, for some reason, an SMD antenna will not easily fit. The FPC comprises a thin layer of copper cover tape with an integral cable and a UFL connector to join them to the circuit board. It’s thin enough to be slightly bent to a curved surface, and tucked into a small space within the device, maybe fixed inside the outer casing of the design. FPCs are a good choice where space is tight, and are popular in small handheld devices.

If the design for the device contains materials that might compromise performance, the designer might choose an external antenna (Figure 1). Embedded antennas tend to not achieve strong performance close to metal components, so if large metal features are incorporated into the design, a terminal or case-mounted antenna placed to the outside of the device may provide the best performance.

photos of three external antenna optionsFigure 1 External antenna options include (left to right) an SMD antenna, an FPC antenna, or terminal antennas. Source: Antenova

These antennas are manufactured with their own insulating material to isolate the RF signal, and if there are metal parts close by, they will still perform with minimal losses. Case-mounted antennas free up space for other components on the PCB, and as they are not as sensitive to the other parts in the design, they are easier to integrate.

Antenna placement

Of all the components in a typical wireless design, the antenna is probably the most sensitive to its position. Therefore, it is recommended that the placement of the antenna be decided right at the start.

The SMD antenna is soldered directly to the host PCB, and the position of the antenna has implications for its RF performance. The antenna will radiate in six directions along an axis, usually along the length of the antenna. This means that to perform well, it should have as many directions as possible free from reflective and absorptive obstructions. For this reason, antennas are often placed on the corner of the PCB, or designed to be used on one of the edges of the PCB. Manufacturers design their antennas to operate in different positions, and the datasheet for each antenna will specify exactly how the antenna radiates, and how to place it on the host PCB to optimize performance.

illustration of an SMD placed on the long edge of a PCBFigure 2 An SMD antenna is placed on the long edge of a PCB. Source: Antenova

There are certain components that need to be placed as far away from the antenna as possible, because they create noise and are likely to cause impedance to the radiated performance of the antenna. The chief culprits for causing interference are motors, batteries, and any components with a high-metal content such as LCDs.

Finally, the outer casing for the device may also cause interruptions for the radiated fields of the antenna. If the device has a plastic cover, be careful, because plastic has a higher dielectric constant than air, and can likely detune the resonant frequency of the antenna.

Ground planes and board design for RF

SMD antennas typically require a ground plane to radiate. In an embedded design, the ground plane is a section of the PCB which provides a flat contiguous surface for the RF signal to reciprocate from. The ground plane must be of a certain length, which is related to the longest wavelength of the antenna. It’s therefore critical to provide the correct amount of space for the ground plane on the PCB, as this will allow the antenna to radiate efficiently.

Again, it will be explained in the antenna manufacturer’s datasheet. Sometimes the ground plane is below the antenna, and sometimes it’s adjacent to it; this will vary from antenna to antenna, and will be a factor in your choice of SMD antenna.

As well as requiring a ground plane, antennas often require a certain space around them to be free of any other components—a keep-out area. The keep-out requirements for each antenna are also unique for each individual antenna, and these areas will need to be kept clear of other components, possibly through several layers, if not all of the board beneath the antenna.

The RF performance of the device will be best if the RF trace lines are kept as short as possible from the radio to the antenna. This is because longer transmission lines are more prone to reflections and signal energy losses in the copper trace, which can degrade the overall radiated performance of the device. Therefore, we recommend that the RF elements of the design be placed as close as possible to the antenna.

Some designs will benefit from a lumped element matching circuit—such as a Pi matching circuit—within the RF circuitry to tune the antenna for improved working bandwidth.

illustration of an antenna design with an active tuning circuitFigure 3 An antenna design with an active tuning circuit can overcome the bandwidth reduction seen with a smaller ground plane. Source: Antenova

Gerber review and RF testing

Before the design is finalized, a Gerber file layout review provides a good check of the RF circuits and transmission lines in the layer stack-up of the PCB design and will flag any areas that are not quite correct. The Gerber review checks that the antenna, the module, the transmission lines, vias, and PCB materials are all optimized for good RF performance. Some antenna design companies charge for Gerber reviews, while others offer it free of charge, or you may use a software design package for this purpose.

The next test will be to measure how well the antenna performs on its PCB. This is done in an anechoic chamber. However, the antenna may work well in the perfect conditions of the chamber, and then behave differently in its final application, where people and objects in the environment could affect how the antenna radiates. So, with a design for a wearable device, or a medical device to be used close to the human body, the antenna should be tuned and tested with a phantom head or phantom hand in the anechoic chamber.

A few more tests can assess how well the design will work in real life: passive testing, over-the-air (OTA) testing, and synthetic aperture radar (SAR). The results will be measured for efficiency, spurious emissions, total radiated power, and total isotropic sensitivity.

It’s critical to test the design, to be sure that the device will perform correctly in day-to-day use, and will not create emissions or interference. These tests are critical where the design requires carrier network approval, and it’s usual to use a specialist RF testing service.

Finally, every design for the cellular networks must be certified by the mobile network carrier to gain approval to be used on its network.

Geoff Schulteis, an RF antenna application specialist, leads technical support for Antenova’s North American customer designs.

Related articles:

googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); });

The post Planning the design cycle for adding an antenna to a wireless device appeared first on EDN.

Qt 6.0: The journey to the next upgrade for cross-platform applications

Срд, 02/24/2021 - 00:53

Qt—which lives in desktop applications, embedded systems, and mobile devices in consumer electronics, vehicles, medical devices, and industrial automation systems to support cross-platform applications and graphical user interfaces—has an upgrade.

screenshot of Qt 6.0 window showing binaries for desktop platformsFigure 1 A pre-release snapshot of Qt 6.0 shows binaries for desktop platforms. Source: The Qt Company

Qt 6.0 claims to provide a one-stop-shop for software design and development based on three pillars. Start with productivity-enhancing tools and APIs that aim to close the gap between the increasing amount of software requirements rising with the exponential growth of the IoT and the stagnant growth of available software developers.

Productivity tools

Qt’s CTO Lars Knoll said that Qt developers will be able to launch Qt applications on any graphics hardware with maximum performance without any runtime overhead. The upgraded rendering hardware interface (RHI) and Qt Quick 3D are the most prominent improvements from a human-machine-interface (HMI) creation perspective.

example of Qt 6.0 3D graphics showing a sapce helmet against an outdoor backgroundFigure 2 While Qt 5 relied on OpenGL for hardware-accelerated graphics, with Qt 6, all 3D graphics in Qt Quick are now built on top of a new abstraction layer for 3D graphics called RHI. Source: The Qt Company

The RHI allows the running of Qt on any hardware acceleration platform. That includes OpenGL and Vulkan on desktop and Metal and Direct3D for mobile platforms. Next, Qt Quick 3D allows for interaction and merging of 2D and 3D content, as modern applications need to utilize both concepts for appealing and modern-looking user interfaces.

Enhanced user experience

The sixth major version of Qt takes a more holistic approach to software development while offering a new graphics architecture and programming language improvements. Qt has unified the tools and has made their use easier for cross-functional teams building 2D and 3D applications.

For instance, Qt Design Studio 2.0 enables designers to create compelling experiences for 2D and 3D user interfaces. “We have also enhanced Qt’s support for MCUs in Qt Design Studio,” Knoll said. “If the user creates a Qt for MCUs, Qt features that are not available for MCUs will also be disabled in the UI.”

Scalability boost

Qt 6.0 allows the same code to be used on any hardware of any size—from MCUs to supercomputers—on any operating system and even on bare metal without an operating system. “It improves coding efficiency to a level that even ultra-low-cost hardware can support smartphone-like user interfaces,” said Knoll.

He added that Qt 6.0 has been putting a lot of effort into coding efficiency and making developers feel comfortable using Qt as much as possible. For instance, Qt 6.0 is based upon C++17, bringing lots of innovations and programming improvements to any C++ developer.

3 screenshots of Qt 6.0 windows showing native styling on macOS and WindowsFigure 3 With 6.0, Qt Quick now supports native styling on both macOS and Windows. Source: The Qt Company

Knoll, also the chief maintainer of the Qt Project, pointed out that tools generally serve a dedicated purpose, “However, what’s important is how they interact with each other.” Qt 6.0 aims to allow designers to implement various dynamic and interactive behaviors—such as navigation flows, UI states, transitions, and animations—and reduce the amount of required specifications and implementation effort.

Majeed Ahmad, Editor-in-Chief of EDN, has covered the electronics design industry for more than two decades.

Related articles:

googletag.cmd.push(function() { googletag.display('div-gpt-ad-inread'); });
googletag.cmd.push(function() { googletag.display('div-gpt-ad-native'); });

The post Qt 6.0: The journey to the next upgrade for cross-platform applications appeared first on EDN.

Harvest indoor lighting for IoT devices

Mon, 02/22/2021 - 18:19

Ambient light is a prime source of harvestable energy for low-power electronic devices. Most photovoltaic cells, however, are designed to work with direct sunlight and are ineffective indoors. Now, an organic light cell is available that can glean power from lighting as low as 20 Lux (lumens per square meter), opening the door for indoor energy harvesting in the IoT.

As the EDN article Energy harvesting expands IoT options points out, there are many applications for which even battery power is impractical. The cost and logistics of changing batteries can quickly become too burdensome for a widespread product installation. Outfitting a large apartment complex with traditional keyless electronic door locks, for instance, would require periodic replacement of hundreds of batteries, an unappealing prospect for building operators. Harvesting the energy of hallway lighting, however, could eliminate that requirement by continuously storing power against the lock’s occasional use.

The trouble with traditional photovoltaic cells in such an application is their poor response to low light levels. Such cells are designed to be effective in direct sunlight, hence the name “solar cell”; direct sunlight has an illuminance of 30,000 to 100,000 Lux. Typical indoor home lighting, though, ranges from 50 to 150 Lux. At these levels traditional solar cells produce almost no usable power.

This inability of solar cells to be effective at low light levels has limited their utility for indoor energy harvesting applications. A new light cell from Swedish company Epishine promises to solve that problem. The company has created an organic photovoltaic cell optimized to work efficiently at illumination levels of 20 to 1000 Lux under artificial lighting such as LEDs and fluorescent lamps.

The Epishine light cell is fabricated by printing the active elements onto plastic films that get pressed together. The result is a thin (0.2 mm), flexible product that produces about 18 µW/cm² at an illumination of 500 Lux. The finished cell has a 10-mm minimum bend radius, so it can be curved (Figure 1) and fabricated to a custom shape to fit installation requirements.
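As a rough sanity check of that figure, the raw harvest for the stock cell sizes described below can be estimated (a sketch assuming the quoted ~18 µW/cm² power density scales linearly with cell area):

```python
# Rough harvested-power estimate for the stock light-cell sizes,
# assuming the quoted ~18 uW/cm^2 at 500 Lux scales linearly with area.
POWER_DENSITY_UW_PER_CM2 = 18.0  # at 500 Lux, per the article

def harvested_power_uw(width_mm: float, height_mm: float) -> float:
    """Return estimated cell output power in microwatts at 500 Lux."""
    area_cm2 = (width_mm / 10.0) * (height_mm / 10.0)
    return area_cm2 * POWER_DENSITY_UW_PER_CM2

for h in (20, 30, 50):  # stock heights; all stock cells are 50 mm wide
    print(f"50 x {h} mm cell: ~{harvested_power_uw(50, h):.0f} uW at 500 Lux")
```

So even the smallest stock size yields on the order of 180 µW under bright indoor lighting, with the 50×50 mm cell reaching roughly 450 µW.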

Figure 1 This indoor light cell is printed on a plastic film, giving it a flexibility that opens installation options. Source: Epishine

In addition to its flexibility, the Epishine light cell has another handy attribute – it is essentially transparent to radio waves. The active organic electronic layer is only around 100-nm thick, so there is almost no RF absorption. This gives IoT designers the space-saving option of positioning their antenna behind the light cell rather than alongside it.

Stock versions of the light cell are available in three sizes – 50-mm wide and 20-, 30-, and 50-mm in height. They come in two configuration options, six and eight series-connected cells. Each cell has an open-circuit voltage (VOC) of a little over half a volt, so six cells in series yield a VOC of 3.8V and eight cells yield 5.05V, making them compatible with common battery-pack voltages. As shown in Figure 2, a 50×50 mm, six-cell configuration would yield at least 40 μA at 3V in normal home lighting.
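The per-cell voltage implied by those two stack figures is easy to back out (a sketch; only the 3.8 V and 5.05 V open-circuit values come from the vendor data above):

```python
# Back out the implied per-cell open-circuit voltage from the two
# stock series configurations quoted by the vendor.
stacks = {6: 3.8, 8: 5.05}  # cells in series -> stack VOC (volts)

for n, voc in stacks.items():
    print(f"{n}-cell stack: {voc} V -> ~{voc / n:.2f} V per cell")
```

Both configurations work out to about 0.63 V per cell, consistent with the “a little over half a volt” figure.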

Figure 2 Power generation matching the needs of many small microcontrollers is available from normal room lighting with the Epishine light cell. Source: Epishine

While tens of microamperes may seem a meager supply, they can be more than enough to power today’s tiny, energy-efficient microcontrollers – at least while the lights are on. For applications needing to operate in darkness or to supply intermittent power at higher levels, the light cell can be paired with an energy storage device, such as a rechargeable battery. Properly sized to accommodate battery and light cell degradation over time, the energy harvesting power system can outlast the design’s installed operating life.
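A simple daily energy budget illustrates the pairing. In this sketch, only the 40 µA at 3 V harvest figure comes from the article; the 12 lit hours per day and the hypothetical node that wakes briefly each hour are illustrative assumptions:

```python
# Daily energy budget for a light-cell-plus-storage system.
# Harvest figure (40 uA at 3 V under home lighting) is from the article;
# the load profile and lighting hours are illustrative assumptions.
HARVEST_UA, BUS_V = 40.0, 3.0
LIGHT_HOURS = 12.0  # assumed lit hours per day

harvest_uwh = HARVEST_UA * BUS_V * LIGHT_HOURS  # microwatt-hours per day

# Hypothetical node: 10 mA active for 2 s each hour, 1 uA sleep otherwise.
ACTIVE_MA, ACTIVE_S, SLEEP_UA = 10.0, 2.0, 1.0
active_uwh = ACTIVE_MA * 1000 * BUS_V * (24 * ACTIVE_S / 3600)  # uW * h
sleep_uwh = SLEEP_UA * BUS_V * 24
load_uwh = active_uwh + sleep_uwh

print(f"harvest: {harvest_uwh:.0f} uWh/day, load: {load_uwh:.0f} uWh/day")
print("surplus" if harvest_uwh > load_uwh else "deficit")
```

Under these assumptions the cell harvests about 1440 µWh per day against a load near 470 µWh, so a small storage element can carry the node through the dark hours with margin to spare.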

Rich Quinnell is a retired engineer and writer, and former Editor-in-Chief at EDN.

Related articles:


The post Harvest indoor lighting for IoT devices appeared first on EDN.