EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
Address: https://www.edn.com/
Updated: 2 hours 55 min ago

Handheld analyzers gain pulse generator option

Thu, 05/09/2024 - 22:26

FieldFox handheld RF analyzers from Keysight can now generate an array of pulse types at frequencies as high as 54 GHz. Outfitted with Option 357 pulse generator software, the FieldFox B- and C-Series analyzers give field engineers access to pulse generation capabilities that support analog modulations and user-defined pulse sequences. All that is needed to upgrade an existing analyzer is a software license key and firmware upgrade.

The software option includes standard pulses, FM chirps, FM triangles, AM pulses, and user-definable pulse sequences. In addition, it can create continuous wave (CW) signals with or without AM/FM modulations, including frequency shift keying (FSK) and binary phase shift keying (BPSK). Key parameters of the generated signal are displayed in both numerical and graphical formats.

FieldFox handheld analyzers equipped with pulse generation serve many purposes, including field radar testing for air traffic control, simulating automotive radar scenarios, performing field EMI leakage checks, and assessing propagation loss of mobile networks.

FieldFox product page

Keysight Technologies 


Software platform streamlines factory automation

Thu, 05/09/2024 - 22:26

Reducing shop-floor hardware, Siemens’ Simatic Automation Workstation delivers centralized software-defined factory automation and control. The system allows manufacturers to replace a hardware programmable logic controller (PLC), conventional human-machine interface (HMI), and edge device with a single software-based workstation.

Hundreds of PLCs can be found throughout plants, each one requiring extensive programming to keep it up-to-date, secure, and aligned with other PLCs in the manufacturing environment. In contrast, the Simatic Workstation can be viewed and managed from a central point. Since programming, updates, and patches can be deployed to the entire fleet in parallel, the shop floor remains in sync.

Simatic Workstation is an on-premise operational technology (OT) platform. It offers high data throughput and low latency, essential for running various modular applications. Simatic caters to conventional automation tasks, like motion control and sequencing, as well as advanced automation operations that incorporate artificial intelligence.

The Simatic Automation Workstation is the latest addition to Siemens’ Xcelerator digital business platform. Co-creator Ford Motor Company will be the first customer to deploy and scale these workstations across its manufacturing operations.

Siemens


Silicon capacitor boasts ultra-low ESL

Thu, 05/09/2024 - 22:26

Joining Empower’s family of E-CAP silicon capacitors for high-frequency decoupling is the EC1005P, a device with an equivalent series inductance (ESL) of just 1 picohenry (pH). The EC1005P offers a capacitance of 16.6 µF, along with low impedance up to 1 GHz. A very thin profile allows the capacitor to be embedded into the substrate or interposer of any SoC, especially those used in high-performance computing (HPC) and artificial intelligence (AI) applications.

E-CAP high-density silicon capacitor technology fulfills the ‘last inch’ decoupling gap from the voltage regulators to the SoC supply pins. This approach integrates multiple discrete components into a single monolithic device with a much smaller footprint and component count than solutions based on conventional multilayer ceramic capacitors.

In addition to sub-1-pH ESL, the EC1005P provides sub-3-mΩ equivalent series resistance (ESR). The capacitor comes in a 3.643×3.036-mm, 120-pad chip-scale package. Its standard profile of 784 µm can be customized for various height requirements.
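
As a rough sanity check of the "low impedance up to 1 GHz" claim, the short C sketch below models the part as a simple series RLC network using only the figures quoted above (16.6 µF, 1 pH ESL, ~3 mΩ ESR); the single-element model and the spot frequencies are simplifying assumptions for illustration, not Empower characterization data.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Series RLC magnitude: |Z| = sqrt(ESR^2 + (2*pi*f*L - 1/(2*pi*f*C))^2) */
static double z_mag(double f, double c, double l, double esr)
{
    double xl = 2.0 * M_PI * f * l;
    double xc = 1.0 / (2.0 * M_PI * f * c);
    return sqrt(esr * esr + (xl - xc) * (xl - xc));
}

int main(void)
{
    const double C = 16.6e-6;  /* 16.6 uF capacitance                  */
    const double L = 1.0e-12;  /* 1 pH equivalent series inductance    */
    const double R = 3.0e-3;   /* ~3 mohm equivalent series resistance */

    printf("Self-resonant frequency: %.1f MHz\n",
           1.0 / (2.0 * M_PI * sqrt(L * C)) / 1e6);       /* ~39 MHz */

    const double freqs[] = { 1e6, 10e6, 100e6, 1e9 };
    for (unsigned i = 0; i < sizeof freqs / sizeof freqs[0]; ++i)
        printf("|Z| at %5.0f MHz: %.2f mohm\n",
               freqs[i] / 1e6, 1e3 * z_mag(freqs[i], C, L, R));
    return 0;
}
```

With these numbers the impedance stays in the single-digit-milliohm range from the ~39 MHz self-resonance out to 1 GHz, consistent with the decoupling role described above.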

The EC1005P E-CAP is sampling now, with volume production expected in Q4 2024. A datasheet for the EC1005P was not available at the time of this announcement. For more information, see Empower's E-CAP product family page.

Empower Semiconductor 


Crossbar switch eases in-vehicle USB-C connectivity

Thu, 05/09/2024 - 22:25

A 10-Gbps automotive-grade crossbar switch from Diodes routes USB 3.2 and DisplayPort 2.1 signals through a USB Type-C connector. The PI3USB31532Q crossbar switch maintains high signal integrity when used in automotive smart cockpit and rear seat entertainment applications.

For design flexibility, the PI3USB31532Q supports three USB-C compliant configuration modes switching at 10 Gbps. It can connect a single lane of USB 3.2 Gen 2; one lane of USB 3.2 Gen 2 and two channels of DisplayPort 2.1 UHBR10; or four channels of DisplayPort 2.1 UHBR10 to the USB-C connector. When configured for DisplayPort, the switch also connects the complementary AUX channels to the USB-C sideband pins. Switch configuration is controlled via an I2C interface or on-chip logic using four external pins.

The crossbar switch provides a -3-dB bandwidth of 8.3 GHz, with insertion loss, return loss and crosstalk of -1.7 dB, -15 dB, and -38 dB, respectively, at 10 Gbps. Qualified to AEC-Q100 Grade 2 requirements, the part operates over a temperature range of -40°C to +105°C and requires a 3.3-V supply.

Housed in a 3×6-mm, 40-pin QFN package, the PI3USB31532Q crossbar switch costs $1.10 each in lots of 3500 units.

PI3USB31532Q product page

Diodes


MCU manages 12-V automotive batteries

Thu, 05/09/2024 - 22:25

Infineon’s PSoC 4 HVPA 144k MCU serves as a programmable embedded system for monitoring and managing automotive 12-V lead-acid batteries. The ISO 26262-compliant part integrates precision analog and high-voltage subsystems on a single chip, enabling safe, intelligent battery sensing and management.

Powered by an Arm Cortex-M0+ core operating at up to 48 MHz, the 32-bit microcontroller supplies up to 128 kbytes of code flash, 8 kbytes of data flash, and 8 kbytes of SRAM, all with ECC. Dual delta-sigma ADCs, together with four digital filtering channels, determine the battery’s state-of-charge and state-of-health by measuring voltage, current, and temperature with an accuracy of up to ±0.1%.

An integrated 12-V LDO regulator, which tolerates up to 42 V, allows the device to be supplied directly from the 12-V lead-acid battery without requiring an external power supply. The high-voltage subsystem also includes a LIN transceiver (physical interface or PHY).

The PSoC 4 HVPA 144k is available now in 6×6-mm, 32-pin QFN packages. Infineon also offers an evaluation board and automotive-grade software.

PSoC 4 HVPA 144k product page

Infineon Technologies 


2×AA/USB: OK!

Thu, 05/09/2024 - 16:20

While an internal, rechargeable lithium battery is usually the best solution for portable kit nowadays, there are still times when using replaceable cells with an external power option, probably from a USB source, is more appropriate. This DI shows ways of optimizing this.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The usual way of combining power sources is to parallel them, with a series diode for each. That is fine if the voltages match and some loss of effective battery capacity, owing to a diode’s voltage drop, can be tolerated. Let’s assume the kit in question is something small and hand-held or pocketable, probably using a microcontroller like a PIC, with a battery comprising two AA cells, the option of an external 5 V supply, and a step-up converter producing a 3.3 V internal power rail. Simple steering diodes used here would give a voltage mismatch for the external power while wasting 10–20% of the battery’s capacity, since a 0.3–0.6-V diode drop is a sizeable fraction of a nominal 3-V battery.

Figure 1 shows a much better way of implementing things. The external power is pre-regulated to avoid the mismatch, while active switching minimizes battery losses. I have used this scheme in both one-offs and production units, and always to good effect.

Figure 1 Pre-regulation of an external supply is combined with an almost lossless switch in series with the battery, which maximizes its life.

The battery feed is controlled by Q1, which is a reversed p-MOSFET. U1 drops any incoming voltage down to 3.3 V. Without external power, Q1’s gate is more negative than its source, so it is firmly on, and (almost) the full battery voltage appears across C3 to feed the boost converter. Q2’s emitter–base diode stops any current flowing back into U1. Apart from the internal drain–source or body diode, MOSFETs are almost symmetrical in their main characteristics, which allows this reversed operation.

When external power is present, Q1.G will be biased to 3.3 V, switching it off and effectively disconnecting the battery. Q2 is now driven into saturation connecting U1’s 3.3 V output, less Q2’s saturated forward voltage of 100–200 mV, to the boost converter. (The 2N2222, as shown, has a lower VSAT than many other types.) Note that Q2’s base current isn’t wasted, but just adds to the boost converter’s power feed. Using a diode to isolate U1 would incur a greater voltage drop, which could cause problems: new, top-quality AA manganese alkaline (MnAlk) cells can have an off-load voltage well over 1.6 V, and if the voltage across C3 were much less than 3 V, they could discharge through the MOSFET’s inherent drain–source or body diode. This arrangement avoids any such problems.

Reversed MOSFETs have been used to give battery-reversal protection for many years, and of course such protection is inherent in these circuits. The body diode also provides a secondary path for current from the battery if Q1 is not fully on, as in the few microseconds after external power is disconnected.

Figure 1 shows U1 as an LM1117-3.3 or similar type, but many more modern regulators allow a better solution because their outputs appear as open circuits when they are unpowered, rather than allowing reverse current to flow from their outputs to ground. Figure 2 shows this implementation.

Figure 2 Using more recent designs of regulator means that Q2 is no longer necessary.

Now the regulator’s output can be connected directly to C3 and the boost converter. Some devices also have an internal switch which completely isolates the output, and D1 can then be omitted. Regulators like these could in principle feed into the final 3.3-V rail directly, but this can actually complicate matters because the boost converter would then also need to be reverse-proof and might itself need to be turned off. R2 is now used to bias Q1 off when external power is present.

If we assume that the kit uses a microcontroller, we can easily monitor the PSU’s operation. R5—included purely for safety’s sake—lets the microcontroller check for the presence of external power, while R3 and R4 allow it to measure the battery voltage accurately. Their values, calculated on the assumption that we use an 8-bit A–D conversion with a 3.3 V reference, give a resolution of 10 mV/count, or 5 mV per cell. Placing them directly across the battery loads it with ~5–6 µA, which would drain typical cells in about 50 years; we can live with that. The resistor ratio chosen is close to 1%-accurate.
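
As a minimal firmware sketch of that measurement, the C fragment below simply applies the 10 mV/count (5 mV per cell) scaling stated above; read_battery_adc() is a hypothetical stand-in for the target PIC's actual ADC routine, and the raw count shown is arbitrary.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for the MCU's 8-bit ADC read of the R3/R4 tap. */
static uint8_t read_battery_adc(void)
{
    return 250;  /* arbitrary raw count, purely for demonstration */
}

int main(void)
{
    /* Per the text: with the divider values chosen, one ADC count
       corresponds to 10 mV of battery voltage, i.e., 5 mV per cell. */
    uint8_t  counts  = read_battery_adc();
    uint16_t batt_mV = (uint16_t)counts * 10u;  /* whole 2xAA battery */
    uint16_t cell_mV = batt_mV / 2u;            /* per cell           */

    printf("Battery: %u mV (%u mV per cell)\n", batt_mV, cell_mV);
    return 0;
}
```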

Many components have no values assigned because they will depend on your choice of regulator and boost converter. With its LM1117-3.3, the circuit of Figure 1 can handle inputs of up to 15 V, though a TO-220 version then gets rather warm with load currents approaching 80 mA (~1 W, its practical power limit without heatsinking).
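
The "~1 W" figure is just linear-regulator dissipation with the numbers quoted above (15-V input, 3.3-V output, 80-mA load), ignoring quiescent current; a quick check:

```c
#include <stdio.h>

int main(void)
{
    /* Linear-regulator dissipation: P = (Vin - Vout) * Iload */
    const double vin   = 15.0;   /* worst-case external input, V */
    const double vout  = 3.3;    /* LM1117-3.3 output, V         */
    const double iload = 0.080;  /* load current, A              */

    printf("Dissipation: %.2f W\n", (vin - vout) * iload);  /* ~0.94 W */
    return 0;
}
```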

I have also used Figure 2 with Microchip’s MCP1824T-3302 feeding a Maxim MAX1674 step-up converter, with an IRLML6402 for Q1, which must have a low on-resistance. Many other, and more recent, devices will be suitable, and you probably have your own favorites.

While the external power input is shown as being naked, you may want to clothe it with some filtering and protection such as a poly-fuse and a suitable Zener or TVS. Similarly, no connector is specified, but USBs and barrel jacks both have their places.

While this is shown for nominal 3V/5V supplies, it can be used at higher voltages subject to gate–source voltage limitations owing to the MOSFET’s input protection diodes, the breakdown voltages of which can range from 6 V to 20 V, so check your device’s data sheet.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

 Related Content


Optimize battery selection and operating life of wireless IoT devices

Thu, 05/09/2024 - 14:59

Batteries are essential for powering many Internet of Things (IoT) devices, particularly wireless sensors, which are now deployed in the billions. But batteries are often difficult to access and expensive to change because it’s a manual process. Anything that can be done to maximize the life of batteries and minimize or eliminate the need to change them during their operating life is a worthwhile endeavour and a significant step toward sustainability and efficiency.

Taking the example of a wireless sensor, optimizing battery life is a five-step process:

  1. Select the components for your prototype device: sensor, MCU, and associated electronics.
  2. Use a smart power supply with measurement capabilities to establish a detailed energy profile for your device under simulated operating conditions.
  3. Evaluate your battery options based on the energy profile of your device.
  4. Optimize the device parameters (hardware, firmware, software, and wireless protocol).
  5. Make your final selection of the battery type and capacity with the best match to your device’s requirements.

Selecting device type and wireless protocol

The microcontroller (MCU) is the most common processing resource at the heart of embedded devices. You’ll often choose which one to use for your next wireless sensor based on experience, the ecosystem with which you’re most familiar, or corporate dictate. But when you have a choice and conserving energy is a key concern for your application, there may be a shortcut.

Rather than plow through thousands of datasheets, you could check out EEMBC, an independent benchmarking organization. The EEMBC website not only enables a quick comparison of your options but also offers access to a time-saving analysis tool that lists the sensitivity of MCU platforms to various design parameters.

Most IoT sensors spend a lot of time in sleep mode and send only short bursts of data. So, it’s important to understand how your short-listed MCUs manage sleep, idle and run modes, and how efficiently they do that.

Next, you need to decide on the wireless protocol(s) you’ll be using. Range, data rate, duty cycle, and compatibility within the application’s operating environment will all be important considerations.

Figure 1 Data rates and range are the fundamental parameters considered when choosing a wireless protocol. Source: BehrTech

Once you’ve established the basics, digging into the energy efficiency of each protocol gets more complex and it’s a moving target. There are frequent new developments and enhancements to established wireless standards.

At data rates of up to 10 Kbps, Bluetooth LE/Mesh, LoRa, or Zigbee are usually the lowest energy protocols of choice for distances up to 10 meters. If you need to cover a 1-km range, NB-IoT may be on your list, but at an order of magnitude higher energy usage.

In fact, MCU hardware, firmware and software, the wireless protocol, and the physical environment in which an IoT device operates are all variables that need to be optimized to conserve energy. The only effective way to do that is to model these conditions during development and watch the effects on the fly as you change any of these parameters.

Establish an initial energy profile of device under test (DUT)

The starting point is to use a smart, programmable power supply and measurement unit to profile and record the energy usage of your device. This is necessary because simple peak and average power measurements with multimeters can only provide limited information. The Otii Arc Pro from Qoitech was used here to illustrate the process.

Consider a wireless MCU. In run mode, it may be putting out a +6 dBm wireless signal and consuming 10 mA or more. In deep sleep mode, the current consumption might fall below 0.2 µA. That’s a dynamic range of roughly 50,000:1, and changes happen almost instantaneously, certainly within microseconds. Conventional multimeters can’t capture changes like these, so they can’t help you understand the precise energy profile of your device. Without that, your choice of battery is open to miscalculation.
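
To see why that profile, not the peak number, sets battery life, here is a back-of-the-envelope C sketch using the run and sleep currents above; the burst timing and cell capacity are purely hypothetical, and the estimate ignores transient detail, self-discharge, and derating, which is exactly why the article recommends measured profiling instead.

```c
#include <stdio.h>

int main(void)
{
    /* Currents from the text */
    const double i_run_mA   = 10.0;    /* run/transmit current       */
    const double i_sleep_mA = 0.0002;  /* deep-sleep current, 0.2 uA */

    /* Hypothetical duty cycle and battery: adjust to your device */
    const double t_run_s      = 0.05;    /* 50-ms burst            */
    const double period_s     = 60.0;    /* one burst per minute   */
    const double capacity_mAh = 1000.0;  /* assumed cell capacity  */

    double i_avg_mA = (i_run_mA * t_run_s +
                       i_sleep_mA * (period_s - t_run_s)) / period_s;
    double life_h   = capacity_mAh / i_avg_mA;

    printf("Average current: %.1f uA\n", 1000.0 * i_avg_mA);
    printf("Naive battery life: %.0f hours (~%.1f years)\n",
           life_h, life_h / (24.0 * 365.0));
    return 0;
}
```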

Your smart power supply is a digitally controlled power source offering control over parameters such as voltage, current, power, and mode of operation. Voltage control should ideally be in 1 mV steps so that you can determine the DUT’s energy consumption at different voltage levels to mimic battery discharge.

You’ll need sense pins to monitor the DUT power rails, a UART to see what happens when you make code changes, and GPIO pins for status monitoring. Standalone units are available, but it can be more flexible and economical to choose a smart power supply that uses your computer’s processing resources and display, as shown in the example below.

Figure 2 The GUI for a smart power supply can run on Windows, MacOS, or Ubuntu. Source: Qoitech

After connecting, you power and monitor the DUT simultaneously. You’re presented with a clear picture of voltages and current changes over time. Transients that you would never be able to see on a traditional meter are clearly visible and you can immediately detect unexpected anomalies.

Figure 3 A smart power profiler gives you a detailed comparison of your device’s energy consumption for different hardware and firmware versions. Source: Qoitech

From the stored data in the smart power supply, you’ll be able to make a short list of battery options.

Choosing a battery

Battery selection needs to consider capacity, energy density, voltage, discharge profile, and temperature. Datasheet comparisons are the starting point but it’s important to validate the claims of battery manufacturers by benchmarking their batteries through testing. Datasheet information is based on performance under “normal conditions” which may not apply to your application.

Depending on your smart power supply model, the DUT energy profiling described earlier may provide an initial battery life estimate based on a pre-programmed battery type and capacity. Either the same instrument or a separate piece of test equipment may then be used for a more detailed examination of battery performance in your application. Accelerated discharge measurements, when properly set up, are a time-saving alternative to the years it may take a well-designed IoT device to exhaust its battery.

These measurements must follow best practices to create an accurate profile. These include maintaining high discharge consistency to achieve a match to the DUT’s peak current, shortening the cycle time and increasing sleep current so that the battery can recover. You should also consult with battery manufacturers to validate any assumptions you make during the process.

You can profile the same battery chemistries from different manufacturers, or different battery chemistries, perhaps comparing lithium coin cells with AA alkaline batteries.

Figure 4 The comparison shows accelerated discharge characteristics for AA and AAA alkaline batteries from five different manufacturers. Source: Qoitech

By this stage, you have a good understanding of both the energy profile of your device and of the battery type and capacity that’s likely to result in the longest operating life in your applications. Upload your chosen battery profile to your smart power supply and set it up to emulate that battery.

Optimize and iterate

You can now go back to the DUT and optimize hardware and software for the lowest power consumption in near real-world conditions. You may have the flexibility to experiment with different wireless protocols, but even if that’s not the case, experimenting with sleep and deep-sleep modes, network routing, and even alternative data security protocols can all yield improvements, avoiding a common problem where 40 bytes of data can easily become several Kbytes.

Where the changes create a significant shift in your device’s energy profile, you may also review the choice of battery and evaluate again until you achieve the best match.

While this process may seem lengthy, it can be completed in just a few hours and may extend the operating life of a wireless IoT edge device, and hence reduce battery waste, by up to 30%.

Björn Rosqvist, co-founder and chief product officer of Qoitech, has 20+ years of experience in power electronics, automotive, and telecom with companies such as ABB, Ericsson, Flatfrog, Sony, and Volvo Cars.

 

Related Content


Apple’s Spring 2024: In-person announcements no more?

Wed, 05/08/2024 - 17:35

By means of introduction to my coverage of Apple’s 2024-so-far notable news, I’d like to share the amusing title, along with spot-on excerpts from the body text, from a prescient piece I saw on Macworld yesterday. The title? “Get ready for another Apple meeting that could have been an email”. Now the excerpts:

Apple started running virtual press events during the pandemic when in-person gatherings made little sense and at various times were frowned upon or literally illegal. But Apple has largely stuck with that format even as health concerns lessened and its own employees were herded back into the office.

 Why is that? Because virtual events have advantages far beyond the containment of disease. Aside from avoiding the logistical headaches of getting a thousand bad-tempered journalists from around the world to the same place at the same time, a pre-recorded video presentation is much easier to run smoothly than a live performance…

 Nobody cringes harder than me when live performers get things wrong, and I absolutely get the attraction of virtual keynotes for Apple. But it does raise some awkward existential questions about why we need to bother with the elaborate charade that is a keynote presentation. What, after all, is the point of a keynote? If it’s just to get information about new products, that can be done far more efficiently via a press release that you can read at your own speed; just the facts, no sitting through skits and corporate self-congratulation.

 Is it to be marketed by the best hypemen in the business? If that’s really something you want, you might as well get it from an ad: virtual keynotes give none of that dubious excitement and tribalistic sense of inclusivity you get with a live performance. And we’ve even lost the stress-test element of seeing an executive operating the product under extreme pressure. What we’re left with is a strange hybrid: a long press release read out by a series of charisma-free executives, interspersed with advertisements.

I said something similar in my coverage of Apple’s June 2023 Worldwide Developer Conference (WWDC):

This year’s event introductory (and product introduction) presentation series was lengthy, with a runtime of more than two hours, and was also entirely pre-recorded. This has been Apple’s approach in recent years, beginning roughly coincident with the COVID lockdown and consequent transition to a virtual event (beginning in 2020; 2019 was still in-person)…even though both last- and this-years’ events returned to in-person from a keynote video viewing standpoint.

 On the one hand, I get it; as someone who (among other things) delivers events as part of his “day job”, the appeal of a tightly-scripted, glitch-free set of presentations and demonstrations can’t be understated. But live events also have notable appeal: no matter how much they’re practiced beforehand, there’s still the potential for a glitch, and therefore when everything still runs smoothly, what’s revealed and detailed is (IMHO) all the more impactful as a result.

What we’ve ended up with so far this year is a mix of press release-only and virtual-event announcements, in part (I suspect, as does Macworld) because of “building block” mass-production availability delays for the products in today’s (as I write these words on Tuesday, May 7) news.

But I’m getting ahead of myself.

The Vision Pro

Let’s rewind to early January, when Apple confirmed that its first-generation Vision Pro headset (which I’d documented in detail within last June’s WWDC coverage) would open for pre-orders on January 19, with in-store availability starting February 2.

Granted, the product’s technology underpinnings remain amazing 11 months post-initial unveil:

But I’m still not sold on the mainstream (translation: high volume) appeal of such a product, no matter how many entertainment experiences and broader optimized applications Apple tries to tempt me with (and no matter how much Apple may drop the price in the future, assuming it even can to a meaningful degree, given bill-of-materials cost and profit-expectation realities). To be clear, this isn’t an Apple-only diss; I’ve expressed the same skepticism in the past about offerings from Oculus-now-Meta and others. And at the root of my pessimism about AR/VR/XR/choose-your-favorite-acronym (or, if you’re Apple, “spatial computing”, whatever that means) headsets may indeed be enduring optimism of a different sort.

Unlike the protagonists of science fiction classics such as William Gibson’s Neuromancer and Virtual Light, Neal Stephenson’s Snow Crash, and Ernest Cline’s Ready Player One, I don’t find the real world to be sufficiently unpleasant that I’m willing to completely disengage from it for long periods of time (and no, the Vision Pro’s EyeSight virtual projected face doesn’t bridge this gap). Scan through any of the Vision Pro reviews published elsewhere and you’ll on-average encounter similar lukewarm-at-best enthusiasm from others. And I can’t help but draw an accurate-or-not analogy to Magic Leap’s 2022 consumer-to-enterprise pivot when I see subsequent Apple press releases touting medical and broader business Vision Pro opportunities.

So is the Vision Pro destined to be yet another Apple failure? Maybe…but definitely not assuredly. Granted, we might have another iPod Hi-Fi on our hands, but keep in mind that the first-generation iPhone and iPad also experienced muted adoption. Yours truly even dismissively called the latter “basically a large-screen iPod touch” on a few early occasions. So let’s wait and see how quickly the company and its application-development partners iterate both the platform’s features and cost before we start publishing headlines and crafting obituaries about its demise.

The M3-based MacBook Air

Fast-forward to March, and Apple unveiled M3 SoC-based variants of the MacBook Air (MBA), following up on the 13” M2-based MBA launched at the 2022 WWDC and the first-time-in-this-size 15” M2 MBA unveiled a year later:

Aside from the Apple Silicon application processor upgrade (first publicly discussed last October), there’s faster Wi-Fi (6E) along with an interesting twist on expanded external-display support; the M3-based models can now simultaneously drive two of ‘em, but only when the “clamshell” is closed (i.e., when the internal display is shut off). But the most interesting twist, at least for this nonvolatile-memory-background techie, is that Apple did a seeming back-step on its flash memory architecture. In the M2 generation, the 256 GByte SSD variant consisted of only a single flash memory chip (presumably single-die, to boot, bad pun intended), which bottlenecked performance due to the resultant inability for multi-access parallelism. To get peak read and (especially evident) write speeds, you needed to upgrade to a 512 GByte or larger SSD.

The M3 generation seemingly doesn’t suffer from the same compromise. A post-launch teardown revealed that (at least for that particular device…since Apple multi-sources its flash memory, one data point shouldn’t necessarily be extrapolated to an all-encompassing conclusion) the 256 GByte SSD subsystem comprised two 128 GByte flash memory chips, with consequent restoration of full performance potential. I’m particularly intrigued by this design decision considering that two 128 GByte flash memories conceivably cost Apple more than one 256 GByte alternative (likely the root cause of the earlier M1-to-M2 move). That said, I also don’t underestimate the formidable negotiation “muscle” of Apple’s procurement department…

Earnings

Last week, we got Apple’s second-fiscal-quarter earnings results. I normally don’t cover these at all, and I won’t dwell long on the topic this time, either. But they reveal Apple’s ever-increasing revenue and profit reliance on its “walled garden” services business (to the ever-increasing dismay of its “partners”, along with various worldwide government entities), given that hardware revenue dropped for all hardware categories save Macs, notably including both iPhone and iPad and in spite of the already-discussed Vision Pro launch. That said, the following corporate positioning seemed to be market-calming:

In the March quarter a year ago, we were able to replenish iPhone channel inventory and fulfill significant pent-up demand from the December quarter COVID-related supply disruptions on the iPhone 14 Pro and 14 Pro Max. We estimate this one-time impact added close to $5 billion to the March quarter revenue last year. If we removed this from last year’s results, our March quarter total company revenue this year would have grown.

The iPad Air

And today we got new iPads and accessories. The iPad Air first:

Reminiscent of the aforementioned MacBook Air family earlier this year, they undergo a SoC migration, this time from the M1 to the M2. They also get a relocated front camera, friendlier (as with 2022’s 10th generation conventional iPad) for landscape-orientation usage. And to the “they” in the previous two sentences, as well as again reminiscent of the aforementioned MacBook Air expansion to both 13” and 15” form factors, the iPad Air now comes in both 11” and 13” versions, the latter historically only offered with the iPad Pro.

Speaking of which

The M4 SoC

Like their iPad Air siblings, the newest generation of iPad Pros relocate the front camera to a more landscape orientation-friendly bezel location. But that’s among the least notable enhancements this time around. On the flip side of the coin, perhaps most notable news is that they mark the first-time emergence of Apple’s M4 SoC. I’ll begin with obligatory block diagrams:

Some historical perspective is warranted here. Only six months ago, when Apple rolled out its first three (only?) M3 variants along with inclusive systems, I summarized the to-date situation:

Let’s go back to the M1. Recall that it ended up coming in four different proliferations:

  • The entry-level M1
  • The M1 Pro, with increased CPU and GPU core counts
  • The M1 Max, which kept the CPU core constellation the same but doubled up the graphics subsystem, and
  • The M1 Ultra, a two-die “chiplet” merging together two M1 Max chips with requisite doubling of various core counts, the maximum amount of system memory, and the like

But here’s the thing: it took a considerable amount of time—1.5 years—for Apple to roll out the entire M1 family from its A14 Bionic development starting point:

  • A14 Bionic (the M1 foundation): September 15, 2020
  • M1: November 10, 2020
  • M1 Pro and Max: October 18, 2021
  • M1 Ultra: March 8, 2022

 Now let’s look at the M2 family, starting with its A15 Bionic SoC development foundation:

 Nearly two years’ total latency this time: nine months alone from the A15 to the M2.

I don’t yet know for sure, but for a variety of reasons (process lithography foundation, core mix and characteristics, etc.) I strongly suspect that the M3 chips are not based on the A16 SoC, which was released on September 7, 2022. Instead, I’m pretty confident in prognosticating that Apple went straight to the A17 Pro, unveiled just last month (as I write these words), on September 12 of this year, as their development foundation.

 Now look at the so-far rollout timeline for the M3 family—I think my reason for focusing on it will then be obvious:

  • A17 Pro: September 12, 2023
  • M3: October 30, 2023
  • M3 Pro and Max: October 30, 2023
  • M3 Ultra: TBD
  • M3 Extreme (a long-rumored four-Max-die high-end proliferation, which never ended up appearing in either the M1 or M2 generations): TBD (if at all)

Granted, we only have the initial variant of the M4 SoC so far. There’s no guarantee at this point that additional family members won’t have M1-reminiscent sloth-like rollout schedules. But for today, focus only on the initial-member rollout latencies:

  • M1 to M2: more than 19 months
  • M2 to M3: a bit more than 16 months
  • M3 to M4: a bit more than 6 months

Note, too, that Apple indicates that the M4 is built on a “second-generation 3 nm process” (presumably, like its predecessors, from TSMC). Time for another six-months-back quote:

Conceptually, the M3 flavors are reminiscent of their precursors, albeit with newer generations of various cores, along with a 3 nm fabrication process foundation.

As for the M4, here’s my guess: from a CPU core standpoint, especially given the rapid generational development time, the performance and efficiency cores are likely essentially the same as those in the M3, albeit with some minor microarchitecture tweaks to add-and-enhance deep learning-amenable instructions and the like; hence this press release excerpt:

Both types of cores also feature enhanced, next-generation ML accelerators.

The fact that there are six efficiency cores this time, versus four in the M3, is likely due in no small part to the second-generation 3 nm lithography’s improved transistor packing capabilities along with more optimized die layout efficiencies (any potential remaining M3-to-M4 die size increase might also be cost-counterbalanced by TSMC’s improved 3 nm yields versus last year).

What about the NPU, which Apple brands as the “Neural Engine”? Well, at first glance it’s a significant raw-performance improvement over the one in the M3: 38 TOPS (trillion operations per second) versus 18 TOPS. But here comes another six-months-back quote about the M3:

The M3’s 16-core neural engine (i.e., deep learning inference processing) subsystem is faster than it was in the previous generation. All well and good. But during the presentation, Apple claimed that it was capable of 18 TOPS peak performance. Up to now I’d been assuming, as you know from the reading you’ve already done here, that the M3 was a relatively straight-line derivation of the A17 Pro SoC architecture. But Apple claimed back in September that the A17 Pro’s neural engine ran at 35 TOPS. Waaa?

 I see one (or multiple-in-combination) of (at least) three possibilities to explain this discrepancy:

  • The M3’s neural engine is an older or more generally simpler design than the one in the A17 Pro
  • The M3’s neural engine is under-clocked compared to the one in the A17 Pro
  • The M3’s neural engine’s performance was measured using a different data set (INT16 vs INT8, for example, or FLOAT vs INT) than what was used to benchmark the A17 Pro

My bet remains that the first possibility of the three listed was the dominant if not sole reason for the M3 NPU’s performance downgrade versus that in the A17 Pro. And I’ll also bet that the M4 NPU is essentially the same as the one in the A17 Pro, perhaps again with some minor architecture tweaks (or maybe just a slight clock boost!). So then is the M4 just a tweaked A17 Pro built on a tweaked 3 nm process? Not exactly. Although the GPU architecture also seems to be akin to, if not identical to, the one in the A17 Pro (six-core implementation) and M3 (10-core matching count), the display controller has more tangibly evolved this time, likely in no small part for the display enhancements which I’ll touch on next. Here’s the summary graphic:

More on the iPad Pro

Turning attention to the M4-based iPads themselves, the most significant thing here is that they’re M4-based iPads. This marks the first time that a new Apple Silicon generation has shown up in something other than an Apple computer (notably skipping the M3-based iPad Pro iteration in the process, as well), and I don’t think it’s just a random coincidence. Apple’s clearly, to me, putting a firm stake in the ground as to the corporate importance of its comparatively proprietary (versus the burgeoning array of Arm-based Windows computers) tablet product line, both in an absolute sense and versus computers (Apple’s own and others). A famous Steve Jobs quote comes to my mind at this point:

If you don’t cannibalize yourself someone else will.

The other notable iPad Pro enhancement this time around is the belated but still significant display migration to OLED technology, which I forecasted last August. Unsurprisingly, thanks to the resultant elimination of a dedicated backlight (an OLED attribute I noted way back in 2010 and revisited in 2019) the tablets are now significantly thinner as a result, in spite of the fact that they’re constructed in a fairly unique dual-layer brightness-boosting “sandwich” (harking back to my earlier display controller enhancements comments; note that a separate simultaneous external tethered display is still also supported). And reflective of the tablets’ high-end classification, Apple has rolled out corresponding “Pro” versions of its Magic Keyboard (adding a dedicated function-key row, along with a haptic feedback-enhanced larger trackpad):

And Pencil, adding “squeeze” support, haptic feedback of its own, and other enhancements:

Other notable inter- and intra-generational tweaks:

  • No more mmWave 5G support.
  • No more ultra-wide rear camera, either.
  • Physical SIM slots? Gone, too.
  • Ten-core CPU M4 SoCs are unique to the 1 TByte and 2 TByte iPad Pro variants; lower-capacity mass storage models get only 9 CPU cores (one less performance core, to be precise, although corresponding GPU core counts are interestingly per-product-variant unchanged this time). They’re also allocated only half the RAM of their bigger-SSD brethren: 8 GBytes vs 16 GBytes.
  • 1 and 2 TByte iPads are also the only ones offered a nano-texture glass option.

Given that Apple did no iPad family updates at all last year, this is an encouraging start to 2024. That said, the base 10th-generation iPad is still the same as when originally unveiled in October 2022, although it did get a price shave today (and its 9th-generation precursor is no longer in production, either). And the 6th-generation iPad mini introduced in September 2021 is still the latest-and-greatest, too. I’m admittedly more than a bit surprised and pleased that my unit purchased gently used off eBay last summer is still state-of-the-art!

iPad software holdbacks

And as for Apple’s ongoing push to make the iPad, and the iPad Pro specifically, a credible alternative to a full-blown computer? It’s a topic I first broached at length back in September 2018, and to at least some degree the situation hasn’t tangibly changed since then. Tablet hardware isn’t fundamentally what’s holding the concept back from becoming a meaningful reality, but then again, I’d argue that it never was the dominant shortcoming. It was, and largely remains, software; both the operating system and the applications that run on it. And I admittedly felt validated in my opinion here when I perused The Verge’s post-launch event livestream archive and saw it echoed there, too.

Sure, Apple just added some nice enhancements to its high-end multimedia-creation and editing tablet apps (along with their MacOS versions, I might add), but how many folks are really interested in editing multiple ProRes streams without proxies on a computer nowadays, let alone on an iPad? What about tangible improvements for the masses? Sure, you can use a mouse with an iPad now, but multitasking attempts still, in a word, suck. And iPadOS still doesn’t even support the basics, such as multi-user support. Then again, there’s always this year’s WWDC, taking place mid-next month, which I will of course once again be covering for EDN and y’all. Hope springs eternal, I guess. Until then, let me know your thoughts in the comments.

p.s…I realized just before pressing “send to Aalyia” that I hadn’t closed the loop on my earlier “building block mass-production availability delays” tease. My suspicion is that the new iPads were originally supposed to be unveiled alongside the new MacBook Airs back in March, in full virtual-event form. But in the spirit of “where there’s smoke, there’s fire”, I’m also guessing that longstanding rumors about OLED display volume-production delays (and/or second-generation 3 nm process volume-production delays) pushed the iPads to today.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


TSMC crunch heralds good days for advanced packaging

Wed, 05/08/2024 - 14:09

TSMC’s advanced packaging capacity is fully booked until 2025 due to hyper demand for large, powerful chips from cloud service giants like Amazon AWS, Microsoft, Google, and Meta. Nvidia and AMD are known to have secured TSMC’s chip-on-wafer-on-substrate (CoWoS) and system-on-integrated-chips (SoIC) capacity for advanced packaging.

Nvidia’s H100 chips—built on TSMC’s 4-nm process—use CoWoS packaging. On the other hand, AMD’s MI300 series accelerators, manufactured on TSMC’s 5-nm and 6-nm nodes, employ SoIC technology for the CPU and GPU combo before using CoWoS for high-bandwidth memory (HBM) integration.

Figure 1 CoWoS is a wafer-level system integration platform that offers a wide range of interposer sizes, HBM cubes, and package sizes. Source: TSMC

CoWoS is an advanced packaging technology that offers the advantage of larger package size and more I/O connections. It stacks chips and packages them onto a substrate to facilitate space, power consumption, and cost benefits.

SoIC, another advanced packaging technology created by TSMC, integrates active and passive chips into a new system-on-chip (SoC) architecture that is electrically identical to native SoC. It’s a 3D heterogeneous integration technology manufactured in front-end of line with known-good-die and offers advantages such as high bandwidth density and power efficiency.

TSMC is ramping up its advanced packaging capacity. It aims to triple the production of CoWoS-based wafers, producing 45,000 to 50,000 CoWoS-based units per month by the end of 2024. Likewise, it plans to double the capacity of SoIC-based wafers by the end of this year, manufacturing between 5,000 and 6,000 units a month. By 2025, TSMC wants to hit a monthly capacity of 10,000 SoIC wafers.

Figure 2 SoIC is fully compatible with advanced packaging technologies like CoWoS and InFO. Source: TSMC

Morgan Stanley analyst Charlie Chan has raised an interesting and valid question: How do companies like TSMC judge advanced packaging demand and allocate capacity accordingly? What’s the benchmark that TSMC uses for its advanced packaging customers?

Jeff Su, director of investor relations at TSMC, while answering Chan, acknowledged that the demand for advanced packaging is very strong and the capacity is very tight. He added that TSMC has more than doubled its advanced packaging capacity in 2024. Moreover, the mega-fab has leveraged its special relationships with OSATs to fulfill customer needs.

TSMC works closely with OSATs, including its Taiwan neighbor and the world’s largest IC packaging and testing company, ASE. TSMC chief C. C. Wei also mentioned during an earnings call that Amkor plans to build an advanced packaging and testing plant next to TSMC’s fab in Arizona. Then there is news circulating in trade media about TSMC planning to build an advanced packaging plant in Japan.

Advanced packaging is now an intrinsic part of the AI-driven computing revolution, and the rise of chiplets will only bolster its importance in the semiconductor ecosystem. TSMC’s frantic capacity upgrades and tie-ups with OSATs point to good days for advanced packaging technology.

TSMC’s archrivals Samsung and Intel Foundry will undoubtedly be watching this advanced-packaging supply-and-demand saga closely while recalibrating their respective strategies. We’ll continue covering this exciting aspect of the semiconductor industry’s makeover in the coming days.

Related Content


Double and invert 5 V to generate ±10 V using two generic chips and two bootstraps

Tue, 05/07/2024 - 17:15

Integration of analog circuitry with digital logic often requires the addition of an extra supply rail or two. The excellent PSRR of precision op-amps (typically >>100 dB) makes them unfussy about power rail variations. This eases the task of designing power supply circuitry that is uncomplicated and inexpensive.

Here’s a variation on the popular flying-capacitor charge-pump voltage converter motif that takes advantage of op-amp tolerance for less than perfect supply regulation. It first doubles and then inverts 5 V to generate nominally symmetrical positive and negative 10-volt rails which can each handily supply several milliamps. The complete converter consists of two inexpensive generic 20 volt-capable, metal-gate CMOS triple SPDT CD4053Bs, plus just eight passive components. Figure 1 shows the circuit.

Figure 1 A 25-kHz multivibrator (U2b) clocks flying-capacitor switches that first double 5 V to +10 V (paralleled U1a,c and U2a,c) and then invert it to -10 V (U1b and U2b).

Wow the engineering world with your unique design: Design Ideas Submission Guide

 Paralleled switches U1c and U2c, running at Fpump = 25 kHz, alternate the top end of “flying” capacitor C2 between ground and +5 V, while U1a and U2a synchronously alternate its bottom end between +5 V and +10 V, creating a voltage-doubling capacitive charge pump. The connection of the resulting 10-V rail on U1,2 pin 13 to U1,2 pin 16 implements the first “bootstrap” mentioned above, whereby the switches supply 10 V to themselves. D1 gets things rolling on power up by initially providing ~+5 V until the charge pump takes over, whereupon D1 is reverse biased and disconnects.

Doubling up on the U1,2a and U1,2c charge pump switches serves to halve the effective impedance of the +10 V output to ~180 Ω. This is important because the +10 V output powers not only the external load, but also the internal U1,2b voltage inverter (more on this later). Plus, these relatively high ON-resistance metal-gate CMOS switches need all the help they can get. The result is a fairly stiff +10 V output that droops with load current at 180 mV/mA, according to this expression:

V+ = 10 V - 180 × (I+ + I-)
Where:
I+ = +10-V output load current (in amps)
I- = -10-V output load current (in amps)

The 25-kHz pump clock is provided by a “merged” oscillator consisting of U2b driven by positive feedback from U2c through C1 and negative feedback through R1, generating:

Fpump = (2 ln(2) × R1C1)^-1

Pump frequency will vary somewhat with component tolerance and loading of the 10 V outputs, but since the clock frequency isn’t critical, any effect on pump performance will be insignificant.
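
Since the DI doesn't list R1 and C1 values, the short sketch below just evaluates the Fpump expression both ways: it solves for the R1C1 product that yields 25 kHz and then checks the frequency for one hypothetical R1/C1 pair.

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Fpump = 1 / (2 * ln(2) * R1 * C1) */
    const double f_target = 25e3;                    /* 25 kHz          */
    double rc = 1.0 / (2.0 * log(2.0) * f_target);   /* required R1*C1  */
    printf("Required R1*C1: %.1f us\n", 1e6 * rc);   /* ~28.9 us        */

    /* Hypothetical pick (not from the DI): R1 = 100 kohm, C1 = 290 pF */
    const double r1 = 100e3, c1 = 290e-12;
    printf("Fpump with 100 kohm / 290 pF: %.1f kHz\n",
           1e-3 / (2.0 * log(2.0) * r1 * c1));
    return 0;
}
```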

The resulting oscillator waveforms are sketched in Figure 2.

Figure 2 The 25-kHz multivibrator’s 10-Vpp waveshapes.

 Inversion of +10 V to produce -10 V is handled by U1,2b switching C4 between +10 V and ground on the left side and ground and -10 V on the right. The connection to pin 7 provides the second “bootstrap”. D2 clamps pin 7 near enough to ground for the switches to begin working at power-up until the charge pump takes over.

The result is a negative rail that reacts to loading according to this expression:

V- = -10 V + (430 × I- + 180 × I+)
Where:
I+ = +10-V output load current (in amps)
I- = -10-V output load current (in amps)

The dependence of the two output voltages on loading is graphically summarized in Figure 3.

Figure 3 Output voltages under four loading scenarios: (1) +10 V output with +10 V loaded 0 to 10 mA, -10 V unloaded; (2) +10 V output with both +10 V and -10 V loaded 0 to 10 mA equally; (3) -10 V output with -10 V loaded 0 to 10 mA, +10 V unloaded; (4) -10 V output with +10 V and -10 V loaded 0 to 10 mA equally.
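
For convenience, the two load-regulation expressions above can be wrapped in a small helper; the sketch below simply evaluates them as given (load currents in mA), with an arbitrary 5-mA load on each rail as the example, and adds no characterization data beyond the formulas.

```c
#include <stdio.h>

/* Evaluate the DI's load-regulation expressions (load currents in mA). */
static double v_pos(double i_pos_mA, double i_neg_mA)
{
    return 10.0 - 0.180 * (i_pos_mA + i_neg_mA);  /* 180 mV/mA droop */
}

static double v_neg(double i_pos_mA, double i_neg_mA)
{
    return -10.0 + (0.430 * i_neg_mA + 0.180 * i_pos_mA);
}

int main(void)
{
    double ip = 5.0, in = 5.0;  /* example: 5 mA on each rail */
    printf("V+ = %.2f V, V- = %.2f V\n", v_pos(ip, in), v_neg(ip, in));
    return 0;
}
```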

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


James Hitchcock at Tektronix explains the recent EA acquisition

Mon, 05/06/2024 - 16:54

An interview with James Hitchcock, general manager of Keithley Instruments, a Tektronix company, shed light on the recent acquisition of Elektro-Automatik (EA), a supplier of high-power electronic test solutions.

EA’s principal application space lies in energy storage, mobility, hydrogen, and renewable energy, where its bidirectional programmable DC power supplies can double up as both power supply and electronic load thanks to their regenerative capability. Many tests, including battery cycling and burn-in, require dumping large amounts of power as heat through passive/resistive load banks or electronic loads. On a large scale, handling that heat is a significant undertaking that may call for substantial HVAC capacity and even liquid cooling. Instead, EA power supplies transfer the energy back to the grid, recycling otherwise wasted energy and eliminating cooling costs (Figure 1).

Figure 1: The process of energy recovery for EA’s regenerative bidirectional programmable power supplies in a testing scenario connected with the unit under test (UUT). Source: EA, a Tektronix Company

The principal application space for many Tektronix instruments lies in signal integrity and precision high-frequency testing, with an offering of high-end mixed-signal oscilloscopes, signal generators, and spectrum analyzers. Keithley source measure units (SMUs) and precision measurement instruments offer solutions for semiconductor characterization and quality control. Outside of this, the MSO oscilloscopes and IsoVu probes are geared toward power electronics performance analysis. However, how does any of this mix with EA’s high-power test equipment portfolio?

Test solutions for the EV powertrain

“The primary motivation for acquiring EA and combining the solutions of Tektronix was focused around the battery emulation capabilities of EA and the applications focused on power inverters and motor drives primarily in the automotive space,” says James, “where the EA sources can test the batteries but also emulate them in the designs of the vehicles, and the Tektronix 4 and 5 series MSO scopes are well-suited for the AC signal analysis to drive the motor that is powered by these battery systems.” As shown in Figure 2, select EA power supplies can simulate a set of battery cells at a specific state of charge (SOC) in a few minutes. Typically, these tests involve hours of preparation, charging and discharging multiple batteries to different SOCs before beginning DUT validation.

Figure 2: The ability to both source and sink power enables EA’s power supplies to simulate battery behavior and accurately reproduce a battery’s voltage and current characteristics to test devices.

The Keithley data acquisition (DAQ) systems and digital multimeters (DMMs) have played a role in this space for many years, monitoring the temperature and voltage of the batteries in battery management systems (Figure 3). “So across the entire engineering workflow of designing the powertrain for an EV, the Tektronix-, Keithley-, and EA-branded products work together for a solution.”

Figure 3: Keithley DAQ systems have long been leveraged in environmental monitoring, burn-in/accelerated life testing, as well as failure analysis for automotive applications. Source: Keithley, a Tektronix company

Power inverter and fuel cell testing

“There are other opportunities in power inverters in renewables, especially converting voltage from the DC side with solar panels to AC,” says James. The testing space expands beyond this with fuel cell testing for heavier-duty electric mobility solutions such as large trucks, construction equipment, trains, and boats. Fuel cells are also increasingly used in energy security, providing a backup source of power in the event of a blackout or brownout. “This is an area that EA is very good at, and Tektronix can get involved in designing the precision electronics needed to control this type of testing.”

A gap in the market for a unified testing solution

“Our Keithley source measurement units (SMUs) are well-suited to individual cell design,” says James. “Our sourcing capabilities with our SMUs stop at about 5 kW of power (Figure 4). We have a 300 V solution and several-hundred-amp pulsing solutions with our SMUs, and we found engineers were moving to higher powers with the evolution of new battery chemistries, new drive trains, and motors.”

Figure 4: The Keithley 2650 series SMU is a high-power instrument designed for characterizing high-power electronics such as diodes, FETs, IGBTs, etc., with up to 3,000 V or 2,000 W of pulsed power. Source: Keithley, a Tektronix company

Tektronix intends to support this trend of moving to higher-voltage electrification systems in EVs and more energy-dense battery chemistries to reach parity with internal combustion engine (ICE) vehicles: “there was a gap in the market where the suppliers were offering the power solutions or the measurement solutions, but no one was really offering the full capability to serve the engineer across that full power portfolio.”

In the near term, Tektronix intends to bring the EA products into their software umbrella, providing unified testing solutions for engineers across the power spectrum from low-power embedded IoT designs to ultra-high power energy storage, mobility, and hydrogen fuel applications.

Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for eight years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in EE journals and trade magazines.

Related Content


The post James Hitchcock at Tektronix explains the recent EA acquisition appeared first on EDN.

Why Synopsys wants to sell its application security testing business

Mon, 05/06/2024 - 09:43

Nearly a month after Synopsys snapped up security IP supplier Intrinsic ID, the Silicon Valley-based firm is reported to have moved closer to selling its software integrity group (SIG), which specializes in application security testing for software developers.

A Reuters report published last week claims that a private equity consortium led by Clearlake Capital and Francisco Partners is in advanced talks to acquire the SIG unit for more than $2 billion, and the deal is anticipated to be announced as early as this week. Synopsys telegraphed its intention to divest the security software business late last year.

The acquisition as well as the divestiture activities bear a strong imprint of Sassine Ghazi’s vision for the company’s future roadmap. Source: Yahoo Finance

Synopsys CEO Sassine Ghazi told the press in March 2024 that around three dozen buyers had shown interest in the SIG unit and that the company was narrowing the list of potential suitors down to half a dozen. The Synopsys board has already approved the initiation of the sale process for the SIG unit.

Synopsys has significantly grown its application security testing business since acquiring software testing firm Coverity in 2014. The next year, it scooped up software security vendor Codenomicon, followed by the acquisition of open-source security vendor Black Duck Software in December 2017.

In June 2021, Synopsys snapped up application security risk management firm Code Dx, and a year later, it acquired WhiteHat Security to offer automated protection for web applications in production environments. So, while Synopsys has significantly grown its application security testing business over the years and is one of the key players in this market, why does it want to sell it now?

First, it’s a highly competitive market, and Synopsys has seen its profit margins steadily decline over the past few years. Second, and more importantly, Synopsys is streamlining its focus on its EDA and IP businesses, so a move away from the application security business seems logical in that context.

A few months before acquiring Intrinsic ID, whose IP for physical unclonable functions (PUFs) is incorporated into system-on-chip (SoC) designs for security capabilities like identification, Synopsys made waves by announcing its acquisition of Ansys, an outfit hyper-focused on simulation software. That acquisition is expected to extend Synopsys’ core EDA business into several growing adjacent markets.

With the Ansys and Intrinsic ID acquisitions coming within a single quarter, there were vibes that this EDA firm was on its way to becoming an industry giant. However, the news about the SIG unit’s potential sale shows that the $79 billion company has a well-thought-out plan in which the EDA and IP businesses will likely define its future roadmap.

“We believe there’s a higher return on investment in the 90% of our portfolio spread between the design automation and design IP business segments,” Ghazi told investors in November 2023. The company’s software service businesses, like application security testing, clearly fall in the remaining 10%, and buyout firms will be taking a closer look at such businesses in 2024.

Related Content


The post Why Synopsys wants to sell its application security testing business appeared first on EDN.

3 basic considerations for vibration control in chip manufacturing

Fri, 05/03/2024 - 10:09

Uncontrolled vibration can damage semiconductors and degrade their performance. Semiconductor manufacturers contend with many sources of vibration, including people’s footsteps, running machines, wind buffeting the building, and passing vehicles. These sources can pose a significant challenge for design and manufacturing engineers.

Working in environments with poorly controlled vibration can mean these professionals waste time and raw materials while designing and manufacturing new components or improving existing ones. What aspects of vibration control should engineers consider?

  1. Facilities for vibration control

People involved with semiconductor manufacturing facilities under construction should be proactive and insist that those buildings have appropriate vibration controls. That was the approach of the design team associated with a $279 million project for a three-story semiconductor research lab.

The designers knew even tiny vibrations could negatively impact a semiconductor’s performance, potentially delaying or complicating research and manufacturing. Similarly, they recognized that the new facility must have contamination-mitigation features.

For instance, the building must have a clean room with a vibration-isolated floor. While working with those overseeing the construction details, the design professionals created a set of specifications covering their vibration-dampening and contamination-prevention needs.

Designers considering temporarily or permanently working at existing semiconductor facilities should ask which vibration-control measures those buildings have and ensure they reflect industry standards. That proactive step helps designers choose facilities where their time will be well spent.

  2. Specialized products to interrupt and absorb vibration

Semiconductor manufacturing plants must have integrated products that absorb incoming vibrational energy and dampen external vibration sources. For example, a company may need to put thousands of spring mounts inside pipes and ductwork. However, the size and placement of the required spring mounts vary depending on the length and diameter of the building’s infrastructure.

It’s also often necessary to suspend pipes and ductwork from acoustic hangers after wrapping them in special housing. Some semiconductor facilities also have pipe connectors designed for specific types of vibration.

Those overseeing the construction or upgrading of a semiconductor fabrication facility should familiarize themselves with the off-the-shelf and custom-made products available to meet such needs. It’s also wise to get input from at least one consultant about how best to dampen the known or suspected types of vibration that will affect a fab.

  3. Install sensors to measure machine conditions

When electronics product designers evaluate the requirements of new items, they must consider whether such components could be manufactured on a facility’s existing equipment. Another thing to verify is whether the fab’s infrastructure has sensors to detect abnormal vibrations.

Due to the semiconductor industry’s heavy dependence on water during manufacturing, a pump failure could be an extremely costly and disruptive problem. Rotor pumps spin as fast as 30,000 rotations per minute and vibrate more when rotor damage occurs. This issue generally requires a total pump replacement.

Advanced sensors can measure tiny changes—such as progressively increasing vibration—and warn technicians that failures will happen soon. Such information allows fab professionals to order new parts or schedule service calls before outages occur. Decision makers could also use these sensors as vibration monitoring tools and act quickly to mitigate new issues.
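In software terms, that early warning can be as simple as tracking a windowed RMS of the accelerometer signal and flagging a sustained rise. The sketch below is purely illustrative, with arbitrary window sizes and thresholds rather than any particular vendor’s algorithm.

```python
# Illustrative condition-monitoring sketch: windowed RMS of accelerometer samples,
# flagged when the recent average rises well above the earlier baseline.
import math

def window_rms(samples):
    """RMS of one acquisition window of acceleration samples."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def vibration_trending_up(rms_history, span=10, factor=1.5):
    """True when the mean of the last `span` windows exceeds the first `span` by `factor`."""
    if len(rms_history) < 2 * span:
        return False  # not enough history to judge a trend yet
    baseline = sum(rms_history[:span]) / span
    recent = sum(rms_history[-span:]) / span
    return recent > factor * baseline
```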

Vibration control is essential

Poor or non-existent vibration-control measures in a semiconductor plant affect both manufacturers and design team members. The strategic measures described above can reduce or eliminate problems, helping everyone stay productive and get the best results from their work.

Ellie Gabel is a freelance writer as well as an associate editor at Revolutionized.

Related Content


The post 3 basic considerations for vibration control in chip manufacturing appeared first on EDN.

Keysight hones post-quantum algorithm testing

Fri, 05/03/2024 - 01:49

Keysight announced additional testing capabilities for its Inspector security platform to assess the robustness of post-quantum cryptography (PQC). Keysight Inspector, part of the recent Riscure Security Solutions acquisition, enables device and chip vendors to identify and fix hardware vulnerabilities during the design cycle.

The development of PQC encryption algorithms capable of withstanding quantum computer attack is crucial for protecting sensitive electronic information. However, new technologies assumed to be resilient against post-quantum threats may be vulnerable to existing hardware-based attacks. To tackle this issue, Keysight has added post-quantum algorithm testing to the Inspector device security platform.

Inspector can now be used to test implementations of the CRYSTALS-Dilithium digital signature algorithm, one of the encryption algorithms selected by NIST for PQC standardization. Hardware designers adopting this algorithm will be able to verify that products are secure against these threats. Government institutions and security test labs can also use Inspector to verify the strength of third-party products.

With ongoing standardization, many more new security algorithms will become available for multiple applications and industries. Ensuring their effectiveness demands verifiable implementations. Keysight will furnish the requisite test tools alongside certification services via Inspector.

To read more about Inspector and Riscure Security Solutions by Keysight, click here.

Keysight Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Keysight hones post-quantum algorithm testing appeared first on EDN.

High-side switch suits automotive loads

Fri, 05/03/2024 - 01:49

HMI’s HL8518 is a single-channel high-side power switch for automotive low-watt lamps, high-side relays and valves, and other general loads. The device integrates a power FET and charge pump, providing a typical on-resistance of 80 mΩ.

The HL8518 operates from 3.5 V to 40 V and provides 3-V/5-V compatible logic inputs. Current limiting is programmable via an external resistor. AEC-Q100 Grade 1 qualified, the switch operates over a temperature range of -40°C to +125°C and has a low standby current of <0.5 µA.

Protection functions of the HL8518 include overvoltage, short-circuit, undervoltage lockout, thermal shutdown, and reverse battery. When tested in accordance with AEC-Q100-12, the power switch achieved Class A certification by enduring 1 million short circuits to ground.

Samples of the HL8518 high-side switch can be ordered online.

HL8518 product page

HMI

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post High-side switch suits automotive loads appeared first on EDN.

32-bit MCUs embed high level of security

Fri, 05/03/2024 - 01:49

Powered by an Arm Cortex-M33 core, Microchip’s PIC32CK 32-bit MCUs leverage both a hardware security module (HSM) and Arm’s TrustZone security architecture. This level of embedded security enables designers to meet upcoming cybersecurity compliance requirements set to take effect in 2024 for most IoT-connected devices.

The HSM subsystem of these mid-range MCUs integrates a dedicated CPU, memory, secure DMA controllers, cryptographic accelerators, and firewalled communications with the host. It provides symmetric and asymmetric cryptographic operations, true random number generation, key management, and authentication for automotive, industrial, medical, and communication applications. TrustZone, a hardware-based secure privilege environment, provides an additional layer of protection for key software functions.

PIC32CK microcontrollers support the ISO 26262 functional safety and ISO/SAE 21434 cybersecurity standards. Devices offer a range of options to tune the level of security, memory, and connectivity bandwidth. They furnish up to 2 Mbytes of dual-panel flash with ECC and 512 kbytes of SRAM. Connectivity options include 10/100-Mbps Ethernet, CAN FD, and USB.

The PIC32CK family is available now for purchase in high-volume production quantities.

PIC32CK product page

Microchip Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post 32-bit MCUs embed high level of security appeared first on EDN.

USB signal conditioner is self-adapting

Fri, 05/03/2024 - 01:45

Operating from a supply voltage down to 2.3 V, the PI5USB212 signal conditioner IC from Diodes automatically detects a USB 2.0 high-speed connection. The part, which is intended for use in PCs, docking stations, cable extenders, and monitors, preserves signal integrity when driving long PCB traces or cables extending up to 5 meters.

The PI5USB212 symmetrically boosts the USB D+ and D- channels to maintain common-mode stability. It also applies pre-emphasis to compensate for intersymbol interference (ISI). The IC’s wide supply range of 2.3 V to 5.5 V simplifies system design and extends the operating window of portable and battery-powered equipment.

To conserve energy, the PI5USB212 automatically detects when a USB device is not attached and reduces its supply current to just 0.7 mA typical. When the IC is disabled via the RSTN pin, it minimizes current consumption to just 13 µA typical. In addition to USB 2.0, the PI5USB212 is compatible with the USB On-The-Go (OTG 2.0) and Battery Charging (BC 1.2) specifications.

Housed in a 12-pin, 1.6×1.6-mm QFN package, the PI5USB212 signal conditioner costs $2.70 each in lots of 3500 units.

PI5USB212 product page

Diodes

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post USB signal conditioner is self-adapting appeared first on EDN.

Inline power sensor covers low frequencies

Fri, 05/03/2024 - 01:45

The MA24103A inline power sensor from Anritsu performs peak power measurements from 25 MHz to 1 GHz with a power range of 2 mW to 150 W. A dual-path architecture enables the sensor to carry out true-RMS measurements over the entire frequency and power range. This bidirectional plug-and-play device joins the company’s existing MA24105A peak power sensor, which has a frequency range of 350 MHz to 4 GHz.

Critical markets that require peak and average power measurements well below the 1-GHz range, such as public safety, avionics, and railroads, demand reliable communication between control centers and vehicles. Lower frequencies can propagate a longer distance and maintain communication with fast-moving vehicles. Typically, at these lower frequencies, transmitting signals operate within the watt range, making the MA24103A particularly suitable for such applications.

The MA24103A inline peak power sensor communicates with a PC via a USB connection and comes with PowerXpert data analysis and control software. It also works with Anritsu handheld instruments equipped with optional high-accuracy power meter software (Option 19).

MA2410xA product page 

Anritsu

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Inline power sensor covers low frequencies appeared first on EDN.

ISM bands and frequencies: Comparisons and case studies

Thu, 05/02/2024 - 18:25

The industrial, scientific, and medical (ISM) radio frequency bands find common use in electronics systems, by virtue of their comparatively lightly regulated nature versus (for example) spectrum swaths used by cellular, satellite, and terrestrial radio and television networks. As Wikipedia explains:

The ISM radio bands are portions of the radio spectrum reserved internationally for industrial, scientific, and medical (ISM) purposes, excluding applications in telecommunications. Examples of applications for the use of radio frequency (RF) energy in these bands include RF heating, microwave ovens, and medical diathermy machines. The powerful emissions of these devices can create electromagnetic interference and disrupt radio communication using the same frequency, so these devices are limited to certain bands of frequencies. In general, communications equipment operating in ISM bands must tolerate any interference generated by ISM applications, and users have no regulatory protection from ISM device operation in these bands.

 Despite the intent of the original allocations, in recent years the fastest-growing use of these bands has been for short-range, low-power wireless communications systems, since these bands are often approved for such devices, which can be used without a government license, as would otherwise be required for transmitters; ISM frequencies are often chosen for this purpose as they already must tolerate interference issues. Cordless phones, Bluetooth devices, near-field communication (NFC) devices, garage door openers, baby monitors, and wireless computer networks (Wi-Fi) may all use the ISM frequencies, although these low-power transmitters are not considered to be ISM devices.

FCC certification of such products is still necessary, of course, to ensure that a given device doesn’t stray beyond a given ISM band’s lower and upper frequency boundaries, for example, or exceed broadcast power limits. That said, reiterating my first-paragraph point, the key appeal of ISM band usage lies in its no-license-required nature. Plenty of products, including those listed in the earlier Wikipedia description along with, for example, the snowblower-mangled “fob” for my Volvo’s remote keyless system that I finished dissecting two years ago, leverage one-to-multiple ISM bands; Wikipedia lists twelve total defined and regulated by the ITU, some usable worldwide, others only in certain regions.

Probably the most common (discussed, at least, if not also used) ISM bands nowadays are the so-called “2.4 GHz” (strictly speaking, it should be 2.45 GHz, reflective of the center frequency) that spans 2.4 GHz to 2.5 GHz, and “5 GHz” (an even less accurate moniker) that ranges from 5.725 GHz to 5.875 GHz. And echoing the earlier Wikipedia quote that “in recent years the fastest-growing use of these bands has been for short-range, low-power wireless communications systems”, among the most common applications of those two ISM bands nowadays are Bluetooth (2.4 GHz) and Wi-Fi (both 2.4 GHz and 5 GHz, more recently further expanding into the non-ISM “5.9 GHz” and “6 GHz” band options). This reality is reflected in the products and broader topics that I regularly showcase in my blog posts and teardowns.

However, while hearing the words “Bluetooth” and “Wi-Fi” might automatically bring to mind things like:

  • Smartphones
  • Tablets
  • Computers and
  • Speakers

I’m increasingly encountering plenty of other wirelessly communicating widgets that also abide in one or both of these bands. Some of them also use Bluetooth and/or Wi-Fi, whether because they need to interact with Bluetooth- and Wi-Fi-based devices (a wireless HDMI transmitter that leverages a smartphone or tablet as its associated receiver-and-display, for example) or more generally because high-volume industry-standard chips and software tend to be cost-effective (not to mention stable, feature-rich and otherwise mature) versus proprietary alternatives. But others do take the proprietary route, even if just from a “handshake” protocol standpoint.

In the remainder of this post, I’ll showcase a few case study examples of the latter that I’ve personally acquired. Before I dive in, however, here are a few thoughts on why a manufacturer might go down either the 2.4 GHz or 5 GHz (or both) development path. Generally speaking…

2.4 GHz is, all other factors being equal:

  • Longer range (open-air)
  • Comparatively immune to (non-RF) environmental attenuation factors such as chicken wire in walls and the like, and
  • Lower power-consuming

but is also:

  • Lower-bandwidth and longer-latency, and
  • (For Wi-Fi uses) offers fewer non-spectrum-overlapping broadcast channel options

Unsurprisingly, 5 GHz is (simplistically, at least) the mirror image of its 2.4 GHz ISM sibling:

  • Higher bandwidth (especially with modern modulation schemes) and lower latency, and
  • (For Wi-Fi) many more non-overlapping channels (a historical advantage that’s, however, increasingly diminished by modern protocols’ support for multichannel bonding)

but:

  • Shorter range (see the quick path-loss comparison after this list)
  • Greater attenuation by (non-RF) environmental factors, and
  • Higher power-consuming
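To put the open-air range generality in rough numeric terms, here is a free-space path loss comparison (a simplified sketch: walls, multipath, and antenna-gain differences are ignored, and 2.45 GHz and 5.8 GHz are used as representative band centers):

```python
# Back-of-the-envelope free-space path loss (FSPL) comparison between the two bands.
import math

C = 3.0e8  # speed of light, m/s

def fspl_db(freq_hz, dist_m):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / C)

for f in (2.45e9, 5.8e9):
    print(f"{f / 1e9:.2f} GHz at 30 m: {fspl_db(f, 30):.1f} dB")
# ~70 dB vs. ~77 dB: with the same link budget, the 2.4 GHz signal reaches a bit
# more than twice the free-space distance of its 5.8 GHz counterpart.
```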

Again, I’ll reiterate that these comparisons are with “all other factors being equal”. 5 GHz Wi-Fi, for example, is receiving the bulk of industry development attention nowadays versus its 2.4 GHz precursor, so the legacy power consumption differences between them are increasingly moot (if not reversed). And environmental attenuation effects can to at least some degree be counterbalanced by more exotic MIMO antenna (and associated transmitter and receiver) designs along with mesh LAN topologies. With those generalities and qualifiers (along with others of both flavors that I may have overlooked; chime in, readers) documented, let’s dive in.

Wireless multi-camera flash setups

One of last month’s teardowns was of Godox’s V1 flash unit, which supports the company’s “X” wireless communication protocol, optionally acting as either a master (for other receivers and/or flashes configured as slaves) or slave (to another transmitter or master-configured flash):

In that writeup, I also mentioned Neewer’s conceptually similar, albeit protocol-incompatible Z1 flash unit and its “Q” wireless scheme:

And a year back I covered now-defunct Cactus and its own unique wireless sync approach:

All three schemes are 2.4 GHz-based but proprietary in implementation. Candidly, I’m somewhat surprised, given the limited data payload seemingly required in this application, that even longer-range 900 MHz wasn’t used instead. Then again, the limitations of camera optics and artificial illumination intensity-vs-distance may “cap” the upper-end range requirement, and comparative latency might also factor into the 2.4 GHz-vs-900 MHz selection.

Wireless HDMI transmitter and receiver

Vention’s compact system, which I purchased from Amazon at the beginning of the year, has found a permanent place in my travel satchel. The Amazon product page mentions both 2.4 and 5 GHz compatibility, but I think that’s a typo: Vention’s literature documents (and promotes, versus the company-positioned inferior 2.4 GHz alternative) only 5 GHz support, and the FCC certification records (FCC ID: 2A7Z4-ADC) also document only 5 GHz capabilities. The perhaps-obvious touted 5 GHz advantages are resolution (1080p max) and frame rate (60 fps), along with decent range: up to 131 feet (40 m), but only “in interference-free environments”, along with a further qualifier that “range is reduced to 32FT/10M when transmitting through walls or floors.” Regardless, since this is a “closed loop” (potentially multiple) transmitter to receiver setup, Wi-Fi compatibility isn’t necessary.

Wireless video-capture monitoring systems

Accsoon and Zhiyun’s approaches to wirelessly connecting a camera’s external video output to a remote monitor, which I previously covered back in July of last year, are conceptually similar but notably vary in implementation. The two Accsoon “mainstream” units I own are designed to solely stream to a remote smartphone or tablet and are therefore 2.4 GHz Wi-Fi-based, generating a Wi-Fi Direct-like beacon to which the mobile device connects. That said, Accsoon also sells a series of CineEye “Pro” models that come as transmitter-plus-dedicated-receiver sets and support both 2.4 GHz and 5 GHz transmission capabilities.

Zhiyun’s TransMount gear is intended to be used with the company’s line of gimbals, and like Accsoon’s hardware you can also “tune into” a transmitter directly from a smartphone or tablet using a company-developed Android or iOS app. That said, Zhiyun also sells a dedicated receiver to which you can connect a standalone HDMI field monitor. And for peak potential image quality (at a range tradeoff), everything runs only on 5 GHz Wi-Fi.

Wireless lavalier microphone sets

I got the Aikela set from Amazon last spring, and the Hollyland system (the Lark 150, to be exact) off eBay a month earlier. Both, as you have probably already discerned from the photos, are two-transmitter (max)/single-receiver setups. The Hollyland is the more professional-featured of the two, among other things supporting both built-in and external-tethered mics for the transmitters; that said, the Aikela receiver has integrated analog and both digital Lightning and USB-C output options…which is why I own both setups. They’re both 2.4 GHz-based and leverage proprietary communication schemes. Newer wireless lav models, such as DJI’s Mic 2, can also direct-transmit audio to a smartphone, tablet or other receiver over Bluetooth.

Joyo wireless XLR transmitter/receiver combo

I picked up two sets of these from Amazon last summer. As the image hopefully communicates effectively, they aren’t full-blown microphone setups per se; instead, they take the place of an XLR cable, with the transmitter mated to the XLR output of a microphone (or other audio-generating device) and the receiver connected to the mixing board, etc. The big surprise here, at least to me, is that unlike the previous 2.4 GHz mic sets, these are 5 GHz-based.

Clearly, as the earlier microphone-set examples exemplify, audio doesn’t represent a particularly large data payload, and any lip sync loss due to latency will be minimal at worst (and can be further time sync-corrected in post-production; that is, if you’re not live-streaming).

Perhaps the developer was assuming that multiple sets of these would be in simultaneous use by a band, for vocals and/or instruments, and wanted plenty of spectrum to play with (each transmitter/receiver combo is uniquely configurable to one of four possible channels). And/or perhaps the goal was to avoid interference with other 2.4 GHz broadcasters (such as a microwave oven backstage). All at a potential broadcast range tradeoff versus 2.4 GHz, of course.

Wireless guitar systems

I got the Amazon Basics setup last summer, and the Leapture RT10 (also from Amazon) last fall. Why both, especially considering the voluminous dust currently collecting on my guitars? The on-sale prices, only ~$30 in both cases, were hard to resist. I figured I could just do a teardown on at least one of them. And hope springs eternal that I’ll eventually blow the dust off my guitars. Both are 2.4 GHz-based; the Leapture setup also offers Bluetooth streaming support.

CPAP (continuous positive airway pressure) machine

Last, but not least, and breaking the to-this-point consistent cadence of multimedia-tailored case studies, there’s my Philips Respironics DreamStation Auto CPAP (living at altitude can have some unique accompanying health challenges). Every morning, I download the previous night’s captured sleep data to my iPad over Bluetooth. Bluetooth Low Energy (LE), to be exact, for reasons that aren’t even remotely clear to me. The machine is AC-powered, after all, not battery-operated. And that the DreamStation doesn’t use conventional Bluetooth connectivity only acts as a potential further complication to initial pairing and ongoing communication. Then again, I suppose Bluetooth connectivity is among the least of Philips’ challenges right now…

Connect with me, wired or wirelessly

As always, I welcome your thoughts on anything I’ve written here, and/or any additional case studies you’d like to share, in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post ISM bands and frequencies: Comparisons and case studies appeared first on EDN.

Non-linear pullup for multi-rate I2C buses

Wed, 05/01/2024 - 17:12

I2C is a popular bidirectional serial communications bus with a clock line and a data line. Both lines’ drivers consist of an open-drain, ground-referenced N-channel MOSFET with a pullup resistor connected to a supply ranging from 1.8 V to 5 V. The pullup resistor must be small enough to meet certain timing requirements in the presence of significant bus capacitance, but large enough that the current limit of the surprisingly weak active driver (specified to drop less than 0.4 V at 3 mA for standard mode and less than 0.6 V at 6 mA for fast-mode speeds) is not exceeded and that the logic low levels are met. Meeting both needs can be a challenge.
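To see how the two constraints collide, here is a quick numeric sketch for a standard-mode, 1.8-V bus. The 0.8473 factor is ln(0.7/0.3), reflecting the 30%-to-70% rise-time definition of an RC charge; the bus capacitance is an assumed example value, not a recommendation.

```python
# Minimal sketch of the competing pullup constraints on a standard-mode, 1.8 V bus.
# Example values only; check the I2C specification limits for your mode and supply.
VDD = 1.8       # bus supply, V
VOL = 0.4       # max low-level output voltage at IOL (standard mode), V
IOL = 3e-3      # sink current at which VOL is specified (standard mode), A
CB  = 100e-12   # assumed bus capacitance, F
TR  = 1000e-9   # max allowed rise time (standard mode), s

r_min = (VDD - VOL) / IOL    # any smaller and the weak driver cannot reach VOL
r_max = TR / (0.8473 * CB)   # 30%-to-70% rise through an RC: ln(0.7/0.3) = 0.8473

print(f"Pullup window: {r_min:.0f} ohm to {r_max / 1e3:.1f} kohm")
```

With a larger bus capacitance or a faster mode, r_max collapses toward r_min, which is exactly the squeeze the following circuits are meant to relieve.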

Figure 44 in section 7.2.4 of the UM10204 I2C-bus specification and user manual presents a method of amelioration (Figure 1).

Figure 1: Switched-pullup circuit where the analog switch is activated at high bus voltages only, paralleling an additional resistor with the standard pullup. Source: NXP

Wow the engineering world with your unique design: Design Ideas Submission Guide

An analog switch is activated at the higher bus voltages only, paralleling an additional resistor with the standard pullup. This reduces rise time without raising the driver’s achievable logic low level. But when the driver is activated, the amount of improvement is limited by the presence of the additional resistor at the higher voltages—too small an additional pullup, and the allowed driver current will be exceeded, and the required logic low level will not be met. A better approach would be to connect the additional resistor only when the signal is rising, that is, when the driver is off. The driver would then not be fighting the additional pullup, which accordingly could be made extremely small. This is the approach taken with the following circuit.

In Figure 2, comparators U1 and U2 are set to switch at the logic low and high thresholds of a typical 1.8V I2C bus.

Figure 2 A schematic of simulated I2C drivers, pullup resistors and bus capacitances, without (old) and with (new) connection to the autonomous non-linear pullup circuit.

When the driver turns off and releases the signal “new” from a logic low, that signal rises through the low threshold. This produces an acceptably propagation-delayed positive output transition of U1, which clocks the 1Q output of D flip-flop U3 to a logic high. That activates U4, switching R5 in parallel with the standard pullup R6 and greatly reducing the rise time. As the signal rises through the logic high level, the output of U2 transitions to a logic low, clearing the 1Q output of U3, deactivating U4, and disconnecting R5. (In this instance, the propagation delay is welcome. U2’s delay allows the signal time to reach 1.8 V, courtesy of the additional pullup.) The circuit is now ready for the driver’s next activation, which will happen without it having to fight R5. Until activation, the circuit draws negligible current. Figure 3 shows the reduced rise time of the “new” circuit in comparison to that of the “old”, both having the same bus capacitance and the same standard pullup. The 100 pF used here is only 25% of the maximum bus capacitance specified for I2C operation.

Figure 3 A comparison of the performance of the standard (old) and enhanced (new) I2C bus signals. The signals CLR, CLK, and Q, which swing between ground and +3.3 V, are shown scaled for clarity.
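For readers without the simulation files, a rough behavioral model reproduces the trend in Figure 3. It is not a SPICE model of Figure 2; it simply charges the bus capacitance through the standard pullup, with a small boost resistor paralleled in only while the line sits between the 0.3 × VDD and 0.7 × VDD thresholds. The resistor values are assumptions chosen for illustration.

```python
# Rough behavioral model of the released bus line: RC rise toward VDD, with a small
# boost resistor paralleled in only between the logic-low and logic-high thresholds.
# R_STD and R_BOOST are assumed illustration values, not the article's components.
VDD, CB = 1.8, 100e-12
R_STD, R_BOOST = 10e3, 470.0
VIH, VIL = 0.7 * VDD, 0.3 * VDD   # standard I2C logic thresholds

def rise_time_30_70(boosted, dt=0.1e-9, t_max=5e-6):
    """Return the 30%-to-70% rise time of the line after the driver releases it."""
    v = t = 0.0
    t30 = None
    while t < t_max:
        r = R_STD
        if boosted and VIL <= v < VIH:               # boost engaged between thresholds
            r = R_STD * R_BOOST / (R_STD + R_BOOST)  # parallel combination
        v += (VDD - v) * dt / (r * CB)               # one Euler step of the RC charge
        t += dt
        if t30 is None and v >= VIL:
            t30 = t
        if v >= VIH:
            return t - t30
    return None

print(f"old: {rise_time_30_70(False) * 1e9:.0f} ns, new: {rise_time_30_70(True) * 1e9:.0f} ns")
```

With these numbers the boosted 30%-to-70% rise is roughly twenty times faster, and because the boost disconnects above the high threshold (and whenever the driver is on), the low-side current budget is untouched.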

Although 1.8 V is a popular bus voltage (especially for smart battery ICs), I was unable to find suitably fast, adequately low-supply-current comparators that can be powered from this voltage. Fortunately, 3.3 V is generally available in products with 1.8-V buses, and an analog switch serves admirably to bridge the gap between the two supplies. If the bus runs at 3.3 V, the analog switch can be replaced with a PNP transistor whose emitter is connected to the bus’s supply and whose base is driven through a 3.3-kΩ resistor. In the unlikely event of a 5-V bus, 5 V can be connected to the PNP’s emitter, but a 5-V-supply-capable D flip-flop will need to be found to replace U3.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

 Related Content


The post Non-linear pullup for multi-rate I2C buses appeared first on EDN.
