Feed aggregator

2×AA/USB: OK!

EDN Network - Thu, 05/09/2024 - 16:20

While an internal, rechargeable lithium battery is usually the best solution for portable kit nowadays, there are still times when using replaceable cells with an external power option, probably from a USB source, is more appropriate. This DI shows ways of optimizing this.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The usual way of combining power sources is to parallel them, with a series diode for each. That is fine if the voltages match and some loss of effective battery capacity, owing to a diode’s voltage drop, can be tolerated. Let’s assume the kit in question is something small and hand-held or pocketable, probably using a microcontroller like a PIC, with a battery comprising two AA cells, the option of an external 5 V supply, and a step-up converter producing a 3.3 V internal power rail. Simple steering diodes used here would give a voltage mismatch for the external power while wasting 10 or 20% of the battery’s capacity.

Figure 1 shows a much better way of implementing things. The external power is pre-regulated to avoid the mismatch, while active switching minimizes battery losses. I have used this scheme in both one-offs and production units, and always to good effect.

Figure 1 Pre-regulation of an external supply is combined with an almost lossless switch in series with the battery, which maximizes its life.

The battery feed is controlled by Q1, which is a reversed p-MOSFET. U1 drops any incoming voltage down to 3.3 V. Without external power, Q1’s gate is more negative than its source, so it is firmly on, and (almost) the full battery voltage appears across C3 to feed the boost converter. Q2’s emitter–base diode stops any current flowing back into U1. Apart from the internal drain–source or body diode, MOSFETs are almost symmetrical in their main characteristics, which allows this reversed operation.

When external power is present, Q1.G will be biased to 3.3 V, switching it off and effectively disconnecting the battery. Q2 is now driven into saturation, connecting U1’s 3.3 V output, less Q2’s saturated forward voltage of 100–200 mV, to the boost converter. (The 2N2222, as shown, has a lower VSAT than many other types.) Note that Q2’s base current isn’t wasted, but just adds to the boost converter’s power feed. Using a diode to isolate U1 would incur a greater voltage drop, which could cause problems: new, top-quality AA manganese alkaline (MnAlk) cells can have an off-load voltage well over 1.6 V, and if the voltage across C3 were much less than 3 V, they could discharge through the MOSFET’s inherent drain–source or body diode. This arrangement avoids any such problems.
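To put rough numbers on that margin (assuming a typical 150 mV for Q2’s saturation voltage and roughly 0.5 V for the body diode’s forward drop, neither of which is guaranteed): with Q2 in saturation, C3 sits at about 3.3 V − 0.15 V ≈ 3.15 V, so the battery would have to exceed roughly 3.15 V + 0.5 V ≈ 3.65 V, or over 1.8 V per cell, before its body-diode path could conduct. Swap Q2 for an ordinary silicon diode dropping about 0.65 V and C3 falls to around 2.65 V, bringing that threshold down to roughly 3.15 V, which a fresh pair of cells at 1.6 V or more apiece can indeed reach.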

Reversed MOSFETs have been used to give battery-reversal protection for many years, and of course such protection is inherent in these circuits. The body diode also provides a secondary path for current from the battery if Q1 is not fully on, as in the few microseconds after external power is disconnected.

Figure 1 shows U1 as an LM1117-3.3 or similar type, but many more modern regulators allow a better solution because their outputs appear as open circuits when they are unpowered, rather than allowing reverse current to flow from their outputs to ground. Figure 2 shows this implementation.

Figure 2 Using more recent designs of regulator means that Q2 is no longer necessary.

Now the regulator’s output can be connected directly to C3 and the boost converter. Some devices also have an internal switch which completely isolates the output, and D1 can then be omitted. Regulators like these could in principle feed into the final 3.3 V rail directly, but this can actually complicate matters because the boost converter would then also need to be reverse-proof and might itself need to be turned off. R2 is now used to bias Q1 off when external power is present.

If we assume that the kit uses a microcontroller, we can easily monitor the PSU’s operation. R5—included purely for safety’s sake—lets the microcontroller check for the presence of external power, while R3 and R4 allow it to measure the battery voltage accurately. Their values, calculated on the assumption that we use an 8-bit A–D conversion with a 3.3 V reference, give a resolution of 10 mV/count, or 5 mV per cell. Placing them directly across the battery loads it with ~5–6 µA, which would drain typical cells in about 50 years; we can live with that. The chosen resistor ratio is accurate to within about 1%.
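As a minimal firmware sketch of that monitoring (the HAL calls adc_read_counts() and gpio_read_ext_power() are hypothetical stand-ins for whatever your microcontroller provides, and the 10 mV/count scale factor simply follows the figure quoted above; in practice it depends on your R3/R4 ratio and A–D reference):

#include <stdbool.h>
#include <stdint.h>

extern uint16_t adc_read_counts(uint8_t channel);  /* hypothetical ADC read             */
extern bool     gpio_read_ext_power(void);         /* hypothetical digital input via R5 */

#define ADC_CH_BATTERY  0u
#define MV_PER_COUNT    10u   /* battery-referred resolution quoted in the text */

/* Battery voltage in millivolts, as seen through the R3/R4 divider. */
uint16_t battery_millivolts(void)
{
    return (uint16_t)(adc_read_counts(ADC_CH_BATTERY) * MV_PER_COUNT);
}

/* True when the pre-regulated external supply is present. */
bool external_power_present(void)
{
    return gpio_read_ext_power();
}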

Many components have no values assigned because they will depend on your choice of regulator and boost converter. With its LM1117-3.3, the circuit of Figure 1 can handle inputs of up to 15 V, though a TO-220 version then gets rather warm with load currents approaching 80 mA (~1 W, its practical power limit without heatsinking).
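As a quick check on that figure (assuming the regulator drops the whole 15 V input to its 3.3 V output at 80 mA, and ignoring its small quiescent current):

P ≈ (15 V − 3.3 V) × 0.08 A ≈ 0.94 W

which is indeed right at the sensible limit for an un-heatsinked TO-220 package.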

I have also used Figure 2 with Microchip’s MCP1824T-3302 feeding a Maxim MAX1674 step-up converter, with an IRLML6402 for Q1, which must have a low on-resistance. Many other, and more recent, devices will be suitable, and you probably have your own favorites.

While the external power input is shown as being naked, you may want to clothe it with some filtering and protection such as a poly-fuse and a suitable Zener or TVS. Similarly, no connector is specified, but USBs and barrel jacks both have their places.

While this is shown for nominal 3V/5V supplies, it can be used at higher voltages subject to gate–source voltage limitations owing to the MOSFET’s input protection diodes, the breakdown voltages of which can range from 6 V to 20 V, so check your device’s data sheet.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

 Related Content


The post 2×AA/USB: OK! appeared first on EDN.

Optimize battery selection and operating life of wireless IoT devices

EDN Network - Thu, 05/09/2024 - 14:59

Batteries are essential for powering many Internet of Things (IoT) devices, particularly wireless sensors, which are now deployed by billions. But batteries are often difficult to access and expensive to change because it’s a manual process. Anything that can be done to maximize the life of batteries and minimize or eliminate the need to change them during their operating life is a worthwhile endeavour and a significant step toward sustainability and efficiency.

Taking the example of a wireless sensor, this is a five-step process:

  1. Select the components for your prototype device: sensor, MCU, and associated electronics.
  2. Use a smart power supply with measurement capabilities to establish a detailed energy profile for your device under simulated operating conditions.
  3. Evaluate your battery options based on the energy profile of your device.
  4. Optimize the device parameters (hardware, firmware, software, and wireless protocol).
  5. Make your final selection of the battery type and capacity with the best match to your device’s requirements.

Selecting device type and wireless protocol

A microcontroller (MCU) is the most common processing resource at the heart of embedded devices. You’ll often choose which one to use for your next wireless sensor based on experience, the ecosystem with which you’re most familiar, or corporate dictate. But when you have a choice and conserving energy is a key concern for your application, there may be a shortcut.

Rather than plow through thousands of datasheets, you could check out EEMBC, an independent benchmarking organization. The EEMBC website not only enables a quick comparison of your options but also offers access to a time-saving analysis tool that lists the sensitivity of MCU platforms to various design parameters.

Most IoT sensors spend a lot of time in sleep mode and send only short bursts of data. So, it’s important to understand how your short-listed MCUs manage sleep, idle and run modes, and how efficiently they do that.

Next, you need to decide on the wireless protocol(s) you’ll be using. Range, data rate, duty cycle, and compatibility within the application’s operating environment will all be important considerations.

Figure 1 Data rates and range are the fundamental parameters considered when choosing a wireless protocol. Source: BehrTech

Once you’ve established the basics, digging into the energy efficiency of each protocol gets more complex and it’s a moving target. There are frequent new developments and enhancements to established wireless standards.

At data rates of up to 10 Kbps, Bluetooth LE/Mesh, LoRa, or Zigbee are usually the lowest energy protocols of choice for distances up to 10 meters. If you need to cover a 1-km range, NB-IoT may be on your list, but at an order of magnitude higher energy usage.

In fact, MCU hardware, firmware and software, the wireless protocol, and the physical environment in which an IoT device operates are all variables that need to be optimized to conserve energy. The only effective way to do that is to model these conditions during development and watch the effects on the fly as you change any of these parameters.

Establish an initial energy profile of device under test (DUT)

The starting point is to use a smart, programmable power supply and measurement unit to profile and record the energy usage of your device. This is necessary because simple peak and average power measurements with multimeters can only provide limited information. The Otii Arc Pro from Qoitech was used here to illustrate the process.

Consider a wireless MCU. In run mode, it may be putting out a +6 dBm wireless signal and consuming 10 mA or more. In deep sleep mode, the current consumption might fall below 0.2 µA. That’s a 50,000:1 dynamic range, and the changes happen almost instantaneously, certainly within microseconds. Conventional multimeters can’t capture changes like these, so they can’t help you understand the precise energy profile of your device. Without that, your choice of battery is open to miscalculation.
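To see why both ends of that range matter, here is a back-of-the-envelope model (all numbers are illustrative assumptions, not measurements, and this is not how the Otii itself computes anything): the duty-cycle-weighted average current is what drives a naive battery-life estimate, and it is dominated by whichever figure you measure badly.

#include <stdio.h>

int main(void)
{
    const double run_ma       = 10.0;     /* transmit burst, +6 dBm        */
    const double sleep_ma     = 0.0002;   /* 0.2 uA deep sleep             */
    const double run_s        = 0.05;     /* assume a 50 ms burst...       */
    const double period_s     = 60.0;     /* ...once a minute              */
    const double capacity_mah = 1000.0;   /* hypothetical battery budget   */

    /* Duty-cycle-weighted average current over one wake/sleep cycle. */
    double avg_ma = (run_ma * run_s + sleep_ma * (period_s - run_s)) / period_s;

    /* Naive lifetime estimate: real batteries derate with temperature,
     * pulse loading, and self-discharge, which is why proper profiling
     * and benchmarking are needed. */
    double life_h = capacity_mah / avg_ma;

    printf("average current: %.4f mA\n", avg_ma);
    printf("naive battery life: %.0f hours (about %.1f years)\n",
           life_h, life_h / (24.0 * 365.0));
    return 0;
}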

Your smart power supply is a digitally controlled power source offering control over parameters such as voltage, current, power, and mode of operation. Voltage control should ideally be in 1 mV steps so that you can determine the DUT’s energy consumption at different voltage levels to mimic battery discharge.

You’ll need sense pins to monitor the DUT power rails, a UART to see what happens when you make code changes, and GPIO pins for status monitoring. Standalone units are available, but it can be more flexible and economical to choose a smart power supply that uses your computer’s processing resources and display, as shown in the example below.

Figure 2 The GUI for a smart power supply can run on Windows, MacOS, or Ubuntu. Source: Qoitech

After connecting, you power and monitor the DUT simultaneously. You’re presented with a clear picture of voltages and current changes over time. Transients that you would never be able to see on a traditional meter are clearly visible and you can immediately detect unexpected anomalies.

Figure 3 A smart power profiler gives you a detailed comparison of your device’s energy consumption for different hardware and firmware versions. Source: Qoitech

From the stored data in the smart power supply, you’ll be able to make a short list of battery options.

Choosing a battery

Battery selection needs to consider capacity, energy density, voltage, discharge profile, and temperature. Datasheet comparisons are the starting point, but it’s important to validate battery manufacturers’ claims by benchmarking their batteries through testing. Datasheet information is based on performance under “normal conditions”, which may not apply to your application.

Depending on your smart power supply model, the DUT energy profiling described earlier may provide an initial battery life estimate based on a pre-programmed battery type and capacity. Either the same instrument or a separate piece of test equipment may then be used for a more detailed examination of battery performance in your application. Accelerated discharge measurements, when properly set up, are a time-saving alternative to the years it may take a well-designed IoT device to exhaust its battery.

These measurements must follow best practices to create an accurate profile. They include maintaining a consistent discharge that matches the DUT’s peak current, shortening the cycle time, and increasing the sleep current so that the battery can still recover. You should also consult battery manufacturers to validate any assumptions you make during the process.
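As a rough illustration of what “accelerated” buys you (an illustrative sketch with assumed numbers, not Qoitech’s procedure): keep the pulse amplitude matched to the DUT’s measured peak so the battery sees realistic loading, but repeat the pulse far more often; the ratio of average currents is your acceleration factor.

#include <stdio.h>

int main(void)
{
    const double peak_ma        = 10.0;   /* matched to the DUT's burst current */
    const double pulse_s        = 0.05;   /* assumed 50 ms pulse                */
    const double real_period_s  = 60.0;   /* device wakes once a minute         */
    const double accel_period_s = 1.0;    /* test rig wakes once a second       */

    /* Average currents for the real-world and accelerated pulse trains. */
    double real_avg_ma  = peak_ma * pulse_s / real_period_s;
    double accel_avg_ma = peak_ma * pulse_s / accel_period_s;
    double factor       = accel_avg_ma / real_avg_ma;

    printf("acceleration factor: %.0fx\n", factor);
    printf("a 5-year field life discharges in roughly %.0f days on the bench\n",
           5.0 * 365.0 / factor);
    return 0;
}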

You can profile the same battery chemistries from different manufacturers, or different battery chemistries, perhaps comparing lithium coin cells with AA alkaline batteries.

Figure 4 The comparison shows accelerated discharge characteristics for AA and AAA alkaline batteries from five different manufacturers. Source: Qoitech

By this stage, you have a good understanding of both the energy profile of your device and of the battery type and capacity that’s likely to result in the longest operating life in your applications. Upload your chosen battery profile to your smart power supply and set it up to emulate that battery.

Optimize and iterate

You can now go back to the DUT and optimize hardware and software for the lowest power consumption in near real-world conditions. You may have the flexibility to experiment with different wireless protocols, but even if that’s not the case, experimenting with sleep and deep-sleep modes, network routing, and even alternative data security protocols can all yield improvements, avoiding a common problem where 40 bytes of data can easily become several Kbytes.

Where the changes create a significant shift in your device’s energy profile, you may also review the choice of battery and evaluate again until you achieve the best match.

While this process may seem lengthy, it can be completed in just a few hours and may extend the operating life of a wireless IoT edge device, and hence reduce battery waste, by up to 30%.

Björn Rosqvist, co-founder and chief product officer of Qoitech, has 20+ years of experience in power electronics, automotive, and telecom with companies such as ABB, Ericsson, Flatfrog, Sony, and Volvo Cars.

 

Related Content


The post Optimize battery selection and operating life of wireless IoT devices appeared first on EDN.

Marktech unveils multi-chip packages with InGaAs photodiodes and multiple LED emitters

Semiconductor today - Thu, 05/09/2024 - 13:44
Marktech Optoelectronics Inc of Latham, NY, USA, a designer and manufacturer of optoelectronics components and assemblies — including UV, visible, near-infrared (NIR) and short-wave infrared (SWIR) emitters, detectors, and indium phosphide (InP) epiwafers — has unveiled its new multi-chip packages with indium gallium arsenide (InGaAs) photodiodes and multiple LED emitters...

KPI students visited the High Anti-Corruption Court

Новини - Thu, 05/09/2024 - 12:18

First-year students from the Faculty of Sociology and Law of the Kyiv Polytechnic Institute visited the High Anti-Corruption Court (VAKS).

Navitas’ CEO & co-founder Gene Sheridan a finalist in EY’s Entrepreneur Of The Year 2024 Greater Los Angeles Award

Semiconductor today - Thu, 05/09/2024 - 11:22
Gene Sheridan, CEO & co-founder of gallium nitride (GaN) power IC and silicon carbide (SiC) technology firm Navitas Semiconductor Corp of Torrance, CA, USA, has been named by Ernst & Young LLP (EY US) as an Entrepreneur Of The Year 2024 Greater Los Angeles Award finalist...

Flexible electronic device may offer a new approach to the treatment of spinal injuries

News Medical Microelectronics - Thu, 05/09/2024 - 05:55
A tiny, flexible electronic device that wraps around the spinal cord could represent a new approach to the treatment of spinal injuries, which can cause profound disability and paralysis.

Apple’s New M4 Processor: An ‘Outrageously Powerful’ Device for AI

AAC - Thu, 05/09/2024 - 03:00
The new SoC leverages 28 billion transistors to make the new iPad Pro "the most powerful device of its kind."

Minutes of the rectorate meeting of May 7, 2024

Новини - Wed, 05/08/2024 - 22:20

Minutes of the meeting of the rectorate (the operational headquarters for response and for sustaining the university’s operations) held on May 7, 2024.

An Explanation of Undervoltage Lockout

AAC - Wed, 05/08/2024 - 22:00
Learn how undervoltage lockout (UVLO) can protect semiconductor devices and electronic systems from potentially hazardous operation.

Honoring the memory of the heroes of the Polytechnic

Новини - Wed, 05/08/2024 - 20:03

May 8 is the Day of Remembrance and Victory over Nazism. Kyiv polytechnicians honored the memory of the KPI students, lecturers, and staff who fell in the Second World War.

«Promo Day in the Public Service»

Новини - Wed, 05/08/2024 - 19:05

On the initiative of Yana Yuriivna Tsymbalenko, the university’s authorized officer for preventing and detecting corruption, and with the active participation of Rostyslav Ivanovych Pashov, head of the Department of Theory and Practice of Management, and senior lecturer Anna Mykolaivna Ishchenko, students of specialty 281 «Публічне управління та а

A medical center at KPI

Новини - Wed, 05/08/2024 - 19:01

Modern medical laboratories, state-of-the-art equipment, highly qualified doctors, medical care without long queues or referral slips, and service focused on patient convenience.

Apple’s Spring 2024: In-person announcements no more?

EDN Network - Wed, 05/08/2024 - 17:35

By means of introduction to my coverage of Apple’s 2024-so-far notable news, I’d like to share the amusing title, along with spot-on excerpts from the body text, from a prescient piece I saw on Macworld yesterday. The title? “Get ready for another Apple meeting that could have been an email”. Now the excerpts:

Apple started running virtual press events during the pandemic when in-person gatherings made little sense and at various times were frowned upon or literally illegal. But Apple has largely stuck with that format even as health concerns lessened and its own employees were herded back into the office.

 Why is that? Because virtual events have advantages far beyond the containment of disease. Aside from avoiding the logistical headaches of getting a thousand bad-tempered journalists from around the world to the same place at the same time, a pre-recorded video presentation is much easier to run smoothly than a live performance…

 Nobody cringes harder than me when live performers get things wrong, and I absolutely get the attraction of virtual keynotes for Apple. But it does raise some awkward existential questions about why we need to bother with the elaborate charade that is a keynote presentation. What, after all, is the point of a keynote? If it’s just to get information about new products, that can be done far more efficiently via a press release that you can read at your own speed; just the facts, no sitting through skits and corporate self-congratulation.

 Is it to be marketed by the best hypemen in the business? If that’s really something you want, you might as well get it from an ad: virtual keynotes give none of that dubious excitement and tribalistic sense of inclusivity you get with a live performance. And we’ve even lost the stress-test element of seeing an executive operating the product under extreme pressure. What we’re left with is a strange hybrid: a long press release read out by a series of charisma-free executives, interspersed with advertisements.

I said something similar in my coverage of Apple’s June 2023 Worldwide Developer Conference (WWDC):

This year’s event introductory (and product introduction) presentation series was lengthy, with a runtime of more than two hours, and was also entirely pre-recorded. This has been Apple’s approach in recent years, beginning roughly coincident with the COVID lockdown and consequent transition to a virtual event (beginning in 2020; 2019 was still in-person)…even though both last- and this-years’ events returned to in-person from a keynote video viewing standpoint.

 On the one hand, I get it; as someone who (among other things) delivers events as part of his “day job”, the appeal of a tightly-scripted, glitch-free set of presentations and demonstrations can’t be overstated. But live events also have notable appeal: no matter how much they’re practiced beforehand, there’s still the potential for a glitch, and therefore when everything still runs smoothly, what’s revealed and detailed is (IMHO) all the more impactful as a result.

What we’ve ended up with so far this year is a mix of press release-only and virtual-event announcements, in part (I suspect, as does Macworld) because of “building block” mass-production availability delays for the products in today’s (as I write these words on Tuesday, May 7) news.

But I’m getting ahead of myself.

The Vision Pro

Let’s rewind to early January, when Apple confirmed that its first-generation Vision Pro headset (which I’d documented in detail within last June’s WWDC coverage) would open for pre-orders on January 19, with in-store availability starting February 2.

Granted, the product’s technology underpinnings remain amazing 11 months post-initial unveil:

But I’m still not sold on the mainstream (translation: high volume) appeal of such a product, no matter how many entertainment experiences and broader optimized applications Apple tries to tempt me with (and no matter how much Apple may drop the price in the future, assuming it even can to a meaningful degree, given bill-of-materials cost and profit-expectation realities). To be clear, this isn’t an Apple-only diss; I’ve expressed the same skepticism in the past about offerings from Oculus-now-Meta and others. And at the root of my pessimism about AR/VR/XR/choose-your-favorite-acronym (or, if you’re Apple, “spatial computing”, whatever that means) headsets may indeed be enduring optimism of a different sort.

Unlike the protagonists of science fiction classics such as William Gibson’s Neuromancer and Virtual Light, Neal Stephenson’s Snow Crash, and Ernest Cline’s Ready Player One, I don’t find the real world to be sufficiently unpleasant that I’m willing to completely disengage from it for long periods of time (and no, the Vision Pro’s EyeSight virtual projected face doesn’t bridge this gap). Scan through any of the Vision Pro reviews published elsewhere and you’ll on-average encounter similar lukewarm-at-best enthusiasm from others. And I can’t help but draw an accurate-or-not analogy to Magic Leap’s 2022 consumer-to-enterprise pivot when I see subsequent Apple press releases touting medical and broader business Vision Pro opportunities.

So is the Vision Pro destined to be yet another Apple failure? Maybe…but definitely not assuredly. Granted, we might have another iPod Hi-Fi on our hands, but keep in mind that the first-generation iPhone and iPad also experienced muted adoption. Yours truly even dismissively called the latter “basically a large-screen iPod touch” on a few early occasions. So let’s wait and see how quickly the company and its application-development partners iterate both the platform’s features and cost before we start publishing headlines and crafting obituaries about its demise.

The M3-based MacBook Air

Fast-forward to March, and Apple unveiled M3 SoC-based variants of the MacBook Air (MBA), following up on the 13” M2-based MBA launched at the 2022 WWDC and the first-time-in-this-size 15” M2 MBA unveiled a year later:

Aside from the Apple Silicon application processor upgrade (first publicly discussed last October), there’s faster Wi-Fi (6E) along with an interesting twist on expanded external-display support; the M3-based models can now simultaneously drive two of ‘em, but only when the “clamshell” is closed (i.e., when the internal display is shut off). But the most interesting twist, at least for this nonvolatile-memory-background techie, is that Apple did a seeming back-step on its flash memory architecture. In the M2 generation, the 256 GByte SSD variant consisted of only a single flash memory chip (presumably single-die, to boot, bad pun intended), which bottlenecked performance due to the resultant inability for multi-access parallelism. To get peak read and (especially evident) write speeds, you needed to upgrade to a 512 GByte or larger SSD.

The M3 generation seemingly doesn’t suffer from the same compromise. A post-launch teardown revealed that (at least for that particular device…since Apple multi-sources its flash memory, one data point shouldn’t necessarily be extrapolated to an all-encompassing conclusion) the 256 GByte SSD subsystem comprised two 128 GByte flash memory chips, with consequent restoration of full performance potential. I’m particularly intrigued by this design decision considering that two 128 GByte flash memories conceivably cost Apple more than one 256 GByte alternative (likely the root cause of the earlier M1-to-M2 move). That said, I also don’t underestimate the formidable negotiation “muscle” of Apple’s procurement department…

Earnings

Last week, we got Apple’s second-fiscal-quarter earnings results. I normally don’t cover these at all, and I won’t dwell long on the topic this time, either. But they reveal Apple’s ever-increasing revenue and profit reliance on its “walled garden” services business (to the ever-increasing dismay of its “partners”, along with various worldwide government entities), given that hardware revenue dropped for all hardware categories save Macs, notably including both iPhone and iPad and in spite of the already-discussed Vision Pro launch. That said, the following corporate positioning seemed to be market-calming:

In the March quarter a year ago, we were able to replenish iPhone channel inventory and fulfill significant pent-up demand from the December quarter COVID-related supply disruptions on the iPhone 14 Pro and 14 Pro Max. We estimate this one-time impact added close to $5 billion to the March quarter revenue last year. If we removed this from last year’s results, our March quarter total company revenue this year would have grown.

The iPad Air

And today we got new iPads and accessories. The iPad Air first:

Reminiscent of the aforementioned MacBook Air family earlier this year, they undergo a SoC migration, this time from the M1 to the M2. They also get a relocated front camera, friendlier (as with 2022’s 10th generation conventional iPad) for landscape-orientation usage. And to the “they” in the previous two sentences, as well as again reminiscent of the aforementioned MacBook Air expansion to both 13” and 15” form factors, the iPad Air now comes in both 11” and 13” versions, the latter historically only offered with the iPad Pro.

Speaking of which

The M4 SoC

Like their iPad Air siblings, the newest generation of iPad Pros relocate the front camera to a more landscape orientation-friendly bezel location. But that’s among the least notable enhancements this time around. On the flip side of the coin, perhaps most notable news is that they mark the first-time emergence of Apple’s M4 SoC. I’ll begin with obligatory block diagrams:

Some historical perspective is warranted here. Only six months ago, when Apple rolled out its first three (only?) M3 variants along with inclusive systems, I summarized the to-date situation:

Let’s go back to the M1. Recall that it ended up coming in four different proliferations:

  • The entry-level M1
  • The M1 Pro, with increased CPU and GPU core counts
  • The M1 Max, which kept the CPU core constellation the same but doubled up the graphics subsystem, and
  • The M1 Ultra, a two-die “chiplet” merging together two M1 Max chips with requisite doubling of various core counts, the maximum amount of system memory, and the like

But here’s the thing: it took a considerable amount of time—1.5 years—for Apple to roll out the entire M1 family from its A14 Bionic development starting point:

  • A14 Bionic (the M1 foundation): September 15, 2020
  • M1: November 10, 2020
  • M1 Pro and Max: October 18, 2021
  • M1 Ultra: March 8, 2022

 Now let’s look at the M2 family, starting with its A15 Bionic SoC development foundation:

 Nearly two years’ total latency this time: nine months alone from the A15 to the M2.

I don’t yet know for sure, but for a variety of reasons (process lithography foundation, core mix and characteristics, etc.) I strongly suspect that the M3 chips are not based on the A16 SoC, which was released on September 7, 2022. Instead, I’m pretty confident in prognosticating that Apple went straight to the A17 Pro, unveiled just last month (as I write these words), on September 12 of this year, as their development foundation.

 Now look at the so-far rollout timeline for the M3 family—I think my reason for focusing on it will then be obvious:

  • A17 Pro: September 12, 2023
  • M3: October 30, 2023
  • M3 Pro and Max: October 30, 2023
  • M3 Ultra: TBD
  • M3 Extreme (a long-rumored four-Max-die high-end proliferation, which never ended up appearing in either the M1 or M2 generations): TBD (if at all)

Granted, we only have the initial variant of the M4 SoC so far. There’s no guarantee at this point that additional family members won’t have M1-reminiscent sloth-like rollout schedules. But for today, focus only on the initial-member rollout latencies:

  • M1 to M2: more than 19 months
  • M2 to M3: a bit more than 16 months
  • M3 to M4: a bit more than 6 months

Note, too, that Apple indicates that the M4 is built on a “second-generation 3 nm process” (presumably, like its predecessors, from TSMC). Time for another six-months-back quote:

Conceptually, the M3 flavors are reminiscent of their precursors, albeit with newer generations of various cores, along with a 3 nm fabrication process foundation.

As for the M4, here’s my guess: from a CPU core standpoint, especially given the rapid generational development time, the performance and efficiency cores are likely essentially the same as those in the M3, albeit with some minor microarchitecture tweaks to add-and-enhance deep learning-amenable instructions and the like, therefore this press release excerpt:

Both types of cores also feature enhanced, next-generation ML accelerators.

The fact that there are six efficiency cores this time, versus four in the M3, is likely due in no small part to the second-generation 3 nm lithography’s improved transistor packing capabilities along with more optimized die layout efficiencies (any potential remaining M3-to-M4 die size increase might also be cost-counterbalanced by TSMC’s improved 3 nm yields versus last year).

What about the NPU, which Apple brands as the “Neural Engine”? Well, at first glance it’s a significant raw-performance improvement over the one in the M3: 38 TOPS (trillion operations per second) versus the M3’s 18 TOPS. But here comes another six-months-back quote about the M3:

The M3’s 16-core neural engine (i.e., deep learning inference processing) subsystem is faster than it was in the previous generation. All well and good. But during the presentation, Apple claimed that it was capable of 18 TOPS peak performance. Up to now I’d been assuming, as you know from the reading you’ve already done here, that the M3 was a relatively straight-line derivation of the A17 Pro SoC architecture. But Apple claimed back in September that the A17 Pro’s neural engine ran at 35 TOPS. Waaa?

 I see one (or multiple-in-combination) of (at least) three possibilities to explain this discrepancy:

  • The M3’s neural engine is an older or more generally simpler design than the one in the A17 Pro
  • The M3’s neural engine is under-clocked compared to the one in the A17 Pro
  • The M3’s neural engine’s performance was measured using a different data set (INT16 vs INT8, for example, or FLOAT vs INT) than what was used to benchmark the A17 Pro

My bet remains that the first possibility of the three listed was the dominant if not sole reason for the M3 NPU’s performance downgrade versus that in the A17 Pro. And I’ll also bet that the M4 NPU is essentially the same as the one in the A17 Pro, perhaps again with some minor architecture tweaks (or maybe just a slight clock boost!). So then is the M4 just a tweaked A17 Pro built on a tweaked 3 nm process? Not exactly. Although the GPU architecture also seems to be akin to, if not identical to, the one in the A17 Pro (six-core implementation) and M3 (10-core matching count), the display controller has more tangibly evolved this time, likely in no small part for the display enhancements which I’ll touch on next. Here’s the summary graphic:

More on the iPad Pro

Turning attention to the M4-based iPads themselves, the most significant thing here is that they’re M4-based iPads. This marks the first time that a new Apple Silicon generation has shown up in something other than an Apple computer (notably skipping the M3-based iPad Pro iteration in the process, as well), and I don’t think it’s just a random coincidence. Apple’s clearly, to me, putting a firm stake in the ground as to the corporate importance of its comparatively proprietary (versus the burgeoning array of Arm-based Windows computers) tablet product line, both in an absolute sense and versus computers (Apple’s own and others). A famous Steve Jobs quote comes to my mind at this point:

If you don’t cannibalize yourself someone else will.

The other notable iPad Pro enhancement this time around is the belated but still significant display migration to OLED technology, which I forecasted last August. Unsurprisingly, thanks to the elimination of a dedicated backlight (an OLED attribute I noted way back in 2010 and revisited in 2019), the tablets are now significantly thinner, in spite of the fact that they’re constructed in a fairly unique dual-layer brightness-boosting “sandwich” (harking back to my earlier display-controller-enhancement comments; note that a separate simultaneous external tethered display is still also supported). And reflective of the tablets’ high-end classification, Apple has rolled out corresponding “Pro” versions of its Magic Keyboard (adding a dedicated function-key row, along with a haptic feedback-enhanced larger trackpad):

And Pencil, adding “squeeze” support, haptic feedback of its own, and other enhancements:

Other notable inter- and intra-generational tweaks:

  • No more mmWave 5G support.
  • No more ultra-wide rear camera, either.
  • Physical SIM slots? Gone, too.
  • Ten-core CPU M4 SoCs are unique to the 1 TByte and 2 TByte iPad Pro variants; lower-capacity mass storage models get only 9 CPU cores (one less performance core, to be precise, although corresponding GPU core counts are interestingly per-product-variant unchanged this time). They’re also allocated only half the RAM of their bigger-SSD brethren: 8 GBytes vs 16 GBytes.
  • 1 and 2 TByte iPads are also the only ones offered a nano-texture glass option.

Given that Apple did no iPad family updates at all last year, this is an encouraging start to 2024. That said, the base 10th-generation iPad is still the same as when originally unveiled in October 2022, although it did get a price shave today (and its 9th-generation precursor is no longer in production, either). And the 6th-generation iPad mini introduced in September 2021 is still the latest-and-greatest, too. I’m admittedly more than a bit surprised and pleased that my unit purchased gently used off eBay last summer is still state-of-the-art!

iPad software holdbacks

And as for Apple’s ongoing push to make the iPad, and the iPad Pro specifically, a credible alternative to a full-blown computer? It’s a topic I first broached at length back in September 2018, and to at least some degree the situation hasn’t tangibly changed since then. Tablet hardware isn’t fundamentally what’s holding the concept back from becoming a meaningful reality, but then again, I’d argue that it never was the dominant shortcoming. It was, and largely remains, software; both the operating system and the applications that run on it. And I admittedly felt validated in my opinion here when I perused The Verge’s post-launch event livestream archive and saw it echoed there, too.

Sure, Apple just added some nice enhancements to its high-end multimedia-creation and editing tablet apps (along with their MacOS versions, I might add), but how many folks are really interested in editing multiple ProRes streams without proxies on a computer nowadays, let alone on an iPad? What about tangible improvements for the masses? Sure, you can use a mouse with an iPad now, but multitasking attempts still, in a word, suck. And iPadOS still doesn’t even support the basics, such as multi-user support. Then again, there’s always this year’s WWDC, taking place mid-next month, which I will of course once again be covering for EDN and y’all. Hope springs eternal, I guess. Until then, let me know your thoughts in the comments.

p.s…I realized just before pressing “send to Aalyia” that I hadn’t closed the loop on my earlier “building block mass-production availability delays” tease. My suspicion is that originally the new iPads were supposed to be unveiled alongside the new MacBook Airs back in March, in full virtual-event form. But in the spirit of “where there’s smoke, there’s fire”, I’m also guessing that the longstanding rumors of OLED display volume-production delays (and/or second-generation 3 nm process volume-production delays) are what pushed the iPads to today.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Apple’s Spring 2024: In-person announcements no more? appeared first on EDN.

TSMC Certifies a Host of Top EDA Tools for New Process Nodes

AAC - Wed, 05/08/2024 - 16:00
With TSMC scaling down to 2 nm and lower, the semiconductor giant is working with Cadence, Siemens, and Synopsys to bring updated EDA tools to IC designers.

Dürr and Rohde & Schwarz collaborate on ADAS/AD functional testing for EOL and PTI

ELE Times - Wed, 05/08/2024 - 14:22

Automated and autonomous vehicles, which rely on sensors like cameras and radar, either assist or take over decision-making in traffic situations. Sensors’ proper interaction and functionality must be thoroughly tested to ensure road safety. Dürr and Rohde & Schwarz, a global technology group, have developed an innovative, cost-effective solution for over-the-air (OTA) vehicle-in-the-loop (VIL) testing. This solution validates conformity and effectiveness during end-of-line (EOL) testing or periodical technical inspection (PTI).

Road safety is a key challenge for future mobility, especially for automated and autonomous vehicles. Ensuring the continued functionality of advanced driver assistance systems (ADAS) and autonomous driving (AD) features is critical for long-term vehicle safety and performance. Therefore, manufacturers of vehicles equipped with these features require certification, either from a third party, authority or by self-certification. A vehicle-in-the-loop (VIL) test can validate the correct operation of all ADAS/AD functions in the end-of-line (EOL) and ensure conformity of production (COP) before a vehicle leaves the factory. In addition, maintaining proper functionality throughout a vehicle’s lifespan requires additional control measures during periodical technical inspection (PTI).

Simulating various driving scenarios

To address these additional requirements in the EOL and PTI process, Dürr and Rohde & Schwarz initiated a joint project incorporating Dürr’s patented x-road curve multi-function roll test stand, Rohde & Schwarz’ new RadEsT radar target simulator and the open-source simulation platform CARLA. The combination creates a virtual environment specifically for the camera and radar sensors installed in the test vehicle, allowing for the OTA simulation of different inspection scenarios without touching the vehicle. These scenarios include critical situations such as unintended lane departures and other vehicles braking suddenly or switching lanes directly in front of the test vehicle. The test vehicle must react promptly to changes in the VIL simulation and, if necessary, trigger the automated lane-keeping systems (ALKS) or advanced emergency braking systems (AEBS), for example, to pass inspection.

Patented technology for ultimate versatility

The 4WD x-road curve allows for unrestricted driving with steering movements, facilitating cornering maneuvers without altering the test vehicle. Laser measurement technology recognizes the front wheels’ position and steering angle while swiveling front double roller units automatically adjust for any angular difference to the driving direction. This ensures the vehicle remains centered on the test stand even at high speeds, regardless of the steering wheel’s position, and without the need for vehicle fixation, thus minimizing cycle times.

Resilient processes and precise results

RadEsT, the radar target simulator developed by Rohde & Schwarz, is exceptionally resilient to external factors, ensuring consistent performance in production and workshop environments. Its ability to provide precise and repeatable measurements makes it an invaluable tool for conducting accurate assessments in real-world conditions. Moreover, its compact and lightweight design enables easy and flexible integration at a cost-effective price point.

Easy to use test scenario generation

The open-source tool CARLA offers vehicle manufacturers or PTI organizations maximum flexibility with additional cost-saving opportunities and great potential for scenario selection. The recently announced upgrade of the CARLA simulator to Unreal Engine 5 is expected to enhance modeling, simulation realism, and performance, particularly for over-the-air camera simulation via monitors.

By combining Dürr’s patented x-road curve multi-function roll test stand, Rohde & Schwarz’ innovative radar target simulator, and the open-source platform CARLA, automated and autonomous vehicles’ full functionality can be cost-effectively evaluated to ensure proper operation in production and throughout the complete vehicle’s lifespan.

The post Dürr and Rohde & Schwarz collaborate on ADAS/AD functional testing for EOL and PTI appeared first on ELE Times.

Aixtron grows Q1 revenue and profit significantly year-on-year

Semiconductor today - Wed, 05/08/2024 - 14:14
For first-quarter 2024, deposition equipment maker Aixtron SE of Herzogenrath, near Aachen, Germany has reported revenue of €118.3m (near the top end of the €100–120m guidance range). This was down 45% on last quarter’s record €214.2m but up 53% on €77.2m a year ago (although the latter was reduced by delays in the issue of export licenses, pushing €70m worth of shipments out of the quarter)...

Radiation-Tolerant DC-DC 50-Watt Power Converters Provide High-Reliability Solution for New Space Applications

ELE Times - Wed, 05/08/2024 - 14:10

The LE50-28 power converters are available in nine variants with single- and triple-outputs for optimal design configurability

The Low-Earth Orbit (LEO) market is rapidly growing as private and public entities alike explore the new space region for everything from 5G communication and cube satellites to IoT applications. There is an increased demand for standard space grade solutions that are reliable, cost effective and configurable. To meet this market need, Microchip Technology (Nasdaq: MCHP) today announces a new family of Radiation-Tolerant (RT) LE50-28 isolated DC-DC 50W power converters available in nine variants with single- and triple-outputs ranging from 3.3V to 28V.

The off-the-shelf LE50-28 family of power converters is designed to meet MIL-STD-461. The power converters have a companion EMI filter and offer customers ease of design, letting them scale and customize by choosing one or three outputs based on the voltage range needed for the end application. This series provides the flexibility to parallel up to four power converters to reach 200 W.

Designed to serve 28V bus systems, the LE50-28 isolated DC-DC power converters can be integrated with Microchip’s PolarFire® FPGAs, microcontrollers and LX7720-RT motor control sensor for a complete electrical system solution. Designers can use these high-reliability radiation-tolerant power solutions to significantly reduce system-level development time.

“The new family of LE50-28 devices enable our customers to succeed in new space and LEO environments where components must withstand harsh conditions,” said Leon Gross, vice president of Microchip’s discrete products group. “Our off-the-shelf products offer a reliable and cost-effective solution designed for the durability our customers have come to expect from Microchip.”

The LE50-28 power converters offer a variety of electrical connection and mounting options. The LE50 series is manufactured with conventional surface mount and thru-hole components on a printed wiring board. This distinction in the manufacturing process can reduce time to market and risks associated with supply chain disruptions.

The LE50-28 family offers space-grade radiation tolerance with 50 krad Total Ionizing Dose (TID) and Single Event Effects (SEE) latch-up immunity of 37 MeV·cm²/mg linear energy transfer.

Microchip offers a wide range of components to support the new space evolution with sub-QML strategy to bridge the gap between traditional Qualified Manufacturers List (QML) components and Commercial-Off-The-Shelf (COTS) components. Designed for new space applications, sub-QML components are the optimal solution that combines the radiation tolerance of QML components with our space flight heritage that permits lower screening requirements for lower cost and shorter lead times.

Microchip’s extensive space solutions include FPGAs, power and discrete devices, memory products, communication interfaces, oscillators, microprocessors (MPUs) and MCUs, offering a broad range of options across qualification levels, and the largest qualified plastic portfolio for space applications. For more information, visit our space solutions webpage.

Support and Resources

The new family of LE50-28 devices are supported by comprehensive analysis and test reports including worst case analysis, electrical stress analysis and reliability analysis.

Pricing and Availability

The LE50-28 single-output and LE50-28 triple-output are now available. For additional information and to purchase, contact a Microchip sales representative, authorized worldwide distributor or visit Microchip’s Purchasing and Client Services website, www.microchipdirect.com.

Resources

High-res images available through Flickr or editorial contact (feel free to publish):

  • Application image: flickr.com/photos/microchiptechnology/53332596878/sizes/l
  • Video link: https://www.youtube.com/watch?v=XjXePfpjNa4

The post Radiation-Tolerant DC-DC 50-Watt Power Converters Provide High-Reliability Solution for New Space Applications appeared first on ELE Times.

TSMC crunch heralds good days for advanced packaging

EDN Network - Wed, 05/08/2024 - 14:09

TSMC’s advanced packaging capacity is fully booked until 2025 due to hyper demand for large, powerful chips from cloud service giants like Amazon AWS, Microsoft, Google, and Meta. Nvidia and AMD are known to have secured TSMC’s chip-on-wafer-on-substrate (CoWoS) and system-on-integrated-chips (SoIC) capacity for advanced packaging.

Nvidia’s H100 chips—built on TSMC’s 4-nm process—use CoWoS packaging. On the other hand, AMD’s MI300 series accelerators, manufactured on TSMC’s 5-nm and 6-nm nodes, employ SoIC technology for the CPU and GPU combo before using CoWoS for high-bandwidth memory (HBM) integration.

Figure 1 CoWoS is a wafer-level system integration platform that offers a wide range of interposer sizes, HBM cubes, and package sizes. Source: TSMC

CoWoS is an advanced packaging technology that offers the advantage of larger package size and more I/O connections. It stacks chips and packages them onto a substrate to deliver space, power-consumption, and cost benefits.

SoIC, another advanced packaging technology created by TSMC, integrates active and passive chips into a new system-on-chip (SoC) architecture that is electrically identical to native SoC. It’s a 3D heterogeneous integration technology manufactured in front-end of line with known-good-die and offers advantages such as high bandwidth density and power efficiency.

TSMC is ramping up its advanced packaging capacity. It aims to triple the production of CoWoS-based wafers, producing 45,000 to 50,000 CoWoS-based units per month by the end of 2024. Likewise, it plans to double the capacity of SoIC-based wafers by the end of this year, manufacturing between 5,000 and 6,000 units a month. By 2025, TSMC wants to hit a monthly capacity of 10,000 SoIC wafers.

Figure 2 SoIC is fully compatible with advanced packaging technologies like CoWoS and InFO. Source: TSMC

Morgan Stanley analyst Charlie Chan has raised an interesting and valid question: How do companies like TSMC judge advanced packaging demand and allocate capacity accordingly? What’s the benchmark that TSMC uses for its advanced packaging customers?

Jeff Su, director of investor relations at TSMC, while answering Chan, acknowledged that the demand for advanced packaging is very strong and the capacity is very tight. He added that TSMC has more than doubled its advanced packaging capacity in 2024. Moreover, the mega-fab has leveraged its special relationships with OSATs to fulfill customer needs.

TSMC works closely with OSATs, including its Taiwan neighbor and the world’s largest IC packaging and testing company, ASE. TSMC chief C. C. Wei also mentioned during an earnings call that Amkor plans to build an advanced packaging and testing plant next to TSMC’s fab in Arizona. Then there is news circulating in trade media about TSMC planning to build an advanced packaging plant in Japan.

Advanced packaging is now an intrinsic part of the AI-driven computing revolution, and the rise of chiplets will only bolster its importance in the semiconductor ecosystem. TSMC’s frantic capacity upgrades and tie-ups with OSATs point to good days for advanced packaging technology.

TSMC’s archrivals Samsung and Intel Foundry will undoubtedly be watching closely this supply-and-demand saga for advanced packaging while recalibrating their respective strategies. We’ll continue covering this exciting aspect of semiconductor makeover in the coming days.

Related Content


The post TSMC crunch heralds good days for advanced packaging appeared first on EDN.

Dear veterans and present-day defenders of the Fatherland!

Новини - Wed, 05/08/2024 - 12:47

Dear polytechnicians!

On May 8, Kyiv polytechnicians, together with millions of people around the world, mark the Day of Remembrance and Victory over Nazism in the Second World War of 1939–1945. In these May days we honor those whose fate it was to pass through its deadly fire.
