Microelectronics world news


EDN Network - Thu, 05/09/2024 - 16:20

While an internal, rechargeable lithium battery is usually the best solution for portable kit nowadays, there are still times when using replaceable cells with an external power option, probably from a USB source, is more appropriate. This DI shows ways of optimizing this.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The usual way of combining power sources is to parallel them, with a series diode for each. That is fine if the voltages match and some loss of effective battery capacity, owing to a diode’s voltage drop, can be tolerated. Let’s assume the kit in question is something small and hand-held or pocketable, probably using a microcontroller like a PIC, with a battery comprising two AA cells, the option of an external 5 V supply, and a step-up converter producing a 3.3 V internal power rail. Simple steering diodes used here would give a voltage mismatch for the external power while wasting 10 or 20% of the battery’s capacity.
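The 10–20% figure is simple arithmetic: the diode's forward drop taken as a fraction of the pack's nominal voltage. A minimal sketch with illustrative, assumed values (a Schottky versus a standard silicon drop):

```python
# Rough estimate of capacity lost to a series steering diode on a 2xAA pack.
# All values are illustrative assumptions, not measurements from the article.
v_pack = 3.0                  # nominal 2xAA pack voltage (V)
for v_diode in (0.3, 0.6):    # Schottky vs. standard silicon forward drop (V)
    loss = v_diode / v_pack   # fraction of pack voltage lost across the diode
    print(f"{v_diode:.1f} V drop -> ~{loss:.0%} of the pack voltage wasted")
```

The real capacity loss also depends on the boost converter's cutoff voltage, but this first-order view explains the 10–20% range quoted above.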

Figure 1 shows a much better way of implementing things. The external power is pre-regulated to avoid the mismatch, while active switching minimizes battery losses. I have used this scheme in both one-offs and production units, and always to good effect.

Figure 1 Pre-regulation of an external supply is combined with an almost lossless switch in series with the battery, which maximizes its life.

The battery feed is controlled by Q1, which is a reversed p-MOSFET. U1 drops any incoming voltage down to 3.3 V. Without external power, Q1’s gate is more negative than its source, so it is firmly on, and (almost) the full battery voltage appears across C3 to feed the boost converter. Q2’s emitter–base diode stops any current flowing back into U1. Apart from the internal drain–source or body diode, MOSFETs are almost symmetrical in their main characteristics, which allows this reversed operation.

When external power is present, Q1.G will be biased to 3.3 V, switching it off and effectively disconnecting the battery. Q2 is now driven into saturation, connecting U1’s 3.3 V output, less Q2’s saturated forward voltage of 100–200 mV, to the boost converter. (The 2N2222, as shown, has a lower VSAT than many other types.) Note that Q2’s base current isn’t wasted, but just adds to the boost converter’s power feed. Using a diode to isolate U1 would incur a greater voltage drop, which could cause problems: new, top-quality AA manganese alkaline (MnAlk) cells can have an off-load voltage well over 1.6 V, and if the voltage across C3 were much less than 3 V, they could discharge through the MOSFET’s inherent drain–source or body diode. This arrangement avoids any such problems.

Reversed MOSFETs have been used to give battery-reversal protection for many years, and of course such protection is inherent in these circuits. The body diode also provides a secondary path for current from the battery if Q1 is not fully on, as in the few microseconds after external power is disconnected.

Figure 1 shows U1 as an LM1117-3.3 or similar type, but many more modern regulators allow a better solution because their outputs appear as open circuits when they are unpowered, rather than allowing reverse current to flow from their outputs to ground. Figure 2 shows this implementation.

Figure 2 Using more recent designs of regulator means that Q2 is no longer necessary.

Now the regulator’s output can be connected directly to C3 and the boost converter. Some devices also have an internal switch which completely isolates the output, and D1 can then be omitted. Regulators like these could in principle feed the final 3.3 V rail directly, but this can actually complicate matters because the boost converter would then also need to be reverse-proof and might itself need to be turned off. R2 is now used to bias Q1 off when external power is present.

If we assume that the kit uses a microcontroller, we can easily monitor the PSU’s operation. R5—included purely for safety’s sake—lets the microcontroller check for the presence of external power, while R3 and R4 allow it to measure the battery voltage accurately. Their values, calculated on the assumption of an 8-bit A–D conversion with a 3.3 V reference, give a resolution of 10 mV/count, or 5 mV per cell. Placing them directly across the battery loads it with ~5–6 µA, which would drain typical cells in about 50 years; we can live with that. The chosen resistor ratio is accurate to about 1%.
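The "about 50 years" claim is easy to sanity-check. A quick sketch, where the divider current and cell capacity are assumptions chosen to match the ~5–6 µA loading figure above:

```python
# Sanity-check the divider-loading claim: does ~5.5 uA really take ~50 years
# to drain typical AA cells? Resistor and capacity values are assumptions.
v_batt = 3.0                   # nominal 2xAA battery voltage (V)
i_divider = 5.5e-6             # assumed mid-range divider current (A)
r_total = v_batt / i_divider   # implied R3 + R4 (ohms)
capacity_ah = 2.5              # typical AA alkaline capacity (Ah)
years = capacity_ah / i_divider / (24 * 365)
print(f"R3 + R4 ~ {r_total / 1e3:.0f} kOhm, drain time ~ {years:.0f} years")
```

A total divider resistance in the half-megohm region gives a drain time of roughly five decades, consistent with the claim.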

Many components have no values assigned because they will depend on your choice of regulator and boost converter. With its LM1117-3.3, the circuit of Figure 1 can handle inputs of up to 15 V, though a TO-220 version then gets rather warm with load currents approaching 80 mA (~1 W, its practical power limit without heatsinking).

I have also used Figure 2 with Microchip’s MCP1824T-3302 feeding a Maxim MAX1674 step-up converter, with an IRLML6402 for Q1, which must have a low on-resistance. Many other, and more recent, devices will be suitable, and you probably have your own favorites.

While the external power input is shown as being naked, you may want to clothe it with some filtering and protection such as a poly-fuse and a suitable Zener or TVS. Similarly, no connector is specified, but USBs and barrel jacks both have their places.

While this is shown for nominal 3 V/5 V supplies, it can be used at higher voltages, subject to the gate–source voltage limits imposed by the MOSFET’s input protection diodes; their breakdown voltages can range from 6 V to 20 V, so check your device’s data sheet.

Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.

 Related Content


The post 2×AA/USB: OK! appeared first on EDN.

Optimize battery selection and operating life of wireless IoT devices

EDN Network - Thu, 05/09/2024 - 14:59

Batteries are essential for powering many Internet of Things (IoT) devices, particularly wireless sensors, which are now deployed by billions. But batteries are often difficult to access and expensive to change because it’s a manual process. Anything that can be done to maximize the life of batteries and minimize or eliminate the need to change them during their operating life is a worthwhile endeavour and a significant step toward sustainability and efficiency.

Taking the example of a wireless sensor, this is a five-step process:

  1. Select the components for your prototype device: sensor, MCU, and associated electronics.
  2. Use a smart power supply with measurement capabilities to establish a detailed energy profile for your device under simulated operating conditions.
  3. Evaluate your battery options based on the energy profile of your device.
  4. Optimize the device parameters (hardware, firmware, software, and wireless protocol).
  5. Make your final selection of the battery type and capacity with the best match to your device’s requirements.

Selecting device type and wireless protocol

A microcontroller (MCU) is the most common processing resource at the heart of embedded devices. You’ll often choose which one to use for your next wireless sensor based on experience, the ecosystem with which you’re most familiar, or corporate dictate. But when you do have a choice and conserving energy is a key concern for your application, there may be a shortcut.

Rather than plow through thousands of datasheets, you could check out EEMBC, an independent benchmarking organization. The EEMBC website not only enables a quick comparison of your options but also offers access to a time-saving analysis tool that lists the sensitivity of MCU platforms to various design parameters.

Most IoT sensors spend a lot of time in sleep mode and send only short bursts of data. So, it’s important to understand how your short-listed MCUs manage sleep, idle and run modes, and how efficiently they do that.

Next, you need to decide on the wireless protocol(s) you’ll be using. Range, data rate, duty cycle, and compatibility within the application’s operating environment will all be important considerations.

Figure 1 Data rates and range are the fundamental parameters considered when choosing a wireless protocol. Source: BehrTech

Once you’ve established the basics, digging into the energy efficiency of each protocol gets more complex and it’s a moving target. There are frequent new developments and enhancements to established wireless standards.

At data rates of up to 10 kbps, Bluetooth LE/Mesh, LoRa, or Zigbee are usually the lowest-energy protocols of choice for distances up to 10 meters. If you need to cover a 1-km range, NB-IoT may be on your list, but at an order of magnitude higher energy usage.

In fact, MCU hardware, firmware and software, the wireless protocol, and the physical environment in which an IoT device operates are all variables that need to be optimized to conserve energy. The only effective way to do that is to model these conditions during development and watch the effects on the fly as you make changes to any of these parameters.

Establish an initial energy profile of device under test (DUT)

The starting point is to use a smart, programmable power supply and measurement unit to profile and record the energy usage of your device. This is necessary because simple peak and average power measurements with multimeters can only provide limited information. The Otii Arc Pro from Qoitech was used here to illustrate the process.

Consider a wireless MCU. In run mode, it may be putting out a +6 dBm wireless signal and consuming 10 mA or more. In deep sleep mode, the current consumption might fall below 0.2 µA. That’s a 50,000:1 dynamic range, and changes happen almost instantaneously, certainly within microseconds. Conventional multimeters can’t capture changes like these, so they can’t help you understand the precise energy profile of your device. Without that, your choice of battery is open to miscalculation.
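This is also why a duty-cycled device's average current tells you so little about its peaks. A minimal model using the run and sleep currents quoted above, with assumed burst and reporting-period timings:

```python
# Why peak/average meters mislead: a duty-cycled sensor's average current is
# dominated by sleep time, while its peaks span nearly five decades.
i_run = 10e-3        # run-mode current with radio transmitting (A)
i_sleep = 0.2e-6     # deep-sleep current (A)
t_run = 0.05         # assumed 50 ms transmit burst (s)
period = 60.0        # assumed one report per minute (s)

dynamic_range = i_run / i_sleep
i_avg = (i_run * t_run + i_sleep * (period - t_run)) / period
print(f"dynamic range {dynamic_range:,.0f}:1, average {i_avg * 1e6:.1f} uA")
```

A single average figure in the microamp range hides both the millisecond-scale 10 mA bursts and the sub-microamp sleep floor, which is exactly what a profiling power supply is built to capture.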

Your smart power supply is a digitally controlled power source offering control over parameters such as voltage, current, power, and mode of operation. Voltage control should ideally be in 1 mV steps so that you can determine the DUT’s energy consumption at different voltage levels to mimic battery discharge.

You’ll need sense pins to monitor the DUT power rails, a UART to see what happens when you make code changes, and GPIO pins for status monitoring. Standalone units are available, but it can be more flexible and economical to choose a smart power supply that uses your computer’s processing resources and display, as shown in the example below.

Figure 2 The GUI for a smart power supply can run on Windows, MacOS, or Ubuntu. Source: Qoitech

After connecting, you power and monitor the DUT simultaneously. You’re presented with a clear picture of voltages and current changes over time. Transients that you would never be able to see on a traditional meter are clearly visible and you can immediately detect unexpected anomalies.

Figure 3 A smart power profiler gives you a detailed comparison of your device’s energy consumption for different hardware and firmware versions. Source: Qoitech

From the stored data in the smart power supply, you’ll be able to make a short list of battery options.

Choosing a battery

Battery selection needs to consider capacity, energy density, voltage, discharge profile, and temperature. Datasheet comparisons are the starting point but it’s important to validate the claims of battery manufacturers by benchmarking their batteries through testing. Datasheet information is based on performance under “normal conditions” which may not apply to your application.

Depending on your smart power supply model, the DUT energy profiling described earlier may provide an initial battery life estimate based on a pre-programmed battery type and capacity. Either the same instrument or a separate piece of test equipment may then be used for a more detailed examination of battery performance in your application. Accelerated discharge measurements, when properly set up, are a time-saving alternative to the years it may take a well-designed IoT device to exhaust its battery.
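Before any accelerated testing, a first-pass life estimate is just capacity over average current, derated for real-world conditions. All values below are assumptions for illustration only:

```python
# First-pass battery-life estimate from a measured energy profile,
# derated for self-discharge, temperature, and cutoff-voltage effects.
capacity_mah = 1000.0   # assumed rated cell capacity (mAh)
derate = 0.85           # assumed usable fraction under real conditions
i_avg_ua = 8.5          # assumed average current from the profiler (uA)

life_hours = capacity_mah * derate / (i_avg_ua / 1000.0)
print(f"estimated life ~ {life_hours / (24 * 365):.1f} years")
```

An estimate like this only narrows the short list; the accelerated discharge measurements described above are what validate it against real cells.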

These measurements must follow best practices to create an accurate profile: maintain a discharge consistent with the DUT’s peak current, and shorten the cycle time while increasing the sleep current so that the battery can recover. You should also consult with battery manufacturers to validate any assumptions you make during the process.

You can profile the same battery chemistries from different manufacturers, or different battery chemistries, perhaps comparing lithium coin cells with AA alkaline batteries.

Figure 4 The comparison shows accelerated discharge characteristics for AA and AAA alkaline batteries from five different manufacturers. Source: Qoitech

By this stage, you have a good understanding of both the energy profile of your device and of the battery type and capacity that’s likely to result in the longest operating life in your applications. Upload your chosen battery profile to your smart power supply and set it up to emulate that battery.

Optimize and iterate

You can now go back to the DUT and optimize hardware and software for the lowest power consumption in near real-world conditions. You may have the flexibility to experiment with different wireless protocols, but even if that’s not the case, experimenting with sleep and deep-sleep modes, network routing, and even alternative data security protocols can all yield improvements, avoiding a common problem where 40 bytes of data can easily become several Kbytes.

Where the changes create a significant shift in your device’s energy profile, you may also review the choice of battery and evaluate again until you achieve the best match.

While this process may seem lengthy, it can be completed in just a few hours and may extend the operating life of a wireless IoT edge device, and hence reduce battery waste, by up to 30%.

Björn Rosqvist, co-founder and chief product officer of Qoitech, has 20+ years of experience in power electronics, automotive, and telecom with companies such as ABB, Ericsson, Flatfrog, Sony, and Volvo Cars.


Related Content


The post Optimize battery selection and operating life of wireless IoT devices appeared first on EDN.

Marktech unveils multi-chip packages with InGaAs photodiodes and multiple LED emitters

Semiconductor today - Thu, 05/09/2024 - 13:44
Marktech Optoelectronics Inc of Latham, NY, USA, a designer and manufacturer of optoelectronics components and assemblies — including UV, visible, near-infrared (NIR) and short-wave infrared (SWIR) emitters, detectors, and indium phosphide (InP) epiwafers — has unveiled its new multi-chip packages with indium gallium arsenide (InGaAs) photodiodes and multiple LED emitters...

Navitas’ CEO & co-founder Gene Sheridan a finalist in EY’s Entrepreneur Of The Year 2024 Greater Los Angeles Award

Semiconductor today - Thu, 05/09/2024 - 11:22
Gene Sheridan, CEO & co-founder of gallium nitride (GaN) power IC and silicon carbide (SiC) technology firm Navitas Semiconductor Corp of Torrance, CA, USA, has been named by Ernst & Young LLP (EY US) as an Entrepreneur Of The Year 2024 Greater Los Angeles Award finalist...

Flexible electronic device may offer a new approach to the treatment of spinal injuries

News Medical Microelectronics - Thu, 05/09/2024 - 05:55
A tiny, flexible electronic device that wraps around the spinal cord could represent a new approach to the treatment of spinal injuries, which can cause profound disability and paralysis.

Apple’s New M4 Processor: An ‘Outrageously Powerful’ Device for AI

AAC - Thu, 05/09/2024 - 03:00
The new SoC leverages 28 billion transistors to make the new iPad Pro "the most powerful device of its kind."

An Explanation of Undervoltage Lockout

AAC - Wed, 05/08/2024 - 22:00
Learn how undervoltage lockout (UVLO) can protect semiconductor devices and electronic systems from potentially hazardous operation.

Apple’s Spring 2024: In-person announcements no more?

EDN Network - Wed, 05/08/2024 - 17:35

By means of introduction to my coverage of Apple’s 2024-so-far notable news, I’d like to share the amusing title, along with spot-on excerpts from the body text, from a prescient piece I saw on Macworld yesterday. The title? “Get ready for another Apple meeting that could have been an email”. Now the excerpts:

Apple started running virtual press events during the pandemic when in-person gatherings made little sense and at various times were frowned upon or literally illegal. But Apple has largely stuck with that format even as health concerns lessened and its own employees were herded back into the office.

 Why is that? Because virtual events have advantages far beyond the containment of disease. Aside from avoiding the logistical headaches of getting a thousand bad-tempered journalists from around the world to the same place at the same time, a pre-recorded video presentation is much easier to run smoothly than a live performance…

 Nobody cringes harder than me when live performers get things wrong, and I absolutely get the attraction of virtual keynotes for Apple. But it does raise some awkward existential questions about why we need to bother with the elaborate charade that is a keynote presentation. What, after all, is the point of a keynote? If it’s just to get information about new products, that can be done far more efficiently via a press release that you can read at your own speed; just the facts, no sitting through skits and corporate self-congratulation.

 Is it to be marketed by the best hypemen in the business? If that’s really something you want, you might as well get it from an ad: virtual keynotes give none of that dubious excitement and tribalistic sense of inclusivity you get with a live performance. And we’ve even lost the stress-test element of seeing an executive operating the product under extreme pressure. What we’re left with is a strange hybrid: a long press release read out by a series of charisma-free executives, interspersed with advertisements.

I said something similar in my coverage of Apple’s June 2023 Worldwide Developer Conference (WWDC):

This year’s event introductory (and product introduction) presentation series was lengthy, with a runtime of more than two hours, and was also entirely pre-recorded. This has been Apple’s approach in recent years, beginning roughly coincident with the COVID lockdown and consequent transition to a virtual event (beginning in 2020; 2019 was still in-person)…even though both last- and this-years’ events returned to in-person from a keynote video viewing standpoint.

 On the one hand, I get it; as someone who (among other things) delivers events as part of his “day job”, the appeal of a tightly-scripted, glitch-free set of presentations and demonstrations can’t be understated. But live events also have notable appeal: no matter how much they’re practiced beforehand, there’s still the potential for a glitch, and therefore when everything still runs smoothly, what’s revealed and detailed is (IMHO) all the more impactful as a result.

What we’ve ended up with so far this year is a mix of press release-only and virtual-event announcements, in part (I suspect, as does Macworld) because of “building block” mass-production availability delays for the products in today’s (as I write these words on Tuesday, May 7) news.

But I’m getting ahead of myself.

The Vision Pro

Let’s rewind to early January, when Apple confirmed that its first-generation Vision Pro headset (which I’d documented in detail within last June’s WWDC coverage) would open for pre-orders on January 19, with in-store availability starting February 2.

Granted, the product’s technology underpinnings remain amazing 11 months post-initial unveil:

But I’m still not sold on the mainstream (translation: high volume) appeal of such a product, no matter how many entertainment experiences and broader optimized applications Apple tries to tempt me with (and no matter how much Apple may drop the price in the future, assuming it even can to a meaningful degree, given bill-of-materials cost and profit-expectation realities). To be clear, this isn’t an Apple-only diss; I’ve expressed the same skepticism in the past about offerings from Oculus-now-Meta and others. And at the root of my pessimism about AR/VR/XR/choose-your-favorite-acronym (or, if you’re Apple, “spatial computing”, whatever that means) headsets may indeed be enduring optimism of a different sort.

Unlike the protagonists of science fiction classics such as William Gibson’s Neuromancer and Virtual Light, Neal Stephenson’s Snow Crash, and Ernest Cline’s Ready Player One, I don’t find the real world to be sufficiently unpleasant that I’m willing to completely disengage from it for long periods of time (and no, the Vision Pro’s EyeSight virtual projected face doesn’t bridge this gap). Scan through any of the Vision Pro reviews published elsewhere and you’ll on-average encounter similar lukewarm-at-best enthusiasm from others. And I can’t help but draw an accurate-or-not analogy to Magic Leap’s 2022 consumer-to-enterprise pivot when I see subsequent Apple press releases touting medical and broader business Vision Pro opportunities.

So is the Vision Pro destined to be yet another Apple failure? Maybe…but definitely not assuredly. Granted, we might have another iPod Hi-Fi on our hands, but keep in mind that the first-generation iPhone and iPad also experienced muted adoption. Yours truly even dismissively called the latter “basically a large-screen iPod touch” on a few early occasions. So let’s wait and see how quickly the company and its application-development partners iterate both the platform’s features and cost before we start publishing headlines and crafting obituaries about its demise.

The M3-based MacBook Air

Fast-forward to March, and Apple unveiled M3 SoC-based variants of the MacBook Air (MBA), following up on the 13” M2-based MBA launched at the 2022 WWDC and the first-time-in-this-size 15” M2 MBA unveiled a year later:

Aside from the Apple Silicon application processor upgrade (first publicly discussed last October), there’s faster Wi-Fi (6E) along with an interesting twist on expanded external-display support; the M3-based models can now simultaneously drive two of ‘em, but only when the “clamshell” is closed (i.e., when the internal display is shut off). But the most interesting twist, at least for this nonvolatile-memory-background techie, is that Apple did a seeming back-step on its flash memory architecture. In the M2 generation, the 256 GByte SSD variant consisted of only a single flash memory chip (presumably single-die, to boot, bad pun intended), which bottlenecked performance due to the resultant inability for multi-access parallelism. To get peak read and (especially evident) write speeds, you needed to upgrade to a 512 GByte or larger SSD.

The M3 generation seemingly doesn’t suffer from the same compromise. A post-launch teardown revealed that (at least for that particular device…since Apple multi-sources its flash memory, one data point shouldn’t necessarily be extrapolated to an all-encompassing conclusion) the 256 GByte SSD subsystem comprised two 128 GByte flash memory chips, with consequent restoration of full performance potential. I’m particularly intrigued by this design decision considering that two 128 GByte flash memories conceivably cost Apple more than one 256 GByte alternative (likely the root cause of the earlier M1-to-M2 move). That said, I also don’t underestimate the formidable negotiation “muscle” of Apple’s procurement department…


Last week, we got Apple’s second-fiscal-quarter earnings results. I normally don’t cover these at all, and I won’t dwell long on the topic this time, either. But they reveal Apple’s ever-increasing revenue and profit reliance on its “walled garden” services business (to the ever-increasing dismay of its “partners”, along with various worldwide government entities), given that hardware revenue dropped for all hardware categories save Macs, notably including both iPhone and iPad and in spite of the already-discussed Vision Pro launch. That said, the following corporate positioning seemed to be market-calming:

In the March quarter a year ago, we were able to replenish iPhone channel inventory and fulfill significant pent-up demand from the December quarter COVID-related supply disruptions on the iPhone 14 Pro and 14 Pro Max. We estimate this one-time impact added close to $5 billion to the March quarter revenue last year. If we removed this from last year’s results, our March quarter total company revenue this year would have grown.

The iPad Air

And today we got new iPads and accessories. The iPad Air first:

Reminiscent of the aforementioned MacBook Air family earlier this year, they undergo a SoC migration, this time from the M1 to the M2. They also get a relocated front camera, friendlier (as with 2022’s 10th generation conventional iPad) for landscape-orientation usage. And to the “they” in the previous two sentences, as well as again reminiscent of the aforementioned MacBook Air expansion to both 13” and 15” form factors, the iPad Air now comes in both 11” and 13” versions, the latter historically only offered with the iPad Pro.

Speaking of which

The M4 SoC

Like their iPad Air siblings, the newest generation of iPad Pros relocate the front camera to a more landscape orientation-friendly bezel location. But that’s among the least notable enhancements this time around. On the flip side of the coin, perhaps most notable news is that they mark the first-time emergence of Apple’s M4 SoC. I’ll begin with obligatory block diagrams:

Some historical perspective is warranted here. Only six months ago, when Apple rolled out its first three (only?) M3 variants along with inclusive systems, I summarized the to-date situation:

Let’s go back to the M1. Recall that it ended up coming in four different proliferations:

  • The entry-level M1
  • The M1 Pro, with increased CPU and GPU core counts
  • The M1 Max, which kept the CPU core constellation the same but doubled up the graphics subsystem, and
  • The M1 Ultra, a two-die “chiplet” merging together two M1 Max chips with requisite doubling of various core counts, the maximum amount of system memory, and the like

But here’s the thing: it took a considerable amount of time—1.5 years—for Apple to roll out the entire M1 family from its A14 Bionic development starting point:

  • A14 Bionic (the M1 foundation): September 15, 2020
  • M1: November 10, 2020
  • M1 Pro and Max: October 18, 2021
  • M1 Ultra: March 8, 2022

 Now let’s look at the M2 family, starting with its A15 Bionic SoC development foundation:

 Nearly two years’ total latency this time: nine months alone from the A15 to the M2.

I don’t yet know for sure, but for a variety of reasons (process lithography foundation, core mix and characteristics, etc.) I strongly suspect that the M3 chips are not based on the A16 SoC, which was released on September 7, 2022. Instead, I’m pretty confident in prognosticating that Apple went straight to the A17 Pro, unveiled just last month (as I write these words), on September 12 of this year, as their development foundation.

 Now look at the so-far rollout timeline for the M3 family—I think my reason for focusing on it will then be obvious:

  • A17 Pro: September 12, 2023
  • M3: October 30, 2023
  • M3 Pro and Max: October 30, 2023
  • M3 Ultra: TBD
  • M3 Extreme (a long-rumored four-Max-die high-end proliferation, which never ended up appearing in either the M1 or M2 generations): TBD (if at all)

Granted, we only have the initial variant of the M4 SoC so far. There’s no guarantee at this point that additional family members won’t have M1-reminiscent sloth-like rollout schedules. But for today, focus only on the initial-member rollout latencies:

  • M1 to M2: more than 19 months
  • M2 to M3: a bit more than 16 months
  • M3 to M4: a bit more than 6 months

Note, too, that Apple indicates that the M4 is built on a “second-generation 3 nm process” (presumably, like its predecessors, from TSMC). Time for another six-months-back quote:

Conceptually, the M3 flavors are reminiscent of their precursors, albeit with newer generations of various cores, along with a 3 nm fabrication process foundation.

As for the M4, here’s my guess: from a CPU core standpoint, especially given the rapid generational development time, the performance and efficiency cores are likely essentially the same as those in the M3, albeit with some minor microarchitecture tweaks to add-and-enhance deep learning-amenable instructions and the like, hence this press release excerpt:

Both types of cores also feature enhanced, next-generation ML accelerators.

The fact that there are six efficiency cores this time, versus four in the M3, is likely due in no small part to the second-generation 3 nm lithography’s improved transistor packing capabilities along with more optimized die layout efficiencies (any potential remaining M3-to-M4 die size increase might also be cost-counterbalanced by TSMC’s improved 3 nm yields versus last year).

What about the NPU, which Apple brands as the “Neural Engine”? Well, at first glance it’s a significant raw-performance improvement over the one in the M3: 38 TOPS (trillion operations per second) versus 18 TOPS. But here comes another six-months-back quote about the M3:

The M3’s 16-core neural engine (i.e., deep learning inference processing) subsystem is faster than it was in the previous generation. All well and good. But during the presentation, Apple claimed that it was capable of 18 TOPS peak performance. Up to now I’d been assuming, as you know from the reading you’ve already done here, that the M3 was a relatively straight-line derivation of the A17 Pro SoC architecture. But Apple claimed back in September that the A17 Pro’s neural engine ran at 35 TOPS. Waaa?

 I see one (or multiple-in-combination) of (at least) three possibilities to explain this discrepancy:

  • The M3’s neural engine is an older or more generally simpler design than the one in the A17 Pro
  • The M3’s neural engine is under-clocked compared to the one in the A17 Pro
  • The M3’s neural engine’s performance was measured using a different data set (INT16 vs INT8, for example, or FLOAT vs INT) than what was used to benchmark the A17 Pro

My bet remains that the first possibility of the three listed was the dominant if not sole reason for the M3 NPU’s performance downgrade versus that in the A17 Pro. And I’ll also bet that the M4 NPU is essentially the same as the one in the A17 Pro, perhaps again with some minor architecture tweaks (or maybe just a slight clock boost!). So then is the M4 just a tweaked A17 Pro built on a tweaked 3 nm process? Not exactly. Although the GPU architecture also seems to be akin to, if not identical to, the one in the A17 Pro (six-core implementation) and M3 (10-core matching count), the display controller has more tangibly evolved this time, likely in no small part for the display enhancements which I’ll touch on next. Here’s the summary graphic:

More on the iPad Pro

Turning attention to the M4-based iPads themselves, the most significant thing here is that they’re M4-based iPads. This marks the first time that a new Apple Silicon generation has shown up in something other than an Apple computer (notably skipping the M3-based iPad Pro iteration in the process, as well), and I don’t think it’s just a random coincidence. Apple’s clearly, to me, putting a firm stake in the ground as to the corporate importance of its comparatively proprietary (versus the burgeoning array of Arm-based Windows computers) tablet product line, both in an absolute sense and versus computers (Apple’s own and others). A famous Steve Jobs quote comes to my mind at this point:

If you don’t cannibalize yourself someone else will.

The other notable iPad Pro enhancement this time around is the belated but still significant display migration to OLED technology, which I forecasted last August. Unsurprisingly, thanks to the resultant elimination of a dedicated backlight (an OLED attribute I noted way back in 2010 and revisited in 2019), the tablets are now significantly thinner, in spite of the fact that they’re constructed in a distinctive dual-layer brightness-boosting “sandwich” (harking back to my earlier display controller enhancement comments; note that a separate, simultaneous external tethered display is still supported). And reflective of the tablets’ high-end classification, Apple has rolled out corresponding “Pro” versions of its Magic Keyboard (adding a dedicated function-key row, along with a haptic feedback-enhanced larger trackpad):

And Pencil, adding “squeeze” support, haptic feedback of its own, and other enhancements:

Other notable inter- and intra-generational tweaks:

  • No more mmWave 5G support.
  • No more ultra-wide rear camera, either.
  • Physical SIM slots? Gone, too.
  • Ten-core CPU M4 SoCs are unique to the 1 TByte and 2 TByte iPad Pro variants; lower-capacity mass storage models get only 9 CPU cores (one fewer performance core, to be precise; interestingly, GPU core counts are unchanged across product variants this time). They’re also allocated only half the RAM of their bigger-SSD brethren: 8 GBytes vs 16 GBytes.
  • 1 and 2 TByte iPads are also the only ones offered a nano-texture glass option.

Given that Apple did no iPad family updates at all last year, this is an encouraging start to 2024. That said, the base 10th-generation iPad is still the same as when originally unveiled in October 2022, although it did get a price shave today (and its 9th-generation precursor is no longer in production, either). And the 6th-generation iPad mini introduced in September 2021 is still the latest-and-greatest, too. I’m admittedly more than a bit surprised and pleased that my unit purchased gently used off eBay last summer is still state-of-the-art!

iPad software holdbacks

And as for Apple’s ongoing push to make the iPad, and the iPad Pro specifically, a credible alternative to a full-blown computer? It’s a topic I first broached at length back in September 2018, and to at least some degree the situation hasn’t tangibly changed since then. Tablet hardware isn’t fundamentally what’s holding the concept back from becoming a meaningful reality, but then again, I’d argue that it never was the dominant shortcoming. It was, and largely remains, software; both the operating system and the applications that run on it. And I admittedly felt validated in my opinion here when I perused The Verge’s post-launch event livestream archive and saw it echoed there, too.

Sure, Apple just added some nice enhancements to its high-end multimedia-creation and editing tablet apps (along with their macOS versions, I might add), but how many folks are really interested in editing multiple ProRes streams without proxies on a computer nowadays, much less on an iPad? What about tangible improvements for the masses? Sure, you can use a mouse with an iPad now, but multitasking attempts still, in a word, suck. And iPadOS still doesn’t even support the basics, such as multiple user accounts. Then again, there’s always this year’s WWDC, taking place mid-next month, which I will of course once again be covering for EDN and y’all. Hope springs eternal, I guess. Until then, let me know your thoughts in the comments.

p.s…I realized just before pressing “send to Aalyia” that I hadn’t closed the loop on my earlier “building block mass-production availability delays” tease. My suspicion is that the new iPads were originally supposed to be unveiled alongside the new MacBook Airs back in March, in full virtual-event form. But in the spirit of “where there’s smoke, there’s fire”, I’m guessing that the longstanding rumors of OLED display volume production delays (and/or second-generation 3 nm process volume production delays) are what pushed the iPads to today.

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Apple’s Spring 2024: In-person announcements no more? appeared first on EDN.

TSMC Certifies a Host of Top EDA Tools for New Process Nodes

AAC - Wed, 05/08/2024 - 16:00
With TSMC scaling down to 2 nm and lower, the semiconductor giant is working with Cadence, Siemens, and Synopsys to bring updated EDA tools to IC designers.

Dürr and Rohde & Schwarz collaborate on ADAS/AD functional testing for EOL and PTI

ELE Times - Wed, 05/08/2024 - 14:22

Automated and autonomous vehicles, which rely on sensors like cameras and radar, either assist or take over decision-making in traffic situations. Sensors’ proper interaction and functionality must be thoroughly tested to ensure road safety. Dürr and Rohde & Schwarz, a global technology group, have developed an innovative, cost-effective solution for over-the-air (OTA) vehicle-in-the-loop (VIL) testing. This solution validates conformity and effectiveness during end-of-line (EOL) testing or periodical technical inspection (PTI).

Road safety is a key challenge for future mobility, especially for automated and autonomous vehicles. Ensuring the continued functionality of advanced driver assistance systems (ADAS) and autonomous driving (AD) features is critical for long-term vehicle safety and performance. Therefore, manufacturers of vehicles equipped with these features require certification, either from a third party or authority, or by self-certification. A vehicle-in-the-loop (VIL) test can validate the correct operation of all ADAS/AD functions during end-of-line (EOL) testing and ensure conformity of production (COP) before a vehicle leaves the factory. In addition, maintaining proper functionality throughout a vehicle’s lifespan requires additional control measures during periodical technical inspection (PTI).

Simulating various driving scenarios

To address these additional requirements in the EOL and PTI process, Dürr and Rohde & Schwarz initiated a joint project incorporating Dürr’s patented x-road curve multi-function roll test stand, Rohde & Schwarz’ new RadEsT radar target simulator and the open-source simulation platform CARLA. The combination creates a virtual environment specifically for the camera and radar sensors installed in the test vehicle, allowing for the OTA simulation of different inspection scenarios without touching the vehicle. These scenarios include critical situations such as unintended lane departures and other vehicles braking suddenly or switching lanes directly in front of the test vehicle. The test vehicle must react promptly to changes in the VIL simulation and, if necessary, trigger the automated lane-keeping systems (ALKS) or advanced emergency braking systems (AEBS), for example, to pass inspection.
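At its core, the pass criterion described above (the vehicle must react promptly to a simulated lead vehicle braking or cutting in) reduces to checking the vehicle's braking response against time-to-collision. The sketch below is an illustrative simplification, not the Dürr/Rohde & Schwarz implementation; the data structure and the 1.5 s threshold are assumptions:

```python
# Illustrative simplification of the pass/fail logic, not the actual
# Dürr/Rohde & Schwarz implementation: the Snapshot structure and the
# 1.5 s time-to-collision threshold are assumptions for this sketch.

from dataclasses import dataclass

TTC_THRESHOLD_S = 1.5  # assumed TTC below which AEBS must already be acting

@dataclass
class Snapshot:
    gap_m: float               # distance to the simulated lead vehicle
    closing_speed_mps: float   # ego speed minus lead speed (>0 means closing)
    braking: bool              # is the vehicle under test braking?

def time_to_collision(s: Snapshot) -> float:
    if s.closing_speed_mps <= 0:
        return float("inf")    # not on a collision course
    return s.gap_m / s.closing_speed_mps

def aebs_check_passes(trace: list[Snapshot]) -> bool:
    """Pass if the vehicle is braking whenever TTC drops below the threshold."""
    for s in trace:
        if time_to_collision(s) < TTC_THRESHOLD_S and not s.braking:
            return False
    return True

# The simulated lead vehicle brakes hard; the test vehicle reacts in time.
trace = [
    Snapshot(gap_m=30.0, closing_speed_mps=5.0, braking=False),  # TTC 6.0 s
    Snapshot(gap_m=12.0, closing_speed_mps=10.0, braking=True),  # TTC 1.2 s
]
print(aebs_check_passes(trace))  # True
```

A real VIL rig evaluates the vehicle's actual over-the-air sensor response rather than a pre-recorded trace, but the judgment it renders is of this shape: did the ADAS/AD stack act before the simulated scenario became unrecoverable?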

Patented technology for ultimate versatility

The 4WD x-road curve allows for unrestricted driving with steering movements, facilitating cornering maneuvers without altering the test vehicle. Laser measurement technology recognizes the front wheels’ position and steering angle while swiveling front double roller units automatically adjust for any angular difference to the driving direction. This ensures the vehicle remains centered on the test stand even at high speeds, regardless of the steering wheel’s position, and without the need for vehicle fixation, thus minimizing cycle times.

Resilient processes and precise results

RadEsT, the radar target simulator developed by Rohde & Schwarz, is exceptionally resilient to external factors, ensuring consistent performance in production and workshop environments. Its ability to provide precise and repeatable measurements makes it an invaluable tool for conducting accurate assessments in real-world conditions. Moreover, its compact and lightweight design enables easy and flexible integration at a cost-effective price point.

Easy to use test scenario generation

The open-source tool CARLA offers vehicle manufacturers or PTI organizations maximum flexibility with additional cost-saving opportunities and great potential for scenario selection. The recently announced upgrade of the CARLA simulator to Unreal Engine 5 is expected to enhance modeling, simulation realism, and performance, particularly for over-the-air camera simulation via monitors.

By combining Dürr’s patented x-road curve multi-function roll test stand, Rohde & Schwarz’ innovative radar target simulator, and the open-source platform CARLA, automated and autonomous vehicles’ full functionality can be cost-effectively evaluated to ensure proper operation in production and throughout the vehicle’s complete lifespan.

The post Dürr and Rohde & Schwarz collaborate on ADAS/AD functional testing for EOL and PTI appeared first on ELE Times.

Aixtron grows Q1 revenue and profit significantly year-on-year

Semiconductor today - Wed, 05/08/2024 - 14:14
For first-quarter 2024, deposition equipment maker Aixtron SE of Herzogenrath, near Aachen, Germany has reported revenue of €118.3m (near the top end of the €100–120m guidance range). This was down 45% on last quarter’s record €214.2m but up 53% on €77.2m a year ago (although the latter was reduced by delays in the issue of export licenses, pushing €70m worth of shipments out of the quarter)...

Radiation-Tolerant DC-DC 50-Watt Power Converters Provide High-Reliability Solution for New Space Applications

ELE Times - Wed, 05/08/2024 - 14:10

The LE50-28 power converters are available in nine variants with single- and triple-outputs for optimal design configurability

The Low-Earth Orbit (LEO) market is rapidly growing as private and public entities alike explore the new space region for everything from 5G communication and cube satellites to IoT applications. There is an increased demand for standard space grade solutions that are reliable, cost effective and configurable. To meet this market need, Microchip Technology (Nasdaq: MCHP) today announces a new family of Radiation-Tolerant (RT) LE50-28 isolated DC-DC 50W power converters available in nine variants with single- and triple-outputs ranging from 3.3V to 28V.

The off-the-shelf LE50-28 family of power converters is designed to meet MIL-STD-461. The power converters have a companion EMI filter and offer customers ease of design, making it simple to scale and customize by choosing one or three outputs based on the voltage range needed for the end application. The series also provides the flexibility to parallel up to four power converters to reach 200 W.

Designed to serve 28V bus systems, the LE50-28 isolated DC-DC power converters can be integrated with Microchip’s PolarFire® FPGAs, microcontrollers and LX7720-RT motor control sensor for a complete electrical system solution. Designers can use these high-reliability radiation-tolerant power solutions to significantly reduce system-level development time.

“The new family of LE50-28 devices enable our customers to succeed in new space and LEO environments where components must withstand harsh conditions,” said Leon Gross, vice president of Microchip’s discrete products group. “Our off-the-shelf products offer a reliable and cost-effective solution designed for the durability our customers have come to expect from Microchip.”

The LE50-28 power converters offer a variety of electrical connection and mounting options. The LE50 series is manufactured with conventional surface mount and thru-hole components on a printed wiring board. This distinction in the manufacturing process can reduce time to market and risks associated with supply chain disruptions.

The LE50-28 family offers space-grade radiation tolerance with 50 Krad Total Ionizing Dose (TID) and Single Event Effects (SEE) latch-up immunity of 37 MeV·cm2/mg linear energy transfer.

Microchip offers a wide range of components to support the new space evolution with sub-QML strategy to bridge the gap between traditional Qualified Manufacturers List (QML) components and Commercial-Off-The-Shelf (COTS) components. Designed for new space applications, sub-QML components are the optimal solution that combines the radiation tolerance of QML components with our space flight heritage that permits lower screening requirements for lower cost and shorter lead times.

Microchip’s extensive space solutions include FPGAs, power and discrete devices, memory products, communication interfaces, oscillators, microprocessors (MPUs) and MCUs, offering a broad range of options across qualification levels, and the largest qualified plastic portfolio for space applications. For more information, visit our space solutions webpage.

Support and Resources

The new family of LE50-28 devices is supported by comprehensive analysis and test reports, including worst-case analysis, electrical stress analysis and reliability analysis.

Pricing and Availability

The LE50-28 single-output and LE50-28 triple-output are now available. For additional information and to purchase, contact a Microchip sales representative, authorized worldwide distributor or visit Microchip’s Purchasing and Client Services website, www.microchipdirect.com.


High-res images available through Flickr or editorial contact (feel free to publish):

  • Application image: flickr.com/photos/microchiptechnology/53332596878/sizes/l
  • Video link: https://www.youtube.com/watch?v=XjXePfpjNa4

The post Radiation-Tolerant DC-DC 50-Watt Power Converters Provide High-Reliability Solution for New Space Applications appeared first on ELE Times.

TSMC crunch heralds good days for advanced packaging

EDN Network - Wed, 05/08/2024 - 14:09

TSMC’s advanced packaging capacity is fully booked until 2025 due to soaring demand for large, powerful chips from cloud service giants like Amazon AWS, Microsoft, Google, and Meta. Nvidia and AMD are known to have secured TSMC’s chip-on-wafer-on-substrate (CoWoS) and system-on-integrated-chips (SoIC) capacity for advanced packaging.

Nvidia’s H100 chips—built on TSMC’s 4-nm process—use CoWoS packaging. On the other hand, AMD’s MI300 series accelerators, manufactured on TSMC’s 5-nm and 6-nm nodes, employ SoIC technology for the CPU and GPU combo before using CoWoS for high-bandwidth memory (HBM) integration.

Figure 1 CoWoS is a wafer-level system integration platform that offers a wide range of interposer sizes, HBM cubes, and package sizes. Source: TSMC

CoWoS is an advanced packaging technology that offers the advantage of larger package size and more I/O connections. It stacks chips and packages them onto a substrate to facilitate space, power consumption, and cost benefits.

SoIC, another advanced packaging technology created by TSMC, integrates active and passive chips into a new system-on-chip (SoC) architecture that is electrically identical to native SoC. It’s a 3D heterogeneous integration technology manufactured in front-end of line with known-good-die and offers advantages such as high bandwidth density and power efficiency.

TSMC is ramping up its advanced packaging capacity. It aims to triple the production of CoWoS-based wafers, producing 45,000 to 50,000 CoWoS-based units per month by the end of 2024. Likewise, it plans to double the capacity of SoIC-based wafers by the end of this year, manufacturing between 5,000 and 6,000 units a month. By 2025, TSMC wants to hit a monthly capacity of 10,000 SoIC wafers.
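Those ramp targets also imply rough current run rates, by simple back-calculation from the multiples quoted above (the derived figures are inferences from the article's numbers, not TSMC disclosures):

```python
# Back-calculating current monthly capacity from the ramp targets quoted
# above (simple division; the derived figures are inferences, not TSMC
# disclosures).

def implied_current_capacity(target_per_month: int, ramp_multiple: int) -> float:
    """If the target is 'ramp_multiple' times today's output, today's output is target/multiple."""
    return target_per_month / ramp_multiple

cowos_today = implied_current_capacity(45_000, 3)  # tripling to 45k => ~15k/month now
soic_today = implied_current_capacity(5_000, 2)    # doubling to 5k => ~2.5k/month now

print(cowos_today, soic_today)  # 15000.0 2500.0
```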

Figure 2 SoIC is fully compatible with advanced packaging technologies like CoWoS and InFO. Source: TSMC

Morgan Stanley analyst Charlie Chan has raised an interesting and pertinent question: How do companies like TSMC judge advanced packaging demand and allocate capacity accordingly? What’s the benchmark that TSMC uses for its advanced packaging customers?

Jeff Su, director of investor relations at TSMC, while answering Chan, acknowledged that the demand for advanced packaging is very strong and the capacity is very tight. He added that TSMC has more than doubled its advanced packaging capacity in 2024. Moreover, the mega-fab has leveraged its special relationships with OSATs to fulfill customer needs.

TSMC works closely with OSATs, including its Taiwan neighbor and the world’s largest IC packaging and testing company, ASE. TSMC chief C. C. Wei also mentioned during an earnings call that Amkor plans to build an advanced packaging and testing plant next to TSMC’s fab in Arizona. Then there is news circulating in trade media about TSMC planning to build an advanced packaging plant in Japan.

Advanced packaging is now an intrinsic part of the AI-driven computing revolution, and the rise of chiplets will only bolster its importance in the semiconductor ecosystem. TSMC’s frantic capacity upgrades and tie-ups with OSATs point to good days for advanced packaging technology.

TSMC’s archrivals Samsung and Intel Foundry will undoubtedly be watching this supply-and-demand saga closely while recalibrating their respective strategies. We’ll continue covering this exciting aspect of the semiconductor makeover in the coming days.

Related Content


The post TSMC crunch heralds good days for advanced packaging appeared first on EDN.

CNIPA validates EPC’s GaN gate technology patent

Semiconductor today - Wed, 05/08/2024 - 11:58
Efficient Power Conversion Corp (EPC) of El Segundo, CA, USA — which makes enhancement-mode gallium nitride on silicon (eGaN) power field-effect transistors (FETs) and integrated circuits for power management applications — says that the China National Intellectual Property Administration (CNIPA) has validated the claims of EPC’s patent ‘Compensated gate MISFET and method for fabricating the same’ (Chinese Patent No. ZL201080015425.X) for enhancement-mode GaN semiconductor devices...

Waaree Energies Limited and Ecofy Empower Indian Homeowners with Affordable Rooftop Solar Solutions and Hassle-Free Financing

ELE Times - Wed, 05/08/2024 - 11:51

Waaree Energies Limited, India’s largest manufacturer of solar PV modules with the largest aggregate installed capacity of 12 GW, as of June 30, 2023 (Source: CRISIL Report), has established a collaboration with Ecofy, an NBFC backed by Eversource Capital, committed to providing green finance for climate-positive initiatives. Ecofy is committing Rs 100 crores into the partnership, showcasing confidence in Waaree’s capabilities and the renewable energy sector’s growth potential.

Complementing the government’s PM Surya Ghar Yojana 2024 and leveraging favourable market conditions, this partnership is expected to contribute to India’s renewable energy transition. By synergizing Waaree Energies Limited’s solar expertise with Ecofy’s digital financing solutions, the initiative aims to accelerate the solarisation of over 10,000 rooftops across households and MSMEs, as envisioned in the PM Surya Ghar Yojana 2024. Through this partnership, the companies intend to make clean energy more accessible and affordable for homeowners, aiding the nationwide objective of solarizing households and MSMEs.

Kailash Rathi, Head of Partnerships & Co-Lending at Ecofy, added, “Our collaboration with Waaree signifies a milestone towards solar adoption at a time when the industry is at an inflection point. Over the past 15 months, Ecofy has empowered over 5000 rooftop solar customers. We have invested heavily in this segment enabling penetration through product innovation and instant approvals. As the country prepares for the peak solar season, the collaboration between Ecofy and Waaree is expected to act as a catalyst, and aid in accelerating solar adoption and penetration across diverse segments of society.”

Pankaj Vassal, President of Sales at Waaree Energies Limited, expressed enthusiasm for the collaboration, stating, “Our partnership with Ecofy represents progress towards democratizing solar power accessibility. By integrating our solar solutions with Ecofy’s financing platform, we are working towards removing barriers and aiding in accelerating the adoption of solar power across households and businesses. Ultimately, this is expected to empower more people to embrace the benefits of clean energy while collectively building a greener, more environmentally-conscious India.”

Waaree Energies Limited and Ecofy expect to play a significant role in achieving India’s energy independence goals while assisting households in embracing a greener, more cost-effective way of living.

The post Waaree Energies Limited and Ecofy Empower Indian Homeowners with Affordable Rooftop Solar Solutions and Hassle-Free Financing appeared first on ELE Times.

Kore.ai’s Research Reveals Historic Shift as Contact Center Agents and Consumers Increasingly Prefer AI-Driven Solutions

ELE Times - Wed, 05/08/2024 - 09:26

Kore.ai, a leader in enterprise conversational and generative AI platform technology, has unveiled its annual 2024 Agent Experience (AX) and Customer Experience (CX) Benchmark Reports, featuring historic findings that indicate the increased global acceptance of automation and self-service solutions.

Kore.ai commissioned the research to shed light on the impact of intelligent virtual assistants (IVAs) and contact center AI solutions on customer interactions and agent job satisfaction. The reports show that, for the first time, customer service agents are prioritizing advanced AI technology and automated tools over competitive salaries and a fair work environment. Similarly, consumers are increasingly embracing AI, valuing its precision and reliability. A key factor in this shift is the IVAs’ ability to offer around-the-clock assistance and smoothly transition between tasks without requiring repetitive information, significantly enhancing consumer satisfaction and comfort levels.

Kore.ai and research partner Farrell Insights surveyed 1,200 customers and 600 agents across multiple regions including the Americas, UK, Germany, India, Japan, Philippines, and Australia, and in major industries like banking, retail, healthcare, travel, telecom, and others. The key findings are collated in the Kore.ai Agent Experience (AX) and Customer Experience (CX) Benchmark Reports 2024.

Key AX Findings Include:

  • An Industry First (Tech Trumps Pay): Agents ranked three automated assistant functionalities (tools that help them better understand customer needs, reduce time spent on searches, and minimize typing during call wrap-ups) higher in importance than competitive salary and fair working conditions.
  • Contact Centers Are Lagging: 72% of agents express a strong desire for IVAs, but contact centers are lagging in implementation, with 62% of agents reporting a lack of AI use cases. Outdated systems also hinder productivity, with 91% of agents reporting technology-related frustrations.
  • AI Education Boosts Satisfaction: Agents trained in AI report 92% job satisfaction and engagement levels, compared to 73% for their non-trained counterparts.
  • Win-Win with IVAs: 71% of customer service agents view increased automated assistant usage for assessing and routing customer needs as mutually beneficial for both consumers and agents.

Key CX Findings Include:

  • Customers Prioritize Accuracy and Efficiency Over Live Agent Access: For the first time, effectiveness and accuracy ranked as more important than the ability to access a live agent. Additionally, 68% of customers believe that AI assistants’ ability to seamlessly carry and continue conversations across channels is important to great customer service interactions.
  • Closing Gap Between Automated and Live Agent Performance: In the US, there is only a 4% gap between the rating of IVA performance and expectations for live agents (72% vs. 76%, respectively). In the APAC region, there is no difference in performance ratings.
  • The Rise of IVAs Across Industries: Comfort with IVAs is growing across most sectors (travel, banking, retail, cable/telco/ISP), while healthcare still sees direct human contact as crucial. Retail emerges as a standout sector with universal approval for AI-assisted customer service, especially in areas like product search (75% of respondents reported interest) and purchasing (74%), highlighting broad trust in AI for both advisory and transactional roles.
  • 24/7 Access Appeals to All: The allure of around-the-clock access to customer service is significant among consumers, with 77% noting this is a draw for automation and IVAs. Even Boomers are on board, with 68% recognizing the benefits of self-service’s constant access. Other key elements playing crucial roles in enhancing consumer acceptance include conversational voice and the assurance of secure communication for personal information, which enterprise-grade IVAs provide.

“Having monitored this sector for over a decade, this is the first time I’ve observed such a dramatic shift in agent preferences for automation over compensation,” said Michael Farrell, President and Chief Strategist at Farrell Insight. “As effectiveness, accuracy, security, ease of use, and trust increasingly become the top priorities for both agents and consumers, the method of achieving these results becomes secondary. Our research with Kore.ai indicates a watershed trend: people are leaning towards outcome-focused interactions in customer service, driven by their positive experiences with IVAs and contact center AI solutions.”

To improve customer experience, increase agent satisfaction, and optimize contact center performance, leveraging AI-powered solutions is essential for businesses to stay ahead of the curve.

“Our latest research shows increased engagement and satisfaction with AI solutions among agents and consumers,” said Raj Koneru, CEO of Kore.ai. “Adopting AI technologies in call centers not only enhances service quality for customers but also transforms agent roles by streamlining routine tasks and improving work conditions. We aim for this research to guide organizations looking to elevate their service interactions with AI-powered automation.”

To view Kore.ai’s full AX and CX reports, please visit: https://kore.ai/research-analysis/?utm_source=PR

The post Kore.ai’s Research Reveals Historic Shift as Contact Center Agents and Consumers Increasingly Prefer AI-Driven Solutions appeared first on ELE Times.

Building Blocks of the Aviation Industry

ELE Times - Wed, 05/08/2024 - 09:16

The aerospace and aviation industries are experiencing massive growth following the devastation of the pandemic. There is no denying that both sectors stand at the cusp of technological advancement and evolution. The field has stepped up enormously from the early days of aviation to a more sophisticated, technology-laden service industry.

The industry is undergoing a transformative change in its overall lifecycle process- from the design to the final flight and everything in between. Technology breakthroughs are yielding some exceptional results, redefining commercial aviation and space exploration. Taking a closer look into the future, the aerospace and aviation industry is poised to grow manifold in the next decade, with AI/ML and other related technologies at the forefront of innovation. 

While many trends are being worked on simultaneously, the one catching the most eyeballs is the eVTOL (electric vertical take-off and landing) aircraft, which uses electric power to hover, take off, and land vertically. This aircraft is from the not-so-distant future and would cater to point-to-point transportation between urban areas, finding its way as an efficient alternative to ground transport.

Achieving Sustainability with Each Flight

Sustainability has become a norm that needs to be addressed by every industry. The aviation sector is utilizing forefront technologies and finding the right resources to help reduce its carbon footprint. Airlines are exploring eco-friendly, sustainable aviation fuels such as biofuels to reduce carbon emissions, and improving aerodynamics to enhance fuel efficiency. Engineers are also developing electric and hybrid aircraft, and airports are putting energy-efficient practices in place to reduce the industry’s reliance on fossil fuels. With such initiatives gathering pace, the aviation sector is moving closer to its green goals through sustainable infrastructure and energy management systems.

Technological Impetus on the Rise

Right from operations to safety, efficiency, and customer experience, the aviation industry relies extensively on technology. At the forefront are AI/ML and automation, which have the potential to transform the way airlines and airports operate. The development involves streamlining processes such as cargo transport and aircraft data examination; integrating HR, maintenance, and flight systems into the appropriate interface; and embedding AI-powered software into standard flight simulation training devices capable of analyzing real-time data, providing instant feedback on the pilot’s performance, and offering insights to instructors. Also, the constant evolution of technologies like cloud, robotics, augmented reality, virtual reality, Big Data, the Internet of Things, and AI/ML is delivering faster and crisper results in data refining, which will help build advanced biometric technology and other sophisticated systems.

Moreover, this is the age of unmanned aerial vehicles (UAVs), i.e., drones, and the aviation sector is benefiting greatly from their ability to access difficult-to-reach areas and capture high-quality images. These capabilities have helped airlines restructure and revolutionize their maintenance and inspection approach.

Cyber Threats Creeping into Intricate Digitised Aviation Systems 

Any industry that runs on critical digital infrastructure is prone to cyber-attacks. Aviation’s digital landscape is a complex one, with multiple stakeholders including airlines, airports, OEMs, and several service providers. With growing complexity, a digital ecosystem built on diverse technologies, new and old, with differing levels of cybersecurity measures becomes exceedingly vulnerable, presenting high-value targets for cyber exploitation.

Multiple points in the industry’s vast and interconnected supply chain are usual targets of cybercriminals. The attacks can disrupt operations, steal valuable data such as passengers’ personal information, credit card details, and flight data (often initiated via phishing), or pose indirect threats to third-party vendors (for example, through ransomware).

Thus, understanding the cybersecurity space becomes vital in the aviation business to brace against any possible breach.  

Apurva Gopinath, Assistant Vice President, Financial Services and Profession Group, Commercial Risk Solutions, India at Aon India Insurance Brokers Private Limited spoke about the underlying cyber threat in the aviation sector and shared insights on how businesses can adopt better and stricter cybersecurity strategies. 

“Aviation businesses are facing the most challenging cyber threat landscape yet, with global ransomware damage costs predicted to reach $20 billion this year, an increase of 57X from 5 years ago. Aon’s global Digital Forensics and Incident Response (DFIR) team reports that over 50 per cent of those subject to ransomware attacks pay the ransom in some form. To strengthen cyber resilience, aviation companies must adopt a risk-based approach to review the effectiveness of controls, particularly in Access Management, Business Continuity, Third-Party Risk and Vulnerability Management. Some of the actions companies can take to reinforce their cybersecurity strategies include conducting vulnerability assessments and breach simulations, proactively utilizing threat intelligence to monitor the techniques and procedures of threat actors, and reviewing governance, including Business Continuity and Disaster Recovery (BCDR) plans. Aviation businesses should also quantify the financial loss of cyber events listed on their cyber risk register to ensure the appropriate return on security investment (ROSI) and check contractual obligations to ensure insurance policies adequately cover any financial loss arising out of a breach.”
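The ROSI metric referenced in that quote is conventionally computed from annualized loss expectancy (ALE = single-loss expectancy × annual rate of occurrence). A back-of-envelope sketch of the arithmetic, where every dollar figure and incident rate is an illustrative assumption:

```python
# Standard textbook ROSI arithmetic; every dollar figure and incident rate
# below is an illustrative assumption, not data from the article.

def ale(single_loss_expectancy: float, annual_rate_of_occurrence: float) -> float:
    """Annualized Loss Expectancy: expected yearly loss from one event class."""
    return single_loss_expectancy * annual_rate_of_occurrence

def rosi(ale_before: float, ale_after: float, control_cost: float) -> float:
    """Return on Security Investment, as a fraction of the control's cost."""
    risk_reduction = ale_before - ale_after
    return (risk_reduction - control_cost) / control_cost

# Hypothetical ransomware scenario: a $200k control cuts incident likelihood
# from 30%/year to 6%/year on a $2M single-loss estimate.
before = ale(2_000_000, 0.30)   # ~$600k/year expected loss
after = ale(2_000_000, 0.06)    # ~$120k/year expected loss
print(rosi(before, after, control_cost=200_000))  # ~1.4, i.e. ~140% return
```

A positive ROSI means the control's risk reduction outweighs its cost, which is the "appropriate return" check the quote describes applying to each entry on the cyber risk register.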

The post Building Blocks of the Aviation Industry appeared first on ELE Times.

STM32CubeMonitor 1.8, STM32CubeMonitor-UCPD 1.3, and STM32CubeMonitor-RF 2.12, more powerful data manipulations

ELE Times - Wed, 05/08/2024 - 07:42

Author: STMicroelectronics

STM32CubeMonitor 1.8 is the first version to add support for the SEGGER J-Link hardware probe. As a result, developers who are familiar with the third-party probe will be able to use it while capturing data with the ST software. It will make debugging and monitoring operations a lot simpler. As the J-Link fully supports the JTAG interface and offers download speeds of up to 4 MB/s (J-Link ULTRA+ / PRO), it also opens the door to greater development opportunities and rapid flashing operations. That’s why ST updated STM32CubeMonitor. We wanted to make the tool even more practical and enable developers to enjoy a more flexible and practical STM32 ecosystem.

ST often releases new versions of STM32CubeMonitor, STM32CubeMonitor-RF, and STM32CubeMonitor-UCPD. The tools repeatedly appear in our blog posts because many STM32 developers use them to bring their products to market faster. Indeed, the challenge for any embedded system engineer is to find a comprehensive platform for their microcontroller or microprocessor. A device may have many features, but it won’t be useful if designers can’t implement them efficiently. As a result, it is critical to offer a wide range of software tools that facilitate the development of applications on STM32 devices. Let us, therefore, explore some of these tools and their new functionalities.

What’s new in STM32CubeMonitor 1.8?

The big update brought by STM32CubeMonitor 1.8 is the support for SEGGER J-Link probes. Avid readers of the ST Blog already know that SEGGER is an active member of the ST Partner Program. The company ships embOS, a real-time operating system optimized for STM32 devices. In fact, embOS was also one of the first pieces of software to receive the MadeforSTM32 label. More recently, we shared how SEGGER launched their STM32-SFI Flasher Commander to enable entire assembly lines to support secure firmware install (SFI). Hence, the support of their J-Link probes should come as no surprise.

Support for the SEGGER probe within STM32CubeMonitor is relatively straightforward. Instead of using the traditional STLINK in and out nodes, acq stlink in and acq stlink out, developers just use acq jlink in and acq jlink out within the Node-RED interface. Hence, instead of having to convert the on-board STLINK into a J-Link, engineers can use the hardware probe and enjoy the SEGGER suite of software and solutions. Finally, STM32CubeMonitor 1.8 adds support for a greater range of acquisition rates when choosing a frequency lower than 1 Hz. The feature helps customize how often the software captures data, thus further optimizing its operations.

What is STM32CubeMonitor? The Netflix of MCUs

STM32CubeMonitor is a runtime variable monitoring and visualization tool with a web interface for remote connections and a graphical interface to create custom dashboards. It ensures developers can efficiently monitor their application through a graphical interface that relies on Node-RED. This flow-based programming tool enables users to create complex data representations with no coding at all. It will allow them to debug their software easily and analyze behaviors without disrupting an existing codebase. Additionally, users can share their dashboards on the Node-RED and ST communities to build on one another.

To make the first experience with STM32CubeMonitor more intuitive, the ST Wiki explains in detail how developers can monitor a variable within an application in just two steps. Users select the start address of the data they track in memory and its type. To assist in this task, we have a guide showing how to get addresses from ELF files. The interface then asks the user to select an STLINK probe.

A runtime monitoring utility based on Node-RED

Keeping track of registers, variables in memory, interrupts, and the myriad of events that occur at any moment is daunting. Manually monitoring them is so demanding that teams often do not have the resources for the endeavor. STM32CubeMonitor solves this problem by relying on Node-RED to keep things as simple as possible. Users drag and drop graphical representations of a program’s elements onto a canvas to create a flow, meaning a sequence of events. For instance, conditions can trigger modules that send alerts by email or push data to a cloud platform using MQTT.

Without entering a single line of code, users can create graphs, chart plots, or generate gauges that will help them visualize values in a counter, data from a sensor, and many other aspects of an application. Additionally, the presence of a web server means that it’s possible to use these visualizations on any PC or mobile browser, whether on the local network or remotely. Moreover, thanks to the Node-RED and ST community, users can start by looking at other users’ dashboards and organically learn by studying other people’s examples.

A .CSV generator for power users

The previous version of STM32CubeMonitor (version 1.6) updated the export to CSV feature to generate files that would work better with spreadsheets. For instance, the time column moved before the value column to fit how most people set their tables. Similarly, time began at 0, and long numbers got a separator to be more readable. Finally, version 1.6 also made it easier to identify probe configurations by giving them names.

Version 1.7 of STM32CubeMonitor builds on the previous release to bring features requested by our users, turning the CSV exporter into a powerhouse. For instance, creating and organizing multiple columns within the export interface is now possible. Previously, users would have had to run a Python script to manipulate data or do everything in their spreadsheet application, which tends to be cumbersome. Similarly, each variable gets its own column and a timestamp to better track it. Hence, the new options within STM32CubeMonitor ensure users can structure their data more easily and use their spreadsheet software to view the results instead of applying time-consuming changes.
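The per-variable-column export described above amounts to pivoting a long-format log into a wide table, which is exactly the kind of manipulation users previously scripted by hand. The sketch below shows the idea with Python's standard library; the column names ("Time", "Variable", "Value") are assumptions for illustration, not the exact STM32CubeMonitor CSV schema.

```python
# Hedged sketch: pivot a long-format acquisition log into one column per
# variable. Column names are illustrative assumptions, not the tool's schema.
import csv
import io

def pivot_log(csv_text):
    """Return rows of [Time, <var1>, <var2>, ...] from a long-format log."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = {}          # timestamp -> {variable: value}
    variables = []     # preserve first-seen variable order for the header
    for rec in reader:
        rows.setdefault(rec["Time"], {})[rec["Variable"]] = rec["Value"]
        if rec["Variable"] not in variables:
            variables.append(rec["Variable"])
    header = ["Time"] + variables
    # One row per timestamp; blank cell when a variable wasn't sampled then
    return [header] + [[t] + [rows[t].get(v, "") for v in variables]
                       for t in sorted(rows, key=float)]

log = "Time,Variable,Value\n0.0,counter,1\n0.0,adc,512\n0.1,counter,2\n"
table = pivot_log(log)
```

Opening the resulting table in a spreadsheet gives one column per variable, which is the structure the new exporter produces directly.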

Node-RED 3.1

Since version 1.5, STM32CubeMonitor supports Node-RED 3. One of the most significant improvements is the addition of a contextual menu available when users right-click. Consequently, they can access a lot more actions and discover features that would previously require digging into menus. The other important functionality available in Node-RED 3 is junctions, a special type of node that makes it easier to route wires. It helps simplify and clarify designs by bringing greater flexibility. Version 3 also introduced debugging capabilities that expose node locations when working with sub-flows, thus helping developers see what node is generating an error message.

And since version 1.7, STM32CubeMonitor uses Node-RED 3.1, which brings notification management at the tab level, thus offering a lot more granularity to developers tracking multiple aspects of their application. Users also get a bigger workspace (from 5000×5000 to 8000×8000) and lockable flows, which prevent accidental changes, an especially important safeguard when dealing with mission-critical flows. Version 3.1, released only a few months ago, also updated the context panel to include popular options absent from the previous iteration, which had forced users to dig through menus. Finally, among many other improvements, Node-RED 3.1 optimized the wiring between horizontally aligned nodes to make flows significantly more readable.

Eco acquisition mode

STM32CubeMonitor features a low-power acquisition mechanism, named ECO mode, that reduces CPU load when the sampling rate drops below 10 Hz. There are many instances when developers don’t need fast data acquisition and could benefit from a lower processing load. Traditionally, the utility captures variables every 50 ms; in ECO mode, it polls at double the requested low-rate frequency instead. Thanks to the ECO mode, developers get far more granularity and can manage resources better. The feature is also quite accessible, since the threshold is simply a value in the settings file, so changing it is straightforward.
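To make the rule described above concrete, here is a hypothetical helper computing the poll period: below an assumed 10 Hz threshold the period follows twice the requested frequency rather than the fixed 50 ms default. The function name, the exact threshold value, and the formula are illustrative assumptions, not the tool's documented algorithm.

```python
# Hypothetical illustration of the ECO-mode rule: the threshold and formula
# are assumptions for illustration, not STM32CubeMonitor's documented logic.
ECO_THRESHOLD_HZ = 10.0   # assumed value, editable in the settings file
DEFAULT_PERIOD_S = 0.050  # the traditional 50 ms polling period

def poll_period_s(requested_hz, eco_threshold_hz=ECO_THRESHOLD_HZ):
    """Return the polling period for a requested acquisition frequency."""
    if requested_hz < eco_threshold_hz:
        # ECO mode: poll at only twice the requested frequency,
        # cutting CPU load for slow acquisitions
        return 1.0 / (2.0 * requested_hz)
    return DEFAULT_PERIOD_S

slow = poll_period_s(1.0)   # 1 Hz request -> 0.5 s between polls
fast = poll_period_s(50.0)  # above the threshold -> default 50 ms
```

The point of the sketch is the trade-off: a 1 Hz acquisition polls ten times less often than the 50 ms default would, which is where the CPU savings come from.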

A support tool throughout the life cycle of a product

During the prototyping phase, engineers will likely use an STLINK probe, such as one of the STLINK-V3 modules currently available. It connects the MCU board to the PC, which helps set up the STM32CubeMonitor dashboard and acts as a gateway for the web interface. As designers prepare to ship their final product, they can create a software routine that sends data to a USB port over UART. Developers can thus still monitor their application securely by using a computer running STM32CubeMonitor connected to that USB port. As a result, the tool provides long-term analysis that helps plan upgrades or upcoming features.
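The UART path described above implies a simple host-side logger: the device streams samples as text lines and a PC parses them. A minimal sketch is below, assuming a hypothetical "name,timestamp,value" line format of our own choosing; in practice the lines would arrive from the serial port (e.g., via a library such as pyserial) rather than from a list.

```python
# Hedged sketch of the host side of UART monitoring. The line format
# "name,timestamp,value" is an assumption for illustration only.
def parse_sample(line):
    """Parse one 'name,timestamp,value' line into a typed tuple."""
    name, ts, value = line.strip().split(",")
    return name, float(ts), float(value)

def log_samples(lines):
    """Collect (timestamp, value) histories per variable from raw lines."""
    history = {}
    for line in lines:
        if not line.strip():
            continue  # skip blank lines from the stream
        name, ts, value = parse_sample(line)
        history.setdefault(name, []).append((ts, value))
    return history

# Stand-in for bytes read from the USB-UART bridge
stream = ["vbat,0.0,3.29\n", "vbat,0.5,3.28\n", "temp,0.5,24.1\n"]
history = log_samples(stream)
```

A logger of this shape gives the same long-term view of a shipped product that the STLINK probe provides during prototyping.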

New format and symbol change notification

The latest version of STM32CubeMonitor brings the ability to export data in CSV instead of simply using a proprietary format. Users can import the information into Excel, MATLAB, and others, opening the door to more data optimization and manipulation. The new software will also throw a notification if symbols change. Put simply, the utility tracks variables by defining them in a file and associating them with a symbol. However, recompiling the code may render the symbol file obsolete, creating discrepancies with the Node-RED dashboard. The new STM32CubeMonitor will alert users if they forget to update the symbol file.

What’s new in STM32CubeMonitor-RF 2.12?

To support the latest features present in STM32WB and STM32WBA devices, STM32CubeMonitor-RF must align itself with their Bluetooth Low Energy stacks. Consequently, each new release tracks the changes brought to the microcontrollers’ firmware packages. In this instance, STM32CubeMonitor-RF 2.12 is aligned with version 1.17.0 of the firmware for the STM32WB and version 1.1.0 for the STM32WBA, the first wireless Cortex-M33 for more powerful and more secure Bluetooth applications. Additionally, the new utility brings support for over-the-air firmware updates on the STM32WBA and the latest OpenThread stack on the STM32WB.

What are some of the key features of STM32CubeMonitor-RF? Utility to optimize Bluetooth and 802.15.4 applications

The OTA Updater and its Optimize MTU Size option

STM32CubeMonitor-RF is a tool that tests the Bluetooth and 802.15.4 radio performance of STM32WB microcontrollers. The graphical user interface helps visualize signal strength and packet errors over time, while a command-line interface opens the door to macros, batch files, and other types of automation. Put simply, it draws from the same philosophy as the traditional STM32CubeMonitor but specializes in radio performance. Hence, developers can rapidly test their design and potentially spot issues. The utility can also sniff 802.15.4 communications between devices. The easiest way to try the utility is to connect an STM32WB development board to a computer and use its USB or UART interface.

Over-the-air performance

Since version 2.8.0, STM32CubeMonitor-RF has more than doubled over-the-air performance thanks to larger data packets. When users select the “Optimize MTU size” option in the “OTA Updater”, the software tool increases OTA transfers from 16 kbit/s to 41 kbit/s. It is, therefore, an essential quality-of-life improvement for developers. Sending files or updating a device’s firmware are everyday operations during development. The faster speeds will ensure developers work faster and more efficiently.
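A back-of-the-envelope check shows what the quoted throughput gain means in practice. The 256 KB image size below is an arbitrary example for illustration, not a figure from the article; the 16 and 41 kbit/s rates are the ones quoted above.

```python
# Time to push a firmware image over BLE OTA at the two quoted link rates.
# The 256 KB image size is an arbitrary illustrative assumption.
def transfer_seconds(image_bytes, kbit_per_s):
    """Seconds needed to move image_bytes at the given link rate."""
    return image_bytes * 8 / (kbit_per_s * 1000)

IMAGE = 256 * 1024                    # example 256 KB firmware image
before = transfer_seconds(IMAGE, 16)  # legacy MTU: about 131 s
after = transfer_seconds(IMAGE, 41)   # "Optimize MTU size": about 51 s
speedup = before / after              # 41/16 = 2.5625x faster
```

For a modest firmware image, the option turns a two-minute update into under a minute, which adds up quickly over a day of iterative development.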

Advanced Features

The software package includes advanced features like an OpenThread 1.3 stack and an 802.15.4 sniffer firmware that works with a USB dongle or a Nucleo board. STM32CubeMonitor-RF also inaugurates a new BLE Received Signal Strength Indication (RSSI) acquisition scheme, which helps determine the approximate distance between two Bluetooth devices. Faithful readers of the ST Blog will remember that the technology was crucial during the pandemic in assisting companies like Inocess in developing products such as the Nextent Tag to help maintain physical distancing guidelines.

Another milestone is that STM32CubeMonitor-RF 2.10 brought the latest features from the STM32WB BLE 5.3 firmware (stack version 1.15.0). Developers thus get to enjoy BLE extended advertising. Traditionally, Bluetooth 4 and 5 have only three advertising channels, each limited to a small advertising payload of 31 bytes. Thanks to extended advertising, sending a much larger payload (up to 255 bytes per packet) is possible using one of the 37 data channels. One of the three advertising channels simply sends a header pointing to the extension. Consequently, developers don’t need to send the same data on all three channels to ensure its reception, and they can transmit more data faster.

ACI logs

CubeMonitor-RF 2.11 brought a quality of life improvement in the form of application command interface (ACI) logs in CSV format. Put simply, ACI is the mechanism that sends commands to the Bluetooth stack, and thus, one of the first logs developers look into when debugging or optimizing their software. Previously, ACI logs were only available in a traditional .txt format. The move to CSV opens the door to clearer presentations and easier manipulation. For instance, users can rapidly sort the list of commands by value, type, or number of times they were sent.
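The sorting described above is trivial once the log is in CSV, which is the point of the format change. The sketch below uses Python's standard library; the column names and the counts are illustrative assumptions, not the exact ACI log schema (the command names themselves are real ACI commands from the ST BLE stack).

```python
# Hedged sketch: rank ACI commands by how often they were sent, using an
# assumed CSV layout (Command,Type,Count) that may differ from the real log.
import csv
import io

ACI_LOG = """Command,Type,Count
aci_gatt_update_char_value,GATT,42
aci_gap_set_discoverable,GAP,3
aci_hal_set_tx_power_level,HAL,7
"""

def commands_by_count(csv_text):
    """Return (command, count) pairs, most frequently sent first."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return sorted(((r["Command"], int(r["Count"])) for r in rows),
                  key=lambda pair: pair[1], reverse=True)

ranking = commands_by_count(ACI_LOG)
```

The same one-liner sort was awkward with the old .txt logs, which had to be parsed by hand first.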

New testing capabilities

Version 2.11 of CubeMonitor-RF brought a new method of testing the reliability of 802.15.4 stacks thanks to the support of a continuous wave mode. As the name implies, it just sends an uninterrupted signal without modulation. Developers can thus perform basic but crucial measurements to gauge signal propagation under several conditions. It’s an important first test for engineers looking to understand how their design will perform. Currently, the feature is only available on devices running the STM32CubeWB 1.11.0 firmware or later.

What’s new in STM32CubeMonitor-UCPD 1.3?

STM32CubeMonitor-UCPD 1.3 is now compatible with the USB Extended Power Range (EPR), a new profile delivering 48 V at 5 A for a total of 240 W. At this level, it becomes a lot simpler to fast-charge laptops or power docking stations with multiple fast-charging ports. Moreover, 240 W also brings USB-C to more power tools, further democratizing the connector. As makers look to use one port to save resources, reuse cables, and reduce waste, support for the EPR mode enables teams to adopt the new standard faster. Furthermore, as 240 W-compatible cables are now becoming available, it is critical to adopt the profile as early as possible.
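For context on the figures above: USB PD's Standard Power Range tops out at 100 W (20 V / 5 A), while the Extended Power Range reaches 240 W (48 V / 5 A). The helper below is a hypothetical illustration of classifying a load against those ceilings, not an API from any USB PD stack.

```python
# Hypothetical helper classifying a load against USB PD power-range ceilings.
SPR_MAX_W = 20 * 5   # Standard Power Range ceiling: 100 W
EPR_MAX_W = 48 * 5   # Extended Power Range ceiling: 240 W

def required_range(load_watts):
    """Return which USB PD power range a load needs, if any."""
    if load_watts <= SPR_MAX_W:
        return "SPR"
    if load_watts <= EPR_MAX_W:
        return "EPR"
    return "out of range"

laptop = required_range(140)  # a 140 W fast-charging laptop needs EPR
```

Loads like high-wattage laptops and power tools sit squarely in the gap between the two ceilings, which is why EPR support matters for the use cases listed above.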

What is STM32CubeMonitor-UCPD?

STM32CubeMonitor-UCPD monitors and helps set up USB-C and Power Delivery systems on STM32 microcontrollers running the ST USB PD stack. Developers can use the tool to monitor interactions on the USB-C interface, use sink or source power profiles, and use vendor-defined messages (VDM). The tool even has predefined settings to facilitate and hasten developments by handling many of the complexities inherent to these new technologies. STM32CubeMonitor-UCPD was integral to the launch of ST’s USB-C Power Delivery ecosystem in 2019. Since then, we’ve continued to improve the software to help developers gauge performance and obtain certifications faster.

Since version 1.2.0, STM32CubeMonitor-UCPD houses a Java virtual machine, so, like the other tools in this blog post, the utility ships with everything its installer needs. Users no longer need to install Java themselves before running the application. Additionally, users can now display traces for the bus voltage and current, VDM, UCSI, and more. The new STM32CubeMonitor-UCPD also monitors electrical values from the battery. Hence, developers can track more processes and understand what happens when connecting two USB-C devices or using Power Delivery.

The post STM32CubeMonitor 1.8, STM32CubeMonitor-UCPD 1.3, and STM32CubeMonitor-RF 2.12, more powerful data manipulations appeared first on ELE Times.

