EDN Network

Voice of the Engineer

Automotive Hall ICs provide dual outputs

Thu, 09/21/2023 - 20:50

Hall-effect sensors in the AH39xxQ series from Diodes provide accurate speed and direction data or two independent outputs. Targeting automotive applications, the parts are AEC-Q100 Grade 0 qualified and support PPAP documentation. Self-diagnostic features also make the Hall ICs suitable for ISO 26262-compliant systems.

In line with automotive battery requirements, the devices operate over a wide supply voltage range of 2.7 V to 27 V. They have a 40-V absolute maximum rating, enabling them to safely handle 40-V load dumps. Three operating-point and release-point (BOP/BRP) options are offered, with typical values of 10/-10 gauss, 25/-25 gauss, and 75/-75 gauss. A narrow operating window ensures accurate and reliable switching points.
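
The operate/release pair (BOP/BRP) defines a hysteresis window: the output latches on above BOP and only releases below BRP, which keeps switching immune to small field fluctuations. A minimal Python sketch of that behavior follows, using the 25/-25 gauss option from the paragraph above; it is an illustrative model, not vendor code:

```python
# Minimal model of one Hall-latch output with BOP/BRP hysteresis.
# Illustrative only; thresholds taken from the 25/-25 gauss option.
BOP, BRP = 25, -25  # operate and release points, gauss

def update(state: bool, field_gauss: float) -> bool:
    """Output turns on at/above BOP, off at/below BRP, and holds in between."""
    if field_gauss >= BOP:
        return True
    if field_gauss <= BRP:
        return False
    return state  # inside the hysteresis window: no change

state = False
for b in (0, 10, 30, 10, -10, -30, 0):
    state = update(state, b)
    print(f"B = {b:+4d} G -> output {'on' if state else 'off'}")
```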

Dual-channel operation allows one Hall IC to replace two latch switches, saving PCB space and overall component costs. In addition, a chopper-stabilized design minimizes switch-point drift and ensures accurate measurements over a broad temperature range.

Hall sensors in the AH396xQ series cost $0.40 each in lots of 1000 units. Hall sensors with self-diagnostics in the AH397xQ series cost $0.44 each in like quantities. All of the devices come in TSOT25 packages.

AH396xQ series product page

AH397xQ series product page

Diodes


Kit accelerates STM32H5 MCU development

Thu, 09/21/2023 - 20:50

The STM32H573I-DK Discovery kit from ST allows developers to build secure, connected IoT devices based on the STM32H5 microcontroller. Users can explore all the integrated features of the STM32H5—including analog peripherals, the adaptive real-time accelerator (ART), media interfaces, and mathematical accelerators—as well as core security services certified and maintained by ST.

Along with the STM32H5 MCU, the development board furnishes a color touch display, digital microphone, and USB, Ethernet, and Wi-Fi interfaces. An audio codec, flash memory, and headers for connecting expansion shields and daughterboards are also provided.

To simplify the development process, the STM32CubeH5 MCU software package supplies the components required to develop an application on the STM32H5 microcontroller, including examples and application code. ST also offers the STM32CubeMX tool for configuring and initializing the MCU.

The STM32H5 employs an Arm Cortex-M33 MCU core running at 250 MHz and is the first STM32 series to support ST’s Secure Manager system-on-chip security services. It combines Arm TrustZone security with ST’s STM32Trust framework to comply with PSA Certified Level 3 and GlobalPlatform SESIP3 security specifications.

The Discovery kit board costs $98.75 and is available through ST’s eStore and authorized distributors.

STM32H573I-DK product page

STMicroelectronics


Accelerating RISC-V development with network-on-chip IP

Thu, 09/21/2023 - 09:33

In the world of system-on-chip (SoC) devices, architects encounter many options when configuring the processor subsystem. Choices range from single processor cores to clusters to multiple core clusters that are predominantly heterogeneous but occasionally homogeneous.

A recent trend is the widespread adoption of RISC-V cores, which are built upon the open-standard RISC-V instruction set architecture (ISA), available through royalty-free open-source licenses.

Here, the utilization of network-on-chip (NoC) technologies’ plug-and-play capabilities has emerged as an effective strategy to accelerate the integration of RISC-V-based systems. This approach facilitates seamless connections between processor cores or clusters and intellectual property (IP) blocks from multiple vendors.

 

Network-on-chip basics

Using a NoC interconnect IP offers several advantages. The NoC can extend across the entire device, with each IP block having one or more interfaces to it. These interfaces have their own data widths, operate at varying clock frequencies, and utilize diverse protocols commonly adopted by SoC designers, such as OCP, APB, AHB, AXI, STBus, and DTL. Each of these interfaces links to a corresponding network interface unit (NIU), also referred to as a socket.

The NIU’s role is to receive data from a transmitting IP and then organize and serialize this data into a standardized format suitable for network transmission. Multiple packets can be in transit simultaneously. Upon arrival at its destination, the associated socket performs the reverse action by deserializing and undoing the packetization before presenting the data to the relevant IP. This process is done in accordance with the protocol and interface specifications linked to that particular IP.
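
To make the packetization step concrete, here is a minimal Python sketch of an NIU serializing a transaction into fixed-width flits, with the destination socket reversing the process. The flit width, header format, and names are hypothetical illustrations, not any vendor's actual transport protocol:

```python
# Toy model of NIU packetization/depacketization (hypothetical format).
from dataclasses import dataclass
from typing import List

FLIT_BYTES = 4  # assumed width of the NoC transport links

@dataclass
class Transaction:
    address: int
    payload: bytes

def niu_packetize(txn: Transaction) -> List[bytes]:
    """Serialize a transaction into a header flit plus payload flits."""
    header = txn.address.to_bytes(FLIT_BYTES, "big")
    body = [txn.payload[i:i + FLIT_BYTES].ljust(FLIT_BYTES, b"\x00")
            for i in range(0, len(txn.payload), FLIT_BYTES)]
    return [header] + body

def niu_depacketize(flits: List[bytes], length: int) -> Transaction:
    """Reverse the packetization at the destination socket."""
    address = int.from_bytes(flits[0], "big")
    payload = b"".join(flits[1:])[:length]
    return Transaction(address, payload)

txn = Transaction(0x4000_0000, b"hello-noc")
flits = niu_packetize(txn)
assert niu_depacketize(flits, len(txn.payload)) == txn
```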

A straightforward illustration can depict the IP blocks as solid logic blocks; in this view, an SoC usually employs a single NoC. Figure 1 illustrates a basic NoC configuration.

Figure 1 A very simple NoC representation shows basic design configuration. Source: Arteris

The NoC itself can be implemented using a variety of topologies, including 1D star, 1D ring, 1D tree, 2D mesh, 2D torus and full mesh, as illustrated in Figure 2.

Figure 2 The above examples show a variety of NoC topologies. Source: Arteris

Some SoC design teams may want to develop their own proprietary NoCs, a process that is resource- and time-intensive. This approach requires teams of several specialized engineers to work for two or more years. To make matters more challenging, designers often invest nearly as much time debugging and verifying an in-house developed NoC as they do on the rest of the design.

As design cycles shorten and time-to-revenue pressures increase, SoC development teams are considering commercially available NoC IP. This IP enables the customization required in an internally developed NoC IP but is available from third-party vendors.

Another aspect of growing SoC complexity is the practice of utilizing multiple NoCs and various NoC topologies within a single device (Figure 3). For instance, one section of the chip might adopt a hierarchical tree topology, while another area could opt for a 2D mesh configuration.

Figure 3 The illustration highlights sub-system blocks with internal NoCs. Source: Arteris

In many cases, the IP blocks in today’s SoCs are the equivalent of entire SoCs of only a few years ago, making them sub-systems. Thus, the creators of these sub-system blocks will often choose to employ industry-standard NoC IP provided by a third-party vendor.

In instances requiring high levels of customizability and co-optimization of compute and data transport, such as a processor cluster or a neural network accelerator, the IP development team may opt for a custom implementation of the transport mechanisms. Alternatively, they might decide to utilize one of the lesser adopted, highly specialized protocols to achieve their design goals.

RISC-V and NoC integration

For a standalone RISC-V processor core, IPs are available with AXI interfaces for designers who don’t need coherency and CHI interfaces for those who do. This allows these cores to plug and play with an industry-standard NoC at the SoC level.

Likewise, if design teams select one of the less commonly adopted protocols for inter-cluster communication in a RISC-V design, that cluster can also feature ACE, AXI or CHI interfaces toward external connections. This method allows for quick connection to the SoC’s NoC.

Figure 4 below features both non-coherent and cache-coherent options. Besides their usage in IPs and SoCs, these NoCs can also function as super NoCs within multi-die systems.

Figure 4 A NoC interconnect IP is shown in the context of a multi-die system. Source: Arteris

NoC IP in RISC-V processors

The industry is experiencing a dramatic upsurge in SoC designs featuring processor cores and clusters based on the open standard RISC-V instruction set architecture.

The development and adoption of RISC-V-based systems, including multi-die systems, can be accelerated by leveraging the plug-and-play capabilities offered by NoC technologies. This enables quick, seamless and efficient connections between RISC-V processor cores or clusters and IP functional blocks provided by multiple vendors.

Frank Schirrmeister, VP solutions and business development at Arteris, leads activities in the automotive, data center, 5G/6G communications, mobile, and aerospace industry verticals. Before Arteris, Frank held various senior leadership positions at Cadence Design Systems, Synopsys and Imperas, focusing on product marketing and management, solutions, strategic ecosystem partner initiatives and customer engagement.


Exploring the superior capabilities of Wi-Fi 7 over Wi-Fi 6

Wed, 09/20/2023 - 17:27

In recent years, applications such as video conferencing, ultra-high-definition streaming services, cloud services, gaming, and advanced industrial Internet of Things (IIoT) have significantly raised the bar for wireless technology. Wi-Fi 6 (including Wi-Fi 6E) and dual-band Wi-Fi were promising solutions to the rising wireless demands. However, the real-world improvements and noticeable benefits of Wi-Fi 6 have been underwhelming.

Now, we have a new standard on the horizon, bringing significant technical changes to the Wi-Fi industry. Wi-Fi 7 will be a giant leap forward for residential and enterprise users. This article provides insights into the latest progress of Wi-Fi 7, helping engineers better understand its full capabilities and the technical challenges that come with the new features, so that they can work toward smooth Wi-Fi 7 adoption and develop potential applications for advanced wireless technologies.

Expected Wi-Fi 7 performance vs Wi-Fi 6, 6E and 5

From the last column of Table 1, you can clearly see some of the performance numbers that Wi-Fi 7 will be able to deliver. We are looking at a 4.8-fold connection-speed gain from Wi-Fi 6 to Wi-Fi 7, bringing the maximum theoretical data rate to 46 Gbps. Compare that with the improvement from Wi-Fi 5 to Wi-Fi 6, which was only 2.8-fold.

 

| | Wi-Fi 5 | Wi-Fi 6 | Wi-Fi 6E | Wi-Fi 7 |
|---|---|---|---|---|
| Launch time | 2013 | 2019 | 2021 | 2024 (expected) |
| IEEE standard | 802.11ac | 802.11ax | 802.11ax | 802.11be |
| Max data rate | 3.5 Gbps | 9.6 Gbps | 9.6 Gbps | 46 Gbps |
| Bands | 5 GHz | 2.4 GHz, 5 GHz | 2.4 GHz, 5 GHz, 6 GHz | 2.4 GHz, 5 GHz, 6 GHz |
| Channel size | 20, 40, 80, 80+80, 160 MHz | 20, 40, 80, 80+80, 160 MHz | 20, 40, 80, 80+80, 160 MHz | Up to 320 MHz |
| Modulation | 256-QAM OFDM | 1024-QAM OFDMA | 1024-QAM OFDMA | 4096-QAM OFDMA (with extensions) |
| MIMO | 4×4 DL MU-MIMO | 8×8 UL/DL MU-MIMO | 8×8 UL/DL MU-MIMO | 16×16 UL/DL MU-MIMO |
| RU | / | RU | RU | Multi-RUs |
| MAC | / | / | / | MLO |

Table 1 A specification comparison between Wi-Fi 5, Wi-Fi 6, Wi-Fi 6E, and Wi-Fi 7.

Much of that speed improvement is due to the channel size increasing to up to 320 MHz; as Table 1 shows, the maximum channel size had stayed the same for over ten years. Another key reason Wi-Fi 7 can deliver much higher speed is that it supports three frequency bands (2.4 GHz, 5 GHz, and 6 GHz) and multi-link operation. Figure 1 shows the bands, spectrum, channels, and channel widths available to Wi-Fi 7. These features not only improve connection speed but also improve network capacity by five times compared to Wi-Fi 6. In a later section, we will explore these new technical features in more detail.
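
As a sanity check on the 4.8-fold figure, the headline gain decomposes into the Table 1 deltas: doubled channel width, two extra bits per symbol from 4096-QAM, and twice the spatial streams. A short back-of-envelope calculation in Python:

```python
# Back-of-envelope decomposition of Wi-Fi 7's headline speed gain.
from math import log2

width_gain  = 320 / 160                # 320 MHz vs. 160 MHz channels -> 2.0
qam_gain    = log2(4096) / log2(1024)  # 12 vs. 10 bits per symbol    -> 1.2
stream_gain = 16 / 8                   # 16x16 vs. 8x8 MU-MIMO        -> 2.0

total = width_gain * qam_gain * stream_gain
print(f"theoretical gain ~ {total:.1f}x")    # ~4.8x
print(f"max rate ~ {9.6 * total:.0f} Gbps")  # ~46 Gbps from Wi-Fi 6's 9.6 Gbps
```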

Figure 1 A description of bands, spectrum, channels, and channel width available to Wi-Fi 7. Source: Keysight

Based on the specifications of Wi-Fi 7, besides the 46 Gbps speed, we expect Wi-Fi 7 to deliver less than five milliseconds of latency. This is over one hundred times better than Wi-Fi 6. With this performance, we could expect 15x better AR/VR performance.

Maximum channel bandwidth increase

As mentioned in Table 1, one of the most significant changes coming to Wi-Fi 7 is the maximum channel bandwidth. Doubling the 6 GHz band’s maximum channel bandwidth from 160 MHz to 320 MHz will enable many more simultaneous data transmissions. As illustrated in Figure 2, with twice the bandwidth resources, you can easily expect the base speed to double.

Figure 2 Wi-Fi 7’s maximum channel bandwidth in the 6 GHz band versus the 5 GHz band of Wi-Fi 6. Source: Keysight

Currently, two main challenges will slow the adoption of 320 MHz channels. First, from a regulatory standpoint, certain regions support three channels of 320 MHz contiguous spectrum, others support only one channel, and some regions do not support any; this is also why the bandwidth is exclusive to the 6 GHz band. It requires policymakers in different regions to work closely with the Wi-Fi industry to find feasible ways to allocate additional bandwidth for Wi-Fi applications. Despite these challenges, several chipset/module vendors have already certified Wi-Fi 7 modules, and several device manufacturers will be releasing Wi-Fi 7 access points (APs) in 2023.

Another challenge is the need for compatible clients to support this feature. Currently, client devices support at most 160 MHz. Device makers must consider factors like interference and power consumption when designing and developing their new products: higher bandwidth support usually means higher power usage and a higher chance of interference, and it takes time to find a balance between performance and these other factors. Therefore, it will take time until the industry can take full advantage of this channel-bandwidth increase.

Multi-link operation

Another important feature coming to Wi-Fi 7 is multi-link operation, or MLO. Currently, as shown on the left of Figure 3, Wi-Fi technology supports only single-link operation, meaning Wi-Fi devices can transmit data using either the 2.4 GHz band or the 5 GHz band, but not both. With Wi-Fi 7 and MLO, shown on the right of Figure 3, devices can use all available bands and channels simultaneously. MLO typically works in one of two schemes: devices either choose among the different bands for each transfer cycle, or they aggregate more than one band. Either way, MLO avoids congestion on the links, lowering latency. This feature will improve reliability for applications like VR/AR, gaming, video conferencing, and cloud computing.
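
A toy Python sketch of the two MLO schemes just described (per-cycle link selection versus aggregation), using assumed per-band capacities; the numbers are illustrative only:

```python
# Two MLO scheduling schemes over assumed free capacities, in Gbps.
links = {"2.4 GHz": 0.6, "5 GHz": 2.4, "6 GHz": 5.8}

def select_best(links: dict) -> str:
    """Scheme 1: pick the single best link for this transfer cycle."""
    return max(links, key=links.get)

def aggregate(links: dict) -> float:
    """Scheme 2: bond all available links and use their combined capacity."""
    return sum(links.values())

print("best single link:", select_best(links))              # 6 GHz
print(f"aggregated capacity: {aggregate(links):.1f} Gbps")  # 8.8 Gbps
```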

Figure 3 Single-link operation of Wi-Fi 6 versus MLO of Wi-Fi 7. Source: Keysight

As mentioned in the previous section, Wi-Fi 7 supports a wider maximum channel bandwidth of up to 320 MHz. Supporting high band aggregation increases the peak-to-average power ratio (PAPR) in wider channels, so the MLO feature will introduce more power consumption, which device makers must find ways to compensate for. Besides the additional power usage, having more subchannels will also make managing interference more difficult.

Channel puncturing

The next important feature is channel puncturing or, more precisely, preamble puncturing. This feature allows APs to establish transmissions with more than one companion device at the same time while monitoring for interference on the channel. If an AP detects interference in the channel, it can ‘puncture’ the channel, notching out the affected 20 MHz subchannel, and continue the transmission in the rest of the channel. The overall bandwidth is reduced by the punctured amount, but a punctured channel is still better than not using the channel at all.
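
Conceptually, puncturing just removes the interfered 20 MHz subchannel(s) from the usable set, as in this minimal Python sketch; the subchannel indexing and the interference set are hypothetical:

```python
# Puncturing a 320 MHz channel built from sixteen 20 MHz subchannels.
SUBCHANNEL_MHZ = 20
subchannels = list(range(320 // SUBCHANNEL_MHZ))  # indices 0..15

def puncture(subchannels: list, interfered: set) -> list:
    """Return the usable subchannels after notching out interfered ones."""
    return [sc for sc in subchannels if sc not in interfered]

usable = puncture(subchannels, interfered={5})  # interference on one subchannel
print(f"usable bandwidth: {len(usable) * SUBCHANNEL_MHZ} MHz of 320 MHz")  # 300 MHz
```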

Channel puncturing already existed in Wi-Fi 6 as an optional feature. However, because of its technical complexity, this feature requires both compatible APs and clients to work properly. There has yet to be a manufacturer taking advantage of this feature. With the new Wi-Fi 7 standards, this channel puncturing could become a standard feature.

For measurement requirements, this feature has presented challenges on the regulatory side. The European Telecommunications Standards Institute (ETSI) has already published standards for preamble-puncturing testing, but only for 160 MHz bandwidths. The Federal Communications Commission (FCC), however, has yet to provide clear guidelines on measurement limits for preamble puncturing. The existing measurement limits were not designed for the Wi-Fi 7 preamble-puncturing feature, and they are too restrictive. For example, there are discussions in industry presentations on how to manage channel puncturing for dynamic frequency selection (DFS) testing, but no formal definition in FCC guidance documents (KDBs). Changes may also be coming to the in-band emission limits for channel puncturing.

Other important new features of Wi-Fi 7 and IoT support

To support more IoT devices on one Wi-Fi network, Wi-Fi 7 brought 16×16 multi-user multiple-input and multiple-output (MU-MIMO). This feature will easily double the network capacity of Wi-Fi 6. While this improves the transmission efficiency, it also greatly increases the amount of testing required, as several tests are required for each antenna output.

Wi-Fi 7 adopts a higher-order modulation scheme, 4096-QAM, to further enhance peak rates, as shown in Figure 4. This allows Wi-Fi 7 to carry 12 bits per symbol rather than 10, meaning the new modulation scheme alone improves theoretical transmission rates by 20% compared to Wi-Fi 6’s 1024-QAM. Beyond the data-rate improvement, for streaming, gaming, and VR/AR applications, 4K-QAM means flawless 4K/8K image quality, higher color accuracy, and minimal lag.

Figure 4 Wi-Fi 7 adopts a higher-order modulation scheme, 4096-QAM, to further enhance peak rates; here is an example of 1024 QAM vs. 4096 QAM. Source: Keysight

With Wi-Fi 6, each user is assigned only one resource unit (RU) for transmitting frames, which makes spectrum use less flexible. Wi-Fi 7, however, allows combinations of multiple RUs to serve a single user, which increases transmission efficiency. See Figure 5.

Figure 5 An example of single RU versus multi-RU. Source: Keysight

Understanding Wi-Fi 7

Wireless connectivity has become increasingly vital in our lives. Wi-Fi technology plays a crucial role in meeting our growing demands for higher speed, low latency, high capacity, and high efficiency for household and enterprise users. Wi-Fi 7 (802.11be) will bring improvements in all these major aspects compared to Wi-Fi 6 (802.11ax) and will open more doors to more and better IoT applications and services.

Wi-Fi 7 leverages the increased channel width, multi-link operation, and channel puncturing to improve speed and efficiency. Other features like multi-user capability enhancements, 4K-QAM, and multi-RU support will further optimize the user experience.

Wi-Fi 7 also comes with several tough challenges. The most important one is finding a balance between wider feature support and power consumption, and there is always an element of interference in the subchannels. To support all these new features, we need compatible APs and clients, which is not possible without regulatory guidelines in place for all regions of the world. This requires regulatory bodies to work closely with industry leaders to define these guidelines so that Wi-Fi 7 evolves from theory to reality.

 

Xiang Li is an experienced wireless network engineer with a master’s degree in electrical engineering. Currently, Xiang is an Industry Solution Marketing Engineer at Keysight Technologies.

 


Disassembling the Echo Studio, Amazon’s Apple HomePod foe

Tue, 09/19/2023 - 17:30

Back in late February, within my teardown of an Apple HomePod mini smart speaker, I wrote:

I recently stumbled across a technique for cost-effectively obtaining teardown candidates, which I definitely plan to continue employing in the future (in fact, I’ve now got two more victims queued up in my office which I acquired the same way, although I’m not going to spoil the surprise by telling you about them yet).

In early June, while dissecting my second victim, a first-generation full-size HomePod, I summarized the aforementioned acquisition technique:

I picked up a couple of “for parts only” devices with frayed power cords on eBay for substantial discounts from the fully functional (whether brand new or used) price. One of them ended up still being fully functional; a bit of electrical tape sheltered the frayed segments from further degradation. The other went “under the knife”.

Today, I’ll showcase one of the “two more victims” I previously foreshadowed.

Victim #2 wouldn’t fully boot, the result of either a bad logic board or a firmware update gone awry. And today, we’re going to take a look at victim #3, ironically a direct competitor to the HomePod. It’s a high-end Amazon Echo Studio smart speaker; here’s a stock shot to start us off:

That one, matching the one you’ll learn about in detail today, is “Charcoal” in color. The device also comes in white…err…“Glacier”. Here’s a conceptual image of its multi-transducer insides:

The story of how I came to obtain a device that normally sells for $199.99 (sometimes found on sale for $159.99) for only $49.99 is a bit convoluted but also educational—I’ll revisit the big-picture topic later as it relates to other similarly configured devices, specifically computers. So, I hope you’ll indulge my brief detour before diving into the device’s guts. As with its predecessors, this smart speaker was listed on eBay “for parts only”. And in something of a first after more than a quarter century of my being active on eBay, after I bought it the seller reached out to me to be sure I knew what I was (and wasn’t) getting before he sent it to me.

A funny (at least to me) aside: in sitting down to write this piece just now, I finally took a close look at the seller’s eBay username (“starpawn19”) followed by a visit to his eBay storefront (“Star Pawn of New Port Richey”). He runs a pawn shop, which I’d actually already suspected. He told me that someone sold the Echo Studio to one of his employees, but the employee forgot to ask that person (before leaving) to unregister the smart speaker from his or her Amazon account, and the phone number the person gave the pawn shop ended up being inactive.

After doing a bit more research, there’s likely more to the tale than I was told (I’m not at all suggesting that “starpawn19” was being deceptive, mind you, only not fully knowledgeable). Amazon keeps a record of each customer account that each of its Echo devices (each with a unique device ID) has ever been registered with (in fact, if you buy a new device direct from Amazon, you often have the option for it to come pre-registered). If all that had happened, as had been related to me, was that the previous registered user forgot to unregister it first, I think (although information found online is contradictory, and the Amazon support reps I spoke with in fruitlessly striving to resurrect my new toy were tight-lipped) that a factory reset (which I tried) would enable its association with a new account. If, on the other hand, a previous user ever reports it lost or stolen (or if, apparently, Amazon thinks you’ve been nasty to its delivery personnel!) it gets unregistered and all subsequent activation attempts will fail, as I discovered:

The only recourse that “Contact Customer Service” offered me was to return the unit to the seller for a refund…which of course wasn’t an option available to me, since I knew about its compromised condition upfront. So, what happened? One of two things, I’m guessing. Either:

  • Whoever sold the device to “starpawn19” had previously stolen it from someone else or,
  • Whoever sold the device to “starpawn19” hadn’t been happy with the price they got for it and subsequently decided to get revenge by reporting it lost or stolen to Amazon.

With that backgrounder over, let’s get to tearing down, shall we? I’ll begin with a few overview images (albeit no typical box shots, sorry; it didn’t come in retail packaging), as usual accompanied by the obligatory 0.75″ (19.1 mm) diameter U.S. penny for dimension comparison purposes but absent (I realized in retrospect) the detachable power cord. The Echo Studio is 8.1” high and 6.9” in diameter (206 mm x 175 mm), weighing 7.7 lbs. (3.5 kg). Here’s a front view:

Because I don’t want it to feel left out:

Now a back view:

A closeup of the backside connections reveals the power port in the center, flanked by a micro-USB connector to the left (with no documented user function) and a multipurpose 3.5 mm audio input jack on the right, capable of accepting both TRS analog plugs and incoming optical S/PDIF digital streams.

Like Apple’s HomePod, the Echo Studio contains a mix of speaker counts and sizes, capable of reproducing various audio frequency ranges, and variously located in the device. But the implementation details are quite different in both cases. Here’s a look at the internals of the first-generation HomePod (recall that the second-generation successor has only five midrange/tweeter combo transducers, versus the seven in this initial-version design):

Compare it against the “x-ray” image of the Echo Studio at the beginning of this article. Several deviations particularly jump out at me:

  • Apple employed a single speaker configuration to tackle both midrange and high frequencies, whereas Amazon used distinct drivers for each frequency span: three 2” midranges and a 1” high-frequency tweeter.
  • Apple’s woofer points upward, out the top of the HomePod, whereas Amazon’s 5” driver is downward firing. That said, both designs leverage porting (Amazon calls them “apertures”, one in front and the other in back) to enhance bass response.

  • The varying speaker counts and locations affect both bill-of-materials costs and sound-reproduction capabilities. Recall that bass frequencies are omnidirectional; you can put a subwoofer pretty much anywhere in a room, with optimum location guided by acoustic response characteristics versus proximity to your ears. Conversely, high frequencies are highly directional; your best results come from pointing each tweeter directly at your head (note, for example, its front-and-center location in the Echo Studio). Midrange frequencies have intermediary transducer-location requirements.
  • The Echo Studio was also designed for surround sound reproduction. That explains, for example, the fact that one of its three midrange drivers points directly upward, to support Dolby Atmos’ “height” channel(s). The other two midrange drivers point to either side. And like its other modern Echo siblings, two Echo Studios can be paired together to more fully reproduce left and right channel “stereo” sound, as well as (paired with an Echo Sub) to further enhance the system’s low bass response.

Here’s our first look at the top of the Echo Studio, with the grille for the upward-firing midrange in the middle and an array of seven microphones spaced around the outer ring. Recall that the HomePod’s six (first-gen) then four (second-gen) mics were around the middle of the device.

Left-to-right along the lower edge of the ring are four switches: mute, volume down and up, and the multifunction “action” button. And, in the space between the speaker grill and outer ring, a string of multi-color LEDs shines through in various operating modes, with varying colors and patterns indicating current device status (bright red, for example, signifies mute activation).

Now for the Echo Studio’s rubberized “foot”:

including that unique DSN (device serial number) that I mentioned earlier:

and which I’m betting is our pathway inside:

Yup!

Before continuing, I want to give credit to the folks at iFixit, who (as far as I know) never did a proper teardown of the Echo Studio but whose various repair manuals were still invaluable disassembly guides (in the spirit of giving back, you’ll find comments from yours truly posted to this one). I’ll also point you to a teardown video I found while doing preparatory research:

I don’t speak Hindi, so the narration was of no use to me, but the visuals were still helpful!

Anyhoo….onward. Amazon really gave my Torx driver set quite a workout; I’m not sure that I’ve ever encountered so many different screw head types and sizes in the same device. First, getting the plastic bottom off required removing 15 of them:

Lift off the bottom plate:

Disconnect the wiring harness and a flex PCB cable:

And the deed is done; we’re in!

Let’s first focus on the PCB-plus-power assembly attached to the inside of the bottom plate:

Remove four screws:

and the PCB comes free:

The IC to the right of the flex PCB connector is the Texas Instruments’ OPA1679 quad audio op amp. Flip the PCB over:

and you’ll find another, smaller IC below the audio input jack, presumably also from TI considering its two-line marking:

TI 12
4521

but whose identity escapes me. Ideas, readers? (I’m also guessing, by the way, that the optical S/PDIF receiver is built into the audio input jack. And where does the DAC for the digital S/PDIF audio input, or alternatively the ADC for the analog audio input, reside? Keep reading…)

Here’s another shot of this same PCB from a different vantage point before we proceed:

And now for that wiring harness:

Now for the PCB inside the device, which we saw earlier and which (many of you have likely already figured out, given the massive transformer, passives, and all the thermal paste) handles DC power generation and distribution:

Remove one more screw and disconnect one more wiring harness:

And the PCB lifts right out, leaving a baseplate behind:

I strove mightily to chip away at and remove all that thermal paste while leaving everything around and underneath it (and embedded within it) intact, but eventually bailed on the idea. Sorry, folks; if anyone knows how to cost-effectively dissolve this stubborn stuff without marring everything else, I’m happy to give it a shot. Speaking of shots, here are some more:

About that baseplate:

The plastic ring around it, held in place by a single screw, needs to come out first:

And now for the baseplate itself, which does double-duty (on its other side) in redirecting the woofer’s output through the smart speaker’s two side “apertures”:

Bless you, iFixit registered user Jeff Roske, for suggesting in an iFixit guide comment (step 6, to be exact) that “Power supply baseplate must be rotated clockwise, ~1cm at outside edge, to release catch tabs before lifting out of unit” and, in the process, saving my sanity (I only wish it hadn’t taken me five minutes’ worth of frustration before I saw Jeff’s wise words):

Woof woof, look what’s underneath!

And I bet you know what my next step will be…

Finally, we get our first glimpse at the system’s “brains”, i.e., its main board:

Here are closeups of the insides’ four quadrants, as oriented above. Left:

Top:

Right:

And bottom:

Specifically, to get the system board out, we’re first going to have to disconnect a bunch of wiring harnesses and flex PCB cables. Here they are, sequentially clockwise around the board starting at the left side, and in both their connected and disconnected states:

Pausing the cadence briefly at this point, I encountered something I don’t think I’ve seen in a teardown before: a zip tie bundling multiple harnesses together to tidy up the insides!

And here’s another organizing entity, a harness restraint along one side:

Onward…

Up top are also two antenna feeds that need to be unsnapped:

Now over to the right:

Next to go were sixteen total screws, eight of them shiny and screwed into metal underneath, the other eight black and anchored into plastic underneath:

“Houston, we have a liftoff”:

Here’s the side of the system board that you’ve previously seen in its installed (pointed downward, to be precise) state, which I’ll call “side 1”:

And here’s when-installed upward-facing system board “side 2”:

Time for some closeups and Faraday cage removals. Side 1 first; keeping to the prior clockwise cadence, I’ll start with the left-quadrant circuitry, which includes (among other things) a Texas Instruments LM73605 step-down voltage converter:

Now the top:

Right:

And finally, the Faraday cage-dominated landscape at bottom:

Again, you already know what comes next, right?

That’s MediaTek’s MT7668 wireless connectivity controller at the center. Quoting from the product page on the manufacturer website: it’s a “Highly integrated single chip combining 2×2 dual-band 802.11ac Wi-Fi with MU-MIMO and latest Bluetooth 5.0 radios together in a single package.” The only wireless protocol NOT documented is Zigbee, support for which keen-eyed readers may have already noticed was mentioned in the smart speaker “foot” markings earlier. And by the way, if you hadn’t already noticed, in addition to the earlier noted external antenna wiring feeds, there are PCB-embedded antennae to either side of the Faraday cage (also visible on the other side of the PCB).

Speaking of which…now for the system board side 2. Already-exposed IC at left (a Texas Instruments TPA3129D2 class D audio amplifier) first:

At top (and top left) left is a clutch of ICs only some of which I’ve been able to ID:

The upper-left Faraday cage hides, I believe, the Zigbee controller. Its markings are:

MG21
A020JI
B01U8O
2119

and through a convoluted process involving Google Image search followed by a bunch of dead-end (and dead-page) searches that ended up bearing (hopefully not-rotten) fruit, I think it’s Silicon Labs’ EFR32MG21:

Now for the remainder of the ICs in this region:

The IC at the very top, with the silver rectangular “patch” atop it (which I at first thought was a sticker but can’t seem to peel off, so…) seemingly covering a same-size gold region underneath, is one of the puzzlers. Above the silver patch are the following faintly (so I may not be spot-on with my discernment) stamped lines:

13TTI
CT7NQ4

Below the silver section are also two marked lines, the first stamped and the second embossed:

3221
56A03

And next to the chip is this odd structure I’ve not encountered before, with a loop atop it:

Any ideas, folks?

To its lower right is another puzzler, marked as follows:

TI 13
531A

Fortunately, the remainder aren’t as obscure. Directly below the “silver patch” IC is Cirrus Logic’s CS42526 2-in, 6-out multi-channel codec with an integrated S/PDIF receiver (hearkening back to my earlier S/PDIF discussion related to the bottom-panel PCB). And next to its lower-right corner are two Texas Instruments THS4532 dual differential amplifiers.

Last but not least, there are the Faraday cage-rich right and lower quadrants:

Let’s tackle the cage-free IC at the right edge first: it’s another Texas Instruments TPA3129D2 class D audio amplifier, the mate of its mirror image at left. Above it and two cages to its left is nonvolatile storage, a SanDisk SDINBDG4-8G 8 GByte industrial NAND e.MMC flash memory:

To its right, underneath the large rectangular Faraday cage, is (curiously) another MediaTek Wi-Fi-plus-Bluetooth controller IC, the MT7658, which is online documentation-bereft in comparison to its MT7668 cousin on the other side of the PCB, mentioned earlier, but which I’ve seen before.

And what about that large square Faraday cage at the bottom? Glad you asked:

Underneath the lid, in the upper right and left corners, are two Samsung K4A4G165WE-BCRC 4 Gbit DDR4 SDRAMs. And underneath them both, befitting the thermal paste augmentation, is the system “brains”, MediaTek’s MT8516 application processor.

Remember that earlier mentioned LED ring, and those up-top buttons and mics? They suggest we’ve got one PCB to go, plus we haven’t seen any midrange or tweeter transducers yet. Next let’s get out those two shiny metal brackets:

Two screws for each attach to the side:

And one screw for each attach to the assemblage above (or below, in the Echo Studio’s current upside-down orientation, if you prefer):

And with impediments now removed, they lift right out:

Next up for removal is the baseplate underneath (more accurately: above) the now-absent metal braces and system board:

More speakers! Finally!

At this point, I hypothesized that I’d need to get the top of the Echo Studio off in order to proceed any further. Holding it in place, however, were 14 screws accessible only from underneath the top, only a subset of which are shown here:

Why? Those speakers. The midranges in particular had really strong magnets, which tended to intercept and cling to screws en route from their original locations to the outside world:

Those same magnets, I discovered complete with colorful accompanying language, would also yank Torx bits right out of my driver handle. I eventually bailed (temporarily) on my iFixit driver kit and went with a set of long-shaft single-piece Torx screwdrivers instead. Here’s the end result as it relates to the top speaker grill, alongside my various implements of destruction:

And here’s the now-exposed upward-directed midrange speaker:

Four more screws removed, less “momentously” than before, set it free:

Now for that top outer ring. With the screws originally holding it in place no longer present, I was able to pop it free using only my thumbnail:

The translucent ring around the underside for the LEDs to shine through is likely obvious (hold that thought), as are the “springs” for the four buttons. The one on the left, if you look closely, is different than the other three: there’s a cutout in it. Again, hold that thought:

Two easily removed screws are all that still hold the LED PCB in place:

Here’s an initial overview shot of the LED PCB standalone, and of its top with the ring of 24 LEDs around its middle, an immediately obvious feature.

Keen-eyed readers may have also already noticed the seven MEMS microphone ports, one of which is missing its gasket (stuck instead to the underside of the top outer ring, which you’ll now notice if you go back and look at the earlier photo of it). Let’s now do some closeups, beginning with the top:

The “mute” switch, at far right in this photo, is different than the others because it has dedicated LEDs alongside it that illuminate red when the mics are in a muted state (in addition to the entire ring going red, as mentioned earlier). And what’s that next to the “action” switch on the far left? Remember the cutout I mentioned in the accompanying switch? It’s an ambient light sensor that enables dynamic calibration of the LEDs’ brightness to the room’s current illumination level. The IC below it, judging from the logo, is sourced by Toshiba Semiconductor, part of its LCX low-voltage CMOS logic line, but I can’t definitively ID it. Here are the markings:

LCX
74
3109
LS3208

The chips at right:

are a Texas Instruments LP5036 36-channel I2C RGB LED driver and, below it, another TI chip unknown in function and labeled as follows:

SN19070
T 0C8
A17V

That same two-chip combo is also present on the left side of the PCB:

The bottom of the top side of the PCB is comparatively unmemorable:

as is the underside of the PCB, aside from the bulk of the MEMS microphones’ housings:

We’re still not all the way inside, but keep the faith; I still see a few screws up top:

Turns out the Echo Studio comprises two separate shells, the outer doing double-duty as the front and side speakers’ grilles (and, along with the top pieces’ colors, the sole differentiator between the “Charcoal” and “Glacier” variants, come to think of it), which now slide apart:

Here’s what the outer-shell grilles for the front tweeter and side midranges look like from the inside:

No speaker, therefore no grill, in back, of course:

They correspond to the inner-shell locations of the left midrange:

Right midrange:

And front tweeter:

Again, the backside is bare:

Speaking of which…the midranges are identical to the top-mounted one you already saw. But let’s take a closer look at that tweeter:

In closing, one final set of images. In peering through the hole left from the tweeter’s removal:

I happened to notice the two antenna wires, one white and the other black, that I’d briefly showed you earlier. I realized I hadn’t yet discerned where they ended up and decided to rectify that omission before wrapping up:

This is the inside of the inner shell, looking toward the top and back of the device from the bottom. The antennas are in the back left (black wire) and right (white wire) of the Echo Studio, next to and “behind” the midrange drivers:

My guesses are as follows:

  • One of them is for 2.4 GHz Wi-Fi (black?), the other for 5 GHz (white?).
  • The 2.4 GHz one multitasks between Wi-Fi and Zigbee duties, and
  • The Bluetooth antennae are the PCB-embedded ones noted earlier.

Agree or disagree? And related: I’m still baffled as to why the design includes two Wi-Fi-plus-Bluetooth controller SoCs, MediaTek’s MT7658 and MT7668, plus a dedicated Zigbee controller. If you have any insights on this, or thoughts on anything else I’ve covered in this massive missive, I as-always welcome them in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


What is moment of inertia?

Mon, 09/18/2023 - 17:00

In a state of bewilderment one fine day, I asked a group of three mechanical engineers at the company where I was then employed if they could please explain to me the concept of “moment of inertia”. To my utter astonishment, none of them could offer an explanation. None of them knew!

Since then, though, I think I’ve found out.

Sir Isaac Newton taught us (please forgive my paraphrasing) that for linear motion of some object, we have F = m*A which means that a mass “m” will undergo a changing linear velocity at some value of acceleration “A” under the influence of an applied force “F”.

Figure 1 Linear motion on a frictionless surface where force is equal to mass multiplied by acceleration. Source: John Dunn

There is an analogous equation for rotary motion: T = J*Θ, where “T” is the torque applied to some object having a moment of inertia “J”, and “Θ” is the resulting rotary acceleration, which can be measured in units of radians per second squared.

Figure 2 Rotary Motion where the torque applied to an object is equal to its moment of inertia about the rotation axis multiplied by its rotary acceleration. Source: John Dunn

In a rotary motor, the armature will have some particular moment of inertia. There will also be a motor coefficient of torque designated kt. The torque that gets created within that motor will be kt multiplied by the armature current. In short, we will have T = kt*I where I is the armature current.

Writing further, we have kt*I = J*Θ, which rearranges to Θ = kt*I/J, the angular acceleration.

Angular acceleration is directly proportional to the applied armature current times the coefficient of torque and inversely proportional to the moment of inertia.
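
To make the relationship concrete, here is a small worked example in Python with hypothetical motor values, modeling the armature as a solid cylinder (for which J = (1/2)*m*r^2):

```python
# Worked example: angular acceleration of a motor armature.
# All values are assumed for illustration only.
kt = 0.05   # torque coefficient, N*m per ampere
I  = 2.0    # armature current, A
m  = 0.10   # armature mass, kg
r  = 0.02   # armature radius, m

J = 0.5 * m * r**2   # moment of inertia of a solid cylinder, kg*m^2 (2.0e-05)
T = kt * I           # torque, N*m (0.10)
accel = T / J        # angular acceleration, rad/s^2

print(f"J = {J:.1e} kg*m^2, T = {T:.2f} N*m, acceleration = {accel:.0f} rad/s^2")
```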

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).


Rectangle and triangle waveform function generator

Fri, 09/15/2023 - 16:20

This design idea describes a four-phase RC oscillator with sinusoidal and rectangular outputs. A triangular signal is obtained by the antiphase addition of half-waves of sinusoidal signals shifted in phase relative to each other by 90 degrees.

Wow the engineering world with your unique design: Design Ideas Submission Guide

It was previously shown that the antiphase addition of sinusoidal signals, rectified by full-wave rectifiers and shifted relative to each other by 90 degrees, yields a signal of almost ideal triangular shape [1–5]. The mathematical description of the waveform and practical implementations of such generators are given in [3–5].

Function generators usually consist of a rectangular-pulse generator whose output is then converted into triangular and sinusoidal signals. This multiphase function generator operates on a different principle: first, a tunable four-phase sinusoidal signal generator is used. The sinusoidal signals are then converted into rectangular ones using comparators. After that, the sinusoidal signals of the four phases are fed to the rectifier—analog switches controlled from the outputs of the four comparators. The rectified signals are summed at the load resistance and form a triangular signal with twice the frequency, Figure 1.

Figure 1 Synthesis of a triangular signal from the sum of antiphase sinusoidal signals rectified by full-wave rectifiers shifted by 90 degrees.
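
For readers who want to check the “almost ideal” claim numerically, here is a short Python sketch. It assumes the synthesis of Figure 1 reduces to s(t) = |sin(t)| - |cos(t)|, the antiphase sum of two full-wave rectified sinusoids 90 degrees apart, and compares the result against an ideal triangle of the same (doubled) frequency:

```python
# Numerical check: |sin| - |cos| versus an ideal triangle of period pi.
import numpy as np

t = np.linspace(0, 2 * np.pi, 100_000)
s = np.abs(np.sin(t)) - np.abs(np.cos(t))  # synthesized wave, twice the frequency

# Ideal unit-amplitude triangle with period pi, phase-aligned (peak at t = pi/2)
tri = (2 / np.pi) * np.arcsin(np.sin(2 * (t - np.pi / 4)))

print(f"max deviation from ideal triangle: {np.max(np.abs(s - tri)):.3f}")  # ~0.042
```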

Figure 2 shows an electrical diagram of a four-phase sine-wave signal generator operating in the frequency range of 50–500 Hz. The generator is built on the four operational amplifiers U1.1–U1.4 of an LM324 chip. Potentiometer R2 is adjusted to obtain stable sinusoidal oscillations with minimal distortion. The generation frequency is set by RC circuits C2-R8-R10.1 and C3-R9-R10.2 and is tuned with the dual potentiometer R10.1/R10.2. Four-phase signals (0, 90, 180, and 270 degrees) are taken from the outputs of the operational amplifiers.

Figure 2 The four-phase sine wave signal generator operating in the frequency range of 50–500 Hz.

Signals from the outputs of the four-phase generator of Figure 2 are fed to the inputs of the rectangular-signal formers, which are built on four comparators U1.1–U1.4 of an LM339 chip. Rectangular signals with phase shifts of 0, 90, 180, and 270 degrees are taken from the outputs of the comparators.

Simultaneously, the signals from the outputs of comparators U1.1–U1.4 are sent to the control inputs of four analog switches U2.1–U2.4 of a CD4066 chip, while the signals from the sinusoidal signal generator are fed to the inputs of those switches. From the switch outputs, the rectified signals are sent to the resistive adder R3–R7. Analog switches U2.1–U2.4 are switched by the comparator signals in such a way that antiphase, full-wave rectified signals with a phase shift of 90 degrees are formed at the switch outputs. This produces a triangular signal at the output of the device with a frequency doubled relative to the sinusoidal signal generator (100–1000 Hz), as in Figure 1.

Figure 3 shows the shape of the sinusoidal and rectangular signals taken from the outputs of the generator and comparators and Figure 4 shows the shape of four-phase signals at the outputs of the sinusoidal signal generator and the outputs of rectangular signal formers.

Figure 3 The shape of the sinusoidal and rectangular signals taken from the outputs of the generator and comparators.

Figure 4 The shape of four-phase signals at the outputs of the sinusoidal signal generator and the outputs of rectangular signal formers.

Figure 5 shows what a four-phase function generator chip might look like, along with a diagram of its connection using a minimum number of external elements. The need to adjust trimming resistor R2 may require two additional pins on the chip.

Figure 5 Possible view of the four-phase function generator chip and its connection scheme.

Michael A. Shustov is a doctor of technical sciences, candidate of chemical sciences and the author of over 750 printed works in the field of electronics, chemistry, physics, geology, medicine, and history.

References

  1. Shustov M.A. “Additive signal former of the triangular shape”. Radio engineering (RU), 2003, No. 1, pp. 95–96.
  2. Shustov M.A. “Circuit engineering. 500 devices on analog chips”. St. Petersburg, Science and Technology, 2013, 352 p.
  3. Shustov M.A., Shustov A.M. “Electronic Circuits for All”. London, Elektor International Media BV, 2017, 397 p.; “Elektronika za sve: Priručnik praktične elektronike”. Niš: Agencija EHO, 2017; 2018, 392 St. (Serbia).
  4. Shustov M.A., Shustov A.M. “Simple functional generator”. Elektor, May 16, 2018. https://www.elektormagazine.com/labs/simple-function-generator-160548
  5. Shustov M.A., Shustov A.M. “Simple Function Generator. With reverse-order signal creation”. Elektor, 2020, V. 46, № 7–8 (502), P. 20–23.

R&S enhances automotive radar test system

Fri, 09/15/2023 - 16:18

Options for the R&S AREG800A automotive radar echo generator enable realistic autonomous braking tests at short distances from dynamic artificial objects. The system is well-suited for radar object simulation, from single sensor tests to 360° environments, covering 76 GHz to 81 GHz for automotive radar sensors. Its scalability and adaptability allow a wide range of ADAS traffic scenarios to be electronically generated during the entire testing life cycle.

The test setup teams the AREG800A generator as a back end with the QAT100 antenna array as a front end. Alternatively, the AREG800A can be configured with remote mmWave front ends. To improve distance resolution, AREG8-81S/D mmWave front ends now extend the instantaneous bandwidth from 4 GHz to 5 GHz. In addition, a new near-object-range feature allows the minimum distance of one or more artificial objects to be reduced down to the air-gap value of the radar under test.

For the QAT100 antenna array, R&S now offers an enhanced flatness option, which provides a detailed characterization of each antenna in the array. This improves the amplitude flatness of all antennas when operated with the AREG800A radar echo generator.

With the R&S radar test platform, tests currently performed in real-world test drives can now be executed in the lab. For more information, use the product page links below.

AREG800A product page

QAT100 product page

Rohde & Schwarz 


Image sensor optimizes automated driving

Fri, 09/15/2023 - 16:18

Sony announced the upcoming release of the IMX735, a CMOS image sensor for automotive cameras boasting a pixel count of 17.42 effective megapixels. With one of the industry’s highest resolutions, the sensor will enable the development of automotive camera systems capable of complex sensing and recognition.

The IMX735’s high-definition capture extends the object recognition range, allowing better detection of road conditions, vehicles, pedestrians, and other objects. Early detection of far-away objects helps make automated driving systems safer. In addition, the sensor provides a horizontal pixel signal readout, easing synchronization with mechanical-scanning LiDAR, which also employs a horizontal scanning method.

The pixel structure and exposure method of the IMX735 improves saturation illuminance, yielding a wide dynamic range of 106 dB, even in HDR and LED flicker mitigation modes. In priority mode, the sensor’s dynamic range extends to 130 dB.

Housed in a 236-pin plastic BGA package, the IMX735 is 14.54×17.34 mm and AEC-Q100 Grade 2 qualified. Sampling of the image sensor is planned to begin in September 2023. A datasheet for the IMX735 was not available at the time of this announcement.

Sony Semiconductor Solutions


MLCCs offer low-resistance soft termination

Fri, 09/15/2023 - 16:18

Unlike conventional soft-termination multilayer ceramic capacitors, TDK’s CNA and CNC series of MLCCs feature resin layers covering only the board-mounting side. The unique terminal structure allows electric current to pass outside the layers, reducing electrical resistance. According to TDK, the soft-termination devices using this structure are the first in the industry.

Capacitors in the automotive-grade CNA series and commercial-grade CNC series provide capacitances of up to 22 µF in a 3216 case size and 47 µF in a 3225 case size. According to the manufacturer, low-resistance soft termination translates to higher capacitances than conventional products. Automotive parts are AEC-Q200 compliant.

Mass production of the CNA and CNC series will commence in September 2023. Capacitors in the CNA series (automotive grade) include the CNA5L1X7R1H106K160AE, CNA5L1X7S1A226M160AE, and CNA6P1X7S1A476M250AE. Capacitors in the CNC series (commercial grade) include the CNC5L1X7R1H106K160AE, CNC5L1X7S1A226M160AE, and CNC6P1X7S1A476M250AE.

TDK


PCB design system shortens development time

Fri, 09/15/2023 - 16:17

OrCAD X, a PCB design platform from Cadence, offers cloud scalability and AI-powered placement automation to speed design turnaround time. The company claims that generative AI automation reduces placement time from days to minutes, while cloud-connected capabilities, including data management and collaborative layout design, boost productivity.

OrCAD X leverages layout improvements based on the Cadence Allegro X platform and provides backward data compatibility with both OrCAD and Allegro technologies. Optimized for small and medium businesses, OrCAD X offers an intuitive PCB layout canvas. Dynamic creation of manufacturing documentation provides a real-time view of fabrication details throughout the entire design process.

Cadence will be presenting the OrCAD X platform at the PCB West 2023 conference and exhibition, September 19-22 in Santa Clara, CA. For more information about OrCAD X and to sign up for first access, click here.

Cadence Design Systems


Oscilloscopes tout hardware-accelerated analysis

Fri, 09/15/2023 - 16:17

Keysight’s Infiniium MXR B-Series oscilloscopes offer automated test tools and hardware-accelerated analysis to quickly find anomalies. The series comprises 12 models ranging in bandwidth from 500 MHz to 6 GHz, with 4 or 8 channels and multiple hardware and software options.

Built-in tools reduce troubleshooting time by automating fault detection, design compliance testing, power integrity analysis, protocol decoding of more than 50 serial protocols, and mask testing on all channels simultaneously. Each scope leverages the same hardware-acceleration ASIC as Keysight’s 110-GHz UXR-B-Series oscilloscopes to accelerate analysis, eye diagrams, and triggering.

MXR B-Series scopes provide an update rate of greater than 200,000 waveforms/s, a sample rate of 16 Gsamples/s, and bandwidth up to 6 GHz that does not decrease with channel usage. They also boost jitter analysis by up to 70% and power integrity analysis by 65% compared to the MXR A-Series. A noise floor as low as 43 µV and an effective number of bits (ENOB) of up to 9.0 ensure accurate measurements.

For more information on the MXR B-Series oscilloscopes, use the product page link below.

MXR B-Series product page

Keysight Technologies

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Oscilloscopes tout hardware-accelerated analysis appeared first on EDN.

Apple’s latest product launch event takes flight

Wed, 09/13/2023 - 21:48

Another September…another suite of new AirPods, iPhones and Watches from Apple. Don’t get me wrong: in a world rife with impermanence, there’s something comforting about predictability, no matter how boring it might also be. And at the nexus of this Tuesday’s event was the most predictable (albeit simultaneously impactful) announcement of all: eleven years after unveiling the proprietary Lightning connector for its various mobile devices, replacing the initial and equally proprietary 30-pin dock connector, the transition to Lightning’s successor has now also begun. This time, though, the heir isn’t proprietary. It’s USB-C.

The switch to USB-C isn’t even remotely a surprise, as I said. The only question in my mind was when it’d start, and now another question has taken its place: how long will it take to complete? After all, more than five years ago the European Union (EU) started making grumbling noises about whether it should standardize charger connections. A bit more than four years later, last October to be exact, the EU followed through on its threat, mandating USB-C usage by the end of 2024. Later that month, Apple publicly acquiesced, admitting that it had no choice but to comply.

With today’s iPhone 15, 15 Plus, 15 Pro and 15 Pro Max, and a corresponding charging case for the tweaked 2nd-gen AirPods Pro, the transition to USB-C has started in earnest. And as usual, the interesting bits (or if you prefer, the devils) are in the details. Since the iPhone 15 and 15 Plus are based on last year’s A16 Bionic SoC, the brains of 2022’s iPhone 14 Pro and 14 Pro Max, they “only” run USB-C at Lightning-compatible USB 2.0 speeds (recall that the connector form factor—USB-A or USB-C, for example—and the bus bandwidth—480 Mbps USB 2.0 or 5-or-higher Gbps USB 3.x—are inherently distinct although they’re often implementation-linked). This year’s A17 Pro (hold that thought) SoC, conversely, contains a full USB 3 controller.

The higher bandwidth potential of the new wired bus generation is particularly resonant for anyone who’s tried transferring long-duration 4K video off a smartphone using comparatively slothlike USB 2/Lightning or Wi-Fi. And Power Delivery (PD) support (assuming it actually works as intended) will be great for passing higher charging voltage-and-current payloads to the phone; the iPhone 15 series implementation is bidirectional, actually, enabling the phone’s battery to even bump up the charge on an Apple Watch or set of AirPods in a pinch. But I was curious to see what exact form this new bus would take, among other reasons due to the system complications it might create. Pre-event rumors had indicated that Apple might have instead branded it as “Thunderbolt 4” which, if true, would have offered the broadest system compatibility: with TB4 and TB3, as well as with TB2 and original Thunderbolt via adapters, and with USB-C and USB generational precursors.

Here’s the thing with USB-C: Apple still supports (although it no longer sells) plenty of Intel-based systems containing only Thunderbolt 3 ports. And as my own past documented experiences exemplify, USB-C and Thunderbolt 3 aren’t guaranteed to interoperate, in spite of their connector commonality. Intel, for example, sold two different generations of TB3 controllers: “Alpine Ridge” (the chipset in my CalDigit TS3 Plus dock, for example, along with several other TB3 docks and hubs I own) is Thunderbolt-only, while the “Titan Ridge” successor also interoperates with USB-C devices (I plan to elaborate on these differences, along with the additional existing and future enhancements supported by Thunderbolt 4 and just-announced Thunderbolt 5, in an upcoming focused-topic post). If the A17 Pro SoC is really USB-C only, Apple will be facing a notable support burden (albeit decreasing over time, since all newer Apple Silicon-based systems support Thunderbolt 4, therefore also USB-C). That’s why I suspect that although Apple’s marketeers are calling the connector “USB-C” for simplicity’s sake, it’s also Thunderbolt-interoperable.

A few more notes here: Apple’s dropping sales of its Lightning-based MagSafe wireless charging accessories, a curious move considering they still work with still-sold iPhone 14 and 13 variants (RIP iPhone 14 Pro models, along with the iPhone 13 mini). And if you still want to use your Lightning-based charger or other accessory, Apple will happily sell you an overpriced USB-C adapter for it. Bus fixations now satiated, let’s broaden the view and see what else Apple announced this week.

The iPhone 15 family

You already know about the A16 Bionic SoC from last year’s coverage. And you already know about the A17 Pro SoC’s USB controller enhancements. But there’s much more to talk about, of course, beginning with the package-integrated RAM boost from 6 GBytes to 8 GBytes. Last year’s A16 Bionic was Apple’s first chip fabricated on foundry partner TSMC’s 4 nm process. This year, with the A17 Pro, it’s TSMC’s successor 3 nm process, with a commensurate increase in the available transistor budget (from 16 billion to 19 billion), which Apple has leveraged in various ways:

  • Performance- and power consumption-enhanced microarchitecture CPU cores, albeit with the same counts (2 performance, 4 efficiency) as before
  • An improved neural engine for deep learning inference, claimed up to twice as fast as before, but again with the same core count (16) as before
  • A six-core graphics accelerator with a redesigned shader architecture, claimed capable of up to 20% higher peak performance than before, derived in part from new hardware-accelerated ray tracing support, and
  • Enhanced video and display controllers, now capable of hardware-decoding the AV1 codec (among other things).

About that first-time “Pro” branding for the new SoC …on Monday, Daring Fireball’s John Gruber published an as-usual excellent pre-event summary of how Apple has historically transitioned its smartphone product line each year, and how it’s more recently tweaked the cadence in the era of the “Pro” smartphone tier. Although Apple has previously tweaked smartphone SoCs to come up with iPad variants—from the A12 SoC to the A12X and A12Z, for example—this is the first time I can recall that the company has custom-branded (and high-end branded, to boot: usually you start with a defeatured variant to maximize initial chip yield) a SoC out of the chute. At least two options going forward that I can see:

  • Perhaps next year’s iPhone 16 and 16 Plus will be based on a neutered non-Pro variant of the A17, or
  • Mebbe they’re saving the non-Pro version for the next-gen iPhone SE?

The iPhone 15 and 15 Plus inherit the processing-related enhancements present in last year’s iPhone 14 Pro and Pro Max, reflective of their SoC commonality.

Apple has also “ditched the notch” previously required to integrate the iPhone 14 and 14 Plus front camera into the display, instead going with the software-generated and sensor-obscuring Dynamic Island toward the top of the display. Speaking of displays, reflective of OLED’s ongoing improvements (and LCD’s ongoing struggle to remain relevant against them), these are capable of up to 2000 nits of brightness when used outdoors. And, speaking of cameras, there are still two rear ones, “main” and “ultra-wide”, the latter still 12 Mpixel in resolution. The former has gotten attention, however; it uses a 48 Mpixel “quad pixel” sensor in combination with computational photography to implement image stabilization and other capabilities, outputting 24 Mpixel images. It also supports not only standard but also 2x optical telephoto modes, the latter generating 12 Mpixel pictures.

Now for the iPhone 15 Pro and Pro Max (again, above and beyond the SoC and RAM updates already covered). First, they switch from stainless steel to lighter-weight titanium-plus-aluminum combo frames:

They incorporate a similar 48 Mpixel main camera as their non-Pro siblings, albeit with slightly larger pixel dimensions for improved low light performance, three focal length options, and the option to capture images in full 48 Mpixel resolution. And, as before, there’s a dedicated 12 Mpixel ultra-wide camera. This time, however, instead of the main camera doing double-duty for telephoto purposes, there’s (again, as with the iPhone 14 Pro generation) a dedicated third 12 Mpixel telephoto camera, this time with 3x optical zoom range in the standard “Pro” and 5x in the “Pro” Max, the latter stretching to a 120 mm focal length. A complicated multi-prism structure enables squeezing this optical feat into a svelte smartphone form factor:

Last, but not least, the previous single-function switch on the side has been swapped out for a multi-function “action” button. Here’s the summary:

Apple Watch Series 9 and Ultra 2

Although Apple claimed via its naming that the SoCs in the Apple Watch Series 6 (using the S6 chip), 7 (S7) and 8 (S8) were different, a lingering rumor (backed up by Wikipedia specs) claimed the contrary: that they were actually the same sliver of silicon (based on the A13 Bionic SoC found in the iPhone 11 series), differentiated only by topside package mark differences, and that Apple focused its watch family evolution efforts instead on display, chassis, interface, and other enhancements.

Whether or not previous-generation SoC speculations were true, we definitely have a new chip inside both the Series 9 and Ultra 2 this time. It’s the S9, comprising 5.6B transistors that, among other things, implement a 30% faster GPU and a 4-core neural engine with twice the machine learning (ML) processing speed of its predecessor. The benefits of the GPU—faster on-display animation updates, particularly for high-res screens—are likely already obvious to you. The deep learning inference improvements, while perhaps more obscure at first glance, are IMHO more compelling in their potential.

For one thing, as I’ve discussed in the past, doing deep learning “work” as far out on the “edge” as possible (alternatively stated, as close to the input data being fed to the ML model as possible) is beneficial in several notable ways: it minimizes the processing latency that would otherwise accrue from sending that data elsewhere (to a tethered smartphone, for example, or a “cloud” server) for processing, and it affords ongoing functionality even in the absence of a “tether”. As Apple mentioned on Tuesday, one key way that the company is leveraging the beefed-up on-watch processing capabilities is to locally run Siri inference tasks on voice inputs, allowing for direct health data access right from the watch, for example. Another example is the “sensor fusion” merge of data from the watch’s accelerometer, gyro, and optical heart rate sensor to implement the new “Double tap” gesture that requires no interaction with the touchscreen display whatsoever:

Reminiscent of my earlier comments about OLED advancements, the Series 9 display is twice as bright (2000 nits) as the one in Series 8 predecessors, and it drops down as low as 1 nit for use in dimly lit settings.

The one in the Ultra 2 is even brighter, 3000 nits max to be precise:

And both watches, as well as the entire iPhone 15 family, come with a second-generation ultra-wideband (UWB) transceiver IC for even more accurate location of AirPods-stuck-in-sofa-cushions and other compatible devices. Speaking of AirPods…

Second-gen (plus) AirPods Pro

As previously mentioned, the charging case for the second-generation AirPods Pro earbuds now supports USB-C instead of Lightning.

Curiously, however, Apple doesn’t currently plan to sell the case standalone for use by existing AirPods Pro 2nd-gen owners. The company has also tweaked the design of the earbuds themselves, for improved dust resistance and lossless audio playback compatibility with the upcoming Vision Pro extended-reality headset. Why, I wonder, didn’t Apple call them the AirPods Pro 2nd Generation SE? (I jest…sorta…)

The rest of the story

There’s more that I could write about, including Apple’s (but not third parties’) purge of leather cases, watch bands and the like, its carbon-neutral and broader “green” aspirations, and the well-intentioned but cringe-worthy sappy video that accompanied their rollout. But having just passed the 2,000-word threshold, and mindful of both Aalyia’s wrath (again I jest…totally this time) and her desire for timely publication of my prose, I’ll wrap up here. I encourage and await your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Apple’s latest product launch event takes flight appeared first on EDN.

A 50 MHz 50:50-square wave output frequency doubler/quadrupler

Wed, 09/13/2023 - 17:41

On two days in the course of every year, one in March heralding the start of spring and another in September marking the first day of fall, the Earth’s axis of rotation aligns perpendicular to the rays of the Sun. These days are the equinoxes and, as the name implies, divide the day into nominally equal intervals of daylight and darkness.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Author of multiple EDN design ideas, Jim McLucas (Mr. Equinox) evidently has a passion and a talent for devising circuits that likewise divide up time into equal intervals. He has published several clever and innovative design ideas that convert arbitrary waveshapes into 50:50 square waves, thus slicing and dicing the time axis into equal segments. He’s also often included wide-range frequency-doubler functions:

I thought this looked like a fun concept and design challenge, and Jim kindly gave me permission to borrow it and try designing an “equinoctial” circuit of my own. Figure 1 shows the result.

Figure 1 Kibitzer’s version of a McLucas frequency multiplier and square wave generator.

Figure 1’s circuit comprises two almost identical sections: input processor, IP (U1pin1 through A1), and output generator, OG (U1pin12 through A2).

The IP is capable of working in either of two modes as selected by jumper J1 or J2. J1 puts the IP into 50:50 mode in which it will accept any duty cycle input and convert it to a symmetrical 50% duty cycle square wave, suitable for frequency doubling by the OG. (This circuit concept is purely Mr. McLucas’s.) J2 puts the IP into frequency-doubling mode in which an input waveshape that’s already 50:50 symmetrical is doubled before input to the OG for net frequency quadrupling.

When frequency-doubling jumper J2 is selected, the combination of RC delays (R1C4 in the IP and R8C3 in the OG) and XOR gates (U1) generates high-speed pulses (~6 ns wide) on each input edge, hence two pulses per cycle and a doubled frequency into the OG for net frequency quadrupling. If J1 is jumpered instead, R1C4 is bypassed and just one pulse per cycle, an unmultiplied 50:50 square wave, is generated by the IP for doubling by the OG.
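
To sanity-check the edge-detector arithmetic, note that the XOR output stays high from an input edge until the RC-delayed copy of that edge crosses the gate’s switching threshold. Here’s a minimal Python sketch; the R and C values are hypothetical (the text doesn’t restate R1/C4’s values) and were picked only to land near the ~6 ns pulse width quoted above:

```python
import math

def xor_pulse_width(r_ohms: float, c_farads: float,
                    vdd: float = 5.0, vth: float = 2.5) -> float:
    """Pulse width of an RC-delay-plus-XOR edge detector: the XOR output
    stays high until the delayed input copy crosses the gate threshold,
    t = R * C * ln(Vdd / (Vdd - Vth))."""
    return r_ohms * c_farads * math.log(vdd / (vdd - vth))

# Hypothetical R1/C4 values chosen only to reproduce the ~6 ns figure:
print(f"{xor_pulse_width(1e3, 8.7e-12) * 1e9:.1f} ns")  # ~6.0 ns
```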

The hearts of both IP and OG are simple but fast timing loops in which a very fast monostable flip-flop is forced by feedback from an op-amp integrator to generate 50:50 square waves. (Yup. Jim’s idea again.)

My variation on Jim’s basic timing loop concept consists of U3’s two D type flip-flops and the surrounding components, including Schottky switching diodes D1 and D2, current sink transistors Q1 and Q2, and timing capacitors C1 and C2. Because the two loops are essentially identical, let’s talk about the OG loop.

Each timing sequence begins when U1pin8 delivers a clock pulse to U3pin3. U3 is positive-edge-triggered and responds by driving U3pin6 low. This disconnects D2 from timing cap C2 and allows current sink Q2 to ramp it down toward the switching threshold of U3pin4 (-SET).

The timing interval thus begun has a duration (~10 ns to 500 µs) determined by Q2’s collector current as controlled in turn by integrator A2. The intent is to force the interval to be accurately 50% of the time between U1pin8 pulses. A2 does this by subtracting the 2.5 V reference developed by the R6R7 voltage divider from the pulse train at U2pin13 and accumulating the averaged difference on feedback capacitor C6. 

If the duty cycle at U2pin13 is <50%, indicating that the U3 timeout is too long, A2’s output will ramp up, increasing Q2’s collector current and C2 ramp rate, thereby making the timeout shorter. If it’s >50%, A2 will ramp down, decreasing IcQ2 and lengthening the timeout. Net result: after a few seconds, U2pin13 will output an accurately 50:50 square wave at 2 or 4 times (depending on J1 J2) the input frequency.

Provided, of course, that said frequency is within the limits of the timing loop.

The high end of said frequency range is mainly limited by the propagation delays of U3, Q2, and D2. These sum to about 10 ns (maybe a smidgeon less) and thus limit the max frequency to ~1/(10 ns + 10 ns) = ~1/20 ns = ~50 MHz (or possibly a bit more). The low end is limited by leakage currents (mainly through D2) that can cause C2 to continue to ramp down even when A2 turns Q2 completely off. This leakage can sum to upwards of 10 nA (especially if the diode is warm) and sets a bottom-end interval of ~1 ms and a temperature-dependent minimum frequency of (very) roughly ~1/(1 ms + 1 ms) = ~1/2 ms = ~500 Hz.
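
That frequency-range arithmetic is easy to parameterize. A minimal Python sketch using only the propagation-delay and leakage-limited-interval figures quoted above:

```python
def timing_loop_limits(t_prop_s: float = 10e-9,
                       t_leak_limited_s: float = 1e-3) -> tuple[float, float]:
    """Usable frequency range of the 50:50 timing loop: each timed
    half-period can be no shorter than the U3/Q2/D2 propagation delay
    (~10 ns) and, with ~10 nA of leakage on C2, no longer than ~1 ms."""
    f_max = 1.0 / (2 * t_prop_s)          # ~50 MHz
    f_min = 1.0 / (2 * t_leak_limited_s)  # ~500 Hz
    return f_min, f_max

f_min, f_max = timing_loop_limits()
print(f"~{f_min:.0f} Hz to ~{f_max / 1e6:.0f} MHz")  # ~500 Hz to ~50 MHz
```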

OG output is routed through U2pins 6 and 8 and summed by R12 and R13 to produce a convenient 5 Vpp, ~50 Ω output. If no input is provided, the output shuts down at zero volts, preventing overheating of U2.

An additional detail is A3. It serves as an IP duty-cycle comparator that holds OG timing-loop activity disabled until the IP has converged (or nearly so) and is producing an accurate 50:50 pulse train. This avoids the erratic and persistent confusion of the OG feedback loop that can occur if it’s allowed to try to converge prematurely.

It was indeed a fun project—all things being “equal”. Thanks, Jim!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Nearly 100 submissions have been accepted since his first contribution back in 1974.

Related Content


The post A 50 MHz 50:50-square wave output frequency doubler/quadrupler appeared first on EDN.

How Apple’s recent news increases GaN and USB-C ubiquity

Tue, 09/12/2023 - 22:45

Apple announced today that its new iPhone 15 will have a USB-C charging port instead of the long-running proprietary Lightning port. This shift aligns with European Union law requiring devices to adopt a standard charging connection by December 2024. Under the law, by the end of 2024, all mobile phones, tablets, and cameras sold in the EU must be equipped with a USB Type-C charging port, and from spring 2026, the obligation will extend to laptops.

The new law is part of a broader EU effort to reduce e-waste and to empower consumers to make more sustainable choices. According to the European Commission, these new obligations may help consumers save up to 250 million euros annually on unnecessary charger purchases. The disposal of unused chargers accounts for about 11,000 tons of e-waste annually.

In an era marked by innovation and sustainability, this change by Apple represents a significant step in the world of charging technology. Paired with the capabilities of gallium nitride (GaN) transistors in chargers, we see a future free from the inconvenience of hunting for compatible chargers or struggling with a tangle of cables combined with a significant reduction in e-waste.

Charging complexity: costly and environmentally harmful

Until recently, consumers had to carry multiple chargers and multiple types of cables, creating inconvenience and clutter along with higher costs and environmental concerns. For instance, laptop PCs typically required dedicated chargers with 19-V DC outputs, while Android-based phones used USB-C connectors and iPhones relied on Lightning ports. Each connector is unique, requiring consumers to carry multiple cables and chargers to charge their devices.

Additionally, different devices require different power levels, most often tailored to the charging device, resulting in 5-20 W chargers for phones, 30-45 W chargers for tablets, and 65-100 W chargers for laptop PCs. Many of us carried three or more chargers and several cables to match the device’s USB, Lightning, or barrel-style charging port.

Streamlined charging—finally!

Here are the pivotal advancements that have simplified the charging landscape:

  • Standardized USB-C charge port inputs and cables: Most common devices, including laptops, tablets, handheld gaming consoles, and smartphones, have transitioned to USB-C charging inputs. This shift to a standard connector type simplifies charging. With USB-C becoming the standard across various device categories, consumers no longer need to carry multiple types of cables or worry about interoperability and compatibility. Apple’s change to USB-C charging for its phones, which it has already made for its laptops and tablets, closes the last major exception to USB-C charging ubiquity. Adopting USB-C as a universal connector means that only one type of cable is required to charge various devices. Some people may still carry a second cable to charge multiple devices simultaneously, but the need for a diverse collection of cables for different devices disappears.

  • A single GaN-powered charger: The EU mandate aligns with another development that makes charging more convenient: GaN technology. GaN addresses the need for a single charger that is both versatile and efficient. GaN chargers are smaller and more efficient than traditional chargers, making them ideal for the wide range of devices people use and carry. Whether it is a smartphone, tablet, notebook PC, or gaming device, consumers can utilize a single GaN charger that will charge each device—providing the versatility that enables consumers to lighten the weight of their backpack, reduce the number of chargers cluttering their homes, and decrease content going into landfills.

GaN addresses the challenge of consumers needing a single, versatile charger by offering an energy-efficient high-power output resulting in miniature chargers compatible with all devices in day-to-day use. GaN simplifies the charging experience and has become critical in creating a more sustainable and user-friendly ecosystem.

A simpler, more sustainable solution

Standardizing device charging input ports simplifies our lives, saves us money, and reduces electronic waste. The pervasiveness of USB-C charge ports dovetails nicely with the mainstream adoption of GaN-powered chargers, enabling users to need only a single charger that is smaller, faster, and more environmentally friendly. Together, these developments promise a greener and more convenient future for consumers, where charging requires only one cable and one charger for all our devices.

Paul Wiener is the VP of strategic marketing at GaN Systems.

Related Content


The post How Apple’s recent news increases GaN and USB-C ubiquity appeared first on EDN.

Why deep memory matters

Tue, 09/12/2023 - 16:41

The role of deep memory in oscilloscope performance

When it comes to oscilloscopes, bandwidth, sample rate, and memory depth are consistently cited as the three most important specifications. Memory depth determines the amount of waveform data that can be captured and stored for analysis. A larger memory depth allows for longer time durations to be captured and preserves more waveform details. This is especially beneficial when analyzing complex or intermittent signals, capturing rare events, or performing in-depth analysis and troubleshooting.

Acquisition memory and memory depth

Acquisition memory, also referred to as memory depth, refers to the number of samples an oscilloscope can store with each acquisition. It is determined by multiplying the sample rate (in Sa/s) by the time captured. For example, an acquisition memory of 1 Mpts means that an oscilloscope acquires one million samples in a single acquisition on one channel. Even if multiple channels each acquire the same amount of memory simultaneously, the acquisition memory specification is still 1 Mpts.
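
A worked example of that multiplication, with illustrative numbers of my own (not from the text above):

```python
def acquisition_memory(sample_rate_sa_s: float, time_captured_s: float) -> float:
    """Memory depth (points) = sample rate (Sa/s) x time captured (s)."""
    return sample_rate_sa_s * time_captured_s

# Capturing 1 ms at 1 GSa/s on one channel requires 1 Mpts:
print(acquisition_memory(1e9, 1e-3))  # 1000000.0
```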

Oscilloscopes typically come with a predetermined amount of base acquisition memory as part of the standard purchase. In the past, manufacturers produced different versions of hardware with specific memory capacities. However, in the early 2000s, manufacturers adopted a more cost-effective approach by creating a single hardware platform with the deepest available memory, which could be enabled through software licensing.

This approach allowed users to start with a lower-priced oscilloscope that had less memory and then license additional memory as their needs evolved. It is important to refer to a datasheet to determine whether a specified memory value is the base memory included with the instrument or the maximum value associated with an additional paid option.

The benefits of increased memory are evident, but there’s a catch: more memory means increased processing requirements, which can result in slower overall operation.

Memory depth is not a static value

When purchasing any instrument, you may discover that a few attractive specifications are mutually exclusive. This also applies to oscilloscope manufacturers and their acquisition memory depth specifications. A promoted memory depth may not be available under certain scope settings due to tradeoffs within the oscilloscope architecture.

One common tradeoff, for instance, involves the number of channels. Oscilloscopes have a fixed amount of acquisition memory shared across channels, reference waveforms, and other functions. The maximum memory specification is generally applicable when half of the analog channels are active but decreases by a factor of two for each active channel that shares the same processing and storage path. For example, the memory depth may be 4 Mpts when only channel 1 is active but drops to 2 Mpts per channel when channel 2 is turned on.
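
A simplified model of that sharing behavior, matching the 4 Mpts example above (real instruments vary by architecture):

```python
def memory_per_channel(pool_pts: float, channels_on_path: int) -> float:
    """Per-channel memory when a fixed acquisition-memory pool is split
    among active channels sharing one processing/storage path."""
    return pool_pts / channels_on_path

print(memory_per_channel(4e6, 1))  # 4000000.0 -- only channel 1 active
print(memory_per_channel(4e6, 2))  # 2000000.0 -- channel 2 turned on
```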

Deep memory

The definition of deep memory varies among oscilloscope vendors and has evolved over time. Early digital oscilloscopes had memory measured in hundreds of points, which increased to Kpts in the 1990s. Modern oscilloscopes typically offer memory depths in the tens to hundreds of Mpts. Vendors may still claim their instruments have deep memory even when the claim is dated: an older model that was considered deep when new may offer 100 times less memory depth than current competitors.

Deep memory benefits various types of embedded hardware testing, and it is especially useful when capturing long time intervals. Deep memory allows you to isolate an observable problem and trace it back to its source. It also helps in solving complex issues related to electromagnetic interference (EMI) and crosstalk.

Serial buses like I2C, SPI, RS-232, CAN or LIN, which are commonly used for digital designs, can be analyzed more effectively with deep memory. While protocol triggers assist in troubleshooting, visibility across multiple bursts or packets of data often requires capturing a larger time span. However, there are tradeoffs between reducing the sample rate to capture more time (which risks under-sampling the bus) and using segmented memory (which limits analysis capabilities and inter-segment visibility).

In applications that require further analysis, it is crucial to capture as much information as possible and analyze it afterwards. Oscilloscope tools and analysis applications, as well as offloading captured data to MATLAB or Python scripts, can facilitate in-depth analysis and insights.

Memory, sample rate, and bandwidth

Memory depth, sample rate, and bandwidth are closely related specifications. Having more memory allows users to capture a specific amount of time or extend the capture time at a given sample rate:

Memory = (Sample rate) * (Time captured).

With an increased memory depth, users can retain the needed sample rate or even use a faster sample rate for acquiring a combination of slow and fast signals. However, there is a limit to the memory depth when capturing more time by adjusting the timebase to a slower setting. Beyond this limit, the instrument must reduce its sample rate, potentially leading to undersampled signals and aliasing. This leads to invalid measurement results, and users may not realize that the sample rate is insufficient for the rated bandwidth of their oscilloscope.

Oscilloscopes do not provide notifications when the sample rate is inadequate for the rated bandwidth, making it challenging to identify undersampling or aliasing issues. Oscilloscopes with limited sample rate exacerbate this challenge.

More memory allows the user to capture extended time intervals while preserving an adequate sample rate for fast signals. When a user aims to capture a longer time interval, scopes with limited memory will inevitably reach their maximum memory capacity. Consequently, the instrument makes a tradeoff, decreasing the sample rate to accommodate the specified time period. This means that a shallow acquisition memory can result in a sample rate that is too low to correctly capture a signal (see Figure 1).

Figure 1 More memory means the instrument can capture more time without reducing sample rate. Source: Rohde & Schwarz
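
The tradeoff in Figure 1 can be stated directly: once the requested capture time would exceed available memory, the achievable sample rate drops to memory divided by time. A short sketch with hypothetical instrument numbers:

```python
def effective_sample_rate(max_rate_sa_s: float, memory_pts: float,
                          time_captured_s: float) -> float:
    """Sample rate the scope can sustain for a requested capture time:
    the lesser of its maximum rate and memory / time."""
    return min(max_rate_sa_s, memory_pts / time_captured_s)

# Hypothetical 5 GSa/s scope capturing 10 ms: 10 Mpts forces the rate
# down to 1 GSa/s (risking aliasing); 100 Mpts sustains the full rate.
print(effective_sample_rate(5e9, 10e6, 10e-3))   # 1000000000.0
print(effective_sample_rate(5e9, 100e6, 10e-3))  # 5000000000.0
```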

Most oscilloscopes come with a default memory limit to prevent performance issues when deep memory is enabled. For instance, one vendor may have a default limit of 10 Kpts while another may have a limit of 10 Mpts, even though both scopes offer higher maximum memory capacities.

Users need to manually adjust the scope settings to change the default limit and utilize more memory. This typically involves accessing a manual acquisition setting dialog where they can control parameters like sample rate.

Some oscilloscopes may not allow independent control over sample rate, timebase, and memory depth, leading to a frustrating user experience with limited workarounds. It is generally advisable to choose oscilloscopes that allow the user to independently control all three settings and offer the advantage of capturing off-screen acquisitions.

Segmented memory

Most oscilloscopes offer segmented memory mode, either as a standard feature or as an optional upgrade. This mode is useful for capturing signals with inactive periods, such as serial buses or RF chirps. In segmented memory mode, the oscilloscope saves memory space by capturing only the active parts of the signal. This enables single-shot captures across a longer time duration compared to continuous acquisition.

Segmented mode does not compensate for shallow memory. The amount of memory available per segment equals maximum memory divided by the number of segments. As shown in Figure 2, oscilloscopes with greater memory depth offer more powerful segmented memory, allowing for more segments, longer time per segment or higher overall sample rates.

Figure 2 Segmented memory saves only a capture window around the trigger event for more efficient memory utilization. More memory means more segments, more sample rate, or more time with each segment. Source: Rohde & Schwarz
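
Because per-segment memory equals maximum memory divided by the number of segments, the capture time available per segment follows directly. A small sketch with hypothetical numbers:

```python
def per_segment(max_memory_pts: float, segments: int,
                sample_rate_sa_s: float) -> tuple[float, float]:
    """Points and seconds of capture available in each segment."""
    pts = max_memory_pts / segments
    return pts, pts / sample_rate_sa_s

# 100 Mpts split into 1000 segments at 1 GSa/s leaves 100 kpts,
# i.e. 100 microseconds of capture, per segment:
pts, secs = per_segment(100e6, 1000, 1e9)
print(f"{pts:.0f} pts, {secs * 1e6:.0f} us per segment")
```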

It is important to note that segmented memory mode has some tradeoffs. It is a single-shot acquisition and does not work well in RUN repetitive mode. In addition, viewing the measurement results involves moving through multiple acquisition screens, and analysis across segments often has limitations.

Serial bus decode and memory

Oscilloscopes equipped with serial bus triggering and decode applications are useful for debugging and testing. However, the correlation between deep memory and the number of captured packets can be difficult to determine. Each protocol requires a certain sample rate for correct decoding, and deep memory with protocol decode can strain the oscilloscope’s processing resources, making it sluggish.

Next-generation oscilloscopes, such as the R&S MXO 4, solve this problem with smart architectures. For example, an oscilloscope could have a duplicate sample path for protocol decode, enabling dual-path protocol analysis (see Figure 3). This technique allows a high number of packets to be decoded correctly, even with a slower analog sample rate. It also stores protocol packets as packets, which results in a much smaller packet information database, meaning that the instrument will have a faster update rate and better responsiveness.

Figure 3 R&S MXO 4 Series oscilloscopes have a dual-path protocol analysis. A separate packet decode memory means a more responsive instrument with a deterministic number of maximum packets that can be captured. Source: Rohde & Schwarz

The vital role of memory depth

Memory depth plays a vital role in oscilloscope performance, enabling the capture of longer time intervals and preserving waveform details. It enhances hardware testing and analysis by facilitating the decoding of serial bus protocols and providing a comprehensive view of signals. However, there are tradeoffs to consider, such as increased processing requirements and potential tradeoffs with the number of channels. Therefore, a careful comparison of manufacturer specifications is essential to make an informed decision when selecting the right oscilloscope for your needs.   

Joel Woodward is an oscilloscope product planner at Rohde & Schwarz.

Related Content


The post Why deep memory matters appeared first on EDN.

Surprise, Apple’s in-house 5G modem won’t be in iPhone 15

Tue, 09/12/2023 - 16:05

Apple’s in-house cellular modem chip, which has been in the news for a few years, won’t be inside the iPhone 15 to be launched today. Instead, the Cupertino, California-based company has signed a three-year supply pact with leading 5G modem chipmaker Qualcomm covering iPhones through 2026.

Figure 1 Qualcomm’s modem chip supports all commercial 5G bands from 600 MHz to 41 GHz, allowing FDD-TDD and mmWave-sub-6 aggregation along with standalone mmWave.

That suggests Apple designers have hit a snag in their efforts to build an in-house 5G modem chip. The company has had more success on the computing side; it has successfully replaced Intel processors with homegrown chips in its Macs over the past few years without many hitches. Even on the communications side, Apple is confident it will replace Broadcom’s Wi-Fi and Bluetooth chips with in-house devices by 2025.

However, cellular modem chips have proven to be a different story. Though a modem entails common building blocks like RF, developing a good one has been quite hard. Just ask Intel, which sold its cellular modem unit to Apple in 2019 for nearly $1 billion. That’s when the news broke about Apple’s ambitious plans to build in-house cellular modem chips.

However, according to news reports that followed this story, Apple had started working on homegrown modem chips in 2018 and then bought Intel’s modem business so it wouldn’t have to start from scratch. In 2020, Apple’s semiconductors chief, Johny Srouji, called this undertaking a “key strategy transition.” Some industry analysts even expected Apple’s 5G modem chip to be incorporated into the iPhone 15 to be launched today.

There’s been so much hype around this in-house modem chip that DigiTimes reported early this year about ASE Technology and Amkor Technology competing to package the modem chips after they are manufactured at TSMC. Even the CEO of Qualcomm, the number one supplier of cellular modem chips, anticipated that Apple’s modem chips would be ready by 2024.

Qualcomm has stakes in this game. According to UBS analysts, 21% of its 2022 revenues came from supplying chips to Apple, amounting to $7.6 billion. Qualcomm claims to have the best smartphone modem chips, which are crucial in 5G phones when it comes to cellular speed and power efficiency.

Figure 2 The iPhone 14 Pro and iPhone 14 Pro Max feature a Snapdragon X65 modem chip. Source: Qualcomm

While having its own modem chips will save Apple a lot of money and give it more control over firmware, it seems to have proven more challenging than originally expected. A 5G modem chip must reliably connect to various networks without impacting the phone battery. It must also be certified by various wireless regulatory authorities.

So, while Apple’s work on a custom 5G modem will certainly carry on, it’s not yet ready for the crucial tasks in iPhone 15. In fact, some industry reports mirror Qualcomm chief Cristiano Amon’s view about Apple’s in-house modem chip. These reports suggest that Apple’s homegrown modem chip will be used in the iPhone SE 4, which will likely be released around March 2024.

If the iPhone SE 4 doesn’t support mmWave bands for 5G, it will lower the performance and energy efficiency bar on Apple’s homegrown modem chip. And the company will also be able to test its caliber for future iPhone launches.

Related Content


The post Surprise, Apple’s in-house 5G modem won’t be in iPhone 15 appeared first on EDN.

What does Flanders Semiconductors stand for?

Mon, 09/11/2023 - 18:43

In a season of chip industry-centric initiatives, here comes Flanders Semiconductors, a non-profit organization that aims to create a new European hub for semiconductor innovation in the Flemish region. Its founding members include BelGan, Caeleste, Cochlear, easics, ICsense, NXP, Pharrowtech, Sofics, and Spectricity.

These companies have joined hands to create a unified platform that can represent the semiconductor industry at every level. That includes infrastructure, equipment, materials, processing, testing, and devices.

Figure 1 Flanders Semiconductors aims to represent member companies’ local, regional, and international interests.

The Flemish semiconductor industry employs over 3,000 people, with more than 50 companies having semiconductors as their core business and over 100 firms defining, testing, and integrating semiconductor devices or technologies. The idea is to have an outfit that can foster collaboration, drive innovation, and catalyze growth within the semiconductor industry in this region.

Flanders Semiconductors, based in Leuven, Belgium, will have imec as its high-tech neighbor, which is expected to be a vital support to this aspiring semiconductor technology hub. Then there are local facilities inside universities and research institutes that will likely support Flanders with their semiconductor R&D, education, and training.

Flanders Semiconductors, while open to all qualifying companies with semiconductors as their primary business, is also welcoming universities, R&D organizations, and non-qualifying companies as associate members. The unveiling of the association is set for 13 September 2023 in Leuven.

Flanders Semiconductors aims to grow the talent pool, share industry roadmaps, maintain a yearly business events calendar, and represent members’ interests at the international level. The new organization itself boasts a dedicated team of seasoned semiconductor professionals led by Lou Hermans, who has over three decades of industry expertise.

Figure 2 Lou Hermans, president of Flanders Semiconductors, is a European semiconductor industry veteran. Source: Flanders Semiconductors

Flanders Semiconductors is focused on semiconductor design and complementary areas like materials, processing, and testing. More importantly, it’d most likely want to build a presence on top of the progress that its high-tech neighbor imec has made in advanced semiconductor technologies and translate that into creating a viable design ecosystem for the Flemish semiconductor business.

Related Content


The post What does Flanders Semiconductors stand for? appeared first on EDN.

Playin’ with Google’s Pixel 7

Mon, 09/11/2023 - 16:58

Back in December 2021, I told you about the smartphones I was prepping to transition to next in my periodic end-of-support forced-replacement sequence: a pair of Google Pixel 4a 5G handsets, one for my “day job” mobile  account (Verizon) and the other for my personal account (AT&T). I ended up actualizing that aspiration, at least halfway…as readers who subsequently perused my October 2022 5G “rant” may remember, a Pixel 4a 5G ended up assigned to my Verizon phone number (with another in storage as its all-important spare).

For AT&T, on the other hand (and as regular readers may also recall), I ended up going with a first-generation Microsoft Surface Duo dual-screen foldable.

Fast-forward a year-plus, and both phones are nearing the end of their guaranteed-update lives: the Surface Duo drops off Microsoft’s support list this very month (as you read this; I’m writing these words in early August), with the Pixel 4a 5G following it in November. So, it’s time for another periodic end-of-support forced-replacement cadence, although the subsequent one will hopefully be much further in the future than has historically been the case.

I’ve gone with a pair of 128GB Google Pixel 7 smartphones this time, the first “Obsidian” in color and the second “Snow” (not my preferred tint, but it was on sale for $100 less than its also-on-sale “Obsidian” sibling at the time, and I’ll have a case on it anyway). I’ve already activated and transitioned to them, actually, back in mid-July, followed by donations of their predecessors to charity, timed to potential recipients’ back-to-school preparation needs:

The bargain-shopping story of how I obtained the first “Obsidian” one is intriguing, at least to me, so I hope you’ll indulge a brief diversion. My wife had bought me a Pixel 6 back in July of 2022 as an early-anniversary present, on sale for $474.05. A couple of months later, Google introduced the Pixel 7 line, and for reasons that still escape me (although I suspect that they had to do at least in part with the Pixel 6 generation’s chronically buggy cellular subsystem and Google’s desire to move users to the improved successor in order to reduce its support-cost burden) offered aggressive trade-in pricing: $490, which yes, is less than we paid for it. The Pixel 7 was $599 (its original MSRP) at the time, so our out-of-pocket cost was only $109. Not bad!

Here’s how the Pixel 6 and Pixel 7 stack up against each other, as well as compared to my Pixel 6a “spare” and the Pixel 4a precursor (“Pro” versions of both the Pixel 6 and 7, which I haven’t included in this table, offer larger screens and more elaborate rear camera subsystems):

 

| Spec | Pixel 4a (5G) | Pixel 6a | Pixel 6 | Pixel 7 |
| --- | --- | --- | --- | --- |
| Price | $499 | $449 | $599/$699 | $599/$699 |
| Storage | 128GB | 128GB | 128/256GB | 128/256GB |
| DRAM | 6GB | 6GB | 8GB | 8GB |
| Size | 6.06 x 2.91 x 0.32 in (153.9 x 74 x 8.2 mm) | 5.99 x 2.83 x 0.35 in (152.2 x 71.8 x 8.9 mm) | 6.24 x 2.94 x 0.35 in (158.6 x 74.8 x 8.9 mm) | 6.13 x 2.88 x 0.34 in (155.6 x 73.2 x 8.7 mm) |
| Weight | 5.93 oz (168 g) | 6.28 oz (178 g) | 7.30 oz (207 g) | 6.95 oz (197 g) |
| Screen | 6.2” OLED (83% screen-to-body ratio), 2340 x 1080 pixels (416 PPI), 60 Hz refresh | 6.1” OLED (83% screen-to-body ratio), 2400 x 1080 pixels (429 PPI), 60 Hz refresh | 6.4” OLED (83.4% screen-to-body ratio), 2400 x 1080 pixels (411 PPI), 90 Hz refresh | 6.3” OLED (84.9% screen-to-body ratio), 2400 x 1080 pixels (416 PPI), 90 Hz refresh |
| SoC (and lithography) | Qualcomm Snapdragon 765G (7 nm) | Google Tensor (5 nm) | Google Tensor (5 nm) | Google Tensor G2 (4 nm) |
| CPU | Octa-core (1×2.4 GHz Kryo 475 Prime & 1×2.2 GHz Kryo 475 Gold & 6×1.8 GHz Kryo 475 Silver) | Octa-core (2×2.80 GHz Cortex-X1 & 2×2.25 GHz Cortex-A76 & 4×1.80 GHz Cortex-A55) | Octa-core (2×2.80 GHz Cortex-X1 & 2×2.25 GHz Cortex-A76 & 4×1.80 GHz Cortex-A55) | Octa-core (2×2.85 GHz Cortex-X1 & 2×2.35 GHz Cortex-A78 & 4×1.80 GHz Cortex-A55) |
| GPU | Adreno 620 | Mali-G78 MP20 | Mali-G78 MP20 | Mali-G710 MP7 |
| NPU | Hexagon 696 | Tensor (G1) | Tensor (G1) | Tensor (G2) |
| Battery capacity | 3,885 mAh | 4,410 mAh | 4,614 mAh | 4,355 mAh |
| Cellular data (most advanced) | 5G (sub-6, mmWave Verizon-only) | 5G (sub-6 and C-Band, mmWave Verizon-only) | 5G (sub-6 and C-Band, mmWave Verizon-only) | 5G (sub-6 and C-Band, mmWave Verizon-only) |
| Front camera | 8 MP, f/2.0, 24mm (wide), 1/4.0″, 1.12µm | 8 MP, f/2.0, 24mm (wide), 1.12µm | 8 MP, f/2.0, 24mm (wide), 1.12µm | 10.8 MP, f/2.2, 21mm (ultrawide), 1/3.1″, 1.22µm |
| Rear camera(s) | 12.2 MP, f/1.7, 27mm (wide), 1/2.55″, 1.4µm; 16 MP, f/2.2, 107˚ (ultrawide), 1.0µm | 12.2 MP, f/1.7, 27mm (wide), 1/2.55″, 1.4µm; 12 MP, f/2.2, 17mm, 114˚ (ultrawide), 1.25µm | 50 MP, f/1.9, 25mm (wide), 1/1.31″, 1.2µm; 12 MP, f/2.2, 17mm, 114˚ (ultrawide), 1.25µm | 50 MP, f/1.9, 25mm (wide), 1/1.31″, 1.2µm; 12 MP, f/2.2, 114˚ (ultrawide), 1/2.9″, 1.25µm |
| Wireless charging | No | No | Yes | Yes |
| Dust/water resistance | No | IP67 | IP68 | IP68 |
| Analog headphone jack | Yes | No | No | No |
| Fingerprint sensor | Rear-mounted | Under-display | Under-display | Under-display |
| Introduction date | September 2020 | May 2022 (available July 2022) | October 2021 | October 2022 |
| End-of-support date | November 2023 | July 2025 (Android updates), July 2027 (security updates) | October 2024 (Android updates), October 2026 (security updates) | October 2025 (Android updates), October 2027 (security updates) |

Usage and other observations follow, both in general and related to specific features listed in this table, and in no particular order save how they streamed out of my noggin:

  • So far, I really like the Pixel 7. This isn’t surprising, for at least a couple of reasons:
    • It’s well reviewed, along with its “Pro” big brother, not to mention its Pixel 6a and (especially) 7a siblings, and
    • Per common practice, I began using it around nine months after its initial introduction, which gave Google plenty of time to squelch any initial bugs
  • I’ve long reiterated in multiple writeups the high value I attach to the ability to comfortably fit a smartphone in my front jeans pocket. The only reason I tolerated the Surface Duo, for example, was that when folded up (whether when not in use or when leveraging a wireless earbud or headset while on a call) it was modestly svelte. Note that the Pixel 7 is nearly identical in size to the Pixel 4a 5G, and is tangibly smaller than its Pixel 6 forebear.
  • I’m admittedly surprised at how little I miss the analog headphone jack. Then again, USB-C headphone adapters are generally modest in price and solid in quality.
  • It’s nice to have NFC support on my personal smartphone again; this feature had been missing from my first-generation Surface Duo. I started using Google Pay wherever possible instead of a credit card during the height of COVID, and the habit has stuck.
  • I hadn’t found the Pixel 4a 5G to be performance-deficient in any notable regard; then again, I’m not a “gamer” or otherwise a demanding smartphone user. That said, the Pixel 7 is noticeably “snappier” than its predecessor, although I doubt my perception has much if anything to do with the higher display refresh rate (a topic discussed further in another of my EDN blog posts this month).
  • My biggest frustration with the Pixel 7 so far, albeit modest-at-worst in the grand scheme of things, is its less-than-reliable in-display optical fingerprint sensor. I’d already anticipated this shortcoming from reviews I’d perused prior to purchasing; in fairness, the Pixel 7 supposedly works better than the Pixel 6 in this regard, and both handset generations worked better after Google added a “Screen Protector” enhanced-sensitivity mode in the settings. Plus, the front camera-based face unlock generally works well as an alternative although, since it relies on a conventional image sensor instead of Apple’s infrared “LiDAR” approach, it’s not usable in dim light or after dark.
  • Speaking of screen protectors, I also proactively avoided the worst aspects of the in-display fingerprint sensor “feature” by going with a PET (polyethylene terephthalate) film-based one instead of the tempered glass ones I’d used in the past. PET still prevents scratches, although unlike tempered glass, it won’t shoulder the “crack” burden of a more severe phone drop. Tempered glass screen protectors, especially unless they’re extremely thin (which defeats the purpose, yes?), apparently give in-display fingerprint sensors fits. All other factors equal, I’d still prefer the dedicated rear fingerprint sensors I’ve used in the past (my longstanding reliance on their quicker response, I’m increasingly realizing, is part of the problem; in-display sensors work much better when I dial down my impatience and wait an extra fraction of a second for them to do their thing!).
  • I suspect Google’s getting away from dedicated rear-mounted fingerprint sensors both for BOM cost reasons and because they complicate the overall system design, considering that the wireless charging circuitry is also on the rear of the phone. Wireless charging is something I’ve admittedly dissed in the past due to its comparative inefficiency versus legacy wired charging. That said, ironically, I use Qi almost exclusively now, delivered by first-generation Pixel Stand chargers, and for a reason I hadn’t previously comprehended: it saves the wear and tear that the phone’s USB-C connector would otherwise endure due to repeated insertion-and-removal cycles.
  • Last but not least, I had to laugh at myself the first time I fired up my personal-account Pixel 7 after transferring the AT&T SIM from the Surface Duo to it. I saw it reporting 5G and thought I’d caught a break with the carrier, until I squinted (presbyopia, don’cha know) and noticed the small “E” at the end of the symbol. This “5GE” is AT&T’s relabeled enhanced-LTE scam that I discussed in my October 2022 piece. That said, the reason I’m not getting “true” 5G on AT&T is admittedly a bit obscure; I continue to cling to a “grandfathered” true unlimited-data (with no throttling) cellular plan that I’ve had for over a decade, and while the carrier previously upgraded the plan from 3G to LTE capabilities, the same isn’t true for 5G. And re. my “work” phone: on Verizon it supports not only the “sub-6” (below 6 GHz) 5G bands of its Pixel 4a 5G predecessor, but also emerging C-band frequencies, although, since neither handset is a Verizon carrier-locked variant, not mmWave (UWB).

This all said, of course, rumors of Google’s upcoming Pixel 8 successor family are beginning to reach critical mass, with an introduction (if, in contrast to financial disclosures’ qualifiers, past performance is a guarantee of future results) roughly two months from now as I write these words. That said, however, thanks to Google’s extended five-year support guarantee with its latest smartphone families (the result, I suspect in no small part, of Google’s self-developed SoCs, therefore the company’s greater control of its software destiny) I don’t anticipate upgrading beyond the Pixel 7 any time soon. Feel free to chide me for having written these words a year or few down the road when I’m lusting after some new smartphone offering 😉. Sound off with your Pixel 7 (or other mobile device) thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Playin’ with Google’s Pixel 7 appeared first on EDN.

SK hynix’s memory chips next in Huawei’s 5G phone saga

Fri, 09/08/2023 - 17:15

Last week’s relatively low-key launch of Huawei’s Mate 60 Pro 5G phone is still making waves for China’s breakthrough in cutting-edge semiconductor manufacturing technology with a 5G system-on-chip (SoC) produced on SMIC’s 7-nm process node, especially notable given that SMIC doesn’t have access to extreme ultraviolet (EUV) lithography equipment.

Figure 1 Huawei’s Mate 60 Pro 5G phone is powered by the Kirin 9000s chip fabricated on SMIC’s 7-nm process node. Source: Bloomberg

While trade media has constantly been digging in for more details, social media in China is celebrating this breakthrough in semiconductor technology. According to Dan Hutcheson, vice-chair of TechInsights, nearly two-thirds of silicon in Huawei’s 5G phone is homegrown, and it’s a major advance.

The Ottawa, Canada-based TechInsights has been examining parts of Huawei’s Mate 60 Pro 5G phone since its launch in the first week of September. However, while the discovery of Huawei’s Kirin 9000s 5G chip manufactured at SMIC’s 7-nm process node was initially making rounds in trade media, there was another surprise in store.

The next saga

Soon after uncovering the SMIC-made 7-nm SoC in Huawei’s Mate 60 Pro 5G phone, TechInsights discovered the presence of SK hynix’s 12 GB LPDDR5 chip and 512 GB NAND flash chip inside the handset. In fact, some early users of the 5G phone also posted videos of the phone containing NAND flash memory chips manufactured by the Icheon, South Korea-based SK hynix.

SK hynix immediately responded that the company abides by the U.S. government’s export restrictions and no longer does business with Huawei. It also announced that it’s starting an investigation to find more details.

Figure 2 TechInsights’ teardown shows that Huawei’s Mate 60 Pro phone has used SK hynix’s LPDDR5 and NAND flash memory chips. Source: Bloomberg

It’s plausible that Huawei purchased the memory chips on the secondary market. Industry insiders don’t even rule out the possibility that Huawei stockpiled memory chips just before the U.S. export curbs kicked in.

Huawei’s smartphone business was disrupted back in 2019 when the United States began restrictions on technology exports to the Shenzhen, China-based tech giant for the risk of chip technology being diverted for military end-use. Now, when the world is wondering where these memory chips came from, there are also jitters about Huawei’s ability to produce a 5G phone with mostly China-made components.

Double-edged sword

Huawei’s 5G smartphone saga cuts both ways. On one hand, Huawei’s ability to produce a 7-nm SoC in collaboration with SMIC has demonstrated sound technical progress without SMIC having access to EUV lithography tools. In fact, there is talk about SMIC having violated the U.S. sanctions by supplying the 7-nm manufacturing technology to Huawei.

On the other hand, there is a reckoning about China’s tech self-sufficiency, which could potentially impact the commercial interests of U.S. companies, especially when U.S. semiconductor houses like Qualcomm and Nvidia have been arguing for fewer sanctions to tame China’s motivation for semiconductor technology breakthroughs.

TechInsights’ Hutcheson notes that China has been able to stay 2 to 2.5 nodes behind leading fabs like TSMC and Samsung. But he also reminded us that people once thought China would be stopped at the 14-nm process node.

In a Reuters story, Hutcheson also talked about SMIC’s 7-nm process yield, which is considered below 50% by some research firms compared to the industry norm of 90% or more. In his view, “above 50%” is reasonable because Huawei’s 5G chip has been manufactured in a cleaner fashion. He thinks it’s far more competent than the 7-nm chip SMIC produced for a Bitcoin miner last year.

For now, it’s benefitting Huawei: its new 5G phones are enjoying brisk sales in China and winning accolades on Chinese social media. But will they be able to compete with 5G phones built around 3-nm SoCs manufactured in TSMC and Samsung fabs? Time will tell.

Related Content


The post SK hynix’s memory chips next in Huawei’s 5G phone saga appeared first on EDN.
