EDN Network

Voice of the Engineer

Computer upgrades: Motivations, hiccups, outcomes, and learnings

Mon, 02/26/2024 - 16:30

I habitually, admittedly, hold onto computers far longer than I should, in the spirit of “if it ain’t broke, don’t fix it” (not to mention “a penny saved is a penny earned”). What I repeatedly forget, in the midst of this ongoing grasping, is that while the computer I’m clinging to might originally have been speedy, sizeable and otherwise “enough” for my needs, the passage of time inevitably diminishes its capabilities. Some of this decline is the result of the inevitable “cruft” it accumulates as I install and then upgrade and/or uninstall applications and their foundation operating systems, as well as the data files I create using them (such as the Word file I’m typing into now). I also fiscally-conveniently overlook, for example, that newer operating system and application revisions make ever-increasing demands on the computer hardware.

Usually, what compels me to finally make the “leap of faith” to something new is some variant of utter desperation: either the existing hardware has been (or will soon be) dropped from the software support list or a software update has introduced a bug that the developer has decided not to fix. Today’s two case studies reflect both of these scenarios, and although the paths to the replacement systems were bumpy, the outcomes were worth the effort (not to mention everything I learned along the way). So much so, in fact, that I’ve got another upgrade queued for the upcoming Christmas holiday next week (as I write these words in mid-December 2023). Wonder how long I’ll wait to update next time?

The 2020 (Intel-based) 13” Apple Retina MacBook Pro (RMBP)

This one had actually been sitting on my shelf for more than a year, awaiting its turn in my active-computer rotation, ever since I saw it on sale brand new and open-box discounted at Small Dog Electronics’ website for $1,279.99. When I found out that this particular unit also came with AppleCare+ extended warranty coverage good through mid-May 2025, therefore representing a nearly $1,000 discount from the new-from-Apple total price tag, I pulled the purchase trigger.

It represents the very last iteration of Intel-based laptops from Apple, introduced in May 2020. Why x86, versus Apple Silicon-based? I went for it due in part to its ability to run not only MacOS but also Windows, either virtualized or natively, although I also have a 13” M1 MacBook Air (also open-box, also from Small Dog Electronics, and with similar RAM and SSD capacities: keep reading) in queued inventory for whenever Apple decides to drop x86 support completely.

This high-end RMBP variant, based on a 2.3 GHz quad-core Intel Core i7 “Ice Lake” CPU, includes four Thunderbolt 3 ports, two on each side, versus the two left-side-only ports of lower-end models. It also integrates 16 GBytes of RAM and a 512 GByte SSD. Unlike its 2016-2019 “butterfly” keyboard precursors, it returns to the reliable legacy “scissors” keyboard (this actually was key—bad pun intended—for me) that Apple amusingly rebranded as the “Magic Keyboard”. Above the keyboard is the Touch ID authentication sensor, alongside the nifty (at least to me), now-deprecated Touch Bar. And thankfully, Bluetooth audio support in MacOS 12 “Monterey” for Zoom and other online meeting and webinar apps now works again.

Normally, I’d restore a Time Machine backup, originating from the old machine, to the new one to get me going with the initial setup. But at the time, I was more than 1,000 miles away from my NAS, at my relatives’ house for the Thanksgiving holidays. Migration Assistant was a conceptual alternative, although from what I’ve heard it’s sometimes more trouble than it’s worth. Instead, particularly with my earlier “cruft” comment in mind, I decided to just start from scratch with software reinstalls. That said, I still leveraged a portable drive along with my relatives’ Wi-Fi to copy relevant data files from the old to new machine.

The process was slow and tedious, but the outcome was a solid success. I can still hear the new system’s fan fire up sometimes (a friend with an Apple Silicon system mocks me mercilessly for this), but the new machine’s notably faster than its predecessor. Firefox, for example, thankfully is much snappier than it was before. And speaking of Mozilla applications, I was able to migrate both my Firefox and Thunderbird profiles over intact and glitch-free; the most I ended up having to do was to manually disable and re-enable my browser extensions to get them working again, along with renaming my device name in the new computer’s browser settings for account sync purposes. Oh, and since the new system’s not port-diversity-adorned like its precursor, I also had to assemble a baggie of USB-C “dongles” for USB-A, HDMI, SD cards, wired Ethernet…sigh.

The Microsoft Surface Pro 7+ (SP7+) for business

This next one shouldn’t be surprising to regular readers, as I telegraphed my intentions back in early November. The question you may have, however, is why did I tackle the succession now? For the earlier-discussed MacBook Pro, the transition timing is more understandable, as its early-2015 predecessor will fall off Apple’s O/S-supported hardware list in less than a year. Its performance slowdowns were becoming too noticeable to ignore. And the Bluetooth audio issues I started having after its most recent major O/S upgrade were the icing on the cake.

The Surface Pro 5 (SP5), on the other hand, runs Windows 10, for which Microsoft has promised full support until at least mid-October 2025, longer if you pay up. Its overheating-induced clock throttling was annoying but didn’t occur that often. And although its RAM and SSD capacity limitations were constraining, I could still work around them. Part of the answer, frankly, ties back to how smoothly the RMBP replacement had gone; it tempted me to tackle the SP7+ upgrade sooner than I otherwise would. And another part of the answer is that I wanted to be able to donate both legacy systems to charity while they were still supported and more generally could still be useful to someone else with less demanding use cases. Specifically, I hoped to wrap up both upgrades early enough to get the precursor computers to EChO for pass-along, so that they could in turn be wrapped up by their recipients as Christmas presents for others.

Once again, I did “clean” installs of my suite of applications to the SP7+. This strategy, versus an attempted “clone” of the old system’s mass storage contents, was even more necessary in this case because the two computers ran different operating systems (Windows 10 Pro vs Windows 11 Pro). And again, the process was slow but ultimately successful. That said, the overall transition was more complicated this time, due to what I tackled before the installs even started. As I’d mentioned back in November, one of the particularly appealing attributes of the SP7+ (and SP8, for that matter) versus the SP5 is that their SSDs (like that in my Surface Pro X) are user-accessible and -replaceable. What I did first, therefore, after updating Windows 11 and the driver suite to most current versions, was to clone the existing drive image in the new system to a larger-capacity replacement, initially installed in an external enclosure.

Here’s the 256 GByte m.2 2230 SSD that the system came with, complete with its surrounding heatsink, post-clone and removal:

And here’s the 1 TByte replacement, Samsung’s PM991a (PCIe 3.0-based, to allay any excess-energy consumption concerns):

before cloning the disk image to it and installing it (absent a heatsink or thermal tape, but it still seemingly works fine) in place of its precursor:

As you can probably tell from the sticker on one side, it wasn’t new-as-advertised. But it had been only lightly used (and the bulk of that was from me, doing multiple full- and quick-format cycles on it for both initial testing and failed-clone-recoveries) so I kept it:

First step, the clone. I’d thought this might be complicated a bit by the fact that, since the system was running the Pro version of Windows 11, (potentially performance-sapping) BitLocker drive encryption was enabled by default. Fortunately, however, my cloning utility of choice (Macrium Reflect Free, which I’ve long recommended) was able to handle the clone as-is just fine, even on a booted O/S with an active partition, although it warned me afterwards that the image on the SSD containing the clone would be unencrypted. Fast forwarding for a moment, I made sure to archive a copy of the existing SSD’s encryption key before doing the swap, in case I ever needed to use it again. The new SSD came up auto-re-encrypted by Windows on first boot, I didn’t need to re-activate the O/S, and I archived its BitLocker key, too, for good measure.
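The article doesn’t say how the key was archived; Windows will show it through the BitLocker control panel or the built-in manage-bde command-line tool. For anyone who prefers to script that step, here’s a minimal sketch. The helper name and output path are my own, and it assumes an elevated prompt on the Windows machine:

import subprocess

# Hypothetical helper (not from the article): dump the BitLocker key
# protectors for a volume to a file so the recovery key can be archived
# before a drive swap. Uses Windows' built-in manage-bde tool and must be
# run from an elevated prompt.
def archive_bitlocker_protectors(volume="C:", out_path="bitlocker-protectors.txt"):
    result = subprocess.run(
        ["manage-bde", "-protectors", "-get", volume],
        capture_output=True, text=True, check=True,
    )
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(result.stdout)
    return out_path

if __name__ == "__main__":
    print(f"Saved to {archive_bitlocker_protectors()}")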

The other—hardware—aspect of the clone was more problematic. Here’s the enclosure that I used to temporarily house the new SSD, Orico’s TCM2-C3, which I bought back in February 2020 and have been using trouble-free for a variety of external-tether purposes ever since:

This time was different. I initially tried tethering the new SSD-inclusive enclosure to the SP7+ via the USB-C to USB-C cable that came with the enclosure, but shortly after each cloning operation attempt started, I’d get an obscure “Error Code 121 – The semaphore timeout period has expired” abort message from Macrium Reflect. Attempts to reformat the SSD before trying the clone again were also inconsistent, sometimes succeeding, other times not due to spontaneous disconnects. Eventually, I got everything to work by instead using the slower but more reliable USB-A to USB-C cable that also came with the enclosure. Is my USB-C to USB-C cable going bad? Or is something amiss with the USB-C transceiver in the system or the enclosure? Dunno.

Once I booted up the computer with the new SSD inside, I ran into two other issues. The first was that the initial O/S partition, which had been hidden on the original SSD, was now visible and had been assigned the C: drive letter. A dive into Windows’ Disk Management utility got this glitch sorted out.

The other quirk, which I’d encountered before, was that the new SSD still self-reported as 256 GBytes in size, the same capacity as its predecessor. Disk Management showed me the sizeable unused partition on the new SSD, which I’d normally be able to expand the main O/S partition into. In this particular case it wasn’t able to do so, though, because the two partitions were non-contiguous; in-between them was 650 Mbyte hidden Windows Recovery partition. I could have just deleted that one, although it would have complicated any subsequent if-needed recovery attempt. Instead, I used another slick (and gratis) utility, MiniTool’s Partition Wizard, to relocate the recovery partition to the “end”, thereby enabling successful O/S partition expansion:

And as hoped-for, the SP7+ is fully compatible with my full suite of existing SP5 accessories:

What’s next?

Requoting what I said upfront in this piece:

I’ve got another upgrade queued for the upcoming Christmas holiday.

It’s my “late 2014” Mac mini, which I’d transitioned to fairly recently, in mid-2021, for similar obsolescence reasons.

Like the early 2015 13” RMBP, it’s not scheduled to exit O/S support until mid-to-late 2024, but it’s becoming even more performance-archaic (due in part to its HDD-centric Fusion Drive configuration). Its replacement will be a 2018 Mac mini, also x86-based, whose specific configuration is “interesting” (I got a great deal on it, explaining why I went with it): a high-end 3.2 GHz Intel Core i7 CPU, coupled with 32 GBytes of RAM but only a 128 GByte SSD (which I plan to augment via external storage). Stand by for more details to come in a future post. And until then, I’m standing by for your thoughts on this piece in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Computer upgrades: Motivations, hiccups, outcomes, and learnings appeared first on EDN.

An MCU test chip embeds 10.8 Mbit STT-MRAM memory

Fri, 02/23/2024 - 14:51

A prototype MCU test chip with a 10.8 Mbit magnetoresistive random-access memory (MRAM) cell array—fabricated on a 22-nm embedded MRAM process—claims to achieve a random read access frequency of over 200 MHz and a write throughput of 10.4 MB/s at a maximum junction temperature of 125°C.

Renesas, which developed circuit technologies for this embedded spin-transfer torque MRAM (STT-MRAM) test chip, presented details about it on February 20 at the International Solid-State Circuits Conference 2024 (ISSCC 2024) held on 18-22 February in San Francisco. The Japanese chipmaker has designed this embedded MRAM macro to bolster read access and write throughput for high-performance MCUs.

Figure 1 The MCU test chip incorporates a 10.8-Mbit embedded MRAM memory cell array. Source: Renesas

Microcontrollers in endpoint devices are expected to deliver higher performance than ever, especially in Internet of Things (IoT) and artificial intelligence (AI) applications. Here, the CPU clock frequencies of high-performance MCUs are in the hundreds of MHz, and to achieve greater performance, read speeds of embedded non-volatile memory need to be increased to minimize the gap between them and CPU clock frequencies.

However, MRAM has a smaller read margin than the flash memory used in conventional MCUs, which makes high-speed read operation more difficult. At the same time, MRAM is faster than flash memory for write performance because it requires no erase operation before performing write operations. That’s why shortening write times is desirable not only for everyday use but also for cost reduction of writing test patterns in test processes and writing control codes by end-product manufacturers.

Renesas has developed circuit technologies for an embedded STT-MRAM test chip with fast read and write operations to address this design conundrum.

Faster read and write

First, take MRAM reading, which is generally performed by a differential amplifier or sense amplifier to determine whether the memory cell current or the reference current is larger. But because the difference in memory cell currents between the 0 and 1 states—the read window—is smaller for MRAM than for flash memory, the reference current must be precisely positioned in the center of the read window for faster reading.

So, Renesas introduces two mechanisms to achieve faster read speed. First, it aligns the reference current in the center of the window according to the actual current distribution of the memory cells for each chip measured during the test process. Second, it reduces the offset of the sense amplifier.

Another challenge that Renesas engineers have overcome relates to conventional configurations, where the circuits used to keep the bitline voltage from rising too high during read operations introduce large parasitic capacitance. Because this parasitic capacitance slows the reading process, Renesas has introduced a cascode connection scheme to reduce it and speed up reading. That allows design engineers to realize random read operation at frequencies of more than 200 MHz.

Next, for write operation, it’s worth mentioning that Renesas announced in December 2021 that it has improved write throughput by applying write voltage simultaneously to all bits in a write unit using a relatively low write voltage generated from the external voltage (I/O power) of the MCU through a step-down circuit. Then, it used a higher write voltage only for the remaining few bits that could not be written.

Figure 2 In late 2021, Renesas announced an increase in the write speed of an STT-MRAM test chip manufactured on a 16-nm node.

Now, because power supply conditions used in test processes and by end-product manufacturers are stable, Renesas has relaxed the lower voltage limit of the external voltage. As a result, by setting a higher step-down voltage from the external voltage to be applied to all bits in the first phase, write throughput can be improved 1.8-fold. A faster write speed will contribute to more efficient code writing in endpoint devices.

Test chip evaluation

The prototype MCU test chip combines the above two enhancements to offer a 10.8 Mbit MRAM memory cell array fabricated using a 22-nm embedded process. The evaluation of the prototype chip validated that it achieved a random read access frequency of over 200 MHz and a write throughput of 10.4 MB/s.
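For a rough sense of scale, here is my own back-of-the-envelope conversion of those headline figures (not numbers published by Renesas, and treating Mbit/MB as decimal units): rewriting the full array at the quoted throughput takes on the order of 130 ms, and a 200-MHz random read corresponds to a 5-ns cycle budget.

# Back-of-the-envelope conversions of the published figures (10.8 Mbit
# array, 10.4 MB/s write throughput, >200 MHz random reads).
ARRAY_MBIT = 10.8
WRITE_MB_PER_S = 10.4
READ_FREQ_MHZ = 200

array_bytes = ARRAY_MBIT * 1e6 / 8                      # ~1.35 MB
full_rewrite_ms = array_bytes / (WRITE_MB_PER_S * 1e6) * 1e3
read_cycle_ns = 1e3 / READ_FREQ_MHZ                     # period at 200 MHz

print(f"Full-array rewrite: ~{full_rewrite_ms:.0f} ms")       # ~130 ms
print(f"Random-read cycle budget: {read_cycle_ns:.1f} ns")    # 5.0 ns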

The MCU test chip also contains 0.3 Mbit of one-time programmable (OTP) memory that uses MRAM cell breakdown to prevent falsification of data. That makes it capable of storing security information. However, writing to OTP requires a higher voltage than writing to MRAM, which makes it more difficult to perform writing in the field, where power supply voltages are often less stable. Here, Renesas suppressed parasitic resistance within the memory cell array, which in turn, makes writing in the field possible.

Renesas has vowed to further increase the capacity, speed, and power efficiency of MRAM.

Related Content


The post An MCU test chip embeds 10.8 Mbit STT-MRAM memory appeared first on EDN.

BLDC motor driver prolongs battery life

Thu, 02/22/2024 - 20:58

A three-phase BLDC motor driver, the AOZ32063MQV from AOS, offers an input voltage range of 5 V to 60 V and 100% duty cycle operation. The IC enables efficient motor operation, while its low standby power helps extend the battery life of cordless power tools and e-bikes.

The AOZ32063MQV drives three half-bridges consisting of six N-channel power MOSFETs for three-phase applications. It has a high-side sink current of 1 A and a maximum source current of 0.8 A. A power-saving sleep mode lowers current consumption to just 1 µA.

Along with an integrated bootstrap diode, the driver provides adjustable dead-time control and a fault indication output. Onboard protection functions include input undervoltage, short-circuit, overcurrent, and thermal shutdown. The device operates over a temperature range of -40°C to +125°C.

Housed in a 4×4-mm QFN-28L package, the AOZ32063MQV costs $1.55 in lots of 1000 units. It is available in production quantities, with a lead time of 24 weeks.

AOZ32063MQV product page

Alpha & Omega Semiconductor 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post BLDC motor driver prolongs battery life appeared first on EDN.

100-V MLCC is among the industry’s smallest

Thu, 02/22/2024 - 20:58

Murata expands its GJM022 series of high-Q multilayer ceramic capacitors (MLCCs) with a 100-V device that is just 0.4×0.2 mm (L×W). The MLCC is intended for high-frequency module applications, such as those used in cellular communication infrastructure.

Exhibiting high-Q, low-loss performance, the miniature capacitor enables electronic engineers to overcome packaging limitations, while maintaining optimal performance. A high-temperature guarantee also gives designers greater positioning freedom. The MLCC helps ensure reliable long-term operation, even in close proximity to power semiconductors that radiate heat.

The GJM022 can be used for a wide variety of applications, including impedance matching and DC cutting within RF modules for base stations. In such implementations, the capacitor’s high-Q value and low equivalent series resistance (ESR) contribute to improving power amplifier efficiency and lowering power consumption.

Engineering samples of the GJM022 100-V chip capacitor are available in limited production. The product will move to fully stocked production in the next several weeks. A datasheet for the device was not available at the time of this announcement. For information on the GJM series of high-Q MLCCs, click the product page link below.

GJM series product page  

Murata

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post 100-V MLCC is among the industry’s smallest appeared first on EDN.

Development kit pairs RISC-V and FPGA

Thu, 02/22/2024 - 20:58

The PolarFire SoC Discovery Kit from Microchip makes RISC-V and FPGA design accessible to a wider range of embedded engineers. This low-cost development platform allows students, beginners, and seasoned engineers alike to leverage RISC-V and FPGA technologies for creating their designs.

The Discovery Kit is built around the PolarFire MPFS095T SoC FPGA, which embeds a quad-core RISC-V processor that supports Linux and real-time applications. It also packs 95,000 FPGA logic elements. A large L2 memory subsystem can be configured for performance or deterministic operation and supports an asymmetric multiprocessing mode.

An embedded FP5 programmer is included for FPGA fabric programming, debugging, and firmware development. The development board also provides a MikroBUS expansion header for Click boards and a 40-pin Raspberry Pi connector, as well as a MIPI video connector. Expansion boards are controlled using protocols like I2C and SPI. 

The PolarFire SoC Discovery Kit costs $132 for the general public and $99 when purchased through Microchip’s Academic Program. Production kit shipments are expected to commence mid-April 2024.

PolarFire SoC Discovery Kit product page

Microchip Technology

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Development kit pairs RISC-V and FPGA appeared first on EDN.

SP4T switches offer high isolation up to 8.5 GHz

Thu, 02/22/2024 - 20:57

pSemi announced production readiness of two UltraCMOS SP4T RF switches that operate from 10 MHz to 8.5 GHz with high channel isolation. According to the manufacturer, the PE42445 and PE42446 switches integrate seamlessly into 4G and 5G base stations and massive MIMO architectures. They can provide digital pre-distortion feedback loops and transmitter monitoring signal paths to prevent interference and maintain signal integrity.

 

Both the PE42445 and PE42446 offer >60 dB isolation at 4 GHz and operate over an extended temperature range of -40°C to +125°C. Additionally, the devices provide low insertion loss across the band, high linearity of 65 dBm IIP3, and a fast switching time of 200 ns. The SP4T switches are manufactured on the company’s UltraCMOS process, a variation of silicon-on-insulator technology.

The PE42445 comes in a 3×3-mm, 20-lead LGA package, while the PE42446 is housed in a 4×4-mm, 24-lead LGA package. Sales inquiries can be submitted using the product page links below.

PE42445 product page

PE42446 product page 

pSemi

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post SP4T switches offer high isolation up to 8.5 GHz appeared first on EDN.

MRAM macro speeds read/write operations

Thu, 02/22/2024 - 20:57

Renesas presented an embedded MRAM macro in an MCU test chip at ISSCC 2024 that delivers a random-read access frequency of over 200 MHz. The test chip also exhibited a write throughput of 10.4 Mbytes/s.

The company developed two high-speed circuit technologies to achieve faster read and write operations in spin-transfer torque magnetoresistive RAM (STT-MRAM). A prototype MCU test chip, fabricated using a 22-nm process, combined the two technologies with a 10.8-Mbit MRAM memory cell array. Evaluation of the prototype chip confirmed the high-speed results at a maximum junction temperature of 125°C.

Advancements in read technology have enabled Renesas to achieve what it claims is the world’s fastest random read access time of 4.2 ns. Even taking into consideration the setup time of the interface circuit that receives the MRAM output data, the company was able to realize random read operation at frequencies in excess of 200 MHz. Further, improved write technology can improve MRAM write throughput 1.8-fold.
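Those two figures are mutually consistent; a quick check (my arithmetic, not a published spec) shows how much of a 200-MHz read cycle the 4.2-ns access consumes and what remains for the interface setup time mentioned above.

# Timing budget implied by the published numbers: a 4.2 ns random access
# inside a 200 MHz (5 ns) read cycle. The "remaining" margin is simply the
# arithmetic remainder, not a Renesas-published setup-time specification.
access_ns = 4.2
cycle_ns = 1e3 / 200
print(f"Cycle: {cycle_ns:.1f} ns, access: {access_ns} ns, "
      f"remaining for interface setup: {cycle_ns - access_ns:.1f} ns")  # ~0.8 ns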

For greater detail, read the complete press release here.

Renesas Electronics 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post MRAM macro speeds read/write operations appeared first on EDN.

Logic Probe has a wide voltage range

Thu, 02/22/2024 - 16:53

The logic probe is powered from the device under test (DUT), which may be any binary logic powered in the range of +2 V to +6 V. This may be a microcontroller or 74/54-series logic chips, including HC/HCT chips.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The probe determines 3 conditions: 

  • Logical 0
  • Logical 1
  • Undefined (this may be a Z-condition, or bad contact).

It also features a counter, which is very handy when you want to count pulses, estimate a frequency, or test an interface. (This part is shown as a sketch.)

The probe in Figure 1 consists of two Schmitt triggers: the upper trigger in the figure determines logical 0, and the lower trigger determines logical 1.

Figure 1 The logic probe with two Schmitt triggers where the upper determines logical 0 and the lower determines logical 1.

Two different colors were selected: 

  • Blue for logical 0
  • Red for logical 1

Since the blue LED demands more than 2 V, a slightly modified “joule-thief” circuit on Q2 is used to increase the voltage. The transformer has two windings with an inductance ranging from 80 to 200 µH; if the windings are not equal, the larger one should be connected to the collector. (The author used a tiny transformer from an old ferrite memory, but any coil with an added winding over it can do.)

If you choose a green or red LED instead of blue, the “joule-thief” circuit can be eliminated, and the LED connected between the upper terminal of R5 and “+A”.

Due to the wide supply voltage range, the current through the LEDs can increase 100% or more. Since the LEDs are quite bright, some control of brightness is desirable. This is performed by U3, Q3, and two diodes, which can decrease the LEDs’ supply by 1.4 V.
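To see why, here is a rough illustration of how much the current through a simple resistor-fed LED can swing over the probe’s supply range. The series-resistor value, LED forward voltage, and switch saturation voltage below are assumptions chosen for the example, not values from the published schematic.

# Illustrative only: LED current vs. supply voltage for a resistor-fed LED
# driven by a saturated transistor switch. Component values are assumed for
# the example and are not taken from the published design.
def led_current_ma(vcc, vf=1.8, r_ohms=470, vce_sat=0.2):
    return max(vcc - vf - vce_sat, 0.0) / r_ohms * 1e3

for vcc in (3.0, 4.5, 6.0):
    print(f"Vcc = {vcc:.1f} V -> ~{led_current_ma(vcc):.1f} mA")
# ~2.1 mA at 3 V vs. ~8.5 mA at 6 V: a roughly 4x swing, which is why some
# form of brightness control (U3, Q3, and the two diodes) is worthwhile.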

Note, the 74HC14 can be used instead of the 74HC132 almost everywhere in the circuit.

Peter Demchenko studied math at the University of Vilnius and has worked in software development.

Related Content


The post Logic Probe has a wide voltage range appeared first on EDN.

Making waves: Engineering a spectrum revolution for 6G

Wed, 02/21/2024 - 16:56

6G is looking to achieve a broad range of goals, in turn requiring an extensive array of technologies. Like 5G, no single technology will define 6G. The groundwork laid out in the previous generation will serve as a starting point for the new one. As a distinct new generation, though, 6G will also break free from previous ones, including 5G, by introducing new concepts. Among them, new spectrum technologies will help the industry achieve complete coverage for 6G.

Tapping into new spectrum

Looking back, every generation of cellular technology looks to leverage new spectrum. 6G won’t be an exception, with the emergence of new use cases and more demand for high-speed data. As a result, 6G needs to deliver much higher data throughputs than 5G, making millimeter-wave (mmWave) bands extremely attractive.

New frequency bands under consideration for 6G include those between 100 and 300 GHz, often called sub-terahertz (sub-THz) bands. There is also interest in the upper mid-band—the spectrum between 7 and 24 GHz—because of its lower propagation loss compared to sub-THz bands, particularly between 7 and 15 GHz.
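To put the propagation-loss comparison in rough numbers, free-space path loss alone already separates the two bands by roughly 20 dB. This is a simplified sketch: the example frequencies and distance are illustrative, and real links add blockage and atmospheric absorption on top of this.

import math

def fspl_db(freq_hz, dist_m, c=3e8):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

distance_m = 100.0  # illustrative link distance
for label, freq in (("upper mid-band (15 GHz)", 15e9),
                    ("sub-THz (140 GHz)", 140e9)):
    print(f"{label}: {fspl_db(freq, distance_m):.1f} dB at {distance_m:.0f} m")
# ~96 dB vs. ~115 dB: about 19-20 dB more free-space loss at 140 GHz, before
# blockage or atmospheric absorption is even considered.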

The upper mid-band presents regulatory challenges, though, and is used by various entities, including governments and satellite service providers. However, some bands could work for mobile communications with the implementation of more advanced spectrum-sharing techniques. Figure 1 provides an overview of the frequencies allocated for mobile and wireless access in this spectrum.

Figure 1 An overview of frequency allocation for mobile and fixed wireless access in the upper mid-band. Source: Radio Regulations, International Telecommunication Union, 2020

While these frequencies have been used for a variety of applications outside of cellular, channel sounding is needed to characterize the use of this spectrum in 6G to ensure it provides the benefits for the targeted 6G application.

The 7 to 24 GHz spectrum is a key area of focus for RAN Working Group 1 (RAN1) within the Third Generation Partnership Project (3GPP) for Release 19, which will be finalized in late 2025 and will facilitate the transition from 5G to 6G.

Scaling with ultra-massive MIMO

Over time, wireless standards have continued to evolve to maximize the bandwidth available in various frequency bands. Multiple-input multiple-output (MIMO) and massive MIMO technologies were major enhancements for radio systems with a significant impact for 5G. By combining multiple transmitters and receivers and using constructive and destructive interference to beamform information toward users, MIMO significantly enhanced performance.

6G can improve on this further. MIMO is expected to scale to thousands of antennas to provide greater data rates to users. Data rates are expected to grow from single gigabits per second to hundreds of gigabits per second. Ultra-massive MIMO will also enable hyper-localized coverage in dynamic environments. The target for localization precision in 6G is 1 centimeter, a significant leap over 5G’s 1 meter.

Interacting with signals for better range and security

Reconfigurable intelligent surfaces (RIS) also represent a significant development for 6G. Currently, this technology is the focus of discussions at the 3GPP and the European Telecommunications Standards Institute (ETSI).

Using high-frequency spectrum is essential to achieve greater data throughputs, but this spectrum is prone to interference. RIS technology will play a key role in addressing this challenge, helping mmWave and sub-THz signals overcome the high free-space path loss and blockage of high-frequency spectrum.

RISs are flat, two-dimensional structures that consist of three or more layers. The top layer comprises multiple passive elements that reflect and refract incoming signals, enabling data packets to go around large physical obstacles like buildings, as illustrated in Figure 2.

Figure 2 RISs are two-dimensional multi-layer structures where the top layer consists of an array of passive elements that reflect/refract incoming signals, allowing the sub-THz signals used in 6G to successfully go around large objects. These elements can be programmed to control the phase shift, steering the signal into a narrow beam directed at a specific location. Source: RIS TECH Alliance, March 2023

Engineers can program the elements in real time to control the phase shift enabling the RIS to reflect signals in a narrow beam to a specific location. With the ability to interact with the source signal, RISs can increase signal strength and reduce interference in dense multi-user environments or multi-cell networks, extending signal range and enhancing security.

Going full duplex

Wireless engineers have tried to enable simultaneous signal transmission and reception for years to drive a step-function increase in capacity for radio channels. Typically, radio systems employ just one antenna to transmit and receive signals, which requires the local transmitter to deactivate during reception or transmit on a different frequency to be able to receive a weak signal from a distant transmitter.

Duplex communication requires either two separate radio channels or splitting up the capacity of a single channel, but this is changing with the advent of in-band full duplex (IBFD) technology, which is currently under investigation in 3GPP Release 18. IBFD uses an array of techniques to avoid self-interference enabling the receiver to maintain a high level of sensitivity while the transmitter operates simultaneously on the same channel.

Introducing AI/ML-driven waveforms

New waveforms are another exciting development for 6G. Despite widespread use in cellular communications, the signal flatness of orthogonal frequency division multiplexing (OFDM) creates challenges with wider bandwidth signals in radio frequency amplifiers. Moreover, the integration of communication and sensing into a single system, known as joint communications and sensing (JCAS), also requires a waveform that can accommodate both types of signals effectively.

Recent developments in AI and machine learning (ML) offer the opportunity to reinvent the physical-layer (PHY) waveform that will be used for 6G. Integrating AI and ML into the physical layer could give rise to adaptive modulation, enhancing the power efficiency of communications systems while increasing security. Figure 3 shows how the physical layer could evolve to include ML for 6G.

Figure 3 The proposed migration to an ML-based physical layer for 6G to enhance both the power efficiency and security of the transmitter and receiver. Source: IEEE Communications Magazine, May 2021.

Towards complete coverage

6G is poised to reshape the communications landscape pushing cellular technology to make a meaningful societal impact. Today, the 6G standard is in its infancy with the first release expected to be Release 20, but research on various fronts is in full swing. These efforts will drive the standard’s development.

Predicting the demands of future networks and which applications will prevail is a significant challenge, but the key areas the industry needs to focus on for 6G have emerged, new spectrum technologies being one of them. New spectrum bands, ultra-massive MIMO, reconfigurable intelligent surfaces, full duplex communication, and AI/ML-driven waveforms will help 6G deliver complete coverage to users.

Jessy Cavazos is part of Keysight’s Industry Solutions Marketing team.

Related Content


The post Making waves: Engineering a spectrum revolution for 6G appeared first on EDN.

The chiplet universe is coming: What’s in it for you?

Wed, 02/21/2024 - 12:53

There’s a lot of talk and excitement about chiplets these days, but there’s also a lot of confusion. What is available today? What should I expect in terms of interoperability? Is the promise of an emerging ecosystem real? More fundamentally, developers of high-end systems-on-chip (SoCs) need to consider a central question: “What’s in it for me?” The answer, unsurprisingly, varies depending on the type of application and the target market for these devices.

For the last few years, I have been closely monitoring the multi-die market, and I’ve been talking to a wide variety of players ranging from chip designers to chip manufacturers to end users of our system IP product offering. Although commentators and stakeholders accurately describe key benefits of chiplet technology, I’ve observed that these descriptions are rarely comprehensive and often lack structure.

Here is an outline of chiplet driving factors and the size of the opportunity per vertical. Source: Arteris

As a result, I felt the need to identify common themes, reflect on their importance for future deployment and map them on the key industry verticals. This blog aims to summarize these insights in a diagram (see figure above), with the hope that it is useful to you.

  1. Scalability: The key to meeting diverse computing demands

Scalability stands at the forefront of the chiplet revolution. Traditional monolithic chip designs face physical and economic limits as they approach the boundaries of Moore’s Law. Chiplets, however, offer a modular approach. By combining smaller, discrete components or “chiplets,” manufacturers can create larger, more powerful processors.

This modular design allows for the easy scaling of performance and functionality to meet the specific needs of various applications. This is what drove the early adoption of the technology by pioneering companies in the enterprise compute vertical. Today, it also attracts players in the communications and automotive industries, which also crave higher computing power, particularly for AI applications.

  2. Cost efficiency: Lowering expenses and increasing competitiveness

Cost efficiency is another critical factor driving the adoption of chiplets. Traditional chip fabrication, especially at the cutting edge, is exceedingly expensive, with costs escalating as transistors shrink. The chiplet approach mitigates these costs in several ways.

First, it allows for the use of older, more cost-effective manufacturing processes for certain components. Second, by constructing a processor from multiple smaller chiplets, manufacturers can significantly reduce the yield loss associated with defects in large monolithic chips.

If part of a chiplet is defective, it doesn’t render the entire chip unusable, as would be the case with a traditional design. This translates directly into cost savings, making high-performance computing more accessible. This aspect is especially critical for cost-sensitive sectors such as wireless communications, consumer electronics, and industrial applications.
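A simple Poisson die-yield model makes the yield argument concrete. The die areas and defect density below are illustrative values of my own, not figures from the article.

import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Classic Poisson die-yield model: Y = exp(-A * D0)."""
    return math.exp(-area_cm2 * defects_per_cm2)

D0 = 0.1                                # defects per cm^2 (illustrative)
mono_yield = poisson_yield(8.0, D0)     # one 800 mm^2 monolithic die
chiplet_yield = poisson_yield(2.0, D0)  # each of four 200 mm^2 chiplets

print(f"800 mm^2 monolithic die yield: {mono_yield:.0%}")    # ~45%
print(f"200 mm^2 chiplet yield:        {chiplet_yield:.0%}") # ~82%
# Because chiplets are tested before assembly (known-good die), a defect
# scraps one small chiplet rather than one large, expensive die.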

  3. Ecosystem development: Fostering collaboration and innovation

The shift to chiplets also encourages the development of a more collaborative and innovative ecosystem in the semiconductor industry. With chiplets, different companies can specialize in various types of computing hosts and accelerators, contributing their expertise to a larger whole.

This openness can lead to a more vibrant ecosystem, as smaller players can innovate in specific areas without the overhead of designing entire chips. Such collaboration could accelerate technological advancements, benefiting newcomers in the automotive and consumer electronics vertical, for instance, and leading to more rapid iterations and improvements in technology.

  4. Portfolio management: A strategic approach to product development

Finally, the transition to chiplets allows companies to manage their product portfolios more effectively. With the ability to mix and match different chiplets, a company can more quickly and efficiently adapt its product offerings to meet market demands. This flexibility enables faster response times to the emerging trends and customer needs, providing a competitive edge.

Additionally, the ability to reuse chiplets across multiple products can streamline research and development, reducing time-to-market and R&D expenses. The flexibility to mix and match chiplets for different configurations makes it easier to tailor chips to specific market segments and is particularly suited to the needs of the consumer and automotive markets.

Overall, the chiplet architecture is poised to revolutionize the semiconductor industry, with each sector finding unique value in its capabilities. This tailored approach ensures that chiplets will play a critical role in driving forward the technological advancements of each industry vertical.

Guillaume Boillet, senior director of product management and strategic marketing at Arteris, drives the product lifecycle of the interconnect IP and SoC integration automation portfolios.

 

Related Content


The post The chiplet universe is coming: What’s in it for you? appeared first on EDN.

Power Tips #126: Hot plugging DC/DC converters safely

Tue, 02/20/2024 - 18:44

In power converters, the input capacitors are fed through inductive cabling to the power source. Parasitic inductance will cause ringing of the input voltage to almost twice its DC value when first plugged into the system, also called hot plugging. An insufficiently damped power converter input and a lack of inrush control can damage the converter.

Using input bulk electrolytic capacitors to dampen the input voltage of the off-battery converters can prevent excessive voltage ringing when first applying battery power, while also preventing resonances that can destabilize the converter. With the move to 24 VIN and 48 VIN systems from the traditional 12 V automotive battery, the need to properly dampen the input becomes even more important. 12 V battery systems typically use components rated for 40 V or higher to survive short-duration voltage spikes under load-dump conditions. The maximum DC voltage for these 12 V systems can reach 18 VDC. Hot plugging can cause input ringing with the voltage nearing twice the input, such as 36 V. That is still well below the ratings of 40-V (or higher) components. However, in a 48 V system where steady-state input voltages can reach 54 V, ringing on the input can potentially exceed 100 V, damaging components rated for 80 V.

With traditional 12 V systems, one often assumes the damping capacitors have enough effective series resistance (ESR) to tame the resonance. But, with low-cost aluminum electrolytic capacitors, the actual effective ESR is generally much lower than the published maximum, resulting in much less damping and much more ringing when applying battery power. With 12 V systems, the reduced damping may still be enough to prevent destabilization of the downstream DC/DC, and the ringing will not cause damage. However, in 48 V systems that are more vulnerable to ringing, you can add discrete resistors in series with the input damping capacitors. Based on steady-state ripple currents, a size 0603 (1608 metric) should suffice.

In Figure 1, L1 and C1 values from an existing DC/DC converter’s input filter create a resonance that is expressed by Equation 1:
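For a series inductance L1 feeding a shunt capacitance C1, this is presumably the familiar LC resonant frequency:

$$f_{res} = \frac{1}{2\pi\sqrt{L_1 C_1}}$$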

We chose the target damping capacitor (Cd) and damping resistance (Rd), based on the TI E2E™ design support forums technical article, “Damping input bead resonance to prevent oscillations”. Cd should ideally be at least three times C1. We chose a 150 µF standard value for Cd.

Equation 2 expresses the target damping resistance:
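The referenced TI article sizes Rd around the characteristic impedance of the L1/C1 filter; under that reading (my interpretation, not a verbatim reproduction of the original equation):

$$R_d \approx \sqrt{\frac{L_1}{C_1}}$$

For the 10 µH inductance used later in this article and C1 values consistent with the 150-µF (at least 3x C1) choice of Cd, this lands in the neighborhood of the 0.5 Ω used here.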

For damping resistor (Rd), add two paralleled 1 Ω resistors in series with Cd.

Figure 1 A simplified input filter with damping to prevent excessive voltage ringing when first applying battery power, while also preventing resonances that can destabilize the converter.

Figure 2 shows the simulated hot-plug response both without and with the added 0.5Ω damping resistor in series with Cd.

Figure 2 Simulation of hot plugging without and with the 0.5-Ω damping resistor in series with Cd.

We achieved damping of the input filter by using the correct damping resistor and capacitor combination. There is one aspect, however, that is easy to overlook. In the lab, we experienced the destruction of the damping resistor (Rd) when hot plugging to the supply. What we realized is that the damping resistor has a peak power expressed by Equation 3:
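From the 54-V, 1-Ω, and roughly 2,900-W figures quoted next, this is presumably the worst-case dissipation with the full input step appearing across each individual resistor R:

$$P_{peak} = \frac{V_{IN}^2}{R}$$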

For our 1 Ω resistors across 54 V, that would be about 2,900 W peak in each resistor. Furthermore, the resistor dissipates nearly the same energy as that stored in the damping capacitor (Cd) in a very short period of time. This energy stored in the damping capacitor is expressed by Equation 4:
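Given the "150 µF at 54 V is approximately 220 mJ" figure that follows, this is the standard expression for the energy stored in a capacitor:

$$E = \frac{1}{2} C_d V_{IN}^2$$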

In our case, that energy is shared equally between the two 1 Ω resistors. A capacitance of 150 µF at 54 VIN is approximately 220 mJ total, or 110 mJ in each 1 Ω resistor. This is a slightly stringent assumption, as the internal ESR of Cd reduces the actual peak voltage across these resistors by about 4%.

Mapping the actual inrush surge to the curve in the surge rating graphs is not straightforward. The actual surge profile will be roughly a decaying exponential waveform, while the resistor ratings assume a fixed-duration constant power, as shown in Figure 3.

Figure 3 Example of surge-rated resistor ratings showing a roughly decaying exponential waveform.

A conservative approach would be to divide the total energy dissipated in the resistor by the peak power. You can then check this resulting pulse duration against the surge rating graph of the resistor. The calculated pulse will be more severe than the actual pulse, which is the same heating energy spread out over a greater time frame. For our case, in each resistor, 110 mJ divided by 2,900 W is 38 µs. A surge-rated resistor in the 2512 size (SG733A/W3A) can handle 4.5 kW for approximately 40 µs, which means that this package resistor is suitable for this application. General-purpose resistors in the same 2512 package have power ratings more than an order of magnitude lower than surge-rated resistors.
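A short script reproduces this conservative check using the values from the article; the peak-power line assumes the full 54-V step appears across each 1-Ω resistor.

# Conservative surge check for the damping resistors, using the article's
# values: 54 V maximum input, 150 uF damping capacitor, and two paralleled
# 1-ohm resistors sharing the stored energy equally.
V_IN = 54.0        # V
C_D = 150e-6       # F
R_EACH = 1.0       # ohm, each damping resistor

p_peak_each = V_IN**2 / R_EACH            # ~2.9 kW per resistor
e_each = 0.5 * C_D * V_IN**2 / 2          # ~109-110 mJ per resistor
t_equiv_us = e_each / p_peak_each * 1e6   # equivalent constant-power pulse

print(f"Peak power per resistor: {p_peak_each/1e3:.1f} kW")   # 2.9 kW
print(f"Energy per resistor:     {e_each*1e3:.0f} mJ")        # ~110 mJ
print(f"Equivalent pulse width:  {t_equiv_us:.0f} us")        # ~38 us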

This calculation does ignore the series inductance effect. An inductor will slow the rise of current into the resistor and reduce maximum power, but will also add total losses from overshoot, as shown in Figure 2. The simulation results including the 10 µH inductor show peak power in the resistor dropping by 30% from the 2.9 kW calculated power, but the total energy in the resistor is 17% higher than the 110 mJ calculated earlier. The rating curves show that the allowed energy follows the peak power ratio to the negative two-thirds power. Thus, a 30% reduction in peak power enables 27% more losses, and our calculations remain conservative both with and without series input inductance.

Avoiding failures from hot plugging

While the best automotive installation and maintenance practices will avoid hot plugging, the reality is that errors will occur. Following the procedures stated in this article will avoid costly damage to the system. As your partner in power management, TI is in constant pursuit of pushing the limits of power.

Hrag Kasparian, who joined Texas Instruments over 10 years ago, currently serves as a power applications engineer, designing custom DC-DC switch-mode power supplies. Previously, he worked on the development of battery packs, chargers, and electric vehicle (EV) battery management systems at a startup company in Silicon Valley. Hrag graduated from San Jose State University with a Bachelor of Science in electrical engineering.

Josh Mandelcorn has been at Texas Instrument’s Power Design Services team for almost two decades. He has designed high-current multiphase converters to power core and memory rails of processors handling large rapid load changes with stringent voltage under/overshoot requirements. He is listed as either an author or co-author on 17 US patents related to power conversion. He received a BSEE degree from Carnegie-Mellon University.

Related Content

Additional resources


The post Power Tips #126: Hot plugging DC/DC converters safely appeared first on EDN.

The PowerStation PSX3: A portable multifunction vehicular powerhouse with a beefy battery

Mon, 02/19/2024 - 17:38

As regular readers may already recollect, I’ve got two vehicles in outdoor storage, which (at minimum) I start once a year to reorder them and drive the one now in front to a nearby emissions testing center.

Stored-vehicle batteries inevitably drain, and their tires slowly-but-surely also deflate. Which is why the PowerStation PSX3 has long had a rarely-but-repeatedly used prized place in my gadget stable. I’ll start with some stock shots of the product:

As you can tell, it’s (among other things) a portable recharger and jump-starter of vehicles’ cells. It’s also a portable tire inflater. And it’s an emergency light and USB power source, too:

all of which makes it handy to have with me at all times in my Eurovan Camper, for example:

Here’s my unit in all its dusty, dirty glory:

Cables, etc. inside the “door”:

along with closeups on those stickers you saw in the overview shots:

Here’s the thing, though. If you visit the product page, you’ll find that the PowerStation PSX3 is no longer being sold. And after many years of occasional use, in combination with deep discharge cycles between uses, the embedded sealed lead-acid battery in mine had become effectively unusable; it’d take forever to charge, if I could get it to fully charge at all, and its ability to inflate tires and jumpstart vehicles was a shadow of its former self.

My first “bright idea” was to pick up one of those newfangled chargers you may have noticed often on sale at Amazon and elsewhere, which I’m assuming are all Li-ion battery-based (since NiMH cells wouldn’t deliver the necessary “punch”). For tire inflation purposes, I alternatively had a nifty adapter in the garage that leveraged my stock of Black+Decker 20 V MAX multi-purpose batteries:

It wasn’t as powerful as the PowerStation PSX3 had been in its prime, but I had a bunch of batteries and they’re easy to transport, so I figured a jumpstart-only device would suffice as a PowerStation PSX3 successor.

I tried three of these widgets, one claiming to deliver 1200 A of “peak” cranking juice:

Another spec’ing 1500 A:

And a third that promised to deliver 2000 A:

They all promptly went back to Amazon as full-refund returns. Now granted, if someone had left their interior dome light on too long and the battery was drained too low to successfully turn over the engine but still had some “life” one of these might suffice, which is why this combo jump-starter/tire inflater/USB charger/light still resides in the back of my wife’s SUV:

And I’ll grant them one other thing: they’re certainly small and light. But 2000 A of cranking current? Or even 1500 A? Mebbe for a fraction of a second, the time necessary to drain an intermediary capacitor, but not long enough to resurrect a significantly drained battery. Therefore, the quotes I put around the word “peak” earlier. Such products exemplify the well-worn saying, “mileage may vary”. Give me an old-school lead acid battery instead, any day!

At that point, I had another idea, which ended up being brighter. As I wrote about last summer, uninterruptible power supplies (UPSs) often have replaceable embedded batteries (unless the manufacturer has intentionally designed them otherwise, of course). Could the PowerStation PSX3, with user-accessible screws on its backside, be similar?

Yes, hope-inducing YouTube videos like this one reassured me, it could!

(I too hate throwing things out if it wasn’t already intuitively obvious)

At this point, I had maybe my brightest idea of all, if I do say so myself. In that earlier UPS writeup, I’d mentioned that I’d bought six replacement batteries for $49.99 total on sale (they’re now $119.99 for six as I write these words). They were purchased through Amazon but were shipped directly from the manufacturer, Mighty Max. The thing is, though, the first shipment delivered to me was not six smaller batteries but one much larger one.

The Mighty Max rep promptly apologized, sent me the correct ones, and told me to hold onto the first one in case I ever found a use for it. Hmmm…Where in the garage did I put that box?

And hey, it’s not only got the correct dimensions, but the terminals’ polarities match!

Additional included hardware, which I didn’t end up needing to use:

Cool, let’s remove those screws and crack this device open!

At this point, I need to beg for forgiveness from you, dear readers. Were this a proper full teardown, I wouldn’t stop at this point. But the objective here was not to fully dissect the product. It was instead to resurrect it to full health. So, squelching my own curiosity, not to mention yours in the process, I stopped here. That said, for example, you can clearly see the massive-percentage-of-total-volume motor that implements the air compressor function:

And here’s our patient on the other side:

The negative battery terminal was corroded, so I cleaned everything up in the process of disconnecting it:

The positive terminal was more pristine:

At this point, however, after wrestling the old battery out of its captivity:

I realized I had a problem. Here’s the final shot of the old cell:

And here’s another perspective on the new one:

See what’s different? The two batteries are the same size. And the terminals’ polarities do match. But the terminals’ exact locations are not the same. Force-fitting the negative terminal re-connection was fairly straightforward, since I just had to stretch a few wires already with sufficient slack. The positive terminal reconnection, on the other hand, was admittedly more of a MacGyver move (and I admittedly almost skipped on sharing this image with you, out of embarrassment and fear of your mockery…but hey, at least no duct tape was involved!):

But at the end of the day, I ended up with a good-as-new PowerStation PSX3. Huzzah!

Comments are as-always welcomed…just please be gentle about my MacGyver move…

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post The PowerStation PSX3: A portable multifunction vehicular powerhouse with a beefy battery appeared first on EDN.

Foundry PDK aims to train engineers on 2-nm process node

Mon, 02/19/2024 - 16:27

A new process design kit (PDK) from imec aims to provide broad access to a 2-nm gate-all-around (GAA) process node and associated backside connectivity for design pathfinding, system research, and training. This foundry PDK features the necessary infrastructure for digital design based on a set of digital standard cell libraries and SRAM IP macros.

The design PDK—enabling virtual digital designs in imec’s N2 chip manufacturing process technology—comes embedded with EDA tool suites from Cadence Design Systems and Synopsys. And it aims to train the semiconductor experts of tomorrow and enable the industry to transition to next-generation process technologies through meaningful design pathfinding.

Source: imec

Foundry PDKs—which provide chip designers access to a library of tested and proven components—are usually available once process technology reaches a critical level of manufacturability. And here comes the catch: there is restricted access and the need for non-disclosure agreements (NDAs). That, in turn, creates a high threshold for academia and industry to access advanced technology nodes like 2-nm during their development.

What imec’s N2 PDK is trying to do is provide young semiconductor engineers in academia and industry with early access to the infrastructure needed to develop design skills on advanced technology nodes such as 2 nm. “The design pathfinding PDK will help companies to transition their designs to future technology nodes and pre-empt scaling bottlenecks for their products,” said Julien Ryckaert, VP of Logic Technologies.

Next, the accompanying training courses will acquaint engineers with the most recent technology disruptions such as nanosheet devices and wafer backside technology. The training program, starting in the second quarter of 2024, will teach subscribers the specificities of the N2 technology node while offering hands-on training on digital design platforms using the Cadence and Synopsys EDA software.

Yoon Kim, VP of Cadence Academic Network, acknowledged that imec’s design pathfinding PDK represents a major milestone for training the next generation of silicon designers. “Imec used Cadence’s AI-driven digital and custom/analog full flows to create and validate the design pathfinding PDK.”

Likewise, Brandon Wang, VP of technical strategy & strategic partnerships at Synopsys, cited the pathfinding PDK as an example of how industry partnerships can broaden access to advanced process technology for the current and next generation of designers. “Our collaboration with imec to deliver a certified, AI-driven EDA digital design flow for its N2 PDK enables design teams to prototype and accelerate the transition to next-generation technologies using a virtual PDK-based design environment.”

According to imec, the design pathfinding PDK platform will extend to more advanced nodes like 1.4 nm.

Related Content


The post Foundry PDK aims to train engineers on 2-nm process node appeared first on EDN.

What does Renesas’ acquisition of PCB toolmaker Altium mean?

Fri, 02/16/2024 - 14:17

Renesas increasingly looks like the Cisco of the 1990s, when the router pioneer became an acquisition force and transformed itself into a networking giant within a decade. Renesas, the semiconductor industry’s serial acquirer, has announced plans to snap up PCB design toolmaker Altium nearly a month after announcing the purchase of gallium nitride (GaN) design house Transphorm.

Renesas has worked closely with Altium—headquartered in La Jolla, California, and listed in Australia—for the past couple of years to standardize its PCB design and evaluation boards on the cloud-based Altium 365 platform. While implementing Altium’s uniform PCB design tool, the Japanese chipmaker aimed to streamline its reference designs and product kits in order to reduce design complexity and speed time to market.

In other words, by employing Altium’s cloud-based PCB design platform, Renesas wanted to harmonize the development workflow around its 400-plus evaluation board designs. Eventually, Renesas figured that it needed to own a design platform that’s efficient and easier to use. Moreover, owning a design software platform could become a competitive advantage in its bid to become a solutions provider instead of merely a chip vendor.

So, Renesas, which has been at the forefront of dealmaking in past years, decided to buy Altium for $5.9 billion in a cash deal. “This acquisition is different from our past acquisitions in many ways,” said Renesas CEO Hidetoshi Shibata.

Figure 1 Renesas acquires PCB tool developer Altium after using its cloud-based solutions for nearly two years.

First and foremost, it will diversify Renesas’ offerings into the software realm. That, in turn, will help design engineers easily integrate semiconductors into complex electronic designs while streamlining the overall design process. It’s worth mentioning here that Altium rejected a $3.9 billion takeover bid from software company Autodesk as too low back in 2021.

Altium’s origin can be traced to a startup, Protel, which was founded in 1985 by a University of Tasmania staffer, Nick Martin, who wanted to develop software for reducing PCB design complexity. The company was eventually renamed Altium and relocated to California.

Figure 2 Altium provides tools for designing circuit boards.

While Renesas has been acquiring semiconductor companies for nearly a decade, this deal tells a different story. It’s about a chip hardware outfit buying a design software firm to bolster its credentials as a solutions provider and thus address pressures to lower design complexity and shorten time to market.

That makes sense because Renesas has been assimilating semiconductor products from a multitude of acquired suppliers: Intersil, IDT, Dialog, Sequans, and Transphorm. So, it’s crucial that design engineers using Renesas chips can organize design kits, libraries, and other components effectively.

All this happens mostly at the board level, and the Altium acquisition aims to facilitate that stage in the system design cycle. And Renesas has vetted these PCB design tools as a user for nearly two years.

Related Content


The post What does Renesas’ acquisition of PCB toolmaker Altium mean? appeared first on EDN.

IEEE 1588 grandmaster clock handles 25 Gbps

Fri, 02/16/2024 - 00:30

Microchip’s TimeProvider 4500 (TP4500) is an IEEE 1588 PTP grandmaster clock that furnishes high-speed network interfaces up to 25 Gbps. The hardware timekeeping platform not only offers 1-Gbps, 10-Gbps, and 25-Gbps Ethernet options, but also achieves timing accuracy below 1 ns.

TP4500 gives infrastructure operators a terrestrial alternative for distributing precise time that is not dependent on GNSS. Highly scalable, the platform serves thousands of PTP endpoints for customers deploying C-band gNodeBs. Hardware-assist enhancements, including the latest digital synthesis technology and PolarFire SoC FPGA, allow the system to deliver sub-ns timing accuracy.

Oscillator options for the TP4500 include OCXO, super OCXO, and rubidium. The unit also incorporates a 72-channel GNSS receiver with active thermal compensation. TimePictra synchronization management software allows operators to monitor and track real-time faults and threats with visibility across their entire network.

The TimeProvider 4500 is available now for purchase. Contact a Microchip sales representative or authorized distributor.

TimeProvider 4500 product page

Microchip Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post IEEE 1588 grandmaster clock handles 25 Gbps appeared first on EDN.

Platform simulates Wi-Fi 7 devices and traffic

Fri, 02/16/2024 - 00:29

The E7515W UXM wireless test platform from Keysight offers network emulation with Wi-Fi 7 signaling, along with RF and throughput testing of Wi-Fi 7 devices. It performs Wi-Fi-to-application and Wi-Fi-to-cellular internetworking testing of both Wi-Fi clients (STAs) and access points (APs) from 380 MHz to 7.125 GHz.

As a turnkey system, the E7515W simplifies Wi-Fi 7 testing and provides insights for both the physical (PHY) and media access control (MAC) layers. The system emulates hundreds of clients at once through traffic simulation without the need for additional equipment. According to Keysight, this capacity exceeds existing market solutions by threefold.

Analysis software for the E7515W provides PHY/MAC level information, such as rate versus range, as well as enhanced Rx sensitivity, Wi-Fi 6/6E/7 radio unit sweep analysis, and full-rate throughput. Based on the same hardware architecture as the E7515B UXM 5G test platform, the E7515W tests more complex devices with 5G and LTE capabilities. It also performs fixed wireless access (FWA) testing for customer premise equipment (CPE).

Request a price quote for the E7515W Wi-Fi 7 test system using the link to the product page below.

E7515W UXM product page

Keysight Technologies 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Platform simulates Wi-Fi 7 devices and traffic appeared first on EDN.

Fast SSD improves computer performance

Fri, 02/16/2024 - 00:29

Joining Samsung’s lineup of consumer SSDs, the 990 EVO delivers a sequential read speed of up to 5000 Mbytes/s, 43% faster than the 970 EVO Plus model. The company also reports that the 990 EVO offers up to a 70% improvement in power efficiency compared to its predecessor.

Available with storage capacities of 1 terabyte and 2 terabytes, the internal NVMe SSD enhances everyday computing experiences like gaming and video/photo editing. In addition to its fast sequential read rate, the drive’s sequential write speed reaches 4200 Mbytes/s. Random read and write operations also get a boost, with speeds of up to 700k and 800k input/output operations/s (IOPS), respectively.

Improved power efficiency allows battery-powered PCs to operate longer between charges. The 990 EVO supports Windows Modern Standby, which enables instant on/off with uninterrupted internet connectivity and seamless notification reception, even in low-power states. What’s more, the SSD’s heat spreader label effectively regulates the thermal condition of the NAND chip.

The 990 EVO supports both PCIe 4.0 x4 and PCIe 5.0 x2 interfaces. Samsung’s Magician software is a set of optimization tools to ensure the best SSD performance. It also streamlines the data migration process for SSD upgrades. Magician protects valuable data, monitors drive health, and provides notification of firmware updates.

The 990 EVO SSD will be available in Malaysia starting next month.

990 EVO product page

Samsung

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Fast SSD improves computer performance appeared first on EDN.

Low-power MCUs perform diverse tasks

Fri, 02/16/2024 - 00:29

Built with an Arm Cortex-M33 core, NXP’s MCX A14x and A15x series of general-purpose MCUs operate at 48 MHz and 96 MHz, respectively. The devices target a broad range of applications, including motor control, industrial sensing, smart metering, automation, and smart home devices.

Both series offer high levels of integration with scalable device options. Peripherals include timers that generate three complementary PWM pairs with deadband insertion and a 4-Msample/s, 12-bit ADC with hardware windowing and averaging. Along with UART, SPI, and I2C interfaces, the MCUs provide an I3C communication interface. I3C improves on the performance and power use of I2C, while maintaining backward compatibility for most devices.

MCX A microcontrollers employ a capless LDO power subsystem that operates from 1.7 V to 3.6 V. Devices consume 59 µA/MHz (3 V, 25°C) in active mode running CoreMark from internal flash. In deep-sleep mode, current consumption drops to 6.5 µA with a 10-µs wake-up time and full SRAM retention (3 V, 25°C). Deep power-down mode trims consumption to less than 400 nA with a 2.78-ms wake-up time.
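
To put those figures in perspective, here is a rough back-of-the-envelope sketch in Python; the 48-MHz clock, 1% duty cycle, and 220-mAh coin cell are illustrative assumptions of mine, not NXP numbers:

  # Average-current estimate from the published MCX A figures (59 uA/MHz active, 6.5 uA deep sleep).
  # Clock speed, duty cycle, and battery capacity are assumptions for illustration only.
  ACTIVE_UA_PER_MHZ = 59        # active-mode figure quoted above (3 V, 25 degrees C)
  DEEP_SLEEP_UA = 6.5           # deep-sleep figure quoted above

  clock_mhz = 48                # assumed: MCX A14x maximum clock
  duty_cycle = 0.01             # assumed: active 1% of the time
  battery_mah = 220             # assumed: CR2032-class coin cell

  avg_ua = duty_cycle * ACTIVE_UA_PER_MHZ * clock_mhz + (1 - duty_cycle) * DEEP_SLEEP_UA
  hours = battery_mah * 1000 / avg_ua
  print(f"~{avg_ua:.1f} uA average, ~{hours / 24:.0f} days on the assumed cell")   # roughly 35 uA, ~260 days

Under those assumptions, the average draw works out to roughly 35 µA, or on the order of nine months from a small coin cell.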

Packaging options for the MCX A parts include 32-pin QFN, 48-pin QFN, and 64-pin LQFP. MCUs are I/O- and pin-compatible across package types, simplifying migration and upgrades. MCX A14x/15x devices are available now through NXP’s distributor network.

MCX A14x/15x product page

NXP Semiconductors 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Low-power MCUs perform diverse tasks appeared first on EDN.

5G router streamlines industrial operations

Fri, 02/16/2024 - 00:28

Purpose-built for Industry 4.0 use cases, the Digi IX40 router provides global 5G and LTE connectivity with edge intelligence and real-time processing. The IoT cellular router allows enterprises to seamlessly connect multiple wired or wireless machines in demanding environments. It also optimizes the integration of cloud-delivered operational technology with information technology to enable network-wide visibility, monitoring, and control.

The IX40 provides edge intelligence by placing computing power in the device at the edge of the enterprise network, where the data is collected. Processing sensor data immediately, closer to where it is generated (instead of sending it to the cloud), reduces latency. The router’s built-in computing power and integrated memory enable rapid machine-to-machine communication and robust real-time data processing.

Other Digi IX40 features include:

  • FIPS 140-2 validation for encryption of sensitive data
  • Ethernet, SFP, serial, I/O, and Modbus bridging
  • Failover options like fiber and 4G LTE for redundancy
  • GNSS receiver supporting GPS, GLONASS, BeiDou, and Galileo
  • License-free enterprise software: VPN, firewall, logging, and authentication
  • Digi Remote Manager for mass configuration and management of remote assets

Typical applications for the Digi IX40 router include advanced robotics, predictive maintenance, asset monitoring, industrial automation, and smart manufacturing. FirstNet-capable models are available for critical applications that require emergency response.

For more information or to request a price quote, use the link to the product page below.

Digi IX40 product page

Digi International  

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post 5G router streamlines industrial operations appeared first on EDN.

Hotwire thermostat: Using fine copper wire as integrated sensor and heater for temperature control

Wed, 02/14/2024 - 15:07

Conventional thermostats are based on separate temperature sensor and heater devices, with means for feedback between them. But in some recent EDN design ideas (DIs), we’ve seen thermostat designs that meld the functions of sensor and heater into a single active device (usually a FET or BJT). The ploy can be a better fit for applications where the intended thermal load is physically small or has some other quirk of geometry that makes it inconvenient to apply the classic separate sensor/heater scheme. This DI (see the figure) follows the melded concept but takes it in a somewhat different direction by using fine-gauge copper wire (e.g., 40 AWG polyurethane-insulated) as an integrated temperature sensor and heater.

Here’s how it works.

Miniature thermostat utilizing the tempco and I²R heating of 40 AWG copper wire as a melded sensor/heater.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The resistance and temperature coefficient of a standard 40 AWG copper wire at 25°C are generally spec’d at 1.07 Ω/foot and +0.393%/°C, respectively. Therefore, L feet of 40 AWG wire can be expected to have an approximate resistance at a given temperature T of:

R(L,T) = 1.07·L·(1 + 0.00393·(T – 25))              (1)
R = 1.07·L + 0.00421·L·T – 0.00421·L·25             (2)
T = (R – 1.07·L + 0.00421·L·25) / (0.00421·L)       (3)
T = (R – 0.965·L) / (0.00421·L)                     (4)

Equation 4 holds well from R/L = 0.965 Ω/ft at 0°C up to 1.6 Ω/ft at 155°C (the recommended upper temperature limit for solderable polyurethane wire insulation).
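
To make the arithmetic concrete, here is a minimal Python sketch of Equations 1 and 4, using only the 1.07 Ω/ft and +0.393%/°C figures quoted above; the function names and the 10-foot, 100°C example are mine, purely for illustration:

  # Equations 1 and 4 for 40 AWG copper wire; length is in feet, temperature in degrees C.
  def wire_resistance(length_ft, temp_c):
      # Equation 1: resistance (ohms) of length_ft feet of 40 AWG copper at temp_c
      return 1.07 * length_ft * (1 + 0.00393 * (temp_c - 25))

  def wire_temperature(resistance_ohms, length_ft):
      # Equation 4: temperature (degrees C) implied by a measured wire resistance
      return (resistance_ohms - 0.965 * length_ft) / (0.00421 * length_ft)

  r = wire_resistance(10, 100)                              # a 10-foot sensor/heater at 100 degrees C: ~13.85 ohms
  print(round(r, 2), round(wire_temperature(r, 10), 1))     # 13.85, ~99.9

The small round-trip error (about 99.9°C instead of 100°C) simply reflects the rounded 0.965 and 0.00421 constants in Equation 4.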

Consider the implications for the use of fine copper wire as a combination temperature sensor and heater.

If a suitable length (between 5 and 15 feet) of wire is placed in a feedback loop driving current through it so as to dissipate enough I²R heating to raise and maintain a temperature that creates a preselected constant wire resistance, then said temperature, and the temperature of any thermal load thermally bonded to it, would likewise be constant! This is exactly what the circuit in the figure does.

Q1’s drain supplies heating current I to the sensor/heater wire (please ignore for a moment the minor contribution from start-up resistor R2). The voltage developed across the wire resistance R is then:

V = IR                         (5)

This causes the A1b, Q2 current source to output:

I2 = V/(R4 + R7) = IR/(R4 + R7)           (6)

This, in turn, induces a voltage at pin 2 of A1b:

V2 = I2(R5 + R6) = IR(R5 + R6)/(R4 + R7)           (7)

Meanwhile, Q1’s source current (also equal to I) flowing through sampling resistor R1 produces:

V3 = IR1                     (8)

FET control amplifier A1a forces the FET gate voltage, and thereby the R drive current, such that:

V2 = V3                                           (9)
IR(R5 + R6)/(R4 + R7) = R1I          (10)
R = R1(R4 + R7)/(R5 + R6)             (11)

Thus, heater current, and therefore wire resistance and temperature, are forced to equilibrium values set purely by the resistance ratios listed in Equation 11, with the resultant constant temperature given by Equation 4.
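
To see how Equations 4 and 11 combine numerically, here is a hedged Python sketch; the R1, R4 + R7, and R5 + R6 values below are placeholders I chose for illustration, not the values in the figure:

  # Equilibrium wire resistance (Equation 11) and the temperature it implies (Equation 4).
  # All resistor values below are hypothetical, NOT taken from the schematic.
  def setpoint_resistance(r1, r4_plus_r7, r5_plus_r6):
      # Equation 11: the wire resistance the feedback loop regulates to
      return r1 * r4_plus_r7 / r5_plus_r6

  def regulated_temperature(r_wire, length_ft):
      # Equation 4: the temperature corresponding to that resistance for L feet of 40 AWG wire
      return (r_wire - 0.965 * length_ft) / (0.00421 * length_ft)

  r_set = setpoint_resistance(r1=0.5, r4_plus_r7=2700, r5_plus_r6=100)   # 13.5 ohms with these placeholder values
  print(round(regulated_temperature(r_set, length_ft=10), 1))            # ~91.4 degrees C for 10 feet of wire

In other words, once the resistor ratio is fixed, the loop holds the wire (and whatever is thermally bonded to it) at a single temperature determined only by that ratio and the wire length.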

About Q3. The thermostat circuit is intended to be as flexible as possible with regard to wire gauge, length, and the associated sensor/heater resistance R. To accommodate R < 10 Ω and the consequent possibility of potentially damaging peak I values, Q3 removes Q1’s gate drive when necessary, limiting I to a safe ~1.4 A.

Setup and calibration. In further pursuit of flexibility in accommodating sensor/heater wire length and initial R, this simple calibration procedure is suggested for whenever the wire is replaced.

  1. Before first power up, allow sensor/heater to fully equilibrate to room temperature.
  2. Set R4 and R5 fully CCW.
  3. Push and hold the CAL NC pushbutton.
  4. Turn the power on.
  5. Slowly turn R4 clockwise until LED first flickers on.
  6. Release CAL.

Done. R5 is now “reasonably well” calibrated for a CCW-to-CW span of zero to 130°C above room temperature.

Thermal coupling of the chosen length of sensor/heater wire to the desired thermal load (e.g., thermostated circuit component, test tube, petri dish, etc.) can be done by winding a meander of wire around the load, and securing it with polyimide tape, RTV silicone, or a similar heat tolerant adhesive.

And about R2. Although not significant to the steady-state function of the circuit, without R2 the thermostat might be vulnerable to a failure to start when first switched on and might simply sit there looking stupid. Indefinitely. Don’t ask how I know this…

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

 Related Content


The post Hotwire thermostat: Using fine copper wire as integrated sensor and heater for temperature control appeared first on EDN.
