EDN Network

Subscribe to the EDN Network feed
Voice of the Engineer
Address: https://www.edn.com/
Updated: 4 hours 55 minutes ago

Wireless modules expand development options

Thu, 02/29/2024 - 20:47

Quectel has launched four new Wi-Fi and Bluetooth modules to provide designers with a greater array of options in terms of size, cost, and power efficiency. Joining the company’s portfolio of IoT modules are the FCU741R Wi-Fi 4 module and the FCS950R Wi-Fi 5/Bluetooth 4.2 combo module. The HCM010S Bluetooth LE 5.4 module and the HCM111Z Bluetooth LE 5.3 module also extend the IoT lineup.

The FCU741R Wi-Fi 4 module for wireless LAN connections operates at 2.4-GHz and 5-GHz frequencies to deliver a maximum data rate of 150 Mbps. It offers a USB 2.0 interface and operates over a temperature range of -20°C to +70°C.

The FCS950R Wi-Fi 5 and Bluetooth 4.2 module supports IEEE 802.11a/b/g/n/ac and achieves a maximum data rate of 433.3 Mbps in 802.11ac mode. It also furnishes an SDIO 3.0 interface and is just 12.0×12.0×2.35 mm.

Outfitted with an Arm Cortex-M33 processor, the HCM010S module supports both Bluetooth LE 5.4 and Bluetooth mesh networking. Built-in memory comprises 64 kbytes of SRAM and 768 kbytes of flash. Transmit power up to +20 dBm enables a longer transmission range.

Also based on an Arm Cortex-M33 processor, the HCM111Z Bluetooth LE 5.3 module offers a maximum data rate of 2 Mbps. It includes 48 kbytes of SRAM and 512 kbytes of flash memory, as well as 13 general-purpose I/Os and a built-in codec for microphone pickup and audio playback.

Quectel Wireless Solutions 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


Transceivers enable contactless USB2 connectivity

Thu, 02/29/2024 - 20:47

Two 60-GHz V-band transceivers, the ST60A3H0 and ST60A3H1 from ST, offer short-range cable-free connectivity at up to 480 Mbps. Operating in half-duplex mode, these compact devices enable embedded USB2 (eUSB2), I2C, SPI, UART, and GPIO RF tunneling.

The ST60A3H0 and ST60A3H1 can be used in personal electronics like digital cameras, wearables, portable hard drives, and small gaming terminals. They also afford data transfer in industrial applications, such as rotating machinery. As cost-effective cable replacements, the transceivers allow designers to create products with slim, aperture-free cases.

Self-discovery with instant mating eliminates pairing, while low power consumption preserves battery runtime. The parts consume 130 mW in eUSB Rx/Tx mode and 90 mW in UART, GPIO, and I2C modes. Shutdown mode reduces power consumption to just 23 µW.

Housed in VFBGA packages, the ST60A3H0 connects to an external antenna, while the ST60A3H1 has an integrated antenna. Samples of the transceivers are available now and cost $5. Detailed technical data, evaluation kits, and production pricing are subject to a non-disclosure agreement.

ST60A3H0 product page

ST60A3H1 product page

STMicroelectronics  

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


APEC 2024, Day 3: Daily Briefing Video

Thu, 02/29/2024 - 18:45

EDN editor-in-chief Majeed Ahmad and Power Electronics News editor-in-chief Maurizio Di Paolo Emilio discuss the highlights of day 3 of APEC 2024. One major topic was the move to greener infrastructure for automotive manufacturing and more efficient automotive subsystems such as powertrains. Wide-bandgap (WBG) semiconductors such as SiC and GaN will be critical in realizing higher efficiencies for these systems moving forward.

Majeed touched upon the rising popularity of GaN devices in applications beyond their established spaces of consumer electronics (e.g., USB chargers and AC adapters) and high-frequency (RF) devices, extending to use cases such as data center power supplies and EV systems. Many players have, in recent years, claimed that GaN can go beyond 650 V; however, the jury is still out on its viability, especially in large volumes. GaN power devices must also contend with finding a suitable substrate to enhance factors such as power density, voltage capability, thermal performance, wafer size, and long-term reliability. Substrates for GaN range from GaN-on-Si and GaN-on-SiC to more specialized GaN-on-GaN, GaN-on-sapphire, and GaN on ceramics such as QST, as accomplished by Vanguard International Semiconductor (VIS) in Taiwan.

SiC technology has been steadily maturing, with cost and wafer-availability issues appearing to ease. Many exhibitors displayed wafers up to 8″ along with test and measurement (T&M) systems for wafer testing. Innovations in simulation tools such as QSPICE continue to keep pace with advances in SiC technologies, offering engineers a free platform to rapidly evaluate designs. Finally, Maurizio covers the non-WBG technologies revealed, including a hydrogen fuel cell power system by Kohler Energy.


Photosensitivity: Seizures from displays

Thu, 02/29/2024 - 17:53

A couple of years ago, I described having witnessed someone undergo an epileptic seizure at the company where I was employed at the time. I tried to keep cool and collected while writing about that incident but, truth be told, it was jarring. Please read the story here.

I was idly browsing on my “smart” phone the other day when I came across an item about a then-upcoming Star Wars movie in which clever computer people had recreated the character Princess Leia as she would have been portrayed by the late Carrie Fisher. I was taken aback by an admonishing note on the link, but I grasped its justification as I watched the clip itself (Figure 1).

Figure 1 Film clip of the upcoming Star Wars movie with the warning “Contains flashing images”

Before one gets to watch the clip showing off the video technology that has been brought to bear, there is a warning about “flashing images”. When the film clip runs, the rapid flash-flash-flash for which Star Wars films are noted actually had a somewhat disorienting effect on yours truly, and I do NOT have any epileptic history.

The point of all this is that those of us whose work products involve display(s) of any kind need to be cognizant of the possible dangers that a flashing display might present to some users of the product(s).

Some of us will recall that this was one of the plot devices in the movie The Andromeda Strain (the 1971 film of the 1969 novel), in which a woman is driven by a flashing image into an epileptic attack.

From more than half a century ago right up to this very moment, this concern is for real and quite frankly, I am glad to see it having been addressed as shown in Figure 1.

John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).


APEC 2024, Day 2: Daily Briefing Video

Thu, 02/29/2024 - 17:35
 

During Day 2 of APEC 2024, Power Electronics News editor-in-chief Maurizio Di Paolo Emilio and EDN editor-in-chief Majeed Ahmad underscored the significance of silicon and silicon carbide technologies alongside passive components, gallium nitride advancements, and the promising outlook of fusion energy. ADI introduced a gate driver tailored for GaN FETs, while Infineon and Qorvo exhibited diverse, SiC-based solutions. SemiQ also made substantial investments in SiC, unveiling 1,200-V MOSFETs.


APEC 2024, Day 1: Daily Briefing Video

Wed, 02/28/2024 - 15:15


Welcome to the first day of the 2024 APEC conference, where global leaders converge to discuss pivotal topics shaping our technological landscape. Today, we delve into the field of semiconductor technology, exploring the transformative potential of wide-bandgap semiconductors and the dichotomy between wide-bandgap and non-wide-bandgap semiconductors. In this video, we analyze some key points from the Day 1 plenary session.


Lenovo’s Smart Clock 2 Charging Dock: Multiple lights and magnetic “locks”

Wed, 02/28/2024 - 14:56

Two months ago, EDN published my teardown of Lenovo’s Smart Clock 2:

I’d mentioned in it that my dissection victim, acquired at steep discount from the original MSRP, included the “optional charging dock for both it, a wireless-charging (with MagSafe support, to boot) smartphone or other device, and a USB-tethered device (the USB charging port moved from the back of the speaker itself to the dock in this second-generation design)”:

An upfront correction before proceeding: I realize in retrospect, upon re-reading, that my imprecise wording might have left you with the impression that the dock not only charged wireless- and USB-connected devices but also powered the Smart Clock 2 itself. Indeed, there’s an array of contacts on the underside of the Smart Clock 2:

which, as you’ll soon see, mate up to an array of spring-loaded pogo pins on the dock. However, as you may have already ascertained, given that the Smart Clock 2 comes with a wall wart:

which mates with a barrel plug connector on the back of the device:

the power flow actually goes from the Smart Clock 2 to the charging dock and from there to its USB and wireless charging facilities for other devices’ use. One other note on the latter point, by the way…since the wall wart’s DC output is only 18 W (12 V x 1.5 A), and since some of that power needs to be devoted to fueling the Smart Clock 2 itself along with whatever might be connected to the dock over USB, that explains (among other reasons) why Lenovo labels the wireless charging pad as “MagSafe-compatible”, not fully “Made for MagSafe”. Indeed, dive into the products’ tech-spec minutiae and you’ll find the following regarding the dock’s wireless charger:

  • 5 W
  • 7.5 W
  • 10 W
  • Fast-charging

Frankly, I was surprised to see that the peak wireless charging power goes that high; I’m guessing it’s only valid if the USB charging port isn’t in simultaneous use at the time.
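
For a rough sense of how those numbers might fit within the adapter's budget, here's a small arithmetic sketch. The allowances for the Smart Clock 2 itself, the USB load, and the conversion efficiency are my own assumptions for illustration, not Lenovo specifications.

```python
# Rough power-budget sketch for the Smart Clock 2 charging dock.
# All allocations below are assumptions for illustration, not Lenovo specs.

WALL_WART_W = 12.0 * 1.5          # 18 W DC input from the supplied adapter
CLOCK_ALLOWANCE_W = 5.0           # assumed draw of the Smart Clock 2 itself
USB_LOAD_W = 5.0                  # assumed 5 V / 1 A device on the USB port
CONVERSION_EFFICIENCY = 0.85      # assumed DC-DC plus coil efficiency

def available_wireless_power(usb_load_w: float) -> float:
    """Return the power (W) left for the wireless pad after other loads."""
    remaining = WALL_WART_W - CLOCK_ALLOWANCE_W - usb_load_w
    return max(0.0, remaining * CONVERSION_EFFICIENCY)

for usb in (0.0, USB_LOAD_W):
    print(f"USB load {usb:4.1f} W -> ~{available_wireless_power(usb):.1f} W "
          "available at the wireless pad")
```

Under these assumed numbers, the 10 W peak is plausible with the USB port idle, but a simultaneous USB load eats most of the headroom.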

In that earlier writeup, I also noted that “I bought mine brand new direct from Lenovo at the end of 2022 for only $29.99, complete with the docking station (which I’ll save for another teardown to come).” That time is now if you haven’t already figured it out ;-).

Our previous allusion to the charging dock, aside from the verbiage and pictures on the outside of the combined packaging:

was intentionally titillating: a brief glimpse of a white box:

underneath the more decorative box for the Smart Clock 2 itself (which was presumably intended to be optionally placed directly on retailer shelves for standalone sale):

Here’s a fuller view of the aforementioned box o’the bottom, as-usual accompanied by a United States penny (0.75 inches/19.05 mm in diameter) for size-comparison purposes:

Riveting presentation, eh? I’ll save you six closeups of various plain white box panels, instead substituting a sole closeup of the product sticker in the previous overview shot:

Yes, the label includes the FCC ID (O57SEA61UW). And yes, if you’re impatient to see what the charging dock looks like inside you could bypass my scintillating prose and jump right to the FCC’s internal photos. But where’s the fun in that? Are you trying to hurt my feelings? 😉

Ahem. Onward:

Here’s our first glimpse of our victim; its bottom side, to be precise:

The charging dock has dimensions of 0.93″ x 8.65″ x 3.26″ (23.66 mm x 219.65 mm x 82.77 mm). I couldn’t find a weight spec anywhere and didn’t think to weigh it myself until after it was already in pieces. Underneath it is nothing but more cardboard along with a literature sliver:

Here’s the dock again, still in its protective translucent sleeve:

First glimpse of the topside:

Finally freed from its plastic captivity:

The two oval inserts fit into matching insets on the underside of the Smart Clock 2, with the one handling power transfer obvious from the aforementioned pins-to-contacts cluster:

Let’s next look around back to get a different perspective on those pins:

Along with, refocusing slightly, that USB charging port:

Finally, flipping the dock back over (the front and sides are bland unless you’re into pictures of off-white plastic):

Let’s take a closer look at those markings and the sticker alongside them:

You probably also saw the two rubberized “feet”. If you’ve perused any of my teardowns before, you know that what’s often underneath them (screw heads, etc.) are prime candidates to get inside, therefore garnering my immediate attention. Habitual behavior rears its head again:

A-ha!

Keen-eyed readers may have already noticed that both feet left plastic film behind:

which thankfully was no match for my trusty Phillips screwdriver:

That said, I’m honestly not sure how much purpose the screws served, since after I sufficiently loosened them, I was left with two enclosure halves that still stubbornly clung together. Some additional attention along the sides from my spudger followed by a screwdriver (flat head this time), along with some patience, finally convinced them to separate, however:

In the process of wrestling the bottom panel away, I’d inadvertently also dislodged a previously unknown-to-me top-side insert, which I focused my attention on next:

And after removing four screws holding the metal plate in place (underneath which, as I suspect you’ve probably already guessed, is the wireless charging coil), I was able to lift it away:

See, there’s the coil (other examples of which we’ve seen before in teardowns past)!

Revealing, in the left-behind top half of the chassis, the “MagSafe-compatible” magnets:

Next step: separate the PCB from the insert. The first four screws to be removed were obvious to my eyes, but the PCB still wouldn’t budge…until I looked again more closely and saw #5 (not the first time I’ve overlooked a screw in a disassembly rush, and likely not the last, either):

Free at last!

Speaking of magnets, here’s another (bigger) one:

Revisiting my earlier Smart Clock 2 teardown, I realized I hadn’t mentioned a metal plate on the inside of its underside, focusing instead on the mini-PCB (such an electrical engineer, aren’t I?):

This magnet, perhaps obviously, proximity-clings to the plate, thereby helping keep the Smart Clock 2 connected to the dock below it.

Finally, the closeups of the “guts” that you’ve been waiting for. Note first the black-color ground strap wire connecting the metal plate to the PCB:

Flip it over and you can see the two thick wires connecting the PCB to the coil, along with two much thinner wires that run between the PCB and the temperature sensor at the coil’s center:

Now for the PCB itself. Here’s the side you’ve already seen plenty of, which points downward when the system is assembled:

Near the center, and toward the top, is a chip marked MT581 (along with a vertical line seemingly drawn by hand with a Sharpie?) from Maxic Technology, described as a “highly integrated, high-performance System on Chip (SoC) for magnetic induction based wireless power transmitter solutions”. It’s the functional equivalent of various ICs from STMicroelectronics that I’ve encountered in past wireless charger teardowns. Below and to its right is the CH552T, a USB microcontroller manufactured by Nanjing Qinheng Microelectronics. Unsurprisingly, it sits near the dock’s USB charging port. And in the upper right quadrant, to the right of the MT581, is a cluster of four small chips with identical markings:

RU3040
PR05078

whose function eludes my Google research (ideas, readers?). Flip the PCB over:

and the dominant feature that’ll likely catch your eye is a rectangular-ish outline near the periphery, composed of 18 small white pieces of what looks like plastic. At first, I thought they might find use in attaching the PCB to the underside of the insert, but more thoughtful analysis quickly dashed that theory. Turning the PCB sideways revealed their true purpose:

They’re LEDs, implementing the charging dock’s “nightlight” function. Duh on me!

That’s all I’ve got for today, folks, although I’ll as-usual hold onto the pieces o’hardware for a while, for potential assistance in answering any questions you might have on stuff I haven’t already covered. More generally, as always sound off with your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


Voltage inverter uses gate’s output pins as inputs and its ground pin as output

Tue, 02/27/2024 - 16:33

When analog circuits mix with digital, the former are sometimes dissatisfied with the latter’s usual single supply rail. This creates a need for additional, often negative polarity, voltage sources that are commonly provided by capacitive charge pumps.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The simplest type is the diode pump, consisting of just two diodes and two capacitors. But it has the inherent disadvantages of needing a separately sourced square wave to drive it and of producing an output voltage magnitude that’s at least two diode drops less than the supply rail. 

Active charge pump switches (typically CMOS FETs) are required to avoid that.

Many CMOS charge pump chips are available off the shelf. Examples include the multi-sourced ICL7660 and the Maxim MAX1673 pumps that serve well in applications where the current load isn’t too heavy. But they aren’t always particularly cheap (the 1673 for example is > $5 in singles) and besides, sometimes the designer just feels the call to roll their own. Illustrated here is an example of the peculiar outcomes that can happen when that temptation isn’t resisted.

The saga begins with Figure 1, showing a (vastly simplified) sketch of a CMOS logic inverter.

Figure 1 Simplified schema of typical basic CMOS gate I/O circuitry showing clamping diodes and complementary FET switch pair.

Notice first the input and output clamping diodes. These are included mainly to protect the chip from ESD damage, but a diode is a diode and can therefore perform other useful functions, too. Similarly, the P-channel FET was intended to connect the V+ rail to the output pin when outputting a logic ONE, and the N-channel FET to connect the V- rail to the pin for a ZERO. But CMOS FETs will willingly conduct current in either direction when ON. Thus, current running from pin to rail works just as well as from rail to pin.

Figure 2 shows how these basic CMOS facts relate to charge pumping and voltage inversion.

Figure 2 Simplified topology of logic gates comprising voltage inverter, showing driver device (U1), switch device (U2), and coupling (Cc), pump (Cp), and filter (Cf) capacitors.

Imagine two inverters interconnected as shown in Figure 2, with a square-wave control signal coupled directly to U1’s input and, through DC-blocking cap Cc, to U2’s input, with U2’s input clamps providing DC restoration.

Consider the ZERO state half cycle of the square wave. Both U1 and U2 P-channel FETs will turn on, connecting the U1 end of Cp to V+ and the U2 end to ground. This will charge Cp with its U1 terminal at V+ and its U2 end at ground. Note the reversed polarity of current flow into U2’s output pin due to Cp driving the pin positive and from there to ground through U2’s P FET and positive rail pin.

Then consider what happens when the control signal reverses to the ONE state.

Now the P FETs will turn OFF while the N FETs turn ON. This forces the charge previously accepted by Cp to be dumped to ground through U1, with its complement drawn from U2’s V- pin, thus completing a charge-pumping cycle that delivers a quantum of negative charge:

Q- = -(CpV+ + Cf V–)

to be deposited on Cf. Note that reversed current flow through U2 occurs again. This cycle will repeat with the next reversal of the control signal, and so on, etc., etc.

During startup, until sufficient voltage accumulates on Cf for normal operation of internal gate circuitry and FET gate drive, U2 clamp diodes serve to rectify the Cp drive signal and charge Cf.
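
To make the pumping action concrete, here's a minimal, idealized simulation of the Figure 2 topology: each clock cycle, Cp is charged to V+ and then shares its charge with filter cap Cf at the negative output. The capacitor values are arbitrary assumptions, and switch resistance, diode drops, and loading are all ignored, so this is a sketch of the principle rather than of the actual circuit.

```python
# Idealized charge-pump sketch: each cycle, Cp charges to V+ and then its
# charge is transferred onto the negative-rail filter cap Cf. Component
# values are illustrative assumptions; switch losses and loading are ignored.

V_PLUS = 5.0     # supply rail, volts
CP = 100e-9      # pump capacitor (assumed 100 nF)
CF = 10e-6       # filter capacitor (assumed 10 uF)
CYCLES = 2000    # number of 100 kHz pump cycles to simulate

v_out = 0.0      # voltage on Cf; should settle toward -V_PLUS
for _ in range(CYCLES):
    # Phase 1: Cp charges to V+ (U1 end at V+, U2 end held at ground).
    q_cp = CP * V_PLUS
    # Phase 2: Cp's charge is shared with Cf at the V- node, pulling it negative.
    q_cf = CF * v_out
    v_out = (q_cf - q_cp) / (CP + CF)

print(f"Unloaded output after {CYCLES} cycles: {v_out:.3f} V")  # approaches -5 V
```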

That’s the theory. Translation of Figure 2 into practice as a complete voltage inverter is shown in Figure 3. It’s really not as complicated as it looks.

Figure 3 Complete voltage inverter: 100 kHz pump clock (set by R1C1), Schmitt trigger and driver (U1), and commutator (U2).

A 100 kHz pump clock is output on pin 2 of 74AC14 Schmitt trigger U1. This signal is routed to the five remaining gates of U1 and the six gates of U2 (via coupling cap C2). Negative charge transfer occurs through C3 into U2 and accumulates on filter cap C5.

Even though the Schmidt hysteresis feature isn’t really needed for U2, the same type is used for both chips to improve efficiency-promoting synchronicity of charge-pump switching.

Some performance specs (V+ = 5 V), sanity-checked in the sketch after this list:

  • Impedance of V- output: 8.5 Ω
  • Maximum continuous load: 50 mA
  • Efficiency at 50 mA load: 92%
  • Efficiency at 25 mA load: 95%
  • Unloaded power consumption: 440 µW
  • Startup time < 1 millisecond
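
Those figures hang together: a quick back-of-the-envelope check (my own arithmetic, not part of the original design write-up) using the stated 8.5 Ω output impedance lands close to the quoted efficiencies once quiescent and switching losses are ignored.

```python
# Back-of-the-envelope check of the inverter's efficiency from its stated
# 8.5-ohm output impedance. Quiescent and switching losses are ignored,
# which is why the results come out slightly optimistic.

V_PLUS = 5.0     # supply, volts
R_OUT = 8.5      # stated output impedance, ohms

def efficiency(i_load: float) -> float:
    v_out = V_PLUS - i_load * R_OUT    # magnitude of V- under load
    p_out = v_out * i_load             # power delivered to the load
    p_in = V_PLUS * i_load             # charge pumped from V+ mirrors the load current
    return p_out / p_in

for i in (0.025, 0.050):
    print(f"{i*1000:.0f} mA load: |V-| ≈ {V_PLUS - i*R_OUT:.2f} V, "
          f"efficiency ≈ {efficiency(i)*100:.0f}%")
```

The sketch predicts roughly 96% at 25 mA and 92% at 50 mA, in line with the measured 95% and 92%.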

But finally, is there a cost advantage to rolling your own? Well, in singles, the 1673 is $5, the 7660 about $2, but two 74AC14s can be had for only a buck. The cost of passive components is similar, but this DI circuit has more solder joints and occupies more board area. So, the bottom line…??

But at least using outputs as inputs and ground as an output was fun.

And an afterthought: For higher voltage operation, simply drop in CD40106B metal-gate chips for the 74AC14s; then, with no other changes, V+ and V- can be as high as 20 V.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.


Computer upgrades: Motivations, hiccups, outcomes, and learnings

Mon, 02/26/2024 - 16:30

I habitually, admittedly, hold onto computers far longer than I should, in the spirit of “if it ain’t broke, don’t fix it” (not to mention “a penny saved is a penny earned”). What I repeatedly forget, in the midst of this ongoing grasping, is that while the computer I’m clinging to might originally have been speedy, sizeable and otherwise “enough” for my needs, the passage of time inevitably diminishes its capabilities. Some of this decline is the result of the inevitable “cruft” it accumulates as I install and then upgrade and/or uninstall applications and their foundation operating systems, as well as the data files I create using them (such as the Word file I’m typing into now). I also fiscally-conveniently overlook, for example, that newer operating system and application revisions make ever-increasing demands on the computer hardware.

Usually, what compels me to finally make the “leap of faith” to something new is some variant of utter desperation: either the existing hardware has been (or will soon be) dropped from the software support list or a software update has introduced a bug that the developer has decided not to fix. Today’s two case studies reflect both of these scenarios, and although the paths to the replacement systems were bumpy, the outcomes were worth the effort (not to mention everything I learned along the way). So much so, in fact, that I’ve got another upgrade queued for the upcoming Christmas holiday next week (as I write these words in mid-December 2023). Wonder how long I’ll wait to update next time?

The 2020 (Intel-based) 13” Apple Retina MacBook Pro (RMBP)

This one had actually been sitting on my shelf for more than a year, awaiting its turn in my active-computer rotation, ever since I saw it on sale brand new and open-box discounted at Small Dog Electronics’ website for $1,279.99. When I found out that this particular unit also came with AppleCare+ extended warranty coverage good through mid-May 2025, thereby representing a nearly $1,000 discount from the new-from-Apple total price tag, I pulled the purchase trigger.

It represents the very last iteration of Intel-based laptops from Apple, introduced in May 2020. Why x86, versus Apple Silicon-based? I went for it due in part to its ability to run not only MacOS but also Windows, either virtualized or natively, although I also have a 13” M1 MacBook Air (also open-box, also from Small Dog Electronics, and with similar RAM and SSD capacities: keep reading) in queued inventory for whenever Apple decides to drop x86 support completely.

This high-end RMBP variant, based on a 2.3 GHz quad-core Intel Core i7 “Ice Lake” CPU, includes four Thunderbolt 3 ports, two on either side, versus the two left-side-only configurations of lower-end models. It also integrates 16 GBytes of RAM and a 512 GByte SSD. Unlike its 2016-2019 “butterfly” keyboard precursors, it returns to the reliable legacy “scissors” keyboard (this actually was key—bad pun intended—for me) that Apple amusingly rebranded as the “Magic Keyboard”. Above the keyboard are the Touch ID authentication sensor alongside the nifty (at least to me), now-deprecated Touch Bar. And thankfully, Bluetooth audio support in MacOS 12 “Monterey” for Zoom and other online meeting and webinar apps now works again.

Normally, I’d restore a Time Machine backup, originating from the old machine, to the new one to get me going with the initial setup. But at the time, I was more than 1,000 miles away from my NAS, at my relatives’ house for the Thanksgiving holidays. Migration Assistant was a conceptual alternative, although from what I’ve heard it’s sometimes more trouble than it’s worth. Instead, particularly with my earlier “cruft” comment in mind, I decided to just start from scratch with software reinstalls. That said, I still leveraged a portable drive along with my relatives’ Wi-Fi to copy relevant data files from the old to new machine.

The process was slow and tedious, but the outcome was a solid success. I can still hear the new system’s fan fire up sometimes (a friend with an Apple Silicon system mocks me mercilessly for this), but the new machine’s notably faster than its predecessor. Firefox, for example, thankfully is much snappier than it was before. And speaking of Mozilla applications, I was able to migrate both my Firefox and Thunderbird profiles over intact and glitch-free; the most I ended up having to do was to manually disable and re-enable my browser extensions to get them working again, along with renaming my device name in the new computer’s browser settings for account sync purposes. Oh, and since the new system’s not port-diversity-adorned like its precursor, I also had to assemble a baggie of USB-C “dongles” for USB-A, HDMI, SD cards, wired Ethernet…sigh.

The Microsoft Surface Pro 7+ (SP7+) for business

This next one shouldn’t be surprising to regular readers, as I telegraphed my intentions back in early November. The question you may have, however, is why I tackled the succession now. For the earlier-discussed MacBook Pro, the transition timing is more understandable, as its early-2015 predecessor will fall off Apple’s O/S-supported hardware list in less than a year. Its performance slowdowns were becoming too noticeable to ignore. And the Bluetooth audio issues I started having after its most recent major O/S upgrade were the icing on the cake.

The Surface Pro 5 (SP5), on the other hand, runs Windows 10, for which Microsoft has promised full support until at least mid-October 2025, longer if you pay up. Its overheating-induced clock throttling was annoying but didn’t occur that often. And although its RAM and SSD capacity limitations were constraining, I could still work around them. Part of the answer, frankly, ties back to how smoothly the RMBP replacement had gone; it tempted me to tackle the SP7+ upgrade sooner than I otherwise would. And another part of the answer is that I wanted to be able to donate both legacy systems to charity while they were still supported and more generally could still be useful to someone else with less demanding use cases. Specifically, I hoped to wrap up both upgrades in time to get the precursor computers to EChO for pass-along in time for them to get wrapped up by their recipients as Christmas presents for others.

Once again, I did “clean” installs of my suite of applications to the SP7+. This strategy, versus an attempted “clone” of the old system’s mass storage contents, was even more necessary in this case because the two computers ran different operating systems (Windows 10 Pro vs Windows 11 Pro). And again, the process was slow but ultimately successful. That said, the overall transition was more complicated this time, due to what I tackled before the installs even started. As I’d mentioned back in November, one of the particularly appealing attributes of the SP7+ (and SP8, for that matter) versus the SP5 is that their SSDs (like that in my Surface Pro X) are user-accessible and -replaceable. What I did first, therefore, after updating Windows 11 and the driver suite to most current versions, was to clone the existing drive image in the new system to a larger-capacity replacement, initially installed in an external enclosure.

Here’s the 256 GByte m.2 2230 SSD that the system came with, complete with its surrounding heatsink, post-clone and removal:

And here’s the 1 TByte replacement, Samsung’s PM991a (PCIe 3.0-based, to allay any excess-energy consumption concerns):

before cloning the disk image to it and installing it (absent a heatsink or thermal tape, but it still seemingly works fine) in place of its precursor:

As you can probably tell from the sticker on one side, it wasn’t new-as-advertised. But it had been only lightly used (and the bulk of that was from me, doing multiple full- and quick-format cycles on it for both initial testing and failed-clone-recoveries) so I kept it:

First step, the clone. I’d thought this might be complicated a bit by the fact that since the system was running the Pro version of Windows 11, (potentially performance-sapping) BitLocker drive encryption was enabled by default. Fortunately, however, my cloning utility of choice (Macrium Reflect Free, which I’ve long recommended) was able to handle the clone as-is just fine, even on a booted O/S with an active partition, although it warned me afterwards that the image on the SSD containing the clone would be unencrypted. Fast forwarding to the future for a moment, I made sure to archive a copy of the existing SSD’s encryption key before doing the swap, in case I ever needed to use it again. The new SSD came up auto-re-encrypted by Windows on first boot, I didn’t need to re-activate the O/S, and I archived its BitLocker key, too, for good measure.

The other—hardware—aspect of the clone was more problematic. Here’s the enclosure that I used to temporarily house the new SSD, Orico’s TCM2-C3, which I bought back in February 2020 and have been using trouble-free for a variety of external-tether purposes ever since:

This time was different. I initially tried tethering the new SSD-inclusive enclosure to the SP7+ via the USB-C to USB-C cable that came with the enclosure, but shortly after each cloning operation attempt started, I’d get an obscure “Error Code 121 – The semaphore timeout period has expired” abort message from Macrium Reflect. Attempts to reformat the SSD before trying the clone again were also inconsistent, sometimes succeeding, other times not due to spontaneous disconnects. Eventually, I got everything to work by instead using the slower but more reliable USB-A to USB-C cable that also came with the enclosure. Is my USB-C to USB-C cable going bad? Or is something amiss with the USB-C transceiver in the system or the enclosure? Dunno.

Once I booted up the computer with the new SSD inside, I ran into two other issues. The first was that the initial O/S partition, which had been hidden on the original SSD, was now visible and had been assigned the C: drive letter. A dive into Windows’ Disk Management utility got this glitch sorted out.

The other quirk, which I’d encountered before, was that the new SSD still self-reported as 256 GBytes in size, the same capacity as its predecessor. Disk Management showed me the sizeable unused partition on the new SSD, which I’d normally be able to expand the main O/S partition into. In this particular case it wasn’t able to do so, though, because the two partitions were non-contiguous; in between them was a 650-Mbyte hidden Windows Recovery partition. I could have just deleted that one, although it would have complicated any subsequent if-needed recovery attempt. Instead, I used another slick (and gratis) utility, MiniTool’s Partition Wizard, to relocate the recovery partition to the “end”, thereby enabling successful O/S partition expansion:

And as hoped-for, the SP7+ is fully compatible with my full suite of existing SP5 accessories:

What’s next?

Requoting what I said upfront in this piece:

I’ve got another upgrade queued for the upcoming Christmas holiday.

It’s my “late 2014” Mac mini, which I’d transitioned to fairly recently, in mid-2021, for similar obsolescence reasons.

Like the early 2015 13” RMBP, it’s not scheduled to exit O/S support until mid-to-late 2024, but it’s becoming even more performance-archaic (due in part to its HDD-centric Fusion Drive configuration). Its replacement will be a 2018 Mac mini, also x86-based, whose specific configuration is “interesting” (I got a great deal on it, explaining why I went with it): a high-end 3.2 GHz Intel Core i7 CPU, coupled with 32 GBytes of RAM but only a 128 GByte SSD (which I plan to augment via external storage). Stand by for more details to come in a future post. And until then, I’m standing by for your thoughts on this piece in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


An MCU test chip embeds 10.8 Mbit STT-MRAM memory

Fri, 02/23/2024 - 14:51

A prototype MCU test chip with a 10.8 Mbit magnetoresistive random-access memory (MRAM) memory cell array—fabricated on a 22-nm embedded MRAM process—claims to accomplish a random read access frequency of over 200 MHz and a write throughput of 10.4 MB/s at a maximum junction temperature of 125°C.

Renesas, which developed circuit technologies for this embedded spin-transfer torque MRAM (STT-MRAM) test chip, presented details about it on February 20 at the International Solid-State Circuits Conference 2024 (ISSCC 2024) held on 18-22 February in San Francisco. The Japanese chipmaker has designed this embedded MRAM macro to bolster read access and write throughput for high-performance MCUs.

Figure 1 The MCU test chip incorporates a 10.8-Mbit embedded MRAM memory cell array. Source: Renesas

Microcontrollers in endpoint devices are expected to deliver higher performance than ever, especially in Internet of Things (IoT) and artificial intelligence (AI) applications. Here, the CPU clock frequencies of high-performance MCUs are in the hundreds of MHz, and to achieve greater performance, read speeds of embedded non-volatile memory need to be increased to minimize the gap between them and CPU clock frequencies.

However, MRAM has a smaller read margin than the flash memory used in conventional MCUs, which makes high-speed read operation more difficult. At the same time, MRAM is faster than flash memory for write performance because it requires no erase operation before performing write operations. That’s why shortening write times is desirable not only for everyday use but also for cost reduction of writing test patterns in test processes and writing control codes by end-product manufacturers.

Renesas has developed circuit technologies for an embedded STT-MRAM test chip with fast read and write operations to address this design conundrum.

Faster read and write

First, take MRAM reading, which is generally performed by a differential amplifier or sense amplifier to determine whether the memory cell current or the reference current is larger. But because the difference in memory cell currents between the 0 and 1 states—the read window—is smaller for MRAM than for flash memory, the reference current must be precisely positioned in the center of the read window for faster reading.

So, Renesas introduces two mechanisms to achieve faster read speed. First, it aligns the reference current in the center of the window according to the actual current distribution of the memory cells for each chip measured during the test process. Second, it reduces the offset of the sense amplifier.
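
As a conceptual illustration of the first mechanism—not Renesas's actual trimming procedure—the reference can be placed midway between the worst-case cell currents measured for the two states on a given chip. All numbers below are invented purely for illustration.

```python
# Conceptual sketch of centering an MRAM read-reference current between the
# measured per-chip cell-current distributions. This illustrates the idea
# only; it is not Renesas's trimming procedure, and the values are invented.

import random

random.seed(0)
N_CELLS = 4096

# Assumed cell-current distributions (microamps) for the two stored states.
i_state1 = [random.gauss(35.0, 2.0) for _ in range(N_CELLS)]  # higher-current state
i_state0 = [random.gauss(15.0, 2.0) for _ in range(N_CELLS)]  # lower-current state

# Place the reference in the middle of the read window measured on this chip.
window_low = max(i_state0)      # worst-case low-state cell
window_high = min(i_state1)     # worst-case high-state cell
i_ref = (window_low + window_high) / 2.0

margin = (window_high - window_low) / 2.0
print(f"Read window: {window_low:.1f} µA .. {window_high:.1f} µA")
print(f"Centered reference: {i_ref:.1f} µA (margin ±{margin:.1f} µA)")
```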

Another challenge that Renesas engineers have overcome relates to conventional configurations, where large parasitic capacitance in the circuits used to control the bitline voltage keeps it from rising too high during read operations but slows the reading process. Renesas has introduced a cascode connection scheme to reduce this parasitic capacitance and speed up reading, allowing random read operation at frequencies above 200 MHz.

Next, for write operation, it’s worth mentioning that Renesas announced in December 2021 that it had improved write throughput by applying the write voltage simultaneously to all bits in a write unit, using a relatively low write voltage generated from the external voltage (I/O power) of the MCU through a step-down circuit. It then used a higher write voltage only for the remaining few bits that could not be written.

Figure 2 In late 2021, Renesas announced an increase in the write speed of an STT-MRAM test chip manufactured on a 16-nm node.

Now, because the power supply conditions used in test processes and by end-product manufacturers are stable, Renesas has relaxed the lower voltage limit of the external voltage. As a result, by setting a higher step-down voltage from the external voltage to be applied to all bits in the first phase, write throughput can be improved 1.8-fold. A faster write speed will contribute to more efficient code writing in endpoint devices.
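
One simplified way to picture the two-phase write flow described above (a sketch of the idea, not Renesas's implementation): write every bit of the unit in parallel at the lower voltage, then retry only the failing bits at the higher voltage. The per-bit success probabilities below are made up solely to show the flow.

```python
# Simplified sketch of the two-phase MRAM write scheme described above:
# phase 1 writes every bit of the unit in parallel at a lower voltage,
# phase 2 retries only the stragglers at a higher voltage. The probability
# model is invented purely to illustrate the flow.

import random

random.seed(1)

def write_bit(success_probability: float) -> bool:
    """Model a single-bit write attempt; True means the bit flipped correctly."""
    return random.random() < success_probability

def write_unit(n_bits: int = 256) -> int:
    """Return how many bits needed the slower high-voltage retry."""
    # Phase 1: low-voltage parallel write (assumed 98% per-bit success).
    failed = [b for b in range(n_bits) if not write_bit(0.98)]
    # Phase 2: high-voltage retry for the few remaining bits (assumed ~100%).
    for b in failed:
        write_bit(0.9999)
    return len(failed)

retries = write_unit()
print(f"{retries} of 256 bits needed the high-voltage retry phase")
```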

Test chip evaluation

The prototype MCU test chip combines the above two enhancements to offer a 10.8 Mbit MRAM memory cell array fabricated using a 22-nm embedded process. The evaluation of the prototype chip validated that it achieved a random read access frequency of over 200 MHz and a write throughput of 10.4 MB/s.

The MCU test chip also contains 0.3 Mbit of one-time programmable (OTP) memory that uses MRAM cell breakdown to prevent falsification of data. That makes it capable of storing security information. However, writing to OTP requires a higher voltage than writing to MRAM, which makes it more difficult to perform writing in the field, where power supply voltages are often less stable. Here, Renesas suppressed parasitic resistance within the memory cell array, which, in turn, makes writing in the field possible.

Renesas has vowed to further increase the capacity, speed, and power efficiency of MRAM.


BLDC motor driver prolongs battery life

Thu, 02/22/2024 - 20:58

A three-phase BLDC motor driver, the AOZ32063MQV from AOS, offers an input voltage range of 5 V to 60 V and 100% duty cycle operation. The IC enables efficient motor operation, while its low standby power helps extend the battery life of cordless power tools and e-bikes.

The AOZ32063MQV drives three half-bridges consisting of six N-channel power MOSFETs for three-phase applications. It has a high-side sink current of 1 A and a maximum source current of 0.8 A. A power-saving sleep mode lowers current consumption to just 1 µA.

Along with an integrated bootstrap diode, the driver provides adjustable dead-time control and a fault indication output. Onboard protection functions include input undervoltage, short-circuit, overcurrent, and thermal shutdown. The device operates over a temperature range of -40°C to +125°C.

Housed in a 4×4-mm QFN-28L package, the AOZ32063MQV costs $1.55 in lots of 1000 units. It is available in production quantities, with a lead time of 24 weeks.

AOZ32063MQV product page

Alpha & Omega Semiconductor 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


100-V MLCC is among the industry’s smallest

Thu, 02/22/2024 - 20:58

Murata expands its GJM022 series of high-Q multilayer ceramic capacitors (MLCCs) with a 100-V device that is just 0.4×0.2 mm (L×W). The MLCC is intended for high-frequency module applications, such as those used in cellular communication infrastructure.

Exhibiting high-Q, low-loss performance, the miniature capacitor enables electronic engineers to overcome packaging limitations, while maintaining optimal performance. A high-temperature guarantee also gives designers greater positioning freedom. The MLCC helps ensure reliable long-term operation, even in close proximity to power semiconductors that radiate heat.

The GJM022 can be used for a wide variety of applications, including impedance matching and DC cutting within RF modules for base stations. In such implementations, the capacitor’s high-Q value and low equivalent series resistance (ESR) contribute to improving power amplifier efficiency and lowering power consumption.

Engineering samples of the GJM022 100-V chip capacitor are available in limited production. The product will move to fully stocked production in the next several weeks. A datasheet for the device was not available at the time of this announcement. For information on the GJM series of high-Q MLCCs, click the product page link below.

GJM series product page  

Murata

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


Development kit pairs RISC-V and FPGA

Thu, 02/22/2024 - 20:58

The PolarFire SoC Discovery Kit from Microchip makes RISC-V and FPGA design accessible to a wider range of embedded engineers. This low-cost development platform allows students, beginners, and seasoned engineers alike to leverage RISC-V and FPGA technologies for creating their designs.

The Discovery Kit is built around the PolarFire MPFS095T SoC FPGA, which embeds a quad-core RISC-V processor that supports Linux and real-time applications. It also packs 95,000 FPGA logic elements. A large L2 memory subsystem can be configured for performance or deterministic operation and supports an asymmetric multiprocessing mode.

An embedded FP5 programmer is included for FPGA fabric programming, debugging, and firmware development. The development board also provides a MikroBUS expansion header for Click boards and a 40-pin Raspberry Pi connector, as well as a MIPI video connector. Expansion boards are controlled using protocols like I2C and SPI. 

The PolarFire SoC Discovery Kit costs $132 for the general public and $99 when purchased through Microchip’s Academic Program. Production kit shipments are expected to commence mid-April 2024.

PolarFire SoC Discovery Kit product page

Microchip Technology

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


SP4T switches offer high isolation up to 8.5 GHz

Thu, 02/22/2024 - 20:57

pSemi announced production readiness of two UltraCMOS SP4T RF switches that operate from 10 MHz to 8.5 GHz with high channel isolation. According to the manufacturer, the PE42445 and PE42446 switches integrate seamlessly into 4G and 5G base stations and massive MIMO architectures. They can provide digital pre-distortion feedback loops and transmitter monitoring signal paths to prevent interference and maintain signal integrity.

 

Both the PE42445 and PE42446 offer >60 dB isolation at 4 GHz and operate over an extended temperature range of -40°C to +125°C. Additionally, the devices provide low insertion loss across the band, high linearity of 65 dBm IIP3, and a fast switching time of 200 ns. The SP4T switches are manufactured on the company’s UltraCMOS process, a variation of silicon-on-insulator technology.

The PE42445 comes in a 3×3-mm, 20-lead LGA package, while the PE42446 is housed in a 4×4-mm, 24-lead LGA package. Sales inquiries can be submitted using the product page links below.

PE42445 product page

PE42446 product page 

pSemi

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


MRAM macro speeds read/write operations

Thu, 02/22/2024 - 20:57

Renesas presented an embedded MRAM macro in an MCU test chip at ISSCC 2024 that delivers a random-read access frequency of over 200 MHz. The test chip also exhibited a write throughput of 10.4 Mbytes/s.

The company developed two high-speed circuit technologies to achieve faster read and write operations in spin-transfer torque magnetoresistive RAM (STT-MRAM). A prototype MCU test chip, fabricated using a 22-nm process, combined the two technologies with a 10.8-Mbit MRAM memory cell array. Evaluation of the prototype chip confirmed the high-speed results at a maximum junction temperature of 125°C.

Advancements in read technology have enabled Renesas to achieve what it claims is the world’s fastest random read access time of 4.2 ns. Even taking into consideration the setup time of the interface circuit that receives the MRAM output data, the company was able to realize random read operation at frequencies in excess of 200 MHz. Further, improved write technology can improve MRAM write throughput 1.8-fold.

For greater detail, read the complete press release here.

Renesas Electronics 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


Logic probe has a wide voltage range

Thu, 02/22/2024 - 16:53

The logic probe is powered from the device under test (DUT), which may be any binary logic powered in the range of +2 V to +6 V. This may be a microcontroller or 74/54-series logic chips, including HC/HCT devices.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The probe determines 3 conditions: 

  • Logical 0
  • Logical 1
  • Undefined (this may be a high-Z condition or a bad contact).

It also features a counter, which is very handy when you want to count pulses, estimate a frequency, or test an interface. (This part is shown only as a sketch.)

The probe in Figure 1 consists of two Schmitt triggers: the upper trigger in the figure determines a logical 0, and the lower trigger determines a logical 1.

Figure 1 The logic probe with two Schmitt triggers where the upper determines logical 0 and the lower determines logical 1.
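
To visualize what the two triggers are doing, here's a toy model that classifies an input voltage against assumed threshold fractions of the supply; the fractions are rough, generic values for a 74HC-class Schmitt input, not the trip points of this particular design.

```python
# Toy model of the probe's three-way decision. The threshold fractions are
# rough, assumed values for a 74HC-class Schmitt input, not the design's
# actual trip points.

LOW_FRACTION = 0.3    # below this fraction of VCC, call it logical 0
HIGH_FRACTION = 0.7   # above this fraction of VCC, call it logical 1

def classify(v_in: float, vcc: float) -> str:
    if v_in <= LOW_FRACTION * vcc:
        return "logical 0 (blue LED)"
    if v_in >= HIGH_FRACTION * vcc:
        return "logical 1 (red LED)"
    return "undefined (high-Z or bad contact; neither LED)"

for vcc in (3.3, 5.0):
    for v in (0.2, 0.5 * vcc, vcc - 0.2):
        print(f"VCC={vcc} V, Vin={v:.2f} V -> {classify(v, vcc)}")
```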

Two different colors were selected: 

  • Blue for logical 0
  • Red for logical 1

Since the blue LED demands more than 2 V, a slightly modified “joule-thief” circuit on Q2 is used to increase the voltage. The transformer has two windings with an inductance ranging from 80 to 200 µH; if the windings are not equal, the one with the greater inductance should be connected to the collector. (The author used a tiny transformer from an old ferrite memory, but any coil with an added winding over it will do.)

If you choose a green or red LED instead of blue, the “joule-thief” circuit can be eliminated, and the LED connected between the upper terminal of R5 and “+A”.

Due to the wide supply voltage range, the current through the LEDs can increase by 100% or more. Since the LEDs are quite bright, some brightness control is desirable; it’s performed by U3, Q3, and two diodes, which can decrease the LEDs’ supply by 1.4 V.

Note, the 74HC14 can be used instead of the 74HC132 almost everywhere in the circuit.

Peter Demchenko studied math at the University of Vilnius and has worked in software development.


Making waves: Engineering a spectrum revolution for 6G

Wed, 02/21/2024 - 16:56

6G is looking to achieve a broad range of goals, in turn requiring an extensive array of technologies. As with 5G, no single technology will define 6G. The groundwork laid in the previous generation will serve as a starting point for the new one. As a distinct new generation, though, 6G will also break free from previous ones, including 5G, by introducing new concepts. Among them, new spectrum technologies will help the industry achieve complete coverage for 6G.

Tapping into new spectrum

Looking back, every generation of cellular technology has looked to leverage new spectrum, and 6G won't be an exception, given the emergence of new use cases and growing demand for high-speed data. As a result, 6G needs to deliver much higher data throughputs than 5G, making millimeter-wave (mmWave) bands extremely attractive.

New frequency bands under consideration for 6G include those between 100 GHz and 300 GHz, often called sub-terahertz (sub-THz) bands. There is also interest in the upper mid-band—the spectrum between 7 and 24 GHz—because of its lower propagation loss compared to sub-THz bands, particularly between 7 and 15 GHz.
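
The appeal of the upper mid-band is easy to see from free-space path loss alone (a simplification that ignores atmospheric absorption and blockage). The sketch below compares an upper mid-band carrier with a sub-THz carrier at the same distance; the specific frequencies and distance are illustrative picks, not values from this article.

```python
# Free-space path loss comparison between an upper mid-band carrier and a
# sub-THz carrier: FSPL(dB) = 20*log10(4*pi*d*f/c). The chosen frequencies
# and distance are illustrative assumptions.

import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

D = 100.0  # meters
for label, f in (("10 GHz (upper mid-band)", 10e9),
                 ("140 GHz (sub-THz)", 140e9)):
    print(f"{label}: {fspl_db(D, f):.1f} dB at {D:.0f} m")

delta = fspl_db(D, 140e9) - fspl_db(D, 10e9)
print(f"The sub-THz link pays roughly {delta:.1f} dB extra path loss "
      "(20*log10(140/10)), before any absorption or blockage effects")
```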

This spectrum presents regulatory challenges, though, and is used by various entities, including governments and satellite service providers. However, some bands could work for mobile communications with the implementation of more advanced spectrum sharing techniques. Figure 1 provides an overview of the frequencies allocated for mobile and wireless access in this spectrum.

Figure 1 An overview of frequency allocation for mobile and fixed wireless access in the upper mid-band. Source: Radio Regulations, International Telecommunication Union, 2020

While these frequencies have been used for a variety of applications outside of cellular, channel sounding is needed to characterize the use of this spectrum in 6G to ensure it provides the benefits for the targeted 6G application.

The 7 to 24 GHz spectrum is a key area of focus for RAN Working Group 1 (RAN1) within the Third Generation Partnership Project (3GPP) for the purpose of Release 19, which will be finalized in late 2025 and facilitate the transition from 5G to 6G.

Scaling with ultra-massive MIMO

Over time, wireless standards have continued to evolve to maximize the bandwidth available in various frequency bands. Multiple-input multiple-output (MIMO) and massive MIMO technologies were major enhancements for radio systems with a significant impact for 5G. By combining multiple transmitters and receivers and using constructive and destructive interference to beamform information toward users, MIMO significantly enhanced performance.

6G can improve on this further. MIMO is expected to scale to thousands of antennas to provide greater data rates to users. Data rates are expected to grow from single gigabits per second to hundreds of gigabits per second. Ultra-massive MIMO will also enable hyper-localized coverage in dynamic environments. The target for localization precision in 6G is 1 centimeter, a significant leap over 5G's 1 meter.
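
One way to see why scaling the antenna count pays off: ideal coherent beamforming gain grows as 10·log10(N). The quick comparison below is idealized and ignores implementation losses; the element counts are illustrative, not standardized figures.

```python
# Ideal beamforming array gain versus element count, 10*log10(N).
# Element counts are illustrative; real arrays lose several dB to
# implementation effects not modeled here.

import math

def array_gain_db(n_elements: int) -> float:
    return 10.0 * math.log10(n_elements)

for n in (64, 256, 1024, 4096):
    print(f"{n:5d} elements -> ideal beamforming gain ≈ {array_gain_db(n):.1f} dB")

extra = array_gain_db(4096) - array_gain_db(64)
print(f"Going from 64 to 4096 elements buys ≈ {extra:.1f} dB of link budget")
```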

Interacting with signals for better range and security

Reconfigurable intelligent surfaces (RIS) also represent a significant development for 6G. Currently, this technology is the focus of discussions at 3GPP and the European Telecommunications Standards Institute (ETSI).

Using high-frequency spectrum is essential to achieving greater data throughputs, but this spectrum is prone to interference. RIS technology will play a key role in addressing this challenge, helping mmWave and sub-THz signals overcome the high free-space path loss and blockage of high-frequency spectrum.

RISs are flat, two-dimensional structures that consist of three or more layers. The top layer comprises multiple passive elements that reflect and refract incoming signals, enabling data packets to go around large physical obstacles like buildings, as illustrated in Figure 2.

Figure 2 RISs are two-dimensional multi-layer structures where the top layer consists of an array of passive elements that reflect/refract incoming signals, allowing the sub-THz signals used in 6G to successfully go around large objects. These elements can be programmed to control the phase shift, steering the signal into a narrow beam directed at a specific location. Source: RIS TECH Alliance, March 2023

Engineers can program the elements in real time to control the phase shift, enabling the RIS to reflect signals in a narrow beam to a specific location. With the ability to interact with the source signal, RISs can increase signal strength and reduce interference in dense multi-user environments or multi-cell networks, extending signal range and enhancing security.
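
A minimal sketch of what "programming the elements" means in practice: for a simple linear array under normal incidence, each unit cell applies a phase shift that compensates for its own path-length difference so the reflected wavefront adds coherently toward the intended direction. The frequency, spacing, and array size below are assumptions for illustration, not parameters from any standard.

```python
# Minimal sketch of per-element phase programming for a linear RIS that
# steers a reflection toward a chosen angle, assuming normal incidence.
# Frequency, element pitch, and array size are illustrative assumptions.

import math

FREQ_HZ = 28e9                      # assumed mmWave carrier
C = 299_792_458.0
WAVELENGTH = C / FREQ_HZ
SPACING = WAVELENGTH / 2.0          # half-wavelength element pitch
N_ELEMENTS = 16
TARGET_ANGLE_DEG = 30.0             # desired reflection direction

def element_phase_deg(index: int) -> float:
    """Phase shift (degrees) element `index` must apply to steer the beam."""
    path_diff = index * SPACING * math.sin(math.radians(TARGET_ANGLE_DEG))
    phase = -2.0 * math.pi * path_diff / WAVELENGTH
    return math.degrees(phase) % 360.0

for i in range(N_ELEMENTS):
    print(f"element {i:2d}: phase setting ≈ {element_phase_deg(i):6.1f}°")
```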

Going full duplex

Wireless engineers have tried to enable simultaneous signal transmission and reception for years to drive a step-function increase in capacity for radio channels. Typically, radio systems employ just one antenna to transmit and receive signals, which requires the local transmitter to deactivate during reception or transmit on a different frequency to be able to receive a weak signal from a distant transmitter.

Duplex communication requires either two separate radio channels or splitting the capacity of a single channel, but this is changing with the advent of in-band full duplex (IBFD) technology, which is currently under investigation in 3GPP Release 18. IBFD uses an array of techniques to avoid self-interference, enabling the receiver to maintain a high level of sensitivity while the transmitter operates simultaneously on the same channel.
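
The scale of the self-interference problem is easiest to appreciate as a simple link-budget subtraction: the local transmitter must be suppressed down to roughly the receiver's noise floor. The numbers below are assumed, textbook-style values, not figures taken from 3GPP.

```python
# Rough self-interference cancellation budget for in-band full duplex.
# All numbers are assumed, textbook-style values for illustration.

import math

TX_POWER_DBM = 23.0            # assumed transmit power
BANDWIDTH_HZ = 100e6           # assumed channel bandwidth
NOISE_FIGURE_DB = 7.0          # assumed receiver noise figure
THERMAL_DBM_PER_HZ = -174.0    # thermal noise density at room temperature

noise_floor_dbm = (THERMAL_DBM_PER_HZ
                   + 10 * math.log10(BANDWIDTH_HZ)
                   + NOISE_FIGURE_DB)
required_cancellation_db = TX_POWER_DBM - noise_floor_dbm

print(f"Receiver noise floor ≈ {noise_floor_dbm:.1f} dBm")
print(f"Self-interference must be suppressed by ≈ {required_cancellation_db:.0f} dB")
```

With these assumptions the budget works out to roughly 110 dB of total cancellation, which is why IBFD combines antenna isolation with analog and digital cancellation stages.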

Introducing AI/ML-driven waveforms

New waveforms are another exciting development for 6G. Despite widespread use in cellular communications, the signal flatness of orthogonal frequency division multiplexing (OFDM) creates challenges with wider bandwidth signals in radio frequency amplifiers. Moreover, the integration of communication and sensing into a single system, known as joint communications and sensing (JCAS), also requires a waveform that can accommodate both types of signals effectively.

Recent developments in AI and machine learning (ML) offer the opportunity to reinvent the physical-layer (PHY) waveform that will be used for 6G. Integrating AI and ML into the physical layer could give rise to adaptive modulation, enhancing the power efficiency of communications systems while increasing security. Figure 3 shows how the physical layer could evolve to include ML for 6G.

Figure 3 The proposed migration to an ML-based physical layer for 6G to enhance both the power efficiency and security of the transmitter and receiver. Source: IEEE Communications Magazine, May 2021.

Towards complete coverage

6G is poised to reshape the communications landscape, pushing cellular technology to make a meaningful societal impact. Today, the 6G standard is in its infancy, with the first release expected to be Release 20, but research on multiple fronts is in full swing. These efforts will drive the standard's development.

Predicting the demands of future networks and which applications will prevail is a significant challenge, but the key areas the industry needs to focus on for 6G have emerged, new spectrum technologies being one of them. New spectrum bands, ultra-massive MIMO, reconfigurable intelligent surfaces, full duplex communication, and AI/ML-driven waveforms will help 6G deliver complete coverage to users.

Jessy Cavazos is part of Keysight’s Industry Solutions Marketing team.

Related Content


The post Making waves: Engineering a spectrum revolution for 6G appeared first on EDN.

The chiplet universe is coming: What’s in it for you?

Wed, 02/21/2024 - 12:53

There’s a lot of talk and excitement about chiplets these days, but there’s also a lot of confusion. What is available today? What should I expect in terms of interoperability? Is the promise of an emerging ecosystem real? More fundamentally, developers of high-end systems-on-chip (SoCs) need to consider a central question: “What’s in it for me?” The answer, unsurprisingly, varies depending on the type of application and the target market for these devices.

For the last few years, I have been closely monitoring the multi-die market, and I’ve been talking to a wide variety of players ranging from chip designers to chip manufacturers to end users of our system IP product offering. Although commentators and stakeholders accurately describe key benefits of chiplet technology, I’ve observed that these descriptions are rarely comprehensive and often lack structure.

Here is an outline of the chiplet driving factors and the size of the opportunity per industry vertical. Source: Arteris

As a result, I felt the need to identify common themes, reflect on their importance for future deployment and map them on the key industry verticals. This blog aims to summarize these insights in a diagram (see figure above), with the hope that it is useful to you.

1. Scalability: The key to meeting diverse computing demands

Scalability stands at the forefront of the chiplet revolution. Traditional monolithic chip designs face physical and economic limits as they approach the boundaries of Moore’s Law. Chiplets, however, offer a modular approach. By combining smaller, discrete components or “chiplets,” manufacturers can create larger, more powerful processors.

This modular design allows for the easy scaling of performance and functionality to meet the specific needs of various applications. This is what drove the early adoption of the technology by pioneering companies in the enterprise compute vertical. Today, it also attracts players in the communications and automotive industries, which crave higher computing power, particularly for AI applications.

2. Cost efficiency: Lowering expenses and increasing competitiveness

Cost efficiency is another critical factor driving the adoption of chiplets. Traditional chip fabrication, especially at the cutting edge, is exceedingly expensive, with costs escalating as transistors shrink. The chiplet approach mitigates these costs in several ways.

First, it allows for the use of older, more cost-effective manufacturing processes for certain components. Second, by constructing a processor from multiple smaller chiplets, manufacturers can significantly reduce the yield loss associated with defects in large monolithic chips.

If part of a chiplet is defective, it doesn’t render the entire chip unusable, as would be the case with a traditional design. This translates directly into cost savings, making high-performance computing more accessible. This aspect is especially critical for cost-sensitive sectors such as wireless communications, consumer electronics, and industrial applications.
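A simple Poisson yield model makes the defect argument concrete. The defect density and die areas below are illustrative assumptions, and the comparison presumes known-good-die testing, so a defective chiplet is scrapped on its own rather than taking an entire large die with it.

```python
import math

def poisson_yield(area_cm2, defects_per_cm2):
    """Classic Poisson die-yield estimate: Y = exp(-D * A)."""
    return math.exp(-defects_per_cm2 * area_cm2)

D = 0.2                                  # assumed defect density, defects per cm^2
y_monolithic = poisson_yield(8.0, D)     # one 800 mm^2 monolithic die
y_chiplet = poisson_yield(2.0, D)        # one 200 mm^2 chiplet

# With known-good-die testing, bad chiplets are discarded individually, so the
# fraction of silicon scrapped tracks (1 - yield) of the small die, not the big one.
print(f"Monolithic die yield: {y_monolithic:.1%} (about {1 - y_monolithic:.0%} of dies scrapped)")
print(f"Single chiplet yield: {y_chiplet:.1%} (about {1 - y_chiplet:.0%} of dies scrapped)")
```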

3. Ecosystem development: Fostering collaboration and innovation

The shift to chiplets also encourages the development of a more collaborative and innovative ecosystem in the semiconductor industry. With chiplets, different companies can specialize in various types of computing hosts and accelerators, contributing their expertise to a larger whole.

This openness can lead to a more vibrant ecosystem, as smaller players can innovate in specific areas without the overhead of designing entire chips. Such collaboration could accelerate technological advancements, benefiting newcomers in the automotive and consumer electronics vertical, for instance, and leading to more rapid iterations and improvements in technology.

4. Portfolio management: A strategic approach to product development

Finally, the transition to chiplets allows companies to manage their product portfolios more effectively. With the ability to mix and match different chiplets, a company can more quickly and efficiently adapt its product offerings to meet market demands. This flexibility enables faster responses to emerging trends and customer needs, providing a competitive edge.

Additionally, the ability to reuse chiplets across multiple products can streamline research and development, reducing time-to-market and R&D expenses. The flexibility to mix and match chiplets for different configurations makes it easier to tailor chips to specific market segments and is particularly suited to the needs of the consumer and automotive markets.

Overall, the chiplet architecture is poised to revolutionize the semiconductor industry, with each sector finding unique value in its capabilities. This tailored approach ensures that chiplets will play a critical role in driving forward the technological advancements of each industry vertical.

Guillaume Boillet, senior director of product management and strategic marketing at Arteris, drives the product lifecycle of the interconnect IP and SoC integration automation portfolios.

 

Related Content


The post The chiplet universe is coming: What’s in it for you? appeared first on EDN.

Power Tips #126: Hot plugging DC/DC converters safely

Tue, 02/20/2024 - 18:44

In power converters, the input capacitors are fed from the power source through inductive cabling. When the converter is first plugged into a live source, an event known as hot plugging, the parasitic inductance will cause the input voltage to ring to almost twice its DC value. An insufficiently damped power-converter input and a lack of inrush control can damage the converter.

Using input bulk electrolytic capacitors to damp the input voltage of off-battery converters can prevent excessive voltage ringing when first applying battery power, while also preventing resonances that can destabilize the converter. With the move from the traditional 12-V automotive battery to 24-VIN and 48-VIN systems, properly damping the input becomes even more important. 12-V battery systems typically use components rated for 40 V or higher to survive short-duration voltage spikes under load-dump conditions. The maximum DC voltage for these 12-V systems can reach 18 VDC, so hot plugging can cause input ringing that approaches twice the input, around 36 V, which is still comfortably below the 40-V (or higher) component ratings. However, in a 48-V system where steady-state input voltages can reach 54 V, ringing on the input can potentially exceed 100 V, damaging components rated for 80 V.

With traditional 12-V systems, one often assumes the damping capacitors have enough equivalent series resistance (ESR) to tame the resonance. But with low-cost aluminum electrolytic capacitors, the actual ESR is generally much lower than the published maximum, resulting in much less damping and much more ringing when applying battery power. With 12-V systems, the reduced damping may still be enough to prevent destabilization of the downstream DC/DC converter, and the ringing will not cause damage. However, in 48-V systems, which are more vulnerable to ringing, you can add discrete resistors in series with the input damping capacitors. Based on steady-state ripple currents alone, a size 0603 (1608 metric) resistor should suffice.

In Figure 1, L1 and C1 values from an existing DC/DC converter’s input filter create a resonance that is expressed by Equation 1:
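For an undamped LC input filter, that resonant frequency takes the familiar form:

$$f_{res} = \frac{1}{2\pi\sqrt{L_1 C_1}}$$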

We chose the target damping capacitor (Cd) and damping resistance (Rd) based on the TI E2E™ design support forums technical article, “Damping input bead resonance to prevent oscillations”. Cd should ideally be at least three times C1; we chose a 150-µF standard value for Cd.

Equation 2 expresses the target damping resistance:
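The exact expression comes from the referenced article; a commonly used target, which the 0.5-Ω value chosen below approximates, is on the order of the filter's characteristic impedance:

$$R_d \approx \sqrt{\frac{L_1}{C_1}}$$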

For the damping resistance (Rd), we added two paralleled 1-Ω resistors in series with Cd.

Figure 1 A simplified input filter with damping to prevent excessive voltage ringing when first applying battery power, while also preventing resonances that can destabilize the converter.

Figure 2 shows the simulated hot-plug response both without and with the added 0.5-Ω damping resistance in series with Cd.

Figure 2 Simulation of the hot-plug response without and with the 0.5-Ω damping resistance in series with Cd.
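For readers who want to reproduce the qualitative behavior of Figure 2, the script below integrates the simplified filter of Figure 1 driven by an ideal voltage step, with the converter load ignored. The 10-µH value for L1 comes from the inductor mentioned later in this article; C1 is an assumed value for illustration, so the waveform details will differ from the article's simulation.

```python
import numpy as np

def hotplug_response(v_in=54.0, L1=10e-6, C1=47e-6, Cd=150e-6, Rd=0.5,
                     t_end=2e-3, dt=10e-9):
    """Input-node voltage after an ideal hot-plug step into the Figure 1 filter.
    Pass a very large Rd (e.g., 1e6) to approximate the undamped case."""
    steps = int(t_end / dt)
    i_l = v_c1 = v_cd = 0.0             # inductor current, C1 voltage, Cd voltage
    v_node = np.empty(steps)
    for n in range(steps):
        i_d = (v_c1 - v_cd) / Rd        # current into the Rd + Cd damping branch
        i_l += dt * (v_in - v_c1) / L1  # dI/dt = (Vin - Vnode) / L1
        v_c1 += dt * (i_l - i_d) / C1   # C1 takes the difference current
        v_cd += dt * i_d / Cd           # Cd charges through Rd
        v_node[n] = v_c1
    return v_node

undamped = hotplug_response(Rd=1e6)     # damping branch effectively open
damped = hotplug_response(Rd=0.5)
print(f"Peak without damping: {undamped.max():.1f} V")  # approaches ~2x Vin
print(f"Peak with damping:    {damped.max():.1f} V")
```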

We achieved damping of the input filter by using the correct damping resistor and capacitor combination. There is one aspect, however, that is easy to overlook. In the lab, we experienced the destruction of the damping resistor (Rd) when hot plugging the supply. What we realized is that the damping resistor sees a peak power expressed by Equation 3:
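Taking the worst case, in which the full input voltage momentarily appears across the damping branch before Cd has charged, the per-resistor peak power is roughly:

$$P_{pk} \approx \frac{V_{in}^2}{R}$$

where R is the individual 1-Ω resistor value, since the paralleled parts see the same voltage.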

For our 1 Ω resistors across 54 V, that would be about 2,900 W peak in each resistor. Furthermore, the resistor dissipates nearly the same energy as that stored in the damping capacitor (Cd) in a very short period of time. This energy stored in the damping capacitor is expressed by Equation 4:
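This is the standard capacitor-energy relationship:

$$E = \frac{1}{2} C_d V_{in}^2$$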

In our case, that energy is shared equally between the two 1-Ω resistors. A 150-µF capacitance charged to 54 VIN stores approximately 220 mJ total, or 110 mJ per 1-Ω resistor. This is a slightly conservative assumption, as the internal ESR of Cd reduces the actual peak voltage across these resistors by about 4%.

Mapping the actual inrush surge to the curve in the surge rating graphs is not straightforward. The actual surge profile will be roughly a decaying exponential waveform, while the resistor ratings assume a fixed-duration constant power, as shown in Figure 3.

Figure 3 Example surge ratings for a surge-rated resistor; the ratings assume a fixed-duration constant-power pulse, whereas the actual inrush surge is roughly a decaying exponential.

A conservative approach would be to divide the total energy dissipated in the resistor by the peak power, and then check the resulting pulse duration against the resistor’s surge rating graph. The calculated pulse will be more severe than the actual pulse, which spreads the same heating energy over a longer time frame. For our case, 110 mJ divided by 2,900 W is 38 µs in each resistor. A surge-rated 2512 resistor such as the SG733A/W3A can handle 4.5 kW for approximately 40 µs, which means this package is suitable for this application. General-purpose resistors in the same 2512 package have power ratings more than an order of magnitude lower than surge-rated resistors.
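These back-of-the-envelope checks are easy to script, as in the sketch below, which reproduces the numbers quoted above. The 4.5-kW/40-µs point stands in for a reading of the surge-rating graph and would normally be taken from the specific resistor's datasheet.

```python
def surge_check(v_in, r_each, n_parallel, c_damp, p_rating, t_rating):
    """Conservative hot-plug surge check for paralleled damping resistors."""
    p_peak = v_in ** 2 / r_each                        # peak power per resistor (Equation 3)
    e_each = 0.5 * c_damp * v_in ** 2 / n_parallel     # stored energy (Equation 4), shared equally
    t_equiv = e_each / p_peak                          # equivalent constant-power pulse duration
    ok = p_peak <= p_rating and t_equiv <= t_rating
    return p_peak, e_each, t_equiv, ok

p, e, t, ok = surge_check(v_in=54.0, r_each=1.0, n_parallel=2,
                          c_damp=150e-6, p_rating=4.5e3, t_rating=40e-6)
print(f"Peak power per resistor: {p:.0f} W")        # ~2,916 W
print(f"Energy per resistor:     {e * 1e3:.0f} mJ") # ~109 mJ
print(f"Equivalent pulse:        {t * 1e6:.0f} us") # ~38 us
print("Within surge rating" if ok else "Exceeds surge rating")
```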

This calculation does ignore the effect of series inductance. An inductor will slow the rise of current into the resistor and reduce the maximum power, but it will also add total losses from overshoot, as shown in Figure 2. The simulation results including the 10-µH inductor show peak power in the resistor dropping by 30% from the calculated 2.9 kW, while the total energy in the resistor is 17% higher than the 110 mJ calculated earlier. The rating curves show that the allowed energy follows the peak-power ratio to the negative two-thirds power; a 30% reduction in peak power (a ratio of 0.7, and 0.7^(-2/3) ≈ 1.27) therefore allows 27% more losses, so our calculations remain conservative both with and without series input inductance.

Avoiding failures from hot plugging

While the best automotive installation and maintenance practices will avoid hot plugging, errors will inevitably occur. Following the procedures outlined in this article will help avoid costly damage to the system. As your partner in power management, TI is in constant pursuit of pushing the limits of power.

Hrag Kasparian, who joined Texas Instruments over 10 years ago, currently serves as a power applications engineer, designing custom DC-DC switch-mode power supplies. Previously, he worked on the development of battery packs, chargers, and electric vehicle (EV) battery management systems at a startup company in Silicon Valley. Hrag graduated from San Jose State University with a Bachelor of Science in electrical engineering.

Josh Mandelcorn has been on Texas Instruments’ Power Design Services team for almost two decades. He has designed high-current multiphase converters to power the core and memory rails of processors handling large, rapid load changes with stringent voltage undershoot/overshoot requirements. He is listed as either an author or co-author on 17 US patents related to power conversion. He received a BSEE degree from Carnegie Mellon University.

Related Content

Additional resources


The post Power Tips #126: Hot plugging DC/DC converters safely appeared first on EDN.

The PowerStation PSX3: A portable multifunction vehicular powerhouse with a beefy battery

Mon, 02/19/2024 - 17:38

As regular readers may already recollect, I’ve got two vehicles in outdoor storage, which (at minimum) I start once a year to reorder them and drive the one now in front to a nearby emissions testing center.

Stored-vehicle batteries inevitably drain, and their tires slowly-but-surely also deflate. Which is why the PowerStation PSX3 has long had a rarely-but-repeatedly used prized place in my gadget stable. I’ll start with some stock shots of the product:

As you can tell, it’s (among other things) a portable recharger and jump-starter of vehicles’ cells. It’s also a portable tire inflater. And it’s an emergency light and USB power source, too:

all of which makes it handy to have with me at all times in my Eurovan Camper, for example:

Here’s my unit in all its dusty, dirty glory:

Cables, etc. inside the “door”:

along with closeups on those stickers you saw in the overview shots:

Here’s the thing, though. If you visit the product page, you’ll find that the PowerStation PSX3 is no longer being sold. And after many years of occasional use, in combination with deep discharge cycles between uses, the embedded sealed lead-acid battery in mine had become effectively unusable; it’d take forever to charge, if I could get it to fully charge at all, and its ability to inflate tires and jumpstart vehicles was a shadow of its former self.

My first “bright idea” was to pick up one of those newfangled chargers you may have noticed often on sale at Amazon and elsewhere, which I’m assuming are all Li-ion battery-based (since NiMH cells wouldn’t deliver the necessary “punch”). For tire inflation purposes, I alternatively had a nifty adapter in the garage that leveraged my stock of Black+Decker 20 V MAX multi-purpose batteries:

It wasn’t as powerful as the PowerStation PSX3 had been in its prime, but I had a bunch of batteries and they’re easy to transport, so I figured a jumpstart-only device would suffice as a PowerStation PSX3 successor.

I tried three of these widgets, one claiming to deliver 1200 A of “peak” cranking juice:

Another spec’ing 1500 A:

And a third that promised to deliver 2000 A:

They all promptly went back to Amazon as full-refund returns. Now granted, if someone had left their interior dome light on too long and the battery was drained too low to successfully turn over the engine but still had some “life”, one of these might suffice, which is why this combo jump-starter/tire inflater/USB charger/light still resides in the back of my wife’s SUV:

And I’ll grant them one other thing: they’re certainly small and light. But 2000 A of cranking current? Or even 1500 A? Mebbe for a fraction of a second, the time necessary to drain an intermediary capacitor, but not long enough to resurrect a significantly drained battery. Therefore, the quotes I put around the word “peak” earlier. Such products exemplify the well-worn saying, “mileage may vary”. Give me an old-school lead acid battery instead, any day!

At that point, I had another idea, which ended up being brighter. As I wrote about last summer, uninterruptable power supplies (UPSs) often have replaceable embedded batteries (unless the manufacturer has intentionally designed them otherwise, of course). Could the PowerStation PSX3, with user-accessible screws on its backside, be similar?

Yes, hope-inducing YouTube videos like this one reassured me, it could!

(I too hate throwing things out if it wasn’t already intuitively obvious)

At this point, I had maybe my brightest idea of all, if I do say so myself. In that earlier UPS writeup, I’d mentioned that I’d bought six replacement batteries for $49.99 total on sale (they’re now $119.99 for six as I write these words). They were purchased through Amazon but were shipped directly from the manufacturer, Mighty Max. The thing is, though, the first shipment delivered to me was not six smaller batteries but one much larger one.

The Mighty Max rep promptly apologized, sent me the correct ones, and told me to hold onto the first one in case I ever found a use for it. Hmmm…Where in the garage did I put that box?

And hey, it’s not only got the correct dimensions, but the terminals’ polarities match!

Additional included hardware, which I didn’t end up needing to use:

Cool, let’s remove those screws and crack this device open!

At this point, I need to beg for forgiveness from you, dear readers. Were this a proper full teardown, I wouldn’t stop at this point. But the objective here was not to fully dissect the product. It was instead to resurrect it to full health. So, squelching my own curiosity, not to mention yours in the process, I stopped here. That said, for example, you can clearly see the massive-percentage-of-total-volume motor that implements the air compressor function:

And here’s our patient on the other side:

The negative battery terminal was corroded, so I cleaned everything up in the process of disconnecting it:

The positive terminal was more pristine:

At this point, however, after wrestling the old battery out of its captivity:

I realized I had a problem. Here’s the final shot of the old cell:

And here’s another perspective on the new one:

See what’s different? The two batteries are the same size. And the terminals’ polarities do match. But the terminals’ exact locations are not the same. Force-fitting the negative terminal reconnection was fairly straightforward, since I just had to stretch a few wires that already had sufficient slack. The positive terminal reconnection, on the other hand, was admittedly more of a MacGyver move (and I almost skipped sharing this image with you, out of embarrassment and fear of your mockery…but hey, at least no duct tape was involved!):

But at the end of the day, I ended up with a good-as-new PowerStation PSX3. Huzzah!

Comments are as-always welcomed…just please be gentle about my MacGyver move…

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post The PowerStation PSX3: A portable multifunction vehicular powerhouse with a beefy battery appeared first on EDN.
