Feed aggregator
Miniature MLCCs maintain high stability
MLCCs in Kyocera AVX’s KGU series use a Class 1 C0G (NP0) ceramic dielectric, ensuring stable operation across a wide temperature range. Offered in four miniature chip sizes, these capacitors have a temperature coefficient of capacitance (TCC) of 0 ±30 ppm/°C and exhibit virtually no voltage coefficient.
KGU series MLCCs come in EIA 01005, 0402, 0603, and 0805 chip sizes, with rated voltages ranging from 16 V to 250 V and capacitances from 0.1 pF to 100 pF. These components offer tolerances as tight as ±0.05 pF and operate across a temperature range of -40°C to +125°C. According to the manufacturer, the KGU parts also provide ultra-low ESR, high power handling, high Q, and high self-resonant frequencies.
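To put the ±30 ppm/°C temperature coefficient in perspective, a quick back-of-the-envelope calculation shows the worst-case capacitance shift for a linear TCC (the 100 pF part and 100°C swing below are illustrative values drawn from the stated ranges, not a datasheet test condition):

```python
def capacitance_drift(c_pf: float, tcc_ppm_per_c: float, delta_t_c: float) -> float:
    """Worst-case capacitance change (pF) for a linear temperature coefficient."""
    return c_pf * tcc_ppm_per_c * 1e-6 * delta_t_c

# A 100 pF C0G part at +/-30 ppm/degC, swung 100 degC from a 25 degC reference:
drift = capacitance_drift(100.0, 30.0, 100.0)
print(f"worst-case drift: +/-{drift:.2f} pF")  # +/-0.30 pF
```

Even at the extremes of the operating range, the largest part in the series moves by only a fraction of a picofarad, which is why C0G dielectrics suit high-Q filter work.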
Optimized for communications, these capacitors are suitable for filter networks, high-Q frequency sources, coupling, and DC blocking circuits. They can be used in cellular base stations, Wi-Fi networks, wireless devices, as well as broadband wireless, satellite communications, and public safety radio systems.
KGU series capacitors are available through Kyocera AVX’s distributor network, including DigiKey, Mouser, and Richardson RFPD.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Miniature MLCCs maintain high stability appeared first on EDN.
Automotive LDO packs watchdog timer
Nisshinbo’s NP4271 LDO regulator features a high-precision watchdog timer and reset functions through window-type output voltage monitoring. Designed for automotive functional safety, the series meets the need for external MCU monitoring and reliable voltage-based reset functions in electronic control units (ECUs).
The LDO operates across a broad input voltage range of 4.0 V to 40 V and offers two output voltage options of 3.3 V or 5.0 V. Output voltage is accurate to within ±2.0% over a range of conditions, including input voltages from 6 V to 40 V, load currents from 5 mA to 500 mA, and temperatures ranging from -40°C to +125°C.
Two reset function options are available based on output voltage monitoring. Version A monitors both the low and high sides, while Version B monitors only the low side. Detection voltage accuracy is ±2.0% for the low side and ±5.0% for the high side, across the full temperature range. Additionally, the NP4271 provides high timing accuracy for both watchdog timer monitoring and reset times.
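The window-type monitoring behind the two reset options can be sketched in a few lines; the threshold values below are illustrative, not Nisshinbo datasheet figures:

```python
def window_reset(vout, v_low, v_high=None):
    """Return True when a reset should be asserted.

    Version A monitors both sides of the window; Version B passes
    v_high=None and monitors only the low side.
    """
    if vout < v_low:
        return True
    if v_high is not None and vout > v_high:
        return True
    return False

# Hypothetical 5.0 V rail with an illustrative window:
assert window_reset(4.7, v_low=4.8, v_high=5.2)       # undervoltage -> reset
assert window_reset(5.3, v_low=4.8, v_high=5.2)       # overvoltage -> reset (Version A)
assert not window_reset(5.3, v_low=4.8, v_high=None)  # Version B ignores the high side
```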
The NP4271 automotive LDO regulator is available through Nisshinbo authorized distributors, including DigiKey and Mouser.
The post Automotive LDO packs watchdog timer appeared first on EDN.
PQC algorithms: Security of the future is ready for the present
Quantum computing technology is developing rapidly, promising to solve many of society’s most intractable problems. However, as researchers race to build quantum computers that would operate in radically different ways from ordinary computers, some experts predict that quantum computers could break the current encryption that provides security and privacy for just about everything we do online.
Encryption—which protects countless electronic secrets, such as the contents of email messages, medical records, and photo libraries—carries a heavy load in modern digitized society. It does that by encrypting data sent across public computer networks so that it’s unreadable to all but the sender and intended recipient.
However, far more powerful quantum computers would be able to break the traditional public-key cryptographic algorithms, such as RSA and elliptic curve cryptography, that we use in our everyday lives. So, the need to secure the quantum future has unleashed a new wave of cryptographic innovation, making post-quantum cryptography (PQC) a new cybersecurity benchmark.
Enter the National Institute of Standards and Technology (NIST), the U.S. agency that has rallied the world’s cryptography experts to conceive, submit, and then evaluate cryptographic algorithms that could resist the assault of quantum computers. NIST started the PQC standardization process back in 2016 by seeking ideas from cryptographers and then asked them for additional algorithms in 2022.
Three PQC standards
On 13 August 2024, NIST announced the completion of three standards as primary tools for general encryption and protecting digital signatures. “We encourage system administrators to start integrating them into their systems immediately, because full integration will take time,” said Dustin Moody, NIST mathematician and the head of the PQC standardization project.
Figure 1 The new PQC standards are designed for two essential tasks: general encryption to protect information exchanged across a public network and digital signatures for identity authentication. Source: NIST
Federal Information Processing Standard (FIPS) 203, primarily tasked for encryption, features smaller encryption keys that two parties can exchange easily at a faster speed. FIPS 203 is based on the CRYSTALS-Kyber algorithm, which has been renamed ML-KEM, short for Module-Lattice-Based Key-Encapsulation Mechanism.
FIPS 204, primarily designed for protecting digital signatures, uses the CRYSTALS-Dilithium algorithm, which has been renamed ML-DSA, short for Module-Lattice-Based Digital Signature Algorithm. FIPS 205, also intended for digital signatures, employs the Sphincs+ algorithm, which has been renamed SLH-DSA, short for Stateless Hash-Based Digital Signature Algorithm.
PQC standards implementation
Xiphera, a supplier of cryptographic IP cores, has already started updating its xQlave family of security IPs by incorporating ML-KEM (Kyber) for key encapsulation and ML-DSA (Dilithium) for digital signatures, according to the final versions of the NIST standards.
“We are updating our xQlave PQC IP cores within Q3 of 2024 to comply with these final standard versions,” said Kimmo Järvinen, co-founder and CTO of Xiphera. “The update will be minor, as we already support earlier versions of the algorithms in xQlave products as of 2023 and have been following very carefully the standardisation progress and related discussions within the cryptographic community.”
Xiphera has also incorporated a quantum-resistant secure boot in its nQrux family of hardware trust engines. The nQrux secure boot is based on pure digital logic and does not include any hidden software components, which bolsters security and ensures easier validation and certification.
The nQrux secure boot uses a hybrid signature scheme comprising Elliptic Curve Digital Signature Algorithm (ECDSA), a traditional scheme, and the new quantum-secure signature scheme, ML-DSA, both standardized by NIST. The solution will ensure system security even if quantum computers break ECDSA, or if a weakness is identified in the new ML-DSA standard.
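The acceptance logic of such a hybrid scheme is easy to sketch: the boot image is accepted only if both signatures verify, so breaking one scheme alone is not enough. The stub verifiers below stand in for real ECDSA and ML-DSA implementations (a production design would call into certified cryptographic cores); all names here are illustrative:

```python
# Stub verifiers standing in for real ECDSA and ML-DSA engines.
def verify_ecdsa(image: bytes, sig: bytes) -> bool:
    return sig == b"ecdsa-ok"

def verify_ml_dsa(image: bytes, sig: bytes) -> bool:
    return sig == b"mldsa-ok"

def hybrid_boot_check(image: bytes, ecdsa_sig: bytes, ml_dsa_sig: bytes) -> bool:
    """Boot only if BOTH schemes verify: a break of either one is insufficient."""
    return verify_ecdsa(image, ecdsa_sig) and verify_ml_dsa(image, ml_dsa_sig)

fw = b"firmware-image"
assert hybrid_boot_check(fw, b"ecdsa-ok", b"mldsa-ok")       # both valid -> boot
assert not hybrid_boot_check(fw, b"forged", b"mldsa-ok")     # classical break alone fails
assert not hybrid_boot_check(fw, b"ecdsa-ok", b"forged")     # PQC weakness alone fails
```

The AND composition is the point of the hybrid: an attacker must defeat both the classical and the post-quantum scheme simultaneously.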
Figure 2 The hybrid system combines a classical cryptographic algorithm with a new quantum-secure signature scheme. Source: Xiphera
The nQrux secure boot, a process node agnostic IP core, can be easily integrated across FPGA and ASIC architectures. Xiphera plans to make this IP core available for customer evaluations in the fourth quarter of 2024.
PQC standards in RISC-V
Next, RISC-V processor IP supplier SiFive has teamed up with quantum-safe cryptography provider PQShield to accelerate the adoption of NIST’s PQC standards on RISC-V technologies. This will allow designers leveraging SiFive’s RISC-V processors to build chips that comply with NIST’s recently published PQC standards.
SiFive will integrate PQShield’s PQPlatform-CoPro security IP in its RISC-V processors to establish a quantum-resistant hardware root-of-trust and thus build a foundation of a secure system. “This collaboration ensures that designers of RISC-V vector extensions will be working with the latest generation of cybersecurity,” said Yann Loisel, principal security architect at SiFive.
Figure 3 PQPlatform-CoPro adds post-quantum cryptography (PQC) to a security sub-system. Source: PQShield
The partnership will also allow PQShield’s cryptographic libraries to utilize RISC-V vector extensions for the first time. On the other hand, RISC-V processors will incorporate a brand-new security technology with a greater level of protection and trust.
No wait for backup standards
Powerful quantum computers are soon expected to be able to easily crack the current encryption standards used to protect software and hardware applications. So, as the above announcements show, hardware and software makers are starting to migrate their semiconductor products to PQC technologies in line with NIST’s new standards for post-quantum cryptography.
While NIST continues to evaluate two other sets of algorithms that could one day serve as backup standards, NIST’s Moody says there is no need to wait for future standards. “Go ahead and start using these three. We need to be prepared in case of an attack that defeats the algorithms in these three standards, and we will continue working on backup plans to keep our data safe. But for most applications, these new standards are the main event.”
It’s important to note that while these PQC algorithms are implemented on traditional computational platforms, they can withstand both traditional and quantum attacks. That’s a vital consideration for long-lifecycle applications in automotive and industrial designs.
Moreover, the landscape of cryptography and cybersecurity will continue shifting amid the ascent of powerful quantum computers capable of breaking the traditional public-key cryptographic algorithms. That poses an imminent threat to the security foundations of global networks and data infrastructures.
Related Content
- Securing the Internet of Things in a Quantum World
- An Introduction to Post-Quantum Cryptography Algorithms
- Release of Post-Quantum Cryptographic Standards Is Imminent
- The need for post-quantum cryptography in the quantum decade
- U.K. Conference Accelerates Post-Quantum Cryptography Standards Review Process
The post PQC algorithms: Security of the future is ready for the present appeared first on EDN.
Latest issue of Semiconductor Today now available
ROHM’s 4th Generation SiC MOSFET Bare Chips Adopted in Three EV Models of ZEEKR from Geely
Integration in traction inverters extends the cruising range and improves performance
ROHM has announced the adoption of power modules equipped with 4th generation SiC MOSFET bare chips for the traction inverters in three models of the ZEEKR EV brand from Zhejiang Geely Holding Group (Geely), a top-10 global automaker. Since 2023, these power modules have been mass-produced and shipped from HAIMOSIC (SHANGHAI) Co., Ltd. – a joint venture between ROHM and Zhenghai Group Co., Ltd. – to Viridi E-Mobility Technology (Ningbo) Co., Ltd., a Tier 1 manufacturer under Geely.
Geely and ROHM have been collaborating since 2018, beginning with technical exchanges, then later forming a strategic partnership focused on SiC power devices in 2021. This led to the integration of ROHM’s SiC MOSFETs into the traction inverters of three models: the ZEEKR X, 009, and 001. In each of these EVs, ROHM’s power solutions centered on SiC MOSFETs play a key role in extending the cruising range and enhancing overall performance.
ROHM is committed to advancing SiC technology, with plans to launch 5th generation SiC MOSFETs in 2025 while accelerating market introduction of 6th and 7th generation devices. What’s more, by offering SiC in various forms, including bare chips, discrete components, and modules, ROHM is able to promote the widespread adoption of SiC technology, contributing to the creation of a sustainable society.
ZEEKR Models Equipped with ROHM’s EcoSiC
The ZEEKR X, which features a maximum output exceeding 300kW and cruising range of more than 400km despite being a compact SUV, is attracting attention even outside of China due to its exceptional cost performance. The 009 minivan features an intelligent cockpit and large 140kWh battery, achieving an outstanding maximum cruising range of 822km. And for those looking for superior performance, the flagship model, 001, offers a maximum output of over 400kW from dual motors with a range of over 580km along with a four-wheel independent control system.
Market Background and ROHM’s EcoSiC
In recent years, there has been a push to develop more compact, efficient, lightweight electric systems to expand the adoption of next-generation electric vehicles (xEVs) and achieve environmental goals such as carbon neutrality. For electric vehicles in particular, improving the efficiency of the traction inverter, a key element of the drive system, is crucial for extending the cruising range and reducing the size of the onboard battery, heightening expectations for SiC power devices.
As the world’s first supplier to begin mass production of SiC MOSFETs in 2010, ROHM continues to lead the industry in SiC device technology development. These devices are now marketed under the EcoSiC brand, encompassing a comprehensive lineup that includes bare chips, discrete components, and modules.
EcoSiC Brand
EcoSiC is a brand of devices that utilize silicon carbide (SiC), which is attracting attention in the power device field for performance that surpasses silicon (Si). ROHM independently develops technologies essential for the evolution of SiC, from wafer fabrication and production processes to packaging, and quality control methods. At the same time, we have established an integrated production system throughout the manufacturing process, solidifying our position as a leading SiC supplier.
The post ROHM’s 4th Generation SiC MOSFET Bare Chips Adopted in Three EV Models of ZEEKR from Geely appeared first on ELE Times.
Capacitor Discharger - Discharge HV Capacitors up to 450V and 1000 µF
Renesas Unwraps MCU-Based Sensor Module for Smart Air Quality Monitoring
Hot Chips Keynote: AMD President Shares Thoughts on AI Pervasiveness
Cornell and Lit Thinking working on DARPA-funded project to develop AlN-based PiN diodes with low on-state resistance
Beaming solar power to Earth: feasible or fantasy?
It’s always interesting when we are presented with very different and knowledgeable perspectives about the feasibility of a proposed technological advance. I recently had this experience when I saw two sets of articles about the same highly advanced concept within a short time window, but with completely different assessments of their viability.
In this case, the concept is simple and has been around for a long time in science fiction and speculative stories: capture gigawatts of solar energy using orbiting structures (I hesitate to call them satellites) and then beam that energy down to Earth.
The concept has been written about for decades, is simple to describe in principle, and appears to offer many benefits with few downsides. In brief, the plan is to use huge solar panels to intercept some of the vast solar energy impinging on Earth, convert it to electricity, and then beam the resultant electrical energy to ground-based stations from where it could be distributed to users. In theory, this would be a nearly environmentally “painless” source of free energy. What’s not to like?
It’s actually more than just an “on paper” or speculative concept. There are several serious projects underway, including one at the California Institute of Technology (Caltech), which is building a very small-scale version of some of the needed components. The team has been performing ground-based tests and even launched some elements into orbit for in-space evaluation in January 2023 (“In a First, Caltech’s Space Solar Power Demonstrator Wirelessly Transmits Power in Space”). The Wall Street Journal ran an upbeat article about it, “Beaming Solar Energy From Space Gets a Step Closer”.
Many technical issues remain to be resolved in the real world (actually, “out of this world”) before the concept is viable. Note that the Caltech project is funded thus far by a $100 million grant, all from a single benefactor.
The Caltech Space Solar Power Project launched their Space Solar Power Demonstrator (SSPD) to test several key components of an ambitious plan to harvest solar power in space and beam the energy back to Earth. In brief, it consists of three main experiments, each tasked with testing a different key technology of the project, Figure 1.
Figure 1 Caltech’s Space Solar Power Demonstrator from their Space Solar Power Project has three key subsystems, encompassing structure, solar cells, and power transfer. Source: Caltech
The three segments are:
- Deployable on-Orbit ultraLight Composite Experiment (DOLCE): A structure measuring 6 feet by 6 feet that demonstrates the architecture, packaging scheme and deployment mechanisms of the modular spacecraft that would eventually make up a kilometer-scale constellation forming a power station, Figure 2;
Figure 2 Engineers carefully lower the DOLCE portion of the Space Solar Power Demonstrator onto the Vigoride spacecraft built by Momentus. Source: Caltech
- ALBA: A collection of 32 different types of photovoltaic (PV) cells, to enable an assessment of the types of cells that are the most effective in the punishing environment of space;
- Microwave Array for Power-transfer Low-orbit Experiment (MAPLE): An array of flexible lightweight microwave power transmitters with precise timing control focusing the power selectively on two different receivers to demonstrate wireless power transmission at distance in space.
Scaling a demonstration unit up to useable size is a major undertaking. The researchers envision the system as being designed and built as a highly modular, building-block architecture. Each spacecraft will carry a square-shaped membrane measuring roughly 200 feet on each side. The membrane is made up of hundreds or thousands of smaller units which have PV cells embedded on one side and a microwave transmitter on the other.
Each spacecraft would operate and maneuver in space on its own but also possess the ability to hover in formation and configure an orbiting power station spanning several kilometers with the potential to produce about 1.5 gigawatts of continuous power. A phased-array antenna would aim the 10-GHz power beam to a surface zone about five kilometers in diameter.
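As a back-of-the-envelope sanity check on those figures, using only the power and zone size quoted above, the average power density arriving at the ground works out to well under full sunlight (roughly 1,000 W/m²):

```python
import math

power_w = 1.5e9          # ~1.5 GW of continuous power, per the article
zone_diameter_m = 5_000  # ~5 km diameter ground receiving zone

area_m2 = math.pi * (zone_diameter_m / 2) ** 2
density_w_m2 = power_w / area_m2
print(f"average ground power density: {density_w_m2:.0f} W/m^2")  # ~76 W/m^2
```

That relatively low average density is one reason microwave beaming is argued to be safe for overflying aircraft and wildlife, though it also means the ground rectenna farm must be very large.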
The concept is certainly ambitious. Perhaps most challenging is the very harsh reality that scaling up power-related projects from a small-scale, bench-size demonstration unit to a full-scale functioning system is a highly nonlinear process. This applies to battery storage systems, solar and wind energy harvesting, and other sources.
Experience shows that difficulties and issues increase exponentially as physical size and power levels grow; the only question is “what is that exponent value?” Still, the concept makes sense and seems so straightforward; we just have to keep moving the technology along and we’ll get there, right?
I was almost convinced, but then I saw a strong counterargument in an article in the June 2024 issue of IEEE Spectrum (“A Skeptic’s Take on Beaming Power to Earth from Space”). The article’s author, Henri Barde, joined the European Space Agency in 2007 and served as head of the power systems, electromagnetic compatibility, and space environment division until his retirement in 2017; he has worked in the space industry for nearly 30 years and has reality-based insight.
He looked at various proposed and distinctly different approaches to capturing and beaming the power, including CASSIOPeiA from Space Solar Holdings Group; SPS-ALPHA Mark-III from a former NASA physicist; Solar Power Satellite from Thales Alenia Space; and MR-SPS from the China Academy of Space Technology (there’s a brief mention of the Caltech project as well).
He discusses key attributes, presumed benefits, and most importantly, the real obstacles to success as well the dollar and technical cost to overcoming those obstacles—assuming they can be overcome. These include the hundreds, if not thousands, of launches needed to get everything “up there”; the need for robotic in-space assembly and repair; fuel for station-keeping at the desired low earth orbit (LEO), medium earth orbit (MEO), or geostationary orbit (GEO); temperature extremes (there will be periods when satellites are in the dark) and associated flexing; impacts from thousands of micrometeorites; electronic components capable of handling megawatts in space (none of which presently exist), and many more.
His conclusion is simple: it’s a major waste of resources that could be better spent on improved renewable power sources, storage, and grid infrastructure on Earth. The problem, he points out, is that beamed solar power is such an enticing concept. It’s so elegant and seems to solve the energy problem so cleanly and crisply, once you figure it out.
So now I am perplexed. The sobering reality described in Barde’s “downer” article wiped out the enthusiasm I was developing for projects such as the one at Caltech. At some point, the $100 million seed money (and similar amounts at other projects) will need to be supplemented by more money, and lots of it (easily, trillions), to take any of these ideas to their conclusion, all while carrying substantial risk.
Is beamed solar power one of those attractive ideas that is actually impractical, impossible, too risky, and too costly when it meets reality of physics, electronics, space, and more? Do we need to keep pushing it to see where it can take us?
Or will the spigot of money as well as the personal energy of its proponents eventually dry up, since it is not a project that you can do part way? After all, with a project like this one, you’re either all in or you are all out.
I know that when it comes to the paths that technology advances take, you should “never say never.” So, check back in a few decades, and we’ll see where things stand.
Related Content
- Keep solar panels clean from dust, fungus
- Lightning as an energy harvesting source?
- The other fusion challenge: harvesting the power
References
- IEEE Spectrum, “A Skeptic’s Take on Beaming Power to Earth from Space”
- IEEE Spectrum, “Space-based Solar Power: A Great Idea Whose Time May Never Come”
- IEEE Spectrum, “Powering Planes With Microwaves Is Not the Craziest Idea”
- IEEE/Caltech Technical Paper, “The Caltech Space Solar Power Demonstration One”
- Caltech, “Solar Power at All Hours: Inside the Space Solar Power Project”
- Caltech, “Space Solar Power Project Ends First In-Space Mission with Successes and Lessons”
- Caltech, “In a First, Caltech’s Space Solar Power Demonstrator Wirelessly Transmits Power in Space”
- Caltech, “Caltech to Launch Space Solar Power Technology Demo into Orbit in January”
- The Wall Street Journal, “Beaming Solar Energy From Space Gets a Step Closer”
The post Beaming solar power to Earth: feasible or fantasy? appeared first on EDN.
ROHM’s fourth-generation SiC MOSFET chips adopted in three Geely ZEEKR EV models
Oleksandr Myronchuk: The future is born here
The leader of technical education in Ukraine, Igor Sikorsky Kyiv Polytechnic Institute (KPI), actively cooperates with external partners to modernize its teaching and laboratory facilities, and introduces educational programs adapted to the demands of the labor market and high-tech business. For several years, the Radio Engineering Faculty has run the "Datacom" laboratory, equipped with powerful, modern Huawei hardware. It is open not only to radio-engineering students but also to KPI students from other faculties.
EPC Space unveils dynamic cross-reference tool for rad-hard MOSFET device replacement
Coherent unveils CW DFB InP lasers for silicon photonics transceivers
Understanding the Effect of Diode Reverse Recovery in Class D Amplifiers
Hot Chips Heavy Hitter: IBM Tackles Generative AI With Two Processors
🔔 Session of the KPI academic staff
On 29 August, a session of KPI's academic staff will be held in a hybrid format.
📍The ceremonial meeting will take place in the hall of the University's Academic Council, with the participation of faculty deans, directors of educational and scientific institutes, and heads of departments.
Wise-integration forms Hong Kong-based subsidiary to manage Asian business development
USB 3: How did it end up being so messy?
After this blog post’s proposed topic had already been approved, but shortly before I started to write, I realized I’d recently wasted a chunk of money. I’m going to try to not let that reality “color” the content and conclusions, but hey, I’m only human…
Some background: as regular readers may recall, I recently transitioned from a Microsoft Surface Pro 5 (SP5) hybrid tablet/laptop computer:
to a Surface Pro 7+ (SP7+) successor:
Both computer generations include a right-side USB-A port; the newer model migrates from a Mini DisplayPort connector on that same side (and above the USB-A connector) to a faster and more capable USB-C replacement.
Before continuing with my tale, a review: as I previously discussed in detail six years ago (time flies when you’re having fun), bandwidth and other signaling details documented in the generational USB 1.0, USB 2.0, USB 3.x and still embryonic USB4 specifications are largely decoupled from the connectors and other physical details in the USB-A, USB-B, mini-USB and micro-USB, and latest-and-greatest USB-C (formally: USB Type-C) specs.
The signaling and physical specs aren’t completely decoupled, mind you; some USB speeds are only implemented by a subset of the available connectors, for example (I’ll cover one case study here in a bit). But the general differentiation remains true and is important to keep in mind.
Back to my story. In early June, EDN published my disassembly of a misbehaving (on MacOS, at least) USB flash drive. The manufacturer had made the following performance potential claims:
USB 3.2 High-Speed Transmission Interface
Now there is no reason to shy away from the higher cost of the USB 3.2 Gen 1 interface. The UV128 USB flash drive brings the convenience and speed of premium USB drives to budget-minded consumers.
However, benchmarking showed that it came nowhere close to 5 Gbps baseline USB 3.x transfer rates, far from the even faster 10 and 20 Gbps speeds documented in newer spec versions:
What I didn’t tell you at the time was that the results I shared were from my second benchmark test suite run-through. The first time I ran Blackmagic Design’s Disk Speed Test, I had connected the flash drive to the computer via an inexpensive (sub-$5 inexpensive, to be exact) multi-port USB 3.0 hub intermediary.
The benchmark suite ran ridiculously slowly that first time; in retrospect, I wish I had grabbed a screenshot then, too. In trying to figure out what had happened, I noticed (after doing a bunch of research; why Microsoft obscures this particular detail is beyond me) that its USB-C interface is specified for USB 3.2 Gen 2 10 Gbps speeds. Here’s the point where I over-extrapolated; I assumed (incorrectly, in retrospect) that the USB-A port was managed by the same controller circuitry and was therefore capable of 10 Gbps speeds, too. And indeed, direct-connecting the flash drive to the system’s USB-A port delivered (modestly) faster results:
But since this system only includes a single integrated USB-A port, I’d still need an external hub for ongoing use. So, I dropped (here’s the “wasted a chunk of money” bit) $40 each, nearly a 10x price increase over those inexpensive USB 3.0 hubs I mentioned earlier, on the only 10 Gbps USB-A hub I could find, Orico’s M3H4-G2:
I bought three of them, actually: one for the SP7+, one for my 2018 Mac mini, and the third for my M1 Max Mac Studio. All three systems spec 10 Gbps USB-C ports; those in the latter two systems do double duty with 40 Gbps Thunderbolt 3 or 4 capabilities. Unlike its humble Idsonix precursor, the Orico M3H4-G2 isn’t bus-powered over the USB connection; I had to provide it with external power for it to function, though at least Orico bundled a wall wart with it. And the M3H4-G2’s orange-dominant paint job was an…umm…“acquired taste”. But all in all, I was still feeling pretty pleased with my acquisition…
…until I went back and re-read that Microsoft-published piece, continuing a bit further in it than I had before, whereupon I found that the SP7+ USB-A port was only specified at 5 Gbps. A peek at the Device Manager report also revealed distinct entries for the USB-A and USB-C ports:
Unfortunately, my MakerHawk Makerfire USB tester only measures power, not bandwidth, so I’m going to need to depend on the Microsoft documentation as the definitive ruling.
And, of course, when I went back to the Mac mini and Mac Studio product sheets, buried in the fine print was indication that their USB-A ports were only 5 Gbps, too. Sigh.
So, what had happened the first time I tried running Blackmagic Design’s Disk Speed Test on the SP7+? My root-cause guess is a situation that I suspect at least some of you have also experienced: plug in a USB 3.x peripheral, and it incorrectly enumerates as a USB 1.0 or USB 2.0 device instead. Had I just ejected the flash drive from the USB 3.0 hub, reinserted it, and re-run the benchmarks, I suspect I would have ended up with the exact same result I got from plugging it directly into the computer, saving myself $120 plus tax in the process. Bitter? Who, me?
Here’s another thought you might now be having: why does the Orico M3H4-G2 exist at all? Good question. To be clear, USB-A optionally supports 10 Gbps USB 3 speeds, as does USB-C; the only USB-C-specific speed bin is 20 Gbps (for similar reasons, USB4 is also USB-C-only from a physical implementation standpoint). But my subsequent research confirmed that my three computers weren’t aberrations; pretty much all computers, even latest-and-greatest ones and both mobile and desktop, are 5 Gbps-only from a USB-A standpoint. Apparently, the suppliers have decided to focus their high-speed implementation attention solely on USB-C.
That said, I did find one add-in card, Startech’s PEXUSB311AC3, that implemented 10 Gbps USB-A:
I’m guessing there might also be the occasional motherboard out there that’s 10 Gbps USB-A-capable, too. You could theoretically connect the hub to a 10 Gbps USB-C system port via a USB-C-to-USB-A adapter, assuming the adapter can do 10 Gbps bidirectional transfers, too (I haven’t yet found one). And of course, two 10 Gbps USB-A-capable peripherals, such as a couple of SSD storage devices, can theoretically interact with each other through the Orico hub at peak potential speeds. But suffice it to say that I now more clearly understand why the M3H4-G2 is one-of-a-kind and therefore pricey, both in an absolute sense and versus 5 Gbps-only hub alternatives.
1,000+ words in, what’s this all have to do with the “Why is USB 3 so messy” premise of this piece? After all, the mistake was ultimately mine in incorrectly believing that my systems’ USB-A interfaces were capable of faster transfer speeds than reality afforded. The answer: go back and re-scan the post to this point. Look at both the prose and photos. You’ll find, for example:
- A USB flash drive that’s variously described as being “USB 3.0” and with a “USB 3.2 Gen 1” interface and a “USB 3.2 High-Speed Transmission Interface”
- An add-in card whose description includes both “10 Gbps” and “USB 3.2 Gen 2” phrases
- And a multi-port hub that’s “USB 3.1”, “USB 3.1 Gen2” and “10Gbps Super Speed”, depending on where in the product page you look.
What I wrote back in 2018 remains valid:
USB 3.0, released in November 2008, is once again backwards compatible with USB 1.x and USB 2.0 from a transfer rate mode(s) standpoint. It broadens the pin count to a minimum of nine wires, with the additional four implementing the two differential data pairs (one transmitter, one receiver, for full duplex support) harnessed to support the new 5 Gbps SuperSpeed transfer mode. It’s subsequently been renamed USB 3.1 Gen 1, commensurate with the January 2013 announcement of USB 3.1 Gen 2, which increases the maximum data signaling rate to 10 Gbps (known as SuperSpeed+) along with reducing the encoding overhead via a protocol change from 8b/10b to 128b/132b.
Even more recently, in the summer of 2017 to be exact, the USB 3.0 Promoter Group announced two additional USB 3 variants, to be documented in the v3.2 specification. They both leverage multi-lane operation over existing cable wires originally intended to support the Type-C connector’s rotational symmetry. USB 3.2 Gen 1×2 delivers a 10 Gbps SuperSpeed+ data rate over 2 lanes using 8b/10b encoding, while USB 3.2 Gen 2×2 combines 2 lanes and 128b/132b encoding to support 20 Gbps SuperSpeed+ data rates.
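The encoding change matters for real-world throughput: 8b/10b burns 20% of the line rate on overhead, while 128b/132b burns only about 3%. A minimal calculation of payload bandwidth from those figures:

```python
def effective_gbps(line_rate_gbps, data_bits, total_bits, lanes=1):
    """Payload bandwidth after line-code overhead, per the encoding ratios above."""
    return line_rate_gbps * lanes * data_bits / total_bits

usb30  = effective_gbps(5, 8, 10)             # USB 3.0 SuperSpeed, 8b/10b
gen2   = effective_gbps(10, 128, 132)         # Gen 2 SuperSpeed+, 128b/132b
gen1x2 = effective_gbps(5, 8, 10, lanes=2)    # Gen 1x2: two 8b/10b lanes
print(usb30, round(gen2, 2), gen1x2)          # 4.0 9.7 8.0
```

Note the consequence: a "10 Gbps" Gen 2 link actually carries more payload (~9.7 Gbps) than a "10 Gbps" Gen 1×2 link (8.0 Gbps), even though both wear the same headline number.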
But a mishmash of often incomplete and/or incorrect terminology, coupled with consumers’ instinctive interpretation that “larger numbers are better”, has severely muddied the waters as to what exactly a consumer is buying and therefore should expect to receive with a USB 3-based product. In fairness, the USB Implementers Forum would have been perfectly happy had its member companies and compatibility certifiers dispensed with the whole numbers-and-suffixes rigamarole and stuck with high-level labels instead (40 Gbps and 80 Gbps are USB4-specific):
That said:
- 5 Gbps = USB 3.0, USB 3.1 Gen 1, and USB 3.2 Gen 1 (with “Gen 1” implying single-lane operation even in the absence of an “x” lane-count qualifier)
- 10 Gbps = USB 3.1 Gen 2, USB 3.2 Gen 2 (with the absence of an “x” lane-count qualifier implying single-lane operation), and USB 3.2 Gen 2×1 (the more precise alternative)
- 20 Gbps = USB 3.2 Gen 2×2 (only supported by USB-C).
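The naming equivalences above reduce to a small lookup table, which is a handy way to normalize the marketing labels you'll encounter on product pages (the map below simply encodes the list above):

```python
# Marketing name -> raw signaling rate (Gbps), per the equivalences listed above.
USB3_SPEEDS = {
    "USB 3.0":         5,
    "USB 3.1 Gen 1":   5,
    "USB 3.2 Gen 1":   5,
    "USB 3.1 Gen 2":  10,
    "USB 3.2 Gen 2":  10,
    "USB 3.2 Gen 2x1": 10,
    "USB 3.2 Gen 2x2": 20,  # USB-C only
}

# Three different names, one and the same 5 Gbps interface:
assert USB3_SPEEDS["USB 3.0"] == USB3_SPEEDS["USB 3.1 Gen 1"] == USB3_SPEEDS["USB 3.2 Gen 1"]
```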
So, what, for example, does “10 Gbps USB 3” mean? Is it a single-lane USB 3.1 device, with that one lane capable of 10 Gbps speed? Or is it a dual-lane USB 3.2 device with each lane capable of 5 Gbps speeds? Perhaps obviously, try to connect devices representing both these 10 Gbps implementations together and you’ll end up with…5 Gbps (cue sad trombone sound).
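That sad-trombone outcome follows from how the link is limited by what both ends share. A simplified model (real USB link training is more involved, but the lane-count and per-lane-rate intersection captures the point):

```python
def negotiated_gbps(a_lanes, a_per_lane_gbps, b_lanes, b_per_lane_gbps):
    """Link speed is bounded by the common lane count and the slower per-lane rate."""
    lanes = min(a_lanes, b_lanes)
    per_lane = min(a_per_lane_gbps, b_per_lane_gbps)
    return lanes * per_lane

# A Gen 2x1 host (1 lane at 10 Gbps) meets a Gen 1x2 device (2 lanes at 5 Gbps):
print(negotiated_gbps(1, 10, 2, 5))  # 5 -- two "10 Gbps" products, one 5 Gbps link
```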
So, like I said, what a mess. And while I’d like to think that USB4 will fix everything, a brief scan of the associated Wikipedia page leaves me highly skeptical. If anything, I fear that the situation will end up even worse. Let me know your thoughts in the comments.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- USB: Deciphering the signaling, connector, and power delivery differences
- An O/S-fussy USB flash drive
- A deep dive inside a USB flash drive
- USB Power Delivery: incompatibility-derived foibles and failures
- Cutting into a conventional USB-C charger
- Checking out a USB microphone
The post USB 3: How did it end up being so messy? appeared first on EDN.