Vehicle emissions: Issues and workarounds for various monitoring conditions

In March 2022, within a broader-treatment post on automobile owners’ rights to repair their own vehicles, I briefly introduced the topic of the on-board diagnostics (OBD) port, used to monitor (and in some cases, as you’ll soon see, manipulate) vehicle status information:
I dove into the topic in greater detail earlier this year, within the teardown of an OBD-II scanner:
Here again, by means of (re-)introduction, is the summary definition from Wikipedia’s entry:
On-board diagnostics (OBD) is a term referring to a vehicle’s self-diagnostic and reporting capability. OBD systems give the vehicle owner or repair technician access to the status of the various vehicle sub-systems. The amount of diagnostic information available via OBD has varied widely since its introduction in the early 1980s versions of on-board vehicle computers. Early versions of OBD would simply illuminate a malfunction indicator light (MIL) or “idiot light” if a problem was detected, but would not provide any information as to the nature of the problem. Modern OBD implementations use a standardized digital communications port to provide real-time data in addition to a standardized series of diagnostic trouble codes, or DTCs, which allow a person to rapidly identify and remedy malfunctions within the vehicle.
Likely unsurprisingly, at least to some of you, OBD monitoring finds use for (among other things) assessing a vehicle’s emissions status, including but not limited to factors such as the catalytic converter’s effectiveness and whether (and if so, to what degree and with what results) the vehicle’s periodic emissions auto-checking has completed. Over time, I’ve gotten more adept at this particular OBD monitoring “angle”, out of necessity and aided by equipment acquisitions.
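If you're curious what that monitoring looks like one level down, here's a minimal sketch of a DTC read over one of the ubiquitous ELM327-style OBD-II adapters, which typically enumerate as serial ports. The device path, baud rate, and pyserial plumbing are illustrative assumptions on my part, not specifics from any of the gear discussed here:

```python
# Minimal sketch: read stored diagnostic trouble codes (DTCs) through an
# ELM327-compatible OBD-II adapter that enumerates as a serial port.
# The port name and timing are illustrative; adjust for your adapter/OS.
import serial

def elm_command(port: serial.Serial, cmd: str) -> str:
    """Send one command and read until the adapter's '>' prompt (or timeout)."""
    port.write((cmd + "\r").encode("ascii"))
    response = b""
    while not response.endswith(b">"):
        chunk = port.read(1)
        if not chunk:          # serial timeout: give up with what we have
            break
        response += chunk
    return response.decode("ascii", errors="replace")

def decode_dtc(four_hex_chars: str) -> str:
    """Convert a raw 2-byte DTC (e.g., '0140') to its P/C/B/U form (e.g., 'P0140')."""
    first = int(four_hex_chars[0], 16)
    system = "PCBU"[first >> 2]   # top two bits select the system letter
    digit1 = first & 0x3          # next two bits are the code's first digit
    return f"{system}{digit1}{four_hex_chars[1:]}"

with serial.Serial("/dev/ttyUSB0", 38400, timeout=2) as obd:
    elm_command(obd, "ATZ")       # reset the adapter
    elm_command(obd, "ATE0")      # echo off, for cleaner parsing
    raw = elm_command(obd, "03")  # OBD-II mode 03: request stored DTCs
    print("Raw mode-03 response:", raw.strip())
    # A reply like '43 01 01 40' would decode to one stored code, P0140,
    # coincidentally the very oxygen-sensor code discussed later in this piece.
```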
As background: my wife and I are blessed to own two more vehicles (both of which, along with my 2008 Volvo XC70 primary “wheels”, came to the marriage via yours truly) than fit inside our two-car garage. They’re both rare “classics” (IMHO, at least), and all three cars are low mileage and in great condition, thereby explaining my reluctance to part with any of them. One “extra” is a 2001 Volkswagen Eurovan Camper (a Winnebago conversion, not the Westfalia model):
The other’s a 2006 Jeep Wrangler Rubicon Unlimited, complete with both hard and soft tops:
Since our homeowners’ association (HOA) is a stickler for vehicles permanently parked on the street or in driveways, we’ve been storing the two “extras” under thick weather-resistant covers at a nearby outdoor lot.
The Eurovan Camper was my “moving van” for pets (two dogs and two cats) and other important possessions when I relocated part-time (at the time) from California to Colorado. Later, about a decade ago, I took a road trip back to CA, because the vehicle was still registered there and was due for periodic emissions testing, with passing as a condition of renewal.
Just outside Reno, NV, and less than an hour east of my destination, I stopped to fill up the gas tank. Only a few miles further down the road, the “check engine” light in the dashboard went on (although the engine thankfully remained running smoothly) and stayed on. This was a problem, particularly considering the primary motivation for the road trip in the first place. At least in both CA and CO, emissions testing requests are summarily rejected if the “check engine” alert, which reflects stored OBD Diagnostic Trouble Codes (DTCs), is illuminated at any point during the tests (not to mention whatever foundational issue(s) triggered the alert).
The local mechanic told me that I’d probably gotten some “bad gas” and that I just needed to drive the VW around for a while, at various speeds both steady and time-variant, to:
- Burn through the “bad gas” (I’d then replace it with hopefully higher quality stuff), and
- Allow the vehicle’s various diagnostics tests to run again and issue an OBD “all-clear”.
He explained that these tests (commonly referred to as a “drive cycle”) auto-ran under different operating conditions, as well as after varying numbers of ignition key start sequences, hence the need for driving-behavior diversity. I’d honestly (and likely ignorantly) not heard of such a thing before, but he seemed like he knew what he was talking about, so I followed his instructions with as much driving variety as I could muster. And he was right; a couple of days later the “check engine” light went off and stayed off, and I still had time to get the van emissions-tested (it thankfully passed) prior to the deadline for my return to CO.
Fast forward to last year. As previously mentioned, I store the vehicles in an outdoor lot. And they’re due for emissions testing (an every-two-year requirement, complete with time on a dynamometer, given their ages) on alternating years. I’ve also historically kept their batteries disconnected when parked, since I don’t have AC outlet access for trickle chargers and the batteries would therefore otherwise slowly-but-surely drain. So, every year, I go to the lot, remove both vehicles’ covers, hook up their batteries and jump-start as needed, reorder them:
and drive the now-in-front vehicle a couple of miles down the road for emissions testing.
Last year, that pattern no longer panned out (why it took until then, given that I’d been following this procedure for nearly a decade, is something I still haven’t sorted out). The guy at the testing site told me not that the Jeep had failed emissions, but that the testing hadn’t completed. He then handed me a report, stamped “Incomplete” at the top, along with an informative pamphlet from the state (apparently this happens often). A few report excerpts:
Readiness Monitors:
- Catalytic Converter: Ready
- EGR System: Not Supported
- Evaporative: Not Ready
- Oxygen Sensor: Ready
- Oxygen Sensor Heater: Not Ready
- Air System: Not Supported
- Air Conditioning: Not Supported
- Heated Catalyst: Not Supported
The vehicle’s OBD system has not yet completed its self evaluation. The OBD system must be working properly and the supported readiness monitors set to “Ready” before attempting another OBD test.
Readiness Monitor Information
There are several readiness monitors that allow the vehicle’s OBD system to identify potential emissions problems. The following tips will help ensure the vehicle’s OBD system is prepared for the emissions inspection:
- Make sure the Check Engine light is not illuminated, while the engine is running.
- If this vehicle recently had emissions-related repairs, or if its diagnostic trouble codes were cleared, or if its battery was disconnected, then the vehicle must complete a drive cycle for its readiness monitor status to be set to “Ready”.
You can find a list of different vehicle manufacturers’ drive cycles at colorado.gov/cdphe/drive-cycles [editor note: I can’t actually find them there, only general information, although maybe I overlooked something]. For most vehicles, driving it for a few days in both city and highway conditions will set the readiness monitors. Contact your vehicle manufacturer, a qualified service technician, or a State Emissions Technical Center for more information.
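As an aside for the protocol-curious: the “Ready”/“Not Ready”/“Not Supported” flags in the report above come back from the vehicle as a packed bitfield in response to a single standardized request, OBD-II mode 01, PID 01. Here's a rough sketch of the decode for spark-ignition engines, based on publicly documented SAE J1979 bit assignments; treat the layout as illustrative and verify against the spec before relying on it:

```python
# Rough sketch: decode the readiness portion of an OBD-II mode 01, PID 01
# response (bytes A-D). Bit layout follows public SAE J1979 summaries;
# verify against the spec before relying on it.
def decode_readiness(a: int, b: int, c: int, d: int) -> dict:
    result = {
        "mil_on": bool(a & 0x80),   # byte A bit 7: check-engine light status
        "dtc_count": a & 0x7F,      # byte A bits 0-6: stored DTC count
        "monitors": {},
    }
    # Byte B covers the continuous monitors: bits 0-2 flag support,
    # bits 4-6 flag a still-incomplete test for the same monitor.
    continuous = ["Misfire", "Fuel system", "Comprehensive components"]
    for bit, name in enumerate(continuous):
        if b & (1 << bit):
            incomplete = bool(b & (1 << (bit + 4)))
            result["monitors"][name] = "Not Ready" if incomplete else "Ready"
    # For spark-ignition engines, byte C flags which non-continuous monitors
    # are supported; the matching bit in byte D is 1 while the test is still
    # INCOMPLETE ("Not Ready") and 0 once it has run to completion.
    spark_monitors = [
        "Catalyst", "Heated catalyst", "Evaporative system",
        "Secondary air system", "A/C refrigerant", "Oxygen sensor",
        "Oxygen sensor heater", "EGR system",
    ]
    for bit, name in enumerate(spark_monitors):
        if not c & (1 << bit):
            result["monitors"][name] = "Not Supported"
        else:
            incomplete = bool(d & (1 << bit))
            result["monitors"][name] = "Not Ready" if incomplete else "Ready"
    return result

# Example: MIL off, no stored codes, evaporative and oxygen-sensor-heater
# monitors still incomplete -- much like the Jeep's "Incomplete" report.
print(decode_readiness(0x00, 0x07, 0x65, 0x44))
```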
And here are scans of the various pages of the accompanying pamphlet (yellow highlighter markings on page 2 are courtesy of my new testing-center friend):
Remember how I earlier mentioned that I’d kept the vehicles’ batteries disconnected when parked, to prevent them from draining? Apparently, some of that discharge had been going toward battery-backing, whenever the vehicle wasn’t running, whatever volatile memory (SRAM, I assume) stores the most recent drive-cycle results. No battery = no more data.
The Jeep, as I’ve already mentioned, had been sitting outside essentially unused for many years, always covered (and with tires elevated above the ground, keeping them in good shape too) but still exposed to the elements (and rodents) from underneath. I was concerned about (among other things) belts and hoses that may have dried and cracked, not to mention chewed wiring, so my first step was to take it to a local mechanic for a once-over. Good news: it was still in great shape. Bad news: he couldn’t get all the drive cycle tests to complete.
So, I drove it around for a few days, then took it back to the local mechanic, who did another OBD scan (I hadn’t yet bought my own unit) and confirmed it still hadn’t completed all the tests. The “oxygen sensor heater” test was the one still “stuck”, and I’d done research indicating that the root cause might be a wiring fault covered by the manufacturer under lifetime warranty, so my next visit was to a nearby Jeep dealer’s service depot. Fortunately, there was one mechanic there who was old enough to have had experience with 2006-era vehicles, and he assured me that I just hadn’t yet covered the exact driving scenario necessary for it to run. Finally, it reported “ready”, and my return visit to the emissions testing site was also successful.
Fast forward to this year. It was now the Eurovan Camper’s turn in the emissions testing spotlight. Remember this quote from my September OBD-II scanner teardown?
The OBD-II scanner that I currently own (for reasons that I’ll save for another blog post another day) is an Autel AutoLink AL519, which I bought last November from an eBay retailer for $59.98.
This is the “another blog post, another day” I was alluding to.
Two years ago, when driving the vehicle away from the center after a successful test, the “check engine” light had turned on, so I knew I’d need to be proactive this time. After a few dead-ends, I eventually found an excellent local specialty repair shop, which did a full refurb on the vehicle, both mechanically and cosmetically. I pulled away armed with guidance to drive around for a few days to give it a proper drive cycle test opportunity and clear out any remaining “old” gas along with the fuel-system cleaning additive the shop had put in, then refill it with “fresh” premium fuel before visiting the emissions testing center.
Two days later, while still driving the Eurovan Camper around on its “old” gas, the “check engine” light came on. When I got home, I pulled the OBD reader out of its box, updated its firmware and database to the latest-and-greatest, and plugged it into the OBD port in the underside of the steering column. Here’s what I saw:
Back to my mechanic, who confirmed the failed rear oxygen sensor diagnosis. He also installed the dashboard-mounted, OBD-based ScanGauge II that I previously purchased at his suggestion:
which both presents in greater detail the vehicle status that the dashboard’s gauges and lower-precision warning lights already convey (engine temperature and RPM, plus battery and alternator status, for example) and reports over-and-above information the dashboard omits entirely (transmission temperature, for example, particularly important in a heavy vehicle such as this, and gas mileage).
I left the mechanic’s shop with the “check engine” light off again…and the next day it was back on. Fortunately, the ScanGauge II, like my AutoLink AL519 albeit more conveniently, also supports not only reading and decoding but also clearing OBD diagnostic trouble codes. A few front-panel button presses and the “check engine” light was back off. At that point, with the registration-renewal expiration date looming, I figured it was time for me to fill up the van with fresh fuel and get down to the testing center. On the way there, the light came back on, and it also re-illuminated on the way back home afterwards. But in-between, the vehicle passed.
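That clear operation, for reference, is itself a single standardized request: OBD-II mode 04 erases stored DTCs and freeze-frame data, and, critically given everything above, it also resets the readiness monitors to “Not Ready” until another drive cycle completes. A minimal sketch, reusing the same hypothetical serial-adapter setup as before:

```python
# Continuing the earlier ELM327 sketch: OBD-II mode 04 clears stored DTCs
# and freeze-frame data -- and resets the readiness monitors, so don't do
# this right before an emissions test. Same hypothetical serial setup.
import serial

with serial.Serial("/dev/ttyUSB0", 38400, timeout=2) as obd:
    obd.write(b"04\r")                      # mode 04: clear/reset emissions data
    ack = obd.read(64).decode("ascii", errors="replace")
    print("Adapter replied:", ack.strip())  # a '44' response indicates success
```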
I’m still not sure why I’m getting intermittent P0140 DTCs, especially since the vehicle’s running smoothly and per its test results isn’t exceeding any emissions test limits. My best guess at this point is faulty wiring; I’ll be taking it back to the shop shortly for another diagnostics scrub. But thankfully, I’m “good” from an emissions perspective for another couple of years, in no small part thanks to the assistive knowledge and gear I’ve accumulated, along with the regulatory requirements that have ensured my access to the vehicle information I needed to sort out and resolve the issues I came across. If you’ve encountered and dealt with similar situations in the past, I look forward to hearing about them in the comments.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Disassembling an OBD-II scanner
- Teardown: OBD-II Bluetooth adapter
- Runaway acceleration
- The headlights and turn signal design blunder
The post Vehicle emissions: Issues and workarounds for various monitoring conditions appeared first on EDN.
Three key trends for EDA in the cloud in 2024

No longer a curiosity or even a fad, cloud is now deeply entrenched in the operations of an array of industries. With its scalability, flexibility, and elasticity, it simply makes good business sense to embrace cloud computing—especially in the semiconductor industry. Last year, when I looked into the crystal ball, I envisioned several events taking shape this year.
From the mainstreaming of the cloud for peak use models to increased adoption by chip designers, expanded use by semiconductor companies with large data center investments, diminished supply chain issues, and the emergence of verification as a key cloud workload for EDA, these predictions have largely been realized in 2023.
What will 2024 bring to the cloud landscape?
Looking ahead, there are three key trends taking shape now that will likely grow in prevalence in 2024:
- Design-centric AI models will increasingly be run in the cloud.
- Chip design teams will continue to seek greater cost optimizations for cloud-based EDA workloads.
- Cloud will drive players in the EDA ecosystem to collaborate more closely.
Let’s take a deeper look at each of these trends to understand why they are poised to make a bigger impact in the coming year.
- AI models and the cloud
AI is everywhere these days, from the edge devices you might have in your home to large-scale modeling systems used in science, medicine, finance, and many other sectors. The AI and machine learning models that generate valuable insights require highly complex chips to deliver the high bandwidth, low latency, and low power that make many of these applications feasible.
Designing and verifying the chips in the cloud with pay-per-use models enables engineers to tap into EDA tools, compute resources, and storage options they need when they need them. This flexibility can lower overall costs and simplify the process for getting these chip design and verification solutions up and running. As cloud-based EDA adoption rose in 2023, we can only see this trend accelerate in the new year, particularly as AI continues to demand more from the underlying chips.
Increasingly, AI capabilities are being integrated into EDA solutions, enabling them to take on not only the repetitive tasks within massive workloads but also those that are impossible for humans to accomplish in the timeframes needed. This enables engineers to focus on more value-added tasks like product differentiation, while resulting in better quality-of-results, time-to-results, and cost-of-results. At the same time, this also drives further multiplicative requirements for flexible access to compute resources.
Figure 1 Engineers can now use AI at every stage of chip design, from system architecture to design and manufacturing. Source: Synopsys
In 2023, we started to see more instances of AI-driven EDA tools running in the cloud. Cloud-based solutions for tasks such as design space exploration demonstrated their ability to increase exploration productivity while exceeding power, performance, and area (PPA) goals. We can expect that in 2024, adoption of cloud-based, AI-driven EDA solutions will continue to rise.
- Optimizing cost of cloud-based solutions
As more chipmakers design and verify on the cloud, their interest in optimizing costs will continue to grow. For companies without the resources to maintain an on-premises infrastructure for EDA workflows, the cloud presents an attractive option. For companies at the other end of the spectrum, cost-optimized cloud solutions can make the move worthwhile. The predominant approaches for cloud deployment offer high levels of cost flexibility:
- Software-as-a-service (SaaS) models eliminate the time and overhead of building and maintaining infrastructure or managing license servers.
- Bring-your-own-cloud (BYOC) models allow teams to work with their established cloud provider, accessing EDA tools through pay-per-use pricing.
Figure 2 With the SaaS model, design engineers have a single contact for all EDA tools and services. Source: Synopsys
Figure 3 With the BYOC model, design engineers can maintain control over their cloud environment while taking advantage of a variety of EDA tools and services. Source: Synopsys
However, there’s always room for additional cost optimizations. One effective avenue to lower costs is by deploying spot virtual machines. A spot virtual machine (VM) stems from excess capacity of specific compute VMs that cloud providers make available at heavily discounted prices when demand at a given moment does not meet their capacity projections. The challenge with running EDA workloads on spot VMs is that they can be removed on short notice.
As such, EDA workloads need to be able to recover from a spot VM termination signal to avoid lost processing time when a job has been running for a while. Checkpoint restore functionality built into the tools mitigates this challenge. Then there are the EDA jobs with high memory workloads—such as physical verification or an RTL-to-gate implementation—where the runtime state can be several hundred gigabytes.
In these cases, since the time needed to checkpoint can be much longer than the cloud provider’s spot warning window, the job would get terminated without the ability to restore and without saving the runtime state. AI-driven technologies that utilize termination signal predictions to manage EDA workloads between spot and on-demand VMs have emerged to address this challenge.
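As one concrete illustration of the recovery plumbing involved, consider AWS, where a spot instance advertises its pending reclamation through the instance metadata service roughly two minutes in advance. A job wrapper can poll that endpoint and trigger a checkpoint when the notice appears. Here's a minimal sketch; the metadata URL is AWS's documented one (assuming IMDSv1-style access, since IMDSv2 requires a session token first), while the checkpoint() function is a placeholder for whatever save-state hook a given EDA tool actually exposes:

```python
# Minimal sketch: poll the AWS spot-instance interruption notice and fire a
# checkpoint callback before the VM is reclaimed. Assumes IMDSv1-style
# metadata access; checkpoint() stands in for the tool's real save-state hook.
import time
import urllib.error
import urllib.request

INSTANCE_ACTION_URL = (
    "http://169.254.169.254/latest/meta-data/spot/instance-action"
)

def interruption_pending() -> bool:
    """True once AWS has scheduled this spot instance for reclamation."""
    try:
        with urllib.request.urlopen(INSTANCE_ACTION_URL, timeout=1) as resp:
            return resp.status == 200    # body carries the action and its time
    except urllib.error.HTTPError:
        return False                     # 404 means no interruption scheduled
    except urllib.error.URLError:
        return False                     # not on EC2, or metadata unreachable

def checkpoint() -> None:
    """Placeholder: persist the tool's runtime state to durable storage."""
    print("Interruption notice received -- checkpointing job state...")

if __name__ == "__main__":
    while True:
        if interruption_pending():
            checkpoint()
            break
        time.sleep(5)                    # the notice arrives ~2 minutes ahead
```

As the article notes, the catch is that a multi-hundred-gigabyte checkpoint may not fit inside that two-minute window, which is exactly the gap the prediction-based spot/on-demand schedulers aim to close.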
- Increasing collaboration across the semiconductor ecosystem
While EDA vendors may like to think that their customers use their flows exclusively, the reality is that customers choose the solutions that they feel are ideally suited for their designs. Often, this means they’ve got a mix of solutions from multiple EDA vendors, IP providers and foundries in their flow. As chip designers move their work to the cloud, they don’t want to get locked into a pre-defined tool flow.
Instead, they want to enable the same flows they have implemented and perfected over time—which may consist of an array of different chip design and verification solutions, and IP from different vendors—in their cloud environment. These dynamics are pushing the players in the EDA ecosystem to collaborate more closely than ever in the interest of optimizing the customer experience.
Driving business success
As the cloud becomes increasingly integral to the semiconductor industry, the key players will continue seeking ways to optimize its implementation in all the ways that will help drive business success. From the use of AI to design and verify chips to the deployment of spot virtual machines for cost savings and increased ecosystem collaboration to drive seamless interoperability on the cloud, the future is shaping up to be a bright one.
Bottom line, anything that can help make it a little easier to develop today’s complex chips holds promise for the innovation that this industry needs to thrive.
Vikram Bhatia is head of Cloud Product Management & GTM Strategy at Synopsys.
Related Content
- A bright outlook for EDA in the cloud
- Bespoke EDA Differentiates Silicon Chips
- EDA in the Cloud Will be Key to Rapid Innovative SoC Design
- 4 basic considerations in migrating to cloud-based EDA tools
- How EDA workloads inside the cloud reinvigorate chip design
The post Three key trends for EDA in the cloud in 2024 appeared first on EDN.
A holiday shopping guide for engineers: 2023 edition

EDN has now published my odes to holiday-excused consumerism for a half-decade straight: here are the 2019, 2020, 2021 and 2022 editions (I skipped a few years between the 2014 edition and its successors). As in the past, I’ve included up-front links to the prior-year versions because I’ve done my best here to not regurgitate any past product category recommendations; the stuff I’ve previously suggested largely remains valid, after all. That said, it gets harder and harder each year not to repeat myself!
Without any further ado, and ordered solely in the order in which they initially came out of my cranium…
Bite into the Raspberry Pi 5
After just telling you that I wasn’t going to repeat myself, I’m going to go back on what I said…sorta. Three years ago, I suggested you pick up a Raspberry Pi 4 development board, which I’d previously covered (as the latest addition to the generational product family) earlier that same year. The Raspberry Pi 4 had been introduced in June 2019, and I snagged a 4 GByte one as soon as available in early 2020…a good thing, as it turns out, because shortly thereafter the world plunged into pandemic lockdown and supply evaporated. What I could have resold my board for back then, if I’d been motivated to do so…what’s that expression, “greed is good”?
Thankfully, component supply has subsequently rebounded, leading to a tangible board assembly restart beginning earlier this year. At this point, in fact, availability has sufficiently rebounded that I occasionally even see current-generation boards (and kits based on them) on sale…and the Raspberry Pi Foundation is also now comfortable talking about next-generation offerings. I’m referring, of course, to the Raspberry Pi 5, unveiled in late September and scheduled to be available by the time you read this, priced at $60 (4GByte) and $80 (8GByte):
In transitioning from the Raspberry Pi 4’s Broadcom BCM2711 SoC, fabricated on a 28 nm process, to the next-generation 16 nm BCM2712, the Raspberry Pi 5 makes several notable architectural advancements, which are reflected in (among other things) significant early benchmark results (both absolute and relative):
- A 64-bit CPU core cluster evolution from the quad-core Arm Cortex-A72 running at 1.5 GHz (later 1.8 GHz) to a quad-core 2.4 GHz Arm Cortex-A76 with 512KByte per-core L2 caches and a 2MByte shared L3 cache. Here’s how they compare, per Arm (PDF).
- An upgraded GPU core, the VideoCore VII, developed by the Raspberry Pi Foundation
- A subdivision of functions formerly integrated fully within the application processor (the BCM2711 in the case of the Raspberry Pi 4), now allocated between the BCM2712 and a separate Raspberry Pi Foundation-designed I/O chip, the RP1, fabricated on TSMC’s more mature (translation: cost-effective) 40LP process.
- A new imaging signal processor (ISP), subdivided between the BCM2712 and RP1, and also designed by Raspberry Pi Foundation personnel.
- A newly developed (by Renesas, with Raspberry Pi Foundation assistance) PMIC (power management IC), the DA9091, and
- Tweaks to the board layout that obsolete some Hardware Attached on Top (HAT) add-on boards and open the doors to other new ones.
My Raspberry Pi 5 is on order, and hopefully it’ll be in hand by the time you all read this. One upfront heads-up; per early reviews, the board runs hot (in spite of its inherent higher efficiency versus its predecessor, apparently counterbalanced by higher performance) and the Active Cooler (or alternatively, a fan-inclusive case) should therefore generally be treated as a requirement, versus optional.
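If you pick one up and want to verify whether heat is actually costing you performance, Raspberry Pi OS ships the vcgencmd utility, which reports the SoC temperature and throttling flags. Here's a quick sketch of a check script; the bit meanings follow the Raspberry Pi documentation for get_throttled:

```python
# Quick sketch: query a Raspberry Pi's SoC temperature and throttle flags via
# the vcgencmd utility that ships with Raspberry Pi OS. Bit meanings follow
# the documented get_throttled output; run this on the Pi itself.
import subprocess

def vcgencmd(arg: str) -> str:
    return subprocess.run(
        ["vcgencmd", arg], capture_output=True, text=True, check=True
    ).stdout.strip()

temp = vcgencmd("measure_temp")            # e.g., "temp=67.8'C"
flags = int(vcgencmd("get_throttled").split("=")[1], 16)

print(f"SoC {temp}")
print("Currently throttled:     ", bool(flags & 0x4))      # bit 2
print("Under-voltage detected:  ", bool(flags & 0x1))      # bit 0
print("Throttling has occurred: ", bool(flags & 0x40000))  # bit 18
```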
Learn from an NVIDIA Jetson Developer Kit
Back in November 2020, I did a writeup on Google’s various AIY (“Artificial-Intelligence-Yourself,” a play on DIY, i.e., “Do-It-Yourself”) Project Kits, suggesting them as platform paths to learning more about deep learning and broader AI concepts. Three years later, I’m not exactly surprised to revisit the article, click through the various embedded links, and find inventory at Google’s various retail partners either depleted or accompanied by “Discontinued” notices. After all, deep learning and AI remain among the hottest areas of technology innovation, rapidly rendering existing silicon and software solutions obsolete in the process.
To wit, sitting downstairs in storage is an NVIDIA Jetson TX1 developer kit, which my wife bought me in November 2017 (as a that-year Christmas present) for $500 and change from Newegg. Unfortunately, less than a year after I received it, NVIDIA issued an EOL notice on it, and further associated software development also ceased after the JetPack v3.3 toolset release at around that same time (as I type these words, the JetPack SDK is up to v5.1.2).
(I shudder to wonder what the fire extinguisher was doing sitting in-frame in that video)
That said…them’s the breaks, and that’s technology’s pace for you. The Tegra X1 SoC on which the Jetson TX1 was based was introduced nearly a decade ago at the January 2015 CES, after all, with the Jetson TX1 module (and broader-function developer board derived from it) following it that same November. And I can’t deny that, although plenty of other suppliers are gunning for NVIDIA’s current dominance in the space, if you do a Google search on “AI developer board” (or, if you prefer, “deep learning developer board”), the Jetson kit family is consistently at the top of the results list…along with Raspberry Pi boards, ironically, and Google’s AiY successor, Coral.
NVIDIA’s current developer kit lineup (again, as I type these words in mid-October; CES 2024 is coming up soon, after all) consists of three different offerings, with varying silicon foundations, periphery allotments, form factors, price points and the like:
- Jetson Nano (~$199)
- Jetson Orin Nano (shown below, ~$499), and
- Jetson AGX Orin (~$1,999)
Kits are available for purchase both from NVIDIA’s own online store and from retail partners such as (but not limited to) Amazon.
Revisit some technology-tome classics
I don’t know about you, but I spend way too much time during an average day staring at way too many screens: computer monitors and laptop integrated displays, smartphones, tablets, e-book readers and the like, not to mention televisions. When I realize (often in conjunction with a splitting headache) that I’ve overdone it, it feels good to put all those electronics devices away for a while and instead curl up with a good book, both because the paper-based media is comforting and because doing so discourages even more headache-inducing multitasking.
Two years ago, I suggested you might want to subscribe to an engineering journal (or few). This time, I’m going to offer up a list of classic (IMHO, at least) technology-themed books, most (but not all) of which currently reside on my bookshelves or in my Kindle library. Some are oldie-but-goodies, others are more recently published, and the list that follows (ordered by primary author’s last name) certainly isn’t intended to be comprehensive, instead aligned with my personal tech interests. That all said, I think all of you will enjoy at least some of them. Wherever possible, BTW, I’ve linked to Wikipedia entries or generic Google searches for either book titles or authors. Regardless, don’t worry, any Amazon URLs here aren’t referral-tagged:
- The Deal of the Century: The Breakup of AT&T by Steve Coll
- The Pentium Chronicles: The People, Passion, and Politics Behind Intel’s Landmark Chips by Bob Colwell
- Accidental Empires: How the Boys of Silicon Valley Make Their Millions, Battle Foreign Competition, and Still Can’t Get a Date by Robert X. Cringely (Mark Stephens)
- Introduction to Computer Graphics and Computer Graphics: Principles and Practice by James Foley, Andries Van Dam et al
- Fire in the Valley: The Making of the Personal Computer by Paul Freiberger and Michael Swaine
- Computer Architecture: A Quantitative Approach and Computer Organization and Design: The Hardware/Software Interface by John Hennessy and David Patterson
- Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age by Michael Hiltzik
- Hacking the Xbox and The Hardware Hacker by Andrew “bunnie” Huang
- Steve Jobs and The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution by Walter Isaacson
- The Soul of a New Machine by Tracy Kidder (quick aside: in retrospect, my first-of-many reads of this while in high school was seminal in cultivating my subsequent interest in computer and electrical engineering as an academic focus and, later, professional career)
- The Art of Computer Programming by Donald Knuth
- The Age of Intelligent Machines, The Age of Spiritual Machines and The Singularity is Near by Ray Kurzweil
- Hackers: Heroes of the Computer Revolution by Steven Levy
- The New New Thing: A Silicon Valley Story by Michael Lewis
- What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry by John Markoff and Takedown: The Pursuit and Capture of Kevin Mitnick, America’s Most Wanted Computer Outlaw — By the Man Who Did It by John Markoff and Tsutomu Shimomura
- Computer Lib/Dream Machines by Ted Nelson (good luck finding a copy of this long out-of-print classic!)
- Code: The Hidden Language of Computer Hardware and Software by Charles Petzold
- The New Hacker’s Dictionary by Eric Raymond
- The Chip: How Two Americans Invented the Microchip and Launched a Revolution by T.R. Reid
- Skunk Works: A Personal Memoir of My Years at Lockheed by Ben Rich
- Inside the Machine: An Illustrated Introduction to Microprocessors and Computer Architecture by Jon Stokes (editor note: co-founder of the Ars Technica website)
- The Cuckoo’s Egg: Tracking a Spy Through the Maze of Computer Espionage by Clifford Stoll
Suspend disbelief with some tech fiction
The classics that follow were originally in my previous list, which as you’ve already seen, got quite long. So, I decided to split out the works of fiction separately!
- I, Robot by Isaac Asimov
- Basically anything by Ray Bradbury
(another heavy influence on me as a youth)
- Microserfs by Douglas Coupland
- Do Androids Dream of Electric Sheep? by Philip K. Dick
- Neuromancer by William Gibson
- Brave New World by Aldous Huxley
- Nineteen Eighty-Four by George Orwell
- Snow Crash by Neal Stephenson
Bolster your business savvy
You’re a techie. I know. So am I. But amassing a critical mass of business intelligence is also critical to future career success, in my opinion, if only to be able to comprehend and effectively counter all the corporate-speak jargon the sales and marketing staffers and executives periodically spew at you ;-). Many of these are fairly old (“classics”, again) but I’d argue that the fundamentals of business success haven’t changed much (if at all) since then:
- The Long Tail and Free: The Future of a Radical Price by Chris Anderson
- The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail by Clayton Christensen
- Marketing High Technology by Bill Davidow
- Build: An Unorthodox Guide to Making Things Worth Making by Tony Fadell
- Microcosm: The Quantum Revolution in Economics and Technology by George Gilder
- Outliers: The Story of Success by Malcolm Gladwell
- High-Output Management and Only the Paranoid Survive by Andy Grove (I was handed both of these when, fresh out of college, I joined Intel back when Andy was still running the company)
- Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers by Geoffrey Moore
Upgrade your video gear and skills
Last year, I suggested that you follow in my footsteps and pick up an “old school” film- or digital image sensor-based medium format still camera. This year, I’ll make a related recommendation, this time for video. First off, here’s what I wrote a year ago about my archaic film Pentax 67:
The seeming tedium necessary to operate it turned out to have a silver (halide?)-lining upside. The effort and expense necessary to capture just a single image forced me to be far more selective about what images I ended up capturing, which forced me to be much more aware than I otherwise might be about the beautiful scenery that I was moment-by-moment surrounded by…which led to a far richer appreciation for that scenery and my experiences in it.
And, of course, the payoff for all that effort and expense was a higher-than-average chance (compared to a conventional standalone camera or camera-inclusive smartphone) of a beautiful image, courtesy of the film’s huge negative size and the camera family’s quality “glass”. The same goes for my digital P645Z and its smaller size but 51+ Mpixel resolution CMOS sensor.
Now for video. Two years ago, I’d suggested you select from a variety of available gear options that’d enable you to cost-effectively capture 4K video. But earlier this year, regular readers may remember that I revealed I’d also gotten two 6K Blackmagic Pocket Cinema Cameras (BMPCCs):
Auto-exposure? Nope. Auto-focus? Nope again (at least not dynamically…there is a focus assist button that’ll one-shot get you in the ballpark, but the camera won’t retain focus as you and/or your subject move around until you push the button again or focus manually yourself). Other auto-niceties? You already know the answer. But cost-effective? You bet; my BMPCC 6K G1 cost me less than $1,000 gently used, with the also-used G2 only a few hundred dollars more; they sell brand new for under $2,500 even in the G2’s built-in variable ND “Pro” variant. The BMPCCs use industry-standard, multi-sourced, widely available M43 (G1) and Canon EF (G2) mount lenses. And the resultant footage? Stunning; that is, if you take the time to learn about and implement fundamentals such as the Exposure Triangle (further augmented by the variable neutral density filters I’ve acquired) and if you’re willing to manage focus and other settings yourself.
Reiterating what I wrote a year ago:
I realize that my perspective may be seen as archaic by some…after all, the foundational concept of owning any camera is seemingly viewed as increasingly anachronistic. And I’ve always been a believer in the well-known saying (among photographers, at least) that “the best camera is the one that’s with you”…which often means the smartphone in your pocket or purse. But if you’ve got an interest in capturing image memories and the disposable income to turn it into reality, I’d encourage you to follow in my acquisition footsteps. You won’t regret it.
Snag some speedy direct-attached external storage
This last suggestion is closely related to the one above it. High-resolution cameras, especially if you use RAW or still-high-bitrate visually lossless codecs to capture still images and video sequences with them, generate really big files. Initially storing those files therefore takes a really long time. And accessing them for editing and final rendering also takes a really long time, with non-productive read and subsequent write speeds often dominating the overall editing-session duration and actual image processing a comparatively small percentage of the total time taken.
Two years ago, I suggested you pick up a beefy NAS for all the computers and other devices on your network to simultaneously access and storage-share. I still stand by that recommendation today. And four years ago, I suggested you get a high-capacity HDD-based DAS (direct-attached storage) device to augment the built-in storage in each computer. This year I’m going to revisit that particular earlier recommendation with a twist, echoing something I said nine years ago: instead of HDDs, “go solid-state”.
Increasingly over time, flash memory has become comparatively cost-effective at modest home-office DAS capacities versus the HDD alternative. And although power- and energy-consumption comparisons remain less clear-cut, solid-state storage’s performance strengths (especially with random access-dominant usage patterns) are undeniable. For both my in-process Windows computer transitions and those planned for their macOS-based counterparts, I’ve picked up three flash memory-based DASs, all of which I’m quite pleased with so far:
A gently used 1.92TB G-Technology G-Drive Pro off eBay:
An open-box Mercury Pro U.2 Dual from Other World Computing (OWC), along with two used (and eBay-sourced) OWC U2 ShuttleOne NVMe M.2 to 2.5-inch U.2 SSD adapters; each adapter arrived pre-populated with an also-used 2 TByte Crucial P5 Plus SSD:
And another OWC Mercury Pro U.2 Dual, this one used (and also eBay-sourced) and paired with two new-from-OWC U2 Shuttle quad NVMe M.2 to 3.5-inch U.2 SSD adapters, each mated with a 2 TByte WD BLACK SN750 SSD (one new from eBay, the other used from Amazon Warehouse; stand by for more on the latter one in a blog post next month):
While I don’t normally purchase already-used SSDs, the versus-new prices on these were irresistible, and I’ll be running them in software RAID 1 mode for mirrored redundancy in case one fails, anyway. As for the G-Technology G-Drive Pro, I honestly don’t know how its architecture is internally implemented, and system software doesn’t provide much insight:
but check out these preliminary results over a Thunderbolt 2 interface (in combination with an Apple TB3-to-TB2 adapter) to my early 2015 model Apple 13” Retina MacBook Pro:
Native Thunderbolt 3 would potentially be faster still; I’ll retest and report back after my Mac migrations are complete. And on that note, by the way, all three of these storage devices implement Intel’s “Alpine Ridge” first-generation TB3 chipset, which (as I discussed a couple of months ago) doesn’t also support USB-C, so I wouldn’t necessarily advocate any of them for a Windows-based computer user, for example. But regardless of whether you go with any of mine or their conceptually similar counterparts from other suppliers, I highly recommend one of ‘em.
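If you want a sanity-check number of your own before and after a storage upgrade, a crude sequential-throughput measurement takes only a few lines. Dedicated tools (fio, Blackmagic Disk Speed Test, and the like) are far more rigorous, but a sketch like this gives a quick first-order figure; the target path below is a placeholder for your own DAS mount point:

```python
# Crude sketch: first-order sequential write/read throughput for a mounted
# drive. The target path is a placeholder; dedicated tools are more rigorous.
import os
import time

TARGET = "/Volumes/MyDAS/speedtest.bin"   # placeholder DAS mount point
SIZE_MB = 1024
CHUNK = b"\0" * (1024 * 1024)             # 1 MiB of zeroes per write

start = time.perf_counter()
with open(TARGET, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())                  # force data out of the OS cache
write_secs = time.perf_counter() - start

start = time.perf_counter()
with open(TARGET, "rb") as f:
    while f.read(1024 * 1024):
        pass                              # note: may be served from RAM cache,
read_secs = time.perf_counter() - start   # so treat the read figure as optimistic

os.remove(TARGET)
print(f"Write: {SIZE_MB / write_secs:.0f} MB/s, read: {SIZE_MB / read_secs:.0f} MB/s")
```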
While visions of oscilloscopes danced in their heads…
I’ve got plenty of additional presents-to-others-and/or-self ideas, but the point isn’t to write a book, so I’ll close here to preclude passing through 3,000 words. Upside: I’ve already got topics for next year’s edition! And speaking of books, I’m particularly curious to hear what classics are on your lists. Sound off in the comments, and happy holidays!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- A holiday shopping guide for engineers: 2022 edition
- A holiday shopping guide for engineers: 2021 edition
- A holiday gift wish list for 2020
- A holiday shopping guide for consumer tech
- 10 holiday gifts for tech lovers
The post A holiday shopping guide for engineers: 2023 edition appeared first on EDN.
U.S. Unveils Strategic Initiatives to Halt China’s Semiconductor Advance, Spotlight on Advanced Packaging in Global Tech Rivalry
In an effort to impede China’s access to cutting-edge chipsets and semiconductor equipment, President Joe Biden and U.S. officials have devised two strategic initiatives. This move aims to curtail Beijing’s technological advancement while simultaneously bolstering domestic chip manufacturing. The focal point of this new global tech rivalry is advanced semiconductor packaging, an aspect that experts argue has been neglected for too long.
The U.S. government is directing substantial attention and subsidies toward attracting chip manufacturers to the country. Representative Jay Obernolte, a California Republican and one of the vice-chairs of the Congressional Artificial Intelligence Caucus, emphasized the critical importance of semiconductor packaging, stating that the semiconductor ecosystem’s growth is contingent on giving packaging due prominence. Obernolte cautioned against in-house packaging development, emphasizing that it would yield no positive impact.
Traditionally considered back-end manufacturing, Packaging, Assembly, and Testing (PAT) have historically received less focus and innovation investment than front-end chip manufacturing, with correspondingly lower productivity gains. However, the landscape is evolving rapidly as new technologies facilitate the stacking and combining of chips, marking an industry inflection point.
While advanced packaging alone may not enable China to match the pace of U.S. semiconductor growth, experts believe it can help the U.S. craft faster, more cost-effective computing systems by closely integrating various chips. The same approach could also help China stretch its high-priced, limited-quantity advanced chip technology further.
China, under the Made in China 2025 program announced in 2015, has prioritized the development of semiconductor packaging technology. Although China trails the U.S. and Taiwan in advanced semiconductor packaging, it is making significant strides in wafer processing on a large scale.
China possesses a substantial volume of back-end facilities and is home to the world’s third-largest Assembly, Testing, Marking, and Packaging (ATMP) firm, JCET Group, following Taiwan’s ASE Group and the U.S.’s Amkor Technology in profit ranking. Chinese firms are aggressively increasing market share, with JCET’s recent acquisition of a cutting-edge production facility in Singapore and the establishment of a state-of-the-art packaging unit in Jiangyin.
The post U.S. Unveils Strategic Initiatives to Halt China’s Semiconductor Advance, Spotlight on Advanced Packaging in Global Tech Rivalry appeared first on ELE Times.
Leaf-inspired photovoltaic cell is efficient and provides “free” water, too

Photovoltaic (PV) cells—often referred to as “solar cells”—have a hard life. They are exposed to the weather, and the sun’s energy that they capture to generate electricity also causes them to heat up. According to some tests, for every 10°C increase in operating temperature, the efficiency of Si-based PV panels typically decreases by 4.0 to 6.5% and their ageing rate doubles. Cell temperatures up to 65°C are common in sunny and hot settings.
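To put those derating numbers in rough perspective, here's a back-of-the-envelope sketch. It treats the article's 4.0-to-6.5%-per-10°C figure as a relative loss (an assumption on my part; the source could also be read as absolute percentage points) and uses the conventional 25°C rating reference:

```python
# Back-of-the-envelope sketch: panel efficiency derated linearly with cell
# temperature, reading the article's 4.0-6.5% loss per 10 degC as a relative
# 0.40-0.65 %/degC coefficient (an assumption), from a 25 degC rating point.
def derated_efficiency(eta_rated: float, t_cell: float,
                       coeff_per_degc: float = 0.005, t_ref: float = 25.0) -> float:
    return eta_rated * (1.0 - coeff_per_degc * (t_cell - t_ref))

# A nominal 20%-efficient panel at the 65 degC cell temperature cited above:
for coeff in (0.0040, 0.0065):
    eta = derated_efficiency(0.20, 65.0, coeff)
    print(f"coeff {coeff * 100:.2f} %/degC -> {eta * 100:.1f}% effective efficiency")
```

Under those assumptions, a nominally 20%-efficient panel delivers only roughly 15 to 17% at 65°C, which is why even a modest cooling-driven improvement is meaningful.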
The solution to “too hot” is well known to engineers: use some sort of cooling arrangement. This can be done actively, using forced air or water, but the required heat exchangers, pumps, and plumbing add cost, complexity, and power consumption. Alternatively, a less complicated and less costly, but also less efficient, passive approach can be used. In addition to well-known convection cooling, some passive designs use optical techniques of sub-bandgap reflection or selectively emissive coatings to emit heat via radiation to cold outer space, but these can reduce cell temperature by only about 4°C.
As a result, there’s a need to explore new approaches, and that’s what university researchers are doing. A team based at Imperial College London has devised a “bio-inspired” PV-leaf technology which uses low-cost materials and relatively simple construction to overcome the thermal problem and actually provide new benefits. This design eliminates the need for pumps, fans, control units and expensive porous materials. It automatically adapts to ambient-temperature and solar-condition variations and can even provide additional clean water. According to their various tests, the PV-leaf can generate over 10% more electricity compared to conventional solar panels.
The design takes its inspiration from plant leaves—nature’s own solar-energy capture process—and mimics the transpiration process which allows water to move, be distributed, and evaporate. Natural fibers mimic leaf vein bundles while hydrogels simulate sponge cells, so a PV-leaf can effectively and affordably remove heat from solar PV cells.
[In case you’ve forgotten: Transpiration is the process of water movement through a plant and its subsequent evaporation (some call it “exhalation”) from leaves, stems, and flowers. It is a passive process that requires no energy expense by the plant. Transpiration cools plants, changes osmotic pressure of cells, and enables mass flow of mineral nutrients.]
In this design, a biomimetic transpiration (BT) layer is attached to the back of a solar PV cell in order to remove the heat generated in the cell (Figure 1). The bamboo-fiber bundles mimic the vascular bundles in transporting and distributing liquid water over the cell’s surface, while hydrogel cells with a large specific surface area and excellent water-absorption performance are used to mimic the sponge cells in providing effective evaporation.
Figure 1 Schematic illustration of the PV cell and transpiration structure arrangement within the bio-inspired PV-leaf: a) Typical internal structure of a real leaf. The vascular bundles uniformly distribute liquid water throughout the whole surface of the leaf. Effective transpiration cooling protects the photosynthetic process. b) Internal structure of the bio-inspired transpiration structure. Hydrophilic fiber bundles and hydrogel cells are used to mimic the vascular bundles and sponge cells. c) Exploded view of the transpiration structure. The BT layer is constructed of bamboo fiber bundles and packed hydrogel cells. The root of the fiber bundles is soaked in bulk water. d) Diagram and working principle of the PV-leaf transpiration structure. Water flows from the root to the hydrogel cells driven by capillary and osmotic processes. The water molecules in the molecular mesh then evaporate, removing PV heat. e) Photograph of the single PV-leaf prototype. Source: Imperial College London
The configuration of the PV-leaf transpiration structure comprises a BT layer (~1 mm thick) and a supporting mesh (0.5 mm thick) connected to the underside of a PV cell layer (~150 μm thick) over an effective area of 10 × 10 cm². In the BT layer, around 30 branches of the bamboo fiber bundles are homogeneously embedded into the potassium polyacrylate (PAAK) superabsorbent polymer (SAP) hydrogel cells, distributing water over the entire area covered by the BT layer. The ends of the fiber branches are gathered together and soaked in water.
Such a structure is interesting, but it needs to be tested, of course. The research team tested it indoors under controlled lighting as well as outdoors, and the results were impressive and in line with the design simulations: reduced heat, increased power output, and clean water (Figure 2). For efficiency, a primary metric here, the improvement seems to be about 10%, which is significant.
Figure 2 Synergistic generation of electricity, heat, and clean vapor of the hybrid multi-generation PV-leaf. a) Schematic of the PV-leaf. A chamber was attached below the BT layer to collect clean vapor. b) Temperature profile (temperatures of the solar cell Tcell, the vapor on the interface Tvap, and the outlet of the chamber Tout) and transpiration rate of the PV-leaf when the ventilation fan power is Pf = 60 mW. c) Comparison of outputs of the PV-leaf and the standalone PV cell when Pf = 60 mW. d) Electrical efficiency, thermal efficiency, and transpiration rate of the PV-leaf as a function of Tvap, which was adjusted by changing the ventilation fan power. Tvap increases as the ventilation power decreases. e) Different condensation technologies and corresponding condensation rate (CR) limits. A separate condenser is needed to condense the vapor. The limit of the water-based condenser (hc = 50 W/m²/K) is assumed to be five times that of the air-based condenser. f) The salinity of the input saline and output freshwater tested by a refractometer. A separate flat-plate water-based condenser was used to condense the vapor generated by the PV-leaf. The saline was dyed blue to help visualize the cleaning effect. Source: Imperial College London
If you are interested in all the details, check out the 10-page paper “High-efficiency bio-inspired hybrid multi-generation photovoltaic leaf” published in Nature Communications, along with a 16-page Supplementary Information file which provides extensive additional insight into the design and test as well as the impact of variations in test conditions.
While they are justifiably pleased with their idea, its execution, and the results, I was also impressed that they didn’t label it as a “breakthrough” or “revolutionary”. We already have way too much hype, especially when a development is related to clean energy or renewables, and the eternal quest for research grants.
The harsh reality is that for most small-scale projects, and especially those related to energy and power issues, these developments often don’t scale well. As you go from benchtop to pilot run and then full size, all sorts of things happen. Production, fabrication, and installation issues interfere; non-linearities in the materials become more prominent; and inescapable factors such as the changing relationship between surface area and volume affect mechanical, chemical, and thermal performance.
If you don’t think this is the case, look at the many battery-related advances created in the lab touted as “breakthroughs”, but which never made it much further. The laws of physics and chemistry, as well as economic realities, dictate what you can and cannot achieve at various scales.
What’s your view on the practicality of this novel approach? Is it viable on a larger scale, or will it prove not quite feasible in the larger real world, as has been the case with so many innovations before it?
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related Content
- Keep solar panels clean from dust, fungus
- Solar panel partial blockage
- Solar fan with dynamic battery backup for constant speed of operation
- Then and Now: Solar panels track the sun
- Enough With the Solar Power ‘Magic,’ Please!
- Solar Energy Harvesting: A Great Solution, Except When It’s Not
The post Leaf-inspired photovoltaic cell is efficient and provides “free” water, too appeared first on EDN.
Bought a lot of refurb fans
Hey guys, if anyone wants to buy 12 V fans in India, DM me. International shipping would cost more than new fans, so… The lot includes 120 mm and 80 mm units. I’ve also got two original GR63x55 motors; they weigh around 2-3 kg each. Can anyone suggest what I could do with them? Any decoration/show item for the wall, etc.? Please suggest. Thank you.
AI Demand and Sustainability Pressures Shape 2024 Trends in the Data Center Industry
Industry Leader Vertiv Forecasts Major Shifts
In the lead-up to 2024, the data centre industry faces a dual challenge of meeting the burgeoning demand for artificial intelligence (AI) capabilities while navigating the imperative to reduce energy consumption, costs, and greenhouse gas emissions. Global provider Vertiv (NYSE: VRT) forecasts pivotal trends, highlighting the transformative impact of AI on data centre densities and power demands.
CEO Highlights Dominant Storylines
Vertiv CEO Giordano Albertazzi underscores the dominance of AI and its downstream effects on data centre operations. Addressing the critical need to support AI demand while minimising environmental impact, Albertazzi emphasizes the necessity for collaborations between data centres, chip and server manufacturers, and infrastructure providers.
Key Trends for 2024 Unveiled
Vertiv’s experts predict several trends that will shape the data centre landscape in 2024:
- AI Dictates Construction and Retrofitting Strategies:
The escalating demand for AI applications prompts organizations to overhaul their operations. Ill-prepared for high-density AI computing, legacy facilities will witness a surge in new construction and large-scale retrofits. Prefabricated modular solutions will gain prominence, facilitating quicker deployments and offering opportunities to adopt eco-friendly technologies such as liquid cooling.
- Diversification in Energy Storage Solutions:
A quest for energy storage alternatives intensifies, with a focus on technologies seamlessly integrating with the grid. Battery energy storage systems (BESS) gain traction, supporting extended runtime demands and reducing reliance on generators. BESS installations are expected to increase in 2024, evolving towards “bring your own power” (BYOP) models to meet the demands of AI-driven applications.
- Enterprise Emphasis on Flexibility:
Amidst the cloud and colocation providers’ push for expansions, enterprises managing their data centres will diversify investments. The impact of AI on sustainability objectives prompts organizations to consider on-premise capacity for proprietary AI applications. Prefabricated modular solutions are pivotal in incremental investments, while service and maintenance strategies optimize legacy equipment, enhancing energy efficiency and reducing carbon emissions.
- Security Challenges in Cloud Migration Race:
Gartner’s projection of a 20.4% increase in global spending on public cloud services in 2024 indicates a continued mass migration to the cloud. Cloud providers, grappling with the need for rapid capacity expansion to support AI and high-performance computing, turn to global colocation partners. However, security concerns become paramount as data migration intensifies. Disparate national and regional data security regulations pose complex challenges, necessitating efforts to standardize security measures. According to Gartner, 80% of CIOs plan to increase spending on cyber/information security in response to these challenges.
The post AI Demand and Sustainability Pressures Shape 2024 Trends in the Data Center Industry appeared first on ELE Times.
All India Council for Technical Education (AICTE) announces the finale of The Inventors Challenge 2023 in collaboration with Arm Education and STMicroelectronics
The Government of India has been focused on developing the country’s semiconductor ecosystem and catalysing India’s rapidly expanding electronics manufacturing industry. These efforts will enable innovation and India’s emergence as a global hub for electronics manufacturing and design. The Inventors Challenge contest is an effort towards nurturing faculty and students’ capabilities in semiconductor and technology innovation.
The Inventors Challenge 2023 was a team event which saw 1,370 ideas submitted based on the United Nations’ Global Goals, with over 80 teams receiving developer boards from STMicroelectronics for prototyping their ideas.
Eight teams were announced as winners of The Inventors Challenge 2023 on 20th November 2023.
“The Inventors Challenge 2023 reiterated our belief in the importance of industry-academia knowledge sharing. The initiative was well received, and the feedback from participants has further strengthened our resolve to continue such interactions, which expose the academic ecosystem to technological innovations,” said Prof. T.G. Sitharam, Chairman, AICTE.
“We are pleased to run The Inventors Challenge once again this year together with AICTE and Arm. We believe this will strengthen the fostering of innovation and pave the way for valuable partnerships to enrich the local ecosystem,” said Vivek Sharma, Managing Director, India, STMicroelectronics.
“At Arm we believe in the power of technology to build a better world for everyone. Higher and further education institutions have a pivotal role to play in enabling technology innovation in the semiconductor ecosystem, and we work closely with academic, industry and government partners to drive technological advancements that will have a positive impact. Huge congratulations to this year’s winners and we look forward to seeing how their innovations will help progress toward the United Nations’ Global Goals,” added Guru Ganesan, President, Arm India.
The post All India Council for Technical Education (AICTE) announces the finale of The Inventors Challenge 2023 in collaboration with Arm Education and STMicroelectronics appeared first on ELE Times.
The impressive fusion of innovation and integration
ASMPT, the global innovation and market leader in SMT and semiconductor assembly & packaging solutions, calls its extensive presence at productronica in Munich, the world’s leading trade fair for electronics development and manufacturing, a complete success. All its businesses – ASMPT Semiconductor Solutions, ASMPT SMT Solutions, and Critical Manufacturing – presented themselves as an innovative and integrated whole.
“What belongs together in the semiconductor industry is coming together,” said Guenter Lauber, EVP & Chief Strategy and Digitalization Officer at ASMPT. “Innovative developments such as system-in-package components, but also increasing cost pressures, require us to overcome the boundaries between SMT and die processing, and to think and act across lines, products and manufacturing facilities when it comes to data use.”
Industry visitors were able to see what this looks like in practice at the joint ASMPT booth, where ASMPT Semiconductor Solutions presented innovative new machines aimed at automotive applications in the areas of ADAS, connectivity, and electrification, which demand maximum precision and speed. With an SMT production line fully optimized for volume production and a flexible SMT line for small-batch production, ASMPT demonstrated once again that it covers the entire hardware spectrum for modern electronics manufacturing. The hybrid pick-and-place machine SIPLACE CA marks a successful synthesis of chip assembly and SMT processing, and the industry audience’s interest was correspondingly high.
ASMPT’s software products were all about its new Intelligent Factory concept, which focuses on the smart use of data and connects all production levels from the machines to the enterprise into a functional and productive whole. As a result, skilled workers are deployed more efficiently, materials are scheduled more effectively, and errors and production impediments are detected and rectified more quickly. Also trend-setting was ASMPT’s Critical Manufacturing software business, which presented its modern, integrated manufacturing execution system (MES) designed specifically for electronics manufacturing at the trade fair.
“Frequent feedback from our customers was: ‘What I would otherwise have to put together from many different sources, the market leader is now offering from a single source’,” said Guenter Lauber. “This means ASMPT features a hardware and software concept that is as coherent as it is comprehensive, incorporating existing third-party solutions and combining the performance and precision of proven hardware with future-oriented software.”
“Lively interest by the public and extremely positive customer feedback at the fair are both a confirmation of our work and an incentive,” said Guenter Lauber of his company’s endeavors. “In 2024, we will continue to consistently advance our integrative strategy in all divisions. Achieving the greatest return on investment for our customers, however, is and continues to be our overarching goal in everything we do.”
The post The impressive fusion of innovation and integration appeared first on ELE Times.
New STM32C0: More memory and lower prices will convert more systems to 32-bit
Author : STMicroelectronics
ST is announcing today the launch of the STM32C071 with 128 KB of flash and a USB controller, further cementing our new series as an entry-level MCU and gateway to 32-bit architectures. We are also divulging a new roadmap with devices housing up to 256 KB of flash by the end of next year and will update this blog post when they become available. In the meantime, the STM32C071 is publicly revealed today to help integrators plan ahead, with samples expected by mid-2024. We are also announcing new price drops, with the existing STM32C0s featuring 32 KB of flash dipping below $0.24 for 10,000 units, making the series even more accessible.
Despite launching the STM32C0 only a few months ago, in January 2023, the reception has been so positive that competitors have adopted similar strategies, some even calling out STM32s in their documentation. In a nutshell, the new price-per-performance ratio of the STM32C0s disrupted the market by enabling integrators to not only consider 32-bit MCUs but also envision roadmaps and upgrade paths previously impossible. Hence, as we close 2023, we wanted the STM32C0 to continue disrupting markets by further lowering prices and increasing memory so more engineers can jump on the bandwagon.
The STM32C071 is the most impressive upgrade as it quadruples the memory configuration with 128 KB of flash and 24 KB of RAM. Put simply, products that had to adopt significantly more expensive devices because of memory constraints can now exist in entry-level markets, making them vastly more competitive. And because we anticipate many of these systems will use USB to deliver power, we added a crystal-less USB controller. Engineers can use the internal clock, alleviating the need for an external crystal, which would otherwise increase the bill of materials and the PCB layout complexity.
Consequently, the STM32C071 is highly symbolic because it brings more functionality from the STM32G0 down to the STM32C0. Besides the USB controller, there’s an additional SPI and I2C interface and a 32-bit timer. After all, it’s been our strategy all along: make more features accessible to all systems. The STM32C071 is, therefore, a new roadmap enabler, a bridge between entry-level MCUs and the STM32G0 that teams can use to offer their customers a costlier system featuring lower power consumption and more features. That’s why we also ensured pin-out compatibility between the STM32C071 and the STM32G0.
An age-old challenge: creating entry-level applications
The STM32C0 is a new microcontroller for entry-level applications with a price that can fit bills of materials that previously required inexpensive 8-bit MCUs. Hence, the device increases the accessibility of the STM32 family while offering significant computational throughput thanks to a Cortex-M0+ running at 48 MHz and scoring 114 points in CoreMark. Depending on the configuration, the STM32C0 series ranges from 16 KB of flash and 6 KB of RAM to 128 KB of flash and 24 KB of RAM. ST also provides a wide range of packages to ensure PCBs that rely on a small 8-bit microcontroller retain their form factor.
The entry-level challenge
8-bit microcontrollers continue to play an exciting role in the industry, and ST remains dedicated to its STM8 series. Some companies need the EEPROM available in our 8-bit MCUs, while others depend on the AEC-Q10x automotive qualification of some of our devices. However, in many instances, designers choose an 8-bit MCU only because of pricing concerns. Their applications work well enough with 8-bit registers, meaning their primary focus is the bill of materials. The problem is that choosing an 8-bit architecture can have costly long-term consequences.

One of the challenges when working on an entry-level application is limited upgradability. While prioritizing a low BoM, many successful projects eventually need more memory, computational throughput, pins, etc. However, 8-bit architectures have stricter restrictions and thus provide far fewer upgrade possibilities. The inherent limitations of 8-bit MCUs may also mean that a company has to qualify multiple devices instead of having one component that can fit numerous applications. Finally, as the industry inevitably marches toward 32-bit systems, using an 8-bit device may prevent developers from using software stacks or existing code that would vastly shorten their time to market.
A new solution: a 32-bit device as an alternative to an 8-bit MCU
How is ST helping developers transition to 32-bit?

ST understands that despite all the benefits of a 32-bit architecture, financial and physical constraints may force some teams to use an 8-bit alternative. That’s why the STM32C0 has packages and a price rivaling 8-bit MCUs. Put simply, it opens new markets to engineers by enabling them to transition without blowing up their BoM or existing designs. Given ST’s guarantee of reliability, the device’s support for operating temperatures of up to 125ºC, and its many peripherals, the STM32C0 is the most affordable MCU today.
Furthermore, ST ensured that transitioning from an 8-bit architecture to a 32-bit one would be as efficient and straightforward as possible. For example, we published an application note with guidelines for moving from an STM8L or STM8S to an STM32C0. It delves into peripheral migration and even shows that moving to a 32-bit architecture means an increase in code size of only 6% to 15% in most cases. ST also organized a webinar available on demand, and the STM32 development environment can greatly optimize operations. Tools like STM32CubeMX and STM32CubeIDE, debug software like STM32CubeProgrammer, or STM32Cube expansion packages optimize workflows and even help reuse code or modules.
How is the STM32C0 facilitating the transition?
The STM32C0 wasn’t only designed to encourage engineers to transition from 8-bit systems but to breed more capable entry-level applications. Consequently, we worked on improving the feature density. The STM32C0 thus has one of the smallest packages for a general-purpose MCU thanks to its 3 mm x 3 mm 20-pin QFN housing, which is only possible because the die is so tiny. ST also offers an 8-pin SO8N version or a particularly thin WLCSP12 package. Similarly, the STM32C0 has power consumption modes significantly lower than other 8-bit devices, which means it’s possible to create more efficient designs.
How is the STM32C0 a stepping stone to more powerful systems?
The most astute readers will have recognized that the new STM32C0 takes essential cues from the STM32G0, which uses the same Cortex core. Consequently, ST ensured developers could quickly move from the STM32C0 to the STM32G0. For instance, the new MCU has the same single Vdd and Vss power supply line found on the STM32G0, simplifying PCB designs and reducing costs. The STM32C0 also includes a highly accurate internal high-speed RC oscillator at 48 MHz. As a result, designers don’t need to add an external one, which lowers the overall BoM. The two devices also share a similar ADC and timers, and a consistent pinout configuration facilitates the move from one to the other.
First steps
The best way to start experimenting with the STM32C0 is to get one of the development boards released last January. The NUCLEO-C031C6 is a traditional Nucleo-64 system with an Arduino Uno V3 connector to allow users to stack expansion cards. The STM32C0316-DK uses the same STM32C031 device but in a bundle that comes with the STLINK-V3MINIE, the first STLINK probe to use a USB-C port. The board also features a DIP28 connector compatible with the ATMEGA328 8-bit microcontroller. Interestingly, the board can also welcome STM32G0 devices. It thus serves as a transition tool to migrate to 32-bit applications and more easily experiment with a more powerful MCU.
Finally, the STM32C0116-DK is a smaller platform that uses the STM32C011 in a DIL20 module so teams can remove and share the module from one board to the next. ST is, therefore, offering a new approach to prototyping to make workflows more practical by creating a portable and interchangeable solution.
Read the full article at https://blog.st.com/stm32c0/
The post New STM32C0: More memory and lower prices will convert more systems to 32-bit appeared first on ELE Times.
My electronic workbench is expensive so I implemented sophisticated audible fuse technology
Did My Own Breadboard Power Supply, With Flair -- Selectable 5/3.3 V Individually Per Rail, 2A Output Per Rail, Input and Output Protection
Proper IC interconnects for high-speed signaling

The increased demand for high-speed data transmission, fueled by social media and online activities over the past two decades, has led to the use of more complex ICs operating at higher speeds on higher-density PCBs. The combination of a dense PCB and the high-speed signals traveling on it makes interference between interconnected components likely.
When dealing with high-speed signaling, interconnects between components must be treated as transmission lines, and line termination must be considered to avoid impedance mismatches and line discontinuities, which lead to signal reflections, interference, and performance degradation. This article gives an overview of different transmission line termination techniques used to interface between devices with similar or different I/O signal formats (LVPECL, LVDS, CML, HCSL, LP-HCSL). Proper line termination should maintain impedance matching and proper biasing for higher performance and good noise immunity, and provide the right signal translation to avoid I/O incompatibilities, which can lead to device malfunction, reliability issues and, in the worst case, device damage.
DC coupling vs AC coupling
When DC coupling a driver to a receiver, both the continuous (DC) and switching (AC) components of the signal flow from the driver output to the receiver input. With AC coupling, only the switching component reaches the receiver, since the continuous component is blocked by the coupling capacitor.
DC coupling offers the advantages of a lower component count and lower power consumption over AC coupling. However, with DC coupling, compatibility between the driver’s output and the receiver’s input is not always guaranteed and, in some cases, comes at the price of adding more components and increasing power consumption. In many cases DC coupling is not possible at all, leaving AC coupling as the only solution.
AC coupling blocks the DC component of the signal between the driver’s output and the receiver’s input, thus eliminating the issue of common mode voltage incompatibility between them. The receiver’s input can then be biased at the optimum level for the best performance in terms of jitter, duty cycle distortion, and crossing. While there is no issue with AC coupling clock signals, AC coupling data signals requires that the data be DC-balanced (the same overall number of zeros and ones). This prevents the two ends of the receiver termination from decaying to the same level in the absence of transitions (during long runs of identical bits), which would reduce the noise margin.
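As a quick illustration of the DC-balance requirement, the short Python sketch below (an illustrative example of mine, not from the original article) computes the running disparity of a bit sequence; line codes such as 8b/10b bound this value so that the average level seen through the coupling capacitor stays near mid-swing.

def running_disparity(bits):
    # Running sum of +1 per '1' and -1 per '0' after each bit.
    disparity, history = 0, []
    for b in bits:
        disparity += 1 if b == "1" else -1
        history.append(disparity)
    return history

balanced = running_disparity("1010110010100101")    # stays near 0
unbalanced = running_disparity("1111111111110000")  # drifts away from 0
print(max(abs(d) for d in balanced))    # small bound: safe to AC couple
print(max(abs(d) for d in unbalanced))  # large drift: expect baseline wander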
Driver output/receiver input voltage level
To understand the driver receiver compatibility, let’s look at Figure 1. In this example, the driver’s output and the receiver’s input have the same common mode voltage and the driver’s output signal levels fall within the receiver’s input signal level range.
Figure 1 The driver’s output and the receiver’s input voltage level which have the same common mode voltage where the driver’s output signal levels fall within the receiver’s input signal level range. Source: Microchip
This is the case when interfacing devices with the same I/O format, especially when they are from the same manufacturer, and it is the optimal configuration for DC coupling between the two devices. This perfect match is not always available, and sometimes even interfacing devices with the same I/O format from different manufacturers requires special care when DC coupling. When the gap between the common mode voltage of the receiver’s input and that of the driver’s output is large enough to push the driver’s signal beyond the receiver input range, DC coupling becomes incompatible and AC coupling must be used to keep both the driver and the receiver at their sweet spots of operation. Figure 2 shows the I/O operating levels of formats commonly used in high-speed interconnects: LVPECL, LVDS, CML, and HCSL.
Figure 2 I/O operating levels of commonly used format in high-speed interconnect, LVPECL, LVDS, CML, and HCSL. Source: Microchip
I/O structures
To understand how to interface between different driver/receivers, let’s overview the I/O structure for the most common logics used for ICs interfacing LVPECL, LVDS, CML, HCSL.
As shown in Figure 3, the LVPECL output stage consists of a differential pair driving an emitter follower pair. The output should be terminated with 50Ω to VCC-2V to create a common mode voltage of VCC-1.3V at the output, corresponding to a 14mA current flowing through the 50Ω. The output can also be terminated with a Thevenin network (130Ω to VCC / 82Ω to GND) or simply a 100Ω to 200Ω resistor to GND. The PECL input stage consists of a switching differential pair that sometimes integrates a high-impedance bias resistor network.
Figure 3 The (a) PECL output stage consists of a differential pair driving an emitter follower pair and (b) PECL input stage consists of a switching differential pair that sometimes integrates a high impedance bias resistor network. Source: Microchip
The LVDS output consists of a current-mode driver which sources 3.5mA through a switching network to the differential output (Figure 4). The output is usually connected to a 100Ω differential transmission line, which requires a 100Ω differential termination at the receiver side to match the transmission line and create the 350mV swing. The standard common mode for LVDS is 1.2V regardless of VCC. The LVDS input stage consists of a switching differential pair with or without an integrated 100Ω resistor to terminate the driver output.
Figure 4 The (a) LVDS output consisting of a current-mode driver which sources 3.5mA through a switching network to the differential output and an (b) LVDS input stage consisting of a switching differential pair with or without an integrated 100Ω resistor to terminate the driver output. Source: Microchip
The CML output stage consists of a differential pair of common-emitter transistors with a 16mA switching current and a 50Ω collector resistance to VCC (Figure 5). This results in a 400mV swing (from VCC to VCC-400mV) and a common mode voltage of VCC-200mV. The CML input structure consists of common emitter pair driving a differential pair with or without integrated 50Ω termination to VCC at the input. If not integrated, the 50Ω must be installed on the PCB.
Figure 5 The (a) CML output stage consists of a differential pair of common-emitter transistors with a 16mA switching current and a 50Ω collector resistance to VCC and a (b) CML input stage consists of common emitter pair driving a differential pair. Source: Microchip
The HCSL output (Figure 6) consists of a differential pair with open sources which steers a 15mA constant current between the true and complementary outputs. The circuit requires an external 50Ω termination to ground to create the 750mV swing, plus a series resistor to raise the driver’s output impedance (about 17Ω) to the transmission line’s characteristic impedance (50Ω). The HCSL input is a differential pair that can accept 700mV at each input and has a standard common mode voltage of about 350mV. Finally, the LP-HCSL output stage consists of a push-pull voltage drive stage powered from a 750mV voltage source. No external 50Ω-to-ground termination is needed as in HCSL, and the series resistor can be integrated inside the chip to minimize the external component count.
Figure 6 The (a) HCSL output consists of a differential pair with open source, the (b) HCSL input differential pair, and the (c) LP-HCSL output with a of a push-pull voltage drive stage powered from a 750mV voltage source. Source: Microchip
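The quoted swings for these current-mode drivers follow directly from Ohm’s law. The sketch below is my own back-of-the-envelope check, assuming ideal current sources and, for the CML case, a far-end 50Ω termination to VCC in parallel with the internal pull-up:

def swing(i_source_a, r_load_ohm):
    # Voltage swing developed by a switched current source into its load.
    return i_source_a * r_load_ohm

print(swing(3.5e-3, 100))    # LVDS: 3.5mA into 100 ohm differential = 0.35V
print(swing(16e-3, 50 / 2))  # CML: 16mA into two 50 ohm pull-ups in parallel = 0.4V
print(swing(15e-3, 50))      # HCSL: 15mA into 50 ohm to ground = 0.75V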
DC coupling LVDS driver
For DC coupling from the LVDS driver to the LVDS receiver, just connect the LVDS output to the LVDS input (Figure 7); if the receiver doesn’t have internal termination, add an external 100Ω differential termination close to the receiver input.
Figure 7 LVDS receiver (a) without internal termination and (b) with internal termination. Source: Microchip
The circuit in Figure 8 will work fine for DC coupling an LVDS driver to an LVPECL receiver despite the difference in common mode voltage (1.2V for LVDS vs. VCC-1.3V for LVPECL). This is due to the wide common mode range of the LVPECL input and the relatively small swing of LVDS (400mV), which will not saturate the LVPECL input stage’s current source.
Figure 8 DC coupling LVDS driver to LVPECL receiver. Source: Microchip
Another solution to DC couple LVDS to LVPECL is to use a resistor network to shift the DC level from LVDS common mode voltage (1.2V) to LVPECL common mode voltage (VCC-1.3V). This can be achieved using the circuit in Figure 9.
Figure 9 DC coupling LVDS to LVPECL by level shifting. Source: Microchip
Resistor values can be calculated from the following equations, dictated by these circuit constraints.
LVDS common mode voltage at point A:
(R1/(R1+R2+R3)) Vcc = 1.2 (1)
LVPECL common mode voltage at point B:
((R1+R2)/(R1+R2+R3)) Vcc = Vcc-1.3 (2)
Impedance matching:
(R0/2) // R1 // (R2+R3) = 50 (3)
Considering Vcc = 3.3 V and solving (1) and (2) leads to R2 = 0.615 R3 and R1 = 0.571 (R2+R3).
For R2=200 Ω, R3=325 Ω (324 Ω normalized), and R1 = 299 Ω (301 Ω normalized).
Equation (3) leads to R0 = 136 Ω (137 Ω normalized).
Selecting high-value resistors has the advantage of low-power consumption while lower value resistors allow the circuit to perform better at higher frequencies.
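For readers who want to check the arithmetic, here is a small Python sketch (my own, not part of the original article) that reproduces the Figure 9 values from constraints (1) through (3):

def parallel(*rs):
    # Equivalent resistance of resistors in parallel.
    return 1 / sum(1 / r for r in rs)

VCC = 3.3
R2 = 200.0              # pick R2; the ratios from (1) and (2) fix the rest
R3 = R2 / 0.615         # R2 = 0.615 x R3
R1 = 0.571 * (R2 + R3)  # R1 = 0.571 x (R2 + R3)

# Constraint (3): (R0/2) // R1 // (R2+R3) = 50, solved for R0
R0 = 2 / (1 / 50 - 1 / parallel(R1, R2 + R3))

print(round(R3), round(R1), round(R0))       # ~325, ~300, ~136 ohm
print(VCC * R1 / (R1 + R2 + R3))             # node A: ~1.2V (LVDS)
print(VCC * (R1 + R2) / (R1 + R2 + R3))      # node B: ~2.0V = VCC-1.3V (LVPECL)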
Finally, due to the large gap between the LVDS and CML common mode voltage it’s not practical to DC couple LVDS driver to CML receiver and vice versa.
DC coupling LVPECL driver
For DC coupling the LVPECL driver to the LVPECL receiver, the conditions for proper biasing and impedance matching allow us to calculate the values of the components in Figure 10.
Figure 10 DC coupling the LVPECL driver to the LVPECL receiver where the conditions for proper biasing can be calculated with the equations (4) through (6). Source: Microchip
In Figure 10-a the Thevenin termination is equivalent to the standard 50Ω to VCC-2V LVPECL termination and satisfies equations (4) and (5):
(R1 x R2) / (R1 + R2) = 50 (4)
(R2 / (R1+R2)) x Vcc = Vcc – 2 (5)
R1 and R2 solutions for these two equations are:
R1 = 50 x Vcc / (Vcc – 2)
R2 = 25 x Vcc
For Vcc = 3.3 V, R1 = 127 Ω and R2 = 82.5 Ω.
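The same two equations generalize to any supply voltage; a minimal sketch, assuming only equations (4) and (5):

def lvpecl_thevenin(vcc):
    # Thevenin pair equivalent to the 50 ohm to (VCC - 2V) termination.
    r1 = 50 * vcc / (vcc - 2)  # resistor to VCC
    r2 = 25 * vcc              # resistor to GND
    return r1, r2

print(lvpecl_thevenin(3.3))    # (~127, 82.5) ohm, as above
print(lvpecl_thevenin(2.5))    # (250.0, 62.5) ohm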
In Figure 10-b, the voltage at node A is VCC-2V (PECL termination: 50Ω to VCC-2V) and the current flowing through R is the sum of the currents flowing through the two 50Ω termination resistors.
I = (VOH-(VCC-2)) / 50 + (VOL-(VCC-2)) / 50 (6)
I = (VOH + VOL -2VCC+4)/50
Thus,
R = (VCC – 2) / I = 50 (VCC-2) / (VOH + VOL -2VCC+4).
If we consider the SY58012U as an example, Table 1 shows the R values for VCC = 3.3V and 2.5V.
Table 1 R values for the SY58012U at VCC = 3.3V and 2.5V

VCC  | VOH        | VOL        | R
3.3V | VCC-0.895V | VCC-1.695V | 40Ω
2.5V | VCC-0.895V | VCC-1.695V | 18Ω
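Equation (6) can be evaluated directly. The sketch below (my own check, applying the VOH/VOL expressions from Table 1 at both supplies) prints the raw computed values for comparison with the table entries:

def shared_resistor(vcc, voh, vol):
    i = (voh - (vcc - 2)) / 50 + (vol - (vcc - 2)) / 50  # equation (6)
    return (vcc - 2) / i                                 # R = (VCC - 2) / I

for vcc in (3.3, 2.5):
    print(vcc, round(shared_resistor(vcc, vcc - 0.895, vcc - 1.695), 1))
# prints ~46.1 and ~17.7 ohm before normalization to the Table 1 values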
DC coupling the LVPECL driver to the LVDS receiver is shown in Figure 11, and the conditions for proper biasing and impedance matching are solved in equations (7) through (12).
Figure 11 Circuit for DC coupling the LVPECL driver to the LVDS receiver (a) without internal termination and (b) with internal termination. Source: Microchip
The voltages at nodes A and B in both Figures 11-a and 11-b are:
A: VCM (LVPECL) = Vcc-1.3V = 2 V
B: VCM (LVDS) = 1.2 V
In Figure 11-a we have:
(R2+R3)/(R1+R2+R3) x 3.3 = 2 (7)
R3/(R2+R3) x 2 = 1.2 (8)
R1//(R2+R3) = 50 (9)
The values of R1, R2 and R3 satisfying these equations are R1 = 82.5 Ω, R2 = 51 Ω, and R3 = 75.8 Ω.
In Figure 11-b where the receiver has internal differential 100Ω termination we have:
(R2+R3)/(R1+R2+R3) x 3.3 = 2 (10)
R3/(R2+R3) x 2 = 1.2 (11)
R1//(R2 + (R3//50)) = 50 (12)
The values of R1, R2 and R3 satisfying these equations are R1 = 102 Ω, R2 = 63.4 Ω, and R3 = 95.3 Ω.
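A quick numeric pass (a sketch of my own, using the resistor values quoted above) checks both Figure 11 networks against constraints (7) through (12):

def parallel(*rs):
    return 1 / sum(1 / r for r in rs)

VCC = 3.3

# Figure 11-a: receiver without internal termination
R1, R2, R3 = 82.5, 51.0, 75.8
print((R2 + R3) / (R1 + R2 + R3) * VCC)     # node A: ~2.0V      (7)
print(R3 / (R2 + R3) * 2)                   # node B: ~1.2V      (8)
print(parallel(R1, R2 + R3))                # line sees ~50 ohm  (9)

# Figure 11-b: receiver with internal 100 ohm differential termination
# (50 ohm single-ended to the virtual ground, in parallel with R3)
R1, R2, R3 = 102.0, 63.4, 95.3
print((R2 + R3) / (R1 + R2 + R3) * VCC)     # ~2.0V              (10)
print(R3 / (R2 + R3) * 2)                   # ~1.2V              (11)
print(parallel(R1, R2 + parallel(R3, 50)))  # ~50 ohm            (12)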
It is not recommended to DC couple the LVPECL driver to a CML receiver unless AC coupling cannot be used (for example, because the data is not DC-balanced). In that case the diagram in Figure 12 can be used, at the cost of more components and power dissipation. From the LVPECL output, the resistor network is seen as a Thevenin termination with 82.5 Ω to GND and 127 Ω to Vcc (208 // (275+50)). The CML input is biased with the 50 Ω to VCC.
Figure 12 DC coupling the LVPECL driver to CML receiver. Source: Microchip
DC coupling CML driver
Due to the high common mode voltage of the CML driver (VCC-200mV), it’s hard if not impossible to DC couple a CML driver to other logics (Figure 13).
Figure 13 DC coupling CML driver to CML receiver where the driver (a) has an internal termination and (b) does not have an internal termination. Source: Microchip
DC coupling HCSL/LPHCSL driver
The LP-HCSL is a voltage driver and doesn’t require the 50 Ω termination to GND that is necessary for the HCSL driver, which is a current source that needs a path to ground. Due to the low common mode voltage of the HCSL/LP-HCSL driver (250 mV-550 mV), it is also hard if not impossible to DC couple an HCSL/LP-HCSL driver to other logics (Figure 14).
Figure 14 DC coupling HCSL driver to HCSL receiver (a) and DC coupling the LPHCSL driver to HCSL receiver (b). Source: Microchip
AC coupling LVDS driver
AC coupling the LVDS driver to the LVDS receiver is shown in Figure 15. For LVDS receivers without internal termination, the termination network in Figure 15-a provides the appropriate termination at the receiver input and sets the LVDS input common mode voltage. If the receiver has internal termination, the external network used to generate the common mode voltage should use high-value resistors to preserve the transmission line termination (100 Ω differential). The 5.1K and 9.1K resistors in Figure 15-b set the common mode voltage to 1.2 V.
Figure 15 AC coupling LVDS driver to an LVDS receiver (a) without an internal termination and (b) with an internal termination. Source: Microchip
Figure 16 shows the AC coupling for the LVDS driver to the LVPECL, CML, and HCSL receivers. The termination network in Figure 16-a sets the LVPECL input common mode voltage (VCC-1.3V) and provides a 50 Ω line termination (100 Ω differential). If the receiver has a VBB (2 V) bias source, just terminate each input with 50 Ω to VBB. In Figure 16-b, the 50Ω resistors provide the bias to the CML input and the 100Ω differential termination to the LVDS driver. For AC coupling the LVDS driver to the HCSL receiver in Figure 16-c, the 471Ω/56Ω network sets the transmission line termination to 50Ω and the HCSL receiver common mode voltage to about 350mV.
Figure 16 AC coupling for the LVDS driver to the (a) LVPECL, (b) CML, and (c) HCSL receivers. Source: Microchip
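Both bias networks are easy to verify numerically; the sketch below is my own arithmetic check on the values quoted above, confirming the divider voltages and, for the HCSL case, the line termination:

VCC = 3.3

# Figures 15-b/16: 9.1k to VCC, 5.1k to GND biases an LVDS input
print(VCC * 5.1 / (5.1 + 9.1))  # ~1.19V, the 1.2V LVDS common mode

# Figure 16-c: 471 ohm to VCC, 56 ohm to GND biases an HCSL input
print(VCC * 56 / (471 + 56))    # ~0.351V, the ~350mV HCSL common mode
print(471 * 56 / (471 + 56))    # ~50 ohm, so the divider also terminates the line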
AC coupling LVPECL driver
Figure 17 shows AC coupling the LVPECL driver to the LVPECL, LVDS, and CML receivers. In Figure 17-a, the 150 Ω resistor provides a path to ground for the emitter follower and sets the LVPECL output common mode voltage, while the 82.5Ω/127Ω network terminates the line with 50 Ω and sets the LVPECL input common mode voltage to VCC-1.3V. In Figure 17-b, the 100 Ω resistor terminates the transmission line and the 5.1KΩ/9.1KΩ network sets the LVDS input common mode voltage. In Figure 17-c, the 50Ω resistor terminates the line and sets the bias for the CML input, while the series resistor attenuates the PECL signal to be within range for the CML input. In Figure 17-d, the 471Ω/56Ω network provides a 50Ω line termination and sets the HCSL input common mode voltage close to 400mV.
Figure 17 AC Coupling the LVPECL driver to the (a) LVPECL, (b) LVDS, (c) CML, and (d) HCSL receivers. Source: Microchip
AC coupling CML driver
As shown in Figure 18, for AC coupling the CML driver to the CML receiver, if the driver doesn’t have an internal 50Ω termination to VCC, the output must be terminated externally, before the coupling capacitor.
Figure 18 AC coupling CML driver to a CML receiver (a) with an internal termination and (b) without an internal termination. Source: Microchip
Figure 19 shows AC coupling the CML driver to the LVDS, PECL, and HCSL receivers. The CML driver in Figure 19-a has an internal 50Ω termination to VCC, and the LVDS receiver doesn’t have internal termination. The 5.1KΩ/9.1KΩ network sets the 1.2V common mode voltage for the LVDS receiver. For a CML driver without internal termination, refer to Figure 13; for an LVDS receiver with internal termination, refer to Figure 15.
Figure 19 AC Coupling the CML driver to the (a) LVDS, (b) PECL, and (c) HCSL receivers. Source: Microchip
AC Coupling HCSL Driver
Figure 20 shows the AC coupling from the HCSL driver to the HCSL receiver, as well as from the LP-HCSL driver to the HCSL receiver.
Figure 20 AC coupling from the (a) HCSL driver to HCSL receiver and (b) LPHCSL driver to the HCSL receiver. Source: Microchip
Solving for R1 and R2 to set the common mode voltage at the HCSL input to 350mV leads to the equation:
R2 = R1 (Vcc/0.35 – 1) (13)
To avoid attenuating the signal at the input of the receiver, use high values for R1 and R2; for R1 = 5.1 KΩ, R2 will be 43.2 KΩ. Alternatively, the values R1 = 56.2 Ω and R2 = 475 Ω can be selected so the bias network also matches the transmission line impedance, at the cost of the signal swing dropping by half. Finally, the AC coupling for the HCSL driver to the LVDS, LVPECL, and CML receivers can be seen in Figure 21.
Figure 21 AC coupling for the HCSL driver to the (a) LVDS, (b) LVPECL, and (c) CML receivers. Source: Microchip
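Equation (13) makes picking the HCSL bias pair mechanical. A short sketch (illustrative, with Vcc = 3.3 V assumed) reproduces the two examples above:

def hcsl_bias_r2(r1, vcc=3.3, vcm=0.35):
    # Equation (13): R2 = R1 (Vcc/0.35 - 1) for a 350mV common mode.
    return r1 * (vcc / vcm - 1)

print(hcsl_bias_r2(5.1e3))  # ~43.0 kohm -> 43.2 kohm standard value
print(hcsl_bias_r2(56.2))   # ~474 ohm -> 475 ohm; 56.2//475 also gives ~50 ohm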
Proper IC interconnects for high-speed signaling
To successfully interface between the high-speed ICs populating high-density boards, it is important to know the specifications of the driver output and the receiver input. Only with these specifications, together with the nature of the transmitted signal (a clock, or balanced/unbalanced data), can the designer decide which type of coupling to use (DC or AC). Always use the topology that preserves the integrity of the signal first, and then prioritize the less complex topology, with minimum components and low power consumption in mind. In this article, we presented solutions for interfacing between LVDS/LVPECL/CML/HCSL (other circuits exist that weren’t shown). To define a new interface, always solve for the network elements that satisfy the basic constraints preserving the transmitted signal’s integrity: proper common mode voltage, impedance matching, and a signal that reaches the receiver input within its allowed range.
Abdennour Mezerreg is a senior technical staff applications engineer for Microchip Technology’s timing and communications business unit.
Related Content
- Interconnecting common interfaces
- Interfacing LVDS with other differential-I/O types
- Understanding LVDS Fail-Safe Circuits
- LVDS, CML, ECL-differential interfaces with odd voltages
- Survival guide to high-speed A/D converter digital outputs part 2
The post Proper IC interconnects for high-speed signaling appeared first on EDN.
Memory in automotive designs: An industry in transition

Memory chips serving automotive markets have come a long way over the past decades, and if there is one area that has driven the shift in their use inside vehicles, it’s the adoption of advanced driver assistance systems (ADAS) and autonomous driving technologies. New automotive technologies (radar, lidar, high-resolution imaging, and object recognition) have mandated high-density memory for internal and external perception applications.
So, Micron Technology Inc., celebrating 30 years in the automotive memory business, is now reinvigorating its focus on memory devices serving ADAS-enabled and autonomous driving vehicles. Here, Garima Mathur, director of automotive strategic marketing at Micron, is quick to point out that the Boise, Idaho-based memory supplier was serving the automotive memory market when it wasn’t that big or exciting.
She also notes that memory sockets in automotive have changed a lot in the past decade, citing two significant automotive trends: autonomy and rich infotainment. Regarding autonomy, Mathur acknowledged that a few years ago we thought that by now every car would be driving by itself in certain parts of the world. “That process is now happening in steps, and the current goal is to make our cars safer.”
Regarding infotainment, she pointed to screens and heads-up displays, which seem to emulate the smartphone experience inside the car. “We are bringing our digital lifestyle into the cars.” Mathur added that these trends are very exciting from a memory and storage perspective.
Figure 1 Everything except electrification is driving the need for higher densities in memory and storage. Source: Micron
“Everything except electrification is driving the need for higher densities in storage,” she said. “As we increase the level of autonomy, there are more sensors in the car, which inevitably leads to more sensor fusion and compute requirements.” In other words, when the car is acting on its own, it needs more compute, and to support that compute, more DRAM is required to store all the data points.
Next, Mathur added that memory and storage inside the cars must be automotive grade, which encompasses certain certifications and qualifications because quality and reliability are a critical part of the value proposition. “Even at the design stage, we incorporate all the automotive requirements into the wafer design process, which goes all the way to end products.”
Functional safety-compliant DDR5 DRAM
Micron claims to have launched the industry’s first automotive low-power DDR5 DRAM (LPDDR5) memory that is hardware-evaluated to meet the most stringent Automotive Safety Integrity Level (ASIL) requirement for functional safety: ASIL D. These functional safety-evaluated DRAM chips are compatible with ADAS applications like adaptive cruise control, automatic emergency braking systems, lane departure warning, and blind spot detection systems.
“Autonomous vehicles need powerful, trusted memory that can enable real-time decision-making in extreme environments,” said Kris Baxter, corporate VP and GM of Micron’s Embedded Business Unit. He also pointed out that Micron’s hardware evaluation of DRAMs has been independently assessed and verified by exida, an automotive safety expert.
Alexander Griessing, COO and principal safety expert at exida, added that while functional safety is essential to developing advanced automotive systems, memory has had a somewhat neglected commercial off-the-shelf existence. “Micron’s automotive LPDDR5 with a laser focus on ISO-26262 functional safety is setting a new standard for the rest of the memory industry.”
Data-intensive automotive technologies are on the rise; ADAS-enabled vehicles now run over 100 million lines of code. That, in turn, requires hundreds of tera-operations per second, and LPDDR5 can address these requirements with a 50% increase in data access speeds and a more than 20% improvement in power efficiency.
As a result, these DRAMs enable modern vehicles with near-instantaneous decision-making from the fusion of multiple sensors and inputs. Micron’s automotive LPDDR5 is also ruggedized to support extreme temperature ranges and qualified for automotive reliability standards such as AEC-Q100 and International Automotive Task Force 16949.
Figure 2 LPDDR5 enables high-performance compute for cars while minimizing power consumption for both electric and conventional vehicles. Source: Micron
Market research firm Gartner projects that the automotive memory market will grow to $6.3 billion in 2024, more than doubling from $2.4 billion in 2020. It’s worth noting here that, according to Yole Intelligence estimates, DRAM and NAND flash dominate the automotive memory market with a combined share of 80%. Here, DRAMs capture 41%, NAND flash chips acquire 39% and the rest is shared among memory technologies like NOR flash and EEPROMs.
No more focus on legacy memory
So, what’s new in the memory space for automotive applications? Mathur told EDN in an exclusive interview that Micron is working closely with automotive design engineers as their use cases are changing. “Automotive in the past used to be on the tail end as OEMs would take something robust, well tested, and fail-proof,” she added. “Consequently, more legacy memory technologies would go inside the car.”
That landscape has changed entirely over the past decade, and there is little focus on legacy memory technologies. “For instance, on the DRAM side, we see requirements for high bandwidth both in infotainment and ADAS designs,” Mathur said. Especially on the ADAS side, memory technology requirements are changing as we move toward higher automation levels.
On the storage side, she noted that eMMC used to be mainstream. “But we are seeing demand for higher performance, so UFS is now picking up pretty strongly.” That shows how densities will grow in the future, whether with UFS or another solution, Mathur added.
Figure 3 The architectural shift in automotive compute designs is expected to drive more demand for memory devices. Source: Micron
In the final analysis, she noted that while these are end consumer trends, what’s happening under the hood is architectural changes. “There used to be hundreds of ECUs in a vehicle, but they are being consolidated now,” Mathur said. “Automotive compute designs are moving from discrete ECUs to domain-based architecture to zonal/central architecture.”
According to her, that will have a fundamental impact on compute content and, subsequently, on the memory and storage that go into the car. Micron, which has been in the automotive market for 30 years, is confident that this shift will benefit it, especially since it’s a one-stop shop for memory devices going into modern vehicles.
Related Content
- Memory use in automotive
- Busy Road Ahead for Automotive Memory
- LPDDR5 DRAM ready for Level 5 autonomy
- Automotive memory: Many types and applications
- Another NAND Flash for Automotive, OTA, AI and More
The post Memory in automotive designs: An industry in transition appeared first on EDN.
US military says national security depends on ‘forever chemicals’
Folks, I present: My new favourite bodge job!
Ventana Micro Systems Unveils Second Generation Veyron Family RISC-V Processor, Paving the Way for Data Center-Class Performance
Ventana Micro Systems Inc. has introduced the latest iteration of its Veyron family of RISC-V processors, positioning itself as a pioneer with the world’s first data centre-class RISC-V processor. The newly unveiled processor, known as Veyron V2, is available in both chiplet and IP configurations and represents a significant leap forward in high-performance RISC-V CPUs.
Balaji Baktha, Founder and CEO of Ventana, emphasized the processor’s groundbreaking features, stating, “This signifies a significant advancement in our relentless pursuit to lead the industry in high-performance RISC-V CPUs. The V2 processor underscores our commitment to customer-driven innovation, workload acceleration, and optimizing overall performance for industry-leading efficiency in terms of performance per Watt per dollar.”
Key Highlights of the Veyron V2 Processor:
1. Substantial Performance and Efficiency Boost:
- Up to 40% performance improvement was achieved through enhancements in microarchitecture, advanced processor fabric architecture, improved cache hierarchy, and a high-performance vector processor.
2. Ecosystem Advancement with RISE:
- Introduction of RISE, a new ecosystem initiative enhancing support for V2, facilitating the rapid deployment of open, scalable, and versatile solutions.
3. Streamlined Development and Cost-Efficiency:
- Utilization of the industry-leading UCIe chiplet interconnects for chiplet-based solutions, offering cost-effective unit economics, accelerating time to market, and reducing development expenses by up to 75%.
4. Specialized Workload Acceleration:
- Integration of Domain Specific Accelerator technology designed to enhance the efficiency of workloads across data centre infrastructure, fostering customer-driven innovation and distinctiveness.
Technical Specifications of Veyron V2 Processor:
- Fifteen-wide out-of-order pipeline.
- Clock speed of 3.6GHz and cutting-edge 4nm process technology.
- 32 cores per cluster, with multi-cluster scalability extending to an impressive 192 cores.
- 128MB of shared L3 cache per cluster and a 512b vector unit for handling intensive computational tasks.
- Ventana AI matrix extensions for advanced AI capabilities.
- Server-class IOMMU and Advanced Interrupt Architecture (AIA) system IP for enhanced performance and reliability.
- Advanced side-channel attack countermeasures for heightened security.
Reliability and Serviceability:
- Comprehensive RAS (Reliability, Availability, and Serviceability) features.
- Top-down performance-tuning methodology for optimal performance.
Ventana Micro Systems complements the Veyron V2 Processor with a Software Development Kit (SDK) comprising various validated software building blocks tailored for the RISC-V platform. The Veyron V2 Development Platform is readily available, opening the doors to high-performance computing and AI applications.
The post Ventana Micro Systems Unveils Second Generation Veyron Family RISC-V Processor, Paving the Way for Data Center-Class Performance appeared first on ELE Times.
Light emitting resistors
Bimetallic strip thermostat in my parts dryer shit the bed, so decided to upgrade it.
This used to be a food dehydrator; now it’s a parts dryer full of reusable desiccant. Contacts melted out of the strip that turned a 1200-watt halogen bulb on and off. A bit of re-wiring and an Amazon purchase later, I now have pretty temps thanks to a thermocouple, a solid state relay, and a PID controller all crammed into a little project box.