News from the world of micro- and nanoelectronics

2023: Is it just me, or was this year especially crazy?

EDN Network - Tue, 01/02/2024 - 18:50

As any of you who’ve already seen my precursor “2024 Look Ahead” piece may remember, we’ve intentionally flipped the ordering of these two end-of-year writeups once again this year. This time, I’ll be looking back over 2023: for historical perspective, here are my prior retrospectives for 2019, 2021 and 2022 (we skipped 2020).

As I did last year, though, I thought I’d start by scoring the topics I wrote about a year ago in forecasting the year to come:

  • Inconsistently easing semiconductor constraints
  • Mounting environmental concerns
  • Unpredictable geopolitical tensions, and
  • Unclearly legal generative AI

As I also noted last year, maybe I’m just biased but I think I nailed ‘em all. Let me know your opinions in the comments. And here, by the way, are the other topics I ended up not covering in detail in last year’s forecast for 2023 but still briefly mentioned in its summary:

  • Industry layoffs
  • Electric vehicles
  • Autonomous vehicles
  • The Metaverse, and
  • Lingering pandemic trends and impacts

In the sections that follow, I’m going to elaborate further on a few of these themes, as well as discuss other topics that didn’t make my year-ago forecast but ended up being particularly impactful (IMHO, of course).

Large language models quickly become commonplace

Generative AI in its various implementation permutations ended up, as I’d predicted a year ago, dramatically accelerating in popularity this year, despite its prodigious environmental-resource impacts (which, as with cryptocurrency mining’s shift from “proof of work” to the more resource-efficient “proof of stake”, most notably Ethereum’s, will hopefully decrease over time). Heck, to the power consumption point, some analyst firms are suggesting that generative AI might be what finally kickstarts battery-powered smartphone sales again (as a counterpoint to last month’s year-ahead forecast of their ongoing atrophy: to be clear, I’m still skeptical).

What I didn’t necessarily predict a year ago, though, was the in-parallel emergence and rapid usage increase of large language models (LLMs), which at least currently are unfortunately also environmental catastrophes in their own right. Just a few days ago as I write these words in early December 2023, in fact, OpenAI’s ChatGPT celebrated its one-year birthday. LLMs existed prior to ChatGPT’s unveiling, of course, and other robust implementation options also remain available and under ongoing development. But ChatGPT has captured a disproportionate percentage of the general public’s mindshare, aided in no small part by Microsoft’s sizeable investment coupled with more recent coverage of management turmoil at OpenAI itself.

In retrospect, however, LLMs’ speedy widespread acceptance, both as a generative AI input (and sometimes also output) mechanism and more generally as an AI-and-other interface scheme, isn’t a surprise…their popularity was a matter of when, not if. Natural language interaction is at the longstanding core of how we communicate with each other, after all, and would therefore inherently be a preferable way to interact with computers and other systems (which Star Trek futuristically showcased more than a half-century ago). To wit, nearly a decade ago I was already pointing out that I was finding myself increasingly (and predominantly, in fact) talking to computers, phones, tablets, watches and other “smart” widgets in lieu of traditional tapping on screens, keyboards, and the like. That the intelligence that interprets and responds to my verbally uttered questions and comments is now deep learning-trained and subsequently inference-based, versus traditionally algorithmic in nature, is, simplistically speaking, just an (extremely effective in its end result, mind you) implementation nuance.

Battery materials as geopolitical chess pieces

In last year’s retrospective, I pointed out the increasing prevalence of batteries based on lithium-ion and other chemistries, in two- and four-wheeled vehicles as well as those driven by propellers and impellers, along with other applications. Given that environmental concerns were one of the big-picture topics I’d explored just one month earlier, I of course also noted the importance of ongoing improvements in energy density, cost, charging speed, recharge cycle count, and other metrics as a means of meaningfully weaning us off greenhouse gas-generating fossil fuels.

I closed out that section of the writeup with the following comment:

Barring the discovery of vast, cost-effectively mineable new lithium deposits somewhere(s) in the world, we’re going to need to count on two other additional demand-mitigating variables:

What I admittedly didn’t comprehend at the time was the degree to which the then-current geographic concentration of lithium and other key raw materials in relatively few regions of the world would be used both to advance those countries’ leadership in batteries based on those raw materials and, in parallel, to hamper the aspirations of competitors and others. Take, for example, these excerpts from an NPR interview published in late July:

When it comes to supply chains for the electric vehicle industry, China is far ahead for the number of batteries and EV cars that it produces. It’s also cornered the market on the minerals, metal, cathodes and anodes that go into batteries. Can the rest of the world catch up?

The numbers speak for themselves when it comes to critical elements used in electric vehicle batteries and other forms of renewable energy storage. China mines more than two-thirds of the world’s graphite, extracts 60% of the rare earth. It owns almost half of the cobalt mines and controls a quarter of the lithium.

Last year, China refined 95% of manganese, roughly 70% of cobalt and graphite, two-thirds of lithium, and over 60% of nickel. These are all the key materials for lithium-ion batteries that currently dominate the market.

To wit, as announced in late October and effective just a couple of days ago as I write these words, China is restricting exports of graphite. And it’s not just China; a host of African countries rich in various critical minerals are also negotiating hard with the United States and other high-volume importers, for example. The U.S. and others are aggressively seeking out domestic supplies of lithium and other raw materials, but translating a find into high-volume extraction won’t happen overnight and may also be constrained by environmental-impact concerns.

Political tensions impact technology firms’ businesses

Speaking of China…as of a couple of days ago, the United States issued long-awaited regulations that limit Chinese content in batteries eligible for electric vehicle tax credits beginning in 2024, starting with fully assembled cells and later spreading to raw materials. This is just one example of the suite of technology-related sanctions and other restrictions that the U.S. and other Western countries have issued against China in recent years, seemingly accelerating of late.

Those countries’ officials point, for example, to claimed China-sanctioned, often even Chinese government-coordinated, espionage programs against Western businesses and political entities (as discussed in a recent 60 Minutes segment), along with price “dumping” designed to force non-Chinese competitors out of markets made artificially unprofitable near-term, only for Chinese businesses to then raise prices long-term once those competitors have been eliminated. In bringing up these claims, to be clear, I’m not offering any opinion as to their validity; I’m just reporting them.

These sanctions, unsurprisingly, also include restrictions on the types, and the performance and other features within a given type, of SoCs and other ICs, along with the optical lithography and other equipment used to fabricate advanced chips. AMD, Intel and NVIDIA, for example, are all export-constrained as to which host processors, GPUs, and AI accelerators, and at what clock speeds, can be shipped to Chinese customers both directly and via intermediaries (NVIDIA in particular appears to have plenty of other customers for its AI-tailored chips and seemingly hasn’t ended up with oversupply). The sanctions have seemingly had at least some near-term effect, although visionary Chinese firms reportedly stockpiled supplies in advance, anticipating the rules’ unveiling. The long-term impact, on the other hand, is less clear.

Ongoing GPU high prices and availability limitations

And speaking of NVIDIA…a year ago, in discussing the forecasted easing of prior pandemic supply chain- and consumer demand-induced semiconductor product constraints, I wrote:

The downturn of the bitcoin mining market has enabled the graphics processor segment (another high-volume consumer of wafers and other fab, test and packaging facilities and resources) to regain some semblance of normalcy, a situation which I suspect will extend into the new year.

I was right…at least sorta…but only for the initial part of the year. Let’s review. Beginning with the emergence of COVID in 2020 and extending into 2022, it was nearly impossible to obtain a board based on a modern graphics processor except at ridiculous markups. Why? Several primary factors:

  • Pandemic lockdowns, coupled with widespread workers’ illnesses and deaths, crippled supply chains starting from IC fabs all the way to retailer warehouses.
  • Consumers that had previously inhabited office cubicles during the week instead found themselves sitting on Zoom calls all day. And because they weren’t wasting off-hours time round-trip commuting to the office every day (among other factors), they ended up with spare time on their hands that they filled with (among other things) gaming.
  • And many of them also delved into speculative cryptocurrency trading, which was particularly lucrative (at least in places where utility bills were reasonable) if they also did GPU-accelerated cryptocurrency “mining”.

Unfortunately, the situation as we exit 2023 is eerily reminiscent of recent-past GPU constraints, although the defining factors are different. Folks are increasingly back in the office. And cryptocurrency trading and GPU-based mining have fallen out of favor. But AI, as I wrote about earlier, is exploding. GPUs, being massively parallel processing architectures, are well suited for accelerating both deep learning training and inference operations. And AMD and NVIDIA, the two leading GPU suppliers, are both foundry-based from a fabrication standpoint. If you’re them, and you’ve got limited foundry supply at your disposal, what would you prefer to leverage it for: highly profitable AI accelerators or less profitable graphics chips? Exactly.

There’s a specific reason for my showcase of an Intel graphics board in this section, by the way. Here’s the second half of the year-ago paragraph I quoted earlier in the section:

Intel seems to finally be getting its manufacturing house in order, albeit after a multi-year flailing-about delay, which should stabilize (and maximize) yields out of its existing fab network, both for itself and its fledgling foundry services aspirations.

Intel, unlike both AMD and NVIDIA, has less constrained, captive fab capacity available to it. And, although the company’s 2022 foray into bitcoin ICs didn’t pan out, with Intel unceremoniously dumping them a year later (to clarify: the Blockscale ASICs were general-purpose hashing acceleration chips, not bitcoin-specific, although other blockchain-related apps apparently didn’t deliver the demand volumes necessary to rationalize ongoing investment), the company’s re-engagement with discrete graphics has been notably more successful. For the moment, at least, Intel’s products don’t target the high end of the graphics market, but that’s the only segment that AMD and NVIDIA are currently meaningfully active in, anyway. For entry-level and mainstream markets, on the other hand, Intel’s increasingly the only game (pun intended) in town. I’m curious to see how Intel’s pragmatic strategy plays out in 2024 and beyond.

The enduring popularity of HDDs

A couple of weeks ago, Western Digital released two 24 TByte HDD product tiers, with 28 TByte successor versions nipping at their heels. Seagate and other remaining HDD suppliers are making similar capacity-boosting moves. What’s going on? Wouldn’t SSDs’ superior random access performance and lower power consumption (after all, they don’t contain rapidly spinning motors and platters and rapidly oscillating read/write heads), along with steadily decreasing cost/bit metrics, sooner or later ensure their HDD precursors’ inevitable and complete demise?

Maybe that’s what you thought, but I never did, and I’ve got the longstanding documentation to prove it ;-). HDDs have also exhibited steadily decreasing cost/bit metrics over the years. And although they may start out at a higher fixed cost than an SSD, due to the aforementioned motors, platters, read/write arms and heads, and such, beyond a particular aggregate capacity point their total cost ends up being less than that of the SSD alternative (not to mention requiring fewer total drive units to implement that capacity). And regarding power and energy consumption, to quote my favorite engineer-lingo line, “it depends”.
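To make that crossover claim concrete, here’s the simple linear cost model it implies; the symbols F (fixed cost per drive), c (incremental cost per TByte), and C (capacity) are introduced purely for illustration and aren’t from the original piece:

```latex
% Hypothetical linear cost model: F = fixed cost per drive, c = cost per TByte, C = capacity
T_{\mathrm{HDD}}(C) = F_{\mathrm{HDD}} + c_{\mathrm{HDD}}\,C, \qquad
T_{\mathrm{SSD}}(C) = F_{\mathrm{SSD}} + c_{\mathrm{SSD}}\,C
% With c_SSD > c_HDD, the HDD comes out cheaper for every capacity above the crossover point:
C^{*} = \frac{F_{\mathrm{HDD}} - F_{\mathrm{SSD}}}{c_{\mathrm{SSD}} - c_{\mathrm{HDD}}}
```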

“Cloud” and other enterprise storage is perhaps obviously the dominant driver of HDD demand nowadays, hence the feature-set tailoring of the recently announced WD products. But plenty of consumer NASs (such as the two whirring away downstairs as I type these words) and direct-attached storage devices remain HDD-based, too. And in all these usage scenarios, a higher-performance, lower-capacity flash memory “buffer” may also be included ahead of the rotating media, implementing a “hybrid” architecture. Face it: at the end of the day, we’re all digital (and otherwise) packrats. And HDDs will long have a place in satisfying our accumulated-data needs.

Autonomous vehicle setbacks

Last but definitely not least is the fairly recent story of Cruise’s near-widespread success but rapid demise (near-term, at least) in California, and what it means for the autonomous robotaxi and broader self-driving vehicle market going forward. Let’s review:

The trouble began in early October, when a Cruise driverless vehicle in San Francisco struck and then dragged a pedestrian who had first been hit by a human-driven car; the California DMV suspended Cruise’s driverless permits shortly thereafter. Cruise subsequently lost its robotaxi permit in Los Angeles too, then paused all driverless robotaxi operations to ‘rebuild public trust’. Production of the next-generation Origin robotaxi was abruptly halted. First unveiled in January 2020, the steering wheel-less Origin had been claimed “just days away” from receiving the necessary regulatory approval only a few weeks before the San Francisco crash, and Cruise had already assembled a several-hundred-vehicle Origin fleet in anticipation. Pending testing of wheelchair-accessible robotaxis was also halted.

All of Cruise’s vehicles in the field were recalled for software and other updates in early November, and employee layoffs and stock program suspensions predictably followed, along with the resignation of the founder (and with acquiring company GM’s executives taking over). Near-term spending by GM has also been dramatically slashed.

Sounds dire for autonomous vehicles generally, and Cruise specifically, right? Maybe…or maybe not. Waymo, for example, has seemingly come out of its competitor’s troubles comparatively unscathed, at least for now. While Cruise likely flew too fast and too close to the sun for its own near-term good, humans’ memories are notoriously short. Autonomous vehicles, particularly robotaxis and long-haul trucks, do have compelling benefits and, in these two particular cases (and others), operate in comparatively closed-route and other implementation robustness-beneficial scenarios. And while I’m pretty confident that a human (vs autonomous) driver would quickly stop if he or she sensed another person trapped under the vehicle, human drivers hit other people all the time. I’m not trying to be crass here, just pragmatic; echoing a point I’ve made before, at least some of former Cruise CEO Kyle Vogt’s early-September rant about autonomous vehicles unfairly being held to a different (specifically, far more stringent) standard than traditional human-navigated vehicles rings true to me.

Coda

As was the case last year (and plenty of other times before, negatively impacting poor Aalyia’s workload), I’m nearing 3,000 words, with more things I wanted to write about than I had a reasonable wordcount budget for. I’m once again therefore going to restrain myself and wrap up, saving the additional topics (as well as updates on the ones I’ve explored here) for dedicated blog posts in the year(s) to come. Let me know your thoughts on my top-topic selections, as well as what your list would have looked like, in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


The post 2023: Is it just me, or was this year especially crazy? appeared first on EDN.

Quantum Computing Explained: A Simple Dive into the Future of Tech

ELE Times - Tue, 01/02/2024 - 13:00

What is Quantum Computing?

Quantum computing is a paradigm shift in computing that uses the principles of quantum mechanics to carry out computations. In contrast to classical computers, which use bits as binary units (0 or 1), quantum computers employ quantum bits, or qubits, which can exist in a superposition of multiple states concurrently and can be entangled with one another.

Quantum Computing History

Physicist Richard Feynman first introduced the idea of quantum computing in the early 1980s as a way to simulate quantum systems, and David Deutsch formalized the concept of a universal quantum computer in 1985. But the landmark quantum algorithms, Shor’s factoring algorithm (1994) and Grover’s search algorithm (1996), didn’t demonstrate the potential capabilities of quantum computing until the mid-1990s.

Types of Quantum Computing

Although there are many ways to build quantum computers, gate-based quantum computing and quantum annealing are the two primary varieties. Gate-based quantum computers, such as those made by Google and IBM, use quantum gates to manipulate qubits. D-Wave and other quantum annealers use quantum annealing to solve optimization problems.

How Does Quantum Computing Work?

Quantum computing uses the concepts of superposition and entanglement to carry out intricate calculations. Because qubits can exist in a superposition of states, a quantum computer can explore many computational paths in parallel. Quantum gates manipulate the qubits to carry out operations, and the outcome is obtained by measuring the final state.
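As a minimal mathematical illustration of those ideas (standard textbook notation, not drawn from this article), a single qubit’s state and the effect of one common gate, the Hadamard, look like this:

```latex
% A qubit state is a normalized superposition of the basis states |0> and |1>:
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1
% A gate is a unitary operation on that state; the Hadamard gate, for example,
H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}, \qquad
H\,|0\rangle = \frac{|0\rangle + |1\rangle}{\sqrt{2}}
% puts a qubit prepared in |0> into an equal superposition; measuring it then
% yields 0 or 1 with probability 1/2 each.
```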

Quantum Computing Applications

Quantum computing holds promise for a wide range of applications, including:

  1. Cryptography: Quantum computers can potentially break widely used encryption algorithms, prompting the need for quantum-resistant cryptographic methods.
  2. Optimization: Quantum computing excels at solving complex optimization problems, such as route optimization for logistics or portfolio optimization in finance.
  3. Drug Discovery: Quantum computers can simulate molecular structures and interactions, accelerating drug discovery processes.
  4. Machine Learning: Quantum computing has the potential to enhance machine learning algorithms, offering speed-ups in training and solving certain problems.

Quantum Computing Technology

Several companies and academic institutions are actively developing quantum computing technologies; major players include IBM, Google, Microsoft, Rigetti, IonQ, and D-Wave. Quantum computers are usually housed in specially designed facilities and operated at extremely low temperatures to minimize interference from outside sources.

Quantum Computing Advantages

  1. Parallelism: Quantum computers can process multiple possibilities simultaneously, providing a potential for exponential speed-up in certain computations.
  2. Problem Solving: Quantum computers excel at solving complex problems that are practically intractable for classical computers.

Quantum Computing Disadvantages

  1. Error Rates: Qubits are susceptible to errors due to environmental factors, requiring sophisticated error correction techniques.
  2. Decoherence: The delicate quantum states of qubits can be easily disturbed, leading to a loss of information.

Future of Quantum Computing

Quantum computing has immense potential for revolutionary developments in the future. As the technology advances, researchers are working to address issues like error rates and scalability. Quantum computers have already demonstrated claims of quantum supremacy, wherein they surpass classical computers on specific tasks. Further advancements in quantum hardware and algorithms are anticipated to open up new avenues and influence the direction of computing.

In conclusion, quantum computing is an exciting new area of technology that has the potential to completely transform several different sectors. Even though quantum computing is still in its infancy, its quick development and growing interest from academia and industry point to a bright future. We may expect a quantum leap in computational power and efficiency as long as researchers keep overcoming obstacles.

The post Quantum Computing Explained: A Simple Dive into the Future of Tech appeared first on ELE Times.

STM32 Developer Zone, STM32 Finder 3.0: Some of the best STM32 journeys start here! Hubs for beginners and experts alike

ELE Times - Tue, 01/02/2024 - 08:57

Author: STMicroelectronics

The STM32 Developer Zone and the STM32 Finder represent a great starting point for new and experienced developers. See how they can help reduce your time to market.

Where do you start when you are unfamiliar and potentially intimidated by such a large and rich development world as the STM32Cube ecosystem? The answer is, “Here!” Many of our partners and customers have expressed the desire to see more products use the STM32Cube environment. It’s for this exact reason that we have moved devices like the BlueNRG-LPS and S2-LP over to STM32 microcontrollers (for more information, see our blog post on the STM32WB0 and STM32WL3). Developers enjoy the graphical user interface and ease of use of tools like STM32CubeMX, the free STM32CubeIDE, and the many software packages, drivers, and middleware that help them release a product to market faster.

However, we also know that many are trying an STM32 device for the first time. Whether because they got their hands on a Discovery Kit or Nucleo boards, or simply because more and more companies are choosing to adopt our devices, there are an increasing number of engineers taking their first steps in our ecosystem. Hence, to make this experience with an STM32 more accessible, and lower the barrier to entry, we have developed tools like the STM32 Developer Zone and the STM32 Finder. Let’s explore how they can assist teams in their journey, benefit even seasoned experts, and bring the STM32 ecosystem together.

The STM32 Developer Zone

New approaches to development

Increasingly, new markets are adopting embedded systems, and engineers must familiarize themselves with complex concepts. For instance, developers may need to quickly learn how to take advantage of AI on a microcontroller, write a low-power wireless application designed for harsh environments, or implement strong security safeguards to meet new regulatory requirements. It was thus important for ST to help teams make the right choices for their products faster. The STM32 MCU Developer Zone is already playing a significant role in our community and is ranked as the number one page on ST.com in customer satisfaction. It was therefore natural to use this platform to serve STM32 developers better.

STM32 MCU Developer Zone

While keeping the original spirit that made the Developer Zone successful, we felt it would help our community further by providing a new STM32 MPU Developer Zone. Additionally, we worked on a new application-based approach to complement the existing product or software selectors for tools like STM32CubeIDE. We also have a “Solutions” tab with sections on GUIs, motor controls, USB-C Power Delivery, and more, while “Developer Resources” will guide newcomers and experts alike by pointing them to relevant technical documentation. The website thus remains a quick way to find the right development board and software tools while guiding new engineers as they take their first steps.

The solutions in the STM32 Developer Zone (MCU version)

Localization in Chinese and Japanese

In our effort to reach more STM32 developers, we are thrilled to have launched the Chinese and Japanese versions of the STM32 MCU and MPU Developer Zone. The sites provide feature parity with the English site, thus offering a strong platform for our communities in Asia. Indeed, beyond simply translating the landing pages, we are making technical documentation available in those languages, such as our white paper on security. In a nutshell, the localized versions of the Developer Zone are another testament to our desire to reach engineers where they are and work with regions by providing solutions tailored to their needs and markets.

Operating systems and an official Visual Studio Code extension

The STM32 Developer Zone will continue to receive frequent updates. For instance, we are working on releasing other solutions for the STM32U5 besides AzureRTOS in STM32CubeU5. Similarly, the STM32 Developer Zone will also promote an official Visual Studio Code extension. Developers will be able to flash their devices, track variables, and get error messages within their environment, thus vastly simplifying their workflow. Finally, the STM32 Developer Zone will also receive updates featuring software for the newly announced STM32H5 and for the new STLINK-V3PWR, which both launched at this year’s STM32 Innovation Live.

The STM32Cube Ecosystem

What is the STM32Cube Ecosystem?

Launched in mid-2014, the STM32Cube brand designates our solutions to help developers design products and applications. The software ecosystem relies on two pillars: embedded packages and software tools. There are two types of STM32Cube Packages: MCU Packages and Expansion Packages. The MCU Packages (STM32CubeF4, for instance) contain drivers, low-level APIs, and demos or example codes for Nucleo and Discovery boards. The STM32Cube Expansion Packages complement the device packages by offering additional middleware or drivers, as we recently saw with X-CUBE-AI, the first package in the industry to enable the conversion of a neural network into optimized code for STM32 MCUs.

The STM32Cube software tools for PCs assist in the design of applications. It is common to hear partners say they rely on utilities like STM32CubeMX or STM32CubeProgrammer for their projects. And many of our tutorials use them to make our technologies more accessible. However, there are many other STM32Cube software tools. For instance, STM32CubeMonUCPD is a monitoring tool that works with all our USB-C PD interfaces and libraries to facilitate testing and implementation operations. And STM32CubeProgrammer is a programming tool that makes STM32 MCUs more accessible and efficient.

How do tools in the STM32Cube ecosystem work together?

Over time, tools and packages within the STM32Cube ecosystem have come to work together more closely. For instance, STM32CubeMX is baked into STM32CubeIDE. Put simply, over the years, developers have experienced how the ST toolchain has become more cohesive. Obviously, we will also continue to release standalone versions of our STM32Cube tools for the developers who use other toolchains, ensuring that anyone can easily benefit from our STM32Cube ecosystem. However, ST engineers and researchers will continue to commit to dogfooding our tools, such as using STM32CubeIDE, because we want to be truly invested in our ecosystem and closer to our community.

How do software packages in the STM32Cube Ecosystem work together?

Up to now, developers who wanted to use an STM32Cube expansion package had to find the right one, download it, and unpack it. That meant adding source files to an IDE or even exploring its source code. Additionally, porting it from one MCU to the next isn’t always straightforward if an application uses specific pins or IPs. It may also be imperative to install drivers, libraries, or middleware. Until now, ST offered documentation and tutorials to help developers. When there were only a few expansion packages, things were much more straightforward. Now that the STM32Cube ecosystem is so large, friction can increase significantly.

The solution comes from the integration of STM32Cube expansion packages within STM32CubeMX. In a nutshell, developers can select an X-CUBE package straight from the MCU configuration tool. It required that we update existing packages, and a list of compatible solutions is available. We will also continue to ensure that most upcoming STM32 expansion packages from ST support this feature. With these software packs integrated within STM32CubeMX, users select the package, generate the files, and simply start coding. As a result, it lowers the barrier to entry for developers less familiar with our ecosystem.

How can ST Authorized Partners bring their software packages to the STM32Cube ecosystem?

Another issue that developers may encounter pertains to the ability to share their custom solutions. It is common for a company with specific needs to create a custom expansion package. Partners may then want to offer solutions to the community. For instance, we talked about embOS from SEGGER and Unison RTOS from Rowebots on the blog, but there are many others. These solutions, found under the I-CUBE initiative, help engineers add features and experiment with various technologies. However, sharing a custom package within a company or the community is not always obvious or easy. Hence, we wanted to help partners more easily create highly sharable packages.

To remedy this particular point of friction, ST is opening STM32CubeMX to I-CUBE packages. Put simply, the same integration we bring to our STM32 expansions (X-CUBE) is now available to all developers. Anyone can now build a package using STM32PackCreator to create a solution that can appear within STM32CubeMX. However, we’ll curate what’s visible by default within the MCU configurator tool. We offer documentation to guide developers in this process to ensure uniformity and compatibility within the STM32Cube Ecosystem. STM32PackCreator itself is found within STM32CubeMX and facilitates the creation of a software package from scratch.

An expansion software package follows CMSIS-Pack (the Cortex Microcontroller Software Interface Standard packaging format). Many will also be configurable within STM32CubeMX’s GUI. To abide by the CMSIS-Pack specifications, developers must include a PDSC (Pack Description) file. Such a document uses XML and requires detailed information on all the pack’s content. Similarly, to make the X-CUBE or I-CUBE configurable within STM32CubeMX, STM32PackCreator uses a dedicated UI. It opens the door to a system that puts a wealth of options at a user’s fingertips, ensuring developers no longer have to configure everything manually by writing code. STM32PackCreator thus removes friction by automatically generating the PDSC file. It also ensures the software components are configurable within STM32CubeMX.

STM32 Finder

What is STM32 Finder?

Not everyone working with STM32 necessarily writes code or designs a PCB. For instance, a manager may plan for a project, or a decision-maker may want to know a component’s specifications. In such a situation, having to download STM32CubeIDE or STM32CubeMX would be cumbersome. As a result, we created STM32 Finder, ST’s mobile application for smartphones and tablets. The tool includes an extensive search feature to find a device or a related development board rapidly. Users also get to download various documentation or rapidly access social media channels and community forums.

How did ST optimize its search engine?

To improve the user experience, ST made STM32 Finder faster and added features for power users. The former came from overhauling the mobile version. By optimizing its code, we significantly reduced response times. We are also adopting a responsive design to allow users to compare many devices at once, regardless of the display size. ST also changed the app’s update system to only download changes to the database rather than an entirely new one. Hence, updates are more frequent and take far less time to install, ensuring searches are up to date. The latest version also includes new links to various online outlets to find partners, ask questions, or learn what’s new.

ST also reworked the search feature to make it vastly more customizable. For instance, users can now distinguish between device packages. As a result, they can see how various package options may influence thermal performance or prices, among other things. The application can also group categories of specifications. For example, users can search for a device by grouping UART, LPUART, and USART together. Hence, finding a device’s total number of such peripherals can help answer specific questions without digging into the datasheet. Developers could also use the new grouping system to search for devices with SPI and USART together, since a USART in synchronous mode can also serve as an SPI.

The post STM32 Developer Zone, STM32 Finder 3.0: Some of the best STM32 journeys start here! Hubs for beginners and experts alike appeared first on ELE Times.

Exploring software-defined radio (without the annoying RF)—Part 2

EDN Network - Mon, 01/01/2024 - 16:45

Editor’s Note: See Part 1 here

In the last installment we looked at the ultrasonic hardware used in the development of a software-defined ultrasonic data transmission system we are calling the SDU-X.

In this second, and last, installment we will discuss the firmware that enables the system to send and receive data using software defined modulation schemes.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The SDU-X firmware

(The Arduino source code can be downloaded in the link at the bottom of this article.)

It needs to be mentioned that an Arduino Nano imposes a number of constraints when creating the SDU-X firmware. These include a low clock speed, a small amount of RAM, a large tolerance on the resonator frequency, and the lack of hardware floating point. So long as we’re aware of the constraints we can work around them.

So, knowing the constraints, let’s figure out a sample rate for the 40 kHz carrier. Nyquist would say it needs to be greater than 80 kilo-samples per second (ksps) but typical practice is to be at least 5 times the 40 kHz. For various reasons, and after much testing, I settled on 8 times the carrier, or 320 ksps. For a baud rate, and again after much testing, I decided on 250 baud. Yes, it’s slow but sufficient for sending data back and forth from solar panels or for experimenting. (For those that remember, early PCs communicated at 300 baud over phone lines.)

Sending bytes to the DAC, for transmission, 320k times per second means we will have time to execute less than 50 instructions between bytes. Obviously, we cannot calculate sine values for the transmitted waveform on the fly, so we need to precalculate these. But again, we are constrained by the amount of RAM we have to store this data. A symbol would require 320000/250 = 1280 bytes, and we would need one array for a “0” symbol and another for a “1” symbol. That’s 2560 bytes, which is more than the entire 2 KB of RAM the Nano has.

(Let me stop for a second to make the point that even on the larger SDR system I worked on, even though there was a lot of RAM and speed, these were still constraints that needed to be worked around. The point is that developing on the Nano is a good proxy for the development on larger SDR systems.)

So, the trick I came up with for this memory-scarcity issue was to break symbols (the carrier modulated by “0”s and “1”s) into subsymbols, each ¼ of the symbol time. At transmission time, the subsymbol will be repeated 4 times to make a full symbol. Subsymbols now only require 320 bytes each for “0” and “1”. Note that the subsymbols need to connect together without a discontinuity in the sine wave, whether it is connecting to a “1” or a “0”. This was part of the reason the sample rate and baud rate numbers, above, were chosen.
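Here is a minimal sketch of the kind of subsymbol precalculation being described; the array names, the 8-bit DAC scaling, and the choice of silence for an OOK “0” are assumptions for illustration, not the actual SDU-X source:

```cpp
// Illustrative subsymbol precalculation (not the actual SDU-X code).
#include <stdint.h>
#include <math.h>

#define SUBSYM_LEN        320  // 320000 sps / 250 baud / 4 subsymbols per symbol
#define SAMPLES_PER_CYCLE 8    // 320 ksps / 40 kHz carrier

uint8_t subsymOne[SUBSYM_LEN];   // carrier present ("1" in OOK)
uint8_t subsymZero[SUBSYM_LEN];  // carrier absent  ("0" in OOK)

void buildOokSubsymbols(void) {
  for (uint16_t i = 0; i < SUBSYM_LEN; i++) {
    // One full carrier cycle every 8 samples; 320/8 = 40 whole cycles, so the
    // table ends exactly on a cycle boundary and consecutive subsymbols join
    // with the same carrier phase.
    float s = sinf(6.2831853f * (float)i / (float)SAMPLES_PER_CYCLE);
    subsymOne[i]  = (uint8_t)(128.0f + 127.0f * s);  // 8-bit DAC code, mid-scale = 128
    subsymZero[i] = 128;                             // no carrier for an OOK "0"
  }
}
```

For BPSK, a second table holding the phase-inverted carrier (128.0f - 127.0f * s) would serve as the “0” subsymbol; because each table spans a whole number of carrier cycles, the phase lines up at every subsymbol boundary.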

There is one piece of firmware, and it is shared by both the Requester and the Responder. You can compile it for use in the Requester, or compile it for use in the Responder by changing one “#define” statement in the code. To compile code for the Requester, the line towards the top of the main code that reads “#define REQUESTER” should be active, not commented out. To compile code for the Responder that same line should be commented out.

Starting in the main “loop”, there is a section to select a modulation scheme we would like to use. There are several types to choose from but only four are fully implemented: On-Off Keying (OOK), Binary Phase Shift Keying (BPSK), NONE, and Noise. By uncommenting the OOK line alone and compiling the code, we will be communicating using OOK as the modulation. Selecting BPSK by uncommenting that line alone will allow communications using BPSK. Selecting NONE will transmit short bursts of unmodulated 40 kHz sine waves. Selecting Noise transmits short bursts of somewhat random noise.

Now, let’s look at the Requester compiled code that is surrounded with “#ifdef REQUESTER” and “#endif” which tells the compiler to include this code in the Requester compile but not in the Responder compile. The main pieces of Requester code in this block are:

  • Get the packet to send
  • Preprocess the packet
  • Create the modulated symbols
  • Break the packet into modulated symbols
  • Transmit the packet
  • Receive the packet sent from the Responder
  • Print received info to the serial port

First, we get some data to send. Currently this is a call to a routine that sets some hardcoded data bytes to send, and a fixed length. But the intention is for the user to modify this to send what they want, such as something from the serial port, the I2C port, or a timed command for a task like periodic data logging. In the Requester, the first, and sometimes the second, byte is currently used to identify the data it wants the Responder to send or the command it wants executed.

Packet structure

Following this is a call to preprocess the data packet (Figure 5). This first prepends a preamble to the packet to be transmitted. This preamble is based on the modulation selected. The preamble is useful for finding the signal amplitude, which aids in slicing. The preamble also contains a bit pattern to assist in syncing to the start of a symbol (bit). It also has a bit sequence to signal the start of the actual data (this is known as the “start-of-packet delimiter”). For OOK modulation, this routine also inserts some pilot bits into the data packet to break up long strings of “1”s or “0”s, which can inhibit symbol sync in the receiver. Lastly, the preprocessor routine creates a 16-bit CRC and appends it to the end of the packet, which now consists of preamble, data, pilot bits, and CRC.

Figure 5 The SDU-X packet structure with the preamble, data bytes, pilot bits, and CRC.
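The article doesn’t specify which 16-bit CRC polynomial the preprocessor uses, so the following is a generic bit-wise CRC-16-CCITT sketch purely to illustrate the “append a 16-bit CRC” step; the actual SDU-X source may use a different polynomial and seed:

```cpp
// Illustrative bit-wise CRC-16 (CCITT polynomial 0x1021, initial value 0xFFFF).
#include <stdint.h>

uint16_t crc16(const uint8_t *data, uint16_t len) {
  uint16_t crc = 0xFFFF;
  for (uint16_t i = 0; i < len; i++) {
    crc ^= (uint16_t)data[i] << 8;            // bring the next byte into the high bits
    for (uint8_t b = 0; b < 8; b++) {
      crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)   // shift and apply polynomial
                           : (uint16_t)(crc << 1);
    }
  }
  return crc;                                 // appended to the end of the packet
}
```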

After that we execute one of the tricks used to save us cycles—we create the modulated subsymbols by predetermining parts of the carrier wave that we will transmit. Using a trig function during transmission would reduce execution to a crawl, so we do it in advance. There is a set of routines to do the calculations for a few modulation types and a couple of test types.

To save time in the transmit routine, we next break down all the packet bytes to be sent into an array of bits. This is wasteful in memory but saves significant time once we start pushing samples out at 320 ksps.

Transmitting the packet

We are now ready to transmit the packet. After some initialization, we enter a loop paced by a timer set to the DAC sample period. The code polls the timer until it sees the timer’s timeout bit set, and when it is, it is time to send the next byte to the DAC. The byte is grabbed, in sequence, from the correct precalculated array. This means if the next symbol is a “1”, the byte is taken from the precalculated array built for transmission of a “1” modulated symbol. If it is a “0”, then the byte is drawn from the precalculated array for the “0” modulated symbol. After the 320 subsymbol bytes are sent, they are repeated three more times to complete a full symbol. After this, the next bit to be sent is used to select the correct, precalculated array. This process continues until all bytes in the packet are sent.
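A minimal sketch of that inner loop is shown below; writeDac() and sampleTimerExpired() are hypothetical stand-ins for the SDU-X’s DAC write and timer-flag poll, not real Arduino APIs:

```cpp
// Illustrative transmit loop (not the actual SDU-X code).
#include <stdint.h>
#include <stdbool.h>

extern uint8_t subsymOne[320], subsymZero[320];  // precalculated subsymbol tables
extern void writeDac(uint8_t code);              // hypothetical: push one byte to the DAC
extern bool sampleTimerExpired(void);            // hypothetical: true once per 320 ksps slot

void sendPacketBits(const uint8_t *bits, uint16_t numBits) {
  for (uint16_t b = 0; b < numBits; b++) {
    const uint8_t *tbl = bits[b] ? subsymOne : subsymZero;  // pick the right table
    for (uint8_t rep = 0; rep < 4; rep++) {                 // 4 subsymbols = 1 full symbol
      for (uint16_t i = 0; i < 320; i++) {
        while (!sampleTimerExpired()) { }                   // wait for the next sample slot
        writeDac(tbl[i]);                                   // one precalculated sample out
      }
    }
  }
}
```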

While the Requester was transmitting, the Responder was receiving the transmission. To sample the 40 kHz received signal directly, the sampling routine would need to run at something faster than 80 ksps to satisfy the Nyquist criterion. The maximum speed of the Nano’s ADC is around 9 ksps at 10-bit resolution so, for both OOK and BPSK, the input signal is actually undersampled (sampled below the Nyquist rate). This means the 40 kHz modulated signal will appear, via aliasing, at a new, lower frequency.

To do the sampling in the Responder, a timer is again initialized and then polled in the receiving loop to obtain samples at the correct time. Samples are read from the Nano’s ADC, which is connected to the receiver amplifier. OOK and BPSK start to diverge from here, so I’ll break down the major steps for each.

OOK: Samples are taken at 4800 sps. The absolute value of each sample is taken and filtered to get a signal that goes up and down with the transmitted “1”s and “0”s. The code then finds the mid-value of the two levels so it can slice the stream to capture the bits, but it can’t slice until it syncs to the timing of the filtered symbol stream. This is done by noticing transitions of the filtered stream rising or falling past the mid-value. We also count samples so we can keep synced when there are no transitions, such as “000000” or “111”. Now that we have a sync we can wait for the time-center of the symbol and then slice (above mid-value is a “1”, below is a “0”).
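A tiny sketch of that envelope-and-slice idea follows; the filter constant and level tracking are assumptions for illustration, and the symbol-timing sync described above is omitted:

```cpp
// Illustrative OOK envelope detector and slicer (not the actual SDU-X code).
#include <stdint.h>

static int16_t envelope = 0;      // low-pass filtered |sample|
static int16_t envMin   = 32767;  // lowest envelope seen (the "0" level)
static int16_t envMax   = 0;      // highest envelope seen (the "1" level)

// Feed one ADC sample (centered on zero); returns the sliced value at this instant.
uint8_t ookSlice(int16_t sample) {
  int16_t mag = (sample < 0) ? -sample : sample;  // rectify
  envelope += (mag - envelope) / 8;               // simple one-pole low-pass filter
  if (envelope < envMin) envMin = envelope;       // track the two levels (e.g., over the
  if (envelope > envMax) envMax = envelope;       //   preamble) to derive the slicer
  int16_t sliceLevel = (envMin + envMax) / 2;     // mid-value between "0" and "1"
  return (envelope > sliceLevel) ? 1 : 0;
}
```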

While the code is gathering bits by slicing, it is also comparing the last 16 bits received to the defined start-of-packet delimiter. If they compare, then the next bit received is the first data bit.

Bits are collected until the known number of bits in a complete packet have been received. Then a CRC is calculated from the collected bits and compared to the CRC received in the data packet. If they agree, it is marked as a good packet and the receive routine returns to the main code; otherwise, the process is reinitialized and the system starts receiving again.

BPSK: This is similar to OOK in that the signal is undersampled, but it is sampled at a more controlled 4500 sps (and the timer is hand calibrated). Due to aliasing, the 40 kHz phase-modulated signal will now appear at 500 Hz. The samples are run through a correlator, which is simply a multiplication of the current sample with the sample from exactly one symbol time earlier, followed by an averager whose length equals the number of samples in a symbol. The correlator output is now a rising and falling signal, and a mid-value is calculated, which will be used for slicing. But slicing is different than in OOK. When the correlator output is above the mid-value, it signals that the symbol has not changed from the last symbol. If the correlator output is below the mid-value, it signals that the symbol has changed from the last symbol. So, we are actually using differential BPSK (DBPSK). Again, we need to sync the symbol timing first. A sync occurs as the correlator output passes the midpoint and will occur every 4 ms after that (4 ms is 1/baud rate). Slicing is then executed at each sync time.
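The delay-multiply correlator itself is only a few lines; the sketch below is illustrative (18 samples per symbol comes from 4500 sps / 250 baud), and the mid-value slicing and symbol sync described above are not shown:

```cpp
// Illustrative DBPSK delay-multiply correlator (not the actual SDU-X code).
#include <stdint.h>

#define SAMPLES_PER_SYMBOL 18  // 4500 sps / 250 baud

static int16_t delayLine[SAMPLES_PER_SYMBOL];  // samples from exactly one symbol ago
static int32_t products[SAMPLES_PER_SYMBOL];   // recent sample * delayed-sample products
static uint8_t idx = 0;

// Feed one ADC sample (centered on zero); returns the averaged correlator output.
// A high output means "same symbol as the previous one"; a low output means the
// symbol changed, which is how the differential bits are recovered.
int32_t bpskCorrelate(int16_t sample) {
  int32_t prod = (int32_t)sample * delayLine[idx];  // multiply with the sample one symbol back
  delayLine[idx] = sample;                          // store for use one symbol from now
  products[idx] = prod;
  idx = (uint8_t)((idx + 1) % SAMPLES_PER_SYMBOL);

  int32_t sum = 0;                                  // boxcar average over one symbol
  for (uint8_t i = 0; i < SAMPLES_PER_SYMBOL; i++) {
    sum += products[i];
  }
  return sum / SAMPLES_PER_SYMBOL;
}
```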

Similar to OOK, we also search for the 16 bits of the start-of-packet delimiter, but with a twist. Because this is differential detection, we don’t know if we will get the bits as defined or their complement, since we did not know whether we should have started from a “1” or a “0”. Therefore, the start-of-packet delimiter code compares against both and, if the received bits match the complement, it complements the last bit so that subsequent differential bit tracking is corrected.

While receiving both OOK and BPSK, the signal level and noise level are also computed so they can later be reported to the user.

It’s interesting to note that, even though OOK is considered a simpler modulation scheme, the OOK receiver code takes more time to execute than the BPSK code. It takes about 170 µs, worst case, to process an OOK symbol and a BPSK symbol takes about 100 µs. Correlation in BPSK is a much more elegant and efficient detection method than energy detection as used in OOK.

One point to remember—transmit code is always easier to write than receiver code. This is mostly due to the fact that receiver code has to deal with variable receive amplitudes and also needs to find and track symbol edges for syncing. It also must scan for the preamble. Transmit code also executes faster than the receiver code as everything it needs is prepared in advance. That’s why we can use much higher sample rates on the transmitter, which is necessary to create the modulated signal.

Now, let’s take a look at the Responder code that is surrounded with “#ifdef RESPONDER” and “#endif” which tells the compiler to include this code in the Responder compile but not in the Requester compile. The main pieces of the Responder code are executed very much like the Requester’s, but “Receive the packet” is the first thing called. The receiver code is executed until a valid packet is found. After a valid packet is found, the Responder code returns to the main code and calls a routine to execute the command that it received. Inside this routine, the data to send back to the Requester is also set up. The Responder then executes the steps to send the packet, as described above.

Examples

Included with the code are some debug code snippets (located in the tab labeled “Debug”). These can be used while experimenting with the code to view some parameters, look at timing in the code, or check whether certain parts of the code are executed.

For example, to view the value of a variable I sometimes send the value to the DAC and watch the DAC output with a scope probe on TP2. This is useful for looking at things like the correlator values but must be scaled for the 8-bit DAC output. You will find an example of this code commented out in the BPSK code.

An example of checking timing is placing code to set TP11 high and then low around the lines of code you want to time (then subtract 3.6 µs for digitalWrite execution time). Setting TP11 high can be done with the line “digitalWrite(TP11, ON)” and setting TP11 low can be done with the line “digitalWrite(TP11, OFF)”. Setting TP11 high and then low is also useful to see things like when the sync is found. You will also find an example of this code commented out in the BPSK code.
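In code, the trick looks like the fragment below; TP11, ON, and OFF are the names quoted above from the SDU-X sketch, while codeUnderTest() is a hypothetical stand-in for whatever lines you want to measure:

```cpp
// Bracket the code under test with test-point toggles and measure the pulse on a scope.
extern void codeUnderTest(void);   // hypothetical placeholder for the lines being timed

void timeCodeUnderTest(void) {
  digitalWrite(TP11, ON);          // rising edge on the test point
  codeUnderTest();                 // the code being timed
  digitalWrite(TP11, OFF);         // falling edge; subtract ~3.6 us of digitalWrite overhead
}
```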

That’s the code.

Wrap Up

If you’re interested in the SDU-X you can download all the info such as:

  • Source code
  • PCB design
    • Schematic
    • PCB design and artwork (KiCad format)
    • Bill of materials
  • Design notes
  • STL files for 3D printing the parabolic transmit/receive tower

Find them at: https://www.thingiverse.com/thing:6268613

There are many other modulation schemes to explore such as FSK, QPSK, QAM, etc. It may also be an interesting exercise to add error correcting to the packet and explore improvements in packet errors versus S/N. Adding an address to the packet may also be interesting in exploring systems with multiple Requesters and Responders. A couple of the more advanced areas to explore are using the power of I/Q data and the beauty of negative frequencies.

After you experiment with the SDU-X for a while you may want to explore other uses for the hardware. Since it is flexible, you can create code to make the hardware perform different functions. Some ideas are measuring distance or speed of an object, or measuring windspeed and direction, or downsampling the receiver stream and listening for sound in the 40 kHz band.

I hope the SDU-X is useful to those that want to learn more about SDR or are teaching others the basics of SDR.

Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.

Phoenix Bonicatto is a freelance writer.


The post Exploring software-defined radio (without the annoying RF)—Part 2 appeared first on EDN.

TSMC reaffirms path to 1-nm node by 2030 on track

EDN Network - Mon, 01/01/2024 - 16:12

TSMC, while reaffirming its commitment to launch a 1-nm fabrication process in due time, is confident it will overcome the technological and financial challenges on the way to 2030. The Hsinchu, Taiwan-based foundry showcased its technology roadmap for the 2-nm, 1.4-nm, and 1-nm process nodes at the recent IEDM conference.

It’s worth mentioning that EUV lithography tool supplier ASML expects to reach 1-nm process technology in 2028. For its part, TSMC has opened a research and development center in Hsinchu, Taiwan, where 7,000 researchers are working on novel materials and transistor structures for 1-nm chips.

Figure 1 TSMC’s R&D center is working on process technologies at 1 nm and below. Source: Reuters

Concurrent with its work on a 1-nm process node for monolithic chips, TSMC is also focusing on advancements in packaging technologies to produce multi-chiplet solutions capable of packing more than a trillion transistors by 2030. TSMC plans to put a trillion transistors on a package using multiple 3D-stacked chiplets.

Intel catching up

Like TSMC, Intel is also concurrently focusing on cutting-edge process nodes as well as chiplets and advanced packaging technologies to put a trillion transistors on a package. However, while TSMC plans to ride the 2-nm train by 2025, Intel claims that it will leapfrog Taiwan’s mega-fab by launching its 2-nm process node, called Intel 20A, in 2024. Yet, as we enter 2024, it remains to be seen whether Intel can meet its 2-nm deadline. Intel has announced the production of a 20A-based CPU called Arrow Lake in 2024.

Likewise, the Santa Clara, California-based chip giant plans to advance to a 1.8-nm process node—or Intel 18A—in 2025. Next, by 2028, Intel plans to develop a 1.4-nm process node it calls Intel 14A; on the other hand, while TSMC earlier claimed it would complete 1.4-nm process node development by 2026, it’s now hinting that a 1.4-nm node will be ready by 2028.

Figure 2 Intel’s launch of the Arrow Lake processor, built around its 2-nm-class 20A node, will demonstrate the company’s hold on its nanometer roadmap.

Samsung, another archrival of TSMC, is also confident that its 1.4-nm process technology will come into its own in 2027. Still, the company most visible in the 1-nm fray besides TSMC is Rapidus, a Japanese government-funded startup fab.

Rapidus joins the fray

Rapidus, which previously engaged with IBM and Belgium’s imec for the design of a 2-nm fabrication process, has now joined hands with the University of Tokyo and French research institute CEA Leti to develop a 1 nm node in the 2030s. The collaboration first aims to produce a 1.4-nm process by 2027.

Here, Leti will focus on exploring novel transistor structures while Rapidus and other Japanese partners, including Riken Research Institute, will contribute through staff exchanges, fundamental research sharing, and the assessment and testing of prototypes.

Leti’s role regarding new transistor structures will be crucial because industry observers anticipate that vertically stacked complementary field effect transistors (CFETs) may replace gate-all-around (GAA) FET technology at 1.4 nm and 1 nm nodes.

Rapidus’ tie-up with IBM and imec is expected to lead to 2-nm pilot chip production in 2025 and high-volume production in 2027. Rapidus’ entry into the nanometer race with a labyrinth of partnerships and alliances shows that while TSMC has become an undisputed leader in chip fabrication, the field is gradually getting crowded. And that’s good for chip vendors and the semiconductor industry at large.


The post TSMC reaffirms path to 1-nm node by 2030 on track appeared first on EDN.

Automatic Card Shuffler Repair

Reddit:Electronics - Sun, 12/31/2023 - 22:33
Automatic Card Shuffler Repair

Repaired a slipped gear but am stumped on reassembly of the activating mechanism. I need to connect the red and yellow safety-pin-looking ends to the black housing piece so that, when re-assembled, pushing down on the lever will complete the circuit and activate both motors. Any advice as to how to position the springs and the yellow and red switches?

submitted by /u/t-town-tony

Whoops

Reddit:Electronics - Sun, 12/31/2023 - 01:00
Whoops

200 µF at 16 VDC. Something went very wrong.

Big bang, lots of smoke but it's working again (not the capacitor!) with a parts bin replacement cap.

submitted by /u/APLJaKaT

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 12/30/2023 - 18:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator

Russian missile sensor teardown.

Reddit:Electronics - Sat, 12/30/2023 - 13:05

This popped up in my YouTube feed, thought you would enjoy.

A teardown of a Russian ‘Iskander’ missile sensor, recovered in Ukraine:

https://www.youtube.com/watch?v=Ac2ioGwfsbI

submitted by /u/Geoff_PR
