News from the world of micro- and nanoelectronics

5N Plus commercializing its GaN-on-Si patents

Semiconductor today - Thu, 03/21/2024 - 18:32
Specialty semiconductor and performance materials producer 5N Plus Inc (5N+) of Montreal, Québec, Canada is officially launching the commercialization rights for its portfolio of gallium nitride on silicon (GaN-on-Si) patents which, it says, can enable the rapid prototype development and first-to-market commercialization of novel vertical GaN-on-Si power devices by companies operating in the high-power electronics (HPE), electric vehicles (EV) and artificial intelligence (AI) server sectors...

Workarounds (and their tradeoffs) for integrated storage constraints

EDN Network - Thu, 03/21/2024 - 16:14

Over the Thanksgiving 2023 holiday weekend, I decided to retire my trusty silver-color early-2015 13” MacBook Pro, which was nearing software-induced obsolescence, suffering from a Bluetooth audio bug, and more generally starting to show its age performance- and other-wise. I replaced it with a “space grey” color scheme 2020 model, still Intel x86-based, which I covered in detail in one of last month’s posts.

Over the subsequent Christmas-to-New Year’s week, once again taking advantage of holiday downtime, I decided to retire my similarly long-in-use silver late-2014 Mac mini, too. Underlying motivations were similar: pending software-induced obsolescence, plus increasingly difficult-to-overlook performance shortcomings (due in no small part to the system’s “Fusion” hybrid storage configuration). Speed limitations aside, the key advantage of this merged-technology approach had been its cost-effective high capacity: a 1 TByte HDD, visible and accessible to the user, mated behind the scenes by the operating system to 128 GBytes of flash memory “cache”.

Its successor was again Intel-based (as with its laptop-transition precursor, the last of the x86 breed) and space grey in color; a late-2018 Mac mini:

This particular model, versus its Apple Silicon successors, was notable (as I’ve mentioned before) for its comparative abundance of back-panel I/O ports:

And this specific one was especially attractive in nearly all respects (thereby rationalizing my mid-2023 purchase of it from Woot!). It was brand new, albeit not an AppleCare Warranty candidate (instead, I bought an inexpensive extended warranty from Asurion via Woot! parent company Amazon). It was only $449 plus tax after discounts. It included the speediest-available Intel Core i7-8700B 6-core (12 threads via Hyper-Threading) 3.2 GHz CPU option, capable of boost-clocking to 4.1 GHz. And it also came with 32 GBytes of 2666 MHz DDR4 SDRAM which, being user-accessible SoDIMM-based (unlike the soldered-down memory in its predecessor), was replaceable and even further upgradeable to 64 GBytes max.

Note, however, my prior allusion to this new system not being attractive in all respects. It only included a 128 GByte integrated SSD, to be precise. And, unlike this system’s RAM (or the SSD in the late 2014 Mac mini predecessor, for that matter), its internal storage capacity wasn’t user-upgradeable. I’d figured that similar to my even earlier mid-2011 Mac mini model, I could just boot from a tethered external drive instead, and that may still be true (online research is encouraging). However, this time I decided to first try some options I’d heard about for relocating portions of my app suite and other files while keeping the original O/S build internal and intact.

I’ve subsequently endured no shortage of dead-end efforts courtesy of latest operating system limitations coupled with applications’ shortsightedness, along with experiments that functionally worked but ended up being too performance-sapping or too little capacity-freeing to be practical. However, after all the gnashing of teeth, I’ve come up with a combination of techniques that will, I think, deliver a long-term usable configuration (then again, I haven’t attempted a major operating system update yet, so don’t hold me to that prediction). I’ve learned a lot along the way, which I hope will not only be helpful to other MacOS users but, thanks to MacOS’s BSD Unix underpinnings, may also be relevant to those of you running Linux, Android, Chrome OS, and other PC and embedded Unix-based operating systems.

Let’s begin with a review of my chosen external-storage hardware. Initially, I thought I’d just tether a Thunderbolt 3 external SSD (such as the 2TB Plugable drive that I picked up from B&H Photo Video on sale a year ago for $219) to the Mac mini, and that remains a feasible option:

However, I decided to “kill two birds with one stone” by beefing up the Mac mini’s expansion capabilities in the process. Specifically, I initially planned on going with one of Satechi’s aluminum stand-and-hub units. The baseline-feature-set one that color-matches my Mac mini’s space grey scheme has plenty of convenient-access front-panel connections, but that’s it:

Its “bigger brother” additionally supports embedding a SATA (or, more recently, NVMe) M.2 format SSD, but connectivity is the same 5-or-more-recently-10 Gbps USB-C as before (ok for tethering peripherals, not so much for directly running apps from mass storage). Plus, it only came in a silver color scheme (ok for Apple Silicon Mac minis, not so much for x86-based ones):

So, what did I end up with? I share the following photo with no shortage of chagrin:

In the middle is the Mac mini. Above it is a Windows Dev Kit 2023, aka “Project Volterra,” an Arm- (Qualcomm Snapdragon 8cx Gen 3, to be precise, two SoC steppings newer than the Gen 1 in my Surface Pro X) and Windows 11-based mini PC, which I’ll say more about in a future post.

And at the bottom of the stack is my external storage solution—dual-storage, to be precise—an OWC MiniStack STX in its original matte black color scheme (it now comes in silver, too).

Does it color-match the Mac mini? No, even putting aside the glowing blue OWC-logo orb on the front panel. And speaking of the front panel, are there any easily user-accessible expansion capabilities? Again, no. In fact, the only expansion ports offered are three more Thunderbolt 3 ones around back…the fourth there connects to the computer. But Thunderbolt 3’s 40 Gbps bandwidth is precisely what drove my decision to go with the OWC MiniStack STX, aided by the fact that I’d found a gently used one on eBay at substantial discount from MSRP.

Inside, I’ve installed a 2 TByte Samsung 980 Pro PCIe 4.0 NVMe SSD which I bought for $165.59 used at Amazon Warehouse a year ago (nowadays, new ones sell for the same price…sigh…):

alongside a 2 TByte Kingston KC600 2.5” SATA SSD:

They appear as separate external drives on system bootup, and the performance results are nothing to sneeze at. Here’s the Samsung PCIe 4.0 NVMe SSD (the enclosure’s interface to the SSD, by the way, is “only” PCIe 3.0; it’s leaving storage performance potential “on the table”):

and here’s the Kingston, predictably a bit slower due to its SATA III interface and command set (therefore rationalizing why I’ve focused my implementation attention on the Samsung so far):

For comparison, here’s the Mac mini’s internal SSD:

The Samsung holds its own from a write performance standpoint but is more than 3x slower on reads, rationalizing my strategy to keep as much content as possible on internal storage. To wit, how did I decide to proceed, after quickly realizing (mid-system setup) that I’d fill up the available 128 GBytes of internal storage well prior to getting my full desired application suite installed?

(Abortive) Step 1: Move my entire user account to external storage

Quoting from the above linked article:

In UNIX operating systems, user accounts are stored in individual folders called the user folder. Each user gets a single folder. The user folder stores all of the files associated with each user, and settings for each user. Each user folder usually has the system name of the user. Since macOS is based on UNIX, users are stored in a similar manner. At the root level of your Mac’s Startup Disk you’ll see a number of OS-controlled folders, one of which is named Users.

Move (copy first, then delete the original afterwards) an account’s folder structure elsewhere (to external storage, in this case), then let the foundation operating system know what you’ve done, and as my experience exemplifies, you can free up quite a lot of internal storage capacity.

Keep in mind that when you relocate your user home folder, it only moves the home folder – the rest of the OS stays where it was originally.

One other note, which applies equally to other relocation stratagems I subsequently attempted, and which perhaps goes without saying…but just to cover all the bases:

Consider that when you move your home folder to an external volume, the connection to that volume must be perfectly reliable – meaning both the drive and the cable connecting the drive to your Mac. This is because the home folder is an integral part of macOS, and it expects to be able to access files stored there instantly when needed. If the connection isn’t perfectly reliable, and the volume containing the home folder disappears even for a second, strange and undefined behavior may result. You could even lose data.

That all being said, everything worked great (with the qualifier that initial system boot latency was noticeably slower than before, albeit not egregiously so), until I noticed something odd. Microsoft’s OneDrive client indicated that it had successfully synced all the cloud-resident information in my account, but although I could then see a local clone of the OneDrive directory structure, all of the files themselves were missing, or at least invisible.

This is, it turns out, a documented side effect of Apple’s latest scheme for handling cloud storage services. External drives that self-identify as capable of being “ejectable” can’t be used as OneDrive sync destinations (unless, perhaps, you first boot the system from them…dunno). And the OneDrive sync destination is mirrored within the user’s account directory structure. My initial response was “fine, I’ll bail on OneDrive”. It turns out, however, that Dropbox (on which I’m much more reliant) is, out of operating system support necessity, going down the same implementation-change path. Scratch that idea.

Step 2: Install applications to external storage

This one seems intuitively obvious, yes? Reality proved much more complicated and ultimately limited in its effectiveness, however. Most applications I wanted to use that had standalone installers, it turns out, didn’t even give me an option to install anywhere but internal storage. And for the ones that did give me that install-redirect option…well, please take a look at this Reddit thread I started and eventually resolved, and then return to this writeup afterwards.

Wild, huh? That said, many MacOS apps don’t have separate installer programs; you just open a DMG (disk image) file and then drag the program icon inside (behind which is the full program package) to the “Applications” folder or anywhere else you choose. This led to my next idea…

Step 3: Move already-installed applications to external storage

As previously mentioned, “hiding” behind an application’s icon is the entire package structure. Generally speaking, you can easily move that package structure intact elsewhere (to external storage, for example) and it’ll still run as before. The problem, I found out, comes when you subsequently try to update such applications, specifically where a separate updater utility is involved. Take Apple’s App Store, for example. If you download and install apps using it (which is basically the only way to accomplish this) but you then move those apps elsewhere, the App Store utility can no longer “find” them for update purposes. The same goes for Microsoft’s (sizeable, alas) Office suite. In these and other cases, ongoing use of internal storage is requisite (along with trimming down the number of installed App Store- and Office suite-sourced applications to the essentials). Conversely, apps with integrated update facilities, such as Mozilla’s Firefox and Thunderbird, or those that you update by downloading and swapping in a new full-package version, upgrade fine post-move.

Step 4: Move data files, download archives, etc. to external storage

I mentioned earlier that Mozilla’s apps (for example) are well-behaved from a relocation standpoint. I was specifically referring to the programs themselves. Both Firefox and Thunderbird also create user profiles, which by default are stored within the MacOS user account folder structure, and which can be quite sizeable. My Firefox profile, for example, is just over 3 GBytes in size (including the browser cache and other temporary files), while my Thunderbird profile is nearly 9 GBytes (I’ve been using the program for a long time, and I also access email via POP3—which downloads messages and associated file attachments to my computer—vs IMAP). Fortunately, by tweaking the entries in both programs’ profiles.ini files, I’ve managed to redirect the profiles to external storage. Both programs now launch more slowly than before, due to the aforementioned degraded external drive read performance, but they then run seemingly as speedy as before, thanks to the aforementioned comparable write performance. And given that they’re perpetually running in the background as I use the computer, the launch-time delay is a one-time annoyance at each (rare) system reboot.
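For anyone attempting the same relocation, here is a minimal sketch of the profiles.ini tweak using Python’s standard configparser module. The profile section name, the external-volume path, and the profiles.ini location are assumptions for illustration; inspect (and back up) your own file, and copy the profile directory to its new home before pointing the entry at it.

```python
# Minimal sketch: point a Firefox profile at external storage by rewriting its
# profiles.ini entry. All paths and the [Profile0] section name are assumptions;
# verify against your own file (and back it up) first. The profile directory
# itself must already have been copied to NEW_PROFILE_DIR.
import configparser
from pathlib import Path

PROFILES_INI = Path.home() / "Library/Application Support/Firefox/profiles.ini"
NEW_PROFILE_DIR = "/Volumes/External/FirefoxProfiles/default-release"  # assumed

config = configparser.ConfigParser()
config.optionxform = str  # preserve key capitalization (IsRelative, Path, ...)
config.read(PROFILES_INI)

config["Profile0"]["IsRelative"] = "0"        # Path below is now absolute
config["Profile0"]["Path"] = NEW_PROFILE_DIR

with open(PROFILES_INI, "w") as f:
    config.write(f, space_around_delimiters=False)
```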

Similarly, I’ve redirected my downloaded-files default (including a sizeable archive of program installers) to external storage, along with an encrypted virtual drive that’s necessary for day-job purposes. I find, in cases like these, that creating an alias from the old location to the new is a good reminder of what I’ve previously done, if I subsequently find myself scratching my head because I can’t find a particular file or folder.
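In the same spirit, here is a sketch of the relocate-and-leave-a-pointer pattern, with assumed paths. Note that a Unix symbolic link is not identical to a Finder alias, but it serves the same “reminder” purpose and is followed transparently by Terminal and most applications.

```python
# Minimal sketch: move a folder to external storage and leave a symlink behind.
# Both paths are assumptions; a symlink is not a Finder alias, but it points the
# old location at the new one just the same.
import shutil
from pathlib import Path

old = Path.home() / "Downloads" / "Installers"       # assumed original folder
new = Path("/Volumes/External/Archive/Installers")   # assumed destination

shutil.move(str(old), str(new))                # copies then deletes across volumes
old.symlink_to(new, target_is_directory=True)  # leave a pointer at the old spot
```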

The result

By doing all the above (steps 2-4, to be precise), I’ve relocated more than 200 GBytes (~233 GBytes at the moment, to be precise) of files to external storage, leaving me with nearly 25% free in my internal storage (~28 GBytes at the moment, to be precise). See what I meant when I earlier wrote that in the absence of relocation success, I’d “fill up the available 128 GBytes well prior to getting my full desired application suite installed”? I should clarify that “nearly 25% free storage” comment, by the way…it was true until I got the bright idea to command-line install the recently released Wine 9, which restores MacOS compatibility (previously lost with the release of 64-bit-only MacOS 10.15 Catalina in October 2019)…which required that I first command-line install the third-party Homebrew package manager…which also involved command-line installing the Xcode Command Line Tools…all of which installed by default to internal storage, eating up ~10 GBytes (I’ll eventually reverse those steps and wait for a more svelte standalone package installer for Wine 9 to hopefully arrive).

Thoughts on my experiments and their outcomes? Usefulness to other Unix-based systems? Anything else you want to share? Let me know in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post Workarounds (and their tradeoffs) for integrated storage constraints appeared first on EDN.

Automotive PCIe: To Switch or Not to Switch?

ELE Times - Thu, 03/21/2024 - 13:39

Courtesy: Microchip

The myths and false economy of direct chip-to-chip PCIe connections in ADAS and vehicle autonomy applications.

PCIe’s Rising Role in Autonomous Driving and ADAS Technology

Before pondering the question of whether or not to switch, let’s first set the scene by considering why Peripheral Component Interconnect Express (PCIe) is becoming so popular as an interconnect technology in advanced driver assistance systems (ADAS) applications—and why it will be so crucial in the realization of completely autonomous driving (AD) as the automotive industry seeks standard interfaces that deliver performance while ensuring compatibility and ease-of-use.

With its roots in the computing industry, PCIe is a point-to-point bidirectional bus for connecting high-speed components. Subject to the system architecture (PCIe’s implementation), data transfer can take place over 1, 2, 4, 8 or 16 lanes, and if more than one lane is used the bus becomes a serial/parallel hybrid.

The PCIe specification is owned and managed by the PCI Special Interest Group (PCI-SIG), an association of 900+ industry companies committed to advancing its non-proprietary peripheral technology. As demand for higher I/O performance grows, the group’s scope and ecosystem reach are both expanding, and to paraphrase words from PCI-SIG’s membership page:

Current PCIe and other related technology roadmaps account for new form factors and lower power applications. Innovation on these fronts will remain true to PCI-SIG’s legacy of delivering solutions that are backward compatible, cost-efficient, high performance, processor agnostic, and scalable.

With vehicles becoming high-performance computing platforms (HPCs—and data centers, even) on wheels, these words are exactly what vehicle OEMs developing ADAS and AD solutions want to hear. Also, every generation of PCIe results in performance improvements – from gen 1.0’s data transfer rate of 2.5 GT/s (gigatransfers per second) and total bandwidth of 4 GB/s (16 lanes) to today’s gen 6.0’s 64 GT/s and 128 GB/s (16 lanes). Note: PCIe 7.0, slated to arrive in 2025, will have a data rate of 128 GT/s and a bandwidth of 512 GB/s through 16 lanes.
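Those headline numbers are easy to sanity-check from the per-lane transfer rates. The sketch below assumes 8b/10b encoding (80% efficiency) for gen 1.0 and treats the overhead of later encodings as negligible; the gen 7.0 result is per direction, so doubling it gives the 512 GB/s both-directions figure quoted above.

```python
# Rough sanity check of the x16 PCIe bandwidth figures quoted above.
# Assumptions: 8b/10b encoding (80% efficiency) for gen 1.0, near-unity
# encoding efficiency for later generations; results are per direction.
LANES = 16

def x16_bandwidth_gb_s(gt_per_s, efficiency):
    # GT/s x lanes x efficiency = Gbit/s; divide by 8 to get GB/s.
    return gt_per_s * LANES * efficiency / 8

print(x16_bandwidth_gb_s(2.5, 0.8))    # gen 1.0  ->   4.0 GB/s
print(x16_bandwidth_gb_s(64.0, 1.0))   # gen 6.0  -> 128.0 GB/s
print(x16_bandwidth_gb_s(128.0, 1.0))  # gen 7.0  -> 256.0 GB/s per direction (~512 GB/s both ways)
```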

PCIe’s performance power cannot be disputed, and it will certainly be required to support the kind of real-time processing of large volumes of data needed for AI- and ML-enabled ADAS and AD applications.

But, as ever, there is debate around implementing PCIe-based architectures, not least when it comes to whether the connections between PCIe-enabled components should be direct or switched.

Making the Connection

To provide higher levels of automation, vehicles must incorporate increasingly sophisticated combinations of electronic components including central processing units (CPUs), electronic control units (ECUs), graphics processing units (GPUs), system-on-chips (SoCs), “smart sensors” and high-capacity and high-speed storage devices (such as NVMe memory).

Of these components, the ECUs (there are many) combine across separate zones based on a common functionality. These zonal ECUs communicate with HPC platforms using Ethernet. But within those platforms, there is a need for high-bandwidth processing to achieve real-time decision making.

Accordingly, PCIe technology is being used by automotive designers in a manner very similar to the way in which a data center is designed. Connecting sensors with high-speed serial outputs to processing units is best addressed with an open standard from the Automotive SerDes Alliance (ASA).

In essence, there are three pillars of automotive networking (see figure 1).

Figure 1 – Three Pillars of the Future of Automotive Networking

However, some SoC vendors are saying that for PCIe you can simply connect directly between chips without a switch. Well, yes, you can… but it doesn’t scale to higher ADAS levels and it’s a false economy to do so.

An HPC system without a switch exponentially increases software complexity, as each end requires its own software stack. Also, there are the “bigger picture” benefits of switched over unswitched PCIe to consider:

  • IO Bandwidth Optimization: Packet switching reduces the SoC interconnection pin count requirement which lowers SoC power and cost.
  • Peripheral Sharing: Single peripherals, such as SSD storage or Ethernet controllers, may be shared across several SoCs.
  • Scalability: You can easily scale for more performance without changing the system architecture by increasing switch size, SoC count and peripheral count.
  • Serviceability: PCIe has built-in error detection and diagnostic test features which have been thoroughly proven in the high-performance compute environment over many years to significantly ease serviceability.
  • And as a result of the above points, a much better total cost of ownership (TCO) is possible.

When PCIe combines forces with Ethernet and ASA, it allows for the creation of an optimized, heterogeneous system architecture (as figure 2 illustrates with respect to an ADAS example).

Figure 2 – Heterogeneous architecture for ADAS

Although the three communications technologies evolved at different times to support different needs, and have their respective pros and cons, the heterogeneous architecture makes the best of each.

As mentioned, PCIe provides point-to-point connection, meaning devices are not competing for bandwidth, which is fine if only a few devices need to connect. However, an autonomous vehicle is best realized as a set of distributed workloads, which means bandwidth needs to be shared between multiple sub-system components.

In this respect, PCIe switches provide an excellent solution as they are “transparent,” meaning that software and other devices do not need to be aware of the presence of switches in the hierarchy, and no drivers are required.

The Answer: Switch

PCIe is ideal for ADAS, AD and other HPC applications within a vehicle, but its “point-to-point” connectivity has many thinking that that’s how it should be implemented—as chip-to-chip, for example. However, integrating switching using technologies such as the Microchip Switchtec family (the world’s first automotive-qualified PCIe switches) minimizes software complexity and realizes a host of other benefits for high-performance automotive systems with multiple sub-system components that demand low latencies and high data rates.

The post Automotive PCIe: To Switch or Not to Switch? appeared first on ELE Times.

Techniques to Identify and Correct Asymmetric Wafer Map Defects Caused by Design and Process Errors

ELE Times - Thu, 03/21/2024 - 13:21

JAMES KIM, Senior Semiconductor and Process Integration Engineer | Lam Research

Asymmetries in wafer map defects are usually treated as random production hardware defects. For example, asymmetric wafer defects can be caused by particles inadvertently deposited on a wafer during any number of process steps. In this article, I want to share a different mechanism that can cause wafer defects. Namely, that these defects can be structural defects that are caused by a biased deposition or etch process.

It can be difficult for a process engineer to determine the cause of downstream structural defects located at a specific wafer radius, particularly if these defects are located in varying directions or at different locations on the wafer. As a wafer structure is formed, process behavior at that location may vary from other wafer locations based upon the radial direction and specific wafer location. Slight differences in processes at different wafer locations can be exaggerated as subsequent process steps accumulate at that location. In addition, process performance differences (such as variation in equipment performance) can also cause on-wafer structural variability.

In this study, structural defects will be virtually introduced on a wafer to provide an example of how structural defects can be created by differences in wafer location. We will then use our virtual process model to identify an example of a mechanism that can cause these types of asymmetric wafer map defects.

Methods

Figure 1. Anisotropic liner/barrier metal deposition on a tilted structure caused by wafer warping

A 3D process model of a specific metal stack (Cu/TaN/Ta) on a warped wafer was created using SEMulator3D virtual fabrication (Figure 1). After the 3D model was generated, electrical analysis of 49 sites on the wafer was completed.

In our model, an anisotropic barrier/liner (TaN/Ta) deposition process was used. Due to wafer tilting, there were TaN/Ta deposition differences seen across the simulated high aspect ratio metal stack. To minimize the number of variables in the model, Cu deposition was assumed to fill in an ideal manner (without voids). Forty-nine (49) corresponding 3D models were created at different locations on the wafer, to reflect differences in tilting due to wafer warping. Next, electrical simulation was completed on these 3D models to monitor metal line resistance at each location. Serpentine metal line patterns were built into the model, to help simulate the projected electrical performance on the warped wafer at different points on the same radius, and across different directions on the wafer (Figure 2).

Figure 2 – Techniques to Identify and Correct Asymmetric Wafer Map Defects Caused by Design and Process Errors

Using only incoming structure and process behavior, we can develop a behavioral process model and extend our device performance predictions and behavioral trend analysis outside of our proposed process window range. In the case of complicated processes with more than one mechanism or behavior, we can split processes into several steps and develop models for each individual process step. There will be phenomena or behavior in manufacturing that can’t be fully captured by this type of process modeling, but these models provide useful insight during process window development.

Results

Of the 49 3D models, the models on the far edge of the wafer were heavily tilted by wafer warpage. Interestingly, not all of the models at the same wafer radius exhibited the same behavior. This was due to the metal pattern design. With anisotropic deposition into high aspect ratio trenches, deposition in specific directions was blocked at certain locations in the trenches (depending upon trench depth and tilt angle). This affected both the device structure and electrical behavior at different locations on the wafer.

Since the metal lines were extending across the x-axis, there were minimal differences seen when tilting the wafer across the x-axis in our model. X-axis tilting created only a small difference in thickness of the Ta/TaN relative to the Cu. However, when the wafer was tilted in the y-axis using our model, the high aspect ratio wall blocked Ta/TaN deposition due to the deposition angle. This lowered the volume of Ta/TaN deposition relative to Cu, which decreased the metal resistance and placed the resistance outside of our design specification.
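To build intuition for the mechanism just described, here is a deliberately crude toy model rather than the SEMulator3D flow, with every number an assumption: treat the damascene line as a Cu core in parallel with the far more resistive Ta/TaN liner. Thinning the liner on the shadowed sidewall leaves more of the fixed trench cross-section to Cu, nudging the line resistance downward, in the same direction the simulation reported.

```python
# Toy model (NOT the SEMulator3D flow): resistance of a Cu line whose Ta/TaN
# liner is thinned on one sidewall because wafer tilt shadowed the anisotropic
# deposition. Less high-resistivity liner in a fixed trench cross-section means
# more Cu, so the line resistance drops. All numbers are illustrative.
RHO_CU = 1.7e-8   # ohm*m, bulk copper (assumed; thin-film values run higher)
RHO_TA = 1.8e-7   # ohm*m, rough Ta/TaN liner resistivity (assumed)

def line_resistance(width_nm, height_nm, liner_left_nm, liner_right_nm,
                    liner_bottom_nm, length_um=100.0):
    """Liner and Cu fill treated as parallel conductors along the line."""
    nm = 1e-9
    w, h = width_nm * nm, height_nm * nm
    a_total = w * h
    a_liner = (liner_left_nm + liner_right_nm) * nm * h + liner_bottom_nm * nm * w
    a_cu = a_total - a_liner
    conductance_per_m = a_cu / RHO_CU + a_liner / RHO_TA
    return (length_um * 1e-6) / conductance_per_m

nominal = line_resistance(40, 120, 3, 3, 3)   # untilted: 3 nm liner everywhere
tilted = line_resistance(40, 120, 3, 1, 3)    # y-tilt shadows one sidewall
print(f"nominal: {nominal:.0f} ohm, tilted: {tilted:.0f} ohm")
```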

X-axis wafer tilting had little influence on the device structure. The resistance on the far edge of the x-axis did not significantly change and remained in-spec. Y-axis wafer tilting had a more significant influence on the device structure. The resistance on the far edge of the y-axis was outside of our electrical specification (Figure 3).

Figure 3 – Techniques to Identify and Correct Asymmetric Wafer Map Defects Caused by Design and Process Errors

Conclusion

Even though wafer warpage occurs in a circular manner due to accumulated stress, unexpected structural failures can occur in different radial directions on the wafer due to variations in pattern design and process behavior across the wafer. From this study, we demonstrated that asymmetric structures caused by wafer warping can create top-bottom or left-right wafer performance differences, even though processes have been uniformly applied in a circular distribution across the wafer.

Process simulation can be used to better understand structural failures that can cause performance variability at different wafer locations. A better understanding of these structural failure mechanisms can help engineers improve overall wafer yield by taking corrective action (such as performing line scanning at specific wafer locations) or by adjusting specific process windows to minimize asymmetric wafer defects.

The post Techniques to Identify and Correct Asymmetric Wafer Map Defects Caused by Design and Process Errors appeared first on ELE Times.

EFFECT Photonics raises $38m in Series D funding

Semiconductor today - Thu, 03/21/2024 - 13:04
EFFECT Photonics b.v. – a spin-off from the Technical University of Eindhoven (TU/e) in The Netherlands – has secured $38m in a Series D funding round, led by Innovation Industries Strategic Partners Fund, backed by Dutch pension funds PMT and PME, along with co-investor Invest-NL Deep Tech Fund and participation from other existing investors...

Executive Blog – Companies that Embrace Digital Transformation Have More Resilient Design and Supply Chains

ELE Times - Thu, 03/21/2024 - 12:59

Sailesh Chittipeddi | Executive Vice President Operations | Renesas

Digital transformation has evolved quickly from a conceptual phase to a semiconductor industry change agent. The rapid uptake of AI-enhanced product development is only accelerating this transformation and is further influenced by two connected trends: the movement of Moore’s Law from transistor scaling to system-level scaling, and the relatively recent redistribution of the global electronics supply chain due to the COVID-19 pandemic.

I spoke on this subject earlier this month at the Industry Strategy Symposium 2024 in Half Moon Bay, California, where leaders from across the chip industry gather annually to share their insights on technology and trend drivers and what they could mean for our respective businesses.

Between the early 1970s and around 2005, increased chip performance was largely a function of clock frequency improvements driven by advances in lithography, transistor density, and energy efficiency. With today’s transistor counts (and die sizes), clock frequencies are limited by interconnect delays rather than by transistor performance. To overcome this challenge, designers moved to multi-core designs that increase system performance without a corresponding blow-up in energy consumption. Novel packaging techniques such as chiplets and multi-chip modules are helping further improve system performance, particularly in AI chips.

A single chip package may comprise multiple chiplets, each housing specific functions such as high-performance logic elements, AI accelerators, high-bandwidth DDR memory, and high-speed peripherals. Very often, each of these components is sourced from a different fab, a trend that has resulted in a fragmented global supply chain. This creates its own set of challenges, as die from multiple fabs must be integrated into a package or system that must then be thoroughly tested. Test failures at this stage have enormous financial consequences. These challenges require a “shift left” mindset in product development. The shift-left mentality has major ramifications for how we, as an industry, should be managing our supply chains by moving the heavy emphasis from architecture/design to final system testing and quality.

Supply chain challenges during the COVID pandemic have resulted in further decentralization of the supply chain components. To illustrate the scale of the change underway, consider that between 2022 and December 2024 construction began on 93 wafer fabs around the world. Compare that to the global construction of automated test facilities: in 2021 alone, the industry broke ground on 484 back-end test sites, which provides a measure of how committed the chip sector is to driving resiliency across the manufacturing landscape.

The Role of AI in Semiconductor Design and Manufacture

So, where does AI come into the picture?

A key area in which AI will exert its influence is the shift from an analytic to a predictive model. Today, we wait to detect a problem and then look at past data to identify the root cause of the problem and prevent it from reoccurring. This inefficient approach adds time, cost, unpredictability, and waste to the supply chain. AI, on the other hand, allows us to examine current data to predict future outcomes.

Instead of using spreadsheets to analyze old data, we build AI models that production engineers continuously train with new data. This “new” data is no longer merely a set of numbers or measurements but includes unstructured data such as die photos, equipment noise, time series sensor data, and videos to make better predictions.

In the end, it’s about pulling actionable information from a sea of data points. In other words, data without action is mostly useless. Why am I driving this point home? Because today, 90 percent of data created by enterprises is never used. It’s dark data. And when it comes to AI implementations, 46 percent never make it from pilot to production because the complexity of the programs is not scoped appropriately.

Despite these challenges, equipment makers are already starting to implement digital transformation techniques into their product development processes. The benefits are palpable. Research from Boston Consulting Group found that companies that have built resiliency into their supply and design chains recovered from COVID-related downturns twice as fast as companies that have yet to embrace digital transformation.

At Renesas, we acquired a company called Reality AI whose technology generates compact machine learning models that run on a microcontroller or microprocessor. This provides the unique ability to quickly detect deviations from normal patterns that may cause equipment problems. It allows manufacturing facilities to schedule preventive maintenance or minimize downtime associated with sudden equipment failure.

Digital Transformation Is Future-Proofing Our Industry

Digital transformation with AI is key to business success today. As the semiconductor industry undergoes a major evolution – embracing system-level design and adapting to a changing global supply chain – digital transformation and the shift left approach are powerful tools that deliver on two fronts.

The first is a productivity increase that comes from optimized tools and design processes. The closer you are to where the failure is likely to occur, the more quickly you learn and the more quickly you can fix things.

Second, and perhaps most importantly, digital transformation solves one of the biggest problems the industry has with chip design – the availability of talent. When we reduce the time taken to design a chip, we’re making our engineers far more efficient than they would be otherwise, which is increasingly important as the semiconductor industry demographic skews older.

The post Executive Blog – Companies that Embrace Digital Transformation Have More Resilient Design and Supply Chains appeared first on ELE Times.

Network RTK vs PPP-RTK: an insight into real-world performance

ELE Times - Thu, 03/21/2024 - 12:43

By Patty Felts, Product Marketing Manager, Product Center Services

Australian automation and positioning technology provider conducts static and kinematic tests

Locating people, animals, or objects on Earth with high precision requires the use of GNSS receivers and the support of network RTK correction services that account for errors caused by the atmosphere, satellite clock drift, and signal delays.

Three standard approaches to correct these errors are Real Time Kinematic (RTK), Precise Point Positioning (PPP) GNSS correction services, and a combination of the two, PPP-RTK. Beyond these, a pairing device such as a survey-grade GNSS receiver or a mass-market smart antenna is also required to enhance positioning accuracy. Combining any of these approaches with one device will optimize the positioning accuracy of the end-use application.

Many GNSS navigation applications require high accuracy. The accuracy of survey-grade GNSS receivers exceeds what mass-market smart antennas can provide. Of course, this comes at a price. Still, several high-precision GNSS navigation applications can do well with the accuracy offered by mass-market smart antennas. Examples include transportation, e-mobility, IoT use cases, and field robotics. Designers aim to equip devices with reliable, high-precision positioning at a reasonable cost.

GNSS users can verify these setups by hitting the road and testing them in real-world situations, which enables them to understand and differentiate their capabilities.

Aptella (formerly branded as Position Partners), an Australasian provider of automation and positioning technology solutions, had the opportunity to test the capabilities of network RTK vs PPP-RTK GNSS correction services and present the findings to their client.

We will discuss the findings, but as a first step, let us review how the RTK, PPP, and PPP-RTK approaches operate, the equipment needed, and the participants in this exercise.

Network RTK, Precise Point Positioning GNSS, and PPP-RTK

The mentioned correction approaches follow different paths. RTK GNSS correction services calculate and correct GNSS errors by comparing satellite signals from one or more reference stations. Any errors detected are then transmitted using IP-based communications, which can be reliable beyond a radius of 30 km from the nearest base station. Network RTK typically requires bi-directional communication between the GNSS receiver and the service, making the solution more challenging to scale. This approach can provide centimeter-level positioning accuracy in seconds.

Precise Point Positioning GNSS correction services operate differently. They broadcast a GNSS error model valid over large geographic regions. Because this service requires only unidirectional communication (IP-based or via satellite L-band), it’s more scalable to multiple users, unlike RTK.

PPP high-precision positioning takes between three minutes and half an hour to provide a position estimate with an accuracy of less than 10 cm. Static applications such as surveying or mapping typically use this solution, but it can be a poor fit for dynamic applications such as unmanned aerial vehicles or mobile robotics.

More recently, both approaches have been combined into what is known as PPP-RTK GNSS correction services (or State Space Representation (SSR) correction services). This combination provides the accuracy of the RTK network and its fast initialization times with the broadcast nature of Precise Point Positioning. Similar to PPP, the approach is based on a model of GNSS errors that has broad geographic validity. Once a GNSS receiver has access to these PPP-RTK correction data through one-way communication, it computes the GNSS receiver position.

Survey-grade GNSS receiver versus mass-market smart antenna

Survey-grade receivers are devices typically used for geodetic surveying and mapping applications. They are designed to provide highly accurate and precise positioning information for civil engineering, construction, GIS data, land development, mining, and environmental management.

Today’s modules can access data from multiple satellite constellations and include network RTK support. These devices are typically very expensive, costing thousands of dollars each, because they are highly precise, with accuracies ranging from centimeters to millimeters.

Mass-market smart antennas are specialized receiver/antenna-integrated devices designed to receive signals from satellite constellations and GNSS correction services right out of the box. Smart antennas capture and process raw data to determine precise locations. Standalone GNSS antennas don’t have a precision rating, as this depends on the integrated GNSS receiver and correction service to which the antennas are coupled.

While mass-market smart antennas are more affordable than survey-grade GNSS receivers, there is a corresponding performance trade-off, with accuracies ranging from a few centimeters to decimeters.

The following tests used a survey-grade GNSS receiver to verify control coordinates in static mode and compare RTK versus PPP-RTK results in the kinematic mode. The GNSS smart antenna was also employed as a pairing device for these static and kinematic tests.

Participating companies

Aptella is the company that conducted the performance test and presented the results to their client. However, the participation of four other companies was crucial.

AllDayRTK operates Australia’s highest-density network of Continuously Operating Reference Stations (CORS). Its network RTK correction services were used to compare with PPP-RTK.

u-blox’s PointPerfect provided the PPP-RTK GNSS correction services used in these tests.

Both correction services were coupled with a survey GNSS receiver, Topcon HiPer VR, and a mass-market smart antenna, the Tallysman TW5790.

Testing two correction services solutions

In the Australian city of Melbourne, Aptella conducted static and kinematic tests with several objectives in mind:

  • Test RTK and PPP-RTK GNSS corrections using a mass-market GNSS device like the Tallysman TW5790.
  • Demonstrate the capabilities of the Tallysman smart antenna coupled with PPP-RTK corrections.
  • Evaluate PointPerfect PPP-RTK GNSS corrections and assess “real world” results against published specifications.
  • Determine whether these specifications meet mass-market applications and e-transport safety requirements of 30 cm @ 95%.
  • Provide insight into use cases and applications suitable for PPP-RTK corrections.
Static results

Figure 1: GNSS antenna and survey-grade receiver

These tests allowed experts to compare the accuracy of RTK and PPP-RTK GNSS correction services supported by a mass-market Tallysman smart antenna.  They were also able to verify the PPP-RTK performance specifications published by u-blox.

First, a survey-grade Topcon HiPer VR GNSS receiver was used to verify the control coordinates in static mode. Once these were obtained, the Tallysman smart antenna took its place.

The table below summarizes representative results from both methods, PPP-RTK and RTK. Horizontal (planar) accuracy is similar for both, while vertical accuracy is poorer with PPP-RTK than with RTK.

The horizontal accuracy level of RTK and PPP-RTK is in the centimeter range. In contrast, RTK maintains a centimeter range at the vertical accuracy level, but the PPP-RTK correction errors were in the decimeter range.

GNSS augmentation | Horizontal error (m) | Vertical error (m) | Horizontal 95% (m) | Vertical 95% (m)
RTK AllDayRTK | 0.009 | 0.010 | 0.012 | 0.018
PointPerfect PPP-RTK | 0.048 | 0.080 | 0.041 | 0.074

Furthermore, the accuracy of the mass-market device is within published specifications and meets the 30 cm @ 95% requirement for horizontal (plan) location even when obstructed. Still, measured heights were less accurate than the 2D horizontal coordinates. Absolute horizontal location accuracy meets the mass-market requirement of 30 cm @ 95%, although RTK is more accurate at the vertical level than PPP-RTK.
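For readers reproducing this kind of comparison, the “95%” columns in the table above are typically just the 95th percentile of the per-epoch position errors relative to the surveyed control coordinates. Here is a minimal sketch of that computation, using made-up residuals rather than the Aptella data:

```python
# Minimal sketch of how a "horizontal 95%" figure is derived: take the 2D
# distance of every logged fix from the surveyed control point and report the
# 95th percentile. The residuals below are synthetic, for illustration only.
import math
import random

random.seed(1)
# Pretend east/north residuals (metres) from a few hundred logged epochs.
residuals = [(random.gauss(0, 0.02), random.gauss(0, 0.02)) for _ in range(500)]

horizontal_errors = sorted(math.hypot(e, n) for e, n in residuals)
p95 = horizontal_errors[int(0.95 * len(horizontal_errors)) - 1]

print(f"mean horizontal error: {sum(horizontal_errors) / len(horizontal_errors):.3f} m")
print(f"horizontal 95%:        {p95:.3f} m")
```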

Kinematic results

On the streets of Melbourne, Aptella experts tested RTK and PPP-RTK corrections operating in different kinematic modes with variable speeds, such as walking under open skies and driving in different environments.

The test setup using an RTK network consisted of AllDayRTK corrections and a survey-grade GNSS receiver. On the other hand, the PPP-RTK test setup was supported by u-blox PointPerfect and the Tallysman smart antenna. The antennas for both setups were mounted on the roof of the vehicle and driven through different routes to encounter various GNSS conditions.

Walking in the open sky: This test involved a walk along the riverbank. Comparing the results, both were similar, proving that PPP-RTK is well-suited for mass-market applications.

Figure 2: Walking tests with RTK and PPP-RTK

On-road driving with varying conditions: This test consisted of driving on Melbourne roads in different conditions, including open skies and partial or total obstructions to GNSS. The route included driving under bridges and areas with multipath effects. Vegetation in the area at the start of the test prevented the smart antenna’s IMU from initializing. No IMU/dead reckoning capability was used during the drive test.

The results obtained while the vehicle moved through a long tunnel under the railroad tracks were of utmost importance. In this situation, the PPP-RTK approach reported a position even in an adverse environment. In addition, PPP-RTK reconverged shortly after RTK.

Figure 3: RTK vs PPP-RTK under a railway bridge in Melbourne

Another revealing result of this second test was that the Tallysman smart antenna didn’t seem to deviate from its path when passing under short bridges.

Figure 4: RTK vs PPP-RTK under a short bridge

Driving through an outage: The outage test took place in an extended, challenging environment for GNSS. This occurred when the car drove under the pedestrian overpass at the Melbourne Cricket Ground. The PPP-RTK solution maintained the travel trajectory and effectively tracked the route (in yellow). On the other hand, the RTK network solution reported positions off the road and on the railway tracks. In this outage condition, RTK took a long time to reconverge to a fixed solution.

Figure 5: Correction services tests under a long structure

Open-sky driving: The final on-road test was conducted in an open-sky environment where the two setups performed similarly. They provided lane-level accuracy and suitability for mass-market applications. However, ground truthing and further testing are required to fully evaluate the accuracy and reliability of PPP-RTK in these conditions.

Figure 6: Correction services comparison driving through Melbourne

Final remarks

The five static and dynamic tests conducted by Aptella were instrumental in assessing the effectiveness of different setups to determine the position of stationary and moving entities.

  • From the static test, Aptella concluded that PPP-RTK, coupled with the Tallysman smart antenna, provides centimeter-level horizontal accuracy and performs similarly to RTK. However, this was not the case for vertical accuracy, with PPP-RTK at the decimeter level.
  • Regarding the kinematic tests, Aptella obtained significant results, particularly when the environment impeded communication with GNSS. Even without IMU or dead reckoning, the PPP-RTK performed well with lane-level tracking. With short outages such as railway bridges and underpasses, PPP-RTK maintained an acceptable trajectory, while RTK required a long time to reconverge after emerging from these challenging conditions.
  • Overall, Aptella has demonstrated that the PPP-RTK and GNSS smart antenna combination delivers results suitable for mass-market applications requiring centimeter-level horizontal accuracy.

As mentioned above, survey-grade devices are costly although highly accurate. A combination of survey-grade GNSS receiver and network RTK correction service is recommended in geodetic surveying use cases that require high height accuracy.

Conversely, mass-market smart antenna devices using PPP-RTK corrections are less expensive but also less accurate. Nevertheless, they are well suited for static applications that don’t require GNSS heights at survey grade.

For many high-precision navigation applications, such as transportation, e-mobility, and mobile robotics, PPP-RTK is sufficient to achieve the level of performance these end applications require. The relative affordability of smart antenna devices, combined with PPP-RTK’s ability to broadcast a single stream of corrections to all endpoints, makes it easier to scale from a few prototypes to large fleets of mobile IoT devices.

The post Network RTK vs PPP-RTK: an insight into real-world performance appeared first on ELE Times.

Unparalleled capacitance for miniaturized designs: Panasonic Industry launches new ZL Series Hybrid capacitors

ELE Times - Thu, 03/21/2024 - 12:00

The compact and AEC-Q200-compliant EEH-ZL Series stands out with industry-leading capacitance and high Ripple Current specs

The ZL series is the latest offspring of Panasonic Industry’s Electrolytic Polymer Hybrid capacitor portfolio. Relative to its compact dimensions, it offers unrivalled capacitance values – and is hence likely to make a remarkable impression on the market:

Capacitance: For five case sizes from ø5×5.8 mm to ø10×10.2 mm, the ZL series offers the largest capacitance in the industry and exceeds the values of competitor standard products by approximately 170%.

Ripple current performance exceeds the competitor products’ specs, alongside lower ESR within the same case size.
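The reason ripple current rating and ESR belong in the same sentence is that a capacitor’s self-heating is roughly I²rms x ESR, so a part that tolerates more ripple at lower ESR runs cooler in the same slot. A quick illustrative calculation follows; the figures are assumptions for comparison, not EEH-ZL datasheet values.

```python
# Illustrative self-heating estimate: P = Irms^2 * ESR.
# The ripple currents and ESR values below are assumptions for comparison,
# not EEH-ZL datasheet figures.
def self_heating_mw(ripple_irms_a, esr_mohm):
    return (ripple_irms_a ** 2) * esr_mohm  # A^2 * mohm = mW

print(self_heating_mw(2.0, 30))  # 120 mW dissipated: 2 A ripple into 30 mohm
print(self_heating_mw(2.0, 20))  #  80 mW with a lower-ESR part at the same ripple
```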

The new ZL is AEC-Q200 compliant, enforcing strict quality control standards, which is particularly crucial for the automotive industry. It boasts high-temperature resistance and is guaranteed to operate at 125°C and 135°C for 4000 hours. With a focus on durability, the ZL series offers vibration-proof variants capable of withstanding shocks up to 30G, making it a reliable choice.

In summary, this next-generation, RoHS qualified Hybrid Capacitor stands as the ultimate solution for automotive and industrial applications, where compact dimensions are an essential prerequisite.

Tailored for use in various automotive components including water pumps, oil pumps, cooling fans, high-current DC to DC converters, and advanced driver-assistance systems (ADAS), it also proves invaluable in industrial settings such as inverter power supplies for robotics, cooling fans, and solar power systems. Furthermore, it serves a pivotal role in industrial power supplies for both DC and AC circuits, spanning from inverters to rectifiers, and finds essential application in communication infrastructure equipment such as base stations, servers, routers, and switches.

The post Unparalleled capacitance for miniaturized designs: Panasonic Industry launches new ZL Series Hybrid capacitors appeared first on ELE Times.

Silicon carbide power device market to grow to $5.33bn in 2026

Semiconductor today - Thu, 03/21/2024 - 11:59
Benefitting from robust demand from downstream applications, market research firm TrendForce forecasts that the silicon carbide (SiC) power device market will grow to $5.33bn by 2026, with mainstream applications still highly reliant on electric vehicles and renewable energy sources...

Silicon carbide (SiC) counterviews at APEC 2024

EDN Network - Thu, 03/21/2024 - 11:06

At this year’s APEC in Long Beach, California, Wolfspeed CEO Gregg Lowe’s speech was a major highlight of the conference program. Lowe, the chief of the only vertically integrated silicon carbide (SiC) company and cheerleader of this power electronics technology, didn’t disappoint.

In his plenary presentation, “The Drive for Silicon Carbide – A Look Back and the Road Ahead – APEC 2024,” he called SiC a market hitting a major inflection point. “It’s a story of four decades of American ingenuity at work, and it’s safe to say that the transition from silicon to SiC is unstoppable.”

Figure 1 Lowe: The future of this amazing technology is only beginning to dawn on the world at large, and within the next decade or so, we will look around and wonder how we lived, traveled, and worked without it. Source: APEC

Lowe told the APEC 2024 attendees that the demand for SiC is exploding, and so is the number of applications using this wide bandgap (WBG) technology. “Technology transitions like this create moments and memories that last a lifetime, and that’s where we are with SiC right now.”

Interestingly, just before Lowe’s presentation, Balu Balakrishnan, chairman and CEO of Power Integrations, raised questions about the viability of SiC technology during his presentation titled “Innovating for Sustainability and Profitability”.

Balakrishnan’s counterviews

While telling the Power Integrations’ gallium nitride (GaN) story, Balakrishnan narrated how his company started heavily investing in SiC 15 years ago and spent $65 million to develop this WBG technology. “One day, sitting in my office, while doing the math, I realized this isn’t going to work for us because of the amount of energy it takes to manufacture SiC and that the cost of SiC is so much more than silicon,” he said.

“This technology will never be as cost-effective as silicon despite its better performance because it’s such a high-temperature material, which takes a humongous amount of energy,” Balakrishnan added. “It requires expensive equipment because you manufacture SiC at very high temperatures.”

The next day, Power Integrations cancelled its SiC program and wrote off $65 million. “We decided to discontinue not because of technology, but because we believe it’s not sustainable and it’s not going to be cost-effective,” he said. “That day, we switched over to GaN and doubled down on it because it’s low-temperature, operates at temperatures similar to silicon, and mostly uses the same equipment as silicon.”

Figure 2 Balakrishnan: GaN will eventually be less expensive than silicon for high-voltage switches. Source: APEC

So, why does Power Integrations still have SiC product offerings? Balakrishnan acknowledged that SiC can go to higher voltages and power levels and is a more mature technology than GaN because it started earlier.

“There are certain applications where SiC is very attractive today, but I’ll dare to say that GaN will get there sometime in the future,” he added. “Fundamentally, there isn’t anything wrong with taking GaN to higher voltages and power levels.” He mentioned a 1,200 V GaN device Power Integrations recently announced and claimed that his company plans to announce another GaN device with an even higher voltage very soon.

Balakrishnan recognized that there are problems to be solved. “But these challenges require R&D efforts rather than a technology breakthrough,” he said. “We believe that GaN will get to the point where it’ll be very competitive with SiC while being far less expensive to build.”

Lowe’s defense

In his speech, Lowe also recognized the SiC-related cost and manufacturability issues, calling them near-term turbulence. However, he was optimistic that undersupply vs demand issues encompassing crystal boules, substrate capability, wafering, and epi will be resolved by the end of this decade.

“We will continue to realise better economic value with SiC by moving from 150-mm to 200-mm wafers, which increases the area by 1.7x and decreases the cost by about 40%,” he said. His hopes for resolving cost and manufacturability issues also seemed to lie in a huge investment in SiC technology and the automotive industry as a major catalyst.
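Lowe’s 1.7x figure is easy to verify: wafer area scales with the square of the diameter, so 200-mm wafers offer (200/150)² ≈ 1.78x the area of 150-mm wafers before edge exclusion, which is where the commonly rounded 1.7x comes from. A quick check:

```python
# Sanity check of the 150 mm -> 200 mm wafer area claim (edge exclusion ignored).
import math

def wafer_area_cm2(diameter_mm):
    radius_cm = diameter_mm / 10 / 2
    return math.pi * radius_cm ** 2

ratio = wafer_area_cm2(200) / wafer_area_cm2(150)
print(f"area ratio, 200 mm vs 150 mm: {ratio:.2f}x")  # ~1.78x
```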

For a reality check on these counterviews about the viability of SiC, a company dealing with both SiC and GaN businesses could offer a balanced perspective. Hence a visit to Navitas’ booth at APEC 2024, where the company’s VP of corporate marketing, Stephen Oliver, explained the evolution of SiC wafer costs.

He said a 6-inch SiC wafer from Cree cost nearly $3,000 in 2018. Fast forward to 2024, a 7-inch wafer from Wolfspeed (renamed from Cree) costs about $850. Moving forward, Oliver envisions that the cost could come down to $400 by 2028 while being built on 12-inch to 15-inch SiC wafers.

Navitas, a pioneer in the GaN space, acquired startup GeneSiC in 2022 to cater to both WBG technologies. At the show, in addition to Gen-4 GaNSense Half-Bridge ICs and GaNSafe, which incorporates circuit protection functionality, Navitas also displayed Gen-3 Fast SiC power FETs.

In the final analysis, Oliver’s viewpoint about SiC tilted toward Lowe’s pragmatism regarding SiC’s shift from 150-mm to 200-mm wafers. Recent technology history is a testament to how economies of scale have been able to manage cost and manufacturability issues, and that’s what the SiC camp is counting on.

A huge investment in SiC device innovation and the backing of the automotive industry should also be helpful along the way.

Related Content


The post Silicon carbide (SiC) counterviews at APEC 2024 appeared first on EDN.

Designing a Battery Pack That’s Right For Your Application

AAC - Thu, 03/21/2024 - 01:00
Learn how to design the battery array that best fits your system’s power requirements. This article will help you interpret battery specifications, estimate operating life, and understand the relationship between capacity, load, and environment.

Intel Clocks ‘World’s Fastest Desktop Processor’ at 6.2 GHz Frequency

AAC - Wed, 03/20/2024 - 19:00
After months of waiting, Intel has finally released its flagship desktop processor, along with its core specifications.

Veeco releases fifth sustainability report

Semiconductor today - Wed, 03/20/2024 - 18:32
Epitaxial deposition and process equipment maker Veeco Instruments Inc of Plainview, NY, USA has released its fifth Sustainability Report highlighting its progress towards environmental, social and governance (ESG) initiatives. Reflecting 2023 data, this Sustainability Report demonstrates continued progress and commitment to executing a robust sustainability strategy, says the firm...

Riber wins US order for Compact 21 research MBE system

Semiconductor today - Wed, 03/20/2024 - 16:28
Riber S.A. of Bezons, France — which makes molecular beam epitaxy (MBE) systems as well as evaporation sources — has received an order from a US customer for a Compact 21 research MBE system, for delivery in 2024, to be used for the development of III-V semiconductor materials and devices for microelectronics and photonics...

MACOM awarded by Northrop Grumman for Supplier Excellence

Semiconductor today - Wed, 03/20/2024 - 16:23
MACOM Technology Solutions Inc of Lowell, MA, USA (which designs and makes RF, microwave, analog and mixed-signal and optical semiconductor technologies) has received two awards at US-based aerospace & defense technology company Northrop Grumman Corp’s Supplier Excellence Awards...

A self-testing GPIO

EDN Network - Срд, 03/20/2024 - 15:49

General purpose input-output (GPIO) pins are the simplest peripherals.

The link to an object under control (OUC) may become unreliable for many reasons: loss of contact, a short circuit, temperature stress, or vapor condensation on the components. Sometimes a more dependable link can be established with a popular bridge chip simply by exploiting the possibilities the chip itself provides.

Wow the engineering world with your unique design: Design Ideas Submission Guide

A bridge such as NXP’s SC18IM700 usually provides a number of GPIOs, which are handy for implementing such a test. These GPIOs retain all their functionality and can be used as usual after the test.

To make the test possible, the chip must have more than one GPIO. That way the pins can be paired, giving the members of each pair the opportunity to poll each other.

Since GPIO activity during the test may disturb the regular functions of the OUC, one of the GPIO pins can be assigned to temporarily inhibit those functions. Very often, when the object is sufficiently inertial (slow to respond), this inhibition can be omitted.

Figure 1 shows how the idea can be implemented in the case of the SC18IM700 UART-I2C bridge.

Figure 1: Self-testing GPIO using the SC18IM700 UART-I2C bridge.

The values of resistors R1…R4 must be large enough not to draw an unacceptably large current; on the other hand, they must still deliver sufficient voltage for a logic “1” at the input. The values shown in Figure 1 are suitable for most applications but may need to be adjusted.

Difficulties may arise only with the quasi-bidirectional output configuration, since in this mode the pin is only weakly driven when the port outputs a logic HIGH. The problem occurs when the resistance of the corresponding OUC input is too low.
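
A crude way to see when this becomes a problem is to model the weakly driven HIGH as a large effective source resistance and check the resulting divider against the input threshold. This is only a sketch: every value below is a placeholder and should be replaced with numbers from the SC18IM700 datasheet and the actual board.

# Back-of-the-envelope check of the logic HIGH seen by the polling input
# when the driving pin is quasi-bidirectional. All values are assumptions.
VDD = 3.3            # supply voltage, V (assumed)
R_WEAK = 100e3       # effective weak pull-up of the quasi-bidirectional driver, ohm (assumed)
R_SER = 10e3         # series resistor between the paired pins, ohm (assumed)
R_OUC = 47e3         # OUC input resistance to ground, ohm (assumed)
V_IH = 0.7 * VDD     # minimum voltage read as logic '1' (assumed threshold)

v_node = VDD * R_OUC / (R_WEAK + R_SER + R_OUC)
print(f"node sits at {v_node:.2f} V, needs >= {V_IH:.2f} V ->",
      "OK" if v_node >= V_IH else "too low, the test would fail")

With these example numbers the node only reaches about 1 V, well below the threshold, which is exactly the failure mode described above; a push-pull configuration or a higher-impedance OUC input avoids it.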

If the UART data rate is too high for the OUC-related capacitance to charge properly during the test, the rate can be decreased, or the corresponding resistor values can be reduced.
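
The timing side can be sanity-checked in the same spirit: the node must settle within the gap between driving the port and reading it back, which at UART speeds is a handful of byte times. Again, every number below is a placeholder, not a value taken from Figure 1.

# Rough settling-time check; all values are assumptions.
R_DRIVE = 10e3       # effective resistance driving the node, ohm (assumed)
C_NODE = 1e-9        # stray plus OUC capacitance on the node, F (assumed)
BAUD = 115200        # UART baud rate (assumed)
CMD_BYTES = 2        # UART bytes sent before the port is actually sampled (assumed)

settle = 5 * R_DRIVE * C_NODE       # roughly five time constants to settle
available = CMD_BYTES * 10 / BAUD   # ten bit times per UART byte (start + 8 data + stop)
print(f"settling ~{settle * 1e6:.0f} us, window ~{available * 1e6:.0f} us ->",
      "fine" if settle < available else "lower the baud rate or the resistor values")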

A sketch of the Python test subroutine follows:

PortConf1 = 0x02
PortConf2 = 0x03

def selfTest():
    data = 0b10011001
    bridge.writeRegister(PortConf1, data)  # PortConfig1
    data = 0b10100101
    bridge.writeRegister(PortConf2, data)  # PortConfig2
    # --- write 1
    cc = 0b11001100
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()  # 0b11111111
    if aa != 0b11111111: return False  # check
    # --- write 0
    cc = 0b00000000
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b00000000: return False  # check
    # partners swap
    data = 0b01100110
    bridge.writeRegister(PortConf1, data)  # PortConfig1
    data = 0b01011010
    bridge.writeRegister(PortConf2, data)  # PortConfig2
    # --- write 1
    cc = 0b00110011
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b11111111: return False  # check
    # --- write 0
    cc = 0b00000000
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b00000000: return False  # check
    # check quasi-bidirectional mode
    data = 0b01000100
    bridge.writeRegister(PortConf1, data)  # PortConfig1
    data = 0b01010000
    bridge.writeRegister(PortConf2, data)  # PortConfig2
    # --- write 1
    cc = 0b00110011
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b11111111: return False  # check
    # --- write 0
    cc = 0b00000000
    bridge.writeGPIO(cc)
    aa = bridge.readGPIO()
    if aa != 0b00000000: return False  # check
    return True
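
The listing assumes a bridge object that already knows how to talk to the SC18IM700 over its UART. The wrapper below is a minimal sketch of what that object could look like using pyserial; the ASCII command framing ('W' reg data 'P' to write an internal register, 'O' data 'P' and 'I' 'P' for the GPIO port) and the serial-port name are assumptions that should be verified against the SC18IM700 datasheet and the actual setup.

import serial  # pyserial

class SC18IM700Bridge:
    # Minimal UART front-end; verify the command framing against the datasheet.
    def __init__(self, port="/dev/ttyUSB0", baud=9600):  # port name is a placeholder
        self.uart = serial.Serial(port, baud, timeout=1)

    def writeRegister(self, reg, value):
        # 'W' <reg> <data> 'P' is assumed to write an internal register (e.g., PortConf1)
        self.uart.write(bytes([ord('W'), reg, value, ord('P')]))

    def writeGPIO(self, value):
        # 'O' <data> 'P' is assumed to drive the GPIO port
        self.uart.write(bytes([ord('O'), value, ord('P')]))

    def readGPIO(self):
        # 'I' 'P' is assumed to read the GPIO port back as a single byte
        self.uart.write(bytes([ord('I'), ord('P')]))
        return self.uart.read(1)[0]

bridge = SC18IM700Bridge()
print("GPIO self-test passed" if selfTest() else "GPIO self-test failed")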

Peter Demchenko studied math at the University of Vilnius and has worked in software development.


The post A self-testing GPIO appeared first on EDN.

DDS Option for high-speed AWGs generates up to 20 sine waves

ELE Times - Wed, 03/20/2024 - 14:20

20 independent sine waves up to 400 MHz can be controlled on one generator channel

Spectrum Instrumentation has released a new firmware option for its range of versatile 16-bit Arbitrary Waveform Generators (AWGs) with sampling rates up to 1.25 GS/s and bandwidths up to 400 MHz. The new option allows users to define 23 DDS cores per AWG card, which can be routed to the hardware output channels. Each DDS core (sine wave) can be programmed for frequency, amplitude, phase, frequency slope and amplitude slope. This enables, for example, the control of lasers through AODs and AOMs, as often used in quantum experiments, with just a few simple commands instead of large data-array calculations. The DDS output can be synchronized with external trigger events or by a programmable timer with a resolution of 6.4 ns.

DDS – Direct Digital Synthesis – is a method for generating arbitrary periodic sine waves from a single, fixed-frequency reference clock. It is a technique widely used in a variety of signal generation applications. The DDS functionality implemented on Spectrum Instrumentation’s AWGs is based on the principle of adding multiple ‘DDS cores’ to generate a multi-carrier (multi-tone) signal with each carrier having its own well-defined frequency, amplitude and phase.
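
As a rough illustration of that principle (plain NumPy, not Spectrum's firmware or SDK, with arbitrary example parameters): each core is essentially a phase accumulator advanced by a frequency word every sample, and the multi-tone output is simply the sum of the cores.

import numpy as np

FS = 1.25e9   # sample rate, S/s (example value)
N = 4096      # number of samples to generate (example value)

def dds_core(freq_hz, amp, phase_deg=0.0, freq_slope=0.0, amp_slope=0.0):
    # One DDS "core": linear frequency and amplitude ramps, phase accumulated per sample.
    t = np.arange(N) / FS
    f = freq_hz + freq_slope * t                  # Hz, linear frequency slope
    a = amp + amp_slope * t                       # linear amplitude slope
    phase = 2 * np.pi * np.cumsum(f) / FS + np.deg2rad(phase_deg)
    return a * np.sin(phase)

# Three independent carriers summed onto one output channel
signal = (dds_core(10e6, 0.5)
          + dds_core(25e6, 0.3, phase_deg=90.0)
          + dds_core(80e6, 0.2, freq_slope=1e12))  # 1 MHz/us frequency sweep

Because only the per-core parameters and their slopes change, far less data has to cross the bus than when streaming fully pre-computed waveforms, which is the point made below.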

Advantages of using DDS for arbitrary waveform generators

With the ability to switch between the normal AWG mode (which generates waveforms from pre-programmed data) and the DDS mode (which needs only a few commands to generate sine-wave carriers), the Spectrum AWGs are highly versatile and can be adapted to almost any application. In DDS mode, the AWG acts as a base for the multi-tone DDS. The unit’s built-in 4 GBytes of memory and fast DMA transfer mode then allow the streaming of DDS commands at a rate as high as 10 million commands per second. This unique capability provides the flexibility to perform user-defined slopes (e.g. s-shaped) as well as various modulation types (e.g. FM and AM) with simple, easy-to-use DDS commands.

DDS in Quantum Experiments

In DDS mode, only a few commands are needed to, for example, generate a sine wave (orange block), ramp up the frequency (blue block) and lower the amplitude (green block).

For years now, Spectrum AWGs have been successfully used worldwide in pioneering quantum research experiments. Since 2021, Spectrum Instrumentation has been part of the BMBF (German federal ministry of education and research) funding program ‘quantum technologies – from basic research to market’ as a member of the Rymax One consortium. The aim of this consortium is to build a Quantum Optimizer. The development of the DDS option was based on feedback from the consortium partners and other research institutes worldwide.

The flexibility and fast streaming mode of Spectrum’s AWGs, which also enable data to be streamed straight from a GPU, allow the control of qubits directly from a PC. While using an AWG in this way offers full control of the generated waveforms, the drawback is that huge amounts of data need to be calculated, which slows the critical decision-making loop. In contrast, using the versatile multi-tone DDS functionality greatly reduces the amount of data that must be transferred while still keeping full control. All the key functionality required for quantum research is built in. With just a single command, users can apply intrinsic dynamic linear slope functions to produce extremely smooth changes to frequency and amplitude.

DDS controls waveforms in Test, Measurement and Communications

In many kinds of testing systems, it is important to produce and readily control accurate waveforms. The DDS option provides an easy and programmable way for users to produce trains of waveforms, frequency sweeps or finely tuneable references of various frequencies and profiles. Applications that require the fast frequency switching and fine frequency tuning that DDS offers are widespread. They can be found in industrial, medical, and imaging systems, network analysis or even communication technology, where data is encoded using phase and frequency modulation on a carrier.

Availability of DDS option

23 different AWGs can use the new DDS firmware option. They offer 16-bit resolution, speeds up to 1.25 GS/s and up to 32 channels.

The DDS option is available now for the full range of M4i.66xx PCIe cards, M4x.66xx PXIe modules, portable LXI/Ethernet DN2.66x units and multi-channel desktop LXI/Ethernet DN6.66xx products. By simply performing a firmware update, all previously purchased 66xx series products can be equipped with the new firmware option. Programming can be done using the existing driver SDKs that are included in the delivery. Examples are available for Python, C++, MATLAB, LabVIEW and many more. The option is available now.

About Spectrum Instrumentation

Spectrum Instrumentation, founded in 1989, uses a unique modular concept to design and produce a wide range of more than 200 digitizers and generator products as PC-cards (PCIe and PXIe) and stand-alone Ethernet units (LXI). In over 30 years, Spectrum has gained customers all around the world, including many A-brand industry-leaders and practically all prestigious universities. The company is headquartered near Hamburg, Germany, known for its 5-year warranty and outstanding support that comes directly from the design engineers. More information about Spectrum can be found at www.spectrum-instrumentation.com

The post DDS Option for high-speed AWGs generates up to 20 sine waves appeared first on ELE Times.
