News from the world of micro- and nanoelectronics

Image sensor elevates smartphone HDR

EDN Network - Fri, 03/22/2024 - 15:42

Omnivision’s OV50K40 smartphone image sensor with TheiaCel technology achieves human eye-level high dynamic range (HDR) with a single exposure. Initially introduced in automotive image sensors, TheiaCel employs lateral overflow integration capacitors (LOFIC) to provide superior single-exposure HDR, regardless of lighting conditions.

The OV50K40 50-Mpixel image sensor features a 1.2-µm pixel in a 1/1.3-in. optical format. High gain and correlated multiple sampling enable optimal performance in low-light conditions. At 50 Mpixels, the sensor has a maximum image transfer rate of 30 fps. Using 4-cell pixel binning, the OV50K40 delivers 12.5 Mpixels at 120 fps, dropping to 60 fps in HDR mode but with a fourfold increase in sensitivity.

To achieve high-speed autofocus, the OV50K40 offers quad phase detection (QPD). This enables 2×2 phase detection autofocus across the sensor’s entire image array for 100% coverage. An on-chip QPD remosaic enables full 50-Mpixel Bayer output, 8K video, and 2x crop-zoom functionality.

The OV50K40 image sensor is now in mass production.

OV50K40 product page  

Omnivision



The post Image sensor elevates smartphone HDR appeared first on EDN.

Snapdragon SoC brings AI to more smartphones

EDN Network - Fri, 03/22/2024 - 15:42

Qualcomm’s Snapdragon 8s Gen 3 SoC offers select features of the high-end Snapdragon 8 Gen 3 for a wider range of premium Android smartphones. The less expensive 8s Gen 3 chip provides on-device generative AI and an always-sensing image signal processor (ISP).

The SoC’s AI engine supports multimodal AI models comprising up to 10 billion parameters, including large language models (LLMs) such as Baichuan-7B, Llama 2, Gemini Nano, and Zhipu ChatGLM. Its Spectra 18-bit triple cognitive ISP offers AI-powered features like photo expansion, which intelligently fills in content beyond a capture’s original aspect ratio.

The Snapdragon 8s Gen 3 is slightly slower than the Snapdragon 8 Gen 3, and it has one less performance core. The 8s variant employs an Arm Cortex-X4 prime core running at 3 GHz, along with four performance cores operating at 2.8 GHz and three efficiency cores clocked at 2 GHz.

Snapdragon 8s Gen 3 will be adopted by key smartphone OEMs, including Honor, iQOO, Realme, Redmi, and Xiaomi. The first devices powered by the 8s Gen 3 are expected as soon as this month.

Snapdragon 8s Gen 3 product page

Qualcomm Technologies



The post Snapdragon SoC brings AI to more smartphones appeared first on EDN.

Precision at all Altitudes for Aerospace: Addressing the Challenges of Additive Manufacturing in Aerospace Production.

ELE Times - Fri, 03/22/2024 - 13:26

The aerospace industry in India is one of the fastest growing sectors, with an increasingly strong domestic manufacturing base. To gain further competitive advantage, the implementation of new technologies such as additive manufacturing has been gaining importance in the recent past. While this method reduces the cost of building low-volume parts and enables the industry to push the limits of efficiency through extremely accurate and complex designs, the quality challenges posed by this new manufacturing process must also be thoroughly addressed. High-precision metrology solutions are not only an opportunity to optimize the manufacturing process but also offer valuable insight for material science and ensure the quality of the output.

Additive Manufacturing as an Opportunity in Aerospace

Air travel, a preferred mode of transportation, relies on aircraft parts meeting stringent quality standards. For instance, before a supplier commences production, up to 1,500 inspection features of a turbine blade must be verified, adhering to tight tolerance ranges at every production step. Beyond this challenge lie the vital maintenance and repair operations (MRO), which often involve replacing high-complexity, quality-critical, low-volume or single parts. Traditional manufacturing processes for MRO prove both time- and cost-intensive and cannot meet the demanded complexity and accuracy efficiently. Consequently, additive manufacturing, specifically 3D printing, is increasingly integrated into the aerospace production chain in India, positioning the industry as a pioneer in additive manufacturing innovation. However, the adoption of this technology brings its own challenges, which our experience suggests can be effectively addressed through high-quality metrology solutions.

Hitting the Brake: The Process and Challenges of Additive Manufacturing

Powder is the building block of additively manufactured parts. The particles are small, typically ranging from a few micrometers to tens of micrometers in diameter. Their size distribution and shape influence spreadability and hence the likelihood of defects occurring during the process. Defect density is also, among other aspects, a factor in the recycling and aging of the powder. A uniformly distributed powder bed is the essential basis for a stable and reliable additive manufacturing process; improper powder quality, powder rheology and process parameters can cause voids to form in the final structure. Unlike traditional manufacturing methods, the additive manufacturing process requires powders to be melted layer by layer during the build. Melt temperatures and process parameters greatly affect the crystallography and, as a consequence, part properties. After printing, the part is still attached to the build plate. It is then heat-treated for stress relief and removed from the build plate with a band saw or wire EDM. Some parts are heat-treated again to modify the microstructure. These steps can influence the characteristics and accuracy of the part, impacting quality and safety. Afterwards, dimensional accuracy and surface finish are critical to ensure proper assembly and consistent matching across multiple parts. Additive manufacturing nonetheless remains an immense opportunity, since it enables unprecedented control over material microstructures. Analyzing and understanding these structures is key to an efficient and optimized process that ensures the demanded quality and safety.

Precision at all Altitudes: Overcoming the Challenges

[Figure: Jet engine turbine, 3D X-ray rendering]

Utilizing cutting-edge measurement and inspection equipment is crucial for meeting aerospace parts’ sophisticated requirements. Our metrology solutions support and can be implemented throughout the manufacturing process, enabling immediate corrective actions, ensuring high-quality output, and promoting sustainable resourcing. We employ Light or Electron Microscopes and CT for continuous powder characterization, identifying sources of quality issues in the powder bed during or after printing. Defective parts can be detected and fixed during the build, avoiding downstream costs and increasing yield. Optical 3D-scanners, Coordinate Measuring Machines, and high-resolution CT validate accuracy, inspect finished parts, and analyze internal structures, contributing to defining optimal settings for future processes. The comprehensive data analysis across the process chain, facilitated by metrology devices equipped with IoT and PiWeb software by ZEISS, ensures correlation and supports an efficient and optimized process. Investing in high-quality metrology and research equipment is indispensable for ensuring safety and quality in the aerospace industry, particularly as ‘Make in India’ propels the sector’s growth, with additive manufacturing playing a vital role in material science and process optimization.

 

ZEISS, as a key global provider, plays a pivotal role with its Blue Line process, contributing to the industry’s success through precise metrology and quality solutions. Moreover, the company’s commitment to excellence extends beyond mere provision, as it actively engages in collaborative ventures. The company’s globally unique application lab not only facilitates joint customer projects and scientific studies but also serves as a dynamic hub for hands-on demonstrations. This collaborative approach fosters a rich environment for learning and knowledge distribution, ensuring that the aerospace industry benefits not only from cutting-edge technology but also from shared insights and collective expertise.

In my opinion, the aerospace industry in India stands at the forefront of innovation and technological advancements, embracing additive manufacturing as a crucial element in its production chain. By leveraging cutting-edge measurement and inspection equipment throughout the entire manufacturing process, the industry can achieve immediate corrective actions, increase yield, and streamline resource utilization. With continued investments in high-quality metrology and research equipment, the aerospace sector can ensure the safety and quality of its intricate and complex components, further solidifying its position as a leader in technological innovation.

Aveen Padmaprabh
Head of Industrial Quality Solutions
Carl Zeiss India (Bangalore) Pvt Ltd

The post Precision at all Altitudes for Aerospace: Addressing the Challenges of Additive Manufacturing in Aerospace Production. appeared first on ELE Times.

Emerging solutions in all-electric air mobility service

ELE Times - Fri, 03/22/2024 - 13:01

With projections indicating a doubling of air passenger numbers to 8.2 billion by 2037, the advancement of all-electric and hybrid-electric propulsion for powering Advanced Air Mobility (AAM) is evolving into a billion-dollar industry. Recent assessments by Rolls-Royce suggest that approximately 15,000 Electric Vertical Take-Off and Landing (eVTOL) vehicles will be indispensable across 30 major cities by 2035 solely to meet the demand for intracity travel. By 2030, top players in the passenger AAM sector could boast larger fleets and significantly more daily flights than the world’s biggest airlines. These flights, averaging just 18 minutes each, will typically carry fewer passengers (ranging from one to six, plus a pilot).

The increasing urbanization, expanding population, aging infrastructure, and the surge in e-commerce and logistics underscore the need for a contemporary, safe, and cost-effective transportation solution for both people and goods. Urban Air Mobility (UAM) presents a seamless, reliable, and swift mode of transportation, addressing present and future urban challenges. With the capacity to transform intra and inter-city transportation, UAM offers a quicker and more effective alternative to conventional ground-based transportation methods. The adoption of Urban Air Mobility hinges on five primary factors:


  • Growing demand for alternate modes of transportation in urban mobility.
  • Need for convenient and efficient last-mile delivery.
  • Zero-emission and noise-free mandates.
  • Advancements in technologies (energy storage, autonomy, connectivity, power electronics).
  • Security.

Despite the growing Urban Air Mobility (UAM) sector, it faces significant challenges that need addressing for future growth and success. These challenges range from developing reliable electric propulsion systems to achieving autonomous flight capabilities and establishing necessary infrastructure like vertiports and charging stations. Overcoming these hurdles is vital for unlocking UAM’s transformative potential in urban transportation.

AI Integration for UAM Enhancement

Utilizing AI for predictive maintenance enables analysis of sensor data and onboard sources to forecast maintenance needs, reducing downtime and increasing aircraft availability. AI-enabled maintenance inspections allow for rapid issue identification through image analysis of eVTOLs and UAVs, minimizing errors and oversights. AI aids in making better decisions for aircraft maintenance support by thoroughly analyzing various considerations, likely leading to improved outcomes. Additionally, robotic systems equipped with AI algorithms can autonomously repair or replace minor parts, enhancing safety for maintenance teams. Moreover, AI facilitates better diagnostics and targeted troubleshooting, expediting issue identification and repair suggestions. Ultimately, proactive maintenance, data integration, and improved safety are promised by AI in UAM, ensuring aircraft are maintained effectively from takeoff to landing.

AI in Intelligent Cabin Management (ICMS)

The Intelligent Cabin Management System (ICMS), utilized in aviation and rail industries, undergoes continuous advancements fueled by emerging technologies. Enhanced facial recognition algorithms, driven by artificial intelligence (AI), significantly improve efficiencies and reliability in user authentication, behavior analysis, safety, threat detection, and object tracking. Moreover, ICMS prioritizes monitoring passengers’ vital signs onboard for health safety.

This solution ensures cabin operations with a focus on passenger safety, security, and health, suitable for various passenger cabins in aircraft and rail, and particularly ideal for UAM applications. It facilitates cabin entry by authorized crew and passengers, guides seating arrangements, enforces luggage placement regulations, ensures compliance with air travel advisories, monitors passenger behavior for preemptive intervention, identifies permitted and potentially threatening objects, flags left luggage, and detects vital health parameters for real-time monitoring and control.

AI in UAM Maintenance

AI-driven predictive maintenance involves analyzing sensor data and onboard sources to anticipate UAM maintenance needs, aiding in proactive scheduling and minimizing downtime. Similarly, AI-based inspections utilize image analysis to swiftly identify potential issues during regular checks, enhancing accuracy and reducing errors. Additionally, AI supports maintenance decision-making by analyzing various factors like repair costs and part availability, providing informed recommendations. Future advancements may see autonomous maintenance systems, powered by AI, performing routine tasks such as inspections and minor repairs, improving efficiency and safety. Furthermore, AI assists technicians in diagnostics and troubleshooting by analyzing data and historical records to pinpoint issues and suggest appropriate solutions, streamlining maintenance processes and ensuring UAM operational reliability.

Conclusion

The integration of AI into UAM maintenance offers numerous benefits that significantly enhance the efficiency, safety, and reliability of UAM operations. Through proactive maintenance enabled by AI’s predictive capabilities, maintenance teams can anticipate and address potential failures before they occur, reducing unplanned downtime and enhancing operational reliability. Furthermore, AI-supported maintenance increases aircraft availability, ensuring vehicles are consistently safe and reliable, thus contributing to higher customer satisfaction and overall operational performance.

Moreover, AI-driven maintenance optimization leads to cost reduction by accurately predicting maintenance needs and minimizing unnecessary inspections and component replacements, thereby reducing labor and material costs. Additionally, AI’s continuous monitoring of UAM vehicle conditions enhances safety by detecting anomalies or safety risks in real-time, preventing accidents and ensuring timely maintenance. Overall, the application of AI in UAM maintenance represents a transformative step towards a more efficient, safe, and reliable urban air transportation system.

Ajay Kumar Lohany | Delivery Sr. Director - Aero & Rail | Cyient

The post Emerging solutions in all-electric air mobility service appeared first on ELE Times.

Coherent debuts 800Gbps QSFP-DD form-factor transceiver module for IP-over-DWDM

Semiconductor today - Fri, 03/22/2024 - 11:13
At the Optical Fiber Communication Conference & Exposition (OFC 2024) in San Diego, CA, USA (26–28 March), materials, networking and laser technology firm Coherent Corp of Saxonburg, PA, USA is demonstrating its 800Gbps coherent transceiver module in QSFP-DD form factor, operated in 800G ZR mode transmitting over 9000ps/nm of dispersion (equivalent to about 450km of fiber)...

The role of cache in AI processor design

EDN Network - Fri, 03/22/2024 - 09:34

Artificial intelligence (AI) is making its presence felt everywhere these days, from the data centers at the Internet’s core to sensors and handheld devices like smartphones at the Internet’s edge and every point in between, such as autonomous robots and vehicles. For the purposes of this article, we recognize the term AI to embrace machine learning and deep learning.

There are two main aspects to AI: training, which is predominantly performed in data centers, and inferencing, which may be performed anywhere from the cloud down to the humblest AI-equipped sensor.

AI is a greedy consumer of two things: computational processing power and data. In the case of processing power, OpenAI, the creator of ChatGPT, published the report AI and Compute, showing that since 2012, the amount of compute used in large AI training runs has doubled every 3.4 months with no indication of slowing down.

With respect to memory, a large generative AI (GenAI) model like ChatGPT-4 may have more than a trillion parameters, all of which need to be easily accessible in a way that allows the model to handle numerous requests simultaneously. In addition, one needs to consider the vast amounts of data that need to be streamed and processed.
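To put that parameter count in perspective, here is a back-of-the-envelope estimate; it is a minimal sketch in which the one-trillion parameter count and the 16-bit weight precision are illustrative assumptions, not published figures for any specific model:

# Rough memory footprint of a trillion-parameter generative model.
# Both numbers below are illustrative assumptions.
params = 1.0e12              # one trillion parameters (assumed)
bytes_per_param = 2          # 16-bit (FP16/BF16) weights (assumed)
weights_gb = params * bytes_per_param / 1e9
print(f"Weights alone: ~{weights_gb:,.0f} GB")   # ~2,000 GB, before activations or per-request state

Even under these simplified assumptions, the weights alone far exceed the SRAM and attached DRAM of any single device, which is why memory capacity and bandwidth dominate AI system design.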

Slow speed

Suppose we are designing a system-on-chip (SoC) device that contains one or more processor cores. We will include a relatively small amount of memory inside the device, while the bulk of the memory will reside in discrete devices outside the SoC.

The fastest type of memory is SRAM, but each SRAM cell requires six transistors, so SRAM is used sparingly inside the SoC because it consumes a tremendous amount of space and power. By comparison, DRAM requires only one transistor and capacitor per cell, which means it consumes much less space and power. Therefore, DRAM is used to create bulk storage devices outside the SoC. Although DRAM offers high capacity, it is significantly slower than SRAM.

As the process technologies used to develop integrated circuits have evolved to create smaller and smaller structures, most devices have become faster and faster. Sadly, this is not the case with the transistor-capacitor bit-cells that lie at the heart of DRAMs. In fact, due to their analog nature, the speed of bit-cells has remained largely unchanged for decades.

Having said this, the speed of DRAMs, as seen at their external interfaces, has doubled with each new generation. Since each internal access is relatively slow, the way this has been achieved is to perform a series of staggered accesses inside the device. If we assume we are reading a series of consecutive words of data, it will take a relatively long time to receive the first word, but we will see any succeeding words much faster.

This works well if we wish to stream large blocks of contiguous data because we take a one-time hit at the start of the transfer, after which subsequent accesses come at high speed. However, problems occur if we wish to perform multiple accesses to smaller chunks of data. In this case, instead of a one-time hit, we take that hit over and over again.
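A quick numerical sketch makes the point. The first-word latency and per-word burst rate used here are assumptions chosen to be roughly DDR4-class values, not measurements of any particular device:

# Illustrative comparison of streaming vs. scattered DRAM accesses.
# Assumed values: ~70 ns first-word latency, then one word per interface
# transfer at 3200 MT/s (0.3125 ns per word).
first_word_ns = 70.0
per_word_ns = 0.3125
words = 256
streaming = first_word_ns + (words - 1) * per_word_ns   # one long contiguous burst
scattered = words * first_word_ns                       # pay the first-word hit every time
print(f"streaming: {streaming:.0f} ns, scattered: {scattered:.0f} ns")
# ~150 ns vs. ~17,920 ns: the one-time hit amortizes only over contiguous data.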

More speed

The solution is to use high-speed SRAM to create local cache memories inside the processing device. When the processor first requests data from the DRAM, a copy of that data is stored in the processor’s cache. If the processor subsequently wishes to re-access the same data, it uses its local copy, which can be accessed much faster.

It’s common to employ multiple levels of cache inside the SoC. These are called Level 1 (L1), Level 2 (L2), and Level 3 (L3). The first cache level has the smallest capacity but the highest access speed, with each subsequent level having a higher capacity and a lower access speed. As illustrated in Figure 1, assuming a 1-GHz system clock and DDR4 DRAMs, it takes only 1.8 ns for the processor to access its L1 cache, 6.4 ns to access the L2 cache, and 26 ns to access the L3 cache. Accessing the first in a series of data words from the external DRAMs takes a whopping 70 ns (Data source Joe Chang’s Server Analysis).

Figure 1 Cache and DRAM access speeds are outlined for 1 GHz clock and DDR4 DRAM. Source: Arteris
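As a rough illustration of how the Figure 1 latencies combine, the following sketch computes a simple weighted average memory access time. The hit-rate split across cache levels is an assumption chosen for illustration, not a measured workload profile:

# Simple weighted average memory access time (AMAT) using the Figure 1 latencies.
# The hit-rate distribution across levels is an illustrative assumption.
l1, l2, l3, dram = 1.8, 6.4, 26.0, 70.0   # access latencies in ns
h1, h2, h3 = 0.90, 0.06, 0.03             # fraction of accesses served by each cache level (assumed)
h_dram = 1.0 - (h1 + h2 + h3)             # remainder goes out to external DRAM
amat = h1 * l1 + h2 * l2 + h3 * l3 + h_dram * dram
print(f"AMAT ~ {amat:.2f} ns")            # ~3.5 ns, versus 70 ns if every access went to DRAM

This simplified model ignores miss-penalty chaining, but it shows why even modest caches dramatically reduce the effective cost of memory accesses.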

The role of cache in AI

There are a wide variety of AI implementation and deployment scenarios. In the case of our SoC, one possibility is to create one or more AI accelerator IPs, each containing its own internal caches. Suppose we wish to maintain cache coherence, which we can think of as keeping all copies of the data the same, between these accelerators and the SoC’s processor clusters. Then we will have to use a hardware cache-coherent solution in the form of a coherent interconnect, like CHI as defined in the AMBA specification and supported by Ncore network-on-chip (NoC) IP from Arteris IP (Figure 2a).

Figure 2 The above diagram shows examples of cache in the context of AI. Source: Arteris

There is an overhead associated with maintaining cache coherence. In many cases, the AI accelerators do not need to remain cache coherent to the same extent as the processor clusters. For example, it may be that only after a large block of data has been processed by the accelerator that things need to be re-synchronized, which can be achieved under software control. The AI accelerators could employ a smaller, faster interconnect solution, such as AXI from Arm or FlexNoC from Arteris (Figure 2b).

In many cases, the developers of the accelerator IPs do not include cache in their implementation. Sometimes, the need for cache wasn’t recognized until performance evaluations began. One solution is to include a special cache IP between an AI accelerator and the interconnect to provide an IP-level performance boost (Figure 2c). Another possibility is to employ the cache IP as a last-level cache to provide an SoC-level performance boost (Figure 2d). Cache design isn’t easy, but designers can use configurable off-the-shelf solutions.

Many SoC designers tend to think of cache only in the context of processors and processor clusters. However, the advantages of cache are equally applicable to many other complex IPs, including AI accelerators. As a result, the developers of AI-centric SoCs are increasingly evaluating and deploying a variety of cache-enabled AI scenarios.

Frank Schirrmeister, VP solutions and business development at Arteris, leads activities in the automotive, data center, 5G/6G communications, mobile, and aerospace industry verticals. Before Arteris, Frank held various senior leadership positions at Cadence Design Systems, Synopsys and Imperas.

Related Content


The post The role of cache in AI processor design appeared first on EDN.

US Critical Materials announces discovery of gallium at Sheep Creek, Montana

Semiconductor today - Thu, 03/21/2024 - 18:45
Private rare-earths exploration, development and process technology company US Critical Materials Corp of Salt Lake City, Utah, has confirmed what it calls a “strategically significant” deposit of high-grade gallium on its 6700 acres of claims in Sheep Creek, Montana...

5N Plus commercializing its GaN-on-Si patents

Semiconductor today - Thu, 03/21/2024 - 18:32
Specialty semiconductor and performance materials producer 5N Plus Inc (5N+) of Montreal, Québec, Canada is officially launching the commercialization rights for its portfolio of gallium nitride on silicon (GaN-on-Si) patents which, it says, can enable the rapid prototype development and first-to-market commercialization of novel vertical GaN-on-Si power devices by companies operating in the high-power electronics (HPE), electric vehicles (EV) and artificial intelligence (AI) server sectors...

Workarounds (and their tradeoffs) for integrated storage constraints

EDN Network - Thu, 03/21/2024 - 16:14

Over the Thanksgiving 2023 holiday weekend, I decided to retire my trusty silver-color early-2015 13” MacBook Pro, which was nearing software-induced obsolescence, suffering from a Bluetooth audio bug, and more generally starting to show its age performance- and other-wise. I replaced it with a “space grey” color scheme 2020 model, still Intel x86-based, which I covered in detail in one of last month’s posts.

Over the subsequent Christmas-to-New Year’s week, once again taking advantage of holiday downtime, I decided to retire my similarly long-in-use silver late-2014 Mac mini, too. Underlying motivations were similar; pending software-induced obsolescence, plus increasingly difficult-to-overlook performance shortcomings (due in no small part to the system’s “Fusion” hybrid storage configuration). Speed limitations aside, the key advantage of this merged-technology approach had been its cost-effective high capacity: a 1 TByte HDD, visible and accessible to the user, behind-the-scenes mated by the operating system to 128 GBytes of flash memory “cache”.

Its successor was again Intel-based (as with its laptop-transition precursor, the last of the x86 breed) and space grey in color; a late-2018 Mac mini:

This particular model, versus its Apple Silicon successors, was notable (as I’ve mentioned before) for its comparative abundance of back-panel I/O ports:

And this specific one was especially attractive in nearly all respects (thereby rationalizing my mid-2023 purchase of it from Woot!). It was brand new, albeit not an AppleCare Warranty candidate (instead, I bought an inexpensive extended warranty from Asurion via Woot! parent company Amazon). It was only $449 plus tax after discounts. It included the speediest-available Intel Core i7-8700B 6-core (physical; 12-core virtual via HyperThreading) 3.2 GHz CPU option, capable of boost-clocking to 4.1 GHz. And it also came with 32 GBytes of 2666 MHz DDR4 SDRAM which, being user-accessible SoDIMM-based (unlike the soldered-down memory in its predecessor), was replaceable and even further upgradeable to 64 GBytes max.

Note, however, my prior allusion to this new system not being attractive in all respects. It only included a 128 GByte integrated SSD, to be precise. And, unlike this system’s RAM (or the SSD in the late 2014 Mac mini predecessor, for that matter), its internal storage capacity wasn’t user-upgradeable. I’d figured that similar to my even earlier mid-2011 Mac mini model, I could just boot from a tethered external drive instead, and that may still be true (online research is encouraging). However, this time I decided to first try some options I’d heard about for relocating portions of my app suite and other files while keeping the original O/S build internal and intact.

I’ve subsequently endured no shortage of dead-end efforts courtesy of latest operating system limitations coupled with applications’ shortsightedness, along with experiments that functionally worked but ended up being too performance-sapping or too little capacity-freeing to be practical. However, after all the gnashing of teeth, I’ve come up with a combination of techniques that will, I think, deliver a long-term usable configuration (then again, I haven’t attempted a major operating system update yet, so don’t hold me to that prediction). I’ve learned a lot along the way, which I hope will not only be helpful to other MacOS users but, thanks to MacOS’s BSD Unix underpinnings, may also be relevant to those of you running Linux, Android, Chrome OS, and other PC and embedded Unix-based operating systems.

Let’s begin with a review of my chosen external-storage hardware. Initially, I thought I’d just tether a Thunderbolt 3 external SSD (such as the 2TB Plugable drive that I picked up from B&H Photo Video on sale a year ago for $219) to the Mac mini, and that remains a feasible option:

However, I decided to “kill two birds with one stone” by beefing up the Mac mini’s expansion capabilities in the process. Specifically, I initially planned on going with one of Satechi’s aluminum stand and hubs. The baseline-feature set one that color-matches my Mac mini’s space grey scheme has plenty of convenient-access front-panel connections, but that’s it:

Its “bigger brother” additionally supports embedding a SATA (or, more recently, NVMe) M.2 format SSD, but connectivity is the same 5-or-more-recently-10 Gbps USB-C as before (ok for tethering peripherals, not so much for directly running apps from mass storage). Plus, it only came in a silver color scheme (ok for Apple Silicon Mac minis, not so much for x86-based ones):

So, what did I end up with? I share the following photo with no shortage of chagrin:

In the middle is the Mac mini. Above it is a Windows Dev Kit 2023, aka “Project Volterra,” an Arm- (Qualcomm Snapdragon 8cx Gen 3, to be precise, two SoC steppings newer than the Gen 1 in my Surface Pro X) and Windows 11-based mini PC, which I’ll say more about in a future post.

And at the bottom of the stack is my external storage solution—dual-storage, to be precise—an OWC MiniStack STX in its original matte black color scheme (it now comes in silver, too).

Does it color-match the Mac mini? No, even putting aside the glowing blue OWC-logo orb on the front panel. And speaking of the front panel, are there any easily user-accessible expansion capabilities? Again, no. In fact, the only expansion ports offered are three more Thunderbolt 3 ones around back…the fourth there connects to the computer. But Thunderbolt 3’s 40 Gbps bandwidth is precisely what drove my decision to go with the OWC MiniStack STX, aided by the fact that I’d found a gently used one on eBay at substantial discount from MSRP.

Inside, I’ve installed a 2 TByte Samsung 980 Pro PCIe 4.0 NVMe SSD which I bought for $165.59 used at Amazon Warehouse a year ago (nowadays, new ones sell for the same price…sigh…):

alongside a 2 TByte Kingston KC600 2.5” SATA SSD:

They appear as separate external drives on system bootup, and the performance results are nothing to sneeze at. Here’s the Samsung NVMe PCIe 4.0 SSD (the enclosure’s interface to the SSD, by the way, is “only” PCIe 3.0; it’s leaving storage performance potential “on the table”):

and here’s the Kingston, predictably a bit slower due to its SATA III interface and command set (therefore rationalizing why I’ve focused my implementation attention on the Samsung so far):

For comparison, here’s the Mac mini’s internal SSD:

The Samsung holds its own from a write performance standpoint but is more than 3x slower on reads, rationalizing my strategy to keep as much content as possible on internal storage. To wit, how did I decide to proceed, after quickly realizing (mid-system setup) that I’d fill up the internal available 128 GBytes well prior to getting my full desired application suite installed?

(Abortive) Step 1: Move my entire user account to external storage

Quoting from the above linked article:

In UNIX operating systems, user accounts are stored in individual folders called the user folder. Each user gets a single folder. The user folder stores all of the files associated with each user, and settings for each user. Each user folder usually has the system name of the user. Since macOS is based on UNIX, users are stored in a similar manner. At the root level of your Mac’s Startup Disk you’ll see a number of OS-controlled folders, one of which is named Users.

Move (copy first, then delete the original afterwards) an account’s folder structure elsewhere (to external storage, in this case), then let the foundation operating system know what you’ve done, and as my experience exemplifies, you can free up quite a lot of internal storage capacity.

Keep in mind that when you relocate your user home folder, it only moves the home folder – the rest of the OS stays where it was originally.

One other note, which applies equally to other relocation stratagems I subsequently attempted, and which perhaps goes without saying…but just to cover all the bases:

Consider that when you move your home folder to an external volume, the connection to that volume must be perfectly reliable – meaning both the drive and the cable connecting the drive to your Mac. This is because the home folder is an integral part of macOS, and it expects to be able to access files stored there instantly when needed. If the connection isn’t perfectly reliable, and the volume containing the home folder disappears even for a second, strange and undefined behavior may result. You could even lose data.

That all being said, everything worked great (with the qualifier that initial system boot latency was noticeably slower than before, albeit not egregiously so), until I noticed something odd. Microsoft’s OneDrive client indicated that it had successfully synced all the cloud-resident information in my account, but although I could then see a local clone of the OneDrive directory structure, all of the files themselves were missing, or at least invisible.

This is, it turns out, a documented side effect of Apple’s latest scheme for handling cloud storage services. External drives that self-identify as capable of being “ejectable” can’t be used as OneDrive sync destinations (unless, perhaps, you first boot the system from them…dunno). And the OneDrive sync destination is mirrored within the user’s account directory structure. My initial response was “fine, I’ll bail on OneDrive”. It turns out, however, that Dropbox (on which I’m much more reliant) is, out of operating system support necessity, going down the same implementation-change path. Scratch that idea.

Step 2: Install applications to external storage

This one seems intuitively obvious, yes? Reality proved much more complicated and ultimately limited in its effectiveness, however. Most applications I wanted to use that had standalone installers, it turns out, didn’t even give me an option to install anywhere but internal storage. And for the ones that did give me that install-redirect option…well, please take a look at this Reddit thread I started and eventually resolved, and then return to this writeup afterwards.

Wild, huh? That said, many MacOS apps don’t have separate installer programs; you just open a DMG (disk image) file and then drag the program icon inside (behind which is the full program package) to the “Applications” folder or anywhere else you choose. This led to my next idea…

Step 3: Move already-installed applications to external storage

As previously mentioned, “hiding” behind an application’s icon is the entire package structure. Generally speaking, you can easily move that package structure intact elsewhere (to external storage, for example) and it’ll still run as before. The problem, I found out, comes when you subsequently try to update such applications, specifically where a separate updater utility is involved. Take Apple’s App Store, for example. If you download and install apps using it (which is basically the only way to accomplish this) but you then move those apps elsewhere, the App Store utility can no longer “find” them for update purposes. The same goes for Microsoft’s (sizeable, alas) Office suite. In these and other cases, ongoing use of internal storage is requisite (along with trimming down the number of installed App Store- and Office suite-sourced applications to the essentials). Conversely, apps with integrated update facilities, such as Mozilla’s Firefox and Thunderbird, or those that you update by downloading and swapping in a new full-package version, upgrade fine post-move.

Step 4: Move data files, download archives, etc. to external storage

I mentioned earlier that Mozilla’s apps (for example) are well-behaved from a relocation standpoint. I was specifically referring to the programs themselves. Both Firefox and Thunderbird also create user profiles, which by default are stored within the MacOS user account folder structure, and which can be quite sizeable. My Firefox profile, for example, is just over 3 GBytes in size (including the browser cache and other temporary files), while my Thunderbird profile is nearly 9 GBytes (I’ve been using the program for a long time, and I also access email via POP3—which downloads messages and associated file attachments to my computer—vs IMAP). Fortunately, by tweaking the entries in both programs’ profiles.ini files, I’ve managed to redirect the profiles to external storage. Both programs now launch more slowly than before, due to the aforementioned degraded external drive read performance, but they then run seemingly as speedy as before, thanks to the aforementioned comparable write performance. And given that they’re perpetually running in the background as I use the computer, the launch-time delay is a one-time annoyance at each (rare) system reboot.
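For reference, the relevant profiles.ini edit looks roughly like the following; the volume name, folder path, and profile name here are hypothetical, and the exact keys vary slightly between Firefox and Thunderbird versions. The essential change is setting IsRelative=0 so that an absolute path on the external drive can be given:

[Profile0]
Name=default-release
IsRelative=0
Path=/Volumes/ExternalSSD/MozillaProfiles/Firefox/default-release
Default=1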

Similarly, I’ve redirected my downloaded-files default (including a sizeable archive of program installers) to external storage, along with an encrypted virtual drive that’s necessary for day-job purposes. I find, in cases like these, that creating an alias from the old location to the new is a good reminder of what I’ve previously done, if I subsequently find myself scratching my head because I can’t find a particular file or folder.

The result

By doing all the above (steps 2-4, to be precise), I’ve relocated more than 200 GBytes (~233 GBytes at the moment, to be precise) of files to external storage, leaving me with nearly 25% free in my internal storage (~28 GBytes at the moment, to be precise). See what I meant when I earlier wrote that in the absence of relocation success, I’d “fill up the available 128 GBytes well prior to getting my full desired application suite installed”? I should clarify that “nearly 25% free storage” comment, by the way…it was true until I got the bright idea to command-line install recently released Wine 9, which restores MacOS compatibility (previously lost with the release of 64-bit-only MacOS 10.15 Catalina in October 2019)…which required that I first command-line install the third-party Homebrew package manager…which also involved command-line installing the Xcode Command Line Tools…all of which installed by default to internal storage, eating up ~10 GBytes (I’ll eventually reverse my steps and await a standalone, more svelte package installer for Wine 9 to hopefully come).

Thoughts on my experiments and their outcomes? Usefulness to other Unix-based systems? Anything else you want to share? Let me know in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post Workarounds (and their tradeoffs) for integrated storage constraints appeared first on EDN.

Automotive PCIe: To Switch or Not to Switch?

ELE Times - Thu, 03/21/2024 - 13:39

Courtesy: Microchip

The myths and false economy of direct chip-to-chip PCIe connect in ADAS and vehicle autonomy applications.

PCIe’s Rising Role in Autonomous Driving and ADAS Technology

Before pondering the question of whether or not to switch, let’s first set the scene by considering why Peripheral Component Interconnect Express (PCIe) is becoming so popular as an interconnect technology in advanced driver assistance systems (ADAS) applications—and why it will be so crucial in the realization of completely autonomous driving (AD) as the automotive industry seeks standard interfaces that deliver performance while ensuring compatibility and ease-of-use.

With its roots in the computing industry, PCIe is a point-to-point bidirectional bus for connecting high-speed components. Subject to the system architecture (PCIe’s implementation), data transfer can take place over 1, 2, 4, 8 or 16 lanes, and if more than one lane is used the bus becomes a serial/parallel hybrid.

The PCIe specification is owned and managed by the PCI Special Interest Group (PCI-SIG), an association of 900+ industry companies committed to advancing its non-proprietary peripheral technology. As demand for higher I/O performance grows, the group’s scope and ecosystem reach are both expanding, and to paraphrase words from PCI-SIG’s membership page:

Current PCIe and other related technology roadmaps account for new form factors and lower power applications. Innovation on these fronts will remain true to PCI-SIG’s legacy of delivering solutions that are backward compatible, cost-efficient, high performance, processor agnostic, and scalable.

With vehicles becoming high-performance computing platforms (HPCs—and data centers, even) on wheels, these words are exactly what vehicle OEMs developing ADAS and AD solutions want to hear. Also, every generation of PCIe results in performance improvements – from gen 1.0’s transfer rate of 2.5GT/s (gigatransfers per second) and total bandwidth of 4GB/s (16 lanes) to today’s gen 6.0’s 64GT/s and 128GB/s (16 lanes). Note: PCIe 7.0, slated to arrive in 2025, will have a data rate of 128GT/s and a bandwidth of 512GB/s through 16 lanes.
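As a rough sketch of where such bandwidth figures come from (the encoding-efficiency values are simplified, and sources differ on whether they quote per-direction or aggregate numbers), per-direction x16 throughput can be estimated as:

# Approximate per-direction bandwidth of a x16 PCIe link for several generations.
# Encoding efficiency is simplified: 8b/10b for gens 1-2, 128b/130b for gens 3-5,
# and ~1.0 for the flit-based PAM4 signaling of gen 6 onward.
generations = {          # generation: (transfer rate in GT/s per lane, encoding efficiency)
    "1.0": (2.5, 0.8),
    "3.0": (8.0, 128 / 130),
    "6.0": (64.0, 1.0),
    "7.0": (128.0, 1.0),
}
lanes = 16
for gen, (gts, eff) in generations.items():
    gbytes_per_s = gts * eff * lanes / 8   # 8 bits per byte
    print(f"PCIe {gen} x16: ~{gbytes_per_s:.0f} GB/s per direction")
# PCIe 1.0 x16: ~4 GB/s; PCIe 6.0 x16: ~128 GB/s; PCIe 7.0 x16: ~256 GB/s per
# direction (often quoted as 512 GB/s when counting both directions).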

PCIe’s performance power cannot be disputed, and it will certainly be required to support the kind of real-time processing of large volumes of data needed for AI- and ML-enabled ADAS and AD applications.

But, as ever, there is debate around implementing PCIe-based architectures, not least when it comes to whether the connections between PCIe-enabled components should be direct or switched.

Making the Connection

To provide higher levels of automation, vehicles must incorporate increasingly sophisticated combinations of electronic components including central processing units (CPUs), electronic control units (ECUs), graphics processing units (GPUs), system-on-chips (SoCs), “smart sensors” and high-capacity and high-speed storage devices (such as NVMe memory).

Of these components, the ECUs (there are many) combine across separate zones based on a common functionality. These zonal ECUs communicate with HPC platforms using Ethernet. But within those platforms, there is a need for high-bandwidth processing to achieve real-time decision making.

Accordingly, PCIe technology is being used by automotive designers in a manner very similar to the way in which a data center is designed. Connecting sensors with high-speed serial outputs to processing units is best addressed with an open standard called Automotive SerDes Alliance (ASA).

In essence, there are three pillars of automotive networking (see figure 1).

Figure 1 – Three Pillars of the Future of Automotive Networking

However, some SoC vendors are saying that for PCIe you can simply connect directly between chips without a switch. Well, yes, you can… but it doesn’t scale to higher ADAS levels, and it’s a false economy to do so.

An HPC system without a switch exponentially increases software complexity, as each endpoint requires its own software stack. Also, there are the “bigger picture” benefits of switched over unswitched PCIe to consider:

  • IO Bandwidth Optimization: Packet switching reduces the SoC interconnection pin count requirement, which lowers SoC power and cost.
  • Peripheral Sharing: Single peripherals, such as SSD storage or Ethernet controllers, may be shared across several SoCs.
  • Scalability: You can easily scale for more performance without changing the system architecture by increasing switch size, SoC count and peripheral count.
  • Serviceability: PCIe has built-in error detection and diagnostic test features which have been thoroughly proven in the high-performance compute environment over many years to significantly ease serviceability.
  • And as a result of the above points, a much better total cost of ownership (TCO) is possible.

When PCIe combines forces with Ethernet and ASA, it allows for the creation of an optimized, heterogeneous system architecture (as Figure 2 illustrates with respect to an ADAS example).

Figure 2 – Heterogeneous architecture for ADAS

Although the three communications technologies evolved at different times to support different needs, and have their respective pros and cons, the heterogeneous architecture makes the best of each.

As mentioned, PCIe provides point-to-point connection, meaning devices are not competing for bandwidth, which is fine if only a few devices need to connect. However, an autonomous vehicle is best realized as a set of distributed workloads, which means bandwidth needs to be shared between multiple sub-system components.

In this respect, PCIe switches provide an excellent solution as they are “transparent,” meaning that software and other devices do not need to be aware of the presence of switches in the hierarchy, and no drivers are required.

The Answer: Switch

PCIe is ideal for ADAS, AD and other HPC applications within a vehicle, but its “point-to-point” connectivity has many thinking that that’s how it should be implemented—as chip-to-chip, for example. However, integrating switching using technologies such as the Microchip Switchtec family (the world’s first automotive-qualified PCIe switches) minimizes software complexity and realizes a host of other benefits for high-performance automotive systems with multiple sub-system components that demand low latencies and high data rates.

The post Automotive PCIe: To Switch or Not to Switch? appeared first on ELE Times.

Techniques to Identify and Correct Asymmetric Wafer Map Defects Caused by Design and Process Errors

ELE Times - Thu, 03/21/2024 - 13:21

JAMES KIM, Senior Semiconductor and Process Integration Engineer | Lam Research

Asymmetries in wafer map defects are usually treated as random production hardware defects. For example, asymmetric wafer defects can be caused by particles inadvertently deposited on a wafer during any number of process steps. In this article, I want to share a different mechanism that can cause wafer defects. Namely, that these defects can be structural defects that are caused by a biased deposition or etch process.

It can be difficult for a process engineer to determine the cause of downstream structural defects located at a specific wafer radius, particularly if these defects are located in varying directions or at different locations on the wafer. As a wafer structure is formed, process behavior at that location may vary from other wafer locations based upon the radial direction and specific wafer location. Slight differences in processes at different wafer locations can be exaggerated by the accumulation of other process steps as you move toward that location. In addition, process performance differences (such as variation in equipment performance) can also cause on-wafer structural variability.

In this study, structural defects will be virtually introduced on a wafer to provide an example of how structural defects can be created by differences in wafer location. We will then use our virtual process model to identify an example of a mechanism that can cause these types of asymmetric wafer map defects.

Methods

Figure 1. Anisotropic liner/barrier metal deposition on a tilted structure caused by wafer warping

A 3D process model of a specific metal stack (Cu/TaN/Ta) on a warped wafer was created using SEMulator3D virtual fabrication (Figure 1). After the 3D model was generated, electrical analysis of 49 sites on the wafer was completed.

In our model, an anisotropic barrier/liner (TaN/Ta) deposition process was used. Due to wafer tilting, there were TaN/Ta deposition differences seen across the simulated high aspect ratio metal stack. To minimize the number of variables in the model, Cu deposition was assumed to fill in an ideal manner (without voids). Forty-nine (49) corresponding 3D models were created at different locations on the wafer, to reflect differences in tilting due to wafer warping. Next, electrical simulation was completed on these 3D models to monitor metal line resistance at each location. Serpentine metal line patterns were built into the model, to help simulate the projected electrical performance on the warped wafer at different points on the same radius, and across different directions on the wafer (Figure 2).

Figure 2 – Techniques to Identify and Correct Asymmetric Wafer Map Defects Caused by Design and Process Errors

Using only incoming structure and process behavior, we can develop a behavioral process model and extend our device performance predictions and behavioral trend analysis outside of our proposed process window range. In the case of complicated processes with more than one mechanism or behavior, we can split processes into several steps and develop models for each individual process step. There will be phenomena or behavior in manufacturing that can’t be fully captured by this type of process modeling, but these models provide useful insight during process window development.

Results

Of the 49 3D models, the models on the far edge of the wafer were heavily tilted by wafer warpage. Interestingly, not all of the models at the same wafer radius exhibited the same behavior. This was due to the metal pattern design. With anisotropic deposition into high aspect ratio trenches, deposition in specific directions was blocked at certain locations in the trenches (depending upon trench depth and tilt angle). This affected both the device structure and electrical behavior at different locations on the wafer.

Since the metal lines were extending across the x-axis, there were minimal differences seen when tilting the wafer across the x-axis in our model. X-axis tilting created only a small difference in thickness of the Ta/TaN relative to the Cu. However, when the wafer was tilted in the y-axis using our model, the high aspect ratio wall blocked Ta/TaN deposition due to the deposition angle. This lowered the volume of Ta/TaN deposition relative to Cu, which decreased the metal resistance and placed the resistance outside of our design specification.

X-axis wafer tilting had little influence on the device structure. The resistance on the far edge of the x-axis did not significantly change and remained in-spec. Y-axis wafer tilting had a more significant influence on the device structure. The resistance on the far edge of the y-axis was outside of our electrical specification (Figure 3).
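To see why the Cu-to-barrier ratio shifts line resistance, consider a simple parallel-conductor estimate. The dimensions and resistivity values below are illustrative assumptions and are not taken from the study:

# Per-unit-length resistance of a metal line modeled as a Cu core in parallel
# with a Ta/TaN liner. Dimensions and resistivities are illustrative only.
RHO_CU = 1.7e-8        # ohm*m, bulk copper
RHO_BARRIER = 2.0e-7   # ohm*m, representative Ta/TaN liner value (assumed)

def line_resistance_per_um(width_nm, height_nm, liner_nm):
    """Resistance per micrometer of line length, in ohms."""
    length = 1e-6                                        # 1 um of line
    total_area = (width_nm * 1e-9) * (height_nm * 1e-9)  # full trench cross-section
    cu_area = ((width_nm - 2 * liner_nm) * 1e-9) * ((height_nm - liner_nm) * 1e-9)
    liner_area = total_area - cu_area
    r_cu = RHO_CU * length / cu_area
    r_liner = RHO_BARRIER * length / liner_area
    return 1.0 / (1.0 / r_cu + 1.0 / r_liner)            # parallel combination

# A thinner liner (e.g., where shadowing on a tilted wafer blocks deposition)
# leaves more room for Cu, so the line resistance drops:
print(line_resistance_per_um(40, 80, 4))   # nominal liner thickness
print(line_resistance_per_um(40, 80, 2))   # thinner liner -> lower resistance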

Figure 3 – Techniques to Identify and Correct Asymmetric Wafer Map Defects Caused by Design and Process Errors

Conclusion

Even though wafer warpage occurs in a circular manner due to accumulated stress, unexpected structural failures can occur in different radial directions on the wafer due to variations in pattern design and process behavior across the wafer. From this study, we demonstrated that asymmetric structures caused by wafer warping can create top-bottom or left-right wafer performance differences, even though processes have been uniformly applied in a circular distribution across the wafer.

Process simulation can be used to better understand structural failures that can cause performance variability at different wafer locations. A better understanding of these structural failure mechanisms can help engineers improve overall wafer yield by taking corrective action (such as performing line scanning at specific wafer locations) or by adjusting specific process windows to minimize asymmetric wafer defects.

The post Techniques to Identify and Correct Asymmetric Wafer Map Defects Caused by Design and Process Errors appeared first on ELE Times.

EFFECT Photonics raises $38m in Series D funding

Semiconductor today - Thu, 03/21/2024 - 13:04
EFFECT Photonics b.v. – a spin off from the Technical University of Eindhoven (TU/e) in The Netherlands – has secured $38m in a Series D funding round, led by Innovation Industries Strategic Partners Fund, backed by Dutch pension funds PMT and PME, along with co-investor Invest-NL Deep Tech Fund and participation from other existing investors...

Executive Blog – Companies that Embrace Digital Transformation Have More Resilient Design and Supply Chains

ELE Times - Thu, 03/21/2024 - 12:59

Sailesh Chittipeddi | Executive Vice President Operations | Renesas

Digital transformation has evolved quickly from a conceptual phase to a semiconductor industry change agent. The rapid uptake of AI-enhanced product development is only accelerating this transformation, which is further influenced by two connected trends: the movement of Moore’s Law from transistor scaling to system-level scaling, and the relatively recent redistribution of the global electronics supply chain due to the COVID-19 pandemic.

I spoke on this subject earlier this month at the Industry Strategy Symposium 2024 in Half Moon Bay, California, where leaders from across the chip industry gather annually to share their insights on technology and trend drivers and what they could mean for our respective businesses.

Between the early 1970s and around 2005, increased chip performance was largely a function of clock frequency improvements driven by advances in lithography, transistor density, and energy efficiency. With increasing transistor counts (and die sizes), clock frequencies are now limited by interconnect delays rather than by transistor performance. To overcome this challenge, designers moved to multi-core designs to increase system performance without blowing up energy consumption. Novel packaging techniques such as chiplets and multi-chip modules are helping further improve system performance, particularly in AI chips.

A single chip package may comprise multiple chiplets, each housing specific functions such as high-performance logic elements, AI accelerators, high-bandwidth DDR memory, and high-speed peripherals. Very often, each of these components is sourced from a different fab, a trend that has resulted in a fragmented global supply chain. This creates its own set of challenges, as die from multiple fabs must be integrated into a package or system that must then be thoroughly tested. Test failures at this stage have enormous financial consequences. These challenges require a “shift left” mindset in product development. The shift-left mentality has major ramifications for how we, as an industry, should be managing our supply chains by moving the heavy emphasis from architecture/design to final system testing and quality.

Supply chain challenges during the COVID pandemic have resulted in further decentralization of the supply chain components. To illustrate the enormity of the change underway, consider that between 2022 and December 2024 construction began on 93 wafer fabs around the world. Compare that to the global construction of automated test facilities. In 2021 alone, the industry broke ground on 484 back-end test sites, which provides a measure of how committed the chip sector is to driving resiliency across the manufacturing landscape.

The Role of AI in Semiconductor Design and Manufacture

So, where does AI come into the picture?

A key area in which AI will exert its influence is the shift from an analytic to a predictive model. Today, we wait to detect a problem and then look at past data to identify the root cause of the problem and prevent it from reoccurring. This inefficient approach adds time, cost, unpredictability, and waste to the supply chain. AI, on the other hand, allows us to examine current data to predict future outcomes.

Instead of using spreadsheets to analyze old data, we build AI models that production engineers continuously train with new data. This “new” data is no longer merely a set of numbers or measurements but includes unstructured data such as die photos, equipment noise, time series sensor data, and videos to make better predictions.

In the end, it's about pulling actionable information from a sea of data points. In other words, data without action is mostly useless. Why am I driving this point home? Because today, 90 percent of data created by enterprises is never used. It's dark data. And when you think about AI implementations, 46 percent of them never make it from pilot to production because the complexity of the program is not scoped appropriately.

Despite these challenges, equipment makers are already starting to incorporate digital transformation techniques into their product development processes. The benefits are palpable. Research from Boston Consulting Group found that companies that have built resiliency into their supply and design chains recovered from COVID-related downturns twice as fast as companies that have yet to embrace digital transformation.

At Renesas, we acquired a company called Reality AI that generates a compact machine learning model that runs on a microcontroller or microprocessor. This provides the unique ability to quickly detect deviations from normal patterns that may cause equipment problems. It allows manufacturing facilities to schedule preventive maintenance or minimize downtime associated with sudden equipment failure.
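
As an illustration of the underlying idea only (this is not Reality AI's actual algorithm, and all sensor values below are made up), a deviation-from-normal detector can be as simple as learning baseline statistics from healthy-equipment data and flagging new windows whose behavior drifts further from that baseline than chance allows.

```python
# Minimal anomaly-detection sketch: learn "normal" from healthy-equipment data,
# then flag windows that deviate enough to justify preventive maintenance.
# Hypothetical example only; not Reality AI's implementation.
import numpy as np

def fit_baseline(normal_samples: np.ndarray) -> tuple[float, float]:
    """Learn a simple statistical baseline (mean, std) from healthy data."""
    return float(normal_samples.mean()), float(normal_samples.std())

def is_anomalous(window: np.ndarray, baseline: tuple[float, float], threshold: float = 4.0) -> bool:
    """Flag a window whose mean drifts further from baseline than chance allows."""
    mu, sigma = baseline
    stderr = sigma / np.sqrt(len(window))            # expected spread of a window mean
    z = abs(window.mean() - mu) / stderr
    return z > threshold

# Train on a healthy machine's vibration signal, then monitor a new window
rng = np.random.default_rng(0)
healthy = rng.normal(loc=0.0, scale=1.0, size=10_000)
baseline = fit_baseline(healthy)

new_window = rng.normal(loc=0.5, scale=1.2, size=256)  # e.g., a drifting bearing
if is_anomalous(new_window, baseline):
    print("Deviation from normal pattern detected - schedule preventive maintenance")
```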

Digital Transformation Is Future-Proofing Our Industry

Digital transformation with AI is key to business success today. As the semiconductor industry undergoes a major evolution – embracing system-level design and adapting to a changing global supply chain – digital transformation and the shift left approach are powerful tools that deliver on two fronts.

The first is a productivity increase that comes from optimized tools and design processes. The closer you are to where the failure is likely to occur, the more quickly you learn and the more quickly you can fix things.

Second, and perhaps most importantly, digital transformation solves one of the biggest problems the industry has with chip design – the availability of talent. When we reduce the time taken to design a chip, we’re making our engineers far more efficient than they would be otherwise, which is increasingly important as the semiconductor industry demographic skews older.

The post Executive Blog – Companies that Embrace Digital Transformation Have More Resilient Design and Supply Chains appeared first on ELE Times.

Network RTK vs PPP-RTK: an insight into real-world performance

ELE Times - Чтв, 03/21/2024 - 12:43

By Patty Felts, Product Marketing Manager, Product Center Services

Australian automation and positioning technology provider conducts static and kinematic tests

Locating people, animals, or objects on Earth with high precision requires the use of GNSS receivers and the support of network RTK correction services that account for errors caused by the atmosphere, satellite clock drift, and signal delays.

Three standard approaches to correcting these errors are Real Time Kinematic (RTK), Precise Point Positioning (PPP) GNSS correction services, and a combination of the two, PPP-RTK. Beyond the correction service itself, a paired device such as a survey-grade GNSS receiver or a mass-market smart antenna is also required. Combining any of these correction approaches with a suitable device optimizes the positioning accuracy of the end-use application.

Many GNSS navigation applications require high accuracy. The accuracy of survey-grade GNSS receivers exceeds what mass-market smart antennas can provide. Of course, this comes at a price. Still, several high-precision GNSS navigation applications can do well with the accuracy offered by mass-market smart antennas. Examples include transportation, e-mobility, IoT use cases, and field robotics. Designers aim to equip devices with reliable, high-precision positioning at a reasonable cost.

GNSS users can verify the capabilities of such setups by hitting the road and testing them in real-world situations, which makes it possible to understand and differentiate their performance.

Aptella (formerly branded as Position Partners), an Australasian provider of automation and positioning technology solutions, had the opportunity to test the capabilities of network RTK vs PPP-RTK GNSS correction services and present the findings to their client.

We will discuss the findings, but as a first step, let us review how the RTK, PPP, and PPP-RTK approaches operate, the equipment needed, and the participants in this exercise.

Network RTK, Precise Point Positioning GNSS, and PPP-RTK

These correction approaches work in different ways. RTK GNSS correction services calculate and correct GNSS errors by comparing satellite signals from one or more reference stations. Any errors detected are then transmitted using IP-based communications, which can remain reliable even beyond a radius of 30 km from the nearest base station. Network RTK typically requires bi-directional communication between the GNSS receiver and the service, making the solution more challenging to scale. This approach can provide centimeter-level positioning accuracy in seconds.

Precise Point Positioning GNSS correction services operate differently: they broadcast a GNSS error model valid over large geographic regions. Because this service requires only unidirectional communication (IP-based or via satellite L-band), it scales to many users more easily than RTK.

PPP high-precision positioning takes between three minutes and half an hour to provide a position estimate with an accuracy of less than 10 cm. Static applications such as surveying or mapping typically use this solution, but it can be a poor fit for dynamic applications such as unmanned aerial vehicles or mobile robotics.

More recently, both approaches have been combined into what is known as PPP-RTK GNSS correction services (or State Space Representation (SSR) correction services). This combination pairs the accuracy and fast initialization times of network RTK with the broadcast nature of Precise Point Positioning. Similar to PPP, the approach is based on a model of GNSS errors that has broad geographic validity. Once a GNSS receiver has access to this PPP-RTK correction data through one-way communication, it computes its own position.

Survey-grade GNSS receiver versus mass-market smart antenna

Survey-grade receivers are devices typically used for geodetic surveying and mapping applications. They are designed to provide highly accurate and precise positioning information for civil engineering, construction, GIS data, land development, mining, and environmental management.

Today's modules can access data from multiple satellite constellations and support network RTK. These devices are typically very expensive, costing thousands of dollars each, because they are highly precise, with accuracies ranging from centimeters down to millimeters.

Mass-market smart antennas are specialized receiver/antenna-integrated devices designed to receive signals from satellite constellations and GNSS correction services right out of the box. Smart antennas capture and process raw data to determine precise locations. Standalone GNSS antennas don’t have a precision rating, as this depends on the integrated GNSS receiver and correction service to which the antennas are coupled.

While mass-market smart antennas are more affordable than survey-grade GNSS receivers, there is a corresponding performance trade-off, with accuracies ranging from a few centimeters to decimeters.

The following tests used a survey-grade GNSS receiver to verify control coordinates in static mode and compare RTK versus PPP-RTK results in the kinematic mode. The GNSS smart antenna was also employed as a pairing device for these static and kinematic tests.

Participating companies

Aptella is the company that conducted the performance test and presented the results to their client. However, the participation of four other companies was crucial.

AllDayRTK operates Australia’s highest-density network of Continuously Operating Reference Stations (CORS). Its network RTK correction services were used to compare with PPP-RTK.

u-blox’s PointPerfect provided the PPP-RTK GNSS correction services used in these tests.

Both correction services were coupled with a survey GNSS receiver, Topcon HiPer VR, and a mass-market smart antenna, the Tallysman TW5790.

Testing two correction services solutions

In the Australian city of Melbourne, Aptella conducted static and kinematic tests with several objectives in mind:

  • Test RTK and PPP-RTK GNSS corrections using a mass-market GNSS device like the Tallysman TW5790.
  • Demonstrate the capabilities of the Tallysman smart antenna coupled with PPP-RTK corrections.
  • Evaluate PointPerfect PPP-RTK GNSS corrections and assess “real world” results against published specifications.
  • Determine whether these specifications meet mass-market applications and e-transport safety requirements of 30 cm @ 95%.
  • Provide insight into use cases and applications suitable for PPP-RTK corrections.
Static results
Figure 1: GNSS antenna and survey-grade receiver

These tests allowed experts to compare the accuracy of RTK and PPP-RTK GNSS correction services supported by a mass-market Tallysman smart antenna.  They were also able to verify the PPP-RTK performance specifications published by u-blox.

First, a survey-grade Topcon HiPer VR GNSS receiver was used to verify the control coordinates in static mode. Once these were obtained, the Tallysman smart antenna took its place.

The table below summarizes representative results from both methods, PPP-RTK and RTK. Horizontal (planar) accuracy is similar for both and sits in the centimeter range. Vertical accuracy, however, is lower with PPP-RTK: RTK maintains centimeter-level vertical errors, whereas the PPP-RTK errors were in the decimeter range.

GNSS augmentation        Horizontal error (m)   Vertical error (m)   Horizontal 95% (m)   Vertical 95% (m)
RTK AllDayRTK            0.009                  0.010                0.012                0.018
PointPerfect PPP-RTK     0.048                  0.080                0.041                0.074

Furthermore, the accuracy of the mass-market device is within published specifications, meeting the 30 cm @ 95% requirement for horizontal (plan) location even when obstructed. Height measurements, however, were less accurate than the 2D horizontal coordinates, and RTK remains more accurate than PPP-RTK in the vertical component.
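
As a rough illustration of how figures such as "30 cm @ 95%" are derived, the sketch below compares logged GNSS fixes against surveyed reference coordinates and reports mean and 95th-percentile horizontal and vertical errors. The coordinates and noise levels are hypothetical and are not Aptella's measurement data.

```python
# Minimal sketch with hypothetical data: derive "X m @ 95%" accuracy figures
# by comparing each GNSS fix against a surveyed reference point.
import numpy as np

def error_stats(fixes_enu: np.ndarray, reference_enu: np.ndarray) -> dict:
    """fixes_enu: N x 3 array of East/North/Up positions in metres."""
    diff = fixes_enu - reference_enu
    horizontal = np.hypot(diff[:, 0], diff[:, 1])   # 2D (planar) error
    vertical = np.abs(diff[:, 2])                   # height error
    return {
        "horizontal_mean_m": horizontal.mean(),
        "vertical_mean_m": vertical.mean(),
        "horizontal_95_m": np.percentile(horizontal, 95),
        "vertical_95_m": np.percentile(vertical, 95),
    }

# Hypothetical static test: one hour of 1 Hz fixes on a known point
rng = np.random.default_rng(1)
reference = np.array([0.0, 0.0, 0.0])
fixes = reference + rng.normal(scale=[0.03, 0.03, 0.06], size=(3600, 3))

stats = error_stats(fixes, reference)
meets_spec = stats["horizontal_95_m"] <= 0.30       # the 30 cm @ 95% requirement
print(stats)
print("meets 30 cm @ 95% (horizontal):", meets_spec)
```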

Kinematic results

On the streets of Melbourne, Aptella experts tested RTK and PPP-RTK corrections operating in different kinematic modes with variable speeds, such as walking under open skies and driving in different environments.

The test setup using an RTK network consisted of AllDayRTK corrections and a survey-grade GNSS receiver. On the other hand, the PPP-RTK test setup was supported by u-blox PointPerfect and the Tallysman smart antenna. The antennas for both setups were mounted on the roof of the vehicle and driven through different routes to encounter various GNSS conditions.

Walking in the open sky: This test involved a walk along the riverbank. The results from both setups were similar, showing that PPP-RTK is well suited for mass-market applications.

Figure 2: Walking tests with RTK and PPP-RTK

On-road driving with varying conditions: This test consisted of driving on Melbourne roads in different conditions, including open skies and partial or total obstructions to GNSS. The route included driving under bridges and areas with multipath effects. Vegetation in the area at the start of the test prevented the smart antenna’s IMU from initializing. No IMU/dead reckoning capability was used during the drive test.

The results obtained while the vehicle moved through a long tunnel under the railroad tracks were particularly revealing. In this adverse environment, the PPP-RTK approach still reported a position, and it reconverged shortly after RTK.

Figure 3: RTK vs PPP-RTK under a railway bridge in Melbourne

Another revealing result of this second test was that the Tallysman smart antenna didn’t seem to deviate from its path when passing under short bridges.

Figure 4: RTK vs PPP-RTK under a short bridge

Driving through an outage: The outage test took place in an extended, challenging environment for GNSS. This occurred when the car drove under the pedestrian overpass at the Melbourne Cricket Ground. The PPP-RTK solution maintained the travel trajectory and effectively tracked the route (in yellow). On the other hand, the RTK network solution reported positions off the road and on the railway tracks. In this outage condition, RTK took a long time to reconverge to a fixed solution.

Figure 5: Correction services tests under a long structure

Open-sky driving: The final on-road test was conducted in an open-sky environment where the two setups performed similarly. They provided lane-level accuracy and suitability for mass-market applications. However, ground truthing and further testing are required to fully evaluate the accuracy and reliability of PPP-RTK in these conditions.

Figure 6: Correction services comparison driving through Melbourne

Final remarks

The five static and dynamic tests conducted by Aptella were instrumental in assessing the effectiveness of different setups to determine the position of stationary and moving entities.

  • From the static test, Aptella concluded that PPP-RTK, coupled with the Tallysman smart antenna, provides centimeter-level horizontal accuracy and performs similarly to RTK. However, this was not the case for vertical accuracy, with PPP-RTK at the decimeter level.
  • Regarding the kinematic tests, Aptella obtained significant results, particularly when the environment impeded communication with GNSS. Even without IMU or dead reckoning, the PPP-RTK performed well with lane-level tracking. With short outages such as railway bridges and underpasses, PPP-RTK maintained an acceptable trajectory, while RTK required a long time to reconverge after emerging from these challenging conditions.
  • Overall, Aptella has demonstrated that the PPP-RTK and GNSS smart antenna combination delivers results suitable for mass-market applications requiring centimeter-level horizontal accuracy.

As mentioned above, survey-grade devices are costly although highly accurate. A combination of survey-grade GNSS receiver and network RTK correction service is recommended in geodetic surveying use cases that require high height accuracy.

Conversely, mass-market smart antenna devices using PPP-RTK corrections are less expensive but also less accurate. Nevertheless, they are well suited for static applications that don’t require GNSS heights at survey grade.

For many high-precision navigation applications, such as transportation, e-mobility, and mobile robotics, PPP-RTK is sufficient to achieve the level of performance these end applications require. The relative affordability of smart antenna devices, combined with PPP-RTK’s ability to broadcast a single stream of corrections to all endpoints, makes it easier to scale from a few prototypes to large fleets of mobile IoT devices.

The post Network RTK vs PPP-RTK: an insight into real-world performance appeared first on ELE Times.

Unparalleled capacitance for miniaturized designs: Panasonic Industry launches new ZL Series Hybrid capacitors

ELE Times - Чтв, 03/21/2024 - 12:00

The compact and AEC-Q200-compliant EEH-ZL Series stands out with industry-leading capacitance and high Ripple Current specs

The ZL series is the latest addition to Panasonic Industry's electrolytic polymer hybrid capacitor portfolio. Relative to its compact dimensions, it offers unrivalled capacitance values and is therefore likely to attract considerable market attention:

Capacitance: For five case sizes from ø5×5.8 mm to ø10×10.2 mm, the ZL series offers the largest capacitance in the industry and exceeds the values of competitor standard products by approximately 170%.

Ripple current performance exceeds competitor products' specifications, alongside lower ESR within the same case size.
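
To see why lower ESR matters within the same case size, a quick back-of-the-envelope calculation helps: ripple current heats the capacitor as P = I_rms² × ESR, so a lower-ESR part either runs cooler at the same ripple current or tolerates more ripple within the same self-heating budget. The sketch below uses purely illustrative values, not ZL-series specifications.

```python
# Illustrative ripple-current self-heating estimate (hypothetical values,
# not Panasonic ZL-series specifications).
import math

def self_heating_w(i_ripple_rms_a: float, esr_ohm: float) -> float:
    """Power dissipated in the capacitor: P = I_rms^2 * ESR."""
    return i_ripple_rms_a ** 2 * esr_ohm

def max_ripple_a(p_budget_w: float, esr_ohm: float) -> float:
    """Ripple current allowed for a given self-heating budget."""
    return math.sqrt(p_budget_w / esr_ohm)

esr_standard, esr_low = 0.030, 0.020     # ohms, illustrative only
i_ripple = 2.0                           # A rms, illustrative only

print(self_heating_w(i_ripple, esr_standard))   # 0.12 W in the higher-ESR part
print(self_heating_w(i_ripple, esr_low))        # 0.08 W in the lower-ESR part
print(max_ripple_a(0.12, esr_low))              # ~2.45 A rms for the same 0.12 W
```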

The new ZL is AEC-Q200 compliant, meeting the strict quality-control standards that are particularly crucial for the automotive industry. It offers high-temperature resistance, with guaranteed operation for 4,000 hours at 125°C and 135°C. With a focus on durability, the ZL series also offers vibration-proof variants capable of withstanding shocks up to 30 G, making it a reliable choice.

In summary, this next-generation, RoHS qualified Hybrid Capacitor stands as the ultimate solution for automotive and industrial applications, where compact dimensions are an essential prerequisite.

Tailored for use in various automotive components including water pumps, oil pumps, cooling fans, high-current DC to DC converters, and advanced driver-assistance systems (ADAS), it also proves invaluable in industrial settings such as inverter power supplies for robotics, cooling fans, and solar power systems. Furthermore, it serves a pivotal role in industrial power supplies for both DC and AC circuits, spanning from inverters to rectifiers, and finds essential application in communication infrastructure equipment such as base stations, servers, routers, and switches.

The post Unparalleled capacitance for miniaturized designs: Panasonic Industry launches new ZL Series Hybrid capacitors appeared first on ELE Times.

Silicon carbide power device market to grow to $5.33bn in 2026

Semiconductor today - Чтв, 03/21/2024 - 11:59
Benefitting from robust demand from downstream applications, market research firm TrendForce forecasts that the silicon carbide (SiC) power device market will grow to $5.33bn by 2026, with mainstream applications still highly reliant on electric vehicles and renewable energy sources...

Silicon carbide (SiC) counterviews at APEC 2024

EDN Network - Чтв, 03/21/2024 - 11:06

At this year’s APEC in Long Beach, California, Wolfspeed CEO Gregg Lowe’s speech was a major highlight of the conference program. Lowe, the chief of the only vertically integrated silicon carbide (SiC) company and cheerleader of this power electronics technology, didn’t disappoint.

In his plenary presentation, "The Drive for Silicon Carbide – A Look Back and the Road Ahead – APEC 2024," he called SiC a market hitting a major inflection point. "It's a story of four decades of American ingenuity at work, and it's safe to say that the transition from silicon to SiC is unstoppable."

Figure 1 Lowe: The future of this amazing technology is only beginning to dawn on the world at large, and within the next decade or so, we will look around and wonder how we lived, traveled, and worked without it. Source: APEC

Lowe told the APEC 2024 attendees that the demand for SiC is exploding, and so is the number of applications using this wide bandgap (WBG) technology. “Technology transitions like this create moments and memories that last a lifetime, and that’s where we are with SiC right now.”

Interestingly, just before Lowe’s presentation, Balu Balakrishnan, chairman and CEO of Power Integrations, raised questions about the viability of SiC technology during his presentation titled “Innovating for Sustainability and Profitability”.

Balakrishnan’s counterviews

While telling Power Integrations' gallium nitride (GaN) story, Balakrishnan recounted how his company started heavily investing in SiC 15 years ago and spent $65 million to develop this WBG technology. "One day, sitting in my office, while doing the math, I realized this isn't going to work for us because of the amount of energy it takes to manufacture SiC and that the cost of SiC is so much more than silicon," he said.

“This technology will never be as cost-effective as silicon despite its better performance because it’s such a high-temperature material, which takes a humongous amount of energy,” Balakrishnan added. “It requires expensive equipment because you manufacture SiC at very high temperatures.”

The next day, Power Integrations cancelled its SiC program and wrote off $65 million. "We decided to discontinue not because of technology, but because we believe it's not sustainable and it's not going to be cost-effective," he said. "That day, we switched over to GaN and doubled down on it because it's low-temperature, operates at temperatures similar to silicon, and mostly uses the same equipment as silicon."

Figure 2 Balakrishnan: GaN will eventually be less expensive than silicon for high-voltage switches. Source: APEC

So, why does Power Integrations still have SiC product offerings? Balakrishnan acknowledged that SiC can go to higher voltages and power levels and is a more mature technology than GaN because it started earlier.

"There are certain applications where SiC is very attractive today, but I'll dare to say that GaN will get there sometime in the future," he added. "Fundamentally, there isn't anything wrong with taking GaN to higher voltages and power levels." He mentioned a 1,200-V GaN device Power Integrations recently announced and claimed that his company plans to announce another GaN device with an even higher voltage very soon.

Balakrishnan recognized that there are problems to be solved. “But these challenges require R&D efforts rather than a technology breakthrough,” he said. “We believe that GaN will get to the point where it’ll be very competitive with SiC while being far less expensive to build.”

Lowe’s defense

In his speech, Lowe also recognized the SiC-related cost and manufacturability issues, calling them near-term turbulence. However, he was optimistic that undersupply vs demand issues encompassing crystal boules, substrate capability, wafering, and epi will be resolved by the end of this decade.

"We will continue to realise better economic value with SiC by moving from 150-mm to 200-mm wafers, which increases the area by 1.7x and decreases the cost by about 40%," he said. His hopes for resolving cost and manufacturability issues also seemed to rest on huge investment in SiC technology, with the automotive industry acting as a major catalyst.
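
A rough sketch of the arithmetic behind that claim, using an illustrative die size and an assumed wafer-processing-cost ratio rather than Wolfspeed figures: a 200-mm wafer offers about 1.78 times the area of a 150-mm wafer, and if processing it costs, say, 1.1 times as much, the cost per die falls by roughly 40 percent.

```python
# Back-of-the-envelope wafer-size economics (illustrative assumptions only).
import math

def usable_dies(wafer_diameter_mm: float, die_area_mm2: float, edge_mm: float = 5.0) -> int:
    """Crude dies-per-wafer estimate: usable wafer area divided by die area."""
    radius = wafer_diameter_mm / 2 - edge_mm        # ignore an edge-exclusion ring
    return int(math.pi * radius ** 2 / die_area_mm2)

die_area = 25.0                        # mm^2, hypothetical SiC power MOSFET die
dies_150 = usable_dies(150, die_area)  # ~615 dies
dies_200 = usable_dies(200, die_area)  # ~1134 dies

print("area ratio (200 mm / 150 mm):", round((200 / 150) ** 2, 2))   # ~1.78
print("die count ratio:", round(dies_200 / dies_150, 2))             # ~1.84

# If processing a 200-mm wafer costs 1.1x a 150-mm wafer (an assumption),
# cost per die drops by roughly 40%, consistent with the figure quoted above.
print("approx. cost-per-die reduction:", round(1 - 1.1 * dies_150 / dies_200, 2))
```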

For a reality check on these counterviews about the viability of SiC, a company dealing in both SiC and GaN businesses could offer a balanced perspective. Hence a visit to Navitas' booth at APEC 2024, where the company's VP of corporate marketing, Stephen Oliver, explained the evolution of SiC wafer costs.

He said a 6-inch SiC wafer from Cree cost nearly $3,000 in 2018. Fast-forward to 2024, and a 7-inch wafer from Wolfspeed (renamed from Cree) costs about $850. Looking ahead, Oliver envisions that the cost could come down to $400 by 2028, with devices built on 12-inch to 15-inch SiC wafers.

Navitas, a pioneer in the GaN space, acquired startup GeneSiC in 2022 to cater to both WBG technologies. At the show, in addition to Gen-4 GaNSense Half-Bridge ICs and GaNSafe, which incorporates circuit protection functionality, Navitas also displayed Gen-3 Fast SiC power FETs.

In the final analysis, Oliver's viewpoint about SiC tilted toward Lowe's pragmatism regarding SiC's shift from 150-mm to 200-mm wafers. Recent technology history is a testament to how economies of scale can manage cost and manufacturability issues, and that is what the SiC camp is counting on.

A huge investment in SiC device innovation and the backing of the automotive industry should also be helpful along the way.


The post Silicon carbide (SiC) counterviews at APEC 2024 appeared first on EDN.
