Microelectronics world news

RF amplifier series gains 300-W model

EDN Network - Fri, 05/31/2024 - 00:12

The latest addition to the R&S BBA300 family of RF amplifiers delivers output power of 300 W P1dB or software-adjustable saturation power up to 450 W. Operating at up to 6 GHz, the broadband amplifier can generate the high field strengths required for critical test environments, making it useful for EMC, OTA coexistence, and RF component testing.

The 300-W model is available in both the BBA300-CDE and BBA300-DE series, which have respective continuous frequency ranges of 380 MHz to 6 GHz and 1 GHz to 6 GHz. This wide frequency range enables the instrument to cover GSM, LTE, and 5G NR mobile communication standards, as well as WLAN, Bluetooth, and Zigbee wireless standards. The amp also supports continuous sweeping of RF signals across the entire frequency range.

The PK1 software option offers two tools for tailoring the RF output signal: bias point adjustment, which allows toggling between class A and class AB, and a choice between maximum output power or high mismatch tolerance.

To request a price quote for the BBA300 RF amplifier, use the link to the product page below.

BBA300 product page

Rohde & Schwarz  



The post RF amplifier series gains 300-W model appeared first on EDN.

SaaS vs Traditional Software: What’s Best for You?

Electronic lovers - Thu, 05/30/2024 - 22:17

When it comes to choosing software solutions for your business, the decision between Software as a Service (SaaS) and traditional software can be daunting. Each has its own set of benefits and drawbacks, and what works for one company might not be ideal for another. So, how do you decide which is best for you? Let’s explore the key differences and advantages of each to help you make an informed decision.

Cost Considerations

One of the most significant differences between SaaS and traditional software is the cost structure. Traditional software typically requires a hefty upfront investment. You’ll need to purchase licenses, invest in hardware, and possibly hire IT staff to manage installations and maintenance. These costs can add up quickly, especially for small to mid-sized businesses.

SaaS, on the other hand, operates on a subscription model. You pay a regular fee that usually covers everything from software access to updates and customer support. This spreads out the costs over time and can make budgeting easier. There’s no need for a large capital expenditure upfront, which can free up resources for other critical areas of your business. Additionally, SaaS eliminates the need for dedicated IT infrastructure, potentially saving you money on both personnel and physical space.
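As a rough illustration of the cost trade-off described above, here is a minimal break-even sketch. Every dollar figure is an invented assumption for illustration, not real vendor pricing; your own numbers will differ.

```python
# Hypothetical break-even sketch for the cost trade-off: every figure here is
# an invented assumption for illustration, not real vendor pricing.

def cumulative_cost_traditional(months, upfront=30_000, annual_maintenance=6_000):
    """One-time license/hardware outlay plus maintenance, prorated monthly."""
    return upfront + annual_maintenance * months / 12

def cumulative_cost_saas(months, monthly_fee=1_500):
    """Flat subscription covering access, updates, and support."""
    return monthly_fee * months

def break_even_month(max_months=120):
    """First month at which the SaaS route becomes the more expensive one."""
    for m in range(1, max_months + 1):
        if cumulative_cost_saas(m) > cumulative_cost_traditional(m):
            return m
    return None

print(break_even_month())  # with these assumptions: month 31
```

The point of the sketch is the shape of the curves, not the specific crossover: SaaS starts cheap and grows linearly, while traditional software front-loads its cost, so the "better" option depends on how long you expect to run the system.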

Flexibility and Scalability

In today’s fast-paced business environment, flexibility and scalability are crucial. Traditional software can be rigid, often requiring complex installations and significant downtime when scaling up. If your business grows, you might need to purchase additional licenses or even new hardware, which can be both time-consuming and costly.

SaaS excels in this area. It’s designed to be flexible and scalable, allowing you to adjust your usage based on your current needs. Whether you’re expanding your team or adding new features, SaaS can scale effortlessly without major disruptions. This agility is particularly valuable for businesses experiencing rapid growth or those with fluctuating demands.

Accessibility and Collaboration

Another critical factor to consider is accessibility. Traditional software is usually installed on individual computers or servers within your office, which can limit access for team members who work remotely or travel frequently. Collaboration can also be challenging, as sharing files and information often requires manual processes or additional software.

SaaS offers a distinct advantage here. Being cloud-based, SaaS applications can be accessed from anywhere with an internet connection. This makes it easier for your team to collaborate in real-time, no matter where they are. Imagine your sales team being able to update the CRM system from the field, or your remote employees working seamlessly with in-office staff. This level of accessibility can significantly boost productivity and improve communication across your organization.

Updates and Maintenance

Keeping software up-to-date is essential for security and performance. Traditional software often requires manual updates, which can be time-consuming and disruptive. You might also need IT staff to manage these updates, adding to your operational costs.

SaaS simplifies this process. Updates and maintenance are handled by the provider and are usually rolled out automatically. This ensures that your software is always current with the latest features and security patches, without any effort on your part. This hands-off approach to updates can save you time and reduce the risk of running outdated or vulnerable software.

Security and Reliability

Security is a top concern for any business. Traditional software security relies heavily on your internal IT team’s capabilities. You need to ensure that all security measures are in place, which can be challenging and resource-intensive.

SaaS providers invest heavily in security to protect their clients’ data. They employ advanced encryption, conduct regular security audits, and often have dedicated security teams. Additionally, SaaS solutions typically include automated backups, ensuring your data is safe and recoverable in case of an incident. This level of security and reliability can give you peace of mind, knowing that your data is protected by experts.

Conclusion

Choosing between SaaS and traditional software depends on your specific business needs and circumstances. SaaS offers cost efficiency, flexibility, scalability, and enhanced collaboration, making it an excellent choice for businesses looking for modern, adaptable solutions. Traditional software might still be suitable for companies with specific needs that require on-premises solutions or have substantial existing infrastructure.

Ultimately, the best choice is the one that aligns with your business goals, budget, and operational requirements. By carefully evaluating the benefits of each option, you can make a decision that drives your business forward and improves overall efficiency. So, take a closer look at what SaaS and traditional software can offer, and choose the path that best supports your strategic objectives.

The post SaaS vs Traditional Software: What’s Best for You? appeared first on Electronics Lovers ~ Technology We Love.

Infineon Fuses the Power of Si, SiC, and GaN in New Power Supply Units

AAC - Thu, 05/30/2024 - 20:00
With the goal of decarbonizing AI server racks, the new series of power supply units (PSUs), ranging from 3 kW to 12 kW, focuses on efficiency.

Has Malaysia’s ‘semiconductor moment’ finally arrived?

EDN Network - Thu, 05/30/2024 - 16:43

Malaysia and Taiwan were among the early semiconductor outposts during the late 1960s, when U.S. companies like Intel began to outsource their assembly and test operations to Asia. In the half-century since, while Taiwan has reached the design and manufacturing pinnacle, Malaysia has mostly remained busy with back-end tasks: chip assembly, packaging, and testing.

Malaysia—which currently accounts for 13% of the semiconductor packaging, assembly, and testing market—is now looking to position itself as a global IC design and manufacturing hub amid U.S. restrictions on China’s chip industry. According to a Reuters report, the Malaysian government plans to pour $107 billion into its semiconductor industry for IC design, advanced packaging, and semiconductor manufacturing equipment.

Malaysia, long seeking to move beyond back-end chip assembly and testing and into high-value, front-end design work, is confident that time is now on its side. It’s worth noting here that Malaysia isn’t merely eyeing U.S. or western semiconductor outfits; chip firms in China aiming to diversify supply chains are also considering Malaysia for packaging and assembly operations as well as setting up design centers.

While Intel is setting up a $7 billion advanced packaging plant and Infineon is building a $5.4 billion power semiconductors fab in Malaysia, a Reuters report provides details of Chinese chip firms tapping Malaysian partners to assemble a portion of their high-end chips in the wake of U.S. sanctions.

Take the case of Xfusion, formerly a Huawei unit, joining hands with NationGate to assemble GPU servers in Malaysia and thus avoid U.S. sanctions. Likewise, chip assembly and testing firm TongFu Microelectronics is building a new facility in Malaysia in a joint venture with AMD. Next, RISC-V processor firm StarFive is setting up a design center in Penang.

However, Malaysia will need immaculate execution, not just money, to move up the semiconductor ladder: other destinations like India and Vietnam are also vying for a stake in chip design and manufacturing services, and the country’s push from back-end assembly and test into high-value front-end design work has been underway for quite some time without a breakthrough.

So, while China’s chip outfits moving to Malaysia will add weight to the country’s efforts to become a semiconductor hub in Asia, success will take strong execution, not just tax breaks, subsidies, and visa-fee exemptions. Malaysia does have an experienced workforce and sophisticated equipment, critical elements in the semiconductor design recipe.

What’s required next is a few promising startups in semiconductor design and advanced packaging domains, as hinted by Malaysian Prime Minister Anwar Ibrahim during his policy speech.



The post Has Malaysia’s ‘semiconductor moment’ finally arrived appeared first on EDN.

Contactless electric bell on a gradient relay

EDN Network - Thu, 05/30/2024 - 16:35

The operation of a contactless electric bell is based on the change in electrical resistance of a temperature-sensitive element (a thermistor) when a finger approaches or touches the bell button. To prevent continuous ringing, the device uses a gradient relay, which switches on the bell only on a brief change (rise) in the temperature of the sensing element.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The contactless electric bell is built around a gradient relay [1–3] with a temperature-sensitive sensor. When a finger approaches the sensor (a thermistor), the sensor warms and its resistance changes. The gradient relay is activated, switching on the bell. The device is sensitive enough that a small local change in sensor temperature triggers the bell. Once the finger is removed, the thermistor’s resistance returns to its original value and the bell switches off.

Such a device is especially relevant during epidemics, since ringing the bell without touching a shared, possibly contaminated button makes the transmission of viruses and microbes less likely.

The contactless electric bell in Figure 1 is built around comparator U1.1 of an LM339 chip. The device works as follows.

Figure 1 Electrical circuit of the non-contact doorbell.

A 1:1 ratio is desirable for the resistive divider formed by R1 and Rsens. In the initial state, when the device is switched on, the voltage at the junction of R1 and Rsens, and thus at both inputs of comparator U1.1, is the same: approximately half the supply voltage. The comparator output is therefore low. Thermistor Rsens is the element that provides contactless control of the state of the input divider.

Bringing a finger close to the thermosensitive element, resistor Rsens, changes its resistance; simply breathing on the resistor is enough. This unbalances the voltages at the comparator inputs. Thanks to capacitor C1, the voltage at the right terminal of resistor R3 remains unchanged for some time, while the voltage at the left terminal of R3 changes, allowing the comparator to switch.

A logic-high voltage then appears at the comparator output. This voltage drives the base of output transistor VT1, a BC547 or equivalent; the transistor turns on and connects the bell (an HCM1612X electromagnetic sounder with an integrated oscillator) to the power source. When the finger is moved away from Rsens, the thermistor’s resistance returns to its original value, the device resets, and the bell is disconnected.

Thermistors with either a positive or a negative temperature coefficient of resistance can be used for Rsens; the device works in either case. To ensure proper operation, you may have to swap the inputs of comparator U1.1 (pins 4 and 5).
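For readers who want to experiment before soldering, here is a minimal behavioral sketch of the gradient-relay principle in Python. The supply voltage, time constant, and trip threshold are illustrative assumptions, not values taken from Figure 1: one comparator input follows the R1/Rsens divider directly, the other sees the same node delayed by the R3/C1 lag, so only a fast resistance change trips the "bell".

```python
# Behavioral sketch of the gradient-relay principle (not a SPICE model; all
# component values are illustrative assumptions). One comparator input follows
# the R1/Rsens divider directly; the other sees the same node through R3 into
# C1, so it lags. A fast change in Rsens therefore produces a transient
# difference that trips the comparator; a slow drift does not.

VCC = 9.0        # assumed supply voltage, volts
R1 = 10_000.0    # fixed divider resistor, ohms
R3_C1_TAU = 2.0  # assumed R3*C1 time constant, seconds
DT = 0.01        # simulation step, seconds
TRIP = 0.05      # assumed comparator trip threshold, volts

def divider_voltage(r_sens):
    """Voltage at the R1/Rsens junction."""
    return VCC * r_sens / (R1 + r_sens)

def simulate(r_sens_of_t, duration=5.0):
    """Return True if the comparator (and hence the bell) trips at any point."""
    v_ref = divider_voltage(r_sens_of_t(0.0))  # C1 starts fully settled
    tripped = False
    t = 0.0
    while t < duration:
        v_in = divider_voltage(r_sens_of_t(t))
        # C1 charges toward v_in through R3 (first-order lag)
        v_ref += (v_in - v_ref) * DT / R3_C1_TAU
        if abs(v_in - v_ref) > TRIP:
            tripped = True
        t += DT
    return tripped

# NTC warmed by a finger: resistance drops abruptly at t = 1 s -> bell rings
fast = lambda t: 10_000.0 if t < 1.0 else 8_000.0
# Slow ambient drift (50 ohms/s) over the whole run -> bell stays silent
slow = lambda t: 10_000.0 - 50.0 * t

print(simulate(fast), simulate(slow))  # expected: True False
```

The same mechanism explains why the circuit ignores slow ambient temperature changes: a first-order lag tracking a slow ramp stays within the trip threshold, while a step from a finger's warmth briefly exceeds it.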

Michael A. Shustov is a doctor of technical sciences, candidate of chemical sciences and the author of over 800 printed works in the field of electronics, chemistry, physics, geology, medicine, and history.

Related Content

References

  1. Shustov M.A. “Gradient relay”. Radioamateur (BY). 2000. No. 10. pp. 28–29.
  2. Shustov M.A., Shustov A.M. “Gradient Detector: a new device for the monitoring and control of signal deviations”. Elektor Electronica Fast Forward Start-Up Guide 2016–2017. 2017. pp. 44–47.
  3. Shustov M.A., Shustov A.M. “Electronic Circuits for All”. London: Elektor International Media BV, 2017. 397 pp. Serbian edition: “Elektronika za sve: Priručnik praktične elektronike”. Niš: Agencija EHO, 2017, 2018. 392 pp.

The post Contactless electric bell on a gradient relay appeared first on EDN.

Big 7-segment display clock

Reddit:Electronics - Thu, 05/30/2024 - 15:33

Finally finished a project that took way too much time to complete.

I made a big 7-segment display clock (1470x480x51 mm) using WS2812B LEDs and an ESP32 as the brain. Since the clock is meant for indoor use, the frame was built out of four 12-mm plywood boards stacked together. There is also a 3-mm acrylic sheet in between, which protects the light-diffusing film for the LEDs. There is a big cutout in the back, where all of the electronics are mounted inside a 3D-printed case. Three tactile switches on the case are used for setting the clock into access-point mode or updating the program via FTDI.

Since the clock is using ESP32, it gets its time from an NTP server. I made a simple web interface for it, where some settings can be changed (WiFi credentials, NTP server, LED color/brightness...). Besides displaying the time, it also displays date and temperature, which is taken from the DS18B20 sensor that is attached to the outside of the case.
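For the curious, the core of what such a clock does after querying an NTP server is a fixed epoch shift, sketched below in Python. This is a generic illustration of the NTP-to-Unix conversion, not the poster's actual ESP32 firmware; the sample timestamp is just a convenient round value.

```python
# Minimal sketch of the NTP-to-Unix time conversion an NTP-synced clock
# performs (a generic illustration, not the poster's firmware). NTP timestamps
# count seconds from 1900-01-01; Unix time counts from 1970-01-01, so a fixed
# era offset bridges the two.

from datetime import datetime, timezone

NTP_UNIX_OFFSET = 2_208_988_800  # seconds between the 1900 and 1970 epochs

def ntp_to_datetime(ntp_seconds, utc_offset_hours=0):
    """Convert a raw NTP seconds field to a timezone-shifted datetime."""
    unix_seconds = ntp_seconds - NTP_UNIX_OFFSET + utc_offset_hours * 3600
    return datetime.fromtimestamp(unix_seconds, tz=timezone.utc)

# 3,913,056,000 NTP seconds corresponds to 2024-01-01 00:00:00 UTC
print(ntp_to_datetime(3_913_056_000))
```

On the ESP32 itself this bookkeeping is typically hidden behind the SDK's SNTP client, but the underlying arithmetic is the same.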

The design itself is very bad from a manufacturing standpoint, since there are a lot of easy improvements that could be implemented to shorten assembly time. The clock is also not suitable for outdoor use, not only because of the wooden frame, but also because the LEDs are not bright enough for the clock to be usable during daylight hours.

submitted by /u/Yacob135

Curiosity killed the mosquito

Reddit:Electronics - Thu, 05/30/2024 - 14:51

This is a controller for shutters that failed and was given to me for repair. The component on the left is a mains-voltage bridge rectifier. Two mosquitoes decided to try and short it out. It did not end well for them.

submitted by /u/DolfinButcher

Cambridge GaN Devices signs MoU with ITRI covering GaN-based power supply development

Semiconductor today - Thu, 05/30/2024 - 14:40
Fabless firm Cambridge GaN Devices Ltd (CGD) — which was spun out of the University of Cambridge in 2016 to design, develop and commercialize power transistors and ICs that use GaN-on-silicon substrates — has signed a memorandum of understanding (MoU) with Taiwan’s Industrial Technology Research Institute (ITRI) to solidify a partnership in developing high-performance GaN solutions for USB-PD adaptors. The MoU also covers the sharing of domestic and international market information, joint visits to potential customers, and promotion...

Nuvoton Develops OpenTitan based Security Chip as Next Gen Security Solution for Chromebooks

ELE Times - Thu, 05/30/2024 - 14:18

Nuvoton Technology Corporation, a global leader in embedded controller and secure IC solutions, announced today that Google’s ChromeOS plans to use the first commercial chip built on the OpenTitan open source secure silicon design as an evolution of its security chip for Chromebooks. This is a result of years of co-development and a close partnership between the companies.

The new chip is based on OpenTitan, a commercial-grade open source silicon design that provides a trustworthy, transparent, and secure silicon platform. It will be used by Google to provide the best protection to Chromebook users. OpenTitan ensures that the system boots from a known good state using properly verified code and establishes a hardware root of trust (RoT) for a variety of system-critical cryptographic operations.

“Hardware security is something we don’t compromise on. We are excited to partner with the dream team of Nuvoton, a valued, historic, strategic partner, and lowRISC, a leader in secure silicon, to maintain this high bar of quality,” said Prajakta Gudhadhe, Sr Director, ChromeOS Platform Engineering. “Google is proud of taking an active role in helping build OpenTitan into a first-of-its-kind open source project, and now we’re excited to see Nuvoton and lowRISC take the next big step and implement a first-of-its-kind open source chip that will protect users all over the world.”

“Nuvoton has been a reliable supplier of embedded controllers (EC) to Chromebooks and Baseboard Management Controllers (BMC) to Google servers in the past decade,” said Erez Naory, VP of Client and Security Products at Nuvoton. “We have now expanded this collaboration with Google and our other OpenTitan partners to bring a new strengthened security IC to Google products and the open market.”

With the goal of making a completely transparent and trustworthy secure silicon platform, the open source project has been developed in the past five years by the OpenTitan coalition of companies hosted by lowRISC C.I.C., the open silicon ecosystem organization. The dedication and expertise of OpenTitan’s skilled community of contributors brought this industry-leading technology to life, producing the world’s first open source secure chip with commercial-grade design verification (DV), testing, and continuous integration (CI).

“Google’s integration of OpenTitan into Chromebooks is a watershed moment — the era of commercial-grade open source silicon has truly arrived,” said Dr. Gavin Ferris, CEO of lowRISC, OpenTitan’s non-profit host organization. “It’s a fantastic validation of the Silicon Commons approach adopted by our OpenTitan project partners and proves that collaborative engineering, driven by an unerring focus on quality and transparency, can successfully deliver products meeting the most stringent security requirements.”

The OpenTitan secure silicon samples are available to the broader market through an early access program and will be in volume production by 2025.

The post Nuvoton Develops OpenTitan based Security Chip as Next Gen Security Solution for Chromebooks appeared first on ELE Times.

Memristor Prototype May Give AI Chips a Sense of Time

AAC - Thu, 05/30/2024 - 02:00
“Do you have the time?” With the University of Michigan’s latest memristor discovery, AI chips may soon note the sequence of events.

Luminus adds improved version of SST-08-UV LED

Semiconductor today - Wed, 05/29/2024 - 20:33
In the latest addition to its high-power UV-A LED series, Luminus Devices Inc of Sunnyvale, CA, USA – which designs and makes LEDs and solid-state technology (SST) light sources for illumination markets – has introduced the SST-08H-UV as the improved version of its SST-08-UV...

Using Complex Permeability to Characterize Magnetic Core Losses

AAC - Wed, 05/29/2024 - 20:00
In this article, we use the concept of magnetic field intensity to help explain how complex permeability models the losses of a magnetic core.

Axus secures $12.5m in funding from IntrinSiC

Semiconductor today - Wed, 05/29/2024 - 19:50
Axus Technology of Chandler, AZ, USA – a provider of chemical-mechanical polishing/planarization (CMP), wafer thinning and surface-processing solutions – has received $12.5m in capital funding from IntrinSiC Investment LLC of Palo Alto, CA, USA, a private equity firm that invests in suppliers of key enabling technology for silicon carbide (SiC). In addition, the firm has secured a significant revolving and term line of credit from a leading national bank...

Infineon launches CoolGaN transistor families built on 8-inch foundry processes

Semiconductor today - Wed, 05/29/2024 - 14:27
Infineon Technologies AG of Munich, Germany has announced two new generations of high-voltage (HV) and medium-voltage (MV) CoolGaN devices that now enable customers to use gallium nitride (GaN) in voltage classes from 40V to 700V in a broader array of applications that help to drive digitalization and decarbonization...

Microsoft’s Build 2024: Silicon and associated systems come to the fore

EDN Network - Wed, 05/29/2024 - 14:00

Microsoft’s yearly Build developer conference took place last Tuesday through Thursday, May 21-23 (as I write these words on Memorial Day), and was rife with AI-themed announcements spanning mobile-to-enterprise software and services.

Curiously, however, many of these announcements were derived from, and in general the most notable news (IMHO) came from, a media-only event held one day earlier, on Monday, May 20. There, Microsoft and its longstanding Arm-based silicon partner Qualcomm co-announced the long-telegraphed Snapdragon X Elite and Plus SoCs along with Surface Laptop and Pro systems based on them. Notably, too, Microsoft-branded computers weren’t the only ones on the stage this time; Acer, Asus, Dell, HP, Lenovo and Samsung unveiled ‘em, too.

To assess the importance of last week’s news, let’s begin with a few history lessons. First off, a personal one: as longtime readers may recall, I’ve long covered and owned Windows-on-Arm operating systems and computers, beginning with my NVIDIA Tegra 3 SoC-based Surface with Windows RT more than a decade back.

Three years ago, I acquired (and still regularly use, including upgrading it to Windows 11 Pro) a Surface Pro X powered by the Snapdragon 8cx SC8180X-based, Microsoft-branded SQ1 SoC.

More recently, I bought off eBay a gently used, modestly discounted “Project Volterra” system (officially: Windows Dev Kit 2023) running a Qualcomm Snapdragon 8cx Gen 3 (SQ3) SoC.

And even more recently, as you can read about in more detail in my just-published coverage, I generationally backstepped, snagging off Woot! (at a substantial discount) a used example of Microsoft and Qualcomm’s first developer-tailored stab at Windows-on-Arm, the ECS LIVA Mini Box QC710 Desktop, based on a prior-generation Snapdragon 7c SC7180 SoC.

So, you could say that I’ve got no shortage of experience with Windows-on-Arm, complete with no shortage of scars, most caused by software shortcomings. Windows RT, for example, relied exclusively on Arm-compiled applications (further complicated by an exclusive Microsoft Store online distribution scheme); unsurprisingly, the available software suite garnered little adoption beyond Microsoft’s own titles.

With Windows 10 for Arm, as I complained about in detail at the time, while an emulation layer for x86-compiled content did exist, both its performance and inherent breadth and depth of functionality were subpar…so much so that Microsoft ended up pulling the plug on Windows 10 and focusing ongoing development on the Windows 11 for Arm successor, which has proven far more robust.

Here’s another personal narrative related to this post’s primary topic coverage: last fall, I mentioned that I’d acquired two generations’ worth of successors to my long-used Surface Pro 5 hybrid:

  • A primary-plus-spare Surface Pro 7+, notably for backwards compatibility with my Kensington docking station
  • And the long-term transition destination, a pair of Surface Pro 8s
What I didn’t buy instead, although it was already available at the time, was the Surface Pro 9. That’s because I wanted my successor systems to be cellular data-capable, and the only Surface Pro 9 variants that supported this particular feature (albeit at a 5G cellular capability uptick compared to the LTE support in what I ended up getting instead) were Arm-based, with what I felt was insufficient upgrade differentiation from my existing Surface Pro X.

Flash forward to a bit more than two months ago, and Microsoft introduced the Surface Pro 10, along with the Surface Laptop 6. They’re both based on Intel Meteor Lake CPUs with integrated NPU (neural processing) cores, reflected in the dedicated Copilot key on each model’s keyboard. Copilot (introduced at last year’s Build), for those of you who don’t already know, is the OpenAI GPT-derived chatbot successor to Microsoft’s now-shuttered Cortana. But here’s an interesting thing, at least to me: the Surface Pro 10 and Surface Laptop 6 are both explicitly positioned as “For Business” devices, therefore sold exclusively to businesses and commercial customers, not available to consumers (at least through normal direct retail channels…note that I got my prior-generation SP7+ and SP8 “For Business” units via eBay resellers).

What about next-generation consumer models? The answer to that question chronologically catches us up to last week’s news. Microsoft’s new Surface Pro 11 (complete with a redesigned keyboard that can be used standalone and an optional OLED screen) and Surface Laptop 7, along with the newly unveiled systems from other Microsoft-partner OEMs, are exclusively Qualcomm Snapdragon X-based, which I suspect you’ll agree represents quite a sizeable bet (and gamble). They’re also labeled as being Copilot+ systems (an upgrade to the earlier Copilot nomenclature), reflective of the fact that Snapdragon X SoCs’ NPUs tout 40 TOPS (trillions of, or “tera”, operations per second) of performance. Intel’s Meteor Lake SoC, unveiled last September, is “only” capable of 10 TOPS, for example…which may explain why, last Monday, the very same day, Intel “coincidentally” released a sneak peek of its next-generation Lunar Lake architecture, also claimed Copilot+ NPU performance-capable and coming later this year.

Accompanying the new systems’ latest-generation Arm-based silicon foundations is a further evolution of their x86 code-on-Arm virtualization subsystem, which Microsoft has now branded Prism and is analogous to Apple’s Rosetta technology (the latter first used to run PowerPC binaries on Intel microprocessors, now for x86 binaries on Apple Silicon SoCs), along with other Arm-friendly Windows 11 replumbing. Stating the likely already obvious, Microsoft’s ramped-up Windows-on-Arm push is a seeming reaction to Apple’s systems’ notably improved power consumption/performance/form factor/etc. results subsequent to that company’s own earlier Arm-based embrace. To wit, Microsoft did an interesting half-step a bit more than a year ago when it officially sanctioned running Windows-for-Arm virtualized on Apple Silicon Macs.

Speaking of virtualization, I have no doubt, based both on track record and personal experience, that Prism is capable technology that will continue to improve going forward, since Microsoft has lengthy experience with numerous emulation and virtualization schemes such as:

  • Virtual PC, which enabled running x86-based Windows on PowerPC Macs, and
  • Windows Virtual PC (aka Windows XP Mode), for running Windows XP as a virtualized guest on a Windows 7 Host
  • The more recent, conceptually similar Windows Subsystem for Linux
  • And several generations’ worth of virtualization for prior-generation Xbox titles on newer-generation Xbox consoles, both based on instruction set-compatible and -incompatible CPUs.

To wit, I wonder how Prism is going to play out. Clearly, no matter how robust the emulation and virtualization support, its implementation will be inefficient in comparison to “native” applications. So, I’m assuming that Microsoft will encourage its developers to code in parallel for both the x86 and Arm versions of Windows, perhaps via an Apple-reminiscent dual-mode “Universal” scheme (in combination with “destination-tailored” downloads from online stores). But, supplier embarrassment and sensationalist press hypothesizing aside, I seriously doubt that Microsoft intends to turn its back on x86 in any big (or even little) way any time soon (in contrast to Apple’s abrupt change in course, which in no small part explains its success in motivating its developer community to rapidly embrace Apple Silicon). Developing for multiple CPU architectures and O/S version foundations requires incremental time, effort, and expense; if you’re an x86 Windows coder and Prism works passably, why expend the extra “lift”?

Further evidence of Apple being in Microsoft’s gunsights comes from the direct call-outs that company officials made last week, particularly against Apple’s MacBook Air. Such comparative assessments are a bit dubious, for at least a couple of reasons. First off, Microsoft neglected to openly reveal that both its and its OEM partners’ systems contain fans, whereas the MacBook Air is fanless; a comparison to the fan-equipped and otherwise more thermally robust MacBook Pro would be fairer. Plus, although initial comparative benchmarks are seemingly impressive, even against the latest-generation Apple M4 SoC, there’s also anecdotal evidence that Snapdragon X system firmware may sense that a benchmark is being run and allow the CPU to briefly exceed normal thermal spec limits. Any reality behind the comparative hype, in both an absolute and a relative sense, will come out once systems are in users’ hands, of course.

So why is Microsoft requiring a standalone NPU core, and specifically such a robust one, in processors that it allows to be branded as Copilot+? While the CPUs and GPUs already in systems are capable of handling various deep learning inference operations, they’re less efficient at doing so than a focused-function NPU, translating to both lower effective performance and higher energy consumption. Plus, running inference on a CPU or GPU steals cycles from other applications and operations that could otherwise use them, particularly those for which an NPU isn’t a relevant alternative. One visibly touted example is “Recall”, a newly added Windows 11 feature which, quoting from Microsoft’s website:

…uses Copilot+ PC advanced processing capabilities to take images of your active screen every few seconds. The snapshots are encrypted and saved on your PC’s hard drive. You can use Recall to locate the content you have viewed on your PC using search or on a timeline bar that allows you to scroll through your snapshots. Once you find the snapshot that you were looking for in Recall, it will be analyzed and offer you options to interact with the content.

Recall will also enable you to open the snapshot in the original application in which it was created, and, as Recall is refined over time, it will open the actual source document, website, or email in a screenshot. This functionality will be improved during Recall’s preview phase.

Copilot+ PC storage size determines the number of snapshots that Recall can take and store. The minimum hard drive space needed to run Recall is 256 GB, and 50 GB of space must be available. The default allocation for Recall on a device with 256 GB will be 25 GB, which can store approximately 3 months of snapshots. You can increase the storage allocation for Recall in your PC Settings. Old snapshots will be deleted once you use your allocated storage, allowing new ones to be stored.

Creepy? Seemingly, yes. But at least it runs completely (according to Microsoft, at least) on the edge computing device, with no “cloud” storage or other involvement, thus addressing privacy concerns.
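The storage figures Microsoft quotes above (25 GB holding roughly 3 months of snapshots) can be sanity-checked with some back-of-envelope arithmetic. Note that the snapshot cadence and daily active screen time below are my own illustrative assumptions, not numbers Microsoft has published:

```python
# Back-of-envelope check of Recall's quoted storage figures.
# The snapshot interval (one every 5 s) and active screen time
# (8 h/day) are illustrative assumptions, not Microsoft's numbers.

GIB = 1024 ** 3

snapshot_interval_s = 5
active_hours_per_day = 8
days = 90                      # "approximately 3 months"
allocation_bytes = 25 * GIB    # default allocation on a 256 GB drive

snapshots_per_day = active_hours_per_day * 3600 // snapshot_interval_s
total_snapshots = snapshots_per_day * days
avg_snapshot_bytes = allocation_bytes / total_snapshots

print(f"{total_snapshots:,} snapshots, ~{avg_snapshot_bytes / 1024:.0f} KiB each")
```

Under those assumptions, each stored snapshot works out to roughly 50 KiB, which suggests aggressive compression and/or deduplication relative to a raw screen capture.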

Here’s another example, admittedly a bit more “niche” but more compelling (IMHO) in exemplifying my earlier conceptual explanation. As I most recently discussed in my CES 2024 coverage, upscaling can decrease the GPU “horsepower” required to render a scene to the screen at a given resolution. Such an approach works credibly, however, only if it comes with no frame rate reduction, image artifacts, or other quality degradations. AI-based upscalers are particularly robust in this regard. And, as discussed and demonstrated at Build, Microsoft’s Automatic Super Resolution (ASR) algorithm runs on the Snapdragon X Elite NPU, leaving the (integrated!) GPU free to focus on its primary polygon and pixel rendering tasks.

That all said, at least one looming storm cloud threatens to rain on this Windows-on-Arm parade. A quick history lesson: NUVIA was a small startup founded in 2019 by ex-Apple and Google employees, in the former case coming from the team that developed the A-series SoCs used in Apple’s smartphones and other devices (and with a direct lineage to the M-series SoCs subsequently included in Apple Silicon-based Macs). Apple predictably sued NUVIA that same year for breach of contract and alleged poaching of employees, only to withdraw the lawsuit in early 2023…but that’s an aside, and anyway, I’m getting chronologically ahead of myself.

NUVIA used part of its investment funding to acquire an architecture license from Arm. A quote from a decade-plus-old writeup at SemiAccurate (along with additional reporting from AnandTech), which as far as I can tell remains accurate, explains (with typos fixed by yours truly):

On top of the pyramid is both the highest cost and lowest licensee count option…This one is called an architectural license, and you don’t actually get a core; instead, you get a set of specs for a core and a compatibility test suite. With all of the license tiers below it, you get a complete core or other product that you can plug into your design with varying degrees of effort, but you cannot change the design itself. If you license a Cortex-A15 you get exactly the same Cortex-A15 that the other licensees get. It may be built with very different surroundings and built on a different process, but the logic is the same. Architectural licensees conversely receive a set of specs and a testing suite that they have to pass; the rest is up to them. If they want to make a processor that is faster, slower, more efficient, smaller, or anything else than the one Arm supplies, this is the license they need to get.

Said more concisely, architecture-licensed cores need to fully support a given Arm instruction set generation, but how they implement that instruction set support is completely up to the developer. Cores like those now found in Snapdragon X were already under development under NUVIA’s architecture license when Qualcomm acquired the company for $1.4B in early 2021. And ironically, at the time of the NUVIA acquisition, Qualcomm already had its own Arm architecture license, which it was using to develop its own Kryo-branded cores.

Nevertheless, Arm filed a lawsuit against Qualcomm in late summer 2022. Per coverage at the time from The Register (here’s a more recent follow-up writeup from the same source):

Arm has accused Qualcomm of being in breach of its licenses, and wants the American giant to fulfill its obligations under those agreements, such as destroying its Nuvia CPU designs, plus cough up compensation…

According to Arm…the licenses it granted Nuvia could not be transferred to and used by its new parent Qualcomm without Arm’s permission. Arm says Qualcomm did not, even after months of negotiations, obtain this consent, and that Qualcomm appeared to be focused on putting Nuvia’s custom CPU designs into its own line of chips without permission.

That led to Arm terminating its licenses with Nuvia in early 2022, requiring Qualcomm to destroy and stop using Nuvia’s designs derived from those agreements. It’s claimed that Qualcomm’s top lawyer wrote to Arm confirming it would abide by the termination.

However, says Arm, it appeared from subsequent press reports that Qualcomm may not have destroyed the core designs and still intended to use the blueprints and technology it acquired with Nuvia for its personal device and server chips, allegedly in a breach of contract with Arm…

Arm says individual licenses are specific to individual licensees and their use cases and situations, and can’t be automatically transferred without Arm’s consent.

According to people familiar with the matter, Nuvia was on a higher royalty rate to Arm than Qualcomm, and that Qualcomm hoped to use Nuvia’s technology on its lower rate rather than pay the higher rate. It’s said that Arm wasn’t happy about that, and wanted Qualcomm to pay more to use those blueprints it helped Nuvia develop.

Qualcomm should have negotiated a royalty rate with Arm for the Nuvia tech, and obtained permission to use Nuvia’s CPU core designs in its range of chips, and failed to do so, it is alleged, and is now being sued.

As I write these words, the lawsuit is still active. When will it be resolved, and how? Who knows? All I can say with some degree of certainty, likely stating the obvious in the process, is:

  • Qualcomm is highly motivated for Snapdragon X to succeed, for a variety of reasons
  • Arm is equally motivated for not only Snapdragon X but also other rumored under-development Windows-on-Arm SoCs to succeed (NVIDIA, for example, is one obvious rumored candidate, given both its past history in this particular space and its existing Arm-based SoCs for servers, as is its public partner MediaTek)
  • And their common partner Microsoft is also equally motivated for Arm-based Copilot+ systems (with Qualcomm the lead example) to succeed.

In closing, a couple of other silicon-related comments:

And with that, and closing in on 3,000 words, I’m going to wrap up for today. Let me know your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Microsoft’s Build 2024: Silicon and associated systems come to the fore appeared first on EDN.

Infineon announces next generation CoolGaN Transistor families built on 8-inch foundry processes

ELE Times - Wed, 05/29/2024 - 13:47

Infineon Technologies AG today announces two new generations of high voltage (HV) and medium voltage (MV) CoolGaN devices which now enable customers to use Gallium Nitride (GaN) in voltage classes from 40 V to 700 V in a broader array of applications that help drive digitalization and decarbonization. These two product families are manufactured on high-performance 8-inch in-house foundry processes in Kulim (Malaysia) and Villach (Austria). With this, Infineon expands its CoolGaN advantages and capacity to ensure a robust supply chain in the GaN devices market, which is estimated to grow at a compound annual growth rate (CAGR) of 46 percent over the next five years, according to Yole Group.
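A 46 percent CAGR compounds quickly; a short sketch of the total market growth it implies (the 46 percent figure is from the article; the rest is straightforward compounding):

```python
def compounded_growth(cagr: float, years: int) -> float:
    """Total growth multiple implied by a compound annual growth rate."""
    return (1 + cagr) ** years

# Yole Group's cited 46% CAGR over five years
multiple = compounded_growth(0.46, 5)
print(f"A 46% CAGR over 5 years implies roughly a {multiple:.1f}x larger market")
```

In other words, if the estimate holds, the GaN devices market would be more than six times its current size within five years.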

“Today’s announcement builds nicely on our acquisition of GaN Systems last year and brings to market a whole new level of efficiency and performance for our customers,” said Adam White, Division President of Power & Sensor Systems at Infineon. “The new generations of our Infineon CoolGaN family in high and medium voltage demonstrate our product advantages and are manufactured entirely on 8 inch, demonstrating the fast scalability of GaN to larger wafer diameters. I am excited to see all of the disruptive applications our customers unleash with these new generations of GaN.”

The new 650 V G5 family addresses applications in consumer, data center, industrial, and solar markets. These products are the next generation of GIT-based high voltage products from Infineon. The second new family manufactured on the 8-inch process is the medium voltage G3 devices, which include CoolGaN transistor voltage classes of 60 V, 80 V, 100 V, and 120 V, plus 40 V bidirectional switch (BDS) devices. The medium voltage G3 products are targeted at motor drive, telecom, data center, solar, and consumer applications.

Availability

The CoolGaN 650 V G5 will be available in Q4 2024 and the medium voltage CoolGaN G3 will be available in Q3 2024. Samples are available now. More information is available here.

Infineon at the PCIM Europe 2024

PCIM Europe will take place in Nuremberg, Germany, from 11 to 13 June 2024. Infineon will present its products and solutions for decarbonization and digitalization in hall 7, booths #470 and #169. Company representatives will also be giving several presentations at the accompanying PCIM Conference and Forums, followed by discussions with the speakers. If you are interested in interviewing an expert at the show, please email media.relations@infineon.com. Industry analysts interested in a briefing can email MarketResearch.Relations@infineon.com. Information about Infineon’s PCIM 2024 show highlights is available at www.infineon.com/pcim.

The post Infineon announces next generation CoolGaN Transistor families built on 8-inch foundry processes appeared first on ELE Times.
