News from the world of micro- and nanoelectronics

Pitfalls of mixing formal and simulation: How to stay out of trouble

EDN Network - Wed, 06/08/2022 - 10:07

Driven by the need to objectively measure the progress of their verification efforts and the contributions of different verification techniques, IC designers have adopted coverage as a metric. However, what exactly is being measured is different depending on the underlying verification technology in use.

In Part 1, we outlined the textbook definitions of simulation and formal coverage, and then we briefly introduced the risks inherent in arbitrarily merging these metrics together. In Part 2, we shared a series of RTL DUT code examples to illustrate the trouble that can occur when comparing the results from simulation and formal side-by-side.

In this article, we will demonstrate how even properly merged 100% code and functional coverage results can still tempt you to prematurely conclude that your verification effort is over and it’s safe to declare victory. The good news is that mutation analysis can expose the gaps that remain in both your testbench and your verification plan itself. Plus, we will summarize recommendations for properly using simulation and formal coverage together.

Full code coverage achieved—but there is still a bug

The example finite state machine (FSM) and supporting logic in Figure 1 shows a trap that many fall into—assuming that 100% code coverage means that they are done with verification. This assumption is based on a misconception of the reason we measure code coverage—the sole value of code coverage is in pointing out areas of the DUT which haven’t been verified, regardless of which engine is being used.

Figure 1 Here is a view of 100% code coverage of the FSM from simulation (left) and formal (right). But could there still be a bug in the DUT? Source: Siemens EDA

In this example, the two outputs are mutually exclusive (mutex), and a property has been written that passes in simulation and is proven in formal. In this verification scenario, the above results from simulation and formal show full coverage on the mutex check for the “out1” and “out2” signals.
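The article does not reproduce the property itself, but a mutual-exclusion check of this kind is typically a one-line SVA assertion. As a rough sketch, reusing the clk and rstn names from the assertion shown further below (the exact signal names are an assumption about the DUT), it might look like:

a_out_mutex: assert property (@(posedge clk) disable iff (!rstn)
                              !(out1 && out2));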

Despite these promising results, there are two important points to consider. One is the importance of a complete test plan. Imagine there is a requirement that “the FSM may remain in state 2 for no more than 3 cycles”. In this case, all the code is traversed, but the functional behavior is incorrect. If this requirement was not part of the test plan, then the above coverage reports would fool the user into believing that verification was complete while the bug slipped through. However, if the requirement made it into the test plan, and was in turn captured in the following assertion, then it would be properly tested:

a_st2_3_max: assert property (@(posedge clk) disable iff (!rstn)
                                        $rose(st2) |-> ##[1:3] !st2 );
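                                        // In words: once st2 rises (the FSM enters state 2), st2 must
                                        // deassert within 1 to 3 clocks, capping any stay in state 2 at 3 cycles.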

In this scenario, simulation could completely miss this functional bug if no test covers the case; in fact, that is exactly what happened for the vectors run to produce the above coverage, and the assertion simply passed. Exhaustive formal analysis, however, would have spotted the functional error and reported it to the user as a counter-example.

The larger point of this example is a cautionary note: coverage is mainly for seeing what isn’t covered, not a target to be hit at all costs. Even if you have 100% code coverage, you may still have bugs that other analyses and coverage metrics will reveal.

Now let’s go a little further and see how model-based mutation coverage can help in revealing such gaps in a verification environment. Mutation coverage is generated by the verification tool systematically injecting mutations (faults) into a formal model of the DUT, and then checking whether any assertion would detect a bug at the corresponding fault point. The mutations are made only in the mutation tool’s memory, while the original source code is untouched. In effect, mutation coverage measures the ability of your testbench to find bugs; if your verification environment can’t find a bug that has been deliberately created in the form of an injected fault, it is unlikely to find the bugs you have not imagined.
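As a purely hypothetical illustration of what one injected mutation might look like, consider a simplified fragment of the FSM’s state-2 exit logic. The module, state encoding, signal names, and counter below are assumptions made for this sketch, not the actual DUT of Figure 1:

// Hypothetical, simplified fragment for illustration only.
module fsm_frag (
  input  logic clk, rstn, start,
  output logic st2
);
  typedef enum logic [1:0] {ST_IDLE, ST_2} state_t;
  state_t     state;
  logic [1:0] cnt;

  always_ff @(posedge clk or negedge rstn) begin
    if (!rstn) begin
      state <= ST_IDLE;
      cnt   <= '0;
    end else begin
      case (state)
        ST_IDLE: if (start) begin
                   state <= ST_2;
                   cnt   <= '0;
                 end
        // Correct design: leave state 2 after three cycles (cnt = 0, 1, 2).
        // A mutation tool might replace 2'd2 with 2'd3 here, stretching the
        // stay in state 2 to four cycles.
        ST_2:    if (cnt == 2'd2) state <= ST_IDLE;
                 else             cnt   <= cnt + 2'd1;
        default:                  state <= ST_IDLE;
      endcase
    end
  end

  assign st2 = (state == ST_2);
endmodule

Code coverage and the mutex check are both blind to such a change; only a check tied to the three-cycle requirement, such as a_st2_3_max, detects it, so its absence from the test plan shows up as an undetected mutation in the coverage report.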

As shown in Figure 2, running the coverage analysis with mutations indicates precisely where a requirement is left uncovered, and thus what must be added to the test plan in order to make it complete.

Figure 2 The model-based mutation coverage reveals the missing requirement and related test from the test plan. Source: Siemens EDA

Recap: Using both formal and simulation code coverage together

Verification teams use coverage data from three types of engines: formal, simulation, and emulation. All three contribute to test plan signoff as well as to overall coverage closure. Focusing on code coverage, the following table outlines some of the differences between formal and simulation code coverage:

Table 1 The data highlights the differences between formal and simulation code coverage. Source: Siemens EDA

The primary concern here is that mixing code coverage from multiple verification engines can let coverage from one engine mask holes in another engine’s testbench. There are other subtle differences that make merging code coverage from different engines difficult. The following recommendations may help.

Recommendations

The formal and simulation coverage flows are flexible, which allows you to make use of them in whatever capacity best fits your needs. Below are a few things to consider when using code coverage from any of the verification engines:

  • Ideally, close code coverage for each verification engine separately.

      ◦ Focus on improving testbench completeness and robustness in each domain.

      ◦ Keep code coverage data separate in the main coverage database for this purpose.

      ◦ Additionally, in the formal domain, run both proof core (formal coverage based on the logic needed to prove a property) and mutation coverage to check testbench completeness.

  • Use test planning to determine which verification engine will verify which parts of the design.

      ◦ Formal may fully verify certain modules. When mixing formal and simulation code coverage, try to keep the split at instance boundaries, so that coverage collection for a given instance can be enabled or disabled in one domain versus the other.

      ◦ When mixing code coverage, plan for it in the test plan and do the actual merge late in the game.

  • Know where your code coverage comes from: formal versus simulation.

      ◦ Keep code coverage data from each domain separate in the main coverage database.

      ◦ Reporting must also make it clear where the coverage data came from.

      ◦ Only merge coverage data for final reports near the end of the project.

  • Avoid writing properties or adding vectors that only trivially hit some specific part of the design just to reach the 100% coverage mark.

      ◦ Doing so doesn’t typically improve verification; it only trivially improves coverage.

      ◦ This is why the test plan is important: coverage holes point to an incomplete test plan and, ultimately, an incomplete or weak testbench.

      ◦ When adding properties or vectors to close code coverage holes, the features and requirements of the design are your guide and will give you the highest quality of verification (a sketch of a requirement-driven cover property follows this list).
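For instance, returning to the state-2 requirement from the FSM example, a requirement-driven functional cover point might confirm that the corner case of state 2 being occupied for its full three-cycle budget is actually exercised. This is only a sketch that reuses the clk, rstn, and st2 names from the earlier assertion:

c_st2_full_3_cycles: cover property (@(posedge clk) disable iff (!rstn)
                                     $rose(st2) ##1 st2[*2] ##1 !st2);

Hitting this cover point shows that the requirement’s boundary condition was exercised, rather than merely toggling another line of code.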

Naturally, code coverage from various verification engines will be used to make decisions regarding meeting project milestones. The verification team and project leaders are ultimately responsible for their results. When required to mix coverage for a given block from multiple verification engines, it is recommended to hold a peer review of how the code coverage was generated to make sure valid tests tied to the test plan are used.

Proper merger of results

With proper understanding of the nature of the coverage metrics being recorded, output from all sources can be combined to give a holistic picture of the progress and quality of the verification effort. In this series, we focused on the most common element of this need—properly merging the results from formal analysis with the coverage from a constrained-random simulation testbench such that individual contributors and team leaders will understand exactly what the data is telling them.

Editor’s Note: This is the last article of the three-part series on the pitfalls of mixing formal and simulation coverage. Part 1 outlined the risks of arbitrarily merging simulation and formal coverage. Part 2 shared RTL DUT code examples to illustrate the trouble of comparing the results from simulation and formal side-by-side.

Mark Eslinger is a product engineer in the IC Verification Systems division of Siemens EDA, where he specializes in assertion-based methods and formal verification.

 

Joe Hupcey III is product marketing manager for Siemens EDA’s Design & Verification Technologies formal product line of automated applications and advanced property checking.

 

Nicolae Tusinschi is a product manager for formal verification solutions at Siemens EDA.

 

 


Qorvo launches 20W GaN-on-SiC MMIC power amplifier for satcoms

Semiconductor today - Tue, 06/07/2022 - 22:27
Qorvo Inc of Greensboro, NC, USA (which provides core technologies and RF solutions for mobile, infrastructure and defense applications) has launched a 20W GaN-on-SiC (gallium nitride on silicon carbide) power amplifier (PA) for defense and commercial satellite applications, including low Earth orbit (LEO) constellations...

Obsolescence by design: The earbuds edition

EDN Network - Tue, 06/07/2022 - 21:09

For well over a decade now, I’ve suffered from right side-dominant back pain. Until the past couple of years, however, the situation was reasonably tolerable. A few times a year, I’d experience intense spasms for a few days straight, which not even prescription painkillers and muscle relaxants would tangibly alleviate. Then the pain would fade away, the spasms would release their grip, and I’d be back to running, hiking, skiing and the like…at least until the next spinal outburst.

Beginning in the fall of 2019, however, the situation “evolved.” Now my pain became (generally) lower grade but more constant, i.e., chronic. The spasms were now happening pretty much every day, for short durations but multiple times a day, especially if I was in motion at the time. Eventually, the pain started moving down my right leg, too. Then COVID hit, and I couldn’t even get into a doctor to diagnose and treat whatever was going on.

Once pandemic restrictions began lifting mid-last year, I started pursuing root causes and solutions in earnest. The first step was a trip to the general practitioner, who took X-rays and pointed out that the cartilage surrounding my L2 and L3 vertebrae was in an advanced state of degradation. Genetics? Perhaps, at least in part. But I suspect the core reason had to do with all that long-distance running I’d been consistently doing the past several decades, including numerous marathons and an ultra (not to mention the preparatory training for them), along with lots of heavy-load backpacking and trekking and, beginning in my early 40s, skiing. I was paying the price for all that adrenaline-fueled past fun, no matter that I’d strived to change out running and hiking shoes religiously, kept my body fairly light in weight, etc.

Physical therapy wasn’t helpful for relieving the pain, although I continue doing it to slow further progression. Next, I was redirected to a specialist, who sent me for an MRI, which confirmed and further magnified the spinal situation that my earlier X-rays had alluded to. At this point, insurance considerations kicked in: first, I underwent several comparatively low-cost medial branch block procedures (essentially lidocaine injections into the affected areas), whose relief lasted only a day (or less) but which indicated that we were headed in the right direction, diagnosis-wise. Only then, months after my initial meeting with the specialist, was I cleared for a two-part radio-frequency ablation (RFA) procedure, beginning with my right side. Essentially, the RFA harnesses high-frequency heat, pinpoint-injected via needles, to cauterize the nerves going to and from the vertebral facet joints, thereby temporarily severing the pain-signal pathways (eventually the nerves will reconnect, and I’ll need to repeat the procedure).

What’s this all have to do with obsolescence by design, a longstanding coverage topic of mine? Well, shortly before my chronic pain phase started, my wife had bought me a pair of Beats (now owned by Apple) Powerbeats Pro earbuds for our wedding anniversary (mine are black, like the ones shown below, although they come in a variety of color combinations):

Compare them to the various makes and models of earbuds that I showcased in a recent piece, and the difference with these will be immediately apparent: not only do they snugly fit within the ear, they also include loops that fit around each ear. The point of the loops is likely also immediately apparent: they keep the earbuds in place when the wearer is moving around, such as when exercising. The Powerbeats Pro is highly rated, and I used them a few times, but when my back pain “evolved” and I had to shelve my running, I put them on the shelf, too.

Fast forward to late March of this year. The left-side RFA was completed at the end of February (after the right-side RFA had previously been attempted twice, the second time successfully under sedation after I experienced uncontrollable muscle spasms and intense pain the first time around…but that’s another story for another day…) and I was starting to experience still-incomplete (as I write these words in early May) but nonetheless blessed relief. Spring had sprung, as the saying goes, and my thoughts predictably wandered to a potential resurrection of my running and other exercise routines. So, I plugged the Powerbeats Pros (in their recharging case, shown below) into the charger, and…nothing.

Well, strictly speaking, not nothing; the battery inside the case seemed to recharge just fine. But the earbuds themselves appeared to be dead as a doornail (or if you prefer, a parrot)…and of course the two-year extended warranty she’d purchased with the earbuds had expired around six months earlier. It was difficult to discern exactly what was going on, however, as the earbuds themselves don’t contain charge-status LEDs, only the case. More generally, they’re (as with many earbuds brands and models) heavily reliant on the companion case, which not only coordinates their charging process but also puts them into Bluetooth pairing and hardware-reset modes (initiated by variable-duration user presses of a control button within the case).

To wit, even with the case itself seemingly fully charged, my multimeter didn’t read a DC voltage on its charging pins that magnetically couple (sometimes, at least) to the earbuds when they’re placed inside it. In striving to debug what was going on, I even purchased a third-party charging case:

It did present 5V DC to the multimeter once the case was charged up, but although the control switch shown in the photo purported to put the earbuds in pairing mode, it (unlike the one in the Beats case) didn’t also implement hardware-reset functionality (hold that thought). And anyway, it didn’t resurrect the earbuds either.

So, what was the root cause of the system failure here? Was the Beats case not actually charging, either at all or adequately, although it seemed to be? Was it not passing along its stored-electron payload to the earbuds? Or were the earbuds ignoring its charging attention? Ultimately, I had to get my hands on another (self-purchased) Powerbeats Pro set to get the story straight. Both the old and new (to me; refurb’d, actually) cases charged up the new earbuds just fine. But neither case resurrected the old earbuds. The old buds were the culprit.

Here’s what I think happened. Beats’ implementation of the case-and-earbuds interaction is more robust than the one supported by the third-party case. DC charging over the two-pin interface between the Beats case and each earbud doesn’t begin until an “AC” handshake between the two over that same two-pin interface (which, among other things, can also command-signal the earbuds to hardware-reset themselves) successfully completes. But the handshake can’t happen if the earbuds are non-functional, such as if their embedded batteries are deep-drained (long-time readers may remember an analogous situation I wrote about regarding a Qi-supportive battery case). Which is what I think happened in this circumstance.

Ironically, what had specifically gotten me to reach for my dust-covered Powerbeats Pro set a few months ago was an article I saw in The Wirecutter about earbuds’ inherent impermanence and how to properly care for them to maximize their usable life. So let me get this straight. My wife buys me a $250 set of earbuds (which were on sale for $200 at the time, but still…). If I keep them hooked up to the charger all the time when I’m not using them, their embedded batteries will eventually swell and fail. If I don’t keep them hooked up, conversely, their batteries will eventually deep-drain and again irrevocably fail. Admittedly, Apple and its competitors have learned a bit about battery maintenance from experience over the years:

Optimized Battery Charging is designed to reduce the wear on your battery and improve its lifespan by reducing the time that your AirPods Pro and AirPods (3rd generation) spend fully charged. AirPods Pro, AirPods (3rd generation), and your iPhone, iPad, or iPod touch learn from your daily charging routine and will wait to charge your AirPods Pro or AirPods (3rd generation) past 80% until just before you need to use them.

Still, this fundamental Achilles heel is, as I alluded to in the title of this piece, a profound obsolescence by design disappointment.

So where do I go from here? Well, in searching for information on teardowns (specifically, do-it-yourself repairs) and the like, I came across this video:

The video’s creator, Joe’s Gaming and Electronics, is (like iFixit) a supplier of spare parts for DIY repair projects, along with videos and other instruction guides. Unlike iFixit, however, the company also does in-house repairs of various electronics devices, albeit unfortunately no longer the Powerbeats Pro. However, I did obtain from them two brand new replacement batteries, along with some glue. My soldering skills (not to mention my patience) are, as I’ve admitted before, abhorrent, so I’ve recruited a local engineering services company, HWI (Halleck-Willard, Inc.), to tackle the disassembly, battery swap and reassembly tasks. Minimally, I’ll get an intriguing teardown out of this, and I might even end up with a fully functional (now second) set of buds.

Stay tuned. And as always, I welcome your thoughts on Beats’ frustrating design decision, as well as details on any alternative implementations that may exist, in the comments.

Brian Dipert is Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


POET appoints Arista Networks’ chief procurement director as board advisor

Semiconductor today - Tue, 06/07/2022 - 18:34
POET Technologies Inc of Toronto, Ontario, Canada — a designer and developer of the POET Optical Interposer and photonic integrated circuits (PICs) for the data-center and telecom markets — has appointed Theresa Lan Ende, currently chief procurement director of Arista Networks, as an advisor to its board of directors. The firm also intends to nominate her as a director at its Annual General Meeting scheduled for 6 October...

LED package market grew 15.4% to $17.65bn in 2021

Semiconductor today - Tue, 06/07/2022 - 16:41
The global LED market beat market expectations in 2021, growing 15.4% year-on-year to $17.65bn as a slowing pandemic drove the recovery of various global economic activities, according to TrendForce’s latest LED report...

OpenLight unveils open silicon photonics platform with integrated lasers

Semiconductor today - Tue, 06/07/2022 - 12:56
Addressing the growing silicon photonics market requirements for improved performance, power efficiency and reliability, newly launched company OpenLight of Santa Barbara, CA, USA has introduced what it claims is the first open silicon photonics platform with integrated lasers...

A primer on security of key stages in OEM manufacturing lifecycle

EDN Network - Tue, 06/07/2022 - 12:45

As explained in the first article of this series, the last two stages of the IC lifecycle, board assembly and board test, are owned and controlled by the original equipment manufacturer (OEM).

Figure 1 OEMs are responsible for securing the final two stages of the IC lifecycle: board assembly and board test. Source: Silicon Labs

While there are fewer OEM stages in a product’s lifecycle than there are during IC production, the security risks in each of these stages are similar to the challenges faced by silicon vendors and are equally consequential. The good news is that OEMs can build upon the security foundations established by their silicon vendors and reuse many of the same techniques.

  1. Board assembly

Board assembly is much like the package assembly step in IC production; however, instead of putting a die inside a package, packages are mounted to a printed circuit board (PCB), which is then typically installed in some sort of enclosure. Physical and network security at the board assembly site is the first line of defense against attacks; however, this can vary widely from contractor to contractor and tends to be poor due to cost considerations and the nature of board testing.

Figure 2 Board assembly is quite similar to package assembly in the IC production stage. Source: Silicon Labs

The most significant threats at this stage are theft, device analysis, and modification. Mitigation for these threats is described below.

Theft

Device theft for the purpose of resale as legitimate devices is the primary concern at this step. As with the package test stage in IC production, theft of any significant quantity is easily detectable by comparing the incoming and outgoing inventory of the board assembly site.

The biggest risk for OEMs at this stage is an attacker obtaining a significant number of devices, modifying them, and then introducing the modified products to end-users. If the silicon vendor offers custom programming, an OEM can greatly mitigate this risk by ordering parts with secure boot enabled and configured. Secure boot will cause the IC to reject any modified software the attacker attempts to program.

Device analysis

The potential for an attacker to obtain systems for analysis is greatly reduced in this step compared to the package assembly stage during IC production. Boards at this step typically don’t contain any useful information to be analyzed. If an attacker is interested in analyzing the hardware construction, they can easily obtain samples by buying the device. In addition, because the device has not yet been programmed, obtaining a device in this way does not afford the attacker the ability to access and analyze any device-specific software.

Hardware modification

Covert modification of a PCB at scale is hard to achieve given the ease of detecting such modifications. OEMs can implement a simple sampling test in a trusted environment that visually inspects boards and compares them to known good samples to detect changes. Attacks which attempt to modify only a specific set of boards will evade such testing but are also harder to coordinate and implement.

  2. Board test

The board test stage presents threats similar to those for package test during IC production. For example, it’s common for test systems to be shared among multiple vendors, increasing the risk of security breaches or attacks from bad actors. However, OEMs tend to have an even greater diversity of vendors than those for IC manufacturing at this step, which makes board test even more difficult to secure than package test.

Figure 3 Again, board test is quite similar to package test in the IC production stage. Source: Silicon Labs

Board test generally has poor physical and network security. It’s extremely common to share space and test hosts between products, and test systems may not be kept patched. Finally, the risk of exposing confidential data at final test depends on the implementation of the product and its final test process. If an IC has sufficient security capabilities, then a final test architecture that completely protects data from bad actors in the test environment is possible. Unfortunately, that topic is too complex to dig into in this article.

Malicious code injection

The simplest method of attack at board test is to modify the device’s software. Because secure boot enabling and application programming take place in the same board test step, there is concern that an attacker gaining full control of the test could inject malicious code and leave secure boot disabled. This risk can be mitigated by implementing sample testing or a dual insertion test flow.

In addition, if custom programming is available, then having the silicon vendor configure and enable secure boot will create a robust defense against malicious code injection. When a programming service is used in this way, it’s still important that board test verify that secure boot is correctly configured and enabled. Working together, the silicon vendor’s programming step and the OEM’s board test can validate each other such that an attacker would need to compromise both steps to alter the silicon vendor or OEM code.

It’s important to note that the strength of secure boot is reliant on keeping the private key a secret. It is highly recommended that signing keys be generated in a secure key store such as a hardware security module (HSM) and never exported. In addition, the ability to sign with keys should be highly restricted and ideally require authentication from at least two individuals to ensure that no individual actor can sign a malicious image.

Identity extraction

Since it is common for OEMs to inject credentials—cryptographic keys and certificates—in board test, attackers may seek to gain access to credentials or the key material they are based upon.

Secure provisioning of identity credentials has proven to be a particularly complex and nuanced problem. It involves the device’s capabilities, the contractor’s physical and network security, and the provisioning method’s design. It also presents unique challenges due to the scale and cost of manufacturing. In addition, as with all security, there is no way to confirm you haven’t missed some flaw in the system. Providing devices with identities is easy. Providing them with robust secure identities at an acceptable cost and enormous scale is incredibly difficult.

In well-designed systems where private keys never leave secure key storage, gaining access to key material needed to forge credentials should not be possible. For example, in the implementation used by Silicon Labs, the private key used to generate device certificates is stored in a Trusted Platform Module (TPM) on a PC that is hardened to physical and logical attack and located in an access-restricted cage in the site’s data center. Further, those keys are restricted in usage, applying to only a single production lot, and in time, existing only for the few days during which that lot is tested and being deleted once the lot is complete. Finally, if such a key is compromised, the devices manufactured under that key can have their credentials revoked, indicating they should no longer be trusted.

Similarly, all devices that support secure key storage generate their private keys on-board, and those keys are never able to leave the secure key store. Devices that do not support secure key storage must have their keys injected and will be more vulnerable to an attacker on the test infrastructure accessing their private keys. To prevent certificates for low-security devices from being passed off as credentials for high-security devices, all certificates generated in manufacturing have data indicating the strength of storage used for their private keys.

OEMs should use test systems which are hardened against modification and restrict physical access. Physical security should be reviewed, and standard access controls and logging maintained. Finally, standard security practices for networks and PCs should be maintained. For example, test systems should not be allowed to have direct Internet connections and should not use communal login credentials. Periodic reviews should be conducted to ensure that any changes to these processes are noticed and reviewed.

These standard actions can prevent an attacker from gaining access to a test system in the first place. In addition to these practices, OEMs can consign to contract manufacturers (CMs) dedicated testers that won’t be shared with other vendors, further increasing physical and network security. Those systems can also be put through penetration testing to identify and fix vulnerabilities before they can be exploited.

Finally, higher-level keys stored in the OEM’s IT infrastructure need to be handled appropriately. They should be stored in a secure key store and have appropriate access restrictions. Their use should be monitored so that any unexpected operations can be identified, and the appropriate staff alerted.

For OEMs that don’t wish to set up their own credential provisioning infrastructure, there are silicon vendors that offer secure programming services. For example, Silicon Labs provides credentials in its catalog Vault-High products and can program credentials onto any custom parts ordered through its custom part manufacturing service (CPMS). These services transfer this burden from board test to the programming step performed by the silicon vendor.

Extraction of confidential information

When confidential information such as keys or proprietary algorithms is programmed as part of board test, there is a risk an attacker will obtain this information by compromising the tester. All the recommendations made for hardening the board test stage against identity extraction apply here as well. Similarly, using a programming service can transfer this risk from the board test stage to the package test stage.

With the right set of security features, it is possible to provision confidential information and protect it even if test systems are compromised. This provisioning requires a central, secured machine, as discussed above, and a device with a secure engine that can attest to the device’s state in a way that is outside the influence of the test system and is verifiable by the central machine. The board test will program the IC, enable secure boot, and lock the device.

The device then will attest to its state. If the tester was compromised and did not do what it was supposed to do, the central machine will detect it in the attested information. When the central machine knows the device is configured properly, it can perform a key exchange with the known-good application and then send the confidential information over that secured link. This process prevents the test system from being able to see or alter the confidential information.

End-product security requires OEM diligence

When it comes to securing an end-product, OEMs face many of the same challenges as silicon vendors. While a well-designed product and robust physical and network security are the first layers of defense, OEMs can prevent the bulk of security attacks against their end-products by following many of the same steps and procedures practiced by their silicon vendors.

In addition, many silicon vendors provide services and capabilities that OEMs can use to reduce the effort and complexity of securing their manufacturing environment. Putting these techniques in place today will help ensure security for all their connected devices and the ecosystems in which they participate. Together, silicon vendors and OEMs can deliver on the promise of a secure Internet of Things (IoT).

Joshua Norem is a senior systems engineer at Silicon Labs.

Editor’s Note: This is the second and final part of the article series on OEM-specific security risks. Part 1 identified the threats at each step of the IC production lifecycle and described how to mitigate them.


GaN controller ICs optimize charger designs

EDN Network - Mon, 06/06/2022 - 19:46

Three GaN primary-side flyback controllers from startup Elevation Semiconductor enable efficient and compact 20-W to 65-W battery charger solutions. The HL9550, HL9552, and HL9554 each integrate a 650-V GaN power FET and offer wide VDD operating ranges up to 80 V without an additional clamp circuit for USB PD applications.

Intended for switched-mode power supplies, the controller ICs support quasi-resonant (QR), discontinuous conduction mode (DCM), continuous conduction mode (CCM) and multiple frequency hybrid modes of operation. The devices provide overvoltage, undervoltage, overtemperature, and brownout protection, as well as an externally triggered shutdown function for safety.

Elevation also announced the HL9701 synchronous rectifier (SR) controller for switched-mode power supplies dedicated to the secondary side of flyback converters. The device drives the SR MOSFET and is compatible with high-side and low-side applications for QR, DCM, and CCM operations. It has a wide input voltage range up to 28 V and is optimized for SR gate turn-off threshold control. The HL9701 comes in a 6-pin SOT-23 package.

Full datasheets for these products are available upon request by contacting sales@elevationsemiconductor.com.

HL9550 product page

HL9552 product page

HL9554 product page

HL9701 product page

Elevation Semiconductor 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


EPC launches three-phase BLDC motor drive inverter

Semiconductor today - Mon, 06/06/2022 - 19:17
Efficient Power Conversion Corp (EPC) of El Segundo, CA, USA – which makes enhancement-mode gallium nitride on silicon (eGaN) power field-effect transistors (FETs) and integrated circuits for power management applications – has announced the availability of the EPC9173, a three-phase BLDC (brushless DC) motor drive inverter using the EPC23101 eGaN IC with embedded gate driver function and a floating power GaN FET with 3.3mΩ RDS(on)...

Transphorm receives $16m from existing investors’ exercise of green shoe

Semiconductor today - Mon, 06/06/2022 - 13:38
Transphorm Inc of Goleta, near Santa Barbara, CA, USA — which designs and manufactures JEDEC- and AEC-Q101-qualified gallium nitride (GaN) field-effect transistors (FETs) for high-voltage power conversion — has received gross proceeds of $16m as a result of the exercise of the ‘green shoe’ (over-allotment option) associated with the firm’s private placement completed in November 2021...

Wolfspeed’s Rick Madormo succeeds Thomas Wessel as senior VP of sales & marketing

Semiconductor today - Mon, 06/06/2022 - 12:42
Wolfspeed Inc of Durham, NC, USA – which makes silicon carbide materials as well as silicon carbide (SiC) and gallium nitride (GaN) power-switching & RF semiconductor devices – has promoted Rick Madormo to senior VP of sales & marketing, succeeding Thomas Wessel, who is retiring at the end of June. In anticipation of Madormo’s promotion, Wolfspeed has hired Owen DeLeon as its new VP of sales for the Americas...
