News from the world of micro- and nanoelectronics
Infineon Top-Side Cooling Packages Registered as JEDEC Standard for High-Power Applications
The trends toward higher power density and cost optimization dominate the development goals of efficient high-power applications that create substantial value for segments such as electromobility. To push these boundaries, Infineon Technologies AG announced it has successfully registered its QDPAK and DDPAK top-side cooling (TSC) packages, which are ideal for high-voltage MOSFETs, as a JEDEC standard.
This registration further solidifies Infineon’s goal to help establish broad adoption of TSC in new designs with one standard package design and footprint. Additionally, it gives OEMs the flexibility and confidence to differentiate their products in the market and take power density to the next level to support various applications.
“As a solutions provider, Infineon continues to influence the semiconductor industry through innovative packaging technologies and manufacturing processes,” said Ralf Otremba, Lead Principal Engineer for High Voltage Packaging, Infineon. “Our advanced top-side cooled packages bring significant advantages to the device and system levels to fulfill the challenging demands of cutting-edge high-power designs. Package outline standardization will help ease one of the main design concerns of OEMs for high-voltage applications by securing pin-to-pin compatibility across vendors.”
For more than 50 years, the JEDEC organization has been the global leader in developing open standards and publications for the microelectronics industry across a broad range of technologies, including package outlines. JEDEC has widely standardized semiconductor packages such as the TO220 and TO247 through-hole devices (THD), which have been prominently used over the past decades and are still an option in new onboard charger (OBC) designs and in high-voltage (HV) and low-voltage (LV) DC-DC converters.
To facilitate customers' design transition from the TO220 and TO247 THD devices, Infineon has designed QDPAK and DDPAK SMD devices to deliver equivalent thermal capabilities with improved electrical performance. Based on a standard height of 2.3 mm for the QDPAK and DDPAK SMD TSC packages for HV and LV devices, developers are now able to design complete applications such as OBC and DC-DC conversion with all SMD TSC devices measuring the same height. Compared to existing solutions that require a 3D cooling system, this simplifies designs and reduces the system cost of cooling.
Additionally, TSC packaging offers up to 35 percent lower thermal resistance than standard bottom-side cooling (BSC). By enabling the use of both PCB sides, TSC packages offer better board space utilization and at least two times more power density. The thermal management of the packages is also improved by thermal decoupling from the substrate, since the thermal resistance of the leads is much higher than that of the exposed package top side. Because of the improved thermal performance, stacking different boards is not necessary. Rather than combining FR4 and IMS boards, a single FR4 board is enough for all components and also requires fewer connectors. These features reduce the overall bill of materials (BOM), which ultimately reduces overall system cost.
In addition to improved thermal and power capabilities, TSC technology also offers an optimized power loop design for increased reliability. This is made possible by placing the drivers very close to the power switch. The low stray inductance of the driver-switch loop reduces the loop parasitics and leads to less ringing on the gate, higher performance, and a smaller risk of failures.
Additional information is available at www.infineon.com/ddpak and www.infineon.com/obc
The post Infineon Top-Side Cooling Packages Registered as JEDEC Standard for High-Power Applications appeared first on ELE Times.
MacDermid Alpha Showcases Solutions for Automotive Applications, 5G Infrastructure
Powering a path to the future of chemicals utilized in electronic applications, MacDermid Alpha will exhibit at booth A17-A20 in Hall 1, reflecting their strong local presence in India and ongoing support to resident OEMs, EMS providers, ODMs, CEMs, and distributors.
Visitors can expect to see a wide range of innovative technologies, specifically designed to improve the reliability and performance of electric vehicles, power electronics, battery control units, battery management systems, batteries, 5G applications, automotive electronics, and mobile devices. Solutions on show will include specific offerings for die attachment, such as ALPHA Argomax silver sintering paste; ALPHA ATROX ultra-low-stress conductive die attach adhesive; and the SAC-based alloy ALPHA Innolot for environments exposed to high temperature and vibration. Additionally, ALPHA HRL3 Solid Solder is designed to enable low-temperature processes that mitigate defects in selective and dip soldering, thereby reducing effects created by warpage.
A range of thermal interface materials (TIMs), polymers/resins/potting compounds, and conformal coatings will also be available. MacDermid Alpha’s broad portfolio will include chemicals for PCB fabrication, including Blackhole for lower cost of ownership and easier maintenance, high-performance films for smart surfaces, EMI shielding solutions, and superior battery component plating. In addition, pre- and post-treatments for the anodizing industry will be highlighted; these technologies are already exceeding the expectations of aluminum finishers around the world.
Experts from MacDermid Alpha will host three unmissable presentations at the technical conference on February 14. Gyan Dutt, Strategic Marketing Manager, Semiconductor Solutions, will present the paper ‘Advanced Materials for EV Powertrain and High-Reliability Applications’, covering high-reliability requirements for ADAS and connected applications in next-generation vehicles. The second presentation will examine polymers/resins/potting compounds, TIMs, and conformal coatings solutions for EV batteries and power electronics.
The presentation is titled ‘High-Performance Polymer Protection and Thermal Management for Power Electronics and E-Mobility’ and will be hosted by Padmanabha Shakthivelu, General Manager of MacDermid Alpha’s Electrolube brand in India.
Additionally, MacDermid Alpha’s OEM Sales Director (India), Sharan Aiyappa, will present the paper ‘Film Insert Molding – the emergence of Display In-Mold Electronics and Smart Surfaces’, covering film insert molding technology for new automotive cockpit designs, especially those with curved surfaces, larger displays, more complex shapes, and electronic functionality integrated into plastic parts.
Padmanabha Shakthivelu, General Manager of MacDermid Alpha’s Electrolube brand in India, comments, “It is a fantastic opportunity for MacDermid Alpha, with unrivaled global R&D, and solutions that span the entire scope of the electronics supply chain, to interact with electronics manufacturers in India. We aim to demonstrate how we can significantly increase performance and reliability to the local market, with a focus on sustainability. With increasing global competition, rising expectations from consumers, and high pressure on supply chain operations, MacDermid Alpha’s local facilities in India can ensure chemical solutions developed to the highest global standards, highly efficient customer support, short lead times, and economies of scale.”
Reaching every aspect of the electronics supply chain from design to post-production, and beyond, MacDermid Alpha enables the highest quality electronics interconnection across the company’s integrated circuit, assembly, and semiconductor divisions. Established innovators aligned under MacDermid Alpha Electronics Solutions include Alpha, Compugraphics, Electrolube, Kester, and MacDermid Enthone brands.
Visitors to the show are welcome to meet MacDermid Alpha representatives at Booth A17-A20 in Hall 1 and discover how they can benefit from a complete ‘start to finish’ technology roadmap with some of the most advanced technologies on the market.
For further information, please visit www.macdermidalpha.com
The post MacDermid Alpha Showcases Solutions for Automotive Applications, 5G Infrastructure appeared first on ELE Times.
Vehicle Electrification Providing Sustainability, Efficiency and Affordability
Author: Sharmistha Bose, Allied Market Research
Electric vehicles (EVs) are gaining popularity globally as the technologies involved in their making continue to evolve and improve. They are set to take over the roads in the coming years, handling deliveries, everyday commutes, heavy jobs, and more. A key reason for their popularity is that they produce zero tailpipe emissions, which means they can drastically reduce greenhouse gas emissions and their impact on climate change. EVs also deliver strong performance, which is a unique selling point when it comes to cars. Additionally, electric cars are becoming more affordable and practical for daily use.
Furthermore, the components that go into the making of EVs help reduce the overall weight of a vehicle, thereby controlling fuel consumption and enhancing operational efficiency.
Faced with stringent environmental regulations; a surge in demand for sustainability, greater operational efficiency, and affordability; and policy changes such as bans on gasoline and diesel vehicles in some countries, auto manufacturers and suppliers realize that vehicle electrification is crucial to their survival. Hence, they are working toward electrifying their vehicle portfolios. The vehicle electrification market is also driven by the fact that EVs are quite responsive and have ample energy.
The electrical components can also be easily monitored and controlled for their efficiency and performance compared to conventional vehicles, which have heavier and less efficient hydraulic power transfer systems. Furthermore, a rise in the usage of fuel-efficient mobility solutions, along with effective performance requirements and a reduction in the price of batteries per kWh, propels the market growth. Allied Market Research predicts that the global vehicle electrification market will garner a revenue of $140.29 billion by 2027, growing at a CAGR of 11.3% during the forecast period 2022-2031.
Automakers across the world are ramping up the production of electric vehicles due to the high demand for electric cars globally. They are developing EV technology at lightning speed by keeping an eye on innovation and smart manufacturing and reducing battery prices. For instance, in December 2022, Ansys, Inc., an American multinational company, collaborated with Indian software firm Tata Consultancy Services (TCS) to concentrate on the technological development of electric vehicles. The partnership aims to develop a Center of Excellence (CoE) for advanced digital engineering for vehicle electrification in Pune, run by TCS using Ansys's simulation software.
TCS leverages advanced engineering simulation techniques to speed up the development of vehicle electrification for automotive customers all across the world. This CoE develops e-powertrain components, such as batteries, motors, inverters, and power electronics, and their integration into EVs. In November 2022, Airbus SE, a European multinational aerospace corporation, and Renault Group, a French multinational automobile manufacturer, joined forces with the intention of speeding up their vehicle electrification goals and improving their range of products.
This collaboration is likely to transform the transport landscape, thereby contributing to the goal of net-zero emissions by 2050 in the automotive and aviation sectors. It enables Airbus to advance technologies for future hybrid-electric aircraft and for energy storage, which continues to be a major hindrance to the development of long-range electric vehicles. The partnership intends to research technologies associated with energy management optimization and battery weight reduction. It also aims to pursue the best ways to shift from existing cell chemistries (advanced lithium-ion) toward all-solid-state designs. The companies plan to research the full lifecycle of future battery designs while assessing their carbon footprint.
In November 2022, VivoPower International Plc, an international battery technology, electric vehicle, solar, and critical power services company, announced that it and its subsidiary Tembo EV Australia Pty Ltd. had inked a partnership with Evolution Group Holdings Limited, an Australian public company providing traffic management services, with the aim of electrifying Evolution's range of light utility vehicles for traffic management and fleet management. Under the deal, Tembo Australia will convert all of Evolution's vehicles to fully electric versions over the next few years, following commercial and technical on-road trials, with the first fully electrified utility vehicle to be certified in 2023.
According to Gary Challinor, COO of VivoPower, the company is the first to provide fleet electrification and repowering in Australia and New Zealand as well as the first in the traffic management sector. He said that they are happy to work with Evolution to provide conversion EV powertrain kits, ruggedization and customization, training and change management, and sustainable energy solutions.
In June 2022, Turntide Technologies, a US-based business that makes intelligent, sustainable motor systems, introduced Turntide Electrification, a range of battery and powertrain components that provide efficiency, safety, and performance solutions to a wide range of industries. The new technologies have been developed to support the decarbonization of commercial and industrial transport segments, including off-highway vehicles, autonomous guided vehicles, trucks, buses, marine vessels, and more. Component systems include a Hyperdrive battery system, inverters, motors, thermal cooling pumps and fans, and DC-DC converters.
The post Vehicle Electrification Providing Sustainability, Efficiency and Affordability appeared first on ELE Times.
Couldn't find a USB hub and switch combo small enough for my project, so I made my own
Submitted by /u/Jefferson-not-jackso
4 Lead Nurturing Best Practices You Need to Implement
One of the main goals you should have when investing in marketing campaigns for your business is generating leads. A good marketing campaign can help you attract attention from a new audience, which can lead to substantial growth. As new leads start to roll in, you need to make sure they’re being managed properly.
Issues involving lead management can affect your business negatively. This is why your main goal should be managing and nurturing the leads you get. When properly nurtured, these leads can turn into actual customers.
Here are some lead nurturing best practices you need to implement.
1. Work On Targeting Your Content
A recent study found that nearly 65 percent of business owners have no lead nurturing strategy. If you work hard to generate leads and don’t nurture them properly, you will waste a lot of time and money. When trying to nurture leads, you need to consider the buyer’s journey. Ideally, you want to create content for every stage of this journey.
Targeting your content can help you attract attention from consumers. Addressing common pain points with this content is also a good idea. This will give potential customers the feeling that you actually know what they need. Showing these consumers that you know their needs can help you turn them into actual customers.
2. Lead Nurturing Software is a Good Investment
Attempting to handle every aspect of lead nurturing can be a daunting task. If you find yourself dropping the ball with your lead nurturing efforts, you have to find ways to address this. One of the best things you can do to fill the gaps in your lead nurturing strategy is to invest in the right technology.
There are a number of lead nurturing software programs on the market that can help you improve conversion rates. These programs are designed to save you time and resources. Before choosing software to use for your business, do your homework. Choosing software that is both easy to use and effective can help your business greatly.
3. Schedule One On One Conversations
Providing qualified sales leads with a personalized method of communication is a good idea. Most people like to talk face-to-face with a company before using their products/services. While using digital communication methods like live chat and email can be helpful, they can also be problematic. Humanizing your business is a great way to turn sales leads into customers.
Taking time out of your busy schedule to chat with qualified sales leads is a great idea. You can use video chat software to make these meetings easier.
4. Follow-Up With Leads
When a consumer reaches out for information about your products/services, you have to respond. Failing to follow up with interested consumers can be disastrous for your business. This is why you need to have a plan in place to ensure follow-up calls and emails are sent to potential customers.
By implementing these practices, you can grow your customer base in no time.
The post 4 Lead Nurturing Best Practices You Need to Implement appeared first on Electronics Lovers ~ Technology We Love.
EEVblog 1528 - I found a bin FULL of Dumpster Medical Devices!
Fixing security threat with post-quantum crypto on eFPGA
One of the most critical ramifications of the emergence of quantum computers is the impact on security because quantum computers have the potential to break even the most secure encryption methods used today. That is why the industry will be seeing a rapid shift from traditional cryptosystems to Post Quantum Cryptography (PQC) systems in the next few years. PQC systems respond to this growing quantum threat because they are based on mathematical problems that cannot be solved efficiently with Shor’s algorithm, or by any other known quantum computing algorithm.
In this article, we’ll explain how companies can start building PQC security into their computers and network equipment today by leveraging embedded FPGA (eFPGA) that can be easily updated in the future as the threat of quantum-computer security attacks becomes a reality. But first, let’s take a look at what this threat is and why every system-on-chip (SoC) or systems designer should be taking it seriously.
How quantum computers break security algorithms
Today’s cryptosystems leverage asymmetric cryptography algorithms that are used by modern security protocols for key exchange and digital signatures and that rely on the complexity of certain mathematical problems. Currently, the main problems used for asymmetric cryptography are the integer factorization problem underlying the RSA algorithm and the discrete logarithm problem underlying elliptic curve cryptography (ECC). Shor’s algorithm is a quantum algorithm that can solve both of these problems on a large enough quantum computer. If that happens, cryptosystems utilizing RSA and ECC would be compromised.
One of the biggest misconceptions is that companies don’t have to worry about this right now because quantum computers big enough to break modern-day cryptosystems don’t exist today. This is not the case, because many semiconductor chips being designed today will still be in use for decades. It means that when quantum computers become mainstream, all the data protected by those semiconductor chips is instantly at risk. Yes, even data encrypted today could be decrypted in the future when a powerful-enough quantum computer comes along.
The rise of PQC
Recognizing the need to mitigate the risk of quantum computers, the National Institute of Standards and Technology (NIST) of the United States initiated a competition in 2016 to find solutions to standardize PQC algorithms. After three rounds that concluded in July 2022, four candidate algorithms were selected for standardization: CRYSTALS-Kyber, CRYSTALS-Dilithium, Falcon, and SPHINCS+. Kyber is a so-called Key Encapsulation Mechanism (KEM) that is used for key exchange and the rest are digital signature algorithms.
NIST continues the competition with a fourth round to find even further advanced PQC algorithms for a more robust standard in the future. Although the algorithms to be standardized are now known, they may still be tweaked before even the draft standards are written. The final standards are expected to be published in a couple of years and may still change from what is known today.
However, even though these algorithms have been selected, the standards are not yet finalized, and yet there is an urgent call for systems designers to start migrating to PQC immediately. In fact, many organizations are starting to mandate that security systems support PQC in the near future. As an example, the National Security Agency (NSA) has mandated that certain U.S. national systems must support PQC in 2025. These requirements, combined with the still-changing PQC landscape, set very high needs for crypto agility: the ability to update and change cryptographic algorithms in deployed systems.
To trust or not to trust
Because PQC schemes are only a few years old and many are based on new types of mathematical problems, they cannot be fully trusted at this stage or even when the final standards are out. It’s entirely possible that previously unknown weaknesses will be discovered and allow breaking them even with classical computers.
To mitigate the risks of a failure of the new PQC schemes, many authorities, researchers, and security professionals recommend using a hybrid mechanism. A hybrid mechanism combines a PQC scheme with a traditional scheme—ECC in most cases—so that the combination remains secure even if one of them fails under classical or quantum attacks.
Figure 1 Security professionals increasingly recommend hybrid mechanisms that combine PQC with traditional schemes like ECC. Source: Flex Logix
Hybrid mechanisms will reduce both risks: the quantum threat and the possible failure of PQC. It is likely that hybrid mechanisms will be widely deployed and used for a long time. This sets high requirements for the implementation of secure systems, as they need to have secure and efficient implementations of both ECC and PQC. They must also be implemented in a crypto-agile manner that permits changes after deployment if some of the algorithms are upgraded or replaced. This is a challenge that reconfigurable computing can answer.
How eFPGAs can help
The problem both SoC and systems designers have today is how to start incorporating PQC support even though the PQC algorithms may change in the next several years. This is a big problem with current chip design because chip circuitry typically cannot be changed or modified after tape-out. And with the rapidly rising cost of developing SoCs, particularly at advanced process nodes where a spin or re-spin could take millions of dollars, there is not an easy solution. Or is there?
Here, at this security technology crossroads, eFPGAs are uniquely qualified because they provide the ability to change the PQC algorithms while still providing performance, power, and cost savings over other alternatives. It’s also possible to retrofit PQC into systems that already have eFPGA included in the SoC. In addition, by adding reconfigurable computing to the SoC, the system can save on power and cost yet still have high-performance encryption.
Using eFPGA, chip designers are no longer locked in once RTL is frozen, but rather have the flexibility to make changes at any point in the chip’s life span, even in the customers’ systems. This eliminates many expensive chip spins and enables designers to address many customers and applications with the same chips. It also extends the life of chips and systems because designers are now able to update their chips as protocols and standards change in the future.
Figure 2 eFPGAs are uniquely qualified for applications like PQC support in SoC designs. Source: Flex Logix
Many existing SoC architectures have hardened cryptography modules that include support for a multitude of cryptography algorithms including ECC, but not PQC. Updating these modules to support PQC and hybrid mechanisms after deployment is very hard or even impossible and very expensive without eFPGA. Cryptography modules with PQC support will be difficult and risky even in new projects in the future as they may not be available in the market at all or come with fixed parameter sets that are impossible to change if the algorithms get tweaked in the final stages of the PQC standardization process or even broken later. Here, eFPGA permits complementing cryptography modules with PQC support that can be updated to accommodate any future changes.
eFPGA may also be used to implement the entire hybrid mechanism in a resource-efficient manner. The eFPGA can first be programmed to implement a PQC KEM and compute the PQC shared secret, then reprogrammed to implement ECC and compute the ECC shared secret, and finally programmed to implement the key derivation function that computes the final shared secret from the PQC and ECC shared secrets.
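Conceptually, the final step is just a key-derivation function over the two shared secrets. Below is a minimal Python sketch of that step (software, not eFPGA logic), assuming both secrets are already available as byte strings; the HKDF-over-concatenation construction, the salt/info labels, and the function names are illustrative assumptions, not the method of any particular product or standard.

```python
# Minimal sketch of the final key-derivation step for a hybrid mechanism.
# Assumptions (not from any specific product or standard draft): both shared
# secrets already exist as byte strings, and they are combined by HKDF-SHA256
# over their concatenation. Real deployments follow the exact construction
# mandated by the protocol in use.
import hashlib
import hmac


def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """Plain HKDF (RFC 5869) built from the standard library."""
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()  # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                            # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]


def hybrid_shared_secret(pqc_ss: bytes, ecc_ss: bytes) -> bytes:
    """Combine a PQC (e.g., Kyber) shared secret with an ECC (e.g., ECDH)
    shared secret so the result stays secure if either one fails."""
    return hkdf_sha256(ikm=pqc_ss + ecc_ss,
                       salt=b"",                        # placeholder salt
                       info=b"hybrid-pqc-ecc-demo",     # placeholder label
                       length=32)


if __name__ == "__main__":
    # Dummy 32-byte secrets stand in for the outputs of the KEM and ECDH steps.
    session_key = hybrid_shared_secret(b"\x01" * 32, b"\x02" * 32)
    print(session_key.hex())
```

The point of the concatenate-then-derive pattern is that an attacker must recover both input secrets to learn the session key, which is exactly the failure-tolerance property hybrid mechanisms are meant to provide.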
An eFPGA inside the SoC allows for other advantages besides being smaller and generating less heat. One of the problems facing cryptographers is the issue of export laws of various countries and the issue of sensitive information being provided to nefarious people who wish us harm. With an eFPGA inside the SoC, the PQC algorithms remain safe by programming after the SoC is back from manufacturing in a known safe location. The eFPGA binaries can be encrypted using physically unclonable function (PUF) to further secure them in case the computing device is stolen or lost in the field.
Are your systems ready?
While the age of quantum computers has not yet been realized, this threat is coming sooner than people think. Systems designers need to protect not only the data they are recording today, but their data of the future. An SoC manufacturer that can provide the assurance that its SoC today will be able to adapt to changing protocols and threats in the future will be the clear winner.
Andy Jaros is VP of sales at Flex Logix.
Kimmo Järvinen is CTO and co-founder of Xiphera.
Matti Tommiska is CEO and co-founder of Xiphera.
Related Content
- Closing Knowledge Gap on Hardware Security
- The Role of Hardware Root of Trust in Edge Devices
- eFPGA expands the ecosystem footprint one deal at a time
- Embedded FPGA (eFPGA) technology: Past, present, and future
- What’s Driving the Shift from Software to Hardware in IoT Security?
The post Fixing security threat with post-quantum crypto on eFPGA appeared first on EDN.
Birds on power lines, another look
Some time ago, we asked the question: “Why Do You Never See Birds on High-Tension Power Lines?”
Just today, this very afternoon, I saw something that seemed to put the lie to my question (Figure 1).
Figure 1 A flock of birds on a power line.
The original thesis was that high tension power line temperatures rise so high that they can burn birds’ feet, but this picture adds something to that.
This power line doesn’t run cross country. It only passes along our commuter railroad right-of-way and might be better thought of as a moderate-tension line with a lesser temperature rise than previously described.
With the weather temperature at 36°F, this flock of birds was feeling the chill and was therefore taking advantage of a relatively modest power line temperature rise to warm their feet.
Nature can be just full of surprises.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- A wing and a wire
- Why do you never see birds on high-tension power lines?
- A tale about loose cables and power lines
- How hot is too hot to touch?
- Misplaced insulator proves fatal
The post Birds on power lines, another look appeared first on EDN.
How gate driver enables a wide range of battery voltages
When designing a product portfolio with a wide range of voltage and power requirements, finding a single driver design solution to serve the full portfolio can deliver significant savings in time and resources. Designs that can handle a wide range of battery voltages require high-efficiency operation with the capability to drive a wide range of MOSFET gate current and voltages, accommodate high thermal dissipation, and withstand very large negative transients.
This article discusses these challenges when trying to accommodate wide-ranging battery voltages and motor powers in a single driver design and presents a commercially available solution as a design case study.
To drive motors in high-voltage, high-power applications where low noise with high efficiency is the critical design factor, the continuing design trend is to use permanent magnet synchronous motor (PMSM) drives and/or brushless DC (BLDC) drives. For many other applications, such as those where the acoustic noise of the mechanical processing usually drowns out the motor noise, simple DC motors that can be driven with a simple half-bridge architecture are still adequate.
Nevertheless, running a motor/driver with a wide power range causes power dissipation and switching effects that pose significant challenges for both three-phase drives and DC drives, so the considerations here are valid for both.
High-voltage and high-power DC motor operation
Running a DC motor may seem like a simple task. However, it still presents challenges, including the need to sustain high-voltage operation to support dissipation of motor inductance energy when braking. To address this challenge in applications where the driver stage must be suitable for use with a wide power range, designers require solutions that enable both high-voltage operation and the capability to handle huge negative transients.
A wide-ranging power solution also requires the capability to drive a wide range of gate currents for medium-power to high-power MOSFETs, which introduces high power dissipation inside the device. To address this power dissipation challenge, designers may find the best solutions in drivers with low driving impedances.
For a start, driving a MOSFET requires an understanding of the MOSFET switch waveform, discussed next.
Understanding MOSFET switching behavior
For inductive loads like motors, the switching cycle can be divided into four phases:
- t0 to t1 → Gate voltage rises to the threshold voltage.
- t1 to t2 → Drain current (iD) rises, and gate voltage rises according to the transconductance of the MOSFET.
- t2 to t3 → Drain-to-source voltage (VDS) falls—not linearly, because input capacitance (CISS), output capacitance (COSS), and reverse transfer capacitance (CRSS) are dependent on VDS—and the gate current charges the Miller capacitance (CGD), during which time VGS is stable.
- t3 to t4 → VDS is saturated low, and VGS rises to its final value.
These phases are illustrated in Figure 1.
Figure 1 Plot of voltages and currents over time for a MOSFET encompasses drain-to-source voltage (VDS), gate-to-source voltage (VGS), drain current (iD), gate current (iG), and threshold gate-to-source voltage (VGS(th)). Source: Allegro Microsystems
The gate-to-source voltage at the start of Phase 1 (VGS(t)) is zero, and the flow of the gate-drive current (iG(t)) that the driver needs to supply is at its peak. This maximum drive strength is not needed for the total switching time (t0 to t4) because the gate capacity becomes successively charged.
Approximately, the typical turn-on time can be obtained by:
tr(HS) = CLOAD × RDS(on)UP
Where:
- tr(HS) is the time from t0 to t4,
- CLOAD represents the gate capacitance of the MOSFET, and
- RDS(on)UP is the pull-up on resistance.
In general, to keep the power budget low, fast switching is desired. On the other hand, in most cases, gate drive current needs to be limited to control switching speed—dV/dt—to meet electromagnetic compatibility (EMC) requirements.
By adding a gate resistor (RGATE), turn-on time can be extended according to:
tr(HS) = CLOAD × (RDS(on)UP + RGate)
Several effects need to be weighed when increasing the gate resistor. Increasing the gate resistor will:
- Increase the switching losses, leading to increase in the light red area in Figure 1.
- Decrease the demand for driving power from the gate driver. This effect is generally desirable because it results in less power dissipation from the driver itself, so less heat is produced. The tradeoff is a longer time to charge the total gate capacitance of the MOSFET and an increase in the power dissipated on the gate resistor, meaning a higher voltage drop occurs for a longer time. In other words, the power losses shift from the driver to the gate resistor.
- Increase the dead time.
The best tradeoff needs to be found to account for all these effects.
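As a quick sanity check on that tradeoff, the short Python sketch below evaluates the article's turn-on-time approximation, tr(HS) = CLOAD × (RDS(on)UP + RGATE), for a few candidate gate resistors. The component values are illustrative assumptions only, not figures from any datasheet referenced here.

```python
# Back-of-envelope helper for the turn-on-time approximation given above:
# tr(HS) = CLOAD * (RDS(on)UP + RGATE).
# All numeric values below are illustrative assumptions.

def turn_on_time(c_load_farads: float, rdson_up_ohms: float, r_gate_ohms: float = 0.0) -> float:
    """Approximate t0-to-t4 turn-on time per the article's RC approximation."""
    return c_load_farads * (rdson_up_ohms + r_gate_ohms)


if __name__ == "__main__":
    c_load = 10e-9        # assumed 10 nF effective gate capacitance
    rdson_up = 2.0        # assumed 2-ohm driver pull-up on-resistance
    for r_gate in (0.0, 4.7, 10.0):  # candidate external gate resistors, ohms
        tr = turn_on_time(c_load, rdson_up, r_gate)
        print(f"RGATE = {r_gate:4.1f} ohm -> tr ~ {tr * 1e9:6.1f} ns")
```

Sweeping RGATE this way makes the tradeoff explicit: each added ohm slows the edge (helping EMC) at the cost of longer switching intervals and higher switching losses.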
Understanding MOSFET fast-switching and Miller effect
In addition to the MOSFET switching behaviors and tradeoffs discussed above, fast switching can cause another manifestation of the well-known Miller effect, which must be considered in a design. The Miller effect can lead to an induced VGS bounce caused by a gate inrush current, according to:
IG = CGD × dVDS/dt
The name stems from the gate-to-drain capacitance of a MOSFET, also termed the Miller capacitance.
To illustrate the Miller effect, all MOSFET capacitances in a half bridge are shown in Figure 2. The input capacitance (CISS), output capacitance (COSS), and reverse transfer capacitance (CRSS)—values that are typically indicated in the MOSFET datasheet—are related to the gate-to-drain capacitance (CGD), gate-to-source capacitance (CGS), and drain-to-source capacitance (CDS) as follows:
CISS = CGD + CGS
COSS = CDS + CGD
CRSS = CGD
These relations can be rewritten as:
CGD = CRSS (CGD = Miller capacitance)
CGS = CISS – CRSS
CDS = COSS – CRSS
Figure 2 Here is a view of a half-bridge topology with MOSFET capacitances. Source: Allegro Microsystems
An example of ideal gate drive signals of both the high side and the low side with sufficient dead time and with motor voltage monitored at the S node is shown in Figure 3. In contrast to this ideal, the high-side and low-side gate voltages might be observed in practice as shown in Figure 4. The difference between the ideal and the actual is explained by the switching behavior.
Figure 3 The diagram shows low-side (LS) gate voltage, high-side (HS) gate voltage, and motor voltage over time. Source: Allegro Microsystems
Figure 4 Another view shows low-side (LS) gate voltage, high-side (HS) gate voltage, and motor voltage over time with oscillation. Source: Allegro Microsystems
When switching on the high-side gate, the fast dV/dt transient of the S node recharges the low-side MOSFET's Miller capacitance (CGD), pulling the VGS of the low-side MOSFET above the low-side gate threshold voltage (VGS(th)). This leads to cross-conduction, resulting in oscillation at the S node and extensive power losses. Reducing the low-side MOSFET gate resistor relative to the high-side MOSFET gate resistor and using an adequate snubber on the S node can mitigate this effect. Significant improvements can be reached by adding an additional ceramic capacitor in parallel to the high-side gate capacitance, which would reduce dV/dt.
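The relations above condense into a few lines of arithmetic. The Python sketch below derives CGD, CGS, and CDS from datasheet values and estimates the Miller-induced gate current IG = CGD × dVDS/dt; every numeric value is an illustrative assumption, not a figure from the tables that follow.

```python
# Derive the physical capacitances from datasheet CISS/COSS/CRSS values and
# estimate the Miller-induced gate current. All numeric values are assumed
# for illustration only.

def derived_capacitances(ciss: float, coss: float, crss: float) -> dict:
    """Convert datasheet capacitances (farads) into CGD, CGS, and CDS."""
    return {
        "CGD": crss,            # Miller capacitance
        "CGS": ciss - crss,
        "CDS": coss - crss,
    }


if __name__ == "__main__":
    caps = derived_capacitances(ciss=2.4e-9, coss=0.4e-9, crss=0.02e-9)  # assumed values
    dv_dt = 5e9                      # assumed 5 V/ns slew at the S node
    ig_induced = caps["CGD"] * dv_dt
    print(f"Induced gate current: {ig_induced:.2f} A")

    ratio_before = caps["CGD"] / caps["CGS"]
    ratio_after = caps["CGD"] / (caps["CGS"] + 1e-9)   # extra 1 nF gate-source capacitor
    print(f"CGD/CGS before: {ratio_before:.4f}, after adding 1 nF: {ratio_after:.4f}")
```

The last two lines show why an added parallel gate-source capacitor helps: it lowers the CGD/CGS ratio, which is the lever the following section uses to suppress the Miller-induced bounce.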
Leveraging MOSFET capacitances to avoid the Miller effect
Here is a design case study on avoiding the Miller effect using the APEK89500 demonstration board, which is designed to evaluate the A89500 half-bridge MOSFET driver. The A89500 fast-switching half-bridge MOSFET driver is designed to enable both high-voltage operation and strong gate drive—for instance, 2.7 A typical source current and 5.2 A typical sink current on both the high side and the low side. It scales up to a 100 V bridge supply and handles up to –18 V transients at the high-side gate output terminal and the high-side source (load) terminal, as illustrated in Figure 5.
Figure 5 The diagram highlights the negative transient voltage sustainability. Source: Allegro Microsystems
With a dual-flat no-leads (DFN) package, the A89500 has a very low package junction-to-ambient thermal resistance (RƟJA) of 38°C/W for a two-layer 3.8 × 3.8-inch PCB. To assist designers in understanding the maximum drive strength the driver needs to supply throughout all phases from t0 to t4 of the switching cycle, the approximation used in this article for the charging time of a capacitor connected directly to the driver is also used in the datasheet, as shown in Table 1.
Table 1 Excerpts from datasheet highlight charging time of a capacitor connected directly to the driver. Source: Allegro Microsystems
The APEK89500 demonstration board designed to evaluate the A89500 driver demonstrates the capability of the A89500 to avoid the Miller effect and deliver low-noise, high-efficiency operation. The board deploys two MOSFETs with the capacitances, as shown in Table 2.
Table 2 Capacitances shown are for the APEK89500GEJ-01-T or APEK89500KEJ-01-T demonstration board used with the A89500 fast-switching 100 V half-bridge MOSFET driver. Source: Allegro Microsystems
The APEK89500 demonstration board was deliberately set up without optimization for the design target. This lack of optimization resulted in susceptibility to the Miller effect. Even so, the Miller effect could be completely canceled out by adding a 1-nF capacitor in parallel to the high-side gate capacitance CGS. The resulting reduction of the CGD/CGS ratio is shown in Table 3. When implementing designs that use the A89500, maintaining a similar reduction ratio is a good approach to completely avoid the Miller effect.
Table 3 Additional gate-to-source capacitances are shown for the APEK89500GEJ-01-T or APEK89500KEJ-01-T demonstration board used with the A89500 fast-switching 100 V half-bridge MOSFET driver. Source: Allegro Microsystems
As shown in this design use case, adding a capacitor in parallel to the high-side gate capacitance CGS will cause a reduction in dV/dt at the S node and will therefore mitigate the Miller effect. This result can be leveraged by reducing the CGD/CGS ratio at the low side as well—for instance, by adding a capacitor parallel to the low-side gate capacitance. This approach becomes comprehensive when considering CGD and CGS as a voltage divider (Figure 6). Thus, when increasing CGS, the apparent gate-to-source impedance becomes smaller, which further supports the effort to keep the gate well below VGS(th).
Figure 6 Capacitive voltage divider at the gate takes into consideration the apparent impedance. Source: Allegro Microsystems
Of course, a proper CGD/CGS ratio can be obtained by using a MOSFET that is appropriate for the design. Additional measures to avoid a capacitive switch on the low side include using a low-side MOSFET gate resistor that is reduced in contrast to the high-side MOSFET gate resistor and a snubber on the S node to mitigate the effect of cross-conduction.
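To make the voltage-divider argument concrete, the short sketch below estimates the induced gate bounce as ΔVGS ≈ ΔVDS × CGD / (CGD + CGS + Cextra), the standard capacitive-divider approximation for a gate that is otherwise high-impedance during the transient. All values here are assumptions chosen to illustrate the comparison, not measured data from this design.

```python
# Capacitive-divider estimate of the gate bounce induced when the S node
# steps by delta_vds. All numeric values are illustrative assumptions.

def induced_vgs(delta_vds: float, cgd: float, cgs: float, c_extra: float = 0.0) -> float:
    """Approximate induced gate-source bounce through the CGD/CGS divider."""
    return delta_vds * cgd / (cgd + cgs + c_extra)


if __name__ == "__main__":
    cgd, cgs = 0.1e-9, 2.3e-9        # assumed Miller and gate-source capacitances
    step = 80.0                      # assumed 80 V step at the S node
    vgs_th = 2.5                     # assumed gate threshold voltage
    for extra in (0.0, 1e-9):        # without and with an added 1 nF capacitor
        bounce = induced_vgs(step, cgd, cgs, extra)
        verdict = "OK" if bounce < vgs_th else "risk of cross-conduction"
        print(f"extra C = {extra * 1e9:.0f} nF -> bounce ~ {bounce:.2f} V ({verdict})")
```

With these assumed values the added capacitor pulls the estimated bounce back below the threshold, which is the same qualitative result the demonstration board showed.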
Christian Huber is senior field applications engineer at Allegro Microsystems.
Related Content
- Gate Driver Solutions Proliferate for Motor Control
- Gate drive design for enhancement-mode GaN FETs
- Motor drive design: Integrated drivers vs. gate drivers
- How to Select the Right Gate Driver for your SiC MOSFET
- Gate-Driver IC Progress Pushes SMPS to New Levels of Power Density
The post How gate driver enables a wide range of battery voltages appeared first on EDN.
Fun little hot air rework and trace repair
Submitted by /u/wastedhotdogs
CM601: 8-bit Bulgarian microprocessor clone of the MC6800
Submitted by /u/Ryancor
Managing chaos
There is one thing that many new engineers struggle with, something that is of great importance but is not taught often in undergraduate courses—methods of simple quality analysis and control.
The closest most undergraduates get to any discussion of this might be in a semiconductor design course where yields and process variation might be touched upon, or in my case, the class I had on active networks and active filters discussed the sensitivity of various designs to component value changes.
Yet for any manufacturing environment, this simple skill is of utmost importance in getting a handle on chaos. And believe me, it will be chaos if your critical processes are not being monitored and under control.
We have all seen it—the box full of bad boards that can’t be fixed, and get set aside, or perhaps the test limits on some automated measurement that have to be adjusted all the time, and my favorite: the boards that get to final-inspection but can’t be shipped because they do not meet the customer specifications.
With no process control in place, these issues will all seem unconnected, and it will appear that the entire operation is in chaos. Which it is…
Learn from the best
When I got out of school, the Japanese were the quality leaders. Their products were precise, worked all the time, and cost less than ours. I had heard that this was because of their use of statistical quality control. I had no idea how to do this, and neither did the engineers who were mentoring me, so I purchased a book on statistics. Unfortunately, this book—although full of Chi-square and Poisson distribution theorems and the like—didn’t attach any practical meaning to any of it.
A few years later, I had the good fortune to attend a series of classes on the Analysis of Variance in a practical setting. These folks used the book: “Understanding Variation” by the well-known author Donald J Wheeler [1]. His books are still available today and they are little gems. Like a well-written application note, they are short and to the point, teaching the subject in a concise, easily applied manner.
What most companies do
Everyone gives at least lip service to quality because everyone knows that it is important. But the approach taken is usually that of Dr. Deming’s [2] Red Bead Experiment.
In the Red Bead Experiment, a bin of white beads is mixed with a few red beads. The white beads signify a good product, and the red beads signify a bad product. The class is divided into about four groups to signify work centers or work shifts. Each group is given a spatula that has a grid of holes in it. In turn, each group sticks their spatula into the bin containing the mixed beads and gets some beads in the grid of holes.
The instructor counts the number of red beads on each turn and starts to proclaim the group that gets the lowest number of red (or bad) beads as being the “best production team”.
Well, as you can imagine, the number of good or bad beads is totally out of the control of the groups—they stick the spatula in and get a random number of bad beads each time—they can’t pick or select at all.
Yet the instructor, who by the way is akin to “management”, starts to tout the success of the “best team”. Yet it is all random!
During the exercise, because of the random nature of the process, the previous “best team” will undoubtedly fail and go “backwards”. The instructor will show his “displeasure” with this previously good team’s high-quality standards, which are now clearly “backsliding”.
In the end, there will be some team that just happens to have the best score and they will be awarded a bonus for the “best quality”.
Dr. Deming was a genius because this is a perfect example of how most workplaces try to implement quality control—randomly, and worse yet, with random goals!
If you have never been in one of these kinds of classes, I can’t recommend enough that you watch one on YouTube. Just search “Deming’s red bead experiment”. You can even find some taught by Dr. Deming himself. There are many available to watch and they only last about 30 minutes. This will be the best 30 minutes that you spend this week.
It sounds familiar, doesn’t it?
Well, that was Dr. Deming’s point…We all just hope that the Heisenberg uncertainty principle (i.e., if we pay attention to something, it will change) will work out, but in quality, it doesn’t. Paying attention is a start, but attention in and of itself doesn’t just fix any problem.
Simplest starting point
As I related at the onset, the simplest start to managing chaos is to apply Wheeler’s description of the Shewhart “XmR chart” [3]. “XmR” means X-bar (or the average), and “mR” means the moving range. This is a simple way to get a look at data, and it doesn’t require a computer, just measurements and a piece of graph paper. Oh, you see I mentioned “data”, actual “data”. While it is important to have “feelings” about process-related things, the only way to get a handle on them is to have actual data in a graphical form to start analyzing the possible root causes of the problems.
What this analysis can show is that sometimes processes are really out of control, and now that you have a way to measure it, you can start to understand the process to be able to change and to monitor it. Many times, however, you will find that the process is actually in control and is producing exactly what it can produce, yet you still can’t meet customer specifications. This says that the design of the process needs to be changed or the customer limits need to be changed to match reality.
If you do the Red Bead experiment enough times and measure the results, you will find that the system has an X-bar and a moving range that is actually “in control”, and it properly operates the way that the system is designed. It will produce a statistical number of red beads every time, and there is simply nothing that the workers can do about it short of changing the way that the system is designed or changing the expectation.
Now, this information won’t make anyone who designed the process or management happy immediately; the “red bead = bad product = bad workers” method of looking at things is simply too deeply ingrained in people who haven't had this training. But at least you will know the facts, and that is the start of “managing chaos”.
Interesting Observations
I have used XmR charts for 30 years now and there have been some really interesting observations along the way. There are many more of course, and many of those are detailed in Wheeler’s book. I mostly plot XmR charts to monitor a critical process continually; at other times I use them to analyze where a previously in-control process has gone out of control, as they can be used to look back in time (if you have the data saved somewhere, that is).
Figure 1 shows a process that is out of control. Naturally, there will be all sorts of pushback from the team that designed the process and management to “remove the obviously outlier points”. Unless you can positively say from your own personal investigation that those points were indeed exceptional outliers with a definite root cause found, don’t do it. If you do remove the outliers at the start, you will give the impression that the system is better than it is, and when they “re-appear” later, it will look to everyone that the system is getting worse. Naturally, not removing out-of-specification units from any chart will make no one else in the company happy—resist the peer pressure to do so.
Conversely, many new products may start out of control, that is why it is important to measure any process early in the prototype phase so that these things can be worked out and the root causes can be determined and fixed.
Figure 1 The process is out of control from the start.
In Figure 2, the minimum gain for the complete system to function properly is 4.75. This chart of measured gain shows that the customer specification is well within the natural process limits. In other words, the natural process shown here cannot meet the customer’s specifications. The options are:
- Toss the bad units and hope the process doesn’t get any worse
- Negotiate the customer specification so that it matches reality
- Redesign the process so that the specification can be met
Hoping that the process won’t get worse is an act of desperation and the least desirable option.
Figure 2 Minimum customer specification of 4.75 cannot be met with current process.
The average common mode response of an amplifier as shown in Figure 3 keeps shifting during the day, then repeats the next day. This was traced to the temperature inside the facility changing during the day in a spell of hot weather. Without some sort of chart, this effect would have been hard to diagnose. This effect was proven by taking the same units measured at the start of the shift and then measuring them later in the day and seeing that the units themselves showed the same shift. What the process was measuring was the temperature coefficient of the device. This problem may be more properly classed as a “measurement uncertainty issue”.
Figure 3 The shifting average common mode response of an amplifier due to changes in the facility’s temperature.
The bandwidth of a filter is measured and plotted in Figure 4. The bandwidth is adjusted with hand tweaking by expanding or compressing the coils of the inductors in the design. Everything looks okay, but looking at the measured value, the “local” average varies some over time. It was found out that these filters were built in the factory in batches of 20 each, and with that knowledge, the pattern can be seen in the chart. These batches might be better analyzed in those groupings and then compared between groupings.
Figure 4 Shifting patterns in filter bandwidth.
One week the grouping is nice and well within specifications, and the next week it takes a giant step and becomes out of specification. This is the measurement of a 7805-type regulator in Figure 5, and the shift was caused by running out of one SMT tube of parts and the next tube used was from a second source. Both manufacturers were well within the +/-4% absolute output specification, but their wafer fab processes were operating at different center points when the parts were made. Nothing is “wrong” with either manufacturer’s parts, but you can see the result of the raw material part-to-part differences in your finished product measurements.
Figure 5 Bimodal groupings.
Here you have a decision to make: continue to track the actual measurements, or set the limits at the specified data sheet part limits.
I generally measure the data as it comes to me. Later, if I find myself continually chasing the limit specifications around but there is no root cause to fix, then it may be time to set the limits based on the calculated data sheet or design values.
There is always a way
What if your production is sporadic or infrequent? How can you chart that? It turns out that Wheeler also wrote another book: “Short Run Process Control” [4] where he covers how to chart and monitor these types of processes.
There are also examples of bartenders implementing these processes to improve the accuracy of their drinks, etc. It may not always work, but looking at and thinking about actual data presented to you in an XmR chart is never a wasted effort. It beats the alternative of simply having feelings about a process.
Not the whole story
Quality control is also not the only important issue in a company; rather, it is on equal footing with other issues such as sales, profit, manufacturing capacity, ethics, etc. This is borne out by the fact that many quality award winners in the past have subsequently gone out of business, just as many industry leaders have gone out of business. Much of this will be out of your control, but in my experience, applying an XmR chart to your daily analysis of what you can control will make life much better for you because, even if no one listens to you, you will know what your process is capable of, and not just be guessing. This is a real personal chaos reducer.
Bonus
I added my Octave (Open Source Matlab clone) and Python Scripts that easily generate XmR charts from a CSV file data on Github for anyone interested in using them. See Reference [5] below.
Box: How to make an XmR chart
Start with some measurement of something. Here I have five measurements of a “widget”,
Measurements = 1.1, 1.0, 1.3, 0.8 and 0.9
The moving range (mR) is derived by finding the absolute value of the difference between the first and second, second and third, third and fourth measurements, etc. The mR is always positive, as it is the difference between successive measurements.
Moving Range = 0.1, 0.3, 0.5, 0.1
Plot the values (Wheeler’s book has some nice blank charts that you can copy and use), but anything works, even a piece of graph paper, as shown in Box Figure 1.
Box Figure 1 You don’t have to have a computer to make a XmR control chart.
To even think about calculating the limits, you need to start with at least 5 values. Start by calculating the average of the moving range (R),
R = (0.1 + 0.3 + 0.5 + 0.1) / 4 = 0.25
the upper control limit on the range (UCLr) is given by,
UCLr = 3.268 * R
for our data,
UCLr = 3.268 * 0.25 = 0.82
The UCLr value should be plotted on the mR chart. Calculate the average value of the measurements (X),
X = (1.1 + 1.0 + 1.3 + 0.8 + 0.9) / 5 = 1.02
To compute the measurement upper and lower control limits (UCL, LCL) use the following formulas,
UCL = X + (2.66 * R)
LCL = X – (2.66 * R)
for our data,
UCL = 1.02 + (2.66 * 0.25) = 1.69
LCL = 1.02 – (2.66 * 0.25) = 0.36
Now plot all the data and limits together (Box Figure 2).
Box Figure 2 The completed XmR chart plotted using my Python Script. Although you don’t need a computer to make these graphs, it sure does look prettier if you do.
The Upper graph consists of:
- Blue line = Measured data connected by lines
- Green line = The calculated UCL
- Dashed orange line = The calculated X
- Red line = The calculated LCL
The bottom graph is the moving range plot, it consists of:
- Blue line = Range data connected by lines
- Green line = The calculated UCLr
- Dashed orange line = The calculated R
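For anyone who would rather let the computer do the arithmetic, here is a minimal Python sketch that follows the formulas in this box exactly (including the 2.66 and 3.268 scaling constants). It is a stripped-down illustration, not the author's Octave/Python scripts referenced above.

```python
# Minimal XmR-limit calculator following the formulas in this box (scaling
# constants 2.66 and 3.268 for individuals / moving-range charts). This is a
# bare-bones sketch for illustration, not the author's GitHub scripts.

def xmr_limits(measurements):
    """Return the center lines and control limits for an XmR chart."""
    if len(measurements) < 5:
        raise ValueError("Need at least 5 measurements to estimate limits.")
    moving_range = [abs(b - a) for a, b in zip(measurements, measurements[1:])]
    x_bar = sum(measurements) / len(measurements)
    r_bar = sum(moving_range) / len(moving_range)
    return {
        "X-bar": x_bar,
        "R-bar": r_bar,
        "UCL": x_bar + 2.66 * r_bar,
        "LCL": x_bar - 2.66 * r_bar,
        "UCLr": 3.268 * r_bar,
        "moving_range": moving_range,
    }


if __name__ == "__main__":
    data = [1.1, 1.0, 1.3, 0.8, 0.9]          # the widget measurements from this box
    limits = xmr_limits(data)
    for key in ("X-bar", "R-bar", "UCL", "LCL", "UCLr"):
        print(f"{key:5s} = {limits[key]:.3f}")
```

Running it on the five widget measurements reproduces the hand calculation above (X-bar = 1.02, R-bar = 0.25, UCLr ≈ 0.82), which is a handy cross-check before plotting.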
Box: Add customer specifications
Sometimes in a presentation setting, it is important to plot the customer specifications on an XmR chart, just to graphically show the actual situation. No one cares if the process is producing parts that are well within specification, but things get more interesting when it can be shown that the process today cannot meet the current customer specifications.
Note: The data for Figures 3 and 5 were re-created from an experience that I have had in the past but didn’t still have the real data for.
References
[1] Wheeler, Donald J, “Understanding Variation: The Key To Managing Chaos”, 1993, SPC Press, Knoxville, TN, ISBN: 0-945320-35-3
[2] Dr. W. Edwards Deming, https://en.wikipedia.org/wiki/W._Edwards_Deming
[3] More information on the Shewhart Control Chart, https://en.wikipedia.org/wiki/Shewhart_individuals_control_chart
[4] Wheeler, Donald J, “Short Run SPC”, 1991, SPC Press, Knoxville, TN, ISBN: 0-945320-12-4
[5] Python and Octave scripts can be found at, https://github.com/Hagtronics/statistics-scripts
—Steve Hageman has been a confirmed “Analog-Crazy” since about the fifth grade. He has had the pleasure of designing op-amps, switched-mode power supplies, gigahertz-sampling oscilloscopes, Lock In Amplifiers, Radio Receivers, RF Circuits to 50 GHz and test equipment for digital wireless products. Steve knows that all modern designs can’t be done with Rs, Ls, and Cs, so he dabbles with programming PCs and embedded systems just enough to get the job done.
Related Content
- Six Sigma: The Myth, The Mystery, The Magic
- Poring over breakdown statistics
- Statistical analysis to yield better chips
- Move ICs from defects per million to defects per billion
The post Managing chaos appeared first on EDN.
Cracking open an already-cracked calculator
My household is increasingly a case study of the common saying, “no good deed goes unpunished”. We are, I’ll modestly admit, fairly (and consistently) generous when it comes to charity donations each year; not only materially but also monetarily. However, as time passes, more and more “asks” arrive in our email inboxes and the mailbox, sent (repeatedly, of course) by ever-more numerous charities, a trend due in part to “list” sales from one charity to others.
Today’s dissection victim represents one of the more bizarre examples of the trend. It’s an extrapolation of the usual “we put a quarter in this envelope, which hopefully acts as sufficient guilt-trip motivation for you to respond by filling out the form also in this envelope and donating a quarter-of-$100 (or more) to us” approach (a variation on the “look, we even included an already-postage-paid-by-us envelope along with the form” stratagem).
Unfortunately, the calculator was poorly packaged and arrived broken courtesy of the U.S. Postal Service…which, of course, led to my immediate “teardown!” thought. See for yourself:
The already-busted bit, as you may have already noticed, was the monochrome screen:
It’s defunct, but hey, it tilts! Not too shabby for gratis:
The charity even glued a personalized sticker at the bottom of it with my wife’s (who, I must admit, gets the bulk of the “asks”) name printed on it, which I’ve removed for privacy reasons:
Flipping the calculator over:
You might first notice the product markings, which (among other things) reference a replaceable battery…which is a bit strange considering that if you revisit the front-side overview image you’ll see what appears to be a battery-recharging solar cell in the upper right, just to the right of the charity’s logo…(hold that thought):
You might also notice eight conspicuous screw heads scattered across the back side, which seem to be an obvious entry avenue:
Yep, I was right; and unlike what’s otherwise commonly the case, there ended up being no additional screws hidden underneath the two rubber “feet”:
Now this is interesting. What we have here appears to be a piece of paper with trace paths (conductive, I assume) embedded within it. Can’t say that I’ve come across something like this before, but given the obvious lowest-possible-BOM (bill of materials)-cost aspiration, it’s at least conceptually not surprising:
Underneath it is a rubber “sheet” containing embedded “knobbies”, location-correlated with the calculator’s keys (one per key in most cases, two per key in four cases):
Flipping the rubber “sheet” over to expose its other side reveals a round disc embedded in the opposite end of each “knobbie”:
When a calculator key is pressed, the associated “knobbie(s)” also move(s) downward, resulting in each disc pressing against a mating multi-contact pad within the paper-housed matrix. My guess (reader insights are also as-always welcomed, of course) is that the discs are also made of a conductive material, and when they make contact, they “complete the circuit”, a status change subsequently communicated to the “brains” on the PCB via the flex connector between them:
Speaking of which:
Alas, that blob of opaque epoxy on top of the “brains” isn’t going to allow me to ID it, but (again referencing the earlier lowest-possible-BOM-cost mention) the extremely high degree of integration isn’t surprising. The only other things of note here are one passive component, a bunch of traces, two unpopulated contacts above the passive (manufacturing test points?), and two more to the far left labeled “S+” and “S-”. And the PCB backside is completely bare:
Re the latter two labeled contacts, remember the earlier mentioned seeming solar cell? Well…
Ends up it’s just a piece of thick translucent glass. Fakers! I don’t know whether “S+” and “S-” reference a PCB-based solar (S) recharging option that’s not enabled in this particular product configuration, or if they (more likely IMHO) stand for “switch” i.e., a dedicated power on/off toggle versus the existing matrix-activated one (like the other keys) in this calculator variant.
Onward. In one of the earlier “tilt” shots you might have noticed four more screw heads on the back side of the display, whose “tilt” two-hinge mechanism is now on full display, too:
It’s high time to remove ‘em too:
It was at this point in the disassembly that I realized I’d to-date neglected to snap an obligatory photo or few of the product accompanied by a United States penny (0.75 inches/19.05 mm in diameter) for size-comparison purposes. So better late than never, here you go:
It’s definitely not a pocket calculator:
Onward redux:
As you’ve likely already noticed, there’s a second flex cable (in addition to the earlier one spanning the PCB and key matrix) in this design, this one between the PCB and the display:
Both the display and PCB are only lightly glued in place and pop out without much fuss:
And in conclusion…more images of breakage:
As always, I welcome your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Building cost-effective, capable PCs: recommendations and real-life stories
- 1969 Compucorp calculator teardown
- Amazing! The Curta Mechanical Calculator
- Nines complement subtraction
The post Cracking open an already-cracked calculator appeared first on EDN.
Are you prepared for zero-day threats?
Zero-day attacks are not a new concept. However, if an enterprise has not addressed them recently, now might be a good time to review what has been implemented and what is available to prevent serious enterprise compromises.
A zero-day threat or exploit is based on a vulnerability that has not previously been detected. Without prior visibility, no one has any idea that the vulnerability exists, so the consequences can be catastrophic. A zero-day exploit occurs when a bad actor creates code to take advantage of that vulnerability, and a zero-day attack occurs when the exploit is used to attack a system. A zero-day threat is the overall concept covering all of these.
Figure 1 Zero-day threats are cyberattacks that occur before a vulnerability within software has been fixed. Source: Lanner
As shown in Figure 1, a threat can be thought of as a timeline. The tricky thing about a zero-day threat is that the attack takes place before anybody has a patch for the software vulnerability. That leaves an unfortunate “Window of Vulnerability” when the attack can run rampant before the vulnerability is patched.
The best method to address a zero-day threat is to use broad protections that can defend against a range of attacks. With this method, the defender is prepared for whatever the attacker may send their way.
In the cycle of defense, the defender is continuously trying to prevent, detect and respond to attacks. The defender starts by trying to prevent an attack from being successful. If defenders are not able to prevent an attack and the system is breached, the attack can be detected by some means and an appropriate response action is taken. Frequently, this takes the form of a system/software patch.
Whack-a-mole or swatting-mosquitos analogy
To employ a metaphor to illustrate the value of broad defenses mounted in advance, consider pesky mosquito attacks. There are many ways to deal with them. In a warm and wet climate, they are a constant problem. In this case, broad prevention may involve chemical treatment to stop them where and when they breed—in stagnant water. Another broad prevention step is window screens to keep them out of living quarters.
Detection comes into play when prevention is not 100% successful. For mosquitos, this takes the form of audible detection—hearing them before they bite. And the response is swatting the darn bugs. Those who have lived in a mosquito-filled area know that prevention is much better than detection and response.
The information security business uses a similar system called defense in depth where multiple layers are used in the prevention and detection processes. If one of these layers is skipped, the problem can get out of control. That’s why multiple layers of defense are required.
In information security, one of the multiple layers could involve a firewall on the network to keep the pesky bad guys out. Another software firewall can be running on each computer in the organization. Within the computer there are different layers of protection as well. For example, the operating system and software libraries are hardened as much as possible to minimize the impact of vulnerabilities. But even with the best preventative approaches, vulnerabilities still occur and cause system breaches.
Cyber resiliency
While ideally the goal is keeping problems at zero, in reality, this goal is not achieved. When problems occur, they must be detected and responded to as soon as possible. Cyber resiliency is the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on systems that use or are enabled by cyber resources. This means not only prevention to keep the number of occurrences to a minimum but also detecting and responding.
Several detection techniques are used these days. Previously identified attacks can be found on a network using a Network Intrusion Detection System (NIDS) that monitors network traffic looking for malware signatures or anomalous traffic. Anti-virus software can be used to look for suspected malware on a computer. Once detected, the attack can be blocked.
To reduce the impact of an attack, more fundamental protection steps can be taken. For example, integral hardware security provides the means to keep secrets separate and safe.
Zero trust architecture
Another approach is zero trust architecture (ZTA). It assumes that there are no traditional network boundaries: networks can be local, in the cloud, or a hybrid with resources anywhere, accessed by employees and others working from remote locations.
ZTA is a hot topic because a 2021 executive order declared that the U.S. government was switching to a ZTA. Instead of relying on a firewall, as in the previously used “perimeter model,” ZTA takes a different approach. With the perimeter model, the firewall forms a protected perimeter that keeps the enemy out; as long as the bad actors remained on the outside, everything was fine and safe. The problem is that this does not really work, because user devices and valuable services exist outside of the firewall.
Even if the security perimeter can be extended to include all resources, eventually a bad actor gets inside, and valuable content is at risk. This has occurred even in military networks that use an airgap to separate them from the outside world. Even with the best protection, a breach can occur because someone installs an infected thumb drive or an infected software patch.
One example that shows the flaws of the perimeter model is the SolarWinds cyberattack, disclosed in late 2020. The attack, delivered through SolarWinds’ software, is one of the most widespread and sophisticated hacking campaigns ever conducted against the federal government and private sector.
SolarWinds made network-management software used across U.S. government networks, including classified ones. Bad actors figured out how to infect development machines at SolarWinds and plant malware in SolarWinds’ software, so that the next time a software update was distributed, the malware came inside the government’s firewalls and spread easily through the networks.
With ZTA, the perimeter approach is redefined. It is recognized that the bad guys will get inside the perimeter, that an insider may be a bad guy, or that an insider may let infected software onto the network somehow. With ZTA, nobody is trusted simply because they are inside. To trust a computer or a user, it must be authenticated. The most fundamental principle of ZTA is to authenticate everything: every user and every device.
Ideally, multifactor authentication is performed, but this is not always possible. The bad guys are assumed to be inside, so secrets cannot be sent over the network without encrypting them. Every message transmitted on the network must be encrypted and authenticated.
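As a rough, simplified illustration of the “authenticate everything, encrypt everything” idea, the Python sketch below sets up a TLS server that refuses any client unable to present a certificate signed by a trusted CA. The file names are placeholders, and a real ZTA deployment involves far more (identity providers, policy engines, continuous authorization); this only shows the mutual-authentication building block.

```python
# Minimal sketch of mutual (two-way) TLS authentication in Python.
# File names and port are placeholders, not part of any real deployment.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server_cert.pem", keyfile="server_key.pem")
context.load_verify_locations(cafile="trusted_clients_ca.pem")
context.verify_mode = ssl.CERT_REQUIRED   # reject clients without a valid certificate

with socket.create_server(("0.0.0.0", 8443)) as server:
    with context.wrap_socket(server, server_side=True) as tls_server:
        conn, addr = tls_server.accept()  # TLS handshake authenticates the client here
        print("Authenticated client:", conn.getpeercert().get("subject"))
        conn.sendall(b"hello, trusted peer\n")  # traffic is now encrypted end to end
        conn.close()
```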
The ZTA approach can be highly effective against zero-day threats because it limits the damage an attacker can do. No longer can an attacker simply breach the perimeter and run amok. Because no network is trusted and every device and user is authenticated, the attacker constantly faces hurdles that limit their ability to spread within the organization.
A false sense of security
One situation that happens all too frequently is that security experts believe the war is over just because the most recent battle has been won, forgetting that the next battle is imminent. A sure sign of this type of thinking is when no bad guys are ever detected inside the firewall or on the network. Bad guys do eventually get into every network; a lack of detections is more likely an indication that the existing detection techniques are inadequate.
For the most up-to-date detection, a variety of techniques are used such as honeypots and honeynets. In addition to looking for patterns or a signature on the network to indicate the presence of malware, a very attractive target—a honeypot or honeynet—can be installed on the network. This is a computer that presents itself as a valuable and vulnerable target if someone is scanning the computers on the network.
In reality, this computer is not used for anything except as a trap for attackers. When it is attacked, the honeypot sets off alarms within the network. Now the defender knows that the attacker is there and responds appropriately with additional information from the alarm that can provide additional defense. Referring again to the mosquito analogy, a honeypot is a computer version of a mosquito black light attracter and zapper.
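To make the idea concrete, a toy version of this “alarm on any touch” behavior might look like the sketch below. Real honeypots deliberately imitate vulnerable services and feed richer telemetry into monitoring systems; the port number and log file here are arbitrary choices for illustration.

```python
# Toy honeypot sketch: nothing legitimate should ever connect to this
# port, so any connection at all is treated as an alarm worth investigating.
import datetime
import logging
import socket

logging.basicConfig(filename="honeypot_alarms.log", level=logging.WARNING)

HONEYPOT_PORT = 2323   # arbitrary unused port, chosen only for illustration

with socket.create_server(("0.0.0.0", HONEYPOT_PORT)) as trap:
    while True:
        conn, (src_ip, src_port) = trap.accept()
        logging.warning("ALARM %s: unexpected connection from %s:%d",
                        datetime.datetime.now().isoformat(), src_ip, src_port)
        conn.close()   # no real service behind the trap; just record and drop
```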
Honeypots are a well-established approach, but they should be used more often, even by companies that are not especially large but that have proprietary or secret information they really want to protect. While network intrusion detection is quite common, honeypots are less so, since they require staff to set them up and monitor the alarms. However, network intrusion detection systems also require monitoring; without it, alarms can go off and no one notices them or the breach that triggered them. This can occur because false positives are common, so real positives tend to be ignored.
Managed security services
False positives are one of the reasons that enterprises enlist the help of managed security services. By paying a third party, an enterprise can outsource security monitoring to experts whose full-time job is detecting real threats. With the number of clients these providers have, threats can be detected across many locations; detection at one client can lead them to analyze similar occurrences at their other clients. This can be very effective against zero-day attacks.
For these types of attacks, it’s very helpful to have computer security experts handling network and computer security with a staff that can be there 24/7/365. For companies with recognized security concerns, a managed security provider could be the right solution. This can also be accomplished through a large cloud network to store and transmit the most important data; large cloud providers can invest significant capital and resources to provide the highest security with the latest techniques to their clients. It’s all part of the service.
Applying security techniques to embedded systems
All of the principles discussed here can be employed in embedded systems, including defense in depth, cyber resiliency, and zero trust, using hardware security for the most critical secrets along with security for the software running on the system. Hardware security can be provided by a trusted platform module (TPM), a secure element (SE), or a microcontroller (MCU) that includes a separate security core. To provide strong protection against zero-day threats in embedded systems, secrets should be kept in hardware. The key is to include security measures both in each device and outside the device.
A device can be part of a larger system or network, perhaps a Wi-Fi network, that could be present in a smart home, a smart car, or elsewhere. For smart homes, the recently introduced Matter standard builds in extra layers of security for defense in depth. Matter adds another layer of encryption and authentication on top of the network used in the home. Matter even embodies a ZTA: every device in the Matter fabric is authenticated using strong cryptography to make sure it is trustworthy enough to join the fabric.
With the Matter standard, these security techniques are coming into the smart home. As time goes on, more techniques including network intrusion detection, automated response and honeynets will be used in the smart home and other places to counter bad guys who want to expand their cyberattacks to smart homes.
Future cars, with their increasing communication capabilities, will have even more need for protection from the ever-changing cybersecurity landscape. Figure 2 shows the implementation of different layers of security in the automobile.
Figure 2 Current and future vehicle architectures will increasingly require different layers of security. Source: Infineon
As a design example of hardware security, shown in Figure 3, the OPTIGA Trust M can easily and safely interface with a PSoC 6 MCU. The PSoC 6 microcontroller has a separate core that can be dedicated to security; secrets can be kept in that core and restricted to it, so they cannot be accessed even if the main processor is infected.
Figure 3 Connecting the host PSoC 6 microcontrollers to OPTIGA Trust M via a shielded I2C interface provides an additional layer of security. Source: Infineon
Next, the OPTIGA Trust M protects sensitive security tokens on the device, such as X.509 certificates and private keys. Design engineers can use these security tokens for certificate-based mutual authentication in Matter or to connect devices to Amazon Web Services (AWS) IoT Core.
While the threats discussed here are nothing new, it is hoped that awareness can provide new motivation to explore the latest techniques to protect enterprises, smart homes, smart cars, and more against security threats. Attacks like SolarWinds provide periodic wakeup calls to those who have come to accept a false sense of security in minimally protected networks and devices.
For designers who are ready to implement improved security, suitable controllers and development tools are available. In this manner, zero-day threats can be stopped before they spread, even in embedded systems.
Steve Hanna is a distinguished engineer at Infineon Technologies.
Related Content
- Closing Knowledge Gap on Hardware Security
- Zero-Trust and the Rise of ICS, OT Security Threats
- Ending the Cat-and-Mouse Game of Firmware Attacks
- What’s Driving the Shift from Software to Hardware in IoT Security?
- The Unexpected Outcome of Ransomware: An Industrial Digital Revolution
The post Are you prepared for zero-day threats? appeared first on EDN.
My way of removing components without a soldering iron
Submitted by /u/stackinghabbits
What is ChatGPT | A Beginner’s Guide 2023
ChatGPT (Generative Pre-trained Transformer) is a language chatbot developed by OpenAI (an artificial intelligence research lab) and launched on November 30, 2022. In just five days, by December 4, 2022, it had attracted over a million users. For comparison, the streaming platform Netflix took 41 months to gather its first million users, Twitter 24 months, Facebook 10 months, and Instagram about a month, while ChatGPT needed only 5 days.
ChatGPT is a program built to answer users in a human-like manner. It is trained on text from the internet and uses a neural-network architecture called the Transformer, which enables the chatbot to generate human-like answers; it can create text that engages the user in a conversational, dialogue-like setting.
ChatGPT is more effective than many earlier language models thanks to its Transformer architecture, which enables it to produce longer, more coherent text.
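ChatGPT itself is used through OpenAI’s website, but the general idea of Transformer-based text generation can be tried locally with an open model. The sketch below assumes the Hugging Face transformers package and the small open GPT-2 model (not ChatGPT’s own, far larger model) and simply continues a prompt one predicted token at a time.

```python
# Illustrative only: uses the open GPT-2 model via the Hugging Face
# "transformers" library, not ChatGPT's own (much larger) model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A chatbot is a program that"
result = generator(prompt, max_length=40, num_return_sequences=1)

# The model continues the prompt one predicted token at a time.
print(result[0]["generated_text"])
```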
It is very convenient to use: just search for ChatGPT on the web and sign up for a free account.
Generating text: ChatGPT can easily generate text of any sort for you; it can write about a new topic, summarize a piece of text, create a story, complete a passage, and more.
Managing dialogues: ChatGPT can engage in and manage a human-like conversation, keeping up the tone of the exchange so that it feels like a real interaction.
Providing information: ChatGPT can provide information on a vast range of topics and answer your questions in a chat-like manner.
Tone analysis: ChatGPT can analyze the emotions in a piece of text, which is useful if you want to avoid conveying certain feelings in your writing.
Translation: It can translate text from one language to another, based on the data it was trained on.
Purpose of ChatGPT: The chatbot has been trained to give answers suited to the user’s needs in a conversational manner, which makes it fun and easy to use.
It can remember what it was asked and what the user said earlier in the conversation, and it is also trained to decline inappropriate requests.
This is what it replies with when it’s asked such a question.
Development of ChatGPT:
ChatGPT was trained and developed on a large body of data gathered from various sources, such as books and the internet. The process of its training and development is outlined below:
COLLECTION OF DATA: The first step was data collection, in which text was gathered from sources such as websites and books and fed to the model.
PRE-PROCESSING THE COLLECTED DATA: This step removed duplicates and any irrelevant information, effectively cleaning the collected data.
MODEL DESIGN: This step defined the model’s architecture, which is based on the Transformer.
TRAINING: The model was trained on the pre-processed data to learn the patterns of human language.
FINE-TUNING: The model was then fine-tuned to provide detailed and specific answers.
EVALUATION: The model was tested repeatedly to improve its performance; once it gave satisfactory results, it was launched as a chatbot that can generate human-like text.
IMPACTS OF ChatGPT:
ChatGPT has taken the world by storm. It can write poems, help programmers write code, and compose songs, so it is already having an impact on the world and will continue to do so.
It was built on GPT-3, one of the largest and most powerful language models created at the time.
It can help many professions, for example by assisting teachers in creating courses, but it also puts other jobs at risk, such as content creation and research.
The model sometimes gives false information, which can create confusion, and it can also become handy for cybercriminals.
ChatGPT is the kind of program whose performance improves as it is fed more data, so its impact will only grow in the near future; it is still quite new.
LIMITATIONS OFFERED BY ChatGPT:
ChatGPT is changing many things, but like everything else it has its limits. Let’s take a look at some of them:
LACK OF EXPRESSION: ChatGPT works only with words and has no genuine emotions of its own, so it struggles to produce content that conveys real emotional intensity.
REAL-TIME INFORMATION: It also fails to give you live updates, for example about the current weather.
LIMITED KNOWLEDGE AND UNDERSTANDING: The model is only loaded with data up to 2021, so questions about anything after 2021 are likely to receive less accurate answers, and it has limited knowledge of certain specific domains and languages.
MISUSE OF THE TECHNOLOGY: Because its text reads as if written by a human, it could be used to harm someone or to spread hate.
IMPACTS ON JOBS: Further advancements in ChatGPT could displace jobs previously done by humans, which could increase the unemployment rate.
HALLUCINATIONS: Because ChatGPT is a language model, it will inevitably provide some answers that are confidently but completely wrong; these are called hallucinations, and the problem is faced by almost every AI chatbot. OpenAI has published a warning about incorrect output:
“ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
DETECTABLE AS NON-HUMAN: ChatGPT’s output is highly organized and often wordy, and it follows a characteristic writing style, which makes it relatively easy to detect as non-human.
LACK OF ACCURACY: ChatGPT generates text by predicting each word from the words that came before it rather than reasoning about the topic as a whole, which can make it inaccurate.
OVERLY DETAILED: When you just want a direct answer to a question, such as in a medical context, it will still provide a very detailed and comprehensive response when a direct answer would be preferred.
UNNATURAL: The model sometimes glosses over a topic and comes across as unnatural, which also sets it apart from humans, whose thinking is more divergent.
BIAS TOWARD NEUTRALITY: A bias toward positivity or neutrality is often helpful, but on sensitive topics strict neutrality can be unwanted.
TOO FORMAL: Because it uses proper punctuation and correct grammar, its writing lacks the informal mannerisms of humans, who use plenty of slang in everyday conversation; ChatGPT also doesn’t get irony, sarcasm, or other human expressions.
CONCLUSION:
ChatGPT doesn’t have any personal opinions, biases, or views; it is trained to provide neutral, factual information based on the data it was trained on. It has no emotions, and its goal is to provide helpful and informative responses to all of its users.
The post What is ChatGPT | A Beginner’s Guide 2023 appeared first on Electronics Lovers ~ Technology We Love.
Oversized tech!
I follow this account on Instagram and she is trying to see if there’s a market for oversized tech; would anyone be interested? Spoiler: I think the next oversized project is an oversized 5050 LED!