EDN Network

Voice of the Engineer

Two-half-period rectifiers on op amps

Tue, 08/22/2023 - 20:17

This article presents circuits of two-half-period (full-wave) rectifiers built on op amps, in which the inputs are switched according to the polarity of the input voltage. The switching is driven by a control signal taken from the output of a zero detector built on a comparator.

Precision two-half-period rectifiers whose operation is based on switching inputs according to the polarity of the input voltage usually contain an op amp whose non-inverting input is shunted by a diode (Figure 1, with R1 = R2 = 2R3).

Figure 1 The classical circuit of a precision two-half-period rectifier.

Wow the engineering world with your unique design: Design Ideas Submission Guide

When a positive half-wave of the input voltage arrives at the input, diode D1 is blocked. Op amp U1 operates as a non-inverting amplifier with a transfer coefficient equal to R2/R1, so the output voltage equals the input voltage: Uout = Uin.

When a negative half-wave of sufficient amplitude arrives at the input of the device, diode D1 conducts and the circuit operates as an inverting amplifier with a transfer coefficient of –1, so Uout = –Uin.

The disadvantage of the circuit is obvious: at low input voltages of negative polarity, the noticeable resistance of diode D1 means that Uout ≠ –Uin. In practice, with an LM324 op amp for U1 and a 1N4148 diode for D1, the amplitude of the input signal should lie in the range from 2.5 V up to the supply voltage.
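A quick way to see the effect of the diode drop is to model the rectifier's transfer function numerically. The sketch below is purely illustrative: the ~0.6 V threshold assumed for a 1N4148-class diode and the simplified negative-path model are my assumptions, not values from the article.

```python
# Sketch: ideal full-wave ("two-half-period") rectifier vs. one whose
# negative half-wave path loses accuracy to a diode forward drop.
V_DIODE = 0.6  # assumed forward drop of a 1N4148-class diode, volts

def ideal_rectifier(u_in):
    """Ideal precision rectifier: Uout = |Uin|."""
    return abs(u_in)

def rectifier_with_diode_error(u_in):
    """Simplified model of Figure 1: positive half-waves pass at unity
    gain; negative half-waves are inverted, but amplitudes near or below
    the diode drop are not transferred accurately."""
    if u_in >= 0:
        return u_in
    # Inverting path: the diode must conduct, so small signals are lost
    return max(0.0, -u_in - V_DIODE)

for u in (3.0, 1.0, 0.3, -0.3, -1.0, -3.0):
    print(f"Uin={u:+.1f} V  ideal={ideal_rectifier(u):.2f} V  "
          f"with diode error={rectifier_with_diode_error(u):.2f} V")
```

Running this shows the output tracking |Uin| for positive inputs while small negative inputs collapse toward zero, which is exactly the low-amplitude limitation described above.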

The operation of a precision two-half-period rectifier can be improved by using switching elements (FET Q1 in Figure 2, or analog switch U2 in Figure 3) controlled by a zero detector built on comparator U1.1.

The practical circuit of the first device is shown in Figure 2.

Figure 2 A two-half-period rectifier with switching of the input of an op amp by a FET.

A zero detector for the input signal is built on comparator U1.1 of an LM339. The control signal from the detector's output is applied to the gate of FET Q1, a 2N3823, which switches the input of op amp U2.1 of an LM324.

With the LM324 op amp U2.1 and the 2N3823 FET Q1, the amplitude of the input signal should lie in the range from 0.5 V up to the supply voltage.

The two-half-period rectifier can be improved further by using an analog switch (U3 in Figure 3) as the switching element, for example a CD4066, or a more modern part with lower losses. To reduce the on-resistance of the switch, all four channels of the CD4066 should be connected in parallel. Switch S1 can change the polarity of the output signal.

Figure 3 A two-half-period rectifier with switching of the input of an op amp by an analog switch.

The precision two-half-period rectifier in Figure 3 operates over an input voltage range from 20 mV up to the supply voltage of the device. The maximum operating frequency of the rectifier is 100 kHz and depends on the frequency characteristics of the active elements.

Michael A. Shustov is a doctor of technical sciences, candidate of chemical sciences and the author of over 750 printed works in the field of electronics, chemistry, physics, geology, medicine, and history.



The post Two-half-period rectifiers on op amps appeared first on EDN.

Edge and corner cases: AI’s conceptual simplifications and potential complications

Mon, 08/21/2023 - 21:27

Whether or not (and if so, how) to account for rarely encountered implementation variables and combinations in hardware and/or software development projects is a key (albeit often minimized, if not completely overlooked) “bread and butter” aspect of the engineering skill set. Wikipedia seemingly agrees; here’s a relevant excerpt from the entry for edge cases:

An edge case can be expected or unexpected. In engineering, the process of planning for and gracefully addressing edge cases can be a significant task, and yet this task may be overlooked or underestimated. Non-trivial edge cases can result in the failure of an object that is being engineered. They may not have been foreseen during the design phase, and they may not have been thought possible during normal use of the object. For this reason, attempts to formalize good engineering standards often include information about edge cases.

And here’s a particularly resonant bit from the entry for corner cases:

Corner cases form part of an engineer’s lexicon—especially an engineer involved in testing or debugging a complex system. Corner cases are often harder and more expensive to reproduce, test, and optimize because they require maximal configurations in multiple dimensions. They are frequently less-tested, given the belief that few product users will, in practice, exercise the product at multiple simultaneous maximum settings. Expert users of systems therefore routinely find corner case anomalies, and in many of these, errors.

I’ve always found case studies about such anomalies and errors fascinating, no matter that I’ve also found them maddening when I’m personally immersed in them! And apparently, one of my favorite comic artists concurs with my interest:

That said, I’ll start off with a bit-embarrassing-in-retrospect confession. Until now, as I was researching this piece, I’d historically used the terms boundary case, corner case and edge case interchangeably. Although Google search results reassure me that I’m not unique in this imprecision, the fact that there are multiple distinct Wikipedia entries for the various terms (along with the closely related flight envelope) has probably already tipped you off (those of you not already more enlightened than me, to be precise) that they aren’t the same thing.

Here’s a concise explanation of the difference between an edge case and a corner case:

Corner cases and edge cases are different things even though they are commonly referred to as the same thing. Let’s formally define both:

  • An edge case is an issue that occurs at an extreme (maximum or minimum) operating parameter.
  • A corner case is when multiple parameters are simultaneously at extreme levels, and the user is put at a corner of the configuration space.
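One way to make that distinction concrete is to classify an operating point by how many of its parameters sit at an extreme. This is a minimal sketch; the parameter names and operating limits are invented for illustration:

```python
# Classify an operating point as nominal, edge case, or corner case,
# based on how many parameters sit exactly at their min/max limits.
LIMITS = {  # hypothetical operating envelope
    "temperature_c": (-40, 85),
    "supply_v": (4.5, 5.5),
    "load_ma": (0, 500),
}

def classify(point):
    # Count parameters whose value equals either limit of its range
    at_extreme = sum(
        1 for name, value in point.items()
        if value in LIMITS[name]
    )
    if at_extreme == 0:
        return "nominal"
    if at_extreme == 1:
        return "edge case"       # one parameter at an extreme
    return "corner case"          # multiple simultaneous extremes

print(classify({"temperature_c": 25, "supply_v": 5.0, "load_ma": 100}))  # nominal
print(classify({"temperature_c": 85, "supply_v": 5.0, "load_ma": 100}))  # edge case
print(classify({"temperature_c": 85, "supply_v": 4.5, "load_ma": 500}))  # corner case
```

The "corner" branch is the one that test plans most often skip, for exactly the reason the excerpt gives: few users run at multiple simultaneous maximum settings.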

And what about the term boundary case? My research suggests that it’s essentially interchangeable with edge case, which makes sense when you think about it…an edge defines a boundary between one thing and another, after all. Boundary case seems to more commonly find use in software engineering, where (again quoting from Wikipedia’s edge case entry):

In programming, an edge case typically involves input values that require special handling in an algorithm behind a computer program. As a measure for validating the behavior of computer programs in such cases, unit tests are usually created; they are testing boundary conditions of an algorithm, function or method. A series of edge cases around each “boundary” can be used to give reasonable coverage and confidence using the assumption that if it behaves correctly at the edges, it should behave everywhere else. For example, a function that divides two numbers might be tested using both very large and very small numbers. This assumes that if it works for both ends of the magnitude spectrum, it should work correctly in between.
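The divide-two-numbers example above translates directly into boundary-condition unit tests. A minimal sketch, where the `divide` function and the chosen magnitudes are my own illustrative assumptions:

```python
import math

def divide(a, b):
    """Function under test: plain floating-point division."""
    return a / b

# Probe each boundary of the input magnitude spectrum, plus the
# special value that needs explicit handling (b == 0).
assert divide(10, 2) == 5                  # nominal case
assert divide(1e308, 1e-3) == math.inf     # overflow past the float range
assert 0 < divide(1e-308, 1e3) < 1e-300    # underflow into subnormals
try:
    divide(1, 0)
except ZeroDivisionError:
    pass                                   # the boundary is handled
else:
    raise AssertionError("expected ZeroDivisionError")
print("all boundary tests passed")
```

Per the assumption quoted above, if the function behaves at both ends of the magnitude spectrum and at the singular point, we gain some confidence (though not proof) that it behaves in between.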

Conversely, the edge case vernacular seemingly finds more common use among hardware engineers. Examples here, off the top of my head, include extremes in:

  • Operating temperature (including both ambient extremes and circuitry-generated heat, not to mention what happens when ventilation sources—i.e., fans and the like—fail)
  • Supply voltage and current
  • Electromagnetic interference, both self-created and ambient
  • Humidity, more blatant moisture exposure, and other environmental variables
  • Etc…

And for the mechanical engineers in the audience, a whole host of other variables beg for attention, involving pressure, torque and other measures of stress, vibration, and the like.

Let’s now revisit software. Putting aside obvious code bugs, such as accesses to invalid areas of system memory and the like, edge cases often involve input, intermediary and output data that’s other than expected. The information may be larger or smaller than what was comprehended by the coder; it might also be formatted differently than anticipated. And then there are cases such as one that I personally grappled with a few years ago…

The original version of the website for the Embedded Vision Alliance (now the Edge AI and Vision Alliance), my “day job” employer, was initially implemented in a now-archaic version of Drupal. As time went on, I’d increasingly grapple with content provided by a Member company (often written in Microsoft Word or another word processor, or originally published on their website in HTML) which, after I republished it, would (I kid you not) cause our website’s hosting server to spike CPU utilization, sometimes even locking up completely. The culprit, it turned out, was the source content’s inclusion of unconventional character sets and extended symbols as well as other characters within a set…specifically, emoji, which wasn’t in common use at the time of that Drupal version’s development and therefore hadn’t been comprehended by the coders.
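Defensively screening submitted content for characters outside the Basic Multilingual Plane (where most emoji live) is straightforward in a modern language. This sketch is my illustration of the general idea, not the Drupal-era fix:

```python
def non_bmp_chars(text):
    """Return the characters whose code points fall outside the Basic
    Multilingual Plane (above U+FFFF), where most emoji are encoded."""
    return [ch for ch in text if ord(ch) > 0xFFFF]

# Hypothetical member submission containing a grinning-face emoji
submission = "Great article! \U0001F600 Check our site."
suspects = non_bmp_chars(submission)
if suspects:
    print(f"flagged {len(suspects)} non-BMP character(s):",
          [f"U+{ord(c):04X}" for c in suspects])
```

A CMS that stores text in a 3-byte-limited encoding (as older MySQL `utf8` did) could use a check like this to reject or sanitize input, rather than choking on it at publish time.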

Speaking of the Alliance…let’s focus now on software for systems that, paraphrasing the organization’s lingo: “perceive, understand and appropriately respond to their surroundings”. Semi- and fully-autonomous vehicles are one obvious example here, albeit a somewhat extreme one. Since they both contain human beings capable of being harmed or killed by a collision or other malfunction and are capable of colliding with other human beings (among other things), edge and corner case comprehension and testing should appropriately be far more extensive than, say, with an autonomous consumer drone that worst-case might collide with a tree or the side of a building, damaging nothing but itself in the process.

Just the other day I had a conversation with a colleague who relayed to me the story of an experience he’d just had; in traversing a shadow-filled underpass that also involved a “dip” in the roadway, his car had briefly but notably auto-braked, incorrectly perceiving an object ahead of it. More generally, for example, he said that the vehicle will refuse to back out of the garage if the driveway contains more than a few inches of snow, because it discerns the accumulation as something that it might adversely collide with. Due to this particular car’s age, I pointed out to him, its advanced driver assistance system (ADAS) algorithms were undoubtedly developed in a traditional manner, where the software engineer challengingly had to:

  • Brainstorm all possible edge/boundary and corner cases, and then
  • Explicitly code algorithms that comprehended and implemented correct responses

Nowadays, of course, autonomous vehicle (and more general autonomous mobile robot, or AMR) software development is deep learning-based, done quite differently. Instead of precisely coding algorithms to comprehend all possible usage scenarios, you instead “train” the deep learning model with an extensive data set increasingly including not only still and video images, but also “sensor fusion” data from radar, lidar, ultrasound, human hearing-range audio (other vehicles’ horns, for example), and the outputs of other sensing modalities.

As with traditional algorithm development, and reiterating the statement that opened this piece, whether or not (and if so, how) to account for rarely encountered implementation variables remains an application-dependent balancing act (per my earlier contrast between vehicles and consumer drones). Feeding the model training function with an excess of images, for example, will invariably lead to an unnecessarily bloated model, not only consuming disproportionate system resources but also executing subsequent inference operations more slowly than would otherwise be the case…particularly an issue when rapid response is critical!

That said, it’s not just rarely encountered big-picture operational scenarios that require thoughtful consideration for upfront training, it’s also the data within each of those scenarios. A bouncing ball, or for that matter a distracted toddler in motion, looks completely different in the middle of the day than under more muted dawn or dusk lighting conditions (further complicated by rain, snow, fog, and other environmental attenuators), not to mention at different distances from, and at varying orientations relative to, a viewing camera.

If you don’t have real-life images to cover all these scenarios—say, a human being directly facing a camera as well as oriented sideways, with his or her back to the camera lens and image sensor, in various sizes (both absolute and distance-determined), with various skin tones, and wearing various outfits—what do you do? Until recently, you relied on synthetic image generation to augment the training data set, using tools not unlike those that video game developers harness. Not coincidentally, check out this presentation on synthetic data creation and training inclusion by Unity Technologies (a leading game engine developer) from the 2022 Embedded Vision Summit.

Nowadays, however, generative AI is becoming increasingly powerful (as I noted in my 2023 forecast piece from last fall) and is correspondingly becoming an increasingly tempting option for synthetic data generation. While the images it creates are sometimes fanciful, at other times its output is uncannily realistic.

And, of course, generative AI can find use not only in creating still images and video sequences but also sound clips and other data types. So why bother capturing (or at least collecting) a bunch of real-life data, or firing up your computer and tediously using audio, graphics and/or other tools to craft your own synthetic content for model training purposes? Why not instead just say “Market Street, San Francisco, CA, on a rare sunny day, including a trolley car” and have your preferred generative AI tool automatically synthesize exactly what you want?

The answer, it seems from recently published research, is that you should resist the temptation to do so, because it ends up being a really bad idea in how it affects the resultant quality of the trained model. As noted, for example, in VentureBeat’s coverage of the topic, aptly titled “The AI Feedback Loop: Researchers Warn of ‘Model Collapse’ as AI Trains on AI-generated Content”:

The data used to train the large language models (LLMs) and other transformer models underpinning products such as ChatGPT, Stable Diffusion and Midjourney comes initially from human sources — books, articles, photographs and so on — that were created without the help of artificial intelligence. Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content? A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work in the open access journal arXiv. What they found is worrisome for current generative AI technology and its future: “We find that use of model-generated content in training causes irreversible defects in the resulting models.”

I’ll explain what I think may be going on by means of analogy to traditional computer vision. Images intended for human viewing purposes are often quite different than those optimized for computer vision analysis. In the former case, they’re intended to be perceived as pleasing to the human visual system, tailored for our green-dominant color perception scheme, for example, as well as to smooth out subjects’ skin blemishes, enhance detail in both dark and light areas of the image, etc. Conversely, images ideal for computer vision analysis have artificially enhanced edges (boundaries?), for example, that aid in differentiating one object in a scene from another…but at the same time can be perceived as undesirable to the human eye.

Analogously, what we perceive in a generative AI-synthesized “artificial” image and what a trained deep learning model might draw attention to might be very different. Minute variances between a real-life image of an automobile and a synthesized one might not be noticed by us—we might even prefer the artificial representation—but will only confuse a deep learning inference operation guided by the prior flawed model training process. And confusion leads to unintended results, including an increasingly documented phenomenon called hallucination:

In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called confabulation or delusion) is a confident response by an AI that does not seem to be justified by its training data…Such phenomena are termed “hallucinations”, in loose analogy with the phenomenon of hallucination in human psychology. However, one key difference is that human hallucination is usually associated with false percepts, but an AI hallucination is associated with the category of unjustified responses or beliefs.

This writeup was intended to, and has hopefully succeeded in, providing you with plenty of “food for thought” as well as motivation for providing myself and your fellow readers with feedback. To wit, some questions for your consideration, to whet your appetite:

  • What examples from your past, present, and forecasted future product development experiences exist regarding corner, edge or whatever your favorite cases lingo is?
  • How do you know when to worry, or not, about accounting for a particular potential corner or edge case in your hardware and/or software design, what criteria guides that decision, and how does the outcome of your thought process vary over time, accumulated experience, situation specifics and other variables?
  • If you’re doing a deep learning-based implementation and you’re not confident that your existing model training data set is sufficiently comprehensive, how do you augment it? Conversely, if your training data set’s size and scope are overkill, how do you cull it?
  • Do you think that generative AI will end up being a boon, a bane, or some combination of the two in this regard?

I look forward to your thoughts in the comments!

Brian Dipert is Editor-in-Chief of the Edge AI and Vision Alliance, a Senior Analyst at BDTI, and Editor-in-Chief of InsideDSP, the company’s online newsletter.




A sneak peek at next-generation ultrasound imaging

Mon, 08/21/2023 - 11:59

The use of sound waves for non-invasively visualizing a fetus in the womb, via ultrasound imaging, is widely recognized. By emitting high-frequency sound waves into the body and converting their echo into electrical signals, real-time images can be constructed. Apart from medical imaging, these sound waves find application in biometric identification and gesture recognition in automotive and virtual reality (VR) applications and various other fields.

Currently, ultrasound transducers are commonly produced in silicon semiconductor facilities. For high-resolution medical imaging with large-area coverage, large sensors are required, which is challenging for silicon-based sensors because of their high cost per mm².

PMUT arrays for large-area imaging

In 2021, imec introduced flat panel display (FPD)-compatible piezoelectric micromachined ultrasound transducer (PMUT) arrays on glass. As a result of moving from wafer-based to FPD-compatible processes, cost-effective upscaling of ultrasound sensors was made possible.

More specifically, due to its compatibility with existing thin-film transistor backplanes and because this transducer technology is not hampered by wafer-size restrictions, large-area processing capabilities with PMUT arrays were made possible. However, the performance of the polymeric piezoelectric material was not yet sufficient for high-quality medical imaging.

Now, imec has demonstrated a second-generation PMUT array with a different piezoelectric material, AlScN. With the earlier move to a glass substrate instead of a crystalline silicon one, area restrictions were lifted. Additionally, this next-generation PMUT array exhibits 10 times the acoustic pressure of the previous generation.

Image acquisition up to 10 cm distance is shown below with pressures above 7 kPa in water, making it suitable for high-performing ultrasound imaging (Figure 1).

Figure 1 The second-generation PMUT array has been built with the piezoelectric material AlScN. Source: imec

The array, featuring an AlScN piezoelectric layer, achieves impressive image acquisition and beam steering up to 10 cm in water. This advancement paves the way for complex ultrasound applications on curved surfaces, revolutionizing medical imaging and monitoring.

Prospects for flexible ultrasound imaging

Next steps include maturing the technology and tuning the device to specific frequencies. In doing so, this technology will enable large ultrasound arrays on curved surfaces, for instance sensors for the human body or car dashboards, facilitating the integration of ultrasound functions on large non-planar surfaces. As a result, exciting opportunities for innovative ultrasound applications will emerge.

Figure 2 Schematic cross-sections of a PMUT process flow include (a) backplane substrate with optional TFT and/or flexible layer; (b) front-plane substrate with metal-insulator metal stack; (c) bonding of front-plane to back-plane and removal of front-plane substrate; and (d) metal via interconnect for electrical connection between front- and back-plane. Source: imec

In collaboration with Pulsify Medical, imec has already created a proof-of-concept rigid medical patch for cardiac monitoring, bringing non-invasive, longitudinal monitoring outside hospitals, without the need for a physician, a step closer to reality.

Figure 3 In the diagram outlining characteristics of PMUT elements, the inset shows a microscopy image of the fabricated PMUT and a corresponding cross-section across the cavity. Source: imec

As part of the EU-funded project ‘Listen2Future’, further development of a flexible ultrasound patch is ongoing. Listen2Future addresses and benchmarks piezoelectric acoustic transducers with 27 partners across seven countries and is coordinated by Infineon Technologies Austria AG.

The findings are described in ‘A flat-panel-display compatible ultrasound platform’, which was presented at The Society for Information Display’s DisplayWeek 2023.

Erwin Hijzen is director of the MEMS Ultrasound program at imec.

Epimitheas Georgitzikis is R&D project leader of ‘Listen2Future’ project at imec.




Why responsible implementation of AI technology is critical

Fri, 08/18/2023 - 09:58

Artificial intelligence (AI) is seemingly everywhere. As AI models like ChatGPT are experiencing a meteoric rise in popularity, calls from critics and regulators have circulated the airwaves to do something about the potential threats that AI poses. Understandably, this has created a debate about whether the merits of AI outweigh its risks.

In recent months, the U.S. Federal Trade Commission has issued several statements on AI programs. These culminated in a statement made in April 2023 in conjunction with the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau, and the U.S. Equal Employment Opportunity Commission to support “responsible innovation in automated systems.”

Figure 1 It’s about time to weigh the ethical side of AI technology. Source: White Knight Labs

Why FTC is beginning to explore AI

Cybersecurity expert Greg Hatcher, co-founder of White Knight Labs, says there are three main areas about which the FTC is concerned: inaccuracy, bias, and discrimination. He adds that there is good reason for them to be worried. “Time has shown that models can be accidentally trained to discriminate based on ethnicity, and the vast majority of AI developers are white men, which leads to homogeneous perspectives,” he explains.

However, according to cloud computing guru Michael Gibbs, founder and CEO of Go Cloud Careers, this bias is not inherent to AI systems, but a direct result of the biases instilled in them by their creators. “Artificial intelligence is not inherently biased—AI can become biased based on the way it is trained,” Gibbs explains. “The key is to use unbiased information when developing custom AI systems. Companies can easily avoid bias with AI by training their models with unbiased information.”

Executive coach and business consultant Banu Kellner has helped numerous organizations responsibly integrate AI solutions into their operations. She points to the frenzy around AI as a major reason behind many of these shortcomings.

“The crazy pace of competition can mean ethics get overshadowed by the rush to innovate,” Kellner explains. “With the whole ‘gold rush’ atmosphere, thoughtfulness sometimes loses out to speed. Oversight helps put on the brakes, so we don’t end up in a race to the bottom.”

Responsible implementation of AI

Kellner says the biggest challenge business leaders face when adopting AI technology is finding the balance between their vision as a leader and the increased efficiency that AI can offer to their operations. “True leadership is about crafting a vision and engaging other people toward that vision,” she says. “As humans, we must assume the role of the architects in shaping the vision and values for our emerging future. By doing so, AI and other technologies can serve as invaluable tools that empower humanity to reach new heights, rather than reducing us to mere playthings of rapidly evolving AI.”

As a leading cybersecurity consultant, Hatcher finds himself most interested in the influence AI will have on data privacy. After all, proponents of artificial intelligence have hailed AI’s ability to process data at a level once thought impossible. Additionally, the training process to improve the performance of these models also depends on the input of large amounts of data. Hatcher explains that this level of data processing could lead to what’s known as “dark patterns,” or deceptive and misleading user interfaces.

Figure 2 AI can potentially enable dark patterns and misleading user interfaces. Source: White Knight Labs

“Improving AI tools’ accuracy and performance can lead to more invasive forms of surveillance,” he explains. “You know those unwanted advertisements that pop up in your browser after you shopped for a new pink unicorn bike for your kid last week? AI will facilitate those transactions and make them smoother and less noticeable. This is moving into ‘dark pattern’ territory—the exact behavior that the FTC regulates.”

Kellner also warns of unintended consequences that AI may have if our organizations and processes become so dependent on the technology that it begins to influence our decision-making. “Both individuals and organizations could become increasingly dependent on AI for handling complex tasks, which could result in diminished skills, expertise, and a passive acceptance of AI-generated recommendations,” she says. “This growing dependence has the potential to cultivate a culture of complacency, where users neglect to scrutinize the validity or ethical implications of AI-driven decisions, thereby diminishing the importance of human intuition, empathy, and moral judgment.”

Solving challenges posed by AI

As for the solution to these consequences of AI implementation, Hatcher suggests there are several measures the FTC could take to enforce the responsible use of the technology.

“The FTC needs to be proactive and lean forward on AI’s influence on data privacy by creating stricter data protection regulations for the collection, storage, and usage of personal data when employing AI in cybersecurity solutions,” Hatcher asserts. “The FTC may expect companies to implement advanced data security measures, which could include encryption, multi-factor authentication, secure data sharing protocols, and robust access controls to protect sensitive information.”

Beyond that, the FTC may require developers of AI programs and companies implementing them to be more proactive about their data security. “The FTC should also encourage AI developers to prioritize transparency and explainability in AI algorithms used for cybersecurity purposes,” Hatcher adds. “Finally, the FTC may require companies to conduct third-party audits and assessments of their AI-driven cybersecurity systems to verify compliance with data privacy and security standards. These audits can help identify vulnerabilities and ensure best practices are followed.”

For Kellner, the solution lies more in the synergy that must be found between the capabilities of human employees and their AI tools. “If we just think in terms of replacing humans with AI because it’s easier, cheaper, faster, we may end up shooting ourselves in the foot,” she warns. “My take is that organizations and individuals need to get clear on the essential human elements they want to preserve, then figure out how AI could thoughtfully enhance those, not eliminate them. The goal is complementing each other—having AI amplify our strengths while we retain duties needing a human touch.”

Figure 3 There needs to be a greater synergy between the capabilities of human employees and their AI tools. Source: White Knight Labs

Personal finance offers a perfect example of this balance. The finance app Eyeballs Financial uses AI in its financial advisory services. However, the app’s founder and CEO Mitchell Morrison emphasizes that the AI does not offer financial advice itself; instead, it is used as a supplement to a real-life financial advisor.

“If a client asks a question like ‘Should I sell my Disney stock?’, the app’s response will be, ‘Eyeballs does not give financial advice,’ and the message will be forwarded to their advisor,” Morrison explains. “The Eyeballs Financial app does not provide or suggest any form of financial advice. Instead, it offers clients a comprehensive overview of their investment performance and promptly answers questions based on their latest customer statement. The app is voice-activated and available 24/7 in real-time, ensuring clients can access financial information anytime, anywhere.”

The use case of Eyeballs is a perfect example of how human involvement is necessary to check the power of AI. Business leaders must remember that AI technologies are still in their infancy. As these models are still developing and learning, it’s essential to remember that they are imperfect and bound to make mistakes. Thus, humans must remain involved to prevent any mistakes from having catastrophic consequences.

Although we cannot discount the tremendous potential that AI models offer to make work more efficient in virtually every industry, business leaders must be responsible for its implementation. The consequences of AI being implemented irresponsibly could be more harmful than the benefits it would bring.

The debate about artificial intelligence is best summarized in a question rhetorically asked by Kellner: “Are we trying to empower ourselves or create a god to govern us?” So long as AI is implemented with responsible practices, businesses can stay firmly in the former category, and minimize the risk of falling victim to the latter.

John Stigerwalt is co-founder of White Knight Labs.

Related Content


The post Why responsible implementation of AI technology is critical appeared first on EDN.

Power MOSFETs target automotive ECUs

Thu, 08/17/2023 - 19:49

Infineon offers six OptiMOS 5 MOSFETs in the 60-V and 120-V range housed in TOLx packages for use with automotive ECUs requiring 24 V to 72 V. The compact TOLx packages include the TO-leadless (TOLL), TO-leaded with gullwing (TOLG), and TO-leaded top-side cooling (TOLT). The TOLL, TOLG, and TOLT packages are optimized for high power, better thermal cycling on board (TCoB), and improved thermal performance, respectively.

The six new products offer a narrowed gate threshold voltage (VGS(th)) enabling designs with parallel MOSFETs for increased output power capability. The IAUTN06S5N008, IAUTN06S5N008G, and IAUTN06S5N008T are 60-V MOSFETs, while the IAUTN12S5N017, IAUTN12S5N018G, and IAUTN12S5N018T are 120-V MOSFETs.

On resistance (RDS(on)) ranges from 1.7 mΩ to 1.8 mΩ for the 120-V MOSFETs and is 0.8 mΩ for the 60-V MOSFETs. This makes the 60-V MOSFETs a good choice for 24-V CAV applications or for HV-LV DC/DC converters in electric vehicles. The 120-V MOSFETs can be used in 48-V to 72-V traction inverters for 2- and 3-wheelers and light electric vehicles.

Samples of the new OptiMOS 5 products in TOLx packages can be ordered now.

Infineon Technologies

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Power MOSFETs target automotive ECUs appeared first on EDN.

Kyocera-AVX launches supercapacitor modules

Thu, 08/17/2023 - 19:49

Series-connected supercapacitor modules in the SCM series from Kyocera-AVX provide strong pulse power handling with high capacitance and low ESR. The double-layer, electrochemical supercapacitor modules can be used by themselves or in conjunction with primary or secondary batteries to extend battery life and backup time or to provide instantaneous power pulses.

SCM modules are rated for operating temperatures ranging from -40°C to +65°C and lifetimes that extend to millions of cycles. The series supports active cell balancing and withstands high current, vibration, and frequent charge/discharge cycles.

The initial release of the SCM series comprises five devices:

  • SCMA63K586SPPB2 is rated for 16 V, 58 F (+30%/-10%), 5 mA DCL, and 15 mΩ ESR DC and comes in a 226.2×48.6-mm plastic case with terminal screws.
  • SCMA63S586SPPB2 is rated for 160 V, 5.8 F (+30%/-10%), 25 mA DCL, and 150 mΩ ESR DC and comes in a 364.5×234-mm plastic case with terminal screws.
  • SCMZ1EK507STAB2 is rated for 16 V, 500 F (+30%/-10%), 6 mA DCL, and 2.5 mΩ ESR DC and comes in a 418×68-mm aluminum case with a 4-pin connector.
  • SCMZ1EP1F6STAB2 is rated for 48 V, 165 F (+30%/-10%), 6 mA DCL, and 5.22 mΩ ESR DC and comes in a 418×194-mm aluminum case with a 4-pin connector.
  • SCMZ85P836STAB2 is rated for 48 V, 83 F (+30%/-10%), 3 mA DCL, and 9 mΩ ESR DC and comes in a 418×194-mm aluminum case with a 4-pin connector.
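As a rough, hypothetical illustration (not a figure from the Kyocera-AVX datasheets), the nominal energy stored in each module can be estimated with E = ½CV²:

```python
# Nominal stored energy E = 0.5 * C * V^2 for each SCM module above,
# using rated voltage and capacitance only (tolerance bands and the
# usable discharge window are ignored).
modules = {
    "SCMA63K586SPPB2": (16.0, 58.0),    # (rated volts, farads)
    "SCMA63S586SPPB2": (160.0, 5.8),
    "SCMZ1EK507STAB2": (16.0, 500.0),
    "SCMZ1EP1F6STAB2": (48.0, 165.0),
    "SCMZ85P836STAB2": (48.0, 83.0),
}

for part, (volts, farads) in modules.items():
    joules = 0.5 * farads * volts**2
    print(f"{part}: {joules/1000:.1f} kJ ({joules/3600:.2f} Wh)")
```

The 16-V, 58-F module, for instance, works out to roughly 7.4 kJ (about 2 Wh) at full rated voltage.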

Applications for the SCM series of supercapacitor modules include heavy industrial equipment, grid storage, energy harvesting, GSM/GPRS wireless communications, and automotive vehicles. Modules are available from DigiKey and Mouser.

SCM series product page



The post Kyocera-AVX launches supercapacitor modules appeared first on EDN.

PIN diode switches operate up to 75 GHz

Thu, 08/17/2023 - 19:49

Comprising 10 models, Pasternack’s line of ultra-broadband PIN diode switches covers frequencies ranging from 1 MHz to 75 GHz. The military-grade devices are available in SP2T, SP4T, and SP8T configurations, all with integrated TTL drivers and coaxial connectors.

The product lineup includes both reflective and absorptive designs, with the latter ensuring low VSWR performance. According to the manufacturer, the switches provide input power handling of up to 1 W CW and switching speeds as low as 50 ns typical. Their wideband frequency coverage encompasses popular bands, including UHF, VHF, L, S, C, X, Ku, K, Ka, Q, U, and V. Well-suited for a diverse range of applications, the switches can be used for radar, phased-array systems, broadband jamming, wireless infrastructure, 5G communications, and test and measurement.

The RoHS-compliant devices operate over a temperature range of -40°C to +85°C and withstand environmental conditions such as altitude, vibration, humidity, and shock. The ultra-broadband PIN diode switches are in stock and available for same-day shipping.



The post PIN diode switches operate up to 75 GHz appeared first on EDN.

Optical sensor improves performance, shrinks footprint

Thu, 08/17/2023 - 19:49

A reflective optical sensor, the VCNT2030 from Vishay, provides a high current transfer ratio (CTR) and increased sensing distance in a tiny surface-mount package. The VCNT2030 packs a vertical-cavity surface-emitting laser (VCSEL) and silicon phototransistor in its 1.85×1.2×0.6-mm package, saving more than 40% of PCB space compared to previous-generation devices.

The sensor provides a detection range of 0.3 mm to 6 mm, an emitter wavelength of 940 nm, and a typical output current of 2.5 mA, which represents a typical CTR of 31% under test conditions. This CTR value is more than 100% higher than previous-generation devices and the closest competing sensor, according to the manufacturer. Vishay also says that VCNT2030’s sensing distance of 15 mm is three times that of the closest competing device.
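For context, CTR is simply the ratio of phototransistor output current to emitter drive current. The drive current in this sketch is inferred from the quoted typical figures, not taken from the datasheet:

```python
# CTR = I_out / I_drive (phototransistor output over emitter drive).
# The drive current below is back-calculated from the quoted typical
# figures (2.5 mA output at 31% CTR); the actual test condition in
# the datasheet may differ.
i_out_ma = 2.5
ctr = 0.31
i_drive_ma = i_out_ma / ctr
print(f"Implied emitter test current: {i_drive_ma:.1f} mA")
```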

With its compact footprint, the VCNT2030 offers space savings for optical switching in industrial infrastructure, home and building controls, computers, appliances, and consumer electronics. It can also be used to perform optical encoding for motor control in e-bikes, golf carts, tractors, and harvesters.

Samples and production quantities of the VCNT2030 are available now, with lead times of 8 to 16 weeks.

VCNT2030 product page

Vishay Intertechnology


The post Optical sensor improves performance, shrinks footprint appeared first on EDN.

Partnership elevates video doorbell security

Thu, 08/17/2023 - 19:49

Omnivision and iCatch Technology are working together to bring enhanced color pre-roll and edge AI processing to wireless home security products. With pre-roll enabled, users can save up to 4 seconds of video footage prior to the camera’s motion detector triggering.

The collaboration pairs Omnivision’s OA7600 always-on video coprocessor with built-in pre-roll recording buffer and iCatch Technology’s Vi57 AI imaging SoC. This integrated chipset provides video doorbells with continuous storage of pre-event footage, as well as traceability and visibility during any triggering event.

The iCatch Vi57 imaging SoC offers fast capture and edge AI capability to improve the general usability of video doorbells. Taking the pre-roll footage from Omnivision’s OA7600 coprocessor, the Vi57 seamlessly combines it with post-event footage into one video with minimal data loss. It also provides high-quality color video, even under extreme low-light conditions.

The jointly developed color pre-roll video doorbell solution is in mass production now. For more information, contact iCatch Technology via the website link below.

iCatch Technology



The post Partnership elevates video doorbell security appeared first on EDN.

Take-back-half thermostat uses ∆Vbe transistor sensor

Thu, 08/17/2023 - 17:34

Cobbling up precision temperature control systems typically poses two particular design challenges:

  1. Accurate and cost-effective sensing of the temperature to be controlled.
  2. Closure of the feedback loop between the temperature sensor and the heating and/or cooling source with control circuitry, including both high gain and dynamic stabilization that compensates for the large time delays and phase lags common in thermal control systems.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1 shows a thermostat design that incorporates unusual (analog) solutions to these two challenges.

One is a variable sample-rate (adjustable via Rf) ∆Vbe temperature sensor based on an ordinary self-calibrated small-signal transistor (2N3904). The other is a nonlinear convergence-forcing TBH (take-back-half) integrating thermal feedback loop.

Figure 1 Delta-Vbe temperature sensor combined with take-back-half integrating control loop.

Here’s how it works: IC2a and IC2b form a variable-frequency (1 Hz to 100 Hz) square wave oscillator controlled by pot Rf. This drives the 2N3904’s delta-Vbe temperature measurement cycle. IC2b pin 1 modulates temperature sensor bias current in a 10:1 (ideally 10.0255:1) ratio to generate a PTAT (proportional to absolute temperature) peak-to-peak AC signal of:

Vt = log10(10.0255)/5050 = 198.24 µV/°K.

This is amplified by IC5b by a gain factor of 10.091:1 to produce a net PTAT signal of:

198.24 µV/°K × 10.091 = 2000.4 µV/°K.

This 2 mV/°K AC signal is synchronously rectified by IC2a, with the resultant DC applied to integrator amplifier IC5a as temperature-control feedback.
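The arithmetic of the sensing chain can be checked with a short script (the 1/5050 V factor in the article's formula approximates (k/q)·ln 10, the thermal-voltage slope per decade of current ratio):

```python
import math

# Delta-Vbe sensing chain per the article's formulas.
# 1/5050 V approximates (k/q) * ln(10), so the peak-to-peak sensor
# signal per kelvin for a 10.0255:1 bias-current ratio is:
vt = math.log10(10.0255) / 5050           # volts per kelvin
print(f"Sensor signal: {vt*1e6:.2f} uV/K")     # ~198.24 uV/K

# IC5b's gain of 10.091 brings this to the nominal 2 mV/K scale:
ptat = vt * 10.091
print(f"Amplified PTAT: {ptat*1e6:.1f} uV/K")  # ~2000.4 uV/K
```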

IC5a accumulates the difference between the PTAT signal and the Vs (temperature setpoint) voltage with an integration time constant that is inversely proportional to the IC2a-b oscillation frequency set by Rf, thus proportional to Rf, and therefore variable over a range of 1 to 100 seconds. More on this later as we explore how the PTAT – Vs feedback signal is used to control system temperature.

Vs is programmed by the IC4 precision 5.00 V reference as scaled by the Rb/(Ra + Rb) voltage divider according to this relation:

Ts = Setpoint temperature (°K) = (°C + 273.1) = 500Vs = 500(5(Rb/(Ra + Rb))),
Rb/(Ra + Rb) = Ts/2500,
2500Rb = Ts(Ra + Rb),
Rb(2500 – Ts) = RaTs, and
Rb = RaTs/(2500 – Ts).
For some examples (using standard resistor values), if Ra = 110k:
0°C requires Rb = 13.5k,
25°C requires Rb = 14.9k,
30°C requires Rb = 15.2k,
50°C requires Rb = 16.4k,
75°C requires Rb = 17.8k, and
100°C requires Rb = 19.3k.
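The example values can be verified against the divider relation Rb = Ra·Ts/(2500 – Ts); note the article rounds each computed resistance to a nearby standard value:

```python
# Setpoint divider per the article: Vs = 5.00 * Rb/(Ra + Rb) and
# Ts(K) = 500 * Vs, which rearranges to Rb = Ra*Ts/(2500 - Ts).
RA = 110e3  # ohms

def rb_for(celsius):
    ts = celsius + 273.1  # article's Kelvin offset
    return RA * ts / (2500 - ts)

for c in (0, 25, 30, 50, 75, 100):
    print(f"{c:3d} C -> Rb = {rb_for(c)/1e3:.2f}k")
```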

Which brings us to the question of how the PTAT – Vs difference signal integrated by IC5a is converted to a heater control signal.

According to analog guru Jim Williams, “The unfortunate relationship between servo systems and oscillators is very apparent in thermal control systems.” (Linear Applications Handbook, 1990). High-performance temperature control is certainly one of those topics that look easy in theory but turns out not so easy in practice. Heater-load thermal time constants conspire with heater-sensor response delays to produce wild oscillatory instability where precision thermostasis was intended.

Over the years, many feedback techniques and control strategies have been devised to tame the dynamic-stability gremlins that inhabit temperature-control servo loops. Many of these ideas incorporate integration of the temperature-control error term (TS − T) to force the control loop error to converge toward zero.

The method implemented in Figure 1 is such an error-term-integrating scheme. I have used it in many applications over the decades and have named it TBH = “take-back-half”. 

It has other applications besides temperature control. One example can be seen here and another here.

Perhaps the best write-up of TBH principles and performance, including a computed comparison of it versus the classic proportional-integral-differential (PID) algorithm, appears in the third edition of “The Art of Electronics” by Paul Horowitz and Winfield Hill (pages 1075 to 1077).

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Nearly 100 submissions have been accepted since his first contribution back in 1974.

Related Content


The post Take-back-half thermostat uses ∆Vbe transistor sensor appeared first on EDN.

Intel, Tower, and the diary of a failed acquisition bid

Wed, 08/16/2023 - 19:22

The termination of Intel’s bid to acquire Tower Semiconductor seems to be the latest fallout from semiconductor business-related tensions between China and the United States. It could also be seen as a heavy blow to Intel’s foundry ambitions to compete with the current duopoly of TSMC and Samsung.

Is it the harbinger of a tech version of the cold war? After all, Intel wouldn’t have become a monopoly in the foundry business had the deal gone through, though Tower is a specialty fab focusing on areas like analog, RF, and sensors that fabs in China are also eyeing.

Still, the termination of Intel’s $5.4 billion bid to buy the Israeli fab is widely seen in the backdrop of semiconductor technology-centric tensions between China and the United States. In 2022, DuPont De Nemours scrapped its $5.2 billion bid to acquire electronics materials maker Rogers Corp. after delays in securing approval from Chinese regulators.

In fact, the chatter about the future of this foundry deal began last month when Intel CEO Pat Gelsinger flew to China to persuade regulators. Intel, which announced its plan to acquire the Migdal HaEmek, Israel-based Tower Semiconductor in February 2022, originally planned to complete the acquisition within a year.

After failing to get approval from China’s State Administration for Market Regulation (SAMR), the two companies extended the acquisition period first to mid-June and then to 15 August 2023. The termination of the deal was announced by both companies a day after this deadline passed. Now Intel will pay Tower a termination fee of $353 million.

Source: Reuters

Intel’s bid for Tower was strategic for two reasons. First, it expanded the digital behemoth’s foundry footprint in areas like analog and RF. Second, it provided Intel with a geographical depth as Tower’s foundry operations are scattered around Israel, Italy, Japan, and the United States. And besides market share gains, it was also seen as a way for Intel to acquire foundry know-how and business culture.

Intel’s planned acquisition of specialty fab Tower was crucial in its drive to become the second-largest semiconductor foundry after TSMC by 2030. However, Intel Foundry Services has made gradual but significant advances in the fab business over the past couple of years. For instance, it has raised the bar in the semiconductor wafer business by adding chiplets, packaging, and software tools to the fab offerings.

Intel Foundry Services has also signed customers like Amazon, MediaTek, and Qualcomm. On the other hand, Tower Semiconductor, formerly known as TowerJazz, will not be significantly affected by this failed bid.

But will this failed bid slow down the merger and acquisition (M&A) activities in the semiconductor industry? Will chip companies be more cautious in making such deals? There will be more clarity to this fundamental question as more details emerge about this aborted deal.

Related Content


The post Intel, Tower, and the diary of a failed acquisition bid appeared first on EDN.

Blue screen of deterioration

Wed, 08/16/2023 - 16:18

The song “Love Is Blue” has been recorded by many artists. One version I really like is by Marty Robbins and can be watched here, while an instrumental version by Paul Mauriat can be watched here. The issue below, though, takes the concept of blue, a blue not to love, to a different level.

We recently bought a new OLED television, and it works quite well, at least so far. It gets a little disturbing though when I see some television screens outside of the home, many of which have taken on a bluish cast like these three shown below.

Some bluish cast screens in different locations. Source: John Dunn

Admittedly these television screens are operated almost ceaselessly, especially the two that run 24/7 in diners that stay open all night. Our screen at home is only used for a relatively short time each evening and so far, it shows no deterioration.

However, it is pretty clear (sorry for the pun) that there is some mechanism of screen deterioration going on that can take effect over the long haul.


John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).

 Related Content


The post Blue screen of deterioration appeared first on EDN.

Cutting into a conventional USB-C charger

Tue, 08/15/2023 - 15:29

Back in March, EDN published my teardown of a 30W USB-C output gallium nitride (GaN) transistor-based charger from a company called VOLTME. At the time, I told you, and I even showed you:

that I’d also recently acquired a conventional (non-GaN) 30W USB-C charger, this an Insignia (Best Buy’s store brand), which I’d also be dissecting shortly. That time is now.

The VOLTME charger, again, has dimensions of 1.2×1.3×1.2 inches, translating to around 1.9 cubic inches of volume, and weighs 1.5 ounces. Today’s victim, the Insignia, is only a bit larger, 1.43×1.33×1.33 inches (~2.5 cubic inches); I can’t find a weight spec for it on Best Buy’s site.

Back in March, I’d also shown you comparative images of the Insignia charger and an older Aukey 27W one, also conventional in design, which I’d purchased in mid-June 2019 and which finds daily use in recharging my 11” iPad Pro (along with, more recently, my new-to-me M1 MacBook Air…but that’s another story for another time…). Here they are again:

The Aukey device, with model # PA-Y8, has dimensions of 2.17×1.97×1.10 inches (~4.7 cubic inches) and weighs 2.57 ounces. It cost me $17.59 (plus tax), promotion priced. Compare that against the earlier-documented dimensions of the two newer devices. Now consider that the more modern units cost me only $9.99 (VOLTME, purchased in early January) and $10.99 (Insignia, purchased in early February), in both cases again on sale at the time, and plus tax. Half (or less) the volume, and around half the price, after around four years of evolution. Progress!

Now for some standalone shots of our patient. Here it is back in February, still packaged:

Here’s what’s inside, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Now for some standalone shots. Front:

Left side:

Back, with the AC plug prongs both retracted and extended:

Right side:

Top, again with the AC plug prongs both retracted and extended:

And finally, the comparatively boring bottom:

Now it’s time for the $10,000 question: how to get inside? The entry points with products like these are usually obvious; there’s typically a seam around the edge of the front panel, as well as the back (which is how I got inside the VOLTME unit). But for obvious high-voltage safety reasons, it’s rarely at all easy to get through those seams; they’re quite sturdy. That was certainly the case here: I started out with the front panel using the combination of a box cutter and a small flat-head screwdriver acting as a chisel, but didn’t get far:

I then turned to my heat gun in the ultimately-shattered aspiration to melt the glue holding the front panel in place:

Instead, I ended up melting a good portion (but not the entirety) of the front panel off:

But the insides remained stubbornly stuck in place, due in no small part to all the grey thermal paste that you can see in this initial glimpse (not to mention all the remaining bits of the front panel that remained stubbornly glued in place):

I then tried ejecting the insides from the case by pushing on the extended AC prongs. No dice. Wielding my implements of destruction on the back panel didn’t do anything, either:

Eventually, I brute-forced my way inside, putting the unit in a vise and applying a hacksaw to it:

Here’s what the “guts” look like, now case-unencumbered but still thermal paste-swathed.


Left side:

Back (no different than the previous perspective, but hold that thought):

Right side:


And much-more-interesting-than-before bottom:

Re my earlier “hold that thought” comment, and as keen-eyed readers may have deduced from the previous “top” view, it turns out that the “guts” comprise two different assemblages, press-fit together (but, in the previous “top view”, moved slightly out of alignment during the case-cutting and “guts”-removal steps). One encompasses the AC prongs, along with a spring-and-latch assembly of some sort, presumably; I couldn’t figure out how to get inside the black-and-white box. The other comprises the bulk (entirety?) of the electronics. You can see on the top edge of the latter assembly’s top view the two PCB contacts that mate with clips on the former:

Here are some more views of the AC prong assembly:

And a closer peek at the top of the main assembly standalone, minus its prior mate:

I realized in retrospect, while writing this piece, that I hadn’t taken another photo of the back of the main assembly at this point. Trust me when I attempt to reassure you that the visage was unmemorable, essentially nothing but a big blob of grey thermal paste, to (among other things) stick the two assemblies together. You’ll see what it looked like after paste removal shortly.

Speaking of which, it’s now time for an isopropyl alcohol bath, reminiscent of last time (this time I used a shot glass instead):

followed by tedious chip-off of the paste using the combo of a fingernail and toothpick. Then back in the shot glass for another soak…wash, rinse and repeat multiple times…

And finally the deed was done, at least to the degree that my patience afforded. Front first, striving to keep the same cadence (and orientation) as in prior photo sequences:

Last time I found a small black piece of plastic that, when removed, exposed more circuitry to view. I found one again this time: presumably their functions aren’t simply aesthetic but also protective. See it in the upper left, extending across the top left edge of the transformer?

Here’s what the front looks like when I slip it off:

The rationale for the plastic piece becomes more obvious when you look at the left side. Here it is temporarily put back in place:

Now removed and exposing several more passives to view, three capacitors and two resistors, to be precise (the piece slides into that groove in the PCB you may have already noticed):

Now here’s that shot of the prong-less back side that I previously promised; the green toroidal coil inductor and two big brown capacitors were completely immersed in grey goo previously:

Right side:


And last but not least, bottom:

Fini et terminé! As I’ve confessed plenty of times in the past, “1”s and “0”s are my specialty, not power electronics. As such, instead of attempting further analysis myself (predictably embarrassing myself in the process), I’ll now turn the microphone/pen/keyboard/pick-your-favorite-analogy over to all of you for your thoughts in the comments!

I’ll hold onto the “guts” for a while so that I can respond to any incoming component identity or other similar questions that folks may have, for which my magnifying glass perspectives may be illuminating. Just don’t ask me to reconnect the two halves, plug the unified assemblage into an AC outlet and see if it still works…trust me, I’ve thought of but successfully ignored that temptation plenty of times already!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

 Related Content


The post Cutting into a conventional USB-C charger appeared first on EDN.

Generative AI and memory wall: A wakeup call for IC industry

Tue, 08/15/2023 - 09:34

Judged by the presence of artificial intelligence (AI) and machine learning (ML) technologies at the 2023 Design Automation Conference (DAC), the premier event for semiconductor designers, computing hardware for accelerating AI/ML software workloads is finally receiving due attention. According to PitchBook Data, in the first six months of 2023, more than 100 startups designing novel AI/ML computing architectures collected $15.2 billion of venture funding.

With the creation of “transformers,” the software algorithms underlying large language models (LLMs), the semiconductor industry has reached an inflection point. Conceived at Google in 2017, transformers can learn complex interactions between different parts of an entity, be they words in a sentence, pixels in an image, notes in a musical score and more, and generate a translation in a different language, an enhanced image, a new musical score and more.

Transformers turned into the preferred choice for autonomous driving (AD) algorithms for their ability to track multiple interactions between the environment and the AD vehicle. Recently, transformers became the foundation for Generative AI, the most significant new technology since the Internet, introduced by OpenAI’s ChatGPT. Besides the front-runner, well-known transformers include Google PaLM, Meta LLaMA, and others.

However, the advancements in software algorithms spearheaded by transformers have not been paralleled by progress in computing hardware tasked to execute the models. The catch is that cutting-edge dense models are enormous and require massive processing power for learning and even more for inferencing.

Today, the model training process is handled by vast computing farms running for days, consuming immense electric power, generating lots of heat and costing fortunes. Much worse is the inference process. Indeed, it is hitting a wall, defeating GenAI proliferation on edge devices.

Two bottlenecks coincide to defeat the inference process: inadequate memory capacity/bandwidth and insufficient computational power.

While none of the existing edge AI accelerators are well-suited for transformers, the semiconductor industry is at work to correct the deficiency. The demanding computational requirements are tackled on three different levels: innovative architectures, silicon scaling to lower technology nodes, and multi-chip stacking.

Still, the advancements in digital logic do not address the memory bottleneck. To the contrary, they have contributed to a perverse effect, known as “memory wall.”

The memory wall

The memory wall was first described by William A. Wulf and Sally A. McKee in an article published in a 1995 issue of ACM SIGARCH Computer Architecture News. It posited that improvements in processor performance far exceeded those of memory, and the performance gap has continued to diverge ever since. See Figure 1.

Figure 1 The performance gap between processor and memory has grown significantly over the past 30 years.

The scenario forces the processor to wait for data from the memory. The higher the performance of the processor, the longer the waiting time. Likewise, the larger the amount of data, the longer the idling time. Both outcomes prevent 100% utilization of the processor.

The memory wall has plagued the semiconductor industry for years and is getting worse with each new generation of processors. In response, the industry came up with a multi-level hierarchical memory structure with faster albeit more expensive memory technologies nearer the processor. Closest to the processor are multiple levels of cache that minimize the amount of traffic with the slower main memory and with the slowest external memory.

Inevitably, the more the levels to be traversed, the larger the impact on latency and the lower the processor efficiency. See Figure 2.

Figure 2 A multi-level hierarchical memory structure has faster, more expensive memory technologies closer to the processor with more levels and larger impact on latency and processor efficiency.

The impact on Generative AI: Out of control cost

Today, the impact of the memory wall on Generative AI processing is out of control.

In less than one year, GPT, the foundation model powering ChatGPT, evolved from GPT-2 to GPT-3/GPT-3.5 to the current GPT-4. Each generation inflated the model size and the number of parameters (weights, tokens, and states) by an order of magnitude. GPT-3 models incorporated 175 billion parameters. The most recent GPT-4 models pushed the size to 1.7 trillion parameters.

Since these parameters must be stored in memory, the memory size requirement exploded into terabytes territory. To make things worse, all these parameters must be accessed simultaneously at high speed during training/inference, pushing memory bandwidth to hundreds of gigabytes/sec, if not terabytes/sec.
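The storage scale is easy to sanity-check. Assuming 2 bytes per parameter (FP16/BF16 precision, an assumption here rather than a stated figure), weight storage alone lands in the fractional-to-multiple terabyte range:

```python
# Rough weight-storage footprint for LLM parameters, assuming 2 bytes
# (FP16/BF16) per parameter and ignoring activations, per-user states,
# and optimizer data, which add substantially more.
def weight_terabytes(params, bytes_per_param=2):
    return params * bytes_per_param / 1e12

for name, params in [("GPT-3", 175e9), ("GPT-4 (reported)", 1.7e12)]:
    print(f"{name}: {weight_terabytes(params):.2f} TB of weights")
```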

The daunting data transfer bandwidth between memory and processor brings the processor efficiency to its knees. Recent findings prove that the efficiency of running GPT-4 on cutting-edge hardware drops to around 3%. Meanwhile, the very expensive hardware designed to run these algorithms sits idle 97% of the time.

The lower the implementation efficiency, the more hardware becomes necessary to perform the same task. For example, assume a requirement of one petaflop (1,000 teraflops) that may be served by either of two suppliers, A and B, whose hardware delivers processing efficiencies of 5% and 50%, respectively. Per unit, supplier A could provide only 50 teraflops of effective, not theoretical, processing power, while supplier B would provide 500 teraflops. To deliver one petaflop of effective compute power, supplier A would require 20 units of its hardware, but supplier B only 2 units.
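The supplier comparison above can be restated in a few lines:

```python
import math

# Effective throughput = theoretical peak * utilization efficiency;
# units needed = target effective throughput / per-unit effective.
def units_needed(target_tflops, peak_tflops, efficiency):
    return math.ceil(target_tflops / (peak_tflops * efficiency))

TARGET = 1000  # one petaflop, expressed in teraflops
for name, eff in (("Supplier A", 0.05), ("Supplier B", 0.50)):
    print(f"{name}: {units_needed(TARGET, 1000, eff)} units")
```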

Adding hardware to compensate for the inefficiency propels the cost. In July 2023, EE Times reported that Inflection, a Silicon Valley AI startup, is planning to use 22,000 Nvidia H100 GPUs in its supercomputer data center. “A back-of-the-envelope calculation suggests 22,000 H100 GPUs may come in at around $800 million—the bulk of Inflection’s latest funding—but that figure doesn’t include the cost of the rest of the infrastructure, real estate, energy costs and all the other factors in the total cost of ownership (TCO) for on-premises hardware.” Find more on this in Sally Ward-Foxton’s EE Times article “The Cost of Compute: Billion-Dollar Chatbots.”

Digging into publicly available data, a best-guess estimate puts the total cost in the ballpark of $0.18 per query, versus a target of 0.2 cents per query, informally established as the benchmark at which leading AI firms could pay for queries with advertising, using the Google model.

Attempts at reducing the usage cost of LLM inference

Intuitively, one approach to improving inference latency and performance would be to increase the batch size (the number of concurrent users). Unfortunately, larger batches expand the state data, with a detrimental impact on memory size and bandwidth.

Another method to reduce the processing load, adopted by OpenAI for GPT-4, is the use of incremental transformers. While performance improves significantly, the technique requires one state per user, enlarging memory capacity needs and taxing memory bandwidth requirements.
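
To see why per-user state is so costly, consider the transformer's key/value cache, the dominant piece of that state. The model dimensions below are illustrative GPT-3-class assumptions, not published GPT-4 parameters:

```python
# Rough sizing of per-user transformer state (the KV cache), fp16 values.
# Model dimensions are illustrative GPT-3-class assumptions.

def kv_cache_bytes(n_layers, n_heads, head_dim, seq_len, batch, bytes_per_val=2):
    """Bytes of key/value cache held for `batch` concurrent users."""
    per_token = 2 * n_layers * n_heads * head_dim * bytes_per_val  # K and V
    return per_token * seq_len * batch

one_user = kv_cache_bytes(96, 96, 128, seq_len=2048, batch=1)
print(f"{one_user / 2**30:.1f} GiB per user")     # 9.0 GiB per user
batch_16 = kv_cache_bytes(96, 96, 128, seq_len=2048, batch=16)
print(f"{batch_16 / 2**30:.0f} GiB for 16 users")  # 144 GiB for 16 users
```

Even under these modest assumptions, a handful of concurrent users exhausts the memory of any single accelerator, which is exactly the batching dilemma described above.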

While all these attempts ought to be appreciated, the solution must come from a novel chip architecture that breaks down the memory wall.

The cost imperative

Unless a way can be found to reduce the cost of operating ChatGPT or any other large language model, the usage model may not be payable by advertising. Instead, it may have to be based on subscriptions or similar payments, which would limit massive deployment.

Looking at the number of visitors to the OpenAI/ChatGPT website, after three months of exponential growth, the growth curve recently flattened, possibly because the hype surrounding ChatGPT is subsiding. It could also reflect dismay at usage costs. See Figure 3.

Figure 3 ChatGPT monthly visits show an exponential growth curve that has recently flattened.

GPT-4 has proven to be unexpectedly daunting to roll out in a commercially viable form. It is fair to assume that there is a GPT-5 lurking in the shadows that will pose challenges an order of magnitude larger.

The Generative AI game will be won by the AI accelerator supplier delivering the highest processing efficiency. It’s a wake-up call for the semiconductor industry.

Dr. Lauro Rizzatti is a verification consultant and industry expert on hardware emulation. Previously, he held positions in management, product marketing, technical marketing, and engineering.

Related Content


The post Generative AI and memory wall: A wakeup call for IC industry appeared first on EDN.

Fast(er) frequency doubler with square wave output

Mon, 08/14/2023 - 19:27

Whenever I find myself harboring a constantly nagging thought, I know I will eventually have to get busy and find a way to put it to rest.

When I read EDN’s Design Ideas, I look for the range of frequencies over which the circuit will operate, if this is an appropriate spec for the circuit. Mentally, I give the circuit a low score if the operating frequency range is, in my opinion, limited.

Wow the engineering world with your unique design: Design Ideas Submission Guide

My nagging thought was “can the operating frequency range for the Frequency doubler with square wave output be significantly extended above 2.82 MHz?”  If “yes”, then can it be done for a “reasonable” cost? (Reasonable cost, of course, depends on the application, among other things, some of which may be somewhat subjective.)

So, I started thinking and researching possible ways to extend the frequency range of the recently published square wave frequency doubler circuit.

To extend the operating frequency range, I needed to begin with a really fast one-shot circuit. Being unable to purchase a suitable device, I designed one similar to the ubiquitous 555 timer, but with much faster components. Fast XOR gates, a fast comparator, a fast flip-flop, and a fast discharge transistor are the main components of the one-shot. This one-shot circuit will operate at frequencies up to 50 MHz (and probably even higher).

The comparator I used, the TLV3501 from Texas Instruments, operates rail-to-rail (input and output), and has a typical prop delay of 4.5 ns (6.4 ns max) when driving 17 pF.

It is the costliest component used in my circuit ($1.62 in 1k quantity), but it is fast, reasonably priced, and readily available. The other components are inexpensive, fast, and widely used in the electronics industry. (The MMBT2369 is the surface mount version of the 2N2369, which dates back to the early 1960s, but it is pretty fast and it is cheap.)

The 74LVC1Gxx parts have prop delay times on the order of 1 ns when driving a few picofarads of capacitance, and they can be operated with a supply voltage of 5 V (which was my preference). I used the TLV9052 dual op amp, which has infinite input impedance (well, almost) and operates rail-to-rail, input and output. The 74LVC1G86 XOR gate is handy because it can function as an inverter or as a buffer, and I used several of them.

A simple description of the operation of the circuit: An ultra-fast one-shot is forced by negative feedback to produce a 50% duty cycle square wave output. I added a 50-ohm termination and a buffer/squarer at the input and a 50-ohm driver at the output for convenience in testing.

The nitty-gritty description (Figure 1): A 50% duty cycle square wave is the input to the XOR gate, U3 (via U7), causing a 2 ns pulse output from U5 to be applied to the /S input of the flip-flop, U2. The /Q output of the flip-flop goes low and turns off the discharge transistor, Q3, which allows the timing capacitor, C4, to begin charging. The output from Q3 is a voltage ramp which is applied through R1 to the inverting input of comparator, U1. The output of the comparator goes low when the voltage ramp reaches the reference voltage set by R4 and R5. This resets the flip-flop, causing the discharge transistor to turn on and discharge the timing capacitor, C4, and the cycle repeats.

Figure 1 Circuit for an ultra-fast one-shot is forced by negative feedback to produce a 50% duty cycle square wave output.

The charging current to C4 is supplied by Q1 and the associated components. The charging current is controlled by negative feedback from op amp U6A, which forces the one-shot to produce a square wave of 50% duty cycle, which, when low-pass filtered, produces a DC voltage of exactly 2.5 V (if the supply voltage is exactly 5 V). The tolerances of R18 and R19 will determine how exact this voltage is.

The reference voltage provided by U6B and its associated components is set to 2.5 V by precision (or matched) resistors R18 and R19. This reference will track the +5 V supply, so that the 50% duty cycle square wave output of the circuit remains at 50% if the supply voltage changes. (The lightly loaded output of the flip-flip also tracks changes in the supply voltage.)
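
As a rough sanity check on the timing loop, the charging current the feedback must supply can be estimated from I = C·dV/dt, using the 10 pF value of C4, the 2.5 V threshold, and the 36 MHz upper limit reported below. This idealized estimate ignores propagation delays, discharge time, and parasitic capacitance, so it only bounds the real design:

```python
# Idealized estimate: current needed to ramp C4 to the 2.5 V threshold in
# one half-period of the output. Ignores prop delays, discharge time, and
# parasitics, so it only bounds the real circuit's operating point.

def charge_current(c_farads, v_threshold, f_out, duty=0.5):
    t_charge = duty / f_out                   # high time of a 50% duty square wave
    return c_farads * v_threshold / t_charge  # I = C * dV / dt

i = charge_current(10e-12, 2.5, 36e6)
print(f"{i * 1e3:.2f} mA")  # 1.80 mA
```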

Simulation, implementation, testing, and results

I used LTspice to design and simulate the circuit. Then I used Express PCB’s free tools to design and lay out a two-sided circuit board with a ground plane on the bottom side. I used AppCad, which is freely available on the web, to simulate the signal’s overshoot/undershoot on critical traces. I inserted resistors in series with some of the longer traces for fast signals to reduce overshoot/undershoot. I used surface mount devices for all of the components except Q1, Q2, the input and output SMA connectors, and connectors P1 and P3. (P2 was not used.) The points marked T1 through T12 are test points (plated-through holes on the PCB). The loaded circuit board is shown in Figure 2.

Figure 2 The loaded two-sided circuit board with a ground plane on the bottom side; the points marked T1 through T12 are test points.

The 10 pF capacitor for C4 was left in the circuit when I tested with other values of C4. The other values were soldered on the PCB, but were connected to a 0.100-inch center connector mounted on the PCB, so I could select them individually with a slide-on shorting tab. That is why all the other values of C4 have an additional 10 pF for the frequency range tests.

The circuit performance was pretty well predicted by the LTspice and AppCad simulations. Table 1 shows the range of performance vs several values of C4.

Table 1 The range of performance with different values of C4.

This circuit extends the operating frequency range for the frequency doubler to 36 MHz, which is more than 10 times the upper frequency limit of the original circuit.

Jim McLucas retired from Hewlett-Packard Company after 30 years working in production engineering and on design and test of analog and digital circuits.

Related Content


The post Fast(er) frequency doubler with square wave output appeared first on EDN.

Why MCU suppliers are teaming up with TinyML platforms

Fri, 08/11/2023 - 17:53

Infineon’s recent acquisition of Imagimob, a Stockholm, Sweden-based supplier of TinyML platforms, raises a fundamental question: where does the chip industry stand in adopting and accelerating this artificial intelligence (AI) technology for automated tasks involving sensory data?

Especially when most TinyML applications employ microcontrollers (MCUs) to deploy AI models. In fact, MCUs are at the heart of a new premise at the intersection of AI and the Internet of Things (IoT) called the Artificial Intelligence of Things, or AIoT. Steve Tateosian, VP of IoT Compute and Wireless at Infineon, calls AIoT a natural evolution enabled by TinyML.

But how does TinyML bridge the gap between machine learning (ML) and embedded systems? What role will suppliers of microcontrollers and other embedded processors play in facilitating the production-ready deep learning models? Infineon’s Imagimob deal and other tie-ups between embedded processors and ML software houses provide some clarity.

For a start, what’s required are more sophisticated TinyML models, and that calls for more innovation at the software-solutions level for specific use cases. Here, it’s worth mentioning that Imagimob had been working closely with embedded processor suppliers like Syntiant before the acquisition; it demoed its TinyML platform on Syntiant’s NDP120 neural decision processor in 2022.

Figure 1 AI chips powered by TinyML platforms can be used to quickly and easily implement vision, sound-event detection (SED), keyword spotting, and speech processing capabilities in a variety of applications. Source: Syntiant

Likewise, Infineon teamed up with another supplier of TinyML-based AI models, Edge Impulse, to prep its PSoC 6 microcontrollers for edge-based ML applications. Edge Impulse’s platform streamlines the entire process of collecting and structuring datasets, designing ML algorithms with ready-made building blocks, validating the models with real-time data, and deploying the fully optimized production-ready result to a microcontroller like PSoC 6.

So, by collaborating with a software house specializing in TinyML-based AI models, Infineon wanted to lower the barriers to running TinyML models on its MCUs. The TinyML platform offered by software houses like Imagimob and Edge Impulse allows developers to go from data collection to deployment on an edge device in minutes.

Such tie-ups aim to accelerate ML applications such as sound event detection, keyword spotting, fall detection, anomaly detection, and gesture detection, as MCU suppliers push TinyML into the microwatt era of smart, flexible battery-powered devices.

Figure 2 Embedded system developers use Imagimob AI to build production-ready models for a range of use cases such as audio, gesture recognition, human motion, predictive maintenance, and material detection. Source: Imagimob

According to David Lobina, Artificial Intelligence & Machine Learning research analyst at ABI Research, any sensory data from an environment can have an ML model applied to that data. “However, ambient sensing and audio processing remain the most common applications in TinyML.”

Take the case of the Imagimob AI platform, which includes a built-in fall-detection starter project. It comprises an annotated dataset with metadata (video) and a pre-trained ML model (in h5 format) that detects, from a belt-mounted device using inertial measurement unit (IMU) data, when a person falls. A developer can start from the fall-detection model and improve it by collecting more data.

Figure 3 Imagimob AI is an end-to-end development platform for machine learning on edge devices. Source: Imagimob
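
For a sense of what such an IMU pipeline looks for, here is a deliberately naive threshold heuristic. It is not Imagimob's trained model, and the thresholds and window size are made-up illustrative values; it merely sketches the free-fall-then-impact signature that a learned TinyML detector captures far more robustly:

```python
# Naive illustration only: a trained TinyML model replaces hand-tuned rules
# like these. Looks for the classic fall signature in accelerometer-magnitude
# samples: a near-zero-g dip (free fall) followed by a high-g spike (impact).
# Thresholds and window size are made-up illustrative values.

def detect_fall(accel_g, freefall_thresh=0.3, impact_thresh=2.5, window=10):
    """Return True if a near-zero-g dip is followed by a high-g spike."""
    for i, a in enumerate(accel_g):
        if a < freefall_thresh:  # candidate free-fall sample
            # look for an impact within the next `window` samples
            if any(x > impact_thresh for x in accel_g[i + 1:i + 1 + window]):
                return True
    return False

walking = [1.0, 1.1, 0.9, 1.2, 1.0, 0.95, 1.05]
fall    = [1.0, 0.9, 0.2, 0.1, 0.15, 3.1, 1.4, 1.0]
print(detect_fall(walking), detect_fall(fall))  # False True
```

The appeal of a platform like Imagimob AI or Edge Impulse is precisely that developers do not hand-tune rules like these; they collect labeled data and let the toolchain produce an optimized model for the MCU.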

Founded in 2013, Imagimob offers a quick-start development system for on-device TinyML as well as “automatic machine learning” or AutoML solutions. Its acquisition by Infineon underscores the need for collaboration between embedded processor suppliers and TinyML platform providers in order to bring the advantages of AI/ML to embedded systems.

Related Content


The post Why MCU suppliers are teaming up with TinyML platforms appeared first on EDN.

Memory module promotes CXL 2.0 adoption

Thu, 08/10/2023 - 22:12

Micron is sampling its CZ120 memory expansion module, which features a PCIe 5.0 x8 interface and supports the Compute Express Link (CXL) 2.0 standard. The CXL Type 3 module provides storage capacities of 128 Gbytes and 256 Gbytes in an EDSFF E3.S 2T form factor.

With a dual-channel memory architecture and Micron’s high-volume production DRAM process, the CZ120 achieves higher module capacity and increased bandwidth. The device has a read/write bandwidth of up to 36 Gbytes/s (measured by running an MLC workload with a 2:1 read/write ratio on a single CZ120 memory expansion module).

“Micron is advancing the adoption of CXL memory with this CZ120 sampling milestone to key customers,” commented Siva Makineni, vice president of the Micron Advanced Memory Systems Group. “We have been developing and testing our CZ120 memory expansion modules utilizing both Intel and AMD platforms capable of supporting the CXL standard. Our product innovation coupled with our collaborative efforts with the CXL ecosystem will enable faster acceptance of this new standard, as we work collectively to meet the ever-growing demands of data centers and their memory-intensive workloads.”

Qualified customers and partners can enroll in Micron’s Technology Enablement Program (TEP) to gain early access to technical information and support to aid in the development of CXL-enabled memory expansion products.

CZ120 product page

Micron Technology 

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Memory module promotes CXL 2.0 adoption appeared first on EDN.

Neural processor IP tackles generative AI workloads

Thu, 08/10/2023 - 22:12

Ceva has enhanced its NeuPro-M NPU IP family to bring the power of generative AI to infrastructure, industrial, automotive, consumer, and mobile markets. The redesigned NeuPro-M architecture and development tools support transformer networks, convolutional neural networks (CNNs), and other neural networks. NeuPro-M also integrates a vector processing unit to support any future neural network layer.

The power-efficient NeuPro-M NPU IP delivers peak performance of 350 tera operations per second per watt (TOPS/W) on a 3-nm process node. It is also capable of processing more than 1.5 million tokens per second per watt for transformer-based large language model (LLM) inferencing.

To enable scalability for diverse AI markets, NeuPro-M adds two new NPU cores: the NPM12 and NPM14 with two and four NeuPro-M engines, respectively. These two cores join the existing NPM11 and NPM18 with one and eight engines, respectively. Processing options range from 32 TOPS for a single-engine NPU core to 256 TOPS for an eight-engine NPU core.

NeuPro-M meets stringent safety and quality compliance standards, such as ISO 26262 ASIL-B and Automotive Spice. Development software for NeuPro-M includes the Ceva Deep Neural Network (CDNN) AI compiler, system architecture planner tool, and neural network training optimizer tool.

The NPM11 NPU IP is generally available now, while the NPM12, NPM14, and NPM18 are available to lead customers.

NeuPro-M product page




The post Neural processor IP tackles generative AI workloads appeared first on EDN.

Power ICs embed MCU and CAN FD interface

Thu, 08/10/2023 - 22:12

Infineon’s TLE988x and TLE989x series of ICs combine a gate driver, microcontroller, communication interface, and power supply on a single chip. Devices in the TLE988x series are H-bridge drivers for 2-phase brushed DC motor control, while ICs in the TLE989x series are 3-phase bridge drivers for brushless DC motors.

The N-channel MOSFET drivers employ a 32-bit Arm Cortex-M3 microcontroller running at up to 60 MHz to enable the implementation of advanced motor control algorithms. Their CAN FD transceiver operates at a speed of 2 Mbps. Variants offer flash memory sizes of 128 kbytes and 256 kbytes and support read-while-write operation.

The drivers are AEC-Q100 qualified, making them suitable for automotive motor control applications. Infineon’s adaptive MOSFET control algorithm compensates for the variation of MOSFET parameters in the system by automatically adjusting the gate current to achieve the required switching. Devices are also ISO 26262 (ASIL B) compliant, and some variants have built-in cybersecurity.

Housed in 7×7-mm TQFP packages, the TLE988x and TLE989x devices are currently in production and available. Two variants in LQFP packages with 64 pins will follow in December 2023.

TLE988x product page

TLE989x product page

Infineon Technologies 



The post Power ICs embed MCU and CAN FD interface appeared first on EDN.

Storage accelerator board handles 200 Gbps

Thu, 08/10/2023 - 22:11

Panther III from MaxLinear is an OCP 3.0 form factor storage accelerator that delivers a throughput of 200 Gbps, scalable to 3.2 Tbps when cascaded. Panther’s 12:1 data reduction intelligently offloads the CPU by providing multiple independent parallel DPU transform engines.

Aimed at hyperscale data centers, telecommunications infrastructure, and public-to-edge clouds, Panther III allows users to access, process, and transfer data up to 12 times faster than without a storage accelerator. Advanced encryption capabilities eliminate the need for self-encrypting drives and remove the cost of and need for security routers. The board also provides independent hash block size and programmable offset, enhancing deduplication hit rates and improving effective storage capacity.

A software development kit (SDK) for the Panther III contains APIs, drivers, and source code for incorporation into end-application software and software-defined storage. The kit focuses on CPU offloading, reduced overhead for the lowest latency, and full-feature failover for the highest system reliability and zero downtime.

The OCP 3.0 version of the Panther III adapter card is available immediately. A PCIe version will be available in Q3 2023.

Panther III product page




The post Storage accelerator board handles 200 Gbps appeared first on EDN.