News from the world of micro- and nanoelectronics
Oscilloscopes tout hardware-accelerated analysis

Keysight’s Infiniium MXR B-Series oscilloscopes offer automated test tools and hardware-accelerated analysis to quickly find anomalies. The series comprises 12 models spanning 500 MHz to 6 GHz of bandwidth, with 4 or 8 channels and multiple hardware and software options.
Built-in tools reduce troubleshooting time by automating fault detection, design compliance testing, power integrity analysis, protocol decoding of more than 50 serial protocols, and mask testing on all channels simultaneously. Each scope leverages the same hardware-acceleration ASIC as Keysight’s 110-GHz UXR-B-Series oscilloscopes to accelerate analysis, eye diagrams, and triggering.
MXR B-Series scopes provide an update rate greater than 200,000 waveforms/s, a sample rate of 16 Gsamples/s, and bandwidth up to 6 GHz that does not decrease with channel usage. They also speed up jitter analysis by up to 70% and power-integrity analysis by 65% compared to the MXR A-Series. A noise floor as low as 43 µV and an effective number of bits (ENOB) of up to 9.0 ensure accurate measurements.
For more information on the MXR B-Series oscilloscopes, use the product page link below.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Oscilloscopes tout hardware-accelerated analysis appeared first on EDN.
Dial pad upgrade
As a child I played with these dialers and thought the DTMF sounds were interesting, so I bought this gem as a throwback.
New version with a precision of 0.73 ppm (1.9 s/month), based on this project: https://hackaday.io/project/175697-breadboard-wristwatch
submitted by /u/titojff
Good connector for slotting parts
Hi, as the title suggests, I am looking for a connector that is easy to "slot in". I'm working on a modular PSU for a project and need a 2-pin (DC) connector that can take at least 3 amps (more is preferable; I'm guessing a bigger connector will be easier to slot in as well).
The prerequisites are as follows:
* Easy to slot in
* No need for soldering (screw terminals are preferred)
* Minimum 3A
* Not too deep/long connector to save space if possible
Also, does anyone know where to find good, compact, reliable transformers? I'm using a 60 W, 24 V LED driver for now, but I'm unsure whether it will hold up and handle a fluctuating current.
Any help is greatly appreciated!
Expanding the pins of a microcontroller
What to do when the chosen microcontroller doesn’t have enough pins? We will find out in this tutorial, where we will learn how to use shift registers to control LEDs and read buttons with just three wires! In six steps, we will use the CD4094 and CD4021 with a PIC microcontroller.
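To make the serial protocol concrete before you dive into the tutorial, here is a minimal bit-banged sketch of the CD4094 (output) side. It is written in MicroPython rather than the PIC C the tutorial uses, and the GPIO numbers and chase pattern are illustrative assumptions, not the tutorial's wiring:

```python
# Minimal MicroPython sketch: drive 8 LEDs through a CD4094 shift register
# using three GPIOs (data, clock, strobe). GPIO numbers are placeholders.
from machine import Pin
import time

DATA = Pin(2, Pin.OUT)     # serial data into CD4094 pin 2 (D)
CLOCK = Pin(3, Pin.OUT)    # shift clock into CD4094 pin 3 (CP)
STROBE = Pin(4, Pin.OUT)   # latch enable, CD4094 pin 1 (STR)

def write_byte(value):
    """Shift one byte out MSB-first, then latch it onto the outputs."""
    STROBE.value(0)                  # outputs hold their state while shifting
    for bit in range(7, -1, -1):
        DATA.value((value >> bit) & 1)
        CLOCK.value(1)               # CD4094 samples D on the rising edge
        CLOCK.value(0)
    STROBE.value(1)                  # copy the shift register to the latches

# Simple chase pattern across the 8 LEDs
while True:
    for i in range(8):
        write_byte(1 << i)
        time.sleep_ms(100)
```

Reading buttons through the CD4021 works the same way in reverse: pulse its parallel/serial control line to latch the button states, then clock the bits back in one at a time.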
The post Expanding the pins of a microcontroller appeared first on Open Electronics. The author is Boris Landoni
You do get better at soldering, right?
submitted by /u/Snoo53833
Update: STM32Cube.AI and NVIDIA TAO Toolkit, Download and watch a 10x jump in performance on an STM32H7 running vision AI
As promised, STM32AI – TAO Jupyter notebooks are now available for download on our GitHub page. We provide scripts to train, adapt, and optimize neural networks before processing them with STM32Cube.AI, which generates optimized code for our microcontrollers. It’s one of the most straightforward ways to experiment with pruning, retraining, or benchmarking, lowering the barrier to entry by facilitating the use of the TAO framework to create models that will run on our MCUs.
NVIDIA today announced TAO Toolkit 5, which now supports the quantized ONNX format, opening a new way for STM32 engineers to build machine learning applications and making the technology more accessible. The ST demo featured an STM32H7 running a real-time person-detection algorithm optimized using TAO Toolkit and STM32Cube.AI. The TAO-enabled model on the STM32 system determines whether people are present; if they are, it wakes up a downstream Jetson Orin, enabling significant power savings. For such a system to be viable, the model running on the microcontroller must be fast enough to wake the downstream device before the object leaves the frame.
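As a rough sketch of that wake-on-person pattern (not ST's actual demo code, which is C generated by STM32Cube.AI; the pin name, threshold, and stub functions below are all illustrative assumptions):

```python
# Hypothetical event loop for the person-detection wake-up scheme.
from machine import Pin
import time

WAKE_PIN = Pin('A0', Pin.OUT)   # GPIO assumed wired to the Jetson's wake input

def capture_frame():
    """Stub standing in for the camera driver."""
    return None

def person_score(frame):
    """Stub standing in for the TAO-trained person detector."""
    return 0.0

while True:
    frame = capture_frame()
    if person_score(frame) > 0.8:   # person present with high confidence
        WAKE_PIN.value(1)           # wake the downstream Jetson Orin
    else:
        WAKE_PIN.value(0)           # leave it in its low-power state
    time.sleep_ms(200)              # ~5 fps, the viability floor cited later
```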
Today’s presentation is possible because of the strong collaboration between NVIDIA and ST. We updated STM32Cube.AI to support quantized ONNX models and worked on a Jupyter notebook to help developers optimize their workflow. In return, by opening its TAO Toolkit, NVIDIA ensured that more developers, such as embedded systems engineers working with STM32 microcontrollers, could use its solution to reduce their time to market. That’s why today’s announcement is an important step for the Industrial AI community and for democratizing machine learning at the edge: more than a technical collaboration, it lowers the barrier to entry in this sector.
What are the challenges behind machine learning at the edge?
Machine learning at the edge is already changing how systems process sensor data, reducing reliance on cloud computing, for example. However, it still has inherent challenges that can slow its adoption. Engineers must deal with memory-constrained systems and stringent power-efficiency requirements; failure to account for them could prevent a product from shipping. Moreover, engineers must work with real-time operating systems, which demand a certain level of optimization. An inefficient runtime could negatively impact the overall application and ruin the user experience. As a result, developers must ensure that their neural networks are highly optimized while remaining accurate.
How is ST solving this challenge? STM32Cube.AI
To solve this challenge, ST introduced STM32Cube.AI in 2019, a tool that converts a pre-trained neural network into optimized code for STM32 devices. Version 7.3 of STM32Cube.AI added settings that let developers prioritize RAM footprint, inference time, or a balance between the two, helping programmers tailor their applications. ST also introduced support for deeply quantized and binarized neural networks to reduce RAM utilization further. Given the importance of memory optimization on embedded systems and microcontrollers, it’s easy to understand why STM32Cube.AI (now in version 8) has been adopted by many in the industry. For instance, we recently showed a people-counting demo from Schneider Electric that used a deeply quantized model.
STM32Cube.AI Developer Cloud and NanoEdge AI
To make Industrial AI applications more accessible, ST recently introduced the STM32Cube.AI Developer Cloud. The service lets users benchmark their applications on our Board Farm to determine, among other things, which hardware configuration would give them the best cost-per-performance ratio. Additionally, we created a model zoo to optimize workflows: it provides recommended neural network topologies based on the application, to avoid memory limitations or poor performance down the road. ST also provides NanoEdge AI Studio, which specifically targets anomaly detection and can run training and inference on the same STM32 device. The software offers a more hands-off approach for applications that don’t require as much fine-tuning as those that rely on STM32Cube.AI.
Ultimately, STM32Cube.AI, STM32Cube.AI Developer Cloud, and NanoEdge AI Studio put ST in a unique position in the industry as no other maker of microcontrollers provides such an extensive set of tools for machine learning at the edge. It explains why NVIDIA invited ST to present this demo when the GPU maker opened its TAO Toolkit to the community. Put simply, both companies are committed to making Industrial AI applications vastly more accessible than they are today.
How is NVIDIA solving this challenge? TAO Toolkit
TAO stands for Train, Adapt, Optimize. In a nutshell, TAO Toolkit is a command-line interface that uses TensorFlow and PyTorch to train, prune, quantize, and export models. It lets developers call APIs that abstract complex mechanisms and simplify the creation of a trained neural network. Users can bring their own weighted model, take a model from the ST model zoo, or use NVIDIA’s library to get started. The NVIDIA model zoo includes general-purpose vision and conversational AI models. Within these two categories, developers can select among more than 100 architectures across vision AI tasks, like image classification, object detection, and segmentation, or try application-based models, such as people-detection or vehicle-classification systems.
Overview of the TAO Toolkit workflow
The TAO Toolkit allows a developer to train a model, check its accuracy, then prune it by removing some of the less relevant neural network layers. Users can then recheck their model to ensure it hasn’t been significantly compromised in the process and re-train it to find the right balance between performance and optimization. ST also worked on a Jupyter notebook containing Python scripts to help prepare models for inference on a microcontroller. Finally, engineers can export their model to STM32Cube.AI in the quantized ONNX format, as we show in the demo, to generate a runtime optimized for STM32 MCUs.
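As a loose illustration of that train-prune-retrain-export loop, here is a minimal PyTorch sketch. TAO Toolkit wraps these steps behind its own interface, so this is only an analogy under stated assumptions, not the TAO API, and quantizing to the ONNX format STM32Cube.AI ingests would be an additional step:

```python
# Illustrative prune -> (re-train) -> ONNX-export loop in plain PyTorch.
# This approximates the workflow described above; it is not the TAO API.
import torch
import torch.nn.utils.prune as prune

model = torch.hub.load('pytorch/vision', 'mobilenet_v2', weights=None)

# 1. Prune: zero out the 30% smallest-magnitude weights in each conv layer.
for module in model.modules():
    if isinstance(module, torch.nn.Conv2d):
        prune.l1_unstructured(module, name='weight', amount=0.3)
        prune.remove(module, 'weight')   # make the pruning permanent

# 2. Re-train (fine-tune) here to recover any lost accuracy -- omitted.

# 3. Export to ONNX for hand-off to STM32Cube.AI code generation.
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, 'person_detect_pruned.onnx', opset_version=13)
```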
Using TAO Toolkit and STM32Cube.AI together
The ST presentation at the NVIDIA GTC Conference 2023 highlights the importance of industry leaders coming together and working with their community. Because NVIDIA opened its TAO Toolkit and because we opened our tool to its trained neural networks, developers can now create a runtime in significantly fewer steps, in a lot less time, and without paying a dime, since all of these tools remain free of charge. As the demo shows, going from TAO Toolkit to STM32Cube.AI to a working model usable in an application is much more straightforward. What may have been too complex or costly to develop is now within reach.
Using TAO Toolkit and STM32Cube.AI enabled a people-detection application to run on a microcontroller at more than five frames per second, the minimum performance necessary: below this threshold, people could move out of the frame before being detected. In our example, we were also able to decrease the Flash footprint by more than 90% (from 2710 KB to 241 KB) and the RAM usage by more than 65% (from 820 KB to 258 KB) without any significant reduction in accuracy. It may surprise many that the optimized application takes more RAM than Flash, but that’s the type of optimization microcontrollers need if they are to play an important role in democratizing machine learning at the edge.
The code in the demonstration is available in a Jupyter notebook downloadable from ST’s GitHub. In the video, you will see how developers can, with a few lines of code, use the STM32Cube.AI Developer Cloud to benchmark their model on our Board Farm to determine what microcontroller would work best for their application. Similarly, it shows how engineers can take advantage of some of the features in TAO Toolkit to prune and optimize their model. Hence, it’s already possible to prepare teams to rapidly take advantage of the new workflow once it is open to the public.
The post Update: STM32Cube.AI and NVIDIA TAO Toolkit, Download and watch a 10x jump in performance on an STM32H7 running vision AI appeared first on ELE Times.
University of NSW Electronics Lab & Makerspace Tour
Generative AI Startup Shows Off Digital In-Memory Computing Platform
Sony’s Energy Harvester Draws Power From Electromagnetic Wave Noise
Let’s create a small level with a matrix display
The low cost of sensors and components allows us to build small gadgets and tools that were unthinkable just a few years ago. In this project, a “bubble” level has been created, which is a tool used to determine if a surface is perfectly level. The level was constructed using an accelerometer and an […]
The post Let’s create a small level with a matrix display appeared first on Open Electronics. The author is Boris Landoni
Memory, Processing, & Security: Focal Points of New Automotive Releases
Apple’s latest product launch event takes flight

Another September…another suite of new AirPods, iPhones and Watches from Apple. Don’t get me wrong: in a world rife with impermanence, there’s something comforting about predictability, no matter how boring it might also be. And the nexus of this Tuesday’s event was the most predictable (albeit simultaneously impactful) announcement of all: eleven years after unveiling the proprietary Lightning connector for its various mobile devices, which replaced the initial and equally proprietary 30-pin dock connector, the transition to Lightning’s successor has now also begun. This time, though, the heir isn’t proprietary. It’s USB-C.
The switch to USB-C isn’t even remotely a surprise, as I said. The only question in my mind was when it’d start, and now another question has taken its place: how long will it take to complete? After all, more than five years ago the European Union (EU) started making grumbling noises about whether it should standardize charger connections. A bit more than four years later, last October to be exact, the EU followed through on its threat, mandating USB-C usage by the end of 2024. Later that month, Apple publicly acquiesced, admitting that it had no choice but to comply.
With today’s iPhone 15, 15 Plus, 15 Pro and 15 Pro Max, and a matching charging case for the tweaked 2nd-gen AirPods Pro, the transition to USB-C has started in earnest. And as usual, the interesting bits (or if you prefer, the devils) are in the details. Since the iPhone 15 and 15 Plus are based on last year’s A16 Bionic SoC, the brains of 2022’s iPhone 14 Pro and 14 Pro Max, they “only” run USB-C at Lightning-compatible USB 2.0 speeds (recall that the connector form factor, USB-A or USB-C, for example, and the bus bandwidth, 480 Mbps USB 2.0 or 5-or-higher Gbps USB 3.x, are inherently distinct, although they’re often implementation-linked). This year’s A17 Pro (hold that thought) SoC, conversely, contains a full USB 3 controller.
The higher bandwidth potential of the new wired bus generation is particularly resonant for anyone who’s tried transferring long-duration 4K video off a smartphone using comparatively slothlike USB 2/Lightning or Wi-Fi. And Power Delivery (PD) support (assuming it actually works as intended) will be great for passing higher charging voltage-and-current payloads to the phone; the iPhone 15 series implementation is bidirectional, actually, enabling the phone’s battery to even bump up the charge on an Apple Watch or set of AirPods in a pinch. But I was curious to see what exact form this new bus would take, among other reasons due to the system complications it might create. Pre-event rumors had indicated that Apple might have instead branded it as “Thunderbolt 4” which, if true, would have offered the broadest system compatibility: with TB4 and TB3, as well as with TB2 and original Thunderbolt via adapters, and with USB-C and USB generational precursors.
Here’s the thing with USB-C: Apple still supports (although it no longer sells) plenty of Intel-based systems containing only Thunderbolt 3 ports. And as my own past documented experiences exemplify, USB-C and Thunderbolt 3 aren’t guaranteed to interoperate, in spite of their connector commonality. Intel, for example, sold two different generations of TB3 controllers: “Alpine Ridge” (the chipset in my CalDigit TS3 Plus dock, for example, along with several other TB3 docks and hubs I own) is Thunderbolt-only, while the “Titan Ridge” successor also interoperates with USB-C devices (I plan to elaborate on these differences, along with the additional existing and future enhancements supported by Thunderbolt 4 and the just-announced Thunderbolt 5, in an upcoming focused-topic post). If the A17 Pro SoC is really USB-C only, Apple will be facing a notable support burden (albeit one decreasing over time, since all newer Apple Silicon-based systems support Thunderbolt 4 and therefore also USB-C). That’s why I suspect that although Apple’s marketeers are calling the connector “USB-C” for simplicity’s sake, it’s also Thunderbolt-interoperable.
A few more notes here: Apple’s dropping sales of its Lightning-based MagSafe wireless charging accessories, a curious move considering they still work with still-sold iPhone 14 and 13 variants (RIP iPhone 14 Pro models, along with the iPhone 13 mini). And if you still want to use your Lightning-based charger or other accessory, Apple will happily sell you an overpriced USB-C adapter for it. Bus fixations now satiated, let’s broaden the view and see what else Apple announced this week.
The iPhone 15 family
You already know about the A16 Bionic SoC from last year’s coverage. And you already know about the A17 Pro SoC’s USB controller enhancements. But there’s much more to talk about, of course, beginning with the package-integrated RAM boost from 6 GBytes to 8 GBytes. Last year’s A16 Bionic was Apple’s first chip fabricated on foundry partner TSMC’s 4 nm process. This year, with the A17 Pro, it’s TSMC’s successor 3 nm process, with a commensurate increase in the available transistor budget (from 16 to 19 billion), which Apple has leveraged in various ways:
- Performance- and power consumption-enhanced microarchitecture CPU cores, albeit with the same counts (2 performance, 4 efficiency) as before
- An improved neural engine for deep learning inference, claimed up to twice as fast as before, but again with the same core count (16) as before
- A six-core graphics accelerator with a redesigned shader architecture, claimed capable of up to 20% higher peak performance than before, derived in part from new hardware-accelerated ray tracing support, and
- Enhanced video and display controllers, now capable of hardware-decoding the AV1 codec (among other things).
About that first-time “Pro” branding for the new SoC: on Monday, Daring Fireball’s John Gruber published an as-usual excellent pre-event summary of how Apple has historically transitioned its smartphone product line each year, and how it has more recently tweaked the cadence in the era of the “Pro” smartphone tier. Although Apple has previously tweaked smartphone SoCs to come up with iPad variants (from the A12 SoC to the A12X and A12Z, for example), this is the first time I can recall the company custom-branding (and high-end branding, to boot: usually you start with a defeatured variant to maximize initial chip yield) a SoC right out of the chute. I see at least two options going forward:
- Perhaps next year’s iPhone 16 and 16 Plus will be based on a neutered non-Pro variant of the A17, or
- Mebbe they’re saving the non-Pro version for the next-gen iPhone SE?
The iPhone 15 and 15 Plus inherit the processing-related enhancements present in last year’s iPhone 14 Pro and Pro Max, reflective of their SoC commonality.
Apple has also “ditched the notch” previously required to integrate the iPhone 14 and 14 Plus front camera into the display, instead going with the software-generated and sensor-obscuring Dynamic Island toward the top of the display. Speaking of displays, reflective of OLED’s ongoing improvements (and LCD’s ongoing struggle to remain relevant against them), these are capable of up to 2000 nits of brightness when used outdoors. And, speaking of cameras, there are still two rear ones, “main” and “ultra-wide”, the latter still 12 Mpixel in resolution. The former has gotten attention, however; it uses a 48 Mpixel “quad pixel” sensor in combination with computational photography to implement image stabilization and other capabilities, outputting 24 Mpixel images. It also supports not only standard but also 2x optical telephoto modes, the latter generating 12 Mpixel pictures.
Now for the iPhone 15 Pro and Pro Max (again, above and beyond the SoC and RAM updates already covered). First, they switch from stainless steel to lighter-weight titanium-plus-aluminum combo frames:
They incorporate a similar 48 Mpixel main camera as their non-Pro siblings, albeit with slightly larger pixel dimensions for improved low light performance, three focal length options, and the option to capture images in full 48 Mpixel resolution. And, as before, there’s a dedicated 12 Mpixel ultra-wide camera. This time, however, instead of the main camera doing double-duty for telephoto purposes, there’s (again, as with the iPhone 14 Pro generation) a dedicated third 12 Mpixel telephoto camera, this time with 3x optical zoom range in the standard “Pro” and 5x in the “Pro” Max, the latter stretching to a 120 mm focal length. A complicated multi-prism structure enables squeezing this optical feat into a svelte smartphone form factor:
Last, but not least, the previous single-function switch on the side has been swapped out for a multi-function “action” button. Here’s the summary:
Apple Watch Series 9 and Ultra 2
Although Apple claimed via its naming that the SoCs in the Apple Watch Series 6 (using the S6 chip), 7 (S7) and 8 (S8) were different, a lingering rumor (backed up by Wikipedia specs) claimed the contrary: That they were actually the same sliver of silicon (based on the A13 Bionic SoC found in the iPhone 11 series), differentiated only by topside package mark differences, and that Apple focused its watch family evolution efforts instead on display, chassis, interface and other enhancements.
Whether or not previous-generation SoC speculations were true, we definitely have a new chip inside both the Series 9 and Ultra 2 this time. It’s the S9, comprising 5.6 billion transistors that, among other things, implement a 30% faster GPU and a 4-core neural engine with twice-as-fast machine learning (ML) processing as before. The benefits of the GPU (faster on-display animation updates, particularly for high-res screens) are likely already obvious to you. The deep learning inference improvements, while perhaps more obscure at first glance, are IMHO more compelling in their potential.
For one thing, as I’ve discussed in the past, doing deep learning “work” as far out on the “edge” as possible (alternatively stated, as close to the input data being fed to the ML model as possible) is beneficial in several notable ways: it minimizes the processing latency that would otherwise accrue from sending that data elsewhere (to a tethered smartphone, for example, or a “cloud” server) for processing, and it affords ongoing functionality even in the absence of a “tether”. As Apple mentioned on Tuesday, one key way that the company is leveraging the beefed-up on-watch processing capabilities is to locally run Siri inference tasks on voice inputs, allowing for direct health data access right from the watch, for example. Another example is the “sensor fusion” merge of data from the watch’s accelerometer, gyro, and optical heart rate sensor to implement the new “Double tap” gesture that requires no interaction with the touchscreen display whatsoever:
Reminiscent of my earlier comments about OLED advancements, the Series 9 display is twice as bright (2000 nits) as the one in Series 8 predecessors, and it drops down as low as 1 nit for use in dimly lit settings.
The one in the Ultra 2 is even brighter, 3000 nits max to be precise:
And both watches, as well as the entire iPhone 15 family, come with a second-generation ultra-wideband (UWB) transceiver IC for even more accurate location of AirPods-stuck-in-sofa-cushions and other compatible devices. Speaking of AirPods…
Second-gen (plus) AirPods Pro
As previously mentioned, the charging case for the second-generation AirPods Pro earbuds now supports USB-C instead of Lightning.
Curiously, however, Apple doesn’t currently plan to sell the case standalone for use by existing AirPods Pro 2nd-gen owners. The company has also tweaked the design of the earbuds themselves, for improved dust resistance and lossless audio playback compatibility with the upcoming Vision Pro extended-reality headset. Why, I wonder, didn’t Apple call them the AirPods Pro 2nd Generation SE? (I jest…sorta…)
The rest of the story
There’s more that I could write about, including Apple’s (but not third parties’) purge of leather cases, watch bands and the like, its carbon-neutral and broader “green” aspirations, and the well-intentioned but cringe-worthy sappy video that accompanied their rollout. But having just passed through the 2,000 word threshold, and mindful of both Aalyia’s wrath (again I jest…totally this time) and her desire for timely publication of my prose, I’ll wrap up here. I encourage and await your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- What’s next for USB and Thunderbolt?
- Display technologies: Refinements and new entrants
- Cutting into a conventional USB-C charger
- The Apple 2023 WWDC: One more thing? Let’s wait and see
- Apple’s first-generation HomePod: A teardown facilitated by a design that’s flawed
- Thunderbolt: Good industry-standard intentions undone by proprietary implementations
The post Apple’s latest product launch event takes flight appeared first on EDN.
Output Voltage and Diode Current in a Boost Converter
ChatGPT: Writing Code with Artificial Intelligence
ChatGPT is an artificial intelligence-based chatbot that was introduced by OpenAI on November 30th, 2022. With ChatGPT, it is possible to engage in conversations and discussions. In fact, GPT stands for Generative Pretrained Transformer, and it is programmed to converse with the public. Various voice assistants also operate on this basis. We tested […]
The post ChatGPT: Writing Code with Artificial Intelligence appeared first on Open Electronics. The author is Boris Landoni
Free Webinar: Arduino IoT Cloud and ESP32 Demoboard
Connect our ESP32 Demoboard to Arduino IoT Cloud to remotely monitor and control sensors via a web dashboard. In this free webinar, Fabrizio Mirabito (IoT Cloud Manager at Arduino) will demonstrate how to connect the ESP32 to the cloud, enabling remote monitoring and control of sensors and actuators through a web dashboard. For this purpose, […]
The post Free Webinar: Arduino IoT Cloud and ESP32 Demoboard appeared first on Open Electronics. The author is Boris Landoni
A 50 MHz 50:50-square wave output frequency doubler/quadrupler

On two days in the course of every year, one in March heralding the start of spring and another in September marking the first day of fall, the Earth’s axis of rotation aligns perpendicular to the rays of the Sun. These days are the equinoxes and, as the name implies, they divide the day into nominally equal intervals of daylight and night.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Author of multiple EDN design ideas, Jim McLucas (Mr. Equinox) evidently has a passion and a talent for devising circuits that also divide up time into equal intervals. He has published several clever and innovative design ideas that convert arbitrary waveshapes into 50:50 square waves, thus slicing and dicing the time axis into equal segments. He’s also often included wide-range frequency-doubler functions:
- Convert any signal to exactly 50% duty cycle
- Frequency doubler with 50 percent duty cycle
- Fast(er) frequency doubler with square wave output
I thought this looked like a fun concept and design challenge, and Jim kindly gave me permission to borrow it and try designing an “equinoctial” circuit of my own. Figure 1 shows the result.
Figure 1 Kibitzer’s version of a McLucas frequency multiplier and square wave generator.
Figure 1’s circuit comprises two almost identical sections: the input processor, IP (U1pin1 through A1), and the output generator, OG (U1pin12 through A2).
The IP is capable of working in either of two modes as selected by jumper J1 or J2. J1 puts the IP into 50:50 mode in which it will accept any duty cycle input and convert it to a symmetrical 50% duty cycle square wave, suitable for frequency doubling by the OG. (This circuit concept is purely Mr. McLucas’s.) J2 puts the IP into frequency-doubling mode in which an input waveshape that’s already 50:50 symmetrical is doubled before input to the OG for net frequency quadrupling.
When frequency-doubling jumper J2 is selected, the combination of RC delays (R1C4 in the IP and R8C3 in the OG) and XOR gates (U1) generates high-speed pulses (~6 ns wide) on each input edge: two pulses per cycle, and thus a frequency-doubled input to the OG for net quadrupling. If J1 is jumpered instead, R1C4 is bypassed, and the IP generates just one pulse per cycle, an unmultiplied 50:50 square wave, for doubling by the OG.
The hearts of both IP and OG are simple but fast timing loops in which a very fast monostable flip-flop is forced by feedback from an op-amp integrator to generate 50:50 square waves. (Yup. Jim’s idea again.)
My variation on Jim’s basic timing loop concept consists of U3’s two D type flip-flops and the surrounding components, including Schottky switching diodes D1 and D2, current sink transistors Q1 and Q2, and timing capacitors C1 and C2. Because the two loops are essentially identical, let’s talk about the OG loop.
Each timing sequence begins when U1pin8 delivers a clock pulse to U3pin3. U3 is positive-edge-triggered and responds by driving U3pin6 low. This disconnects D2 from timing cap C2 and allows current sink Q2 to ramp it down toward the switching threshold of U3pin4 (-SET).
The timing interval thus begun has a duration (~10 ns to 500 µs) determined by Q2’s collector current as controlled in turn by integrator A2. The intent is to force the interval to be accurately 50% of the time between U1pin8 pulses. A2 does this by subtracting the 2.5 V reference developed by the R6R7 voltage divider from the pulse train at U2pin13 and accumulating the averaged difference on feedback capacitor C6.
If the duty cycle at U2pin13 is <50%, indicating that the U3 timeout is too long, A2’s output will ramp up, increasing Q2’s collector current and C2’s ramp rate, thereby making the timeout shorter. If it’s >50%, A2 will ramp down, decreasing IcQ2 and lengthening the timeout. Net result: after a few seconds, U2pin13 will output an accurately 50:50 square wave at 2 or 4 times (depending on J1/J2) the input frequency, as the sketch below illustrates.
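To see why this loop settles, here is a toy numerical model of the servo. The constants are normalized and arbitrary; this is a convergence illustration, not a simulation of the actual circuit:

```python
# Toy model of the OG duty-cycle servo: timeout ~ C*Vth / I_sink, and the
# integrator (A2) adjusts I_sink each cycle until U2pin13's duty hits 50%.
# All constants are normalized and arbitrary, chosen only to show convergence.
k = 0.1        # stands in for C2 times the -SET threshold voltage
period = 1.0   # input pulse spacing
i_sink = 0.15  # initial Q2 collector current (normalized)
gain = 0.2     # integrator gain per cycle

for cycle in range(40):
    timeout = k / i_sink             # C2 ramp time down to the threshold
    duty = 1.0 - timeout / period    # fraction of the period the output is high
    i_sink += gain * (0.5 - duty)    # duty < 50% -> more current -> shorter timeout

print(f"duty settles at {duty:.4f}")  # -> 0.5000, i.e., a 50:50 square wave
```

The error shrinks geometrically cycle by cycle, which, scaled to real component values, corresponds to the few-second settling described above.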
Provided, of course, that said frequency is within the limits of the timing loop.
The high end of said frequency range is mainly limited by the propagation delays of U3, Q2, and D2. These sum to about 10 ns (maybe a smidgeon less) and thus limit the max frequency to ~1/(10 ns + 10 ns) = ~1/20 ns = ~50 MHz (or possibly a bit more). The low end is limited by leakage currents (mainly through D2) that can cause C2 to continue to ramp down even when A2 turns Q2 completely off. This leakage can sum to upwards of 10 nA (especially if the diode is warm) and sets a bottom-end interval of ~1 ms and a temperature-dependent minimum frequency of (very) roughly ~1/(1 ms + 1 ms) = ~1/2 ms = ~500 Hz.
OG output is routed through U2pins 6 and 8 and summed by R12 and R13 to produce a convenient 5 Vpp, ~50 Ω output. If no input is provided, the output shuts down at zero volts, preventing overheating of U2.
An additional detail is A3. It serves as an IP duty-cycle comparator that holds OG timing-loop activity disabled until the IP has converged (or nearly so) and is producing an accurate 50:50 pulse train. This avoids erratic and persistent confusion of the OG feedback loop, which can occur if it’s allowed to try to converge prematurely.
It was indeed a fun project—all things being “equal”. Thanks, Jim!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Nearly 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Fast(er) frequency doubler with square wave output
- Convert any signal to exactly 50% duty cycle
- Frequency doubler with 50 percent duty cycle
- Triangle waves drive simple frequency doubler
The post A 50 MHz 50:50-square wave output frequency doubler/quadrupler appeared first on EDN.
Discover MacDermid Alpha’s innovative integrated solutions that are energizing the battery and power electronics materials at The Electric & Hybrid Vehicle Technology Expo, Novi
(Waterbury, CT USA) September 8, 2023 – MacDermid Alpha Electronics Solutions, a leading provider of integrated materials and technologies for the electronics industry, will be at The Electric & Hybrid Vehicle Technology Expo 2023 in Novi from September 12 to 14. Engage with the industry experts at booth 1411, where they will showcase their latest integrated material solutions. Visitors can discover, using data-driven models, how MacDermid Alpha solutions enhance electric vehicles’ range and reliability by optimizing inverter power and enabling increased battery efficiency.
Powerful solutions when performance matters
“Our team anticipates this show, where we have the opportunity to meet automotive engineers and designers and offer guidance on material technology selection to improve EV efficiency,” says Chris Klok, Director of Vehicle Electrification Technology. Chris continues by highlighting the topics MacDermid Alpha will cover. He mentions the importance of efficient thermal management: “It is about microns and millimeters. Thermal interface materials that are not optimized can easily be displaced by vibrations or temperature changes. Our expertise in this area helps engineers and designers make the optimal choice.” Moving on to the battery management system, Chris emphasizes the need for “high-reliability solder alloys” in applications where solder is the preferred choice. In the realm of powertrain considerations, he points out that “the shift from silicon-based power modules to advanced alternatives like silicon carbide is revolutionizing the industry and requires sintering as the joining technology. This change dictates the use of our Argomax® silver-sintering as the assembly technology.” Chris concludes by highlighting: “Our solutions are also used in charging infrastructure, where integrated materials extend component life and charging efficiency. In fact, we have just launched RELIANCE, our Reliability Enhancement Tool, powered by our Integrated Solutions Data Matrix, which clearly demonstrates optimum material choices.”
The MacDermid Alpha team looks forward to meeting show visitors at booth 1411, to demonstrate RELIANCE – Reliability Enhancement Tool, share insights, collaborate, and shape the future of electric vehicles together.
The Electric & Hybrid Vehicle Technology Expo co-located with The Battery Show in North America brings together engineers, business leaders, and top industry companies to discuss technologies and innovations that are influencing the future of electric vehicles. Visiting the MacDermid Alpha booth at this show is an opportunity to explore enabling material technologies surrounding EV battery applications.
![Skateboard angle view background BDB 220922[1] copy](https://www.eletimes.com/wp-content/uploads/2023/09/Skateboard-angle-view-background-BDB-2209221-copy-1024x576.jpg)
MacDermid Alpha Electronics Solutions, a prominent division of Element Solutions Inc, holds a distinguished position as a global leader in the field of fully integrated materials. They empower manufacturers worldwide to enhance their performance, reliability, and sustainability. Their expertise is segmented into three vital divisions:
- Circuitry Solutions: MacDermid Alpha Electronics Solutions pioneers advanced specialty chemical and material technologies tailored to meet the circuitry demands of the electronics industry.
- Semiconductor & Assembly Solutions: They specialize in delivering cutting-edge solutions for semiconductors and assembly processes, driving innovation and reliability in these critical sectors.
- Film & Smart Surface Solutions: With a focus on materials and technologies for films and smart surfaces, MacDermid Alpha Electronics Solutions is at the forefront of transforming the future of electronics.
With a legacy spanning over a century of innovation, MacDermid Alpha has garnered the trust of manufacturers spanning more than 50 countries.
What sets MacDermid Alpha Electronics Solutions apart is its unique ability to promptly deliver high-quality solutions and provide technical services that comprehensively cover the entire electronics supply chain. They are actively shaping industries such as automotive, consumer electronics, mobile devices, telecom, data storage, and infrastructure.
For those seeking to power their path to success in the electronics industry, MacDermid Alpha Electronics Solutions offers an exceptional opportunity. Join them on their journey of innovation and excellence.
The post Discover MacDermid Alpha’s innovative integrated solutions that are energizing the battery and power electronics materials at The Electric & Hybrid Vehicle Technology Expo, Novi appeared first on ELE Times.
Microchip Launches MPLAB® Machine Learning Development Suite to More Easily Incorporate ML Into MCUs and MPUs
Unique solution is first to support 8-bit, 16-bit and 32-bit MCUs and 32-bit MPUs for ML at the edge
Machine Learning (ML) is becoming a standard requirement for embedded designers working to develop or improve a vast array of products. Meeting this need, Microchip Technology (Nasdaq: MCHP) has launched a complete, integrated workflow for streamlined ML model development with its new MPLAB® Machine Learning Development Suite. This software toolkit can be utilized across Microchip’s portfolio of microcontrollers (MCUs) and microprocessors (MPUs) to add an ML inference quickly and efficiently.
“Machine Learning is the new normal for embedded controllers, and utilizing it at the edge allows a product to be efficient, more secure and use less power than systems that rely on cloud communication for processing,” said Rodger Richey, VP of Microchip’s Development Systems business unit. “Microchip’s unique, integrated solution is designed for embedded engineers and is the first to support not just 32-bit MCUs and MPUs, but also 8- and 16-bit devices to enable efficient product development.”
ML uses a set of algorithmic methods to curate patterns from large data sets to enable decision making. It is typically faster, more easily updated and more accurate than manual processing. One example of how this tool will be utilized by Microchip customers is to enable predictive maintenance solutions to accurately forecast potential issues with equipment used in a variety of industrial, manufacturing, consumer and automotive applications.
The MPLAB Machine Learning Development Suite helps engineers build highly efficient, small-footprint ML models. Powered by AutoML, the toolkit eliminates many repetitive, tedious and time-consuming model-building tasks, including extraction, training, validation and testing. It also provides model optimizations so that the memory constraints of MCUs and MPUs are respected.
When used in combination with the MPLAB X Integrated Development Environment (IDE), the new toolkit provides a complete solution that can be easily implemented by those with little to no ML programming knowledge, which can eliminate the cost of hiring data scientists. It is also sophisticated enough for more experienced ML designers to control.
Microchip also offers the option to bring a model from TensorFlow Lite and use it in any MPLAB Harmony v3 project, a fully integrated embedded software development framework that provides flexible and interoperable software modules to simplify the development of value-added features and reduce a product’s time to market. In addition, the VectorBlox Accelerator Software Development Kit (SDK) offers the most power-efficient Convolutional Neural Network (CNN)-based Artificial Intelligence/Machine Learning (AI/ML) inference with PolarFire® FPGAs.
MPLAB Machine Learning Development Suite provides the tools necessary for designing and optimizing edge products running ML inference. Visit Microchip’s Machine Learning Solutions page to learn more about streamlining the development process while keeping costs down and achieving a quicker time to market with Microchip’s intuitive ML tools.
Pricing and Availability
Pricing varies based on licensing. A free version of the MPLAB Machine Learning Development Suite is available for evaluation. For additional information or to purchase, contact a Microchip sales representative or visit www.microchipdirect.com.
The post Microchip Launches MPLAB® Machine Learning Development Suite to More Easily Incorporate ML Into MCUs and MPUs appeared first on ELE Times.
Netskope Delivers the Next Evolution in Digital Experience Management for SASE with Proactive DEM
Proactive DEM provides high definition visibility and predictive insights alongside proactive remediation capabilities
Netskope, a leader in Secure Access Service Edge (SASE), today announced the launch of Proactive Digital Experience Management (DEM) for SASE, elevating best practice from the current reactive monitoring tools to proactive user experience management. Proactive DEM provides experience management capabilities across the entire SASE architecture, including Netskope Intelligent SSE, Netskope Borderless SD-WAN and Netskope NewEdge global infrastructure.
Digital Experience Management technology has become increasingly crucial amid digital business transformation, with organizations seeking to enhance customer experiences and improve employee engagement. With hybrid work and cloud infrastructure now the norm globally, organizations have struggled to ensure consistent and optimized experiences alongside stringent security requirements.
Gartner predicts that “by 2026, at least 60% of I&O leaders will use DEM to measure application, services and endpoint performance from the user’s viewpoint, up from less than 20% in 2021”. However, monitoring applications, services, and networks is only part of a modern DEM experience, and so Netskope Proactive DEM goes beyond observation, providing Machine Learning (ML)-driven functionality to anticipate, and automatically remediate, problems.
Sanjay Beri, CEO and co-founder of Netskope, commented: “Ensuring a constantly optimized experience is essential for organizations looking to support the best productivity returns for hybrid workers and modern cloud infrastructure, but monitoring alone is not enough. Customers have told us of the challenges they face managing a multi-vendor cloud ecosystem, and so we have yet again innovated beyond industry standards, providing experience management that can both monitor and proactively remediate.”
For issue identification, Netskope Proactive DEM uniquely combines Synthetic Monitoring with Real User monitoring, creating SMART monitoring (Synthetic Monitoring Augmentation for Real Traffic). This enables full end-to-end ‘hop-by-hop’ visibility of data, and the proactive identification of experience-impacting events. SMART monitoring enables organizations to anticipate potential events that might impact upon network and application experience.
While most SASE vendors rely on “gray cloud” infrastructure, built on public cloud, which limits their ability to granularly identify and control any issues, Proactive DEM leverages Netskope NewEdge, the industry’s largest private cloud infrastructure, to deliver 360-degree visibility and control of end-to-end user experience while mitigating issues, including via various self-healing mechanisms, before the user recognizes their experience has degraded.
Netskope Proactive DEM capabilities include:
Predictive Insights with High Definition Visibility:
- Introducing SMART Monitoring (Synthetic Monitoring Augmentation for Real Traffic), combining Real User Monitoring (RUM) and Synthetic Transaction Monitoring (STM) to give organizations a full 360-degree view of users’ digital experiences.
- Reducing both MTTD (mean time to detection) and MTTR (mean time to resolution) with the correct level of predictive insights and actionable intelligence.
- Providing true visibility into all four stages of the transaction:
  - Endpoint health and performance monitoring
  - Hop-by-hop view of the connectivity path from user to Netskope
  - True visibility into the performance of all features of the SASE platform, including client performance
  - Application response monitoring
- Identifying anomalies in normal patterns with machine learning modeling, with actionable and tailored alerts helping to reduce alert false positives and streamline network operations processes and response times.
- Combining proactive and customer-triggered remediation can eliminate or greatly reduce incident impact time.
- Providing lightweight Real User Monitoring capabilities helping networking teams gain visibility while removing friction with the endpoint teams.
- Proactive monitoring of critical business applications provides focus on what matters the most to organizations, helping network operations teams streamline the remediation process, and reducing incident duration.
- Proactive remediation – before the user even reports an issue – reduces the burden on help desks and network operations teams.
- Multi-level routing controls proactively optimize the user experience by identifying the optimum route for critical applications:
  - Proactively switching Netskope NewEdge infrastructure routing decisions, reducing latency and increasing application performance.
  - Selecting the optimum onward path from the Netskope SASE Platform to the application or public cloud provider, routing around external network issues.
- Frictionless integration with Netskope Endpoint SD-WAN’s client capability for performance optimization, for applications sensitive to network degradation such as Zoom and Microsoft Teams.
The post Netskope Delivers the Next Evolution in Digital Experience Management for SASE with Proactive DEM appeared first on ELE Times.