News from the world of micro- and nanoelectronics
Microchip’s PolarFire FPGA’s Single-Chip Crypto Design Flow “Successfully Reviewed” By the United Kingdom Government’s National Cyber Security Centre
The Review confirms strength of PolarFire FPGA’s security solution
Security is now an imperative for all designs in every vertical market. Today, system architects and designers received further evidence of the security of their communications, industrial, aerospace, defense, nuclear, and other systems relying on Microchip Technology’s PolarFire FPGAs. The United Kingdom Government’s National Cyber Security Centre (NCSC) has reviewed the devices when used with the Single-Chip Crypto Design Flow against stringent device-level resiliency requirements.
“The NCSC conducts a very rigorous analysis, and the work done with Microchip on the Design Separation Methodology in the PolarFire FPGA enables the user to take advantage of improved resilience and functional isolation within the device. This reinforces Microchip’s commitment to our comprehensive approach to security,” said Tim Morin, technical fellow at Microchip’s FPGA business unit. “This analysis provides the option for single-chip cryptography in addition to what already exists within the devices for protecting IP, securing data, and protection against physical tampering—an often overlooked and very powerful threat to every electronic system, especially those at the intelligent edge.”
PolarFire FPGAs implement Microchip’s industry-leading security architecture to protect intellectual property, secure data, and secure supply chains.
- PolarFire FPGA IP protection includes:
- AES-256-encrypted configuration files with SHA-256-based HMAC authentication (illustrated in the sketch after this list)
- Processing is protected against Differential Power Analysis (DPA) with technology licensed from Cryptography Research Incorporated (CRI)
- Public key cryptographic cores: Elliptic Curve Cryptography (ECC) for secure distribution of keys
- True random number generators
- PolarFire FPGA data security features include:
- Hardened cryptographic accelerators for use in the end application
- Pass-through CRI license enables royalty-free development of DPA-protected algorithms using techniques patented by CRI
- PolarFire FPGA supply chain security features reduce the risk of counterfeiting, re-marking, and overbuilding and include:
- Silicon biometrics, including Physically Unclonable Functions (PUFs), that allow each device to be uniquely identified and cryptographically validated
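To make the encrypt-and-authenticate idea in the list above concrete, here is a minimal, illustrative sketch of protecting a configuration image with AES-256 encryption and an HMAC-SHA-256 tag. This is not Microchip's actual bitstream format, key scheme, or tooling; the mode choice, framing, and key handling are assumptions for illustration only.

```python
# Illustrative only: AES-256 encryption plus HMAC-SHA-256 authentication of a
# configuration image. This is NOT Microchip's bitstream format or key scheme.
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def protect_image(image: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    assert len(enc_key) == 32          # 256-bit AES key
    iv = os.urandom(16)                # fresh IV per image
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).encryptor()
    ciphertext = enc.update(image) + enc.finalize()
    tag = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()
    return iv + ciphertext + tag       # encrypt-then-MAC framing (assumed)

def verify_and_decrypt(blob: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    iv, ciphertext, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, iv + ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: image rejected")
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(iv)).decryptor()
    return dec.update(ciphertext) + dec.finalize()
```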
The post Microchip’s PolarFire FPGA’s Single-Chip Crypto Design Flow “Successfully Reviewed” By the United Kingdom Government’s National Cyber Security Centre appeared first on ELE Times.
On Semiconductor
FYI:
On Semiconductor has decided to focus all its product marketing on Electric Vehicles. They are telling non-EV customers that their support will be limited, and they will be "at the end of the line" for allocation purposes. Non-EV customers will be required to get their parts through distribution.
The semiconductor supply chain is expected to be constrained again in late 2024/early 2025 with analog parts being of most concern. It is good to know in advance where you stand with your vendors.
Arduino’s newest gambits: Connectivity, industrial designs

Arduino has made its next move to bolster connectivity offerings by joining the AWS Partner Network (APN), a global community of over 100,000 cloud partners from more than 150 countries. The move is aimed at boosting the Arduino PRO product line, introduced in 2020 at the request of OEMs and system integrators.
Arduino PRO features 24 industrial-grade products, including the Portenta X8 Linux SOM and UL-certified Opta PLC, and is deployed by more than 2,000 businesses worldwide.
Figure 1 Arduino Pro, a microcontroller-based board, can be programmed with the Arduino software download and powered via a USB header, battery, or external power supply. Source: Arduino
The announcement underscores two major shifts at the open-source developer platform. First, Arduino wants users to easily create connectivity applications via its cloud platform commonly known as Arduino Cloud. Second, it aims to move beyond a prototype or educational platform and transition toward commercial and industrial applications.
Arduino’s cloud journey
Arduino Cloud offers users an easy path to collect data, control the edge, and gain insights from connected products without the need to build, deploy, and maintain a custom Internet of Things (IoT) platform. The 3-year-old Arduino Cloud is built on AWS and processes 4 billion device messages every month.
Figure 2 Arduino Cloud is an online platform to configure, program, and connect devices with a dashboard that allows users to monitor and control Arduino boards from a web interface. Source: Arduino
Take the example of ABM Vapor Monitoring, which supervises commercial buildings across the United States to ensure that regulated air quality standards are met. The company claims to have slashed product development time by six months and saved over $250,000 in engineering services while using Arduino Cloud.
Arduino has spent the last few years developing the IoT cloud to ensure more users can develop connected products. However, its UNO boards didn’t offer connectivity. So, earlier this year, Arduino released the UNO R4 boards with more powerful processors alongside Wi-Fi and Bluetooth connectivity. “It became very simple to create connectivity products for mobile apps or the web that can be controlled remotely and integrated very easily with the cloud,” said Massimo Banzi, Arduino’s co-founder, chairman and CMO.
Figure 3 UNO R4, powered by a 32-bit microcontroller, comes with a Wi-Fi variant to allow users to connect to the Arduino Cloud and other platforms for IoT projects. Source: Arduino
“Arduino doesn’t just develop boards,” Banzi added. “It’s a combination of development environment and cloud community, which makes Arduino very easy.” He also pointed toward the open architecture at the core of every Arduino product that provides a preferred path to AWS for chips supported by Arduino.
Moreover, in a high-level framework that allows users to migrate code to different platforms, libraries that abstract connectivity channels like Bluetooth make it easier for users to develop connectivity applications. "We have developed a lot of libraries abstracted for high-level tasks, and everything is open source as much as possible," Banzi said.
He gave the example of sensor-based connectivity applications in an Arduino environment. “You don’t need to read a lot of datasheet pages to figure out how to connect a sensor,” he said. “You search the sensor’s name, find one or more libraries developed by the Arduino community, and get going very quickly.”
Arduino in America
Arduino’s new AWS partnership also marks a shift beyond the common perception that it’s a prototyping or educational platform. Furthermore, as it makes advances toward commercial and industrial applications, Arduino has decided to establish a local presence in the United States, setting up two new offices and naming Guneet Bedi the head of U.S. operations.
“Arduino has been hugely popular in the United States, and we have a large community here,” Banzi said. “However, with Arduino being perceived as a prototyping or educational platform, we were able to manage this particular market with local partners without direct involvement.”
But with the launch of Arduino PRO, which is targeted at industrial products, Arduino must cater to large companies. “So, to serve these kinds of customers, we need to have a local team,” he added. “The local team can figure out what these companies need.”
Connectivity and industrial-grade applications mark a new chapter in Arduino’s design journey spanning nearly 15 years. And the latter part, which focuses on commercial and industrial applications, is intrinsically tied to its renewed presence in the United States. These are exciting times at the open-source hardware pioneer, which counts 32 million active developers worldwide.
Related Content
- Arduino Catches IoT Wave
- The 5 Best Arduino Projects
- Arduino board plugs DIYers into the cloud for $69
- Linux-Friendly Arduino Simplifies IoT Development
- Open-source HW in the Modern Era: Interview of Arduino’s CEO Fabio Violante
The post Arduino’s newest gambits: Connectivity, industrial designs appeared first on EDN.
Chua's circuit built from scratch without proper perfboard or oscilloscope
The circuit looks cute but is a pain to build on this board; it would have been easier on a bigger one, but that was all I had at the time. With an old analog scope the pictures would have been better (no pixels and a continuous line, old doesn't always mean worse).
RLD-based astable 555 timer circuit

In the classic configuration and most variants of the astable 555 multivibrator circuit, the timing characteristics are based on the charging and discharging of a capacitor. However, it can be argued that since the exponential voltage of a capacitor is qualitatively similar to inductor current, the latter can be made an alternative timing element for the 555. This was shown in the “Inductor-based astable 555 timer circuit”. In Figure 1, we present another approach for an inductor-based astable 555 multivibrator.
Figure 1 An astable 555 timer circuit based on an inductor, diode, and resistor.
Wow the engineering world with your unique design: Design Ideas Submission Guide
At power-on, the inductor voltage (VL) spikes up and exceeds the 555’s threshold voltage of 2Vcc/3. Output (Vo) at pin 3 goes low and the discharge transistor at pin 7 turns on, which provides a low-resistance path to ground. Inductor current (IL) begins to rise as VL and the voltage at pin 2 (V2) and pin 6 (V6) all fall exponentially.
When V6 gets below the 555’s trigger voltage of Vcc/3, Vo goes high, and the discharge transistor turns off. Because IL was interrupted, the inductor’s voltage reverses, which forward-biases the flywheel diode (D). Pin 7 gets clamped to a diode forward voltage above Vcc. Both IL and VL start to fall towards zero while V2 climbs toward Vcc.
When V2 crosses 2Vcc/3 again, Vo goes low, the discharge transistor turns on, and the train of regular high and low output pulses ensues. The expected waveforms are shown in Figure 2.
Figure 2 The simulated waveforms using Tinkercad (setting: 15 µs/div).
For each state of Vo, we derived the first-order differential equation of the effective circuit. This led us to Equation 1 for calculating the pulse widths:
The symbols are defined in Table 1 where the columns for TH and TL list specific values that the symbols take on. We also considered Rs as the inductor’s DC resistance, RON = 59.135 / Vcc^0.8101 as the resistance of the discharge transistor at pin 7 (refer to “Design low-duty-cycle timer circuits”), and VD = 0.6 V as the diode forward voltage.
Table 1 Formulas to predict timing characteristics.
To test these ideas, we prepared a spreadsheet calculator that predicts TH, TL, and other output characteristics. Then we picked the components listed in Table 2, used a digital LCR tester (SZBJ BM4070) to measure their actual values, and plugged the numbers into the calculator. The predicted attributes of Vo are listed in Table 3.
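For readers who want to plug in their own component values, here is a minimal sketch of such a calculator. The R_ON fit and V_D value come from the text above; since Equation 1 and the symbol values of Table 1 are not reproduced here, the interval calculation below uses the generic first-order RL solution as a stand-in, and the example component values are arbitrary, not those of the experimental circuit.

```python
import math

def r_on(vcc: float) -> float:
    # Empirical on-resistance of the 555's discharge transistor (from the text)
    return 59.135 / vcc ** 0.8101

def rl_interval(l: float, r_total: float, x_start: float, x_end: float, x_final: float) -> float:
    # Time for a first-order RL quantity (current, or a voltage proportional to it)
    # to move from x_start to x_end while heading exponentially toward x_final
    tau = l / r_total
    return tau * math.log((x_start - x_final) / (x_end - x_final))

if __name__ == "__main__":
    vcc = 5.0
    print(f"R_ON at {vcc} V: {r_on(vcc):.1f} ohm")      # roughly 16 ohm
    # Arbitrary example: 10 mH inductor, 50 ohm total resistance, rising from
    # the Vcc/3 trigger level to the 2Vcc/3 threshold level while heading toward Vcc
    print(f"interval: {rl_interval(10e-3, 50.0, vcc/3, 2*vcc/3, vcc)*1e6:.1f} us")
```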
Table 2 Components for the experimental circuit.
Table 3 Predicted versus measured values (Vcc=5.00 volts).
Finally, we connected a USB-powered test and measurement device, the Digilent Analog Discovery 3 (AD3), to our laptop to supply +5 V to the experimental circuit (Figure 3) and observe the waveforms from pins 2 and 6 and pin 3 of the IC (Figure 4). We tested 8 chips from a bin of assorted 555s and noticed that while TH was consistent, the TL values annoyingly lacked precision. Nonetheless, when we compared the AD3 Measurements with the Predicted values in Table 3, we saw that Equation 1 fairly modeled the output of the new multivibrator.
Figure 3 Experimental set-up with the Digilent Analog Discovery 3 connected to supply +5 V to the experimental circuit.
Figure 4 Waveforms of V2, V6, and Vo, and measurements for Vo.
Arthur Edang (M.Sc) taught Electronics and Communications Engineering courses at the Don Bosco Technical College (Mandaluyong, Philippines) for 25 years. His current interests include nonlinear phenomena and chaos in circuits, creative approaches to teaching and research, and adaptive e-books. He started Thinker*Tinker—the YouTube channel where viewers can “examine circuits, play with their equations, and make designs work.”
Maria Lourdes Lacanilao-Edang (M.Engg) has instructed a diverse range of courses in the field of computer engineering, from Basic Electronics to Computer Design Interface Projects. Currently serving as faculty member at the University of Santo Tomas (Manila, Philippines), she specializes in the IT Automation track with particular interests in embedded systems, web and mobile app development, and IoT.
Related Content
- Inductor-based astable 555 timer circuit
- Design low-duty-cycle timer circuits
- Schmitt trigger provides alternative to 555 timer
- 555 timer triggers phase-control circuit
- Adjustable triangle/sawtooth wave generator using 555 timer
The post RLD-based astable 555 timer circuit appeared first on EDN.
IP partnerships stir the world of FPGA chiplets

The tie-ups between IP suppliers of embedded FPGA (eFPGA) and UCIe chiplets mark a new era of FPGA chiplet integration in die-to-die connectivity. Chiplets are rapidly being adopted as heterogeneous multi-chip solutions to enable lower latency, higher bandwidth, and lower cost solutions than discrete devices connected via traditional interconnects on a PCB.
Take YorChip, a supplier of UCIe-compatible IP, which is employing QuickLogic’s eFPGA IP technology to create the first UCIe-compatible FPGA chiplet ecosystem. Unified Chiplet Interconnect Express (UCIe) is an open standard for connecting small, modular blocks of silicon called chiplets. Kash Johal, founder of YorChip, calls his company’s partnership with QuickLogic a giant leap for FPGA technology.
Figure 1 QuickLogic and YorChip have partnered to develop the industry’s first UCIe-enabled FPGA.
The two companies claim that this strategic partnership aims to enable an ecosystem allowing chiplet developers to create a customized system and use chiplets for prototyping and doing early market production. QuickLogic teamed up with connectivity IP supplier eTopus in a similar tie-up last year to create a disaggregated eFPGA-enabled chiplet template solution.
QuickLogic combined its Australis eFPGA IP Generator with chiplet interfaces from eTopus to produce standard eFPGA-enabled chiplet templates. Each template will be designed with native support for chiplet interfaces, including the Bunch of Wires (BOW) and UCIe standards. According to QuickLogic, unlike discrete FPGAs with pre-determined resources of FPGA lookup tables (LUTs), RAM, and I/Os, the disaggregated eFPGA-enabled chiplet template will be available initially as a configurable IP and eventually as known good die (KGD) chiplets.
Figure 2 The disaggregated eFPGA chiplet template solution supports both BOW and UCIe interfaces.
Such collaborations to create eFPGA–enabled chiplet solutions mark an important trend in developing chip-to-chip interconnect technology. In April 2023, the European research institute Fraunhofer IIS/EAS entered a collaboration with eFPGA IP supplier Achronix to build a heterogeneous chiplet solution.
Fraunhofer IIS/EAS, which provides system concepts, design services and fast prototyping in most advanced packaging technologies, will use Speedcore eFPGA IP from Achronix to explore chip-to-chip transaction layer interconnects such as BOW and UCIe. One key application in this project covers the connection of high-speed analog-to-digital converters (ADCs) alongside Achronix eFPGA IP for pre-processing in radars as well as wireless and optical communication.
Figure 3 Fraunhofer IIS/EAS has selected Achronix’s eFPGAs to build a heterogeneous chiplet demonstrator.
Brian Faith, CEO of QuickLogic, says that these efforts to use eFPGA for building heterogeneous chiplets embody a new era of FPGA chiplet integration. And he sees their application in the evolving edge IoT and AI/ML markets. In any case, the design journey toward building the world of FPGA chiplets has already started, and we are likely to hear about more such partnerships incorporating FPGAs into chip-to-chip interconnect technology.
Related Content
- Chiplet interconnect handles 40 Gbps/bump
- Chiplets Gain Popularity, Integration Challenges
- Chiplets advance one design breakthrough at a time
The post IP partnerships stir the world of FPGA chiplets appeared first on EDN.
Desktop DC source shows real precision where it’s needed

I’ve always been intrigued with high-precision instruments, as they usually represent the best of engineering design, craftsmanship, elegance, and even artistry. One of the earliest and still best examples that I recall is the weigh-scale design by the late, legendary Jim Williams, published nearly 50 years ago in EDN. His piece, “This 30-ppm scale proves that analog designs aren’t dead yet,” details how he designed and built a portable, AC-powered scale for nutritional research using standard components and with extraordinary requirements: extreme resolution of 0.01 pound out of 300.00 pounds, accuracy to 30 parts per million (ppm), and no need for calibration during its lifetime.
Jim’s project was a non-production, one-off unit, and while its schematic (Figure 1) was obviously important, it tells only part of the story; there are many more lessons in his description.
Figure 1 This schematic from Jim Williams’ 1976 EDN article on design of a precision scale teaches many lessons, but there’s much more than just the schematic to understand. Source: Jim Williams
To meet his objectives, he identified every source of error or drift and then methodically minimized or eliminated each one via three techniques: using better, more-accurate, more-stable components; employing circuit topologies which self-cancelled some errors; and providing additional “insurance” via physical EMI and thermal barriers. In this circuit, the front-end needed to extract a minuscule 600-nV signal (least significant digit) from a 5-V DC level—a very tall order.
I spoke to Jim a few years before his untimely passing, after he had written hundreds of other articles (see “A Biography of Jim Williams”), and he vividly remembered that design and the article as the event which made him realize he could be a designer, builder, and expositor of truly unique precision, mostly analog circuits.
Of course, it’s one thing to handcraft a single high-performance unit, but it’s a very different thing to build precision into a moderate production-volume instrument. Yet companies have been doing this for decades, as typified by—but certainly not limited to—Keysight Technologies (formerly known as Agilent and prior to that, Hewlett-Packard) and many others, too many to cite here.
Evidence of this is seen in the latest generation of optical test and measurement instruments, designed to capture and count single photons. That’s certainly a mark of extreme precision because individual photons generally don’t have much energy, don’t like to be assessed or captured, and self-destruct when you look at them.
I recently came across another instrument that takes a simple function to an extreme level of precision: the DC205 Precision Voltage Source from Stanford Research Systems. This desktop unit is much more than just a power supply, as it provides a low-noise, high-resolution output which is often used as a precision bias source or threshold in laboratory-science experiments (Figure 2).
Figure 2 This unassuming desktop box represents an impressively high level of precision and stability in an adjustable voltage source. Source: Stanford Research Systems
Its bipolar, four-quadrant output delivers up to 100 V with 1-μV resolution and up to 50 mA of current. It offers true 6-digit resolution with 1 ppm/°C stability (24 hours) and 0.0025 % accuracy (one year).
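To give a rough sense of scale for those numbers, here is a back-of-envelope check. It assumes the accuracy figure is taken relative to the set voltage and that the temperature-stability term simply adds, which is a simplification of how such specs are normally combined.

```python
def worst_case_error_v(setting_v: float, delta_t_c: float = 0.0) -> float:
    # Assumed additive model: 0.0025 % one-year accuracy plus 1 ppm/degC stability
    accuracy_term = 0.0025e-2 * setting_v
    drift_term = 1e-6 * setting_v * delta_t_c
    return accuracy_term + drift_term

print(worst_case_error_v(10.0))       # 0.00025 V, i.e. 250 uV on a 10 V setting
print(worst_case_error_v(10.0, 5.0))  # adds another 50 uV for a 5 degC ambient swing
```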
Two other features caught my attention: it uses a linear power supply (yes, they are still important in specialty applications) to minimize output noise, presumably only for the voltage-output block and not for the entire instrument. There’s also the inclusion of a DB-9 RS-232 connector in addition to its USB and fiber optic interfaces. I haven’t seen an RS-232 interface in quite a while, but I presume they had a good reason to include it.
The block diagram in the User’s Manual reveals relatively little, except to indicate the unit has three core elements which combine to deliver the instrument’s performance: a low-noise, stable voltage reference; a high-resolution digital-to-analog converter; and a set of low-noise, low-distortion amplifiers (Figure 3).
Figure 3 As with Jim Williams’ scale, the core functions of the DC205 look simple, and may be so, but it is also the unrevealed details of the implementation that make the difference in achieving the desired performance. Source: Stanford Research Systems
I certainly would like to know more of the design and build details that squeeze such performance out of otherwise standard-sounding blocks.
As this unit targets lab experiments in physics, chemistry, and biology disciplines, it also includes a feature that conventional voltage sources would not: a scanning (ramping) capability. This triggerable voltage-scanning feature gives the user control over start and stop voltages, scan speed, and scan function, with scan speeds settable from 100 ms to 10,000 s; the scan function can be either a ramp or a triangle wave. Further, for operating in the 100-V “danger zone”, the user must plug a jumper into the rear panel to deliberately and consciously allow operation in that region.
In addition to the DB-9 RS-232 interface supporting legacy I/O and an optical link for state-of-the-art I/O, I noticed another interesting feature called out in the well-written, crisp, and clear user’s manual: how to change the AC-line voltage setting. Some instruments I have seen use a slide switch and some use plug-in jumpers (don’t lose them), but this instrument uses a thumbwheel rotating selector, as shown in Figure 4.
Figure 4 Even a high-end instrument must deal with different nominal power-line voltages, and this rotary switch in the unit makes changing the setting straightforward and resettable. Source: Stanford Research Systems
In short, this is a very impressive standard-production instrument with precision specifications and performance, with what seems like a very reasonable base price of around $2300.
I think about it this way: In the real world of sensors, signal conditioning, and overall analog accuracy achieving stable, accurate performance to 1% is doable with reasonable effort; getting to 0.1% is much harder and reaching 0.01% is a real challenge. Yet both custom and production-instrumentation designers have mastered the art and skill of going far beyond those limits.
It’s similar to when I first saw a list of fundamental physical constants, such as the mass or moment of an electron, which had been measured (not defined) to seven or eight significant figures with an error only in the last digit. I felt compelled to do further investigation to understand how they reached that level of precision and confidence, and how they credibly assessed their sources of error and uncertainty.
What’s the tightest measurement or signal-source accuracy and precision you have had to create? How did you confirm the desired level of performance was actually achieved—if it was?
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related Content
- Power needs go beyond just plain voltage and current
- Learning to like high-voltage op-amp ICs
- Jim Williams’ contributions to EDN
- The Quest for Quiet: Measuring 2nV/√Hz Noise and 120dB Supply Rejection in Linear Regulators, Part 2
- Transistor ∆VBE-based oscillator measures absolute temperature
The post Desktop DC source shows real precision where it’s needed appeared first on EDN.
Sh Ravi Sharma, Chairman TEMA has been appointed as Chairperson – IIIT, Una, Himachal Pradesh
We are happy to announce that the Hon’ble President of India has graciously nominated Sh. Ravi Sharma, Chairman TEMA, as Chairperson, Board of Governors, Indian Institute of Information Technology (IIIT), Una, HP. This is a recognition of the excellent industry experience of Shri Ravi Sharma and also an honour for TEMA that its Chairman has been nominated for this prestigious position.
With 48,500 members and MOU partners in 74 countries globally, TEMA is the oldest Indian industry association devoted to telecom, ICT, and cyber, as well as to the education sector.
TEMA and CMAI extend our heartiest congratulations to Shri Ravi Sharma on this noteworthy appointment and eagerly anticipate his enduring leadership, which serves as an inspiration to all of us.
A former CEO of Alcatel Lucent, Videocon Telecom, and Adani Power, Shri Ravi Sharma is currently the Chairman of two global bodies, the IIT Alumni Council and TEMA (Telecom Equipment Manufacturers Association of India). In addition, he is making valuable contributions to humanity through the “Mission Chetna Foundation”, dedicated to the noble mission of promoting goodness. Mission Chetna operates in 11 states, and more than one crore Indians have benefited from its social service.
Notably, Shri Sharma holds the esteemed “Distinguished Alumni Award” from IIT Roorkee, a recognition cherished by all. He also serves as the chief patron of “The Great Indian Film and Literature Festival” and the “Subodhanand Foundation”, a trust devoted to Vedanta studies and teachings.
Prof NK Goyal, Chairman Emeritus TEMA, said, “This will go a long way for industry-academia cooperation for the people of Himachal Pradesh, and Mr Sharma’s leadership will guide IIIT Una in becoming one of the best Institutes of Excellence. The IIIT will help in innovations and in creating startups engaged in new technologies such as AI, 6G, quantum and others, thus helping achieve our Hon’ble Prime Minister Shri Narendra Modi’s vision of making India a developed nation by 2047. It will empower Himachal Pradesh and India through information and communication technologies.”
It may be recalled that during May 2023, TEMA/CMAI announced its initiative of collaboration with educational institutions, forging partnerships to enhance digital learning opportunities for students across the country. Through these collaborations, TEMA proposed to establish high-speed internet connectivity in educational institutes and provide access to educational resources, thereby empowering the next generation with knowledge and skills for a digital future.
The post Sh Ravi Sharma, Chairman TEMA has been appointed as Chairperson – IIIT, Una, Himachal Pradesh appeared first on ELE Times.
Using Machine Learning to Characterize Database Workloads
Databases have been helping us manage our data for decades. Like much of the technology that we work with on a daily basis, we may begin to take them for granted and miss the opportunities to examine our use of them—and especially their cost.
For example, Intel stores much of its vast volume of manufacturing data in a massively parallel processing (MPP) relational database management system (RDBMS). To keep data management costs under control, Intel IT decided to evaluate our current MPP RDBMS against alternative solutions. Before we could do that, we needed to better understand our database workloads and define a benchmark that is a good representation of those workloads. We knew that thousands of manufacturing engineers queried the data, and we knew how much data was being ingested into the system. However, we needed more details.
“What types of jobs make up the overall database workload?”
“What are the queries like?”
“How many concurrent users are there for each kind of a query?”
Let me present an example to better illustrate the type of information we needed.
Imagine that you’ve decided to open a beauty salon in your hometown. You want to build a facility that can meet today’s demand for services as well as accommodate business growth. You should estimate how many people will be in the shop at the peak time, so you know how many stations to set up. You need to decide what services you will offer. How many people you can serve depends on three factors: 1) the speed at which the beauticians work; 2) how many beauticians are working; and 3) what services the customer wants (just a trim, or a manicure, a hair coloring and a massage, for example). The “workload” in this case is a function of what the customers want and how many customers there are. But that also varies over time. Perhaps there are periods of time when a lot of customers just want trims. During other periods (say, before Valentine’s Day), both trims and hair coloring are in demand, and yet at other times a massage might be almost the only demand (say, people using all those massage gift cards they just got on Valentine’s Day). It may even be seemingly random, unrelated to any calendar event. If you get more customers at a peak time and you don’t have enough stations or qualified beauticians, people will have to wait, and some may deem it too crowded and walk away.
So now let’s return to the database. For our MPP RDBMS, the “services” are the different types of interactions between the database and the engineers (consumption) and the systems that are sending data (ingestion). Ingestion consists of standard extraction-transformation-loading (ETL), critical path ETL, bulk loads, and within-DB insert/update/delete requests (both large and small). Consumption consists of reports and queries—some run as batch jobs, some ad hoc.
At the outset of our workload characterization, we wanted to identify the kinds of database “services” that were being performed. We knew that, like a trim versus a full service in the beauty salon example, SQL requests could be very simple or very complex or somewhere in between. What we didn’t know was how to generalize a large variety of these requests into something more manageable without missing something important. Rather than trusting our gut feel, we wanted to be methodical about it. We took a novel approach to developing a full understanding of the SQL requests: we decided to apply Machine Learning (ML) techniques including k-means clustering and Classification and Regression Trees (CARTs).
- k-means clustering groups similar data points according to underlying patterns.
- CART is a predictive algorithm that produces human-readable criteria for splitting data into reasonably pure subgroups.
In our beauty salon example, we might use k-means clustering and CART to analyze customers and identify groups with similarities such as “just hair services,” “hair and nail services,” and “just nail services.”
For our database, our k-means clustering and CART efforts revealed that ETL requests consisted of seven clusters (predicted by CPU time, highest thread I/O, and running time) and SQL requests could be grouped into six clusters (based on CPU time).
Once we had our groupings, we could take the next step, which was to characterize various peak periods. The goal was to identify something equivalent to “regular,” “just before Valentine’s” and “just after Valentine’s” workload types—but without really knowing upfront about any “Valentine’s Day” events. We started by generating counts of requests per group for each hour, based on months of historical database logs. Next, we used k-means clustering again, this time to create clusters of one-hour slots that are similar to each other with respect to their counts of requests per group. Finally, we picked a few one-hour slots from each cluster that had the highest overall CPU utilization to create sample workloads.
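As a rough sketch of that second clustering pass (grouping one-hour slots by their per-group request counts and then picking representative hours), the workflow could look something like the following. The file name, column layout, and cluster count are illustrative assumptions, not Intel's actual values.

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is a one-hour slot; each column is the request count for one
# workload group (e.g., the 7 ETL clusters and 6 SQL clusters from the text).
hourly_counts = np.loadtxt("hourly_group_counts.csv", delimiter=",")  # hypothetical file

# Standardize so that high-volume groups don't dominate the distance metric
scaled = (hourly_counts - hourly_counts.mean(axis=0)) / (hourly_counts.std(axis=0) + 1e-9)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)  # cluster count chosen for illustration
labels = kmeans.fit_predict(scaled)

# From each cluster, list a few member hours that could then be screened for
# the highest overall CPU utilization to build the sample workloads
for c in range(kmeans.n_clusters):
    members = np.where(labels == c)[0]
    print(f"cluster {c}: {members.size} hours, e.g. slots {members[:3].tolist()}")
```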
The best thing about this process was that it was driven by data and reliable ML-based insights. (This is not the case with my post-Valentine’s massages-only conjecture, because I didn’t have any gift cards.) The workload characterization was essential to benchmarking the cost and performance of our existing MPP RDBMS and several alternatives. You can read the IT@Intel white paper, “Minimizing Manufacturing Data Management Costs,” for a full discussion of how we created a custom benchmark and then conducted several proofs of concept with vendors to run the benchmark.

The post Using Machine Learning to Characterize Database Workloads appeared first on ELE Times.
Eliminating the need for an MCU (and coding) in Highly Efficient AC/DC Power Supplies
JON HARPER | Onsemi
Grid power is AC for many good reasons, yet almost every device requires DC power to operate. This means that AC-DC power supplies are used almost everywhere, and, in a time of environmental awareness and rising energy costs, their efficiency is critical to controlling operating costs and using energy wisely.
Simply put, efficiency is the ratio between output power and input power. However, the input power factor (PF) must be considered – this is the ratio between useful (true) power and total (apparent) power in any AC-powered device – including power supplies.
With a purely resistive load, the PF will be 1.00 (‘unity’) but a reactive load will decrease the PF as the apparent power rises, leading to reduced efficiency. A less-than-unity PF results from out-of-phase voltage and current, a significant harmonic content, or a distorted current waveform – common in discontinuous electronic loads such as switched mode power supplies (SMPS).
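As a quick reminder of the arithmetic behind those definitions, here is a minimal sketch; the numbers are illustrative only, not measurements from any particular supply.

```python
def power_factor(real_power_w: float, v_rms: float, i_rms: float) -> float:
    # PF = true (real) power divided by apparent power, as defined above
    return real_power_w / (v_rms * i_rms)

# A purely resistive 230 W load on 230 V drawing 1 A RMS: PF = 1.00
print(power_factor(230.0, 230.0, 1.0))
# A distorted SMPS load drawing the same 1 A RMS but only 150 W of real power
print(power_factor(150.0, 230.0, 1.0))   # about 0.65
```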
PF Correction
Given the impact on efficiency that a low PF has, when power levels are above 70 W, legislation requires designers to incorporate circuitry that will correct the PF to a value close to unity. Often, active PF correction (PFC) employs a boost converter that converts rectified mains to a high DC level. This rail is then regulated using pulse width modulation (PWM) or other techniques.
This approach generally works and is simple to deploy. However, modern efficiency requirements such as the challenging ‘80 PLUS Titanium’ standard stipulate the efficiency across a wide operating power range, requiring peak efficiencies of 96% at half load. This means the line rectification and PFC stage must achieve 98%, as the following PWM DC-DC stage will lose a further 2%. Achieving this is very challenging due to the losses within the diodes in the bridge rectifier.
Replacing the boost diode with a synchronous rectifier helps, and the two line-rectifier diodes can be similarly replaced, which further enhances efficiency. This topology is referred to as totem pole PFC (TPPFC) and, in theory, with an ideal inductor and perfect switches, efficiency will approach 100%. While silicon MOSFETs offer good performance, wide bandgap (WBG) devices come far closer to ‘ideal’ performance.

As designers increase frequency to reduce the size of magnetic components, dynamic losses in switching devices will also increase. As these losses can be significant with silicon MOSFETs, designers are turning to WBG materials including silicon carbide (SiC) and gallium nitride (GaN) – especially for TPPFC applications.
Critical conduction mode (CrM) is generally the preferred approach for TPPFC designs at power levels up to a few hundred watts, balancing efficiency and EMI performance. In kilowatt designs, continuous conduction mode (CCM) further reduces RMS current within switches, reducing conduction loss.

Even CrM can see an efficiency drop approaching 10% at light loads, which is a roadblock to achieving ‘80 PLUS Titanium’. Clamping (‘folding back’) the maximum frequency forces the circuit into discontinuous conduction mode (DCM) at light loads, thereby significantly reducing peak currents.
Overcoming Design Complexity
With four active devices to be driven synchronously and the need to detect the inductor’s zero current crossing to force CrM, TPPFC design can be far from trivial. Additionally, the circuit must switch in and out of DCM while maintaining a high power factor and generating a PWM signal to regulate the output, as well as providing circuit protection (such as overcurrent and overvoltage).
The obvious way to address these complexities is to deploy a microcontroller (MCU) for the control algorithms. However, this requires the generation and debugging of code, which adds significant effort and risk to the design.
CrM-based TPPFC without Coding
However, time-consuming coding can be avoided by using a fully integrated TPPFC control solution. These devices offer several advantages, including high performance, faster design time, and reduced design risk, as they eliminate the need to implement an MCU and associated code.
A good example of this type of device is onsemi’s NCP1680 mixed-signal TPPFC controller that operates in constant on-time CrM, thereby delivering excellent efficiency across a wide load range. The integrated device features ‘valley switching’ during frequency foldback at light loads to enhance efficiency by switching at a voltage minimum. The digital voltage control loop is internally compensated to optimize performance throughout the load range, while ensuring that the design process remains simple.
The innovative TPPFC controller includes a novel low-loss approach for current sensing, and its cycle-by-cycle current limiting offers substantial protection without the need for an external Hall-effect sensor, thereby reducing complexity, size, and cost.

A full suite of control algorithms is embedded within the IC, giving designers a low-risk, tried-and-tested solution that delivers high performance at a cost-effective price point.
The post Eliminating the need for an MCU (and coding) in Highly Efficient AC/DC Power Supplies appeared first on ELE Times.
Factory automation realizes boost from new technologies
PHILIP LING | Avnet
Factory automation strategies are reaping benefits now from several technologies and their enabling elements.
Automation has a long history, and it has played an essential role in all industrial markets. Repeatable manufacturing to high quality and in high volume is the essence of industrialization. The cost of the finished product can be directly related to the level of automation in the manufacturing process.
Continuous advancements in automation deliver mechanical excellence. That excellence, in turn, relies on control. Manufacturers must balance technologies used to implement control to this level with commercial considerations. These include the cost of development and deployment – or capital expenditure – and the recurring cost of implementation – or operational expenditure.
New technologies can impact factory automation when manufacturers align capex and opex while accurately assessing total cost of ownership. This article examines some technologies that meet this requirement and some that may influence the direction of automation.
Industrial automation technologies
Several technologies with their corresponding enabling elements are impacting industrial automation right now. The competitive advantage of the Industrial Internet of Things (IIoT) will increase as these technologies become more pervasive.
- Single-pair Ethernet
- Edge computing
- Time-sensitive networking

Wide area networking (WAN) has changed every aspect of modern life. Consumers enjoy internet access anywhere, even on a transatlantic flight. Its use in IIoT means information technology and operational technology are colliding, and the way connectivity is used is still developing. Different parts of the ecosystem are at different stages of their IIoT journey.
The industrial sector is now largely aligned on the use of Ethernet to support an IP architecture. There is also growing momentum behind single-pair Ethernet (IEEE 802.3cg) in industrial automation. The move to single-pair Ethernet (SPE), which the automotive market developed, provides a simplified network at the physical level. It offers both data and power on the same two wires, speeds of 10 Mbps, reaches of 1000 meters, and support for multidrop configuration.
The development of SPE is helping to bring Ethernet into an environment where single-pair connectivity has long been the preferred solution. SPE’s significance will increase as support grows. An example of this is the advanced physical layer (APL) developed by leaders in the industrial sector. Ethernet-APL uses the 10BASE-T1L part of the standard, plus extensions. Ethernet-APL covers physical layer attributes, including power, connectors and cables. The Ethernet-APL layer is also specified for use in hazardous areas.
The Ethernet-APL group comprises OPC Foundation, Profibus, FieldComm Group, and ODVA. The physical layer supports various high-level network protocols, including EtherNet/IP, HART-IP, OPC UA, and Profinet.
Edge computing
The IIoT introduced cloud computing to the factory floor. Cloud platforms play an important role in data aggregation, analysis, and distribution to back-office applications. Edge computing puts the power of the cloud directly on the production line.
An edge computing solution employs high-end processors running cloud-level software on a local device. That device connects directly to the manufacturing equipment. There are several reasons why edge computing is popular.
First, it allows some or all of the operational data to stay inside the organization’s walls. There are good security imperatives for taking this approach. A further reason is simply to minimize the cost of moving data around. Another is to avoid the latency associated with processing time-sensitive data in a cloud platform.
Second, edge computing creates a contained environment that enables manufacturers to take greater control over their processes. This work cell approach can support distributed and separate workflows that provide greater flexibility over how assets are deployed. An edge computer can turn a small cluster of machines into a discrete manufacturing process that can operate outside a wider manufacturing environment.
The concept of edge computing goes beyond securing data or minimizing cloud transfers. It supports trends such as micromanufacturing or on-demand manufacturing.

As the IT and OT networks continue to merge, the need for time-sensitive networking (TSN) has increased. The IEEE Standards Association is working on several profiles for time-sensitive networking in various verticals, including industrial automation.
The purpose of the specification is to support time-critical packets on an Ethernet network. It achieves this using three mechanisms. The first is a method to prioritize Ethernet frames that are time critical by delaying frames that are not. Transmission time is one tool used to set priorities. It also looks at the frame length to determine if it can be sent without disrupting higher priority traffic. A further method is to build fault-tolerant networks with multiple paths to avoid latency.
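A toy model of the frame-length check described above might look like the following: it simply asks whether a lower-priority frame can finish serializing before the next window reserved for time-critical traffic opens. The real IEEE TSN profiles (gate schedules, frame preemption) are considerably more involved; the function and numbers here are illustrative only.

```python
def can_transmit(frame_bytes: int, link_rate_bps: float,
                 now_s: float, next_critical_window_s: float,
                 guard_s: float = 0.0) -> bool:
    # A frame may go out only if it finishes (plus an optional guard band)
    # before the next reserved window for time-critical traffic opens
    tx_time_s = frame_bytes * 8 / link_rate_bps
    return now_s + tx_time_s + guard_s <= next_critical_window_s

# A full-size 1500-byte frame takes 120 us to serialize at 100 Mbit/s
print(can_transmit(1500, 100e6, now_s=0.0, next_critical_window_s=150e-6))  # True
print(can_transmit(1500, 100e6, now_s=0.0, next_critical_window_s=100e-6))  # False: hold the frame
```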
Semiconductor manufacturers are now implementing these features at the chip level. Multi-chip solutions are evolving into single-chip or system-on-chip product offerings. This will continue in parallel with efforts to move to standard application protocols.
Expect the cost and complexity of implementing TSN to come down quite rapidly. Not all manufacturers will see a benefit or need for TSN, at least not immediately. As IIoT pervades the manufacturing environment, TSN is likely to feature more strongly.
Technologies that will impact factory automation
Industrial equipment has a long operational lifetime. This means change can be slow in comparison to other markets. As an example, wireless mesh networks are still mostly limited to connecting sensors in an industrial environment. Wired connectivity is still dominant for control.
However, there is also more cross-pollination between verticals, encouraged by wide area networking. Many of the technologies that have been created in – or are dependent on – IT are making their way into the OT world. Some of the prominent and most promising technologies include:
- Digital twins
- Blockchain
- Microservices
The idea of operating duplicate systems, or twins, in different environments goes back to NASA’s early days. A twin can be used to replicate and react to operational data happening somewhere else, even off planet. Moving to the digital domain has enabled the concept to be more cost efficient and, potentially, more flexible.
Digital twinning involves modelling an action, rather than simulating it. The difference lies in the twin using real-world data. This is where IoT technologies play a part. Sensors are the primary source of data.
Digital twins in industrial automation
It becomes feasible to use digital twins as manufacturers deploy more sensors on industrial equipment and couple them with high-speed networking.
Recent developments indicate OEMs are now implementing digital twins at a work cell level. This makes it easier to model part of a system as a function, rather than trying to model an entire factory.
Using multiple digital twins will become more common with the development of edge processing. It follows that the two are closely coupled, as edge processing is effective at a local level. Although edge processing is not dependent on digital twinning, the symbiosis is apparent.
Blockchain
In a manufacturing environment, the term “blockchain” can be closely associated with supply chain. Engineers have discussed the concept of using blockchain technology to authenticate and track the products in the supply chain for several years.
Part of the potential in adopting IoT comes from the commercialization of information. Trust will be an important part of the success. Using blockchain to provide evidence of authenticity could be key.
The move toward providing something as a service is also building momentum. Here, blockchain could be used to validate the hardware platform delivering that service. If the service relies on genuine parts being fitted to a system, blockchain could be the best way of authenticating those parts.
Microservices
If a theme is emerging in industrial automation’s evolution, perhaps it’s around making work cells more intelligent. Edge computing and soon digital twins are focused on work cells and modular functionality.
Modularity at a software level is one way to describe microservices. The methodology is now common in cloud platforms. A microservice architecture is more agile, more scalable and easier to maintain than large monolithic software structures.
The diversity in industrial automation processes suggests microservices will become more common here, too. Flexibility on the shop floor will mean machines can be repurposed more frequently. Using a microservice approach will support that flexibility.
AI in industrial automation
There is enormous scope for AI to impact industrial automation. Current examples of AI demonstrate that the technology is good at following procedures and adapting within known parameters. Its real strength comes from reacting to the unexpected in a predictable way.
Using AI in this way should improve procedural operations that are handled by programmable logic controllers (PLCs). AI can also now write the ladder logic that configures the PLCs. This scenario uses AI in a mechanical way to augment a function.
Putting AI into human-centric operations may be the next phase. In this scenario, the AI would need to “think” like an operator. It would at first assist and, potentially, in time displace the human in the loop.
Conclusion
Industrial automation is constantly developing. New technologies, often from other market verticals, provide the momentum for improvement. Caution is always used, but the pace of change seems to be increasing.
Studies show a widening productivity gap between large OEMs, which can afford to implement new technologies more aggressively, and smaller enterprises. As access to these technologies improves and the total cost of ownership softens, this gap may once again close.
The post Factory automation realizes boost from new technologies appeared first on ELE Times.
5 Tips to Deploy Cobots Effectively
Businesses can effectively deploy cobots in manufacturing by leveraging the flexibility of cobots to maximize the ROI of human-robot collaboration. A thorough planning process is central to the success of any cobot implementation. Businesses have to ensure they are taking steps to prepare employees and protect their safety at work. What other tips can businesses use to ensure they maximize the potential of their cobots?
1. Perform a Risk Assessment
The first step to effectively deploy cobots in manufacturing is understanding the potential risks they pose. Modern robots are typically designed with high safety standards in mind. However, any new piece of machinery in the workplace can pose risks to employees and property.
Conducting a thorough risk assessment will allow businesses to understand their unique risk factors. This process involves analyzing the area where the robot is going to be installed as well as the job it is going to perform. It is best to do a risk assessment in a team, since different people may think of different possible risks.
For example, ideally every employee on site is wearing their PPE, but the risk assessment team can’t count on this. They have to consider every possibility, including potential safety hazards resulting from accidents or negligent behavior. The same applies to the cobots. To effectively deploy cobots, businesses have to be prepared for mechanical malfunctions.
Identify as many risks as possible and rank them numerically based on the degree of risk. Factors like potential injury severity or repair costs can contribute to a risk’s numerical ranking.
2. Identify the Right Applications
One of the most common speed bumps when attempting to effectively deploy cobots is finding the right application. Businesses have to keep in mind that cobots have unique advantages compared to conventional robots. They need to be integrated in a highly strategic manner to maximize the ROI they deliver.
One of the top benefits of cobots is their high level of adaptability and flexibility. They are able to perform a greater variety of tasks compared to conventional robots. They can also do this in close proximity to humans without posing a high degree of risk. Businesses can factor this into their planning for their new robot integration to identify ideal applications.
For example, a cobot is perfect for streamlining assembly lines or packing processes. The cobots can automate simple, repetitive steps in the packing process, such as applying plastic straps to boxes. Due to the human-focused nature of cobots, they are able to do this alongside humans working on more complex tasks in the same assembly line.
One of the most important steps to effectively deploy cobots in manufacturing is looking for applications like this. Find opportunities to leverage the flexibility and safety features of cobots so they can augment the skills of human coworkers.
3. Ensure Employees Are Prepared
A common stumbling block in robot integrations is lack of employee preparedness. Businesses may be so focused on the robot side of things that they forget the role employees play in a successful robot integration.
Employees need the skills and knowledge to understand the new robots well so they can confidently work alongside them. This goes for all employees, not just those who will be directly operating or interacting with the robots. Thorough cobot training is crucial for minimizing safety risks and preventing robot-related accidents.
Cobot training is also crucial for ensuring effective collaboration between robots and employees. Surveys estimate that 29 to 47% of jobs have been “taken over” by robots. People who had already been displaced by robots estimated this number to be higher, indicating possible resentment or fear surrounding the role of robots in the workplace.
Businesses need to be aware of employees’ concerns about their job security any time a new robot enters the workplace. Cobots are designed to work with humans, not replace them. Effective cobot training can instill confidence in employees and relieve fears that they are being replaced. This will ensure that employees help the new robot integration go as smoothly as possible.
4. Make Safety a Top Priority
In addition to completing a risk assessment, businesses also need to act on the known risks associated with a cobot integration. Making safety a top priority is critical in order to effectively deploy cobots. It’s a two-way street, as well – cobots can improve employee safety when integrated well. Businesses can apply cobots to high-risk tasks, freeing up employees to concentrate on safer roles.
There are many steps businesses can take to ensure cobot safety. However, cobots don’t usually require the extensive safety measures needed for conventional robots. For instance, cobots don’t need large safety cages to keep employees away.
Examples of common cobot safety measures include sensors and emergency stop controls. Most cobots come with some safety features built in, as well. Businesses can use proximity sensors to allow the cobot to sense when people or objects are nearby.
These sensors can be used to program in auto-stop functions any time a person or object gets within a certain radius of the robot. Make sure to factor in the full range of any robotic appendages, such as claws or arms.
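A minimal sketch of that kind of auto-stop check, with the appendage reach folded into the stop envelope, might look like the following. The function name, sensor reading, and distances are hypothetical, not from any particular cobot API.

```python
def should_stop(distance_to_person_m: float, stop_radius_m: float,
                appendage_reach_m: float) -> bool:
    # Stop when a person is inside the radius measured from the farthest
    # point the robot's arm or claw can reach
    return distance_to_person_m <= stop_radius_m + appendage_reach_m

# Example: a 0.5 m stop radius around a cobot whose arm reaches 0.8 m
print(should_stop(1.2, stop_radius_m=0.5, appendage_reach_m=0.8))  # True: inside the 1.3 m envelope
print(should_stop(1.5, stop_radius_m=0.5, appendage_reach_m=0.8))  # False: keep running
```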
5. Monitor and Analyze Performance
Planning and preparation are vital to success with robotics. However, what happens after the integration is installed is just as important in order to effectively deploy cobots. Businesses need to monitor and analyze the performance of their cobots to maximize their ROI.
Effective performance monitoring allows businesses to make adjustments to their cobot setup, optimizing it for better efficiency or safety. Having clear, measurable benchmarks and goals in mind is vital for this process to be successful. Businesses should have a clear idea of what they are hoping their cobots will achieve.
For example, a business might install cobots to improve productivity on a box packing assembly line. They might measure how many boxes pass through the assembly line in a certain amount of time or track how long each step of the packing process takes. By comparing these numbers to performance metrics before the cobot was installed, the business can tell how the cobot is performing.
Continuously monitor cobot performance and look for ways to improve the integration. Sometimes this may involve repurposing the cobot to a new application where it could be more effective. Don’t be afraid to consider new ideas and applications if the first one is not showing a good ROI even after optimization attempts.
How to Effectively Deploy Cobots in Manufacturing
Businesses can effectively deploy cobots by combining plenty of preparation with a strong performance monitoring strategy. Cobots are much safer and more adaptable than conventional robots, offering many possible applications for businesses.
Remember to make employees a central part of the process of adopting a cobot. The collaboration between cobots and employees is vital to a successful integration. Safety, training and risk awareness are also key to achieving a good ROI on any new cobot.

The post 5 Tips to Deploy Cobots Effectively appeared first on ELE Times.
STMicroelectronics – ZF multi-year supply agreement for SiC technology will help drive increased efficiency, performance, and reliability in sustainable energy applications
Silicon carbide is a widely used semiconductor material that is relatively easy to fabricate and offers good general electrical and mechanical properties. Although it is not used to create advanced microprocessors, it is the material of choice for power devices ranging from simple diodes to MOSFETs. Building on its use of silicon carbide substrates in its semiconductor fabs, STMicroelectronics has signed a multi-year supply agreement with ZF for silicon carbide devices.
STMicroelectronics is a leading global semiconductor company serving customers across the spectrum of electronics applications. The company creates innovative technologies that make an important contribution to silicon carbide devices.
To learn more about the silicon carbide devices coming out of the collaboration between ZF and STMicroelectronics, Sakshi Jain, Sr. Sub Editor, ELE Times, had an opportunity to interact with Gianfranco Di Marco, Marketing Communication Manager, Power Transistor Sub-Group, STMicroelectronics. Excerpts:
ELE Times: Enumerate the terms and details of the deal between technology group ZF and STMicroelectronics for purchase of silicon carbide devices from ST.
Gianfranco Di Marco: STMicroelectronics and ZF have signed a significant, multi-year, supply agreement for ST to supply ZF in the order of tens of millions of silicon carbide (SiC) devices from 2025. These devices will be third generation ST 1200V SiC MOSFETs in the STPAK package.
(STPAK is a high-creepage package by ST that allows mounting on heatsinks through a silver sintering process. It enables higher power delivery, easier scaling, and better long-term reliability).
ELE Times: How important is the deal for STMicroelectronics with ZF signing multi-year-supply-agreement for silicon carbide devices?
Gianfranco Di Marco: ST pioneered the first automotive grade SiC MOSFETs in 2016, and today we lead the market with an estimated market share above 50% and more than 5 million passenger cars implementing ST SiC devices. In addition to enabling greater range in e-mobility applications, ST third generation SiC technology is driving increased efficiency, performance, and reliability in sustainable energy applications like solar inverters and energy storage, as well as in industrial motor drives and power supplies. The key to success in electric vehicle technology is greater scalability and modularity with increased efficiency, peak power, and affordability. Our STPAK, equipped with our silicon carbide technologies, delivers these benefits, and we are proud to work with ZF, a leading automotive supplier for electrification, to help them differentiate and optimize the performance of their inverters.
ELE Times: List the technology and technical specifications of silicon carbide technology and devices to be integrated in ZF’s new modular inverter architecture.
Gianfranco Di Marco: In ST, ZF found a supplier with the necessary manufacturing capabilities and capacities to produce exceptionally high-quality silicon carbide devices in the required quantities, thanks to our investments in building a fully integrated supply chain complemented by long-term wafer-supply agreements. ZF will integrate the modules from STMicroelectronics into its new modular inverter architecture, which will commence series production in 2025. With these devices, ZF will be able to interconnect a varying number of ST modules in their inverters according to performance requirements, without changing the intrinsic design of the inverter.
ELE Times: Mention ST’s capability and USP of manufacturing silicon carbide and packaging of the chips. Also mention about the manufacturing facilities.
Gianfranco Di Marco: ST high-volume STPOWER SiC products are manufactured in front-end fabs in Italy and Singapore, with back-end fabs in Morocco and China. Our SiC ecosystem also includes substrate research, development, and manufacturing in Italy and Sweden. In October, ST announced the expansion of our wide-bandgap manufacturing capacity with a new integrated SiC substrate manufacturing facility in Catania (Italy). The new facility is the first of its kind in Europe and is integral to our objective of achieving 40% internal substrate sourcing by 2024. ST is working on industrializing 200mm substrates, leveraging both engineering in Norrköping (Sweden) and the 200mm production line for SiC devices in Catania (Italy). ST is also cooperating with a technology partner, Soitec [Bernin (France)], to ensure a second qualified source of 200mm SmartSiC substrates, in addition to ensuring a robust internal supply. The collaboration with Soitec and their SmartSiC technology aims to advance SiC substrate technology and develop significant performance improvements that ST can deploy in high volume manufacturing. A final supply agreement will be subject to the qualification phase of the technology by ST and Soitec.
ELE Times: How do you list the benefits of using silicon carbide technologies across automotive and industrial sectors?
Gianfranco Di Marco: Power devices based on silicon carbide offer higher voltage and frequency capabilities than conventional silicon devices, allowing greater system efficiency, faster switching, lower losses, and better thermal management. In final applications, these advantages translate into smaller and lighter power designs featuring higher power density. SiC-based power devices can operate at up to 200°C junction temperature (limited only by the package), which reduces cooling requirements and allows more compact, more reliable, and more robust solutions. Existing designs can incorporate the performance and efficiency benefits of SiC devices without major changes, allowing fast development turnaround while keeping the BOM to a minimum.
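To make the efficiency argument concrete, here is a rough, first-order sketch of how conduction and switching losses compare for a silicon switch and a SiC MOSFET run at a higher frequency; every device parameter below (on-resistance, switching times, bus voltage, frequency) is a hypothetical placeholder rather than an ST datasheet value, and the textbook formulas ignore many real-world effects.

```python
# First-order MOSFET loss comparison illustrating why SiC can cut losses.
# Device parameters are illustrative placeholders only.

def switch_losses(i_rms, r_ds_on, v_bus, i_switched, t_rise, t_fall, f_sw):
    """Return (conduction_loss_W, switching_loss_W) using textbook approximations."""
    p_cond = i_rms ** 2 * r_ds_on                                  # P = I_rms^2 * R_DS(on)
    p_sw = 0.5 * v_bus * i_switched * (t_rise + t_fall) * f_sw     # hard-switching estimate
    return p_cond, p_sw

# Hypothetical silicon switch vs. a SiC MOSFET at the same operating point,
# with the SiC device run five times faster.
si = switch_losses(i_rms=50, r_ds_on=40e-3, v_bus=800, i_switched=50,
                   t_rise=100e-9, t_fall=150e-9, f_sw=10e3)
sic = switch_losses(i_rms=50, r_ds_on=20e-3, v_bus=800, i_switched=50,
                    t_rise=20e-9, t_fall=30e-9, f_sw=50e3)

for name, (p_cond, p_sw) in (("Si", si), ("SiC", sic)):
    print(f"{name}: conduction {p_cond:.0f} W, switching {p_sw:.0f} W, "
          f"total {p_cond + p_sw:.0f} W")
```

Even with a five-fold increase in switching frequency in this toy example, the SiC device's lower on-resistance and faster edges keep total losses below the silicon baseline, which is the effect behind the figures quoted below.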
SiC for automotive sector. SiC power devices find application in critical power systems inside electric vehicles, including traction inverters, on-board chargers, and in DC-DC conversion stages. They also provide significant efficiency gains in charging stations. SiC devices offer the following advantages over silicon for automotive and eMobility applications in general:
- 6-10% greater driving range in an average electric vehicle
- 150 to 200 kg less weight in an average electric vehicle
- Double the energy from charging stations
- Longer battery lifetime
SiC for industrial sector. SiC devices benefit industrial motors, robots, and various other factory automation systems, as well as power supplies for servers and solar energy conversion systems. For industrial contexts, SiC devices can deliver the following advantages:
- Major power loss reduction, even up to 50%
- Ability to run at up to five times greater frequencies
- Significant system size and weight reduction, as high as 50%
- Total cost of ownership reduction, as high as 20%

Gianfranco Di Marco
Marketing Communication Manager – Power Transistor Sub-Group
STMicroelectronics
The post STMicroelectronics – ZF multi-year supply agreement for SiC technology will help drive increased efficiency, performance, and reliability in sustainable energy applications appeared first on ELE Times.
Frustrated electronics hobbyist. Are electronic components being scalped?
EBAY is ridiculous. Broken parts radios. $300. WTF?
[link] [comments]
Exploring the endless applications of SuperSpeed USB
By Jimmychou95 | Infineon
Have you ever wondered how machines are able to “see” and understand the world around them? It’s all thanks to a fascinating field called machine vision. While machine vision has traditionally been used for industrial automation, its potential applications extend far beyond that.
Today, I am thrilled to share with you a deeper understanding of the remarkable SuperSpeed USB applications in the context of machine vision. However, before we dive in, let me take a moment to give you a brief overview of USB3 Vision:
One bandwidth-hungry application that can benefit from SuperSpeed USB, or USB 3.0, is machine vision. Machine vision essentially gives machines the ability to see by using cameras; it relies on image sensors and specialized optics to acquire images so that computer hardware and software can process and analyze various characteristics of the captured images for decision-making. As image sensors become more advanced, with higher resolution, higher frame rates, and deeper color, the amount of data generated for a captured image has grown exponentially. SuperSpeed USB, with available bandwidth of up to 20 Gbps, is naturally an interface of consideration for machine vision cameras. To help drive the adoption of SuperSpeed USB in machine vision, an industry standard called USB3 Vision was born.
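A quick back-of-the-envelope calculation shows why the camera link becomes the bottleneck; the sensor configurations below are hypothetical examples chosen only to illustrate how raw, uncompressed bit rates stack up against the 5 Gbps of USB 3.0 and the up-to-20 Gbps of later SuperSpeed generations.

```python
# Raw (uncompressed) video bit rate for a few hypothetical machine vision sensors.

def raw_bitrate_gbps(width_px, height_px, bits_per_px, frames_per_s):
    """Uncompressed video bit rate in Gbit/s."""
    return width_px * height_px * bits_per_px * frames_per_s / 1e9

examples = [
    ("5 MP, 8-bit mono, 60 fps",   2448, 2048, 8,  60),
    ("12 MP, 10-bit mono, 60 fps", 4096, 3000, 10, 60),
    ("12 MP, 24-bit color, 60 fps", 4096, 3000, 24, 60),
]

for label, w, h, bpp, fps in examples:
    print(f"{label}: {raw_bitrate_gbps(w, h, bpp, fps):.1f} Gbps raw")
```

Even the mid-range configuration already exceeds the 5 Gbps of original SuperSpeed USB before protocol overhead, which is why higher-rate SuperSpeed links are attractive for USB3 Vision cameras.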
Top potential applications of machine vision
Machine vision can be used to improve the quality, accuracy and efficiency of many different types of applications, and it becomes especially powerful when combined with artificial intelligence and machine learning. This combination enables fast and autonomous decision-making, which is the essence of any type of automation. Let’s take an example: a defect inspection system in a factory could use an inspection camera to take a high-resolution picture of each product on the production line; the machine vision software would then analyze the image and issue a pass or fail response based on some predetermined acceptance criteria.
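A minimal sketch of such a rules-based pass/fail check is shown below, using OpenCV; the image path, the thresholding strategy, and the acceptance band for the blob area are all assumptions made up for illustration, not part of any particular inspection product.

```python
# Minimal rules-based pass/fail inspection sketch, assuming a fixed camera and a
# backlit part; threshold strategy and area limits are hypothetical tuning values.
import cv2

def inspect(image_path: str, min_area: float = 45_000, max_area: float = 55_000) -> bool:
    """Return True (pass) if the largest bright blob falls within the expected area band."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Otsu's method picks a global threshold separating part from background.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False                        # nothing detected -> fail
    area = max(cv2.contourArea(c) for c in contours)
    return min_area <= area <= max_area     # accept only parts within tolerance

print("PASS" if inspect("part_0001.png") else "FAIL")
```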

Machine Vision Technology in Healthcare
Machine vision and machine learning are being used increasingly in the medical field, for tasks like detection, monitoring, and training. Machine vision is especially good at motion analysis, and can be used to detect neurological and musculoskeletal problems by tracking a person’s movement. This technology can also be used for things like home-based rehabilitation and remote patient monitoring, which could be especially beneficial for elderly patients.
Machine Vision Technology in Agriculture
In recent years, the agricultural industry has witnessed a significant rise in the adoption of machine vision technology, showing a lot of promise for reducing production costs and boosting productivity. Machine vision can be used for additional activities like livestock management, plant health monitoring, harvest prediction, and weather analysis. By automating these processes, we can create a smart food supply chain that doesn’t require as much human supervision. Machine vision’s biggest advantage is in being able to automate decision-making through non-invasive, low-cost methods that collect data and perform analytics. In plant farming, for example, yield estimation is a critical preharvest process, and by improving its accuracy farmers could better allocate transportation, labor, and supplies.
Machine Vision Technology in Transportation
For a long time, computer-aided vision has been used to help with vehicle classification in transportation, but as the technology has rapidly evolved, we can now do things such as large-scale traffic analysis and vehicle identification. Using the latest smart cameras, we can achieve accurate vehicle classification and identification: this can improve things like traffic congestion, safety monitoring, toll collection, and law enforcement. In fact, the proliferation of traffic cameras has essentially eliminated the need for such a large police force, as they can operate 24/7 to catch moving violations at any time. With further advancements in machine learning, image analytics can now be applied to traffic cameras: this can help direct traffic flow, monitor street safety, and reduce congestion for an entire city — saving time, fuel and resources on a large scale.
Machine Vision Technology in Retail
Machine vision is a useful tool for retailers who want to improve the customer experience and increase sales: by training machine learning algorithms with data examples, retailers can anonymously track customers in their store to collect data about foot traffic, waiting times, queueing time, etc. This data can then be used to optimize store layouts, reduce crowding, and ultimately improve customer satisfaction. To prevent impatient customers from waiting in long lines, retailers are also using machine vision to detect queues and manage them more efficiently.
Machine Vision Technology in Sports
Technology is increasingly being used to help athletes perform better in sports. From computer-generated analysis to cognitive coaching, from injury prevention to automated refereeing, technology is now playing a major role in almost every aspect of sports. One area that has seen a particular growth in recent years is the use of machine vision and AI in training, coaching, and injury prevention: it’s all about using smart cameras to track and analyze the movement of an athlete. The system monitors various ranges of motion, analyzes them in real-time and provides instant feedback. In recent years, smart cameras have become so sophisticated that even the smallest body movement can be tracked precisely down to limbs and joints.
By fully embracing USB 3.0-enabled machine vision, factories around the world are quickly and reliably automating and solving complex manufacturing issues. The same benefits are also shared with a wide range of other industries including health care, agriculture, transportation, retail, sports, and many more. Together with the world’s leading machine vision manufacturers, Infineon is accelerating the automation revolution with EZ-USB FX3-based cameras, scanners, and video capture systems. Additionally, the exciting news is that Infineon looks forward to enabling new applications and empowering new customers with our next generation of 5 and 10 Gbps solutions coming by the end of 2023.
The post Exploring the endless applications of SuperSpeed USB appeared first on ELE Times.
How do robots see? Robotic vision systems
Jeremy Cook | Arrow
The short answer to the question, “How do robots see?” is via machine vision or industrial vision systems. The details are much more involved. In this article, we’ll frame the question around physical robots that accomplish a real-world task, rather than software-only applications used for filtering visual materials on the internet.
Machine vision systems capture images with a digital camera (or multiple cameras), processing this data on a frame-by-frame basis. The robot uses this interpreted data to interact with the physical world via a robotic arm, mobile agricultural system, automated security setup, or any number of other applications.
Computer vision became prominent in the latter part of the twentieth century, using a range of hard-coded criteria to determine simple facts about captured visual data. Text recognition is one such basic application. Inspection for the presence of component x or the size of hole y in an industrial assembly application are others. Today, computer vision applications have expanded dramatically by incorporating AI and machine learning.
Importance of machine vision
While vision systems based on specific criteria are still in use, machine vision is now capable of much more, thanks to AI-based processing. In this paradigm, robot vision systems are no longer programmed explicitly to recognize conditions like a collection of pixels (a so-called “blob”) in the correct position. A robot vision system can instead be trained with a dataset of bad and good parts, conditions, or scenarios to allow it to generate its own rules. So equipped, it can manage tasks like unlocking a door for humans and not animals, watering plants that look dry, or moving an autonomous vehicle when the stoplight is green.
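As a toy illustration of that trained approach, the sketch below learns a pass/fail rule from labeled example images instead of hand-coded pixel criteria; the folder layout, the histogram feature, and the use of scikit-learn's logistic regression are all assumptions for the example, and a production system would use a far richer model and dataset.

```python
# Learn a pass/fail rule from labeled "good" and "bad" example images.
# Dataset paths and the feature choice are hypothetical.
import glob
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression

def features(path: str) -> np.ndarray:
    """Downscale to a fixed size and use the normalized gray histogram as a feature vector."""
    gray = cv2.resize(cv2.imread(path, cv2.IMREAD_GRAYSCALE), (128, 128))
    hist = cv2.calcHist([gray], [0], None, [32], [0, 256]).ravel()
    return hist / hist.sum()

good = sorted(glob.glob("dataset/good/*.png"))
bad = sorted(glob.glob("dataset/bad/*.png"))
X = np.stack([features(p) for p in good + bad])
y = [1] * len(good) + [0] * len(bad)            # 1 = good part, 0 = defective

clf = LogisticRegression(max_iter=1000).fit(X, y)
verdict = clf.predict(features("new_part.png").reshape(1, -1))[0]
print("PASS" if verdict == 1 else "FAIL")
```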
While we can use cloud-based computing to train an AI model, for real-time decision-making, edge processing is typically preferable. Processing robotic vision tasks locally can reduce latency and means that you are not dependent on cloud infrastructure for critical tasks. Autonomous vehicles provide a great example of why this is important, as a half-second machine vision delay can lead to an accident. Additionally, no one wants to stop driving when network resources are unavailable.
Cutting-edge robotic vision technologies: multi-camera, 3D, AI techniques
While one camera allows the capture of 2D visual information, two cameras working together enable depth perception. For example, the NXP i.MX 8 family of processors can use two cameras at a 1080P resolution for stereo input. With the proper hardware, multiple cameras and camera systems can be integrated via video stitching and other techniques. Other sensor types, such as LIDAR, IMU, and sound, can be incorporated, giving a picture of a robot’s surroundings in 3D space and beyond.
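A minimal sketch of depth-from-stereo with OpenCV's block matcher is shown below; it assumes the left and right frames are already calibrated and rectified, and the file names, focal length, and baseline values are hypothetical placeholders.

```python
# Depth from a rectified stereo pair using OpenCV block matching.
# File names and calibration values are hypothetical.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching: larger disparity means the object is closer to the cameras.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype("float32") / 16.0  # StereoBM returns fixed-point values

# With focal length f (pixels) and baseline B (meters): depth = f * B / disparity.
focal_px, baseline_m = 700.0, 0.06          # hypothetical calibration values
center = disparity[disparity.shape[0] // 2, disparity.shape[1] // 2]
if center > 0:
    print(f"Estimated depth at image center: {focal_px * baseline_m / center:.2f} m")

# Save a viewable disparity map for inspection.
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", vis)
```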
The same class of technology that allows a robot to interpret captured images also allows a computer to generate new images and 3D models. One application of combining these two sides of the robotics vision coin is the field of augmented reality. Here, the visual camera and other inputs are interpreted, and the results are displayed for human consumption.
How to get started with machine vision
We now have a wide range of options for getting started with machine vision. From a software standpoint, OpenCV is a great place to start. It is available for free, and it can work with rules-based machine vision as well as newer deep learning models. You can get started with your computer and webcam, but specialized industrial vision system equipment like the Jetson Nano Developer Kit or the Google Coral line of products is well suited to vision and machine learning. The NVIDIA Jetson Orin NX 16GB offers 100 TOPS of AI performance in the familiar Jetson form factor.
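For a first experiment, a webcam loop like the one below is enough to see OpenCV at work; it only assumes a default webcam at index 0 and a desktop environment for the preview windows.

```python
# "Hello world" machine vision: live camera feed plus Canny edge detection.
import cv2

cap = cv2.VideoCapture(0)                 # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)      # thresholds chosen arbitrarily for the demo
    cv2.imshow("camera", frame)
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"): # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```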
Companies like NVIDIA have a range of software assets available, including training datasets. If you would like to implement an AI application but would rather not source the needed pictures of people, cars, or other objects, this can give you a massive head start. Look for datasets to improve in the future, with cutting-edge AI techniques like attention and vision transformers enhancing how we use them.
Robot vision algorithms
Robots see via the constant interpretation of a stream of images, processing that data via human-coded algorithms or interpretation via an AI-generated ruleset. Of course, on a philosophical level, one might flip the question and ask, “How do robots see themselves?” Given our ability to peer inside the code—as convoluted as an AI model may be—it could be a more straightforward question than how we see ourselves!
The post How do robots see? Robotic vision systems appeared first on ELE Times.
Best e-Rickshaw in the USA
In the quest for sustainable urban transportation solutions, electric rickshaws, or e-rickshaws, have emerged as a promising alternative. These innovative vehicles combine eco-friendliness, efficiency, and convenience, making them an ideal choice for short-range urban travel. In this article, we will explore some of the best e-rickshaw models available in the USA that are revolutionizing the way we think about urban commuting.
- Mahindra Treo Sft
The 2023 Mahindra Treo represents a remarkable achievement in India’s pursuit of environmentally friendly urban transportation. This e-rickshaw boasts a groundbreaking electric powertrain, a design that emphasizes sustainability, and a range of intelligent features. By addressing urban mobility challenges and setting new standards for electric three-wheelers, the Mahindra Treo is at the forefront of the e-rickshaw revolution.
- Bajaj Re Rickshaw
The 2023 Bajaj RE Rickshaw Electric model embodies Bajaj Auto’s commitment to eco-consciousness and durability. As an electric version of the renowned Bajaj RE Rickshaw, this model offers an affordable and environmentally responsible solution for urban and short-distance travel. With an electric motor, a respectable range, and a comfortable cabin, it provides a sustainable alternative to traditional rickshaws.
- Piaggio Ape E City
Hailing from the distinguished Italian automotive manufacturer Piaggio, the 2023 Piaggio Ape E-City Electric vehicle stands out as a flexible and environmentally-conscious three-wheeler. This electric trike showcases Piaggio’s dedication to sustainability and effective urban transportation. Boasting a cutting-edge electric power system, commendable driving range, comfortable interior, and a strong safety focus, the Electric 2023 Piaggio Ape E-City emerges as a practical and eco-conscious answer to urban mobility needs.
- Mahindra E Alfa Mini
The 2023 Mahindra e-Alfa Mini Electric is a prime example of a sustainable and high-performing electric rickshaw crafted by Mahindra, a prominent name in the Indian automotive industry. With advanced electric power technology, a respectable range, a comfortable interior, and a strong safety commitment, the Electric 2023 Mahindra e-Alfa Mini offers a reliable and eco-friendly choice for city commuters and businesses alike.
- Kinetic Green Safar Smart E Auto
Championing innovation, the Kinetic Safar Smart Electric Auto forgoes the traditional internal combustion engine for a progressive electric powertrain. Powered by a high-capacity battery pack, it heralds an era of zero emissions, reduced noise, and enhanced energy efficiency. Positioned as a green alternative to conventional auto-rickshaws, the 2023 Electric Kinetic Safar Smart Auto is a financially viable, sustainable, and efficient option suitable for both urban and rural settings.
- Jezza Motors J1000 Electric Rickshaw
Crafted by the notable electric mobility entity Jezza Motors, the Electric 2023 Jezza Motors J1000 Electric Rickshaw stands as an inventive electric vehicle. Jezza Motors has established itself as a major player in the electric vehicle arena, focusing on advancing electric cars and providing sustainable transportation options for urban travellers. With a state-of-the-art electric power system, considerable range, performance capabilities, comfort features, safety protocols, and affordability, the Electric 2023 Jezza Motors J1000 offers a well-rounded and efficient choice tailored for urban commuting needs.
- Citylife Butterfly Super Deluxe XV850 E Rickshaw
Introducing the 2023 City Life Butterfly Super Deluxe, a pinnacle of electric rickshaw design dedicated to luxurious and convenient urban travel. Through refined aesthetics, cutting-edge attributes, and environmentally conscious functionality, the Butterfly Super Deluxe aims to redefine the conventional perception of rickshaw commutes. With its sophisticated design, advanced features, and eco-friendly operation, the 2023 City Life Butterfly Super Deluxe embodies opulent and sustainable urban transportation, providing a deluxe travel experience tailored for those seeking both comfort and style.
In a world where sustainable urban transportation is gaining paramount importance, these e-rickshaw models pave the way for a greener, more efficient, and more comfortable way of getting around in urban environments. As these electric vehicles continue to evolve, they offer a glimpse into a future where eco-friendly mobility is the norm.
The post Best e-Rickshaw in the USA appeared first on ELE Times.
Enabling EV Charging Infrastructure in India
Courtesy: Delta
The world is becoming sensitive towards climate change and adopting technologies and solutions that can help avoid further disruption to the climate. Needless to say, the electric vehicle is one such solution. Around 57% of all global passenger vehicle sales and over 30% of the passenger vehicle fleet will be electric by 2040, according to Bloomberg New Energy Finance (BNEF). The BNEF Electric Vehicle Outlook 2019 expects annual passenger EV sales to rise to 10 million in 2025, 28 million in 2030, and 56 million by 2040.
But India has already set its vision for 2030, with maximum dependency on electric vehicles, especially for public transportation, where the government is driving the transformation at high speed. India is highly adaptive to new technology, and the recent wave of electric vehicle launches shows how quickly EV adoption is gathering pace. The government recently shared its view that within the next three years there should be an EV charging station every 3 km. As a recent report published by the World Economic Forum in collaboration with the Ola Electric institute suggests, India has the potential to become the largest electric vehicle (EV) market in the world. But being hungry doesn’t solve the food problem: having electric vehicles also means having charging stations and infrastructure that can meet the growing demand. The best thing about EV charging is that it can also be done at home, and the future will see maximum dependency on AC chargers, which can be installed at both commercial and residential premises. So the coming years are gearing up for greater electrification of vehicles and a stronger EV charging ecosystem.
Currently, EVs are the trend in India, with the automobile industry increasingly absorbed in electric vehicles. But the priority is not only to supply them but also to drive adoption. The big question is: why should customers consider an electric vehicle as their primary means of transportation? This can only happen when we have the right balance of electric vehicles and charging infrastructure, convincing people that there are few or no hurdles to driving a vehicle that reduces carbon emissions. This year there were multiple launches of electric cars at Auto Expo 2020. To some extent, it wouldn’t be wrong to say that the theme of the Auto Expo was going electric, with major automobile brands showcasing how they will contribute to the EV infrastructure.
Subsequently, the government and companies are doing their part to establish the path to EVs by strengthening the infrastructure. With the GOI’s vision of an EV charging station every 3 km within the next three years, it is clear that everyone is playing a role in making EVs the future of the automobile sector. The government is also setting the stage with various measures that benefit the EV ecosystem, such as reducing GST on EV chargers from 18% to 5%.
Putting more electric vehicles on the road can only happen in a stable environment that encourages sustainability by prioritizing the growth of the EV ecosystem. This can be done by creating a well-connected charging infrastructure at the pan-India level.
But the market needs a lot more in terms of infrastructure and value. For example, an acute shortage of cobalt, a major raw material for lithium-ion batteries, could become a significant concern for EV adoption. But we are confident that the automotive industry is resilient enough to come up with alternative materials for manufacturing EV batteries.
Apart from fiscal benefits, the government should introduce non-fiscal incentives that can convince customers to explore electric cars as their first choice. Waiving road tax and RC fees for EVs, for example, could be a major turning point in getting people to consider electric vehicles as a prime option. Small initiatives such as free parking in malls and EV-only zones like Connaught Place in Delhi, which would also reduce carbon emissions, can be great contributors as well.
Once the path has been smoothed, the EV sector can witness its next adaptation: vehicle-to-grid technology, which will also revolutionize the electricity infrastructure. Imagine having a car that not only solves the transportation problem but can also be used to supply electricity to the home. That is the solution the future holds for EV infrastructure.
As a leading player, we are uniquely positioned to offer complete end-to-end solutions with both on-board and off-board chargers. Our energy-efficient, compact, and extremely robust solutions for onboard chargers (DC-DC Converters & Powertrain) give us a distinctive advantage because of our global expertise. We have been consistent in developing technologies and solutions that strengthen the EV charging infrastructure and showcase our support in partnering with GOI’s ‘E-Mobility Mission.’
We have been partnering with leading OEMs and automobile brands to provide our EV Charging solutions and have already installed more than 700 chargers. We have both AC and DC chargers to ensure that the market has end-to-end solutions from our energy-efficient products that will enhance the growth of EV in India.
But as discussed earlier, the major challenge is not just supplying products but also providing the knowledge needed for smooth execution. Taking this conviction forward, we recently launched the E-Mobility Tech Experience Center, conceptualized as an industry platform that supports all types of ratings and configurations and acts as an enabler for understanding the ecosystem of EV charging solutions. The aim is to encourage more and more partners and associates to familiarize themselves with the technology by providing knowledge and practical experience with different types of charging solutions for all kinds of electric vehicles under one roof. We have a wide range of energy-efficient AC and DC EV chargers, including GB/T, CCS, and CHAdeMO, along with OCA-certified testing tools, charging process simulators, load simulators, and a charge point operator software platform. This will help the industry move ahead with precise knowledge of the process.
As a green company with a vision to power a green India, EV charging is one of our key businesses and rightly reflects our vision of providing clean energy. Our R&D lab is therefore continuously working to ensure that we have the technology to support the EV charging ecosystem and reduce the hurdles in achieving it.
The post Enabling EV Charging Infrastructure in India appeared first on ELE Times.
Enabling a sovereign cloud using a multi-cloud foundation
The adoption of multiple clouds by European business and public agencies continues to increase due to the need for competitive differentiation and growth through speed, quality and the delivery of great customer experiences. To achieve these goals, IT and business executives must manage challenges across data governance, security and compliance to protect sensitive customer, citizen and country data using privacy, access and security controls.
Data has become both a business and national asset. The ability of enterprises and governments to control data and run workloads while operating within legal jurisdiction, complying with multi-jurisdictional regulations, and protecting against unauthorized access requires a critical set of sovereign capabilities which are essential for customer trust and business growth.
Given this transformational journey, sovereign clouds should be included as part of a multi-cloud strategy. Using common sovereign tenets and principles is becoming increasingly necessary while at the same time supporting capabilities that deliver efficiency, reduce complexity and enable standardization. This approach provides a foundation from which IT and business teams can ensure that the necessary solutions are in place to control, secure, and store data in compliance with relevant regional, national, and (where applicable) international laws and guidelines. A multi-cloud architecture can provide layers to meet local and national regulations, and thereby give organizations greater choices and flexibility across multiple sovereign cloud environments. Fundamentally, a multi-cloud approach to sovereign cloud is about unlocking and supporting emerging data economies with as little complexity and uncertainty as possible. This approach empowers enterprises to focus more on serving their stakeholders through innovation and growth. Additionally, a multi-cloud approach to a sovereign cloud enables legacy application and back-end infrastructure modernization.
Technology executives must understand that establishing a sovereign cloud is complex and difficult, especially without assistance from a partner or vendor with deep expertise. There are various complex dimensions spanning data security and data protection, understanding regulations and their impact on technology needs, and the complexity of driving standardization and controls across multiple clouds. In addition, data classification for a sovereign cloud is essential for its success. This complexity requires technology leaders to build expertise with strategic partners who have the depth and bench strength to deploy a sovereign cloud. As part of a sovereign cloud foundation, multi-cloud tools enable organizations to tailor infrastructure to their specific needs and respond in an agile way to data privacy, security and geopolitical disruptions.
When it comes to vendor and partner support, customers should not expect to create a sovereign cloud on their own because of the required complexity and expertise. Let’s take two examples of sovereign cloud deployments using large global professional service partners, VMware and Broadcom. VMware’s technology has been a critical foundation for driving innovation and scale for governments and public agencies in Europe for many years. With a set of tools that can position customers to work across multiple clouds, VMware can enable the critical foundational requirements for a sovereign cloud, and has been an instrumental partner in the process of driving innovation for governments and public agencies in Europe. After its pending acquisition by Broadcom, VMware will be supported by Broadcom’s lengthy track record of significant R&D investments, an innovation-focused culture, and commitment to customers. Broadcom’s acquisition of VMware creates opportunities for the new, combined organization to offer customers a more complete set of sovereign cloud capabilities. Such a set of capabilities could help accelerate digital transformation across Europe while also furthering the needs and objectives of sovereign clouds.
Digging deeper into multi-cloud technology capabilities, enterprises must consider how to manage the necessary controls, security and data transparency required for a sovereign cloud. Without the right technology foundation that empowers these capabilities, the successful deployment of a sovereign cloud is simply not possible. Additional key areas customers must consider when enabling a sovereign cloud include:
- Basing the technology stack on a resilient and scalable architecture that takes advantage of process automation across application, service and operational tasks and capabilities
- Focusing on data and security policies that deliver layers of digital protection and sovereignty across the software development pipeline and service operations
- Enabling processes that empower jurisdictional controls, and an ability to adjust to geo-political dynamics, enabling business and IT teams to manage and control confidential data via advanced methods and practices
- Enabling an organization to adopt country-specific regulatory, compliance, and data requirements, (regardless of the underlying cloud platforms) with data control points and reporting mechanisms
Multi-cloud solutions like those offered by VMware provide European enterprises and the public sector with a flexible, consistent digital foundation to build, run, manage, connect and protect their most important and complex workloads. Once Broadcom completes its pending acquisition of VMware, the combined company can make new and significant R&D investments, develop a stronger and broader set of innovations, and foster larger professional service partnerships focused on multi-cloud capabilities to power and enable sovereign cloud.

The post Enabling a sovereign cloud using a multi-cloud foundation appeared first on ELE Times.
Amazon Sidewalk network is getting silicon traction

New silicon solutions are emerging for the Amazon Sidewalk network, and these chips come alongside developer tools providing step-by-step direction and expert advice for Amazon Sidewalk device development.
At its fourth annual Works With Developers Conference, Silicon Labs unveiled two system-on-chips (SoCs) optimized for Amazon Sidewalk: SG23 and SG28. These chips complement the Silicon Labs Pro Kit for Amazon Sidewalk previously announced by the Austin, Texas-based semiconductor supplier.
Figure 1 Amazon Sidewalk is built on an architecture comprising a radio, network, and application layers. Source: Silicon Labs
The always-on, community-driven Amazon Sidewalk is a shared network that helps devices like Amazon Echo, Ring security cameras, outdoor lights, motion sensors, and Tile trackers work better at home and beyond the front door. It uses three different radios: Bluetooth LE for device provisioning and nearby device connectivity, sub-GHz FSK for connectivity up to one mile, and a proprietary CSS radio for extreme long range.
Most Amazon Sidewalk end devices will support Bluetooth LE and one of the two long-range protocols: FSK or CSS operating at 900 MHz frequencies to cover longer distances. So, the SG28 is a dual-band SoC with radios for both sub-GHz FSK and Bluetooth LE. That allows device makers to simplify designs and reduce costs by having the two most-used radios on Sidewalk end devices in one package. The SG23, on the other hand, provides security and a robust sub-GHz link budget for long-range, end-node devices.
Figure 2 The two SoCs are optimized for Amazon Sidewalk with extensive developer support. Source: Silicon Labs
Amazon Sidewalk has been one of the more exciting developments in the Internet of Things (IoT) space since it was launched in 2019. It pushes connectivity beyond the walls of the smart home while employing smart home devices like cameras and speakers as gateways to support long-range use cases. According to Amazon, it’s a community network built by the community for the community.
A neighborhood network
Silicon Labs CTO Daniel Cooley calls Amazon Sidewalk a neighborhood network. “While Bluetooth gives users an easy way to provision and deploy new devices onto the network, the sub-Ghz band is designed to support device communications over one mile, allowing for new edge applications in areas like smart agriculture and smart cities.”
Figure 3 Amazon Sidewalk Bridges will pick up the message from the compatible device and route it through the AWS cloud to the user with multiple layers of security. Source: Silicon Labs
Besides chips like SG23 and SG28, Silicon Labs has launched a design kit that supports the development of wireless IoT-based devices on Bluetooth and sub-GHz wireless protocols for Amazon Sidewalk. The Wireless Pro Kit is built around a KG100S radio board that provides a complete reference design to support Bluetooth, FSK, and CSS protocols used in Amazon Sidewalk.
The kit also includes a BG24 radio board and FSK/CSS adapter board for developers who want a discrete design. Its mainboard contains an onboard J-Link debugger with a packet trace interface and a virtual COM port, enabling application development and debugging of the attached radio board as well as external hardware through an expansion header.
Figure 4 The Pro Kit for Amazon Sidewalk provides the necessary tools for developing high-volume, scalable IoT applications. Source: Silicon Labs
Silicon Labs has been working closely with Amazon to navigate the Amazon Sidewalk development process. After all, it’s a new network, and developers need to be educated on how to best create Amazon Sidewalk devices. Recognizing this need, Silicon Labs has joined hands with Amazon to create the Amazon Sidewalk Developer’s Journey with Silicon Labs.
Amazon Sidewalk was opened for developers on 28 March 2023.
Related Content
- It’s Amazon’s Sidewalk, You Just Live On It!
- What’s This Sidewalk Thing? Alexa! ‘Splain!
- MediaTek, Amazon Aim to Lead in Smart Homes
- Top smart home trends plus: is Matter the key to interoperability?
- Silicon Labs dual band SoCs for Amazon Sidewalk BLE and sub-GHz FSK
The post Amazon Sidewalk network is getting silicon traction appeared first on EDN.