Microelectronics world news

Latest issue of Semiconductor Today now available

Semiconductor today - Wed, 08/30/2023 - 18:04
For coverage of all the key business and technology developments in compound semiconductors and advanced silicon materials and devices over the last month, subscribe to Semiconductor Today magazine...

Chua's circuit built from scratch without proper perfboard or oscilloscope

Reddit:Electronics - Wed, 08/30/2023 - 17:23
Chua's circuit built from scratch without proper perfboard or oscilloscope

The circuit looks cute but was a pain to build on this board; it would have been easier on a bigger one, but that was all I had at the time. With an old analog scope the pictures would have been better (no pixels and a continuous trace; old doesn't always mean worse).

submitted by /u/No_Usual9256

RLD-based astable 555 timer circuit

EDN Network - Wed, 08/30/2023 - 12:33

In the classic configuration and most variants of the astable 555 multivibrator circuit, the timing characteristics are based on the charging and discharging of a capacitor. However, it can be argued that since the exponential voltage of a capacitor is qualitatively similar to inductor current, the latter can be made an alternative timing element for the 555. This was shown in the “Inductor-based astable 555 timer circuit”. In Figure 1, we present another approach for an inductor-based astable 555 multivibrator.  

Figure 1 An astable 555 timer circuit based on an inductor, diode, and resistor. 

Wow the engineering world with your unique design: Design Ideas Submission Guide

At power-on, the inductor voltage (VL) spikes up and exceeds the 555’s threshold level of 2Vcc/3. Output (Vo) at pin 3 goes low and the discharge transistor at pin 7 turns on, providing a low-resistance path to ground. Inductor current (IL) begins to rise as VL and the voltage at pin 2 (V2) and pin 6 (V6) all fall exponentially.

When V2 falls below the 555’s trigger level of Vcc/3, Vo goes high, and the discharge transistor turns off. Because IL was interrupted, the inductor’s voltage reverses, which forward-biases the flywheel diode (D). Pin 7 gets clamped to one diode forward voltage above Vcc. Both IL and VL start to fall towards zero while V2 climbs toward Vcc.

When V2 crosses 2Vcc/3 again, Vo goes low, the discharge transistor turns on, and the train of regular high and low output pulses ensues. The expected waveforms are shown in Figure 2.

Figure 2 The simulated waveforms using Tinkercad (setting: 15 µs/div).

For each state of Vo, we derived the first-order differential equation of the effective circuit. This led us to Equation 1 for calculating the pulse widths:

The symbols are defined in Table 1, where the columns for TH and TL list the specific values that the symbols take on. We also considered Rs as the inductor’s DC resistance, RON = 59.135 / Vcc^0.8101 as the resistance of the discharge transistor at pin 7 (refer to “Design low-duty-cycle timer circuits”), and VD = 0.6 V as the diode forward voltage.
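Equation 1 appears only as an image in the original article and is not reproduced here. As a rough illustration of the first-order timing behind it, the following Python sketch computes the time a single-time-constant node takes to move between two voltage levels; the component values, the assumed settling target of 0 V, and the L/(Rs + RON) time constant are placeholder assumptions for illustration, not the article's equation or measured figures.

```python
import math

def first_order_transition_time(v_start, v_end, v_final, tau):
    """Time for a first-order (single time-constant) quantity to move from
    v_start to v_end while settling exponentially toward v_final."""
    return tau * math.log((v_start - v_final) / (v_end - v_final))

# Placeholder values for illustration only (not the article's Equation 1)
Vcc = 5.0                      # supply voltage, volts
L = 1e-3                       # inductance, henries (assumed)
Rs = 6.0                       # inductor DC resistance, ohms (assumed)
Ron = 59.135 / Vcc ** 0.8101   # discharge-transistor resistance model cited in the text

# Example: the pin 2/6 node falling from the 2Vcc/3 threshold to the Vcc/3
# trigger level, assuming it settles toward 0 V with time constant L/(Rs + Ron)
tau = L / (Rs + Ron)
t_fall = first_order_transition_time(2 * Vcc / 3, Vcc / 3, 0.0, tau)
print(f"Estimated transition time: {t_fall * 1e6:.1f} µs")
```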

Table 1 Formulas to predict timing characteristics.

To test these ideas, we prepared a spreadsheet calculator that predicts TH, TL, and other output characteristics. Then we picked the components listed in Table 2, used a digital LCR tester (SZBJ BM4070) to measure their actual values, and plugged the numbers into the calculator. The predicted attributes of Vo are listed in Table 3.

Table 2 Components for the experimental circuit.

Table 3 Predicted versus measured values (Vcc=5.00 volts). 

Finally, we connected a USB-powered test and measurement device, the Digilent Analog Discovery 3 (AD3), to our laptop to supply +5 V to the experimental circuit (Figure 3) and to observe the waveforms at pins 2, 6, and 3 of the IC (Figure 4). We tested 8 chips from a bin of assorted 555s and noticed that while TH was consistent, the TL values annoyingly lacked precision. Nonetheless, when we compared the AD3 measurements with the predicted values in Table 3, we saw that Equation 1 modeled the output of the new multivibrator fairly well.

Figure 3 Experimental setup with the Digilent Analog Discovery 3 supplying +5 V to the experimental circuit.

Figure 4 Waveforms of V2, V6, and Vo, and measurements for Vo.

Arthur Edang (M.Sc) taught Electronics and Communications Engineering courses at the Don Bosco Technical College (Mandaluyong, Philippines) for 25 years. His current interests include nonlinear phenomena and chaos in circuits, creative approaches to teaching and research, and adaptive e-books. He started Thinker*Tinker—the YouTube channel where viewers can “examine circuits, play with their equations, and make designs work.” 

Maria Lourdes Lacanilao-Edang (M.Engg) has instructed a diverse range of courses in the field of computer engineering, from Basic Electronics to Computer Design Interface Projects. Currently serving as faculty member at the University of Santo Tomas (Manila, Philippines), she specializes in the IT Automation track with particular interests in embedded systems, web and mobile app development, and IoT. 


Two Companies Use MEMS to Cool the ‘World’s Fastest Compact SSD’

AAC - Wed, 08/30/2023 - 02:00
At this year's Flash Memory Summit, two companies combined their respective cooling technology and SSD controllers to unlock unprecedented storage performance.

AMD Levels Up Gaming GPUs, Providing Mid-Tier Option in RX 7000 Series

AAC - Tue, 08/29/2023 - 20:00
The two new graphics cards improve performance per dollar and fill an important gap in AMD's RX 7000 lineup.

Toshiba ships first 2200V dual SiC MOSFET module

Semiconductor today - Tue, 08/29/2023 - 18:54
Japan-based Toshiba Electronic Devices & Storage Corp (TDSC) – which was spun off from Toshiba Corp in 2017 – has begun volume shipments of what it reckons is the industry’s first 2200V dual silicon carbide (SiC) MOSFET module for industrial equipment...

IP partnerships stir the world of FPGA chiplets

EDN Network - Tue, 08/29/2023 - 18:29

The tie-ups between IP suppliers of embedded FPGA (eFPGA) and UCIe chiplets mark a new era of FPGA chiplet integration in die-to-die connectivity. Chiplets are rapidly being adopted as heterogeneous multi-chip solutions to enable lower latency, higher bandwidth, and lower cost solutions than discrete devices connected via traditional interconnects on a PCB.

Take YorChip, a supplier of UCIe-compatible IP, which is employing QuickLogic’s eFPGA IP technology to create the first UCIe-compatible FPGA chiplet ecosystem. Unified Chiplet Interconnect Express (UCIe) is an open standard for connecting small, modular blocks of silicon called chiplets. Kash Johal, founder of YorChip, calls his company’s partnership with QuickLogic a giant leap for FPGA technology.

Figure 1 QuickLogic and YorChip have partnered to develop the industry’s first UCIe-enabled FPGA.

The two companies claim that this strategic partnership aims to enable an ecosystem allowing chiplet developers to create customized systems and use chiplets for prototyping and early-market production. QuickLogic teamed up with connectivity IP supplier eTopus in a similar tie-up last year to create a disaggregated eFPGA-enabled chiplet template solution.

QuickLogic combined its Australis eFPGA IP Generator with chiplet interfaces from eTopus to produce standard eFPGA-enabled chiplet templates. Each template will be designed with native support for chiplet interfaces, including the bunch of wires (BOW) and UCIe standards. According to QuickLogic, unlike discrete FPGAs with pre-determined resources of FPGA lookup tables (LUTs), RAM, and I/Os, the disaggregated eFPGA-enabled chiplet template will be available initially as a configurable IP and eventually as known good die (KGD) chiplets.

Figure 2 The disaggregated eFPGA chiplet template solution supports both BOW and UCIe interfaces.

Such collaborations to create eFPGA-enabled chiplet solutions mark an important trend in developing chip-to-chip interconnect technology. In April 2023, the European research institute Fraunhofer IIS/EAS entered a collaboration with eFPGA IP supplier Achronix to build a heterogeneous chiplet solution.

Fraunhofer IIS/EAS, which provides system concepts, design services and fast prototyping in most advanced packaging technologies, will use Speedcore eFPGA IP from Achronix to explore chip-to-chip transaction layer interconnects such as BOW and UCIe. One key application in this project covers the connection of high-speed analog-to-digital converters (ADCs) alongside Achronix eFPGA IP for pre-processing in radars as well as wireless and optical communication.

Figure 3 Fraunhofer IIS/EAS has selected Achronix’s eFPGAs to build a heterogeneous chiplet demonstrator.

Brian Faith, CEO of QuickLogic, says that these efforts to use eFPGA for building heterogeneous chiplets embody a new era of FPGA chiplet integration, and he sees their application in the evolving edge IoT and AI/ML markets. In any case, the design journey toward building the world of FPGA chiplets has already started, and we are likely to hear about more such partnerships incorporating FPGAs into chip-to-chip interconnect technology.


Desktop DC source shows real precision where it’s needed

EDN Network - Tue, 08/29/2023 - 13:27

I’ve always been intrigued by high-precision instruments, as they usually represent the best of engineering design, craftsmanship, elegance, and even artistry. One of the earliest and still best examples that I recall was the weigh-scale design by the late, legendary Jim Williams, published nearly 50 years ago in EDN. His piece, “This 30-ppm scale proves that analog designs aren’t dead yet,” details how he designed and built a portable, AC-powered scale for nutritional research using standard components and with extraordinary requirements: extreme resolution of 0.01 pound out of 300.00 pounds, accuracy to 30 parts per million (ppm), and no need for calibration during its lifetime.

Jim’s project was a non-production, one-off unit, and while its schematic (Figure 1) is obviously important, it tells only part of the story; there are many more lessons in his description.

Figure 1 This schematic from Jim Williams’ 1976 EDN article on design of a precision scale teaches many lessons, but there’s much more than just the schematic to understand. Source: Jim Williams

To meet his objectives, he identified every source of error or drift and then methodically minimized or eliminated each one via three techniques: using better, more accurate, more stable components; employing circuit topologies that self-cancelled some errors; and providing additional “insurance” via physical EMI and thermal barriers. In this circuit, the front end needed to extract a minuscule 600-nV signal (least significant digit) from a 5-V DC level, a very tall order.

I spoke to Jim a few years before his untimely passing, after he had written hundreds of other articles (see “A Biography of Jim Williams”), and he vividly remembered that design and the article as the event which made him realize he could be a designer, builder, and expositor of truly unique precision, mostly analog circuits.

Of course, it’s one thing to handcraft a single high-performance unit, but it’s a very different thing to build precision into a moderate production-volume instrument. Yet companies have been doing this for decades, as typified by—but certainly not limited to—Keysight Technologies (formerly known as Agilent and prior to that, Hewlett-Packard) and many others, too many to cite here.

Evidence of this is seen in the latest generation of optical test and measurement instruments, designed to capture and count single photons. That’s certainly a mark of extreme precision because individual photons generally don’t have much energy, don’t like to be assessed or captured, and self-destruct when you look at them.

I recently came across another instrument that takes a simple function to an extreme level of precision: the DC205 Precision Voltage Source from Stanford Research Systems. This desktop unit is much more than just a power supply, as it provides a low-noise, high-resolution output which is often used as a precision bias source or threshold in laboratory-science experiments (Figure 2).

Figure 2 This unassuming desktop box represents an impressively high level of precision and stability in an adjustable voltage source. Source: Stanford Research Systems

Its bipolar, four-quadrant output delivers up to 100 V with 1-μV resolution and up to 50 mA of current. It offers true 6-digit resolution with 1 ppm/°C stability (24 hours) and 0.0025 % accuracy (one year).
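To put those specifications in perspective, here is a quick back-of-the-envelope calculation (my own illustration, not taken from the manual) of what the stated tempco and one-year accuracy mean at a 10 V setting:

```python
v_set = 10.0                # example output setting, volts

tempco_ppm_per_c = 1.0      # 1 ppm/°C stability (24 hours)
accuracy_pct = 0.0025       # 0.0025 % accuracy (one year)

drift_per_degc = v_set * tempco_ppm_per_c * 1e-6
accuracy_band = v_set * accuracy_pct / 100

print(f"Drift per °C at {v_set} V setting: {drift_per_degc * 1e6:.0f} µV")  # 10 µV
print(f"One-year accuracy band:            ±{accuracy_band * 1e6:.0f} µV")  # ±250 µV
```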

Two other features caught my attention. First, it uses a linear power supply (yes, they are still important in specialty applications) to minimize output noise, presumably only for the voltage-output block rather than the entire instrument. Second, there’s the inclusion of a DB-9 RS-232 connector in addition to its USB and fiber optic interfaces. I haven’t seen an RS-232 interface in quite a while, but I presume they had a good reason to include it.

The block diagram in the User’s Manual reveals relatively little, except to indicate the unit has three core elements which combine to deliver the instrument’s performance: a low-noise, stable voltage reference; a high-resolution digital-to-analog converter; and a set of low-noise, low-distortion amplifiers (Figure 3).

Figure 3 As with Jim Williams’ scale, the core functions of the DC205 look simple, and may be so, but it is also the unrevealed details of the implementation that make the difference in achieving the desired performance. Source: Stanford Research Systems

I certainly would like to know more of the design and build details that squeeze such performance out of otherwise standard-sounding blocks.

As this unit targets lab experiments in physics, chemistry, and biology disciplines, it also includes a feature that conventional voltage sources would not: a scanning (ramping) capability. This triggerable voltage-scanning feature gives the user control over start and stop voltages, scan speed, and scan function; scan speeds are settable from 100 ms to 10,000 s, and the scan function can be either a ramp or a triangle wave. Further, for operating in the 100-V “danger zone”, the user must plug a jumper into the rear panel to deliberately and consciously allow operation in that region.

In addition to the DB-9 RS-232 interface supporting legacy I/O and an optical link for state-of-the-art I/O, I noticed another interesting feature called out in the well-written, readable, crisp, and clear user’s manual: how to change the AC-line voltage setting. Some instruments I have seen use a slide switch and some use plug-in jumpers (don’t lose them), but this instrument uses a thumbwheel rotating selector, as shown in Figure 4.

Figure 4 Even a high-end instrument must deal with different nominal power-line voltages, and this rotary switch in the unit makes changing the setting straightforward and resettable. Source: Stanford Research Systems

In short, this is a very impressive standard-production instrument with precision specifications and performance, with what seems like a very reasonable base price of around $2300.

I think about it this way: in the real world of sensors, signal conditioning, and overall analog accuracy, achieving stable, accurate performance to 1% is doable with reasonable effort; getting to 0.1% is much harder, and reaching 0.01% is a real challenge. Yet both custom and production-instrumentation designers have mastered the art and skill of going far beyond those limits.

It’s similar to when I first saw a list of fundamental physical constants, such as the mass or moment of an electron, which had been measured (not defined) to seven or eight significant figures with uncertainty only in the last digit. I felt compelled to investigate further to understand how they reached that level of precision and confidence, and how they credibly assessed their sources of error and uncertainty.

What’s the tightest measurement or signal-source accuracy and precision you have had to create? How did you confirm the desired level of performance was actually achieved—if it was?

Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.


Sh. Ravi Sharma, Chairman TEMA, has been appointed as Chairperson – IIIT Una, Himachal Pradesh

ELE Times - Tue, 08/29/2023 - 13:20

We are happy to announce that the Hon’ble President of India has graciously nominated Sh. Ravi Sharma, Chairman TEMA, as Chairperson, Board of Governors, Indian Institute of Information Technology (IIIT), Una, HP. This is a recognition of the excellent industry experience of Shri Ravi Sharma and also an honour for TEMA that its chairman has been nominated for this prestigious position.

With 48,500 members and MOU partners in 74 countries globally, TEMA is the oldest Indian industry association devoted to telecom, ICT, and cyber, as well as to the education sector.

TEMA and CMAI extend our heartiest congratulations to Shri Ravi Sharma on this noteworthy appointment and eagerly anticipate his enduring leadership, which serves as an inspiration to all of us.

A former CEO of Alcatel Lucent, Videocon Telecom, and Adani Power, Shri Ravi Sharma is currently the Chairman of two global bodies: the IIT Alumni Council and TEMA (Telecom Equipment Manufacturers Association of India). In addition, he is making valuable contributions to humanity through the “Mission Chetna Foundation”, dedicated to the noble mission of promoting goodness. Mission Chetna is spread across 11 states, and more than one crore Indians have benefited from its social services.


Notably, Shri Sharma holds the esteemed “Distinguished Alumni Award” from IIT Roorkee, a recognition cherished by all. He also serves as the chief patron of “The Great Indian Film and Literature Festival” and the “Subodhanand Foundation”, a trust devoted to Vedanta studies and teachings.

Prof NK Goyal, Chairman Emeritus TEMA, said: “This will go a long way for industry-academia cooperation for the people of Himachal Pradesh, and Mr Sharma’s leadership will guide IIIT Una in becoming one of the best Institutes of Excellence. The IIIT will help in innovation and in creating startups engaged in new technologies such as AI, 6G, quantum, and others, thus helping to achieve our Hon’ble Prime Minister Shri Narendra Modi’s vision of making India a developed nation by 2047. It will empower Himachal Pradesh and India through information and communication technologies.”

It may be recalled that in May 2023, TEMA/CMAI announced its initiative of collaboration with educational institutions, forging partnerships to enhance digital learning opportunities for students across the country. Through these collaborations, TEMA proposes to establish high-speed internet connectivity in educational institutes and provide access to educational resources, thereby empowering the next generation with knowledge and skills for a digital future.


Using Machine Learning to Characterize Database Workloads

ELE Times - Tue, 08/29/2023 - 13:04

Databases have been helping us manage our data for decades. Like much of the technology that we work with on a daily basis, we may begin to take them for granted and miss the opportunities to examine our use of them—and especially their cost.

For example, Intel stores much of its vast volume of manufacturing data in a massively parallel processing (MPP) relational database management system (RDBMS). To keep data management costs under control, Intel IT decided to evaluate our current MPP RDBMS against alternative solutions. Before we could do that, we needed to better understand our database workloads and define a benchmark that is a good representation of those workloads. We knew that thousands of manufacturing engineers queried the data, and we knew how much data was being ingested into the system. However, we needed more details.

“What types of jobs make up the overall database workload?”  

“What are the queries like?”

“How many concurrent users are there for each kind of a query?”

Let me present an example to better illustrate the type of information we needed.

Imagine that you’ve decided to open a beauty salon in your hometown. You want to build a facility that can meet today’s demand for services as well as accommodate business growth. You should estimate how many people will be in the shop at the peak time, so you know how many stations to set up. You need to decide what services you will offer. How many people you can serve depends on three factors: 1) the speed at which the beauticians work; 2) how many beauticians are working; and 3) what services the customer wants (just a trim, or a manicure, a hair coloring and a massage, for example). The “workload” in this case is a function of what the customers want and how many customers there are. But that also varies over time. Perhaps there are periods of time when a lot of customers just want trims. During other periods (say, before Valentine’s Day), both trims and hair coloring are in demand, and yet at other times a massage might be almost the only demand (say, people using all those massage gift cards they just got on Valentine’s Day). It may even be seemingly random, unrelated to any calendar event. If you get more customers at a peak time and you don’t have enough stations or qualified beauticians, people will have to wait, and some may deem it too crowded and walk away.


So now let’s return to the database. For our MPP RDBMS, the “services” are the different types of interactions between the database and the engineers (consumption) and the systems that are sending data (ingestion). Ingestion consists of standard extraction-transformation-loading (ETL), critical path ETL, bulk loads, and within-DB insert/update/delete requests (both large and small). Consumption consists of reports and queries—some run as batch jobs, some ad hoc.

At the outset of our workload characterization, we wanted to identify the kinds of database “services” that were being performed. We knew that, like a trim versus a full service in the beauty salon example, SQL requests could be very simple or very complex or somewhere in between. What we didn’t know was how to generalize a large variety of these requests into something more manageable without missing something important. Rather than trusting our gut feel, we wanted to be methodical about it. We took a novel approach to developing a full understanding of the SQL requests: we decided to apply Machine Learning (ML) techniques including k-means clustering and Classification and Regression Trees (CARTs).

  • k-means clustering groups similar data points according to underlying patterns.
  • CART is a predictive algorithm that produces human-readable criteria for splitting data into reasonably pure subgroups.

In our beauty salon example, we might use k-means clustering and CART to analyze customers and identify groups with similarities such as “just hair services,” “hair and nail services,” and “just nail services.”

For our database, our k-means clustering and CART efforts revealed that ETL requests consisted of seven clusters (predicted by CPU time, highest thread I/O, and running time) and SQL requests could be grouped into six clusters (based on CPU time).
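As a rough illustration of the two techniques (hypothetical data and feature names, not Intel's actual pipeline), the sketch below clusters per-request resource metrics with k-means and then fits a CART-style decision tree to the resulting labels so the cluster boundaries can be read off as simple rules:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical per-request metrics: [cpu_time_s, max_thread_io_mb, run_time_s]
rng = np.random.default_rng(0)
requests = np.vstack([
    rng.normal([0.5, 10, 1], [0.2, 3, 0.5], (200, 3)),   # light ad-hoc queries
    rng.normal([5, 50, 10], [1, 10, 2], (200, 3)),       # mid-sized reports
    rng.normal([30, 500, 60], [5, 100, 10], (200, 3)),   # heavy ETL-like jobs
])

# Step 1: group similar requests with k-means
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(requests)

# Step 2: fit a CART tree to the cluster labels to get human-readable split rules
cart = DecisionTreeClassifier(max_depth=3, random_state=0).fit(requests, labels)
print(export_text(cart, feature_names=["cpu_time_s", "max_thread_io_mb", "run_time_s"]))
```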

Once we had our groupings, we could take the next step, which was to characterize various peak periods. The goal was to identify something equivalent to “regular,” “just before Valentine’s” and “just after Valentine’s” workload types—but without really knowing upfront about any “Valentine’s Day” events. We started by generating counts of requests per each group per each hour based on months of historical database logs. Next, we used k-means clustering again, this time to create clusters of one-hour slots that are similar to each other with respect to their counts of requests per group. Finally, we picked a few one-hour slots from each cluster that had the highest overall CPU utilization to create sample workloads.
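Continuing the same hypothetical setup, a minimal sketch of the hour-slot step: each row is one hour, each column counts requests from one of the groups found earlier, k-means groups the hours into workload types, and the busiest hour in each type is kept as a sample workload.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: rows = one-hour slots, columns = request counts per group
rng = np.random.default_rng(1)
hourly_counts = rng.poisson(lam=[50, 5, 20, 2, 10, 1], size=(24 * 90, 6))  # ~3 months of hours
cpu_utilization = rng.uniform(0.2, 0.95, size=hourly_counts.shape[0])      # per-hour CPU load

# Cluster the hours into a handful of workload types
workload_types = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(hourly_counts)

# From each workload type, pick the hour with the highest CPU utilization as a sample workload
for w in range(4):
    hours = np.flatnonzero(workload_types == w)
    busiest = hours[np.argmax(cpu_utilization[hours])]
    print(f"workload type {w}: sample hour {busiest}, "
          f"CPU {cpu_utilization[busiest]:.0%}, counts {hourly_counts[busiest]}")
```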

The best thing about this process was that it was driven by data and reliable ML-based insights. (This is not the case with my post-Valentine’s massages-only conjecture, because I didn’t have any gift cards.) The workload characterization was essential to benchmarking the cost and performance of our existing MPP RDBMS and several alternatives. You can read the IT@Intel white paper, “Minimizing Manufacturing Data Management Costs,” for a full discussion of how we created a custom benchmark and then conducted several proofs of concept with vendors to run the benchmark.

Miroslav Dzakovic | Intel


Eliminating the need for an MCU (and coding) in Highly Efficient AC/DC Power Supplies

ELE Times - Tue, 08/29/2023 - 12:31

JON HARPER | Onsemi

Grid power is AC for many good reasons, yet almost every device requires DC power to operate. This means that AC-DC power supplies are used almost everywhere, and, in a time of environmental awareness and rising energy costs, their efficiency is critical to controlling operating costs and using energy wisely.

Simply put, efficiency is the ratio of output power to input power. However, the input power factor (PF) must also be considered; this is the ratio between useful (true) power and total (apparent) power in any AC-powered device, including power supplies.

With a purely resistive load, the PF will be 1.00 (‘unity’), but a reactive load will decrease the PF as the apparent power rises, leading to reduced efficiency. A less-than-unity PF results from out-of-phase voltage and current, significant harmonic content, or a distorted current waveform, which is common in discontinuous electronic loads such as switched-mode power supplies (SMPS).
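To make that concrete, here is a small worked example (illustrative numbers, not from the article) using the common decomposition of true power factor into a displacement term, cos(φ), and a distortion term, 1/√(1 + THD²), for a sinusoidal supply voltage:

```python
import math

def power_factor(phase_deg, thd):
    """True power factor from displacement (phase shift) and distortion (THD),
    assuming a sinusoidal supply voltage and a distorted current waveform."""
    displacement = math.cos(math.radians(phase_deg))
    distortion = 1.0 / math.sqrt(1.0 + thd ** 2)
    return displacement * distortion

print(f"Purely resistive load:          PF = {power_factor(0, 0.0):.3f}")   # 1.000
print(f"30° current phase lag only:     PF = {power_factor(30, 0.0):.3f}")  # ~0.866
print(f"Rectifier-fed SMPS, ~100% THD:  PF = {power_factor(0, 1.0):.3f}")   # ~0.707
```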


PF Correction

Given the impact that a low PF has on efficiency, legislation requires designers of products above 70 W to incorporate circuitry that corrects the PF to a value close to unity. Often, active PF correction (PFC) employs a boost converter that converts the rectified mains to a high DC level. This rail is then regulated using pulse width modulation (PWM) or other techniques.

This approach generally works and is simple to deploy. However, modern efficiency requirements such as the challenging 80 Plus Titanium standard stipulate efficiency across a wide operating power range, requiring peak efficiencies of 96% at half load. This means the line rectification and PFC stage must achieve 98%, as the following PWM DC-DC stage will lose a further 2%. Achieving this is very challenging due to the losses in the diodes of the bridge rectifier.
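That efficiency budget can be checked with a line of arithmetic: cascaded stage efficiencies multiply, so with a 96% overall target at half load and roughly 2% lost in the downstream DC-DC stage, the rectification-plus-PFC front end must reach about 98%.

```python
# Cascaded stages multiply: eta_total = eta_pfc * eta_dcdc
eta_dcdc = 0.98        # downstream PWM DC-DC stage loses a further 2%
eta_target = 0.96      # 80 Plus Titanium peak requirement at half load

eta_pfc_required = eta_target / eta_dcdc
print(f"Required front-end (rectifier + PFC) efficiency: {eta_pfc_required:.1%}")  # ~98.0%
```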

Replacing the boost diode with a synchronous rectifier helps, and the two line-rectifier diodes can be similarly replaced, which further enhances efficiency. This topology is referred to as totem pole PFC (TPPFC) and, in theory, with an ideal inductor and perfect switches, efficiency approaches 100%. While silicon MOSFETs offer good performance, wide bandgap (WBG) devices come far closer to ‘ideal’ performance.

Figure 1: Simplified Totem Pole PFC Topology

Dealing with Losses

As designers increase frequency to reduce the size of magnetic components, dynamic losses in switching devices will also increase. As these losses can be significant with silicon MOSFETs, designers are turning to WBG materials including silicon carbide (SiC) and gallium nitride (GaN) – especially for TPPFC applications.

Critical conduction mode (CrM) is generally the preferred approach for TPPFC designs at power levels up to a few hundred watts, balancing efficiency and EMI performance. In kilowatt designs, continuous conduction mode (CCM) further reduces RMS current within switches, reducing conduction loss.

Figure 2: Typical PFC Circuits: Conventional boost (left) and Bridgeless Totem Pole (right)

Even CrM can see an efficiency drop approaching 10% at light loads, which is a roadblock to achieving 80 Plus Titanium. Clamping (‘folding back’) the maximum frequency forces the circuit into DCM at light loads, thereby significantly reducing peak currents.

Overcoming Design Complexity

With four active devices to be driven synchronously and the need to detect the inductor’s zero-current crossing to force CrM, TPPFC design can be far from trivial. Additionally, the circuit must switch in and out of DCM while maintaining a high power factor and generating a PWM signal to regulate the output, as well as providing circuit protection (such as overcurrent and overvoltage).

The obvious way to address these complexities is to deploy a microcontroller (MCU) for the control algorithms. However, this requires the generation and debugging of code, which add significant effort and risk to the design.


CrM-based TPPFC without Coding

However, time-consuming coding can be avoided by using a fully integrated TPPFC control solution. These devices offer several advantages, including high performance, faster design time, and reduced design risk, as they eliminate the need to implement an MCU and associated code.

A good example of this type of device is onsemi’s NCP1680 mixed-signal TPPFC controller that operates in constant on-time CrM, thereby delivering excellent efficiency across a wide load range. The integrated device features ‘valley switching’ during frequency foldback at light loads to enhance efficiency by switching at a voltage minimum. The digital voltage control loop is internally compensated to optimize performance throughout the load range, while ensuring that the design process remains simple.

The innovative TPPFC controller includes a novel low-loss approach to current sensing, and its cycle-by-cycle current limiting offers substantial protection without the need for an external Hall-effect sensor, thereby reducing complexity, size, and cost.

Figure 4: NCP1680 Typical Application Schematic

A full suite of control algorithms is embedded within the IC, giving designers a low-risk, tried-and-tested solution that delivers high performance at a cost-effective price point.


Factory automation realizes boost from new technologies

ELE Times - Tue, 08/29/2023 - 12:05

PHILIP LING | Avnet

Factory automation strategies are reaping benefits now from several technologies and their enabling elements.

Automation has a long history, and it has played an essential role in all industrial markets. Repeatable manufacturing to high quality and in high volume is the essence of industrialization. The cost of the finished product can be directly related to the level of automation in the manufacturing process.

Continuous advancements in automation deliver mechanical excellence. That excellence, in turn, relies on control. Manufacturers must balance the technologies used to implement this level of control against commercial considerations. These include the cost of development and deployment (capital expenditure) and the recurring cost of implementation (operational expenditure).

New technologies can impact factory automation when manufacturers align capex and opex while accurately assessing total cost of ownership. This article examines some technologies that meet this requirement and some that may influence the direction of automation.

Industrial automation technologies

Several technologies with their corresponding enabling elements are impacting industrial automation right now. The competitive advantage of Industrial Internet of Things (IIoT) will increase as these technologies become more pervasive.

  • Single-pair Ethernet
  • Edge computing
  • Time-sensitive networking
Single-pair Ethernet is being used in industrial applications to reduce wiring cost and complexity and to provide a common physical interface.

Wide area networking (WAN) has changed every aspect of modern life. Consumers enjoy internet access anywhere, even on a transatlantic flight. Its use in IIoT means information technology and operational technology are colliding, and the way connectivity is used is still developing. Different parts of the ecosystem are at different stages of their IIoT journey.

The industrial sector is now largely aligned on the use of Ethernet to support an IP architecture. There is also growing momentum behind single-pair Ethernet (IEEE 802.3cg) in industrial automation. The move to single-pair Ethernet (SPE), which the automotive market developed, provides a simplified network at the physical level. It offers both data and power on the same two wires, speeds of 10 Mbps, reaches of 1000 meters, and support for multidrop configuration.

The development of SPE is helping to bring Ethernet into an environment where single-pair connectivity has long been the preferred solution. SPE’s significance will increase as support grows. An example of this is the advanced physical layer (APL) developed by leaders in the industrial sector. Ethernet-APL uses the 10BASE-T1L part of the standard, plus extensions. Ethernet-APL covers physical layer attributes, including power, connectors and cables. The Ethernet-APL layer is also specified for use in hazardous areas.

The Ethernet-APL group comprises OPC Foundation, Profibus, FieldComm Group, and ODVA. The physical layer supports various high-level network protocols, including EtherNet/IP, HART-IP, OPC UA, and Profinet.

Edge computing

The IIoT introduced cloud computing to the factory floor. Cloud platforms play an important role in data aggregation, its analysis and distribution to back-office applications. Edge computing puts the power of the cloud directly on the production line.

An edge computing solution employs high-end processors running cloud-level software on a local device. That device connects directly to the manufacturing equipment. There are several reasons why edge computing is popular.

First, it allows some of, or all, the operational data to stay inside the organization’s walls. There are good security imperatives for taking this approach. A further reason is to simply minimize the cost of moving data around. Another is to avoid the latency associated with processing time-sensitive data in a cloud platform.

Second, edge computing creates a contained environment that enables manufacturers to take greater control over their processes. This work cell approach can support distributed and separate workflows that provide greater flexibility over how assets are deployed. An edge computer can turn a small cluster of machines into a discrete manufacturing process that can operate outside a wider manufacturing environment.

The concept of edge computing goes beyond securing data or minimizing cloud transfers. It supports trends such as micromanufacturing or on-demand manufacturing.

Industrial automation is using IoT to collect data. Edge processing provides the local intelligence to use this data. Further along, the processed data and insights are sent to the cloud.

Time-sensitive networking

As the IT and OT networks continue to merge, the need for time-sensitive networking (TSN) has increased. The IEEE Standards Association is working on several profiles for time-sensitive networking in various verticals, including industrial automation.

The purpose of the specification is to support time-critical packets on an Ethernet network. It achieves this using three mechanisms. The first is to prioritize Ethernet frames that are time critical by delaying frames that are not; transmission time is one tool used to set priorities. The second looks at the frame length to determine whether a frame can be sent without disrupting higher-priority traffic. The third is to build fault-tolerant networks with multiple paths to avoid latency.
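As a simplified illustration of the frame-length check described above (not an implementation of the IEEE TSN standards), the sketch below decides whether a best-effort frame can finish transmitting, plus a small guard band, before the next reserved window for time-critical traffic opens:

```python
def can_send_now(frame_bytes, link_bps, now_us, next_critical_window_us, guard_us=1.0):
    """Return True if a best-effort frame can complete transmission before the
    next time-critical window opens. Simplified illustration only; real TSN
    scheduling (e.g., IEEE 802.1Qbv gates) is considerably more involved."""
    tx_time_us = frame_bytes * 8 / link_bps * 1e6
    return now_us + tx_time_us + guard_us <= next_critical_window_us

# A 1500-byte frame on a 100 Mbit/s link takes about 120 µs to transmit
print(can_send_now(1500, 100e6, now_us=0.0, next_critical_window_us=200.0))  # True
print(can_send_now(1500, 100e6, now_us=0.0, next_critical_window_us=100.0))  # False
```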

Semiconductor manufacturers are now implementing these features at the chip level. Multi-chip solutions are evolving into single-chip or system-on-chip product offerings. This will continue in parallel with efforts to move to standard application protocols.


Expect the cost and complexity of implementing TSN to come down quite rapidly. Not all manufacturers will see a benefit or need for TSN, at least not immediately. As IIoT pervades the manufacturing environment, TSN is likely to feature more strongly.

Technologies that will impact factory automation

Industrial equipment has a long operational lifetime. This means change can be slow in comparison to other markets. As an example, wireless mesh networks are still mostly limited to connecting sensors in an industrial environment. Wired connectivity is still dominant for control.

However, there is also more cross-pollination between verticals, encouraged by wide area networking. Many of the technologies that have been created in – or are dependent on – IT are making their way into the OT world. Some of the prominent and most promising technologies include:

  • Digital twins
  • Blockchain
  • Microservices
Digital twins

The idea of operating duplicate systems, or twins, in different environments goes back to NASA’s early days. A twin can be used to replicate and react to operational data happening somewhere else, even off planet. Moving to the digital domain has enabled the concept to be more cost efficient and, potentially, more flexible.

Digital twinning involves modelling an action, rather than simulating it. The difference lies in the twin using real-world data. This is where IoT technologies play a part. Sensors are the primary source of data.

Digital twins are used in industrial automation to monitor and model real-world assets using data from the asset. This can help generate insights to boost productivity, and the insights can be fed back to the real-world asset.

It becomes feasible to use digital twins as manufacturers deploy more sensors on industrial equipment and couple them with high-speed networking.

Recent developments indicate OEMs are now implementing digital twins at a work cell level. This makes it easier to model part of a system as a function, rather than trying to model an entire factory.

Using multiple digital twins will become more common with the development of edge processing. It follows that the two are closely coupled, as edge processing is effective at a local level. Although edge processing is not dependent on digital twinning, the symbiosis is apparent.

Blockchain

In a manufacturing environment, the term “blockchain” can be closely associated with supply chain. Engineers have discussed the concept of using blockchain technology to authenticate and track the products in the supply chain for several years.

Part of the potential in adopting IoT comes from the commercialization of information. Trust will be an important part of the success. Using blockchain to provide evidence of authenticity could be key.

The move toward providing something as a service is also building momentum. Here, blockchain could be used to validate the hardware platform delivering that service. If the service relies on genuine parts being fitted to a system, blockchain could be the best way of authenticating those parts.

Microservices

If a theme is emerging in industrial automation’s evolution, perhaps it’s around making work cells more intelligent. Edge computing and soon digital twins are focused on work cells and modular functionality.

Modularity at a software level is one way to describe microservices. The methodology is now common in cloud platforms. A microservice architecture is more agile, more scalable and easier to maintain than large monolithic software structures.

The diversity in industrial automation processes suggests microservices will become more common here, too. Flexibility on the shop floor will mean machines can be repurposed more frequently. Using a microservice approach will support that flexibility.

AI in industrial automation

There is enormous scope for AI to impact industrial automation. Current examples of AI demonstrate that the technology is good at following procedures and adapting within known parameters. Its real strength comes from reacting to the unexpected in a predictable way.

Using AI in this way should improve procedural operations that are handled by programmable logic controllers (PLCs). AI can also now write the ladder logic that configures the PLCs. This scenario uses AI in a mechanical way to augment a function.

Putting AI into human-centric operations may be the next phase. In this scenario, the AI would need to “think” like an operator. It would at first assist and, potentially, in time displace the human in the loop.

Conclusion

Industrial automation is constantly developing. New technologies, often from other market verticals, provide the momentum for improvement. Caution is always used, but the pace of change seems to be increasing.

Studies show a widening productivity gap between large OEMs, which can afford to implement new technologies more aggressively, and smaller enterprises. As access to these technologies improves and the total cost of ownership comes down, this gap may once again close.


EPCSpace adds rad-hard GaN devices in high-current G-Package

Semiconductor today - Tue, 08/29/2023 - 11:52
EPC Space LLC of Haverhill, MA, USA has introduced two new radiation-hardened (rad-hard) gallium nitride (GaN) transistors with ultra-low on-resistance and high-current capability for high-power-density solutions that are said to be lower cost and more efficient than the nearest comparable rad-hard silicon MOSFET. The devices are supplied in hermetic packages in very small footprints. ..

5 Tips to Deploy Cobots Effectively

ELE Times - Tue, 08/29/2023 - 10:56

Businesses can effectively deploy cobots in manufacturing by leveraging the flexibility of cobots to maximize the ROI of human-robot collaboration. A thorough planning process is central to the success of any cobot implementation. Businesses have to ensure they are taking steps to prepare employees and protect their safety at work. What other tips can businesses use to ensure they maximize the potential of their cobots?

1. Perform a Risk Assessment

The first step to effectively deploy cobots in manufacturing is understanding the potential risks they pose. Modern robots are typically designed with high safety standards in mind. However, any new piece of machinery in the workplace can pose risks to employees and property.

Conducting a thorough risk assessment will allow businesses to understand their unique risk factors. This process involves analyzing the area where the robot is going to be installed as well as the job it is going to perform. It is best to do a risk assessment in a team, since different people may think of different possible risks.

For example, ideally every employee on site is wearing their PPE, but the risk assessment team can’t count on this. They have to consider every possibility, including potential safety hazards resulting from accidents or negligent behavior. The same applies to the cobots. To effectively deploy cobots, businesses have to be prepared for mechanical malfunctions.

Identify as many risks as possible and rank them numerically based on the degree of risk. Factors like potential injury severity or repair costs can contribute to a risk’s numerical ranking.
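One simple way to put numbers on this, sketched below with made-up entries, is a severity-times-likelihood score from a classic risk matrix, which lets the team sort hazards and tackle the highest-ranked ones first:

```python
# Hypothetical risk register for a cobot work cell (illustrative entries only)
risks = [
    {"hazard": "pinch point at gripper",  "severity": 4, "likelihood": 2, "repair_cost": 500},
    {"hazard": "collision with operator", "severity": 5, "likelihood": 1, "repair_cost": 0},
    {"hazard": "dropped workpiece",       "severity": 2, "likelihood": 4, "repair_cost": 50},
]

# Classic 5x5 risk-matrix score: severity (1-5) times likelihood (1-5)
for r in risks:
    r["score"] = r["severity"] * r["likelihood"]

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f'{r["hazard"]:<25} severity={r["severity"]} likelihood={r["likelihood"]} score={r["score"]}')
```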

2. Identify the Right Applications

One of the most common speed bumps when attempting to effectively deploy cobots is finding the right application. Businesses have to keep in mind that cobots have unique advantages compared to conventional robots. They need to be integrated in a highly strategic manner to maximize the ROI they deliver.

One of the top benefits of cobots is their high level of adaptability and flexibility. They are able to perform a greater variety of tasks compared to conventional robots. They can also do this in close proximity to humans without posing a high degree of risk. Businesses can factor this into their planning for their new robot integration to identify ideal applications.

For example, a cobot is perfect for streamlining assembly lines or packing processes. The cobots can automate simple, repetitive steps in the packing process, such as applying plastic straps to boxes. Due to the human-focused nature of cobots, they are able to do this alongside humans working on more complex tasks in the same assembly line.

One of the most important steps to effectively deploy cobots in manufacturing is looking for applications like this. Find opportunities to leverage the flexibility and safety features of cobots so they can augment the skills of human coworkers.


3. Ensure Employees Are Prepared

A common stumbling block in robot integrations is lack of employee preparedness. Businesses may be so focused on the robot side of things that they forget the role employees play in a successful robot integration.

Employees need the skills and knowledge to understand the new robots well so they can confidently work alongside them. This goes for all employees, not just those who will be directly operating or interacting with the robots. Thorough cobot training is crucial for minimizing safety risks and preventing robot-related accidents.

Cobot training is also crucial for ensuring effective collaboration between robots and employees. Surveys estimate that 29 to 47% of jobs have been “taken over” by robots. People who had already been displaced by robots estimated this number to be higher, indicating possible resentment or fear surrounding the role of robots in the workplace.

Businesses need to be aware of employees’ concerns about their job security any time a new robot enters the workplace. Cobots are designed to work with humans, not replace them. Effective cobot training can instill confidence in employees and relieve fears that they are being replaced. This will help ensure the new robot integration goes as smoothly as possible.

4. Make Safety a Top Priority

In addition to completing a risk assessment, businesses also need to act on the known risks associated with a cobot integration. Making safety a top priority is critical in order to effectively deploy cobots. It’s a two-way street, as well – cobots can improve employee safety when integrated well. Businesses can apply cobots to high-risk tasks, freeing up employees to concentrate on safer roles.

There are many steps businesses can take to ensure cobot safety. However, cobots don’t usually require the extensive safety measures needed for conventional robots. For instance, cobots don’t need large safety cages to keep employees away.

Examples of common cobot safety measures include sensors and emergency stop controls. Most cobots come with some safety features built in, as well. Businesses can use proximity sensors to allow the cobot to sense when people or objects are nearby.

These sensors can be used to program auto-stop functions that trigger any time a person or object gets within a certain radius of the robot. Make sure to factor in the full range of any robotic appendages, such as claws or arms.

5. Monitor and Analyze Performance

Planning and preparation are vital to success with robotics. However, what happens after the integration is installed is just as important in order to effectively deploy cobots. Businesses need to monitor and analyze the performance of their cobots to maximize their ROI.

Effective performance monitoring allows businesses to make adjustments to their cobot setup, optimizing it for better efficiency or safety. Having clear, measurable benchmarks and goals in mind is vital for this process to be successful. Businesses should have a clear idea of what they are hoping their cobots will achieve.

For example, a business might install cobots to improve productivity on a box packing assembly line. They might measure how many boxes pass through the assembly line in a certain amount of time or track how long each step of the packing process takes. By comparing these numbers to performance metrics before the cobot was installed, the business can tell how the cobot is performing.

Continuously monitor cobot performance and look for ways to improve the integration. Sometimes this may involve repurposing the cobot to a new application where it could be more effective. Don’t be afraid to consider new ideas and applications if the first one is not showing a good ROI even after optimization attempts.

How to Effectively Deploy Cobots In Manufacturing

Businesses can effectively deploy cobots by combining plenty of preparation with a strong performance monitoring strategy. Cobots are much safer and more adaptable than conventional robots, offering many possible applications for businesses.

Remember to make employees a central part of the process of adopting a cobot. The collaboration between cobots and employees is vital to a successful integration. Safety, training and risk awareness are also key to achieving a good ROI on any new cobot.

EMILY NEWTON | Revolutionized


Rohm’s Automotive Hall ICs Offer ‘Industry-Leading Withstand Voltage’

AAC - Tue, 08/29/2023 - 02:00
Rohm has devised a new unipolar and latch Hall-effect sensor to better detect magnetic fields in automotive designs.

Semiconductor Industry News: Recent IPOs, Acquisitions, and Fallen Deals

AAC - Mon, 08/28/2023 - 20:00
Amid the Chips Act, labor shortages, and international trade tension, the semiconductor industry is anything but predictable.

Exploring the endless applications of SuperSpeed USB

ELE Times - Mon, 08/28/2023 - 15:01

By Jimmychou95 | Infineon

Have you ever wondered how machines are able to “see” and understand the world around them? It’s all thanks to a fascinating field called machine vision. While machine vision has traditionally been used for industrial automation, its potential applications extend far beyond that.

Today, I am thrilled to share with you a deeper understanding of the remarkable applications of SuperSpeed USB in the context of machine vision. However, before we dive in, let me take a moment to give you a brief overview of USB3 Vision:


One bandwidth-hungry application that can benefit from SuperSpeed USB, or USB 3.0, is machine vision. Machine vision essentially gives machines the ability to see by using cameras; it relies on image sensors and specialized optics to acquire images so that computer hardware and software can process and analyze various characteristics of the captured images for decision-making. As image sensors become more advanced, with higher resolution, higher frame rates, and deeper color, the amount of data generated for a captured image has grown exponentially. SuperSpeed USB, with available bandwidth up to 20 Gbps, is naturally an interface of consideration for machine vision cameras. To promote the adoption of SuperSpeed USB in machine vision, an industry standard called USB3 Vision was born.
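To see why higher resolution, frame rate, and bit depth push cameras toward SuperSpeed USB, this sketch estimates raw, uncompressed sensor throughput and compares it with nominal USB signaling rates. The camera modes are hypothetical examples, and real-world USB throughput is lower once protocol overhead is included.

```python
def raw_throughput_gbps(width, height, fps, bits_per_pixel):
    """Uncompressed image-stream data rate in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

# Example camera modes (hypothetical, for illustration)
modes = [
    ("1080p, 60 fps, 8-bit mono", 1920, 1080, 60, 8),
    ("4K, 60 fps, 24-bit color",  3840, 2160, 60, 24),
    ("12 MP, 30 fps, 12-bit raw", 4000, 3000, 30, 12),
]

# Nominal signaling rates in Gbps (actual usable bandwidth is lower)
usb_rates = {"USB 2.0": 0.48, "USB 3.0 (Gen 1)": 5.0, "USB 3.2 Gen 2x2": 20.0}

for name, w, h, fps, bpp in modes:
    rate = raw_throughput_gbps(w, h, fps, bpp)
    fits = [bus for bus, cap in usb_rates.items() if rate < cap]
    print(f"{name}: {rate:.2f} Gbps raw -> fits on: {', '.join(fits) or 'none of these'}")
```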

 

Top potential applications of machine vision

Machine vision can be used to improve the quality, accuracy, and efficiency of many different types of applications, and it becomes especially powerful when combined with artificial intelligence and machine learning. This combination enables fast and autonomous decision-making, which is the essence of any type of automation. Let’s take an example: a defect inspection system in a factory could use an inspection camera to take a high-resolution picture of each product on the production line; the machine vision software would then analyze the image and issue a pass or fail response based on predetermined acceptance criteria.

Machine Vision Technology in Medicine

Machine vision and machine learning are being used increasingly in the medical field, for tasks like detection, monitoring, and training. Machine vision is especially good at motion analysis, and can be used to detect neurological and musculoskeletal problems by tracking a person’s movement. This technology can also be used for things like home-based rehabilitation and remote patient monitoring, which could be especially beneficial for elderly patients.

 

Machine Vision Technology in Agriculture

In recent years, the agricultural industry has witnessed a significant rise in the adoption of machine vision technology, showing a lot of promise for reducing production costs and boosting productivity. Machine vision can be used for additional activities like livestock management, plant health monitoring, harvest prediction, and weather analysis. By automating these processes, we can create a smart food supply chain that doesn’t require as much human supervision. Machine vision’s biggest advantage is being able to automate decision-making through non-invasive, low-cost methods that collect data and perform analytics. In plant farming, for example, yield estimation is a critical preharvest process, and by improving its accuracy farmers could better allocate transportation, labor, and supplies.

 

Machine Vision Technology in Transportation

For a long time, computer-aided vision has been used to help with vehicle classification in transportation, but as the technology has rapidly evolved, we can now do things such as large-scale traffic analysis and vehicle identification. Using the latest smart cameras, we can achieve accurate vehicle classification and identification; this can improve traffic management, safety monitoring, toll collection, and law enforcement. In fact, the proliferation of traffic cameras has greatly reduced the need for in-person traffic enforcement, as the cameras can operate 24/7 to catch moving violations at any time. With further advancements in machine learning, image analytics can now be applied to traffic cameras to help direct traffic flow, monitor street safety, and reduce congestion for an entire city, saving time, fuel, and resources on a large scale.

 

Machine Vision Technology in Retail

Machine vision is a useful tool for retailers who want to improve the customer experience and increase sales: by training machine learning algorithms with data examples, retailers can anonymously track customers in their store to collect data about foot traffic, waiting times, queueing time, etc. This data can then be used to optimize store layouts, reduce crowding, and ultimately improve customer satisfaction. To prevent impatient customers from waiting in long lines, retailers are also using machine vision to detect queues and manage them more efficiently.

 

Machine Vision Technology in Sports

Technology is increasingly being used to help athletes perform better in sports. From computer-generated analysis to cognitive coaching, from injury prevention to automated refereeing, technology is now playing a major role in almost every aspect of sports. One area that has seen a particular growth in recent years is the use of machine vision and AI in training, coaching, and injury prevention: it’s all about using smart cameras to track and analyze the movement of an athlete. The system monitors various ranges of motion, analyzes them in real-time and provides instant feedback. In recent years, smart cameras have become so sophisticated that even the smallest body movement can be tracked precisely down to limbs and joints.

By fully embracing USB 3.0-enabled machine vision, factories around the world are quickly and reliably automating and solving complex manufacturing issues. The same benefits are shared with a wide range of other industries, including healthcare, agriculture, transportation, retail, sports, and many more. Together with leading machine vision manufacturers around the world, Infineon is accelerating the automation revolution with EZ-USB FX3-based cameras, scanners, and video capture systems. Additionally, Infineon looks forward to enabling new applications and empowering new customers with its next generation of 5 and 10 Gbps solutions coming by the end of 2023.

The post Exploring the endless applications of SuperSpeed USB appeared first on ELE Times.

How do robots see? Robotic vision systems

ELE Times - Mon, 08/28/2023 - 14:38

Jeremy Cook | Arrow

The short answer to the question, “How do robots see?” is via machine vision or industrial vision systems. The details are much more involved. In this article, we’ll frame the question around physical robots that accomplish a real-world task, rather than software-only applications used for filtering visual materials on the internet.

Machine vision systems capture images with a digital camera (or multiple cameras), processing this data on a frame-by-frame basis. The robot uses this interpreted data to interact with the physical world via a robotic arm, mobile agricultural system, automated security setup, or any number of other applications.
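
In practice, the frame-by-frame loop at the heart of such a system is short. The sketch below, assuming OpenCV and a camera at index 0, grabs frames one at a time and hands each to a placeholder processing function where the real interpretation (detection, measurement, and so on) would go.

```python
# Minimal sketch of a frame-by-frame machine vision loop with OpenCV.
# Assumptions: a camera is available at index 0; process_frame() is a placeholder.
import cv2

def process_frame(frame):
    """Placeholder for real interpretation (detection, measurement, etc.)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.mean(gray)[0]                 # average brightness as a stand-in result

cap = cv2.VideoCapture(0)
for _ in range(100):                         # process 100 frames, then stop
    ok, frame = cap.read()                   # one frame per loop iteration
    if not ok:
        break
    result = process_frame(frame)
    # A robot controller would act on `result` here (move an arm, stop a vehicle, ...).
    print(f"Frame brightness: {result:.1f}")
cap.release()
```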

Computer vision became prominent in the latter part of the twentieth century, using hard-coded criteria to determine simple facts about captured visual data. Text recognition is one such basic application; inspecting an industrial assembly for the presence of component x or the size of hole y are others. Today, computer vision applications have expanded dramatically by incorporating AI and machine learning.
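
A rules-based check of this kind can be only a few lines long. The sketch below, assuming an image of a machined part where the hole shows up as a dark circle on a bright surface, measures the largest hole and flags it against a hard-coded tolerance; the file name and the 48 to 52 pixel tolerance are arbitrary examples.

```python
# Minimal sketch of a hard-coded "size of hole y" inspection with OpenCV.
# Assumptions: "part.png" shows the hole as a dark circle on a bright part;
# the 48-52 px diameter tolerance is an arbitrary example value.
import cv2

gray = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
hole = max(contours, key=cv2.contourArea)          # assume the biggest dark blob is the hole
(_, _), radius = cv2.minEnclosingCircle(hole)
diameter_px = 2 * radius

print(f"Hole diameter: {diameter_px:.1f} px")
print("PASS" if 48 <= diameter_px <= 52 else "FAIL")
```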

Importance of machine vision

While vision systems based on specific criteria are still in use, machine vision is now capable of much more, thanks to AI-based processing. In this paradigm, robot vision systems are no longer programmed explicitly to recognize conditions like a collection of pixels (a so-called “blob”) in the correct position. A robot vision system can instead be trained with a dataset of bad and good parts, conditions, or scenarios to allow it to generate its own rules. So equipped, it can manage tasks like unlocking a door for humans and not animals, watering plants that look dry, or moving an autonomous vehicle when the stoplight is green.
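
To make the train-instead-of-program idea concrete, here is one possible sketch of learning a good/bad part classifier by transfer learning with Keras. The folder layout (parts/good and parts/bad), model choice, and hyperparameters are assumptions for illustration, not the article's method.

```python
# Minimal sketch: train a good/bad part classifier from labeled example images.
# Assumption: images are organized as parts/good/*.jpg and parts/bad/*.jpg.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "parts", image_size=(224, 224), batch_size=16)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False                                     # reuse pretrained features

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),     # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),        # good vs. bad
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)
model.save("part_inspector.keras")
```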

Auto EV India

While cloud-based computing can be used to train an AI model, edge processing is typically preferable for real-time decision-making. Processing robotic vision tasks locally reduces latency and removes the dependence on cloud infrastructure for critical tasks. Autonomous vehicles are a clear example of why this matters: a half-second machine vision delay can cause an accident, and no one wants their vehicle to stop working whenever the network is unavailable.

Cutting-edge robotic vision technologies: multi-camera, 3D, AI techniques

While one camera captures 2D visual information, two cameras working together enable depth perception. For example, the NXP i.MX 8 family of processors can take stereo input from two cameras at 1080p resolution. With the proper hardware, multiple cameras and camera systems can be integrated via video stitching and other techniques. Other sensor types, such as LIDAR, IMUs, and sound, can also be incorporated, giving a picture of a robot's surroundings in 3D space and beyond.
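
For a feel of how two cameras become depth, the sketch below computes a disparity map from a rectified stereo image pair with OpenCV's block matcher. The file names and matcher settings are assumptions, and a real system would first calibrate and rectify the two cameras.

```python
# Minimal sketch: disparity (inverse depth) from a rectified stereo pair with OpenCV.
# Assumptions: left.png / right.png are already rectified; parameters need tuning.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right)            # larger disparity = closer object

disparity_vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", disparity_vis)
print("Wrote disparity map; nearer surfaces appear brighter.")
```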

The same class of technology that allows a robot to interpret captured images also allows a computer to generate new images and 3D models. One application of combining these two sides of the robotics vision coin is the field of augmented reality. Here, the visual camera and other inputs are interpreted, and the results are displayed for human consumption.

How to get started with machine vision

We now have a wide range of options for getting started with machine vision. From a software standpoint, OpenCV is a great place to start: it is free and works with both rules-based machine vision and newer deep learning models. You can begin with just a computer and a webcam, but specialized equipment such as the Jetson Nano Developer Kit or the Google Coral line of products is well suited to vision and machine learning. The NVIDIA Jetson Orin NX 16GB offers 100 TOPS of AI performance in the familiar Jetson form factor.
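
As a first experiment with nothing more than a laptop webcam and OpenCV, the sketch below opens the camera and shows live Canny edges, a common "hello world" for rules-based vision. The camera index and thresholds are assumptions to tweak.

```python
# Minimal "hello world" for machine vision: live edge detection from a webcam.
# Assumption: a webcam is available at index 0; press 'q' in the window to quit.
import cv2

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)          # thresholds chosen arbitrarily; tune them
    cv2.imshow("edges", edges)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```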

Companies like NVIDIA have a range of software assets available, including training datasets. If you would like to implement an AI application but would rather not source the needed pictures of people, cars, or other objects, this can give you a massive head start. Look for datasets to improve in the future, with cutting-edge AI techniques like attention and vision transformers enhancing how we use them.

Robot vision algorithms

Robots see via the constant interpretation of a stream of images, processing that data with human-coded algorithms or an AI-generated ruleset. Of course, on a philosophical level, one might flip the question and ask, "How do robots see themselves?" Given our ability to peer inside the code, convoluted as an AI model may be, it could be a more straightforward question than how we see ourselves!

The post How do robots see? Robotic vision systems appeared first on ELE Times.

Best e-Rickshaw in the USA

ELE Times - Mon, 08/28/2023 - 14:35

In the quest for sustainable urban transportation solutions, electric rickshaws, or e-rickshaws, have emerged as a promising alternative. These innovative vehicles combine eco-friendliness, efficiency, and convenience, making them an ideal choice for short-range urban travel. In this article, we will explore some of the best e-rickshaw models available in the USA that are revolutionizing the way we think about urban commuting.

Auto EV India

  1. Mahindra Treo SFT

The 2023 Mahindra Treo represents a remarkable achievement in India’s pursuit of environmentally friendly urban transportation. This e-rickshaw boasts a groundbreaking electric powertrain, a design that emphasizes sustainability, and a range of intelligent features. By addressing urban mobility challenges and setting new standards for electric three-wheelers, the Mahindra Treo is at the forefront of the e-rickshaw revolution.

  2. Bajaj RE Rickshaw

The 2023 Bajaj RE Rickshaw Electric model embodies Bajaj Auto’s commitment to eco-consciousness and durability. As an electric version of the renowned Bajaj RE Rickshaw, this model offers an affordable and environmentally responsible solution for urban and short-distance travel. With an electric motor, a respectable range, and a comfortable cabin, it provides a sustainable alternative to traditional rickshaws.

  3. Piaggio Ape E-City

Hailing from the distinguished Italian automotive manufacturer Piaggio, the 2023 Piaggio Ape E-City Electric vehicle stands out as a flexible and environmentally-conscious three-wheeler. This electric trike showcases Piaggio’s dedication to sustainability and effective urban transportation. Boasting a cutting-edge electric power system, commendable driving range, comfortable interior, and a strong safety focus, the Electric 2023 Piaggio Ape E-City emerges as a practical and eco-conscious answer to urban mobility needs.

  4. Mahindra e-Alfa Mini

The 2023 Mahindra e-Alfa Mini Electric is a prime example of a sustainable and high-performing electric rickshaw crafted by Mahindra, a prominent name in the Indian automotive industry. With advanced electric power technology, a respectable range, a comfortable interior, and a strong safety commitment, the Electric 2023 Mahindra e-Alfa Mini offers a reliable and eco-friendly choice for city commuters and businesses alike.

  5. Kinetic Green Safar Smart E Auto

Championing innovation, the Kinetic Safar Smart Electric Auto forgoes the traditional internal combustion engine for a progressive electric powertrain. Powered by a high-capacity battery pack, it heralds an era of zero emissions, reduced noise, and enhanced energy efficiency. Positioned as a green alternative to conventional auto-rickshaws, the 2023 Electric Kinetic Safar Smart Auto is a financially viable, sustainable, and efficient option suitable for both urban and rural settings.

  6. Jezza Motors J1000 Electric Rickshaw

Crafted by the notable electric mobility entity Jezza Motors, the Electric 2023 Jezza Motors J1000 Electric Rickshaw stands as an inventive electric vehicle. Jezza Motors has established itself as a major player in the electric vehicle arena, focusing on advancing electric cars and providing sustainable transportation options for urban travellers. With a state-of-the-art electric power system, considerable range, performance capabilities, comfort features, safety protocols, and affordability, the Electric 2023 Jezza Motors J1000 offers a well-rounded and efficient choice tailored for urban commuting needs.

  7. Citylife Butterfly Super Deluxe XV850 E Rickshaw

Introducing the 2023 City Life Butterfly Super Deluxe, a pinnacle of electric rickshaw design dedicated to luxurious and convenient urban travel. Through refined aesthetics, cutting-edge features, and environmentally conscious operation, the Butterfly Super Deluxe aims to redefine the conventional perception of rickshaw commutes, providing a deluxe travel experience tailored for those seeking both comfort and style.

In a world where sustainable urban transportation is gaining paramount importance, these e-rickshaw models pave the way for a greener, more efficient, and more comfortable way of getting around in urban environments. As these electric vehicles continue to evolve, they offer a glimpse into a future where eco-friendly mobility is the norm.

The post Best e-Rickshaw in the USA appeared first on ELE Times.
