News from the world of micro- and nanoelectronics
Understanding currents in DC/DC buck converter input capacitors

All buck converters need capacitors on the input. Actually, in a perfect world, if the supply had zero output impedance and infinite current capacity and the tracks had zero resistance or inductance, you wouldn’t need input capacitors. But since this is vanishingly unlikely, it’s best to assume that your buck converter will need input capacitors.
Input capacitors store the charge that supplies the current pulse when the high-side switch turns on; they are recharged by the input supply when the high-side switch is off (Figure 1).
Figure 1 Simplified input capacitor current waveform during the buck DC/DC switching cycle, assuming infinite output inductance. Source: Texas Instruments
The switching action of the buck converter charges and discharges the input capacitor, causing the voltage across it to rise and fall. This voltage change represents the input voltage ripple of the converter at the switching frequency. The input capacitor filters the input current pulses to minimize the ripple on the input supply voltage.
The amount of capacitance governs the voltage ripple, while the capacitor itself must be rated to withstand the root-mean-square (RMS) ripple current. The RMS current calculation assumes a single input capacitor with no equivalent series resistance (ESR) or equivalent series inductance (ESL); the finite output inductance adds current ripple on the input side, as shown in Figure 2.
Figure 2 Input capacitor ripple current and calculated RMS current are displayed by TI’s Power Stage Designer software. Source: Texas Instruments
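For readers who want to check the numbers, below is a minimal numerical sketch of this single-capacitor RMS calculation. It is not TI’s tool; it uses a commonly quoted closed-form expression plus a brute-force evaluation of the idealized waveform, with the example operating point introduced later in this article (VIN = 9 V, VOUT = 3 V, IOUT = 12.4 A, fSW = 440 kHz, L = 1 µH). Both land near the 6 ARMS figure quoted for Figures 2 and 3.

```python
# Minimal sketch: RMS current in a single ideal input capacitor of a buck converter
# (no ESR/ESL), from a commonly used closed-form expression and from a numerical
# evaluation of the idealized switching-cycle waveform.
import numpy as np

# Example operating point (same values as the worked example later in the article)
VIN, VOUT, IOUT = 9.0, 3.0, 12.4      # input voltage (V), output voltage (V), load current (A)
FSW, L = 440e3, 1e-6                  # switching frequency (Hz), output inductance (H)

D = VOUT / VIN                                  # duty cycle
dIL = (VIN - VOUT) * D / (L * FSW)              # peak-to-peak inductor ripple current

# Closed-form RMS of the input capacitor current
i_rms_formula = np.sqrt(IOUT**2 * D * (1 - D) + D * dIL**2 / 12)

# Numerical check: one switching period of the idealized input current
t = np.linspace(0, 1 / FSW, 100_000, endpoint=False)
on = t < D / FSW                                          # high-side switch on
i_in = np.where(on, IOUT + dIL * (t / (D / FSW) - 0.5), 0.0)
i_cap = i_in - i_in.mean()                                # capacitor carries the AC part
i_rms_numeric = np.sqrt(np.mean(i_cap**2))

print(f"Input capacitor RMS current: {i_rms_formula:.2f} A (formula), "
      f"{i_rms_numeric:.2f} A (numeric)")                 # both ~5.9 A, i.e. roughly 6 A
```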
Current sharing between parallel input capacitors
Most practical implementations use multiple input capacitors in parallel to provide the required capacitance. These typically include a small-value, high-frequency multilayer ceramic capacitor (MLCC), for example 100 nF; one or more larger MLCCs (10 µF or 22 µF); and sometimes a polarized, large-value bulk capacitor (100 µF).
Each capacitor performs a related but distinct function. The high-frequency MLCC decouples the fast transient currents caused by MOSFET switching in the DC/DC converter. The larger MLCCs source the current pulses to the converter at the switching frequency and its harmonics. The bulk capacitor supplies the current needed to respond to output load transients when the impedance of the input source prevents it from responding as quickly.
Where used, a large bulk capacitor has a significant ESR, which provides some damping of the input filter’s Q factor. Depending on its equivalent impedance at the switching frequency relative to the ceramic capacitors, the capacitor may also have significant RMS current at the switching frequency.
The datasheet of a bulk capacitor specifies a maximum RMS current rating to prevent self-heating and ensure that its lifetime is not degraded. The MLCCs have a much smaller ESR and correspondingly much less self-heating because of the RMS current. Even so, circuit designers sometimes overlook the maximum RMS current specified in ceramic capacitor datasheets. Therefore, it is important to understand the RMS currents in each of the individual input capacitors.
If you are using multiple larger MLCCs, you can combine them and enter the equivalent capacitance into the current-sharing calculator, which computes the RMS currents in parallel input capacitors. The calculation considers the fundamental frequency only; nonetheless, it is a useful refinement of the single-input-capacitor RMS current calculation.
Consider an application where VIN = 9 V, VOUT = 3 V, IOUT = 12.4 A, fSW = 440 kHz and L = 1 µH. The three parallel input capacitors could then be 100 nF (MLCC), ESR = 30 mΩ, ESL = 0.5 nH; 10 µF (MLCC), ESR = 2 mΩ, ESL = 2 nH; and 100 µF (bulk), ESR = 25 mΩ, ESL = 5 nH. The ESL here includes the PCB track inductance.
Figure 3 shows the capacitor current-sharing calculator results for this example. The 100-nF capacitor draws a low RMS current of 40 mA as expected. The larger MLCC and bulk capacitors divide their RMS currents more evenly at 4.77 A and 5.42 A, respectively.
Figure 3 Output is shown from TI’s Power Stage Designer capacitor current-sharing calculator. Source: Texas Instruments
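As a sanity check on these results, the sketch below reconstructs a fundamental-frequency-only current-sharing estimate: the total input capacitor RMS current is divided among the branches in proportion to the magnitude of each branch’s admittance at the switching frequency. This is an illustrative approximation, not the Power Stage Designer algorithm, and the 6 A total is taken from the single-capacitor calculation above.

```python
# Sketch: split the total input capacitor RMS current among parallel R-L-C branches
# according to each branch's admittance magnitude at the switching frequency.
# Fundamental-only approximation; not the Power Stage Designer algorithm.
import numpy as np

FSW = 440e3
W = 2 * np.pi * FSW
I_TOTAL_RMS = 6.0          # total input capacitor RMS current from the earlier calculation

# (C in F, ESR in ohms, ESL in H) for the three example branches above
branches = {
    "C1 100nF": (100e-9, 30e-3, 0.5e-9),
    "C2 10uF":  (10e-6,   2e-3, 2e-9),
    "C3 100uF": (100e-6, 25e-3, 5e-9),
}

def branch_admittances(branches, w):
    """Complex admittance of each series R-L-C branch at angular frequency w."""
    return {name: 1 / complex(esr, w * esl - 1 / (w * c))
            for name, (c, esr, esl) in branches.items()}

Y = branch_admittances(branches, W)
Y_sum = sum(Y.values())
for name, y in Y.items():
    print(f"{name}: {I_TOTAL_RMS * abs(y) / abs(Y_sum):.2f} A RMS")
# Prints roughly 0.04 A, 4.8 A and 5.4 A, in line with the Figure 3 values.
```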
In reality, the actual capacitance of the 10-µF MLCC is somewhat lower because of the DC bias voltage applied. For example, a 10-µF, 25-V X7R MLCC in an 0805 package might only provide 30% of its rated capacitance when biased at 12 V, in which case the large bulk capacitor’s current rises to 6.38 A, which may exceed its RMS rating.
The solution is to use a larger capacitor package size and parallel multiple capacitors. For example, a 10-µF, 25-V X7R MLCC in a 1210 package retains 80% of its rated capacitance when biased at 12 V. Three of these capacitors have a total effective value of 24 µF when used for C2 in the capacitor current-sharing calculator.
Using these capacitors in parallel reduces the RMS current in the large bulk capacitor to 3.07 A, which is more manageable. Placing the three 10-µF MLCCs in parallel also reduces the overall ESR and ESL of the C2 branch by a factor of three.
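Reusing branch_admittances() and the values from the sketch above, this scenario can be approximated by replacing the C2 branch with three parallel, DC-bias-derated 1210 MLCCs. The effective values assumed here follow the text: 3 × 10 µF × 80% = 24 µF, with the branch ESR and ESL each divided by three.

```python
# Usage example: C2 branch replaced by three parallel, DC-bias-derated 1210 MLCCs
# (assumed effective values: 24 uF total, ESR and ESL each reduced by a factor of three).
branches_1210 = dict(branches)
del branches_1210["C2 10uF"]
branches_1210["C2 3x10uF"] = (24e-6, 2e-3 / 3, 2e-9 / 3)

Y_1210 = branch_admittances(branches_1210, W)
Y_1210_sum = sum(Y_1210.values())
for name, y in Y_1210.items():
    print(f"{name}: {I_TOTAL_RMS * abs(y) / abs(Y_1210_sum):.2f} A RMS")
# The bulk-capacitor branch drops to roughly 3 A, consistent with the value quoted above.
```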
The low capacitance of the 100-nF MLCC and its relatively high ESR mean that this capacitor plays little part in sourcing the current at the switching frequency and its lower-order harmonics. The function of this capacitor is to decouple nanosecond current transients seen at the switching instants of the DC/DC converter’s MOSFETs. Designers often refer to it as the high-frequency capacitor.
To be effective, the high-frequency capacitor must be placed as close as possible to the input voltage and ground terminals of the regulator, using the shortest (lowest-inductance) PCB routing possible. Otherwise, the parasitic inductance of the tracks will prevent this high-frequency capacitor from decoupling the high-frequency harmonics of the switching frequency.
It’s also important to use as small a package as possible to minimize the ESL of the capacitor. Check the capacitor’s ESR and impedance-versus-frequency curve: a high-frequency capacitor with a value below 100 nF can be beneficial for decoupling at a specific frequency, because a smaller capacitor has a higher self-resonant frequency.
Similarly, always place the larger MLCCs as close as possible to the converter to minimize their parasitic track inductance and maximize their effectiveness at the switching frequency and its harmonics.
Figure 3 also shows that, although the RMS current in the overall input capacitance (treated as a single equivalent capacitor) is 6 A, the sum of the RMS currents in the C1, C2 and C3 branches is greater than 6 A and does not follow Kirchhoff’s current law. The law applies only to instantaneous values, or to the complex (phasor) sum of the time-varying, phase-shifted branch currents.
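Continuing the fundamental-only sketch above, the same point can be shown numerically: the phasor sum of the branch currents recovers the 6 A total, while the scalar sum of their RMS magnitudes is larger.

```python
# The branch currents are phase-shifted phasors: their complex sum equals the total
# input capacitor current, but the sum of their RMS magnitudes exceeds it.
i_phasors = {name: I_TOTAL_RMS * y / Y_sum for name, y in Y.items()}

sum_of_magnitudes = sum(abs(i) for i in i_phasors.values())
magnitude_of_sum = abs(sum(i_phasors.values()))
print(f"sum of branch RMS currents:  {sum_of_magnitudes:.2f} A")   # > 6 A
print(f"magnitude of the phasor sum: {magnitude_of_sum:.2f} A")    # 6 A, consistent with KCL
```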
Using PSpice for TI or TINA-TI software
Designers who need more than three input capacitor branches for their applications can use PSpice for TI simulation software or TINA-TI software. These tools enable more complex RMS current calculations, including harmonics alongside the fundamental switching frequency and the use of a more sophisticated model for the capacitor, which captures the frequency-dependent nature of the ESR.
TINA-TI software can compute the RMS current in each capacitor branch in the following way: run the simulation, click the desired current waveform to select it, and from the Process menu option in the waveform window, select Averages. TINA-TI software uses a numerical integration over the start and end display times of the simulation to calculate the RMS current.
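Conceptually, that numerical RMS is just the square root of the mean squared current over the chosen time window. A minimal stand-alone illustration (not TINA-TI’s implementation) is shown below; the synthetic sine stands in for exported simulation data.

```python
# Sketch: estimate the RMS of a sampled current waveform over a chosen time window,
# similar in spirit to the "Averages" function described above.
import numpy as np

def rms_over_window(t, i, t_start, t_end):
    """RMS of sampled waveform i(t) between t_start and t_end (discrete mean-square)."""
    mask = (t >= t_start) & (t <= t_end)
    return np.sqrt(np.mean(i[mask] ** 2))

# Self-check with a synthetic waveform standing in for exported simulation data
fsw = 440e3
t = np.linspace(0, 20 / fsw, 20_001)                   # 20 switching periods
i = 6.0 * np.sqrt(2) * np.sin(2 * np.pi * fsw * t)     # a 6 A RMS sine
print(f"{rms_over_window(t, i, 2 / fsw, 13 / fsw):.2f} A RMS")   # ~6.00 A over 11 full periods
```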
Figure 4 shows the simulation view. For clarity in this example, we omitted the 100-nF capacitor because its current is very low and it contributes to ringing at the switching edges. The Power Stage Designer analysis of the converter’s total input capacitor current waveform calculates the input current (IIN) as 6 ARMS, the same value as in Figure 2.
Figure 4 Output from TINA-TI software shows the capacitor branch current waveforms and calculated RMS current in C2. Source: Texas Instruments
The capacitor current waveforms in each branch are quite different from the idealized trapezoidal waveform that ignores their ESR and ESL. This difference has implications for DC/DC converters such as the TI LM60440, which has two parallel voltage input (VIN) and ground (GND) pins.
The mirror-image pin configuration enables designers to connect two identical parallel input loops, meaning that they can place double input capacitance (both high frequency and bulk) in parallel close to the two pairs of power input (PVIN) and power ground (PGND) pins. The two parallel current loops also halve the effective parasitic inductance.
In addition, the two mirrored-input current loops have equal and opposite magnetic fields, allowing some H-field cancellation that further reduces the parasitic inductance (Figure 5). Figure 4 suggests that if you don’t carefully match the parallel loops in capacitor values, ESR, ESL and layout for equal parasitic impedances, then the current in the parallel capacitor paths can differ significantly.
Figure 5 Parallel input and output loops are shown in a symmetrical “butterfly” layout. Source: Texas Instruments
Software tool use considerations
To correctly specify input capacitors for buck DC/DC converters, you must know the RMS currents in the capacitors. You can estimate the currents from equations, or more simply by using software tools like TI’s Power Stage Designer. You can also use this tool to estimate the currents in up to three parallel input capacitor branches, as commonly used in practical converter designs.
More complex simulation packages such as TINA-TI software or PSpice for TI can compute the currents, including harmonics and fundamental frequencies. These tools can also model frequency-dependent parasitic impedance and many more parallel branches, illustrating the importance of matching the input capacitor combinations in mirrored input butterfly layouts.
Dr. Dan Tooth is Member of Group Technical Staff at Texas Instruments. He joined TI in 2007 and has been a field application engineer for over 17 years. He is responsible for supporting TI’s analog and power product portfolio in ADAS, EV and diverse industrial applications.
Dr. Jim Perkins is a Senior Member of Technical Staff at Texas Instruments. He joined TI in 2011 as part of the acquisition of National Semiconductor and has been a field application engineer for over 25 years. He is now mainly responsible for supporting TI’s analog and power product portfolio in grid infrastructure applications such as EV charging and smart metering.
Related Content
- Step-Down DC/DC Converter
- DC/DC Converter Considerations for Smart Lighting Designs
- Choosing The Right Switching Frequency For Buck Converter
- Use DC/DC buck converter features to optimize EMI in automotive designs
- Reducing Noise in DC/DC Converters with Integrated Ferrite-bead Compensation
Yes, you _can_ prototype a vacuum tube circuit on a breadboard.
Penn State gains $3m DARPA grant for GaN-on-silicon project with Northrop Grumman
Ayar Labs raises $155m in Series D funding round led by Advent Global Opportunities and Light Street Capital
NUBURU resolves non-compliance with NYSE rules
ROHM’s PMICs for SoCs have been Adopted in Reference Designs for Telechips’ Next-Generation Cockpits
ROHM has announced the adoption of its PMICs in power reference designs focused on the next-generation cockpit SoCs ‘Dolphin3’ (REF67003) and ‘Dolphin5’ (REF67005) by Telechips, a major fabless semiconductor manufacturer for automotive applications headquartered in Pangyo, South Korea. Intended for use inside the cockpits of European automakers, these designs are scheduled for mass production in 2025.
ROHM and Telechips have been engaged in technical exchanges since 2021, fostering a close collaborative relationship from the early stages of SoC chip design. As a first step in achieving this goal, ROHM’s power supply solutions have been integrated into Telechips’ power supply reference designs. These solutions support diverse model development by combining sub-PMICs and DrMOS with the main PMIC for SoCs.
For infotainment applications, the Dolphin3 application processor (AP) power reference design includes the BD96801Qxx-C main PMIC for SoCs. Similarly, the Dolphin5 AP power reference design developed for next-generation digital cockpits combines the BD96805Qxx-C and BD96811Fxx-C main PMICs for SoC with the BD96806Qxx-C sub-PMIC for SoC, improving overall system efficiency and reliability.
Modern cockpits are equipped with multiple displays, such as instrument clusters and infotainment systems, with each automotive application becoming increasingly multifunctional. As the processing power required for automotive SoCs increases, power ICs like PMICs must be able to support high currents while maintaining high efficiency. At the same time, manufacturers require flexible solutions that can accommodate different vehicle types and model variations with minimal circuit modifications. ROHM SoC PMICs address these challenges with high efficiency operation and internal memory (One Time Programmable ROM) that allows for custom output voltage settings and sequence control, enabling compatibility with large currents when paired with a sub-PMIC or DrMOS.
Moonsoo Kim,
Senior Vice President and Head of System Semiconductor R&D Center, Telechips Inc.
“Telechips offers reference designs and core technologies centered around automotive SoCs for next-generation ADAS and cockpit applications. We are pleased to have developed a power reference design that supports the advanced features and larger displays found in next-generation cockpits by utilizing power solutions from ROHM, a global semiconductor manufacturer. Leveraging ROHM’s power supply solutions allows these reference designs to achieve advanced functionality while maintaining low power consumption. ROHM power solutions are highly scalable, so we look forward to future model expansions and continued collaboration.”
Sumihiro Takashima,
Corporate Officer and Director of the LSI Business Unit, ROHM Co., Ltd.
“We are pleased that our power reference designs have been adopted by Telechips, a company with a strong track record in automotive SoCs. As ADAS continues to evolve and cockpits become more multifunctional, power supply ICs must handle larger currents while minimizing current consumption. ROHM SoC PMICs meet the high current demands of next-generation cockpits by adding a DrMOS or sub-PMIC in the stage after the main PMIC. This setup achieves high efficiency operation that contributes to lower power consumption. Going forward, ROHM will continue our partnership with Telechips to deepen our understanding of next-generation cockpits and ADAS, driving further evolution in the automotive sector through rapid product development.”
Chinese Xiaomi 50W Wireless "Car Charger" Teardown - MICROPHONE AND BLE FOUND, other goodies. READ COMMENT
Infineon plans to implement ISO/SAE 21434 product compliance for TRAVEO T2G automotive microcontrollers
The increasing connectivity of road vehicles leads to a growing need for cybersecurity. The United Nations Economic Commission for Europe (UNECE) has therefore adopted the R155 and R156 regulations, which define the cybersecurity requirements for OEMs. OEMs who want to sell new vehicles in UNECE-regulated markets must hold a valid type approval certificate and implement cybersecurity practices throughout the supply chain to minimize the risk of attack throughout the vehicle’s lifecycle. The TRAVEO T2G Automotive Microcontroller family for Body and Cluster from Infineon Technologies AG features a Hardware Security Module (HSM) that is capable of executing secured boot and ensuring secured isolation of HSM applications and data. To further enhance this, Infineon plans to retrospectively implement product compliance for the TRAVEO T2G automotive microcontroller family with the latest automotive cybersecurity standard ISO/SAE 21434. All necessary documentation, including the Cybersecurity Manual and Cybersecurity Case Report, will be provided to customers.
“With ISO/SAE 21434 compliant TRAVEO T2G automotive microcontrollers, OEMs’ effort to comply with UNECE R155 and R156 regulations will be significantly reduced. This enables faster time to the regulated markets”, said Ralf Koedel, Vice President Automotive Microcontroller at Infineon. “For existing customers, compliance becomes simpler, faster and more cost-effective while allowing the reuse of existing software and hardware. New customers can also benefit from the ISO/SAE 21434 compliance.”
The TRAVEO T2G microcontrollers are based on Arm Cortex-M4 (single-core) and Cortex-M7 (single-, dual-, or quad-core) cores and deliver high performance, enhanced human-machine interfaces, high security, and advanced networking protocols tailored for a wide range of automotive applications. They offer state-of-the-art real-time performance, safety, and security features. This is reflected, among other things, in the introduction of the HSM (Hardware Security Module), a dedicated Cortex-M0+ core for secured processing, and embedded flash in dual-bank mode for FOTA requirements.
With the planned new product compliance, developers can continue to use the TRAVEO T2G MCUs to develop their ISO/SAE 21434 compliant ECUs. As a result, they will benefit from lower product development costs and faster time-to-market for both existing and new platforms.
I saw this at Walmart, and I just wanted to steal it 😅 (for recycling purposes xd).
Well, it is something simple. I even took it apart from the support to see what was going on under it, but I wasn’t able to see much because it was glued, and the rainbow cable was blocking the main IC. I also noticed a little drop of resin, which makes me think it was either the main controller (not really sure, as the visible chips beside it were plenty big) or just the e-paper driver. Probably the second. Anyway, even being a simple thing, it looked awesome. I thought these e-paper screens were slower to refresh, but this one seemed to work faster than I expected. So that’s my story of the day, and why I now want to buy an e-paper screen to test with my Raspberry Pi Zero 2 W. I can’t imagine how expensive that screen is, and it may just end up in the trash once they decide it’s no longer needed, or when the batteries die 😓 I hope it ends up in good hands in the future (mine if possible; maybe I’ll leave a note behind, like a post-it. I will do it xd).
The Google Chromecast Gen 2 (2015): A form factor redesign with beefier Wi-Fi, too

In mid-2023, Google subtly signaled that its first-generation Chromecast A/V streaming receiver, originally introduced in 2013, had reached the end of the support road. I’d already torn one down, but I had several others still in use, which I promptly replaced with 3rd-generation (2018) successors off eBay. And while I was at it, I picked up an additional “rough”-condition one, plus intermediary 2nd-generation (2015) and Ultra (2016) well-used devices, for teardown purposes.
One year (and a couple of months) later, and a couple of months ago as I write these words in late October 2024, Google end-of-life’d the entire Chromecast product line, also encompassing the 4K (introduced in 2020) and HD (2022) variants of the Chromecast with Google TV, which I’d already torn down too, replacing everything with its newly unveiled TV Streamer:
So, I guess you can say I’m now backfilling from a disassembly-and-analysis standpoint. Today you’ll see the insides of the 2nd generation (2015) Chromecast:
with the Ultra (2016), notably kitted with the Stadia online-streamed gaming controller:
and 3rd generation (2018) to follow in the coming months.
Truth be told, I’ve also got a couple of Chromecast Audio streamers on hand, but as they’re so rare and prized by audiophiles (and wannabes like me), I’m loath to (destructively, at least) take one apart. Time will tell if I change my mind and/or get more disassembly-skilled in the future…
Anyhoo, let’s get to tearing down, beginning with the device I eBay-purchased last summer, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
As you probably already noticed from the “stock” shots I’ve previously shared, the 2nd generation Chromecast marked a fairly radical physical design departure from its forebear. I’ll begin with something that might seem to be a “nit” at first glance but was actually a big deal to many users. That USB-A to micro-USB cable you see on the left was only 1’ long with the first-gen Chromecast; now it’s 5’ long. Much more convenient, especially if you’re getting power from an outlet-installed “wall wart” versus a TV back panel USB connector:
The device itself has evolved more visibly. The first-gen Chromecast looked a bit like a USB flash “stick”, cigar-shaped with a stubby HDMI connector jutting out of one end. Google bundled a short female-to-male extender cable with it, which frequently got quickly lost. Now, the extender cable is integrated, and the device itself is circular in shape. This transition has multiple benefits, two obvious and another conjecture on my part. The extension cable simplifies hookup to a TV’s crowded-connector backside (and, as I’ve already mentioned, won’t be inadvertently discarded). Also, as you’ll soon see, the 2nd generation round Chromecast includes multiple Wi-Fi antennae, arranged around the partial circumference of the also-circular PCB. And here’s the conjecture part: the 1st generation Chromecast was plagued by overheating issues, which I’m guessing the redesign assists in mitigating.
I’m calling this the “front”, although, as I’ve mentioned before with other devices of this type, I use this term, along with “back” and “sides”, loosely, because orientation depends on how the HDMI plug and cable are routed and is therefore inconsistent from one TV and broader setup to another. Mine’s black (duh); it also came in “Coral” (red) and “Lime” (also referred to in some places as “Lemonade”, yellow) shades:
At the bottom is the micro-USB power input jack, along with a reset switch to its left and a multi-color status LED to its right:
When not in use, the HDMI connector magnetically attaches to the back of the circular main body…for unclear-to-me reasons (ease of portability?). I apparently wasn’t alone, because Google dropped this particular “feature” for the third-generation successor:
Here the HDMI cable is extended; the magnet is that shiny rectangle with rounded corners (which I just learned today is called a stadium, presumably referencing the shape of an athletic entertainment facility) toward the top:
Here’s what the HDMI cable end looks like:
And once more back to the back (see what I did there?) of the device for a closeup of the various markings, including the FCC ID, A4RNC2-6A5 (which has an interesting historical twist I’ll revisit shortly):
Time to dive inside. From my advance research, I already knew that the glue holding the two halves of the body together was particularly stubborn stuff. This gave me an opportunity to try out a new piece of gear I’d recently acquired, iFixit’s iOpener kit, consisting of a long, narrow insulated heat-retaining bag which you put in the microwave oven for 30 seconds before using:
plus other handy disassembly accessories (the iOpener is also optionally sold standalone):
Strictly speaking, the iOpener is intended for removing the screen from a tablet or the like:
but I managed to get it to work (with a “bit” of overlap) with the Chromecast, too:
After that, assisted by a couple of the Opening Picks also included in the kit:
I was inside, with minimal cosmetic damage to the case (although I still harbored no delusions that my remaining disassembly steps would be non-destructive).
Here’s the inside of the top half of the case:
And here’s our first glimpse of the PCB topside, complete with a sizeable Faraday Cage:
Did you notice those three screws holding the PCB in place? You know what comes next:
Ladies and gentlemen, we have liftoff:
This is still the PCB topside, but alongside it (to the left) is the inside of the top of the case, revealed for the first time, complete with an LED light pipe assembly, a dollop of thermal paste, and a round gray heatsink that does double duty as the attractant for the HDMI cable connector magnet. Also note the reset switch at the lower left edge:
Flipping the insides upside down reveals the PCB underside for the first time; this time, the LED is clearly visible. And there’s another Faraday cage, to which the dollop of thermal paste connects:
Let’s return to the PCB topside, specifically to its Faraday cage, for removal first:
In past teardowns, to get it off, I’ve relied either on fairly flimsy-tip devices like the iSesamo:
Or just brute-forced it with a flat-head screwdriver, which inevitably resulted in both a mangled cage and a mangled PCB. This time, however, I pressed into service another new tool in my arsenal, iFixit’s Jimmy, which, in the words of Goldilocks, was “just right”:
As you may have already inferred, two of the three earlier screws did double-duty, not only holding the PCB in place within the lower half of the case but also keeping the PCB-connector end of the HDMI cable intact. After removing them and then the Cage, the HDMI cable was free:
I’m sure that in the earlier shots you already noticed a second dollop of thermal paste between the large IC in the lower left quadrant and the Faraday Cage:
A bit of rubbing alcohol cleaned it off sufficiently for me to ID it and the other components on the board:
The previously paste-encrusted IC in the lower left quadrant is Marvell’s Armada 88DE3006 1500 Mini Plus dual-core ARM Cortex-A7 media processor, an uptick from the Marvell Armada DE3005-A1 1500-mini SoC in the first-generation Chromecast. To its right, barely visible under the Cage-mounting frame, is a Toshiba TC58NVG1S3HBAI6 2 Gbit NAND flash memory; curiously, its predecessor in the first-gen Chromecast, a Micron MT29F16G08, was 16 Gbit (8x larger) in capacity. In the lower right corner is a chip marked:
MRVL
21AA3
521GDT
which iFixit believes implements the system’s power management control capabilities. And in the lower left corner is another frame-obscured Marvell IC, marked as follows (you’ll have to trust me on this one):
MRVL
G868
524GBD
whose identity is unclear to me (and iFixit didn’t even bother taking a stab at), although it apparently was also in the first-gen Chromecast. Readers?
Flipping the board back over to its underside, and going through the same Faraday cage removal (this time also with preparatory thermal paste cleanup) process as before:
Reveals our third dollop of thermal paste, inside the second (underside) cage in the design:
Time for more rubbing alcohol-plus-tissues:
The dominant ICs this time are a Samsung K4B4G1646D-BY 4 Gbit DDR3L SDRAM to the right (the same capacity this time around as in the first-gen Chromecast) and Marvell’s Avastar 88W8887 wireless controller (Wi-Fi, Bluetooth, NFC and FM, not all of them used). At this point, I’ll refer back to the “interesting historical twist” teaser from before. For one thing, the Avastar 88W8887’s precursor in the first-gen Chromecast was an AzureWave AW-NH387, a 2.4 GHz-only Wi-Fi (plus Bluetooth and FM receiver, the latter again unused) controller. This time, however, you get dual-band 1×1 802.11ac, reflected in the multiple PCB-embedded antennae arrayed around the edges of the PCB.
And what about Bluetooth? Here’s where things get really interesting. At its initial 2015 introduction, Bluetooth capabilities were innate in the silicon but not enabled in software. A couple of years later, however, Google went back to the FCC for recertification, not because any of the hardware had changed but just because a new firmware “push” had turned on Bluetooth support. Why? I don’t know for sure, but I have a theory.
Initially, Google relied on a wonky app called Device Utility that forced you to jump through a bunch of hoops in a poorly documented specific sequence and with precise step-by-step timing in order for initial activation to complete successfully:
Subsequent setup steps were done through the TV to which the Chromecast was connected over HDMI. Google subsequently switched to doing these latter setup steps over its Google Home app, initially launched in 2016 and substantially revamped in 2023, instead, which presumably leverages Bluetooth (therefore the subsystem software-enable and FCC recertification). But for legacy devices, initial activation still needed to occur over Device Utility.
And with that, closing in on 1,800 words, I’ll wrap up for today. Your thoughts are as-always welcomed in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Google’s Chromecast with Google TV: Car accessory similarity, and a post-teardown resurrection opportunity?
- Google’s Chromecast with Google TV: Dissecting the HD edition
- Teardown: Chromecast streams must have gotten crossed
- Google’s Chromecast: Is “proprietary” necessary for wireless multimedia streaming success?
- Google’s Chromecast: impressively (and increasingly) up for the video streaming task
Driving Forces: Unveiling the Top Automotive Powerhouses Around the Globe
The United States, China, and India are known for their significant contributions to the global automotive industry. Let’s dive into the details of each market and analyze the factors that make them unique and influential on a global scale.
Currently, the USA leads with a market size of Rs 78 lakh crore, followed by China at Rs 47 lakh crore. India, now at Rs 22 lakh crore, has significant potential.
The US Automotive Market: The United States has one of the largest automotive markets in the world. With a population of over 330 million people, there is a high demand for vehicles in the country. In 2020, the US automotive market was valued at approximately $1.5 trillion. This value includes the sales of new cars, as well as aftermarket services and products.
The market is anticipated to witness increased demand for commercial vehicles due to the thriving logistics and passenger transportation industry. Government policies and initiatives are also market drivers that have a significant impact on growth and are anticipated to continue doing so in the years to come.
One of the key drivers of the US automotive market is consumer demand. Americans have a strong preference for larger vehicles such as trucks and SUVs, which contribute significantly to the overall market size. Additionally, the US is home to several major automakers such as General Motors, Ford, and Tesla, which further stimulate market growth.
The China Automotive Market: China is the largest automotive market in the world in terms of vehicle sales. With a population of over 1.4 billion people, the demand for cars in China is immense. In 2020, the Chinese automotive market was valued at approximately $1.5 trillion, on par with the US market.
One of the key factors driving the growth of the Chinese automotive market is the rising middle class. As incomes in China continue to increase, more people are able to afford cars, leading to a surge in vehicle sales. Additionally, the Chinese government has implemented policies to promote the production and adoption of electric vehicles, further boosting market growth.
The Chinese automotive industry is uniquely positioned to become a centre for the best technologies. By category, the principal segments of the Chinese automotive vehicle industry include electric vehicles (EV), hybrid electric vehicles (HEV), plug-in hybrid electric vehicles (PHEV), mild hybrid electric vehicles (MHEV), natural gas vehicles (NGV), fuel cell electric vehicles (FCEV), diesel vehicles, and petrol vehicles. In recent years, electric vehicles and mild hybrid electric vehicles have been very successful in China, due in particular to the Chinese government’s support and to the savings offered by buying an electric vehicle, which avoids the considerable cost of purchasing a license plate.
The India Automotive Market: India is another key player in the global automotive industry. With a population of over 1.3 billion people, India has a large consumer base for vehicles. In 2020, the Indian automotive market was valued at approximately $100 billion, significantly smaller than the US and China markets.
One of the main drivers of the Indian automotive market is the increasing urbanization of the country. As more people move to cities, the demand for vehicles, especially two-wheelers and compact cars, is on the rise. Additionally, government initiatives such as the “Make in India” campaign have encouraged domestic production and manufacturing in the automotive sector.
In conclusion, the US, China, and India are three key players in the global automotive market, each with its unique characteristics and drivers of growth. While the US and China have larger market sizes, India is a rapidly developing market with great potential for future expansion. By understanding the size and dynamics of these markets, stakeholders can make informed decisions and capitalize on opportunities for growth and innovation.
- The Renault-Nissan alliance is stepping up its investments in India and plans to invest US$ 600-700 million at its Chennai-based facility to step up platform localisation and improve sophistication levels in manufacturing.
- Mercedes Benz will make an investment of Rs 3,000 crore (US$ 360.14 million) in Maharashtra.
- In March 2024, Tata Motors Group signed a facilitation Memorandum of Understanding (MoU) with the Government of Tamil Nadu to explore setting up a vehicle manufacturing facility in the state. The MoU envisages an investment of US$ 1,081.6 million (Rs. 9,000 crores) over 5 years.
- Tata Motors, in April 2024, announced the inauguration of a new commercial vehicle spare parts warehouse in Guwahati.
- In April 2024, Maruti Suzuki India Limited, commissioned another vehicle assembly line at its Manesar facility.
- In February 2024, Hyundai Motors announced it will invest over US$ 3.85 billion (Rs 32,000 crore) from 2023 to 2033 in expanding its EV range and enhancing its current car and SUV platforms.
- In January 2024, Mercedes-Benz is set to invest US$ 24.04 million (Rs 200 crore) in India in 2024 and is gearing up to introduce more than a dozen new cars, including EVs this year.
- In February 2024, Klaus Zellmer, CEO of Skoda Auto, said India is the most promising growth market for Skoda Auto and that Skoda Auto India is looking to increase its share of the Indian market to 5% by 2030.
- In April 2024, Hero Motocorp said it has opened an assembly facility in Nepal in partnership with its distributor CG Motors, with a capacity of 75,000 units per annum.
- Ola Electric is set to be the first auto company in India to launch an IPO in over two decades, with an expected size of US$ 1.01 billion (Rs. 8,500 crore).
- In January 2024, BMW sold 1,340 luxury cars, the highest in the segment, which gave it a market share of 0.34%. Mercedes-Benz sold 1,333 cars in January 2024.
- In January 2024, Hyundai Motor India Limited announced US$ 743.8 million (Rs. 6,180 crore) investment plans in the state of Tamil Nadu including US$ 21.7 million (Rs. 180 crore) towards a dedicated ‘Hydrogen Valley Innovation Hub,’ in association with IIT- Madras.
- In January 2024, Hyundai Motor India Ltd. finalized the acquisition and transfer of specified assets at General Motors India’s Talegaon Plant in Maharashtra and inked an MoU with the Government of Maharashtra committing to an investment of US$ 722 million (Rs. 6,000 crore) in the state.
- In January 2024, Mahindra & Mahindra Ltd. and the India-Japan Fund (“IJF”), managed by the National Investment and Infrastructure Fund Limited (“NIIF”), entered into a binding agreement, with IJF committing to invest US$ 48.1 million (Rs. 400 crore) in Mahindra Last Mile Mobility Limited (MLMML).
- In January 2024, at the Vibrant Gujarat Global Summit, Maruti Suzuki announced the investment plans in Gujarat with a New Greenfield plant and a fourth line in SMG.
Indian Automobile Industry to Be Largest in Next 5 Years
The automotive industry in India is poised for significant growth in the coming years, with experts predicting that it will become the largest in the world within the next five years. This growth is driven by factors such as increasing disposable incomes, rising demand for cars and commercial vehicles, and government initiatives to promote manufacturing in the country.
Why is the Indian Automobile Industry on the Path to Becoming the Largest?
- Growing Economy: India is one of the fastest-growing economies in the world, with a rising middle class that has more purchasing power than ever before. This has led to an increase in demand for vehicles, both for personal and commercial use.
- Government Initiatives: The Indian government has introduced several initiatives to promote the growth of the automotive industry, such as the “Make in India” campaign, which aims to boost manufacturing in the country. In addition, policies such as the Faster Adoption and Manufacturing of Hybrid and Electric Vehicles (FAME) scheme have incentivized the production of electric vehicles.
- Investment from Global Players: Several international automotive companies have set up manufacturing plants in India to cater to the growing demand in the country. This influx of investment has not only created job opportunities but has also boosted the overall growth of the industry.
Challenges Facing the Indian Automobile Industry
- Infrastructure Development: The lack of adequate infrastructure, such as highways and roads, poses a significant challenge to the growth of the automotive industry in India. Poor road conditions can lead to increased wear and tear on vehicles, as well as higher maintenance costs.
- Environmental Concerns: With the increasing focus on sustainability and environmental conservation, the automotive industry in India is under pressure to reduce its carbon footprint. This has led to the development of electric vehicles and other alternative fuel technologies, but more needs to be done to address this issue.
- Competition from Foreign Markets: While the Indian automotive industry is experiencing significant growth, it faces tough competition from established markets such as China and the United States. Indian manufacturers need to innovate and adapt to changing market trends to stay ahead in the global market.
The Indian automobile industry is well-positioned to become the largest in the world within the next five years. With the right government support, investment from global players, and a focus on innovation and sustainability, the industry is set to witness exponential growth. However, challenges such as infrastructure development and environmental concerns need to be addressed to ensure sustainable growth in the long run.
Profile of an MCU promising AI at the tiny edge

The common misconception about artificial intelligence (AI) often relates this up-and-coming technology to data center and high-performance compute (HPC) applications. This is no longer true, says Tom Hackenberg, principal analyst for the Memory and Computing Group at Yole Group. He said this while commenting on STMicroelectronics’ new microcontroller that embeds a neural processing unit (NPU) to support AI workloads at the tiny edge.
ST has launched its most powerful MCU to date to cater to a new range of embedded AI applications. “The explosion of AI-enabled devices is accelerating the inference shift from the cloud to the tiny edge,” said Remi El-Ouazzane, president of Microcontrollers, Digital ICs and RF Products Group (MDRF) at STMicroelectronics.
He added that inferring at the edge brings substantial benefits, including ultra-low latency for real-time applications, reduced data transmission, and enhanced privacy and security. Not sharing data with the cloud also leads to sustainable energy use.
STM32N6, available to selected customers since October 2023, is now available in high volumes. It integrates a proprietary NPU, the Neural-ART Accelerator, which can deliver 600 times more machine-learning performance than a high-end STM32 MCU today. That will enable the new MCU to leverage computer vision, audio processing, sound analysis and other algorithms that are currently beyond the capabilities of small embedded systems.
Figure 1 STM32N6 offers the benefits of an MPU-like experience in industrial and consumer applications while leveraging the advantages of an MCU. Source: STMicroelectronics
“Today’s IoT edge applications are hungry for the kind of analytics that AI can provide,” said Yole’s Hackenberg. “The STM32N6 is a great example of the new trend melding energy-efficient microcontroller workloads with the power of AI analytics to provide computer vision and mass sensor-driven performance capable of great savings in the total cost of ownership in modern equipment.”
Besides the AI accelerator, STM32N6 features an 800-MHz Arm Cortex-M55 core and 4.2 MB of RAM for real-time data processing and multitasking, which ensure sufficient compute for complementing AI acceleration. As a result, the MCU can run AI models to carry out tasks like segmentation, classification, and recognition. Moreover, an image signal processor (ISP) incorporated into the MCU provides direct signal processing, which allows engineers to use simple and affordable image sensors in their designs.
Design testimonials
Lenovo Research, which rigorously evaluated STM32N6 in its labs, acknowledges its neural processing performance and power efficiency claims. “It accelerates our research of ‘AI for All’ technologies at the edge,” said Seiichi Kawano, principal researcher at Lenovo Research. LG, currently incorporating AI features into smartphones, home appliances and televisions, has also recognized STM32N6’s AI performance for embedded systems.
Figure 2 Meta Bounds has employed the AI-enabled STM32N6 in its AR glasses. Source: STMicroelectronics
Then there is Meta Bounds, a Zhuhai, China-based developer of consumer-level augmented reality (AR) glasses. Its founding partner, Zhou Xing, acknowledges the vital role that STM32N6’s embedded AI accelerator, enhanced camera interfaces, and dedicated ISP played in the development of the company’s ultra-lightweight and compact form factor AI glasses.
Besides these design testimonials, what’s important to note is the transition from MPUs to MCUs for embedded inference. That eliminates the cost of cloud computing and related energy penalties, making AI a reality at the tiny edge.
Figure 3 The shift from MPU to MCU for AI applications saves cost and energy and it lowers the barrier to entry for developers to take advantage of AI-accelerated performance for real-time operating systems (RTOSes). Source: STMicroelectronics
Take the case of Autotrak, a trucking company in South Africa. According to its engineering director, Gavin Leask, fast and efficient AI inference within the vehicle can give the driver a real-time audible warning to prevent an upcoming incident.
In use cases like this, AI-enabled MCUs can run computer vision, audio processing, sound analysis and more at a much lower cost and power usage than MPUs.
Related Content
- Getting a Grasp on AI at the Edge
- Implementing AI at the edge: How it works
- It’s All About Edge AI, But Where’s the Revenue?
- Edge AI accelerators are just sand without future-ready software
- Edge AI: The Future of Artificial Intelligence in embedded systems
Harnessing Computer-on-Modules for Streamlined IT/OT Convergence and Enhanced Cybersecurity
IT/OT convergence brings physical (OT) equipment and devices into the digital (IT) world. This digital transformation is driven by technologies like the Industrial Internet of Things (IIoT) and big data analytics, which are crucial for enhancing production efficiency and boosting productivity. Historically, both systems have operated independently with distinct priorities, protocols, and security needs. However, with the dynamic digitalization landscape, challenges are ever evolving. Complexities arise as the demands for security, flexibility, scalability, lifecycle management, and efficiency become more evident. aReady.COM, congatec’s application-ready offering around computer-on-modules (COMs), provides the perfect building blocks for out-of-the-box IT/OT convergence, reducing complexity by seamlessly integrating hardware and software for enhanced performance and flexibility.
The advent of Industry 4.0 and the IIoT have positioned IT/OT convergence as a pivotal element in the core of business operations, becoming indispensable for organizational success. This convergence demands the exchange of data from machines and systems with minimal latency to ensure the integrity of a real-time digital twin. Additionally, it is imperative to have a feedback mechanism integrated within the same hardware platform for usage-based business models that rely on immediate access to operational data, such as finance, for accurate billing, and maintenance to enhance productivity and maximize uptime.
However, as the integration of IT and OT systems deepens, the exposure to cyber threats escalates. Cyber attacks, once primarily aimed at IT, now extend their reach to OT. In response to this heightened risk, the European Union introduced the Cyber Resilience Act alongside the IEC 62443 standards. These measures mandate that starting in 2027, original equipment manufacturers (OEMs) must ensure their connected systems, devices, and machines comply with these regulations before entering the EU market. The objective is to reduce the vulnerability to cyber-attacks and safeguard against potential risks by secure software updates.
Security through separation
To ensure such secure updates via a separated IIoT gateway, for example, OEMs don’t need to add individual hardware. Using system consolidation techniques alongside a hypervisor, such an instance can be implemented on the same multi-core module, fully separated and secure. All that’s needed is a separate instance that doesn’t run under the same operating system as the HMI or the control system but instead operates in an isolated environment. This environment, acting as a security island, separates data and applications from one another. This approach helps reduce hardware costs while increasing the system’s flexibility and reliability.
However, implementing the necessary software for this consolidated system can be more complex than configuring the hardware itself. Crafting a hypervisor tailored for system consolidation is an arduous task if done in-house, given the tightly coupled association between this type of hardware-related software and the embedded platform. In such instances, the specialized knowledge of an embedded systems partner is invaluable.
IT/OT convergence needs dedicated software
Furthermore, many organizations do not possess the necessary in-house capabilities to develop the functional IT/OT convergence software, as the generic software solutions available on the market may not meet specific functional needs. Moreover, the availability and precision of the embedded system’s operation hinges on the hardware data, which must be accessible and standardized by the IIoT software in terms of format, transmission protocol, and measurement units. For instance, a discrepancy in temperature data units, receiving Kelvin or Fahrenheit when Celsius is expected, could lead to operational disarray. This can be circumvented by leveraging the expertise of embedded manufacturers, who can provide the required building blocks for monitoring software, given their intimate knowledge of their hardware.
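As a trivial illustration of the unit-standardization point above, the sketch below normalizes incoming temperature readings to Celsius before they reach the monitoring application. The field names, units, and data layout are purely hypothetical; they are not congatec’s API.

```python
# Hypothetical sketch: normalize temperature telemetry to degrees Celsius before use.
def to_celsius(value: float, unit: str) -> float:
    """Convert a temperature reading to Celsius from Celsius, Fahrenheit, or Kelvin."""
    unit = unit.strip().upper()
    if unit in ("C", "CELSIUS"):
        return value
    if unit in ("F", "FAHRENHEIT"):
        return (value - 32.0) * 5.0 / 9.0
    if unit in ("K", "KELVIN"):
        return value - 273.15
    raise ValueError(f"unknown temperature unit: {unit}")

# Example readings from sensors reporting in different units (illustrative only)
readings = [{"sensor": "cpu", "value": 313.15, "unit": "K"},
            {"sensor": "board", "value": 104.0, "unit": "F"}]
normalized = [{**r, "value": round(to_celsius(r["value"], r["unit"]), 2), "unit": "C"}
              for r in readings]
print(normalized)   # both readings end up as 40.0 degrees Celsius
```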
The software in question should enable a range of functionalities, including remote monitoring of essential hardware details such as module identification, health, specifications, and sensor data, as well as the integration with standard communication interfaces like I2C, GPIOs, and Ethernet. It should also facilitate comprehensive monitoring and secure access to embedded systems, encompassing security protocols, sensor and actuator integration, control logic, lifecycle management, and historical data. Additionally, it should provide connectivity to prevalent cloud services like Azure and AWS, with options for establishing or integrating private on-premises clouds to protect critical business data. At its most advanced, the software should grant secure, real-time control over machines through edge devices, complete with remote management capabilities.
With a resilient, reliable, and secure IIoT connection, businesses gain real-time visibility of all data types from devices and connected sensors. Further advantages include reliable data processing, secure and encrypted connection with authorized access, real-time machine operation capabilities, and optimized maintenance costs with minimal on-site service for routine work and updates. With or without AI enhancement, predictive maintenance provides further opportunities to reduce machine downtime compared to fixed maintenance intervals.
Application ready software building blocks
With aReady.VT for system consolidation and aReady.IOT for IIoT connection, congatec has set out to address these needs. The aReady.VT virtualization technology enables designers to consolidate functions that previously required multiple dedicated systems on one single hardware platform. For example, the IIoT connector for IT/OT convergence can be highly efficiently integrated on the same COM that is hosting the application by using a dedicated virtual machine.
By reducing the number of systems, embedded computing applications can achieve significant size, weight, power consumption, and cost (SWaP-C) savings. aReady.VT supports the full range of congatec’s x86 COMs, from low-power to high-performance Server-on-Modules (SOMs). Notably, congatec is currently the only manufacturer to implement such Hypervisor-on-Module functionality application-ready across all their current x86 modules. This system consolidation shortens time-to-market and optimizes overall system functionality.
aReady.IOT offers a range of application-ready software building blocks that can be chosen to implement the exact functionality needed for successful digitalization (Figure 1). The IoT software and hardware building blocks enable seamless communication and data transfer between diverse systems and devices. This empowers companies to optimize production processes, increase efficiency, and reduce costs. Crucially, aReady.IOT incorporates intrinsic security features to safeguard sensitive data against cyber threats and maintain operational integrity.
The capabilities of the aReady.IOT solution encompass a comprehensive suite of functions. Users can remotely access a wealth of device information, including serial numbers, software versions, voltages, and temperatures. It also allows for the retrieval of status values from an array of connected peripheral devices and sensors, capturing metrics such as acceleration, pressure, and vibrations. Beyond monitoring, the solution provides for the remote control of devices, enabling users to manage operations from afar.
In terms of data presentation, the system facilitates the visualization of information through dashboards or digital twins, offering an intuitive and interactive representation of the devices’ statuses. Additionally, the solution supports process automation, streamlining operations and enhancing efficiency.
The technology that underpins aReady.IOT is built upon the solid foundation established by Arendar, a company that congatec acquired in 2023. A key advantage of aReady.IOT is that designers don’t need to program their IIoT connection from scratch. Instead, they can simply parameterize it through a web interface. This approach offers maximum flexibility and the convenience of ready-made apps, providing instant access to cost-saving opportunities.
Robotic arm implementation
Consider a robot arm with a stereoscopic camera for object recognition and positioning. This system consolidates various tasks but doesn’t run them under one operating system. Instead, it creates dedicated virtual systems for real-time control, HMI, AI-powered object recognition and an IIoT connection for secure IT/OT convergence.
This setup enables predictive maintenance and new business models like Robot-as-a-Service (RaaS). System consolidation allows these diverse tasks to co-exist on a single COM yet be kept separate by a hypervisor. This approach transforms multiple systems into one, maximizing resource utilization while reducing space requirements and cabling, resulting in significantly lower overall system and installation costs and increased reliability.
Application-ready COMs
As part of its aReady.COM strategy, congatec offers aReady.VT and aReady.IOT in an application-ready or custom-configured package (Figure 2). These aReady.COMs integrate a pre-configured hypervisor, operating system, and IIoT software. Developers can boot these individually configured aReady.COM modules immediately and install their applications.
Alternatively, they can skip this task and let congatec deliver ready-made images with pre-installed application software, allowing modules to be directly deployed on-site during the commissioning process. This streamlines workflows, supply chain, and warehousing, making them much more efficient.
Regardless of the chosen integration level, aReady.COMs minimize the complexity of integrating diverse IIoT functionalities into embedded and edge computing systems below the application layer.
What Is Next for Automotive Battery Technology?
In recent years, there have been significant advancements in automotive battery technology, paving the way for cleaner and more efficient vehicles. Researchers worldwide are actively exploring new materials and technologies to improve the performance and sustainability of batteries used in electric vehicles (EVs) and hybrid cars. So, what is next for automotive battery technology?
The Future of Automotive Battery Technology:
- Lithium-Ion Batteries: Lithium-ion batteries have been the go-to choice for electric vehicles due to their high energy density and long cycle life. However, researchers are working on enhancing these batteries further to increase their energy storage capacity and reduce their cost.
- Solid-State Batteries: One of the most promising advancements in battery technology is the development of solid-state batteries. These batteries use solid electrolytes instead of liquid ones, which can significantly increase energy density and improve safety.
- Graphene Batteries: Graphene, a single layer of carbon atoms, has shown great potential for use in batteries due to its high conductivity and strength. Research is ongoing to incorporate graphene into battery designs to increase energy storage and reduce charging times.
- Sodium-Ion Batteries: Sodium-ion batteries are being explored as a more sustainable alternative to lithium-ion batteries. Sodium is abundant and inexpensive, making it a viable option for large-scale energy storage applications.
- Wireless Charging: Wireless charging technology is gaining traction in the automotive industry, allowing EVs to charge without physical connections to charging stations. This convenience could revolutionize the way we power our vehicles in the future.
Challenges and Opportunities
While the future of automotive battery technology looks promising, there are still challenges that need to be overcome. The high cost of advanced battery materials and the limited availability of rare earth elements are major hurdles in the widespread adoption of EVs. Additionally, battery recycling and disposal methods need to be improved to minimize environmental impact.
However, with continued research and development, these challenges can be addressed, opening up new opportunities for innovation in the automotive industry, from increasing energy density to reducing charging times.
The future of automotive battery technology is bright, with researchers worldwide pushing the boundaries of energy storage and efficiency. From solid-state batteries to graphene-enhanced designs, these approaches all aim to improve EV performance. As we move towards a cleaner and more sustainable future, battery technology will play a crucial role in shaping the way we drive; what comes next depends on continued innovation and collaboration on the next generation of batteries for electric vehicles.
What Is the Current State of Automotive Battery Technology?
The current state of automotive battery technology is advanced, with lithium-ion batteries being the most common type used in electric vehicles. These batteries have a high energy density, which allows them to store a large amount of energy in a relatively small and lightweight package. However, there are still some challenges that automakers face when it comes to implementing these batteries in their vehicles.
The Most Challenging Aspect of Automotive Battery Technology Today
In today’s fast-paced world, the automotive industry is constantly evolving to meet the demands of consumers and the environment. One of the key areas of focus for automakers is battery technology, as more and more vehicles are transitioning to electric power. But what is the most challenging aspect of automotive battery technology today?
- Cost: One of the biggest challenges of automotive battery technology today is the cost of manufacturing lithium-ion batteries. While the cost of these batteries has decreased in recent years, they still make up a significant portion of the overall cost of an electric vehicle, which keeps electric vehicles less affordable for the average consumer.
- Range: Another challenge facing automakers is the range of electric vehicles. While advancements in battery technology have increased the range of electric vehicles, they still cannot match the range of traditional gasoline-powered vehicles. This limitation makes consumers hesitant to make the switch to electric vehicles.
- Charging Infrastructure: The lack of a robust charging infrastructure is another challenge that automakers face. While there are more charging stations being built every day, the infrastructure is still not as widespread or convenient as gas stations, making it difficult for consumers to rely solely on electric vehicles for their transportation needs.
- Durability: Lithium-ion batteries degrade over time, which can lead to a decrease in performance and range. This degradation can be exacerbated by factors such as extreme temperatures or fast charging, making it difficult for automakers to guarantee the longevity and durability of their batteries.
Addressing these challenges will require progress on several fronts:
- Research and Development: Continued research and development in battery technology is crucial to overcoming the challenges faced by automakers. By investing in new materials and manufacturing processes, automakers can reduce the cost of batteries, increase their energy density, and improve their longevity.
- Infrastructure Investment: Building a robust charging infrastructure is essential to increasing the adoption of electric vehicles. Governments and private companies must work together to install more charging stations and make them more accessible to consumers.
- Consumer Education: Educating consumers about the benefits of electric vehicles and addressing their concerns about range and charging infrastructure is key to increasing their adoption. Automakers must work to dispel myths and misconceptions about electric vehicles and highlight their environmental and cost-saving advantages.
While automotive battery technology has come a long way in recent years, there are still several challenges that automakers face in implementing these technologies in their vehicles. By addressing issues such as cost, range, charging infrastructure, and durability, automakers can pave the way for a future where electric vehicles are the norm rather than the exception.
The Future of Automotive Battery Technology
Are you curious about what the future of automotive battery technology holds? In this article, we will explore the advancements and innovations that are shaping the future of automotive batteries.
What Will the Future Automotive Battery Be Like?
1. Longer Battery Life: One of the most significant developments in automotive battery technology is the quest for longer battery life. Manufacturers are constantly working on improving the energy density of batteries to increase the range of electric vehicles. This will result in fewer charges and longer driving distances, making electric cars more convenient and practical for everyday use (a rough range and charging-time calculation follows this list).
2. Faster Charging Speeds: Another key aspect of the future of automotive batteries is faster charging speeds. With advancements in charging technology, electric vehicles will be able to charge more quickly, reducing the time it takes to power up and get back on the road. Fast-charging stations will become more widespread, making electric vehicles a more viable option for long-distance travel.
3. Enhanced Safety Features: Safety is always a top priority when it comes to automotive batteries. In the future, we can expect to see even more advanced safety features built into battery systems to prevent overheating, overcharging, and other potential hazards. This will give drivers peace of mind knowing that their electric vehicles are not only environmentally friendly but also safe to use.
4. Integration with Renewable Energy Sources: As the world moves towards sustainable energy solutions, automotive batteries will play a crucial role in storing and utilizing energy from renewable sources such as solar and wind. This integration will not only reduce the carbon footprint of electric vehicles but also help make them more self-sufficient and eco-friendly.
5. Lightweight and Compact Designs: Advancements in battery materials and manufacturing processes will lead to lighter and more compact battery designs in the future. This will not only improve the overall performance of electric vehicles but also make them more efficient and easier to produce on a large scale.
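As a rough illustration of how energy density and charger power translate into range and charging time, the sketch below works through the arithmetic. All figures (pack size, usable fraction, consumption, charger power) are illustrative assumptions, not data for any specific vehicle.

```python
# Back-of-the-envelope arithmetic: how pack energy, consumption, and charger
# power relate to range and charging time. All numbers are assumptions.

def range_km(pack_kwh: float, usable_fraction: float, kwh_per_100km: float) -> float:
    """Approximate driving range from usable pack energy and average consumption."""
    return pack_kwh * usable_fraction / kwh_per_100km * 100

# Assumed mid-size EV today: 75 kWh pack, ~90% usable, ~18 kWh/100 km.
print(f"baseline range:       {range_km(75, 0.90, 18):.0f} km")   # ~375 km

# Same pack mass at ~20% higher energy density (assumed): 90 kWh.
print(f"higher-density range: {range_km(90, 0.90, 18):.0f} km")   # ~450 km

# 20%-to-80% top-up of the 75 kWh pack on an assumed 150 kW DC charger,
# ignoring the power taper that real charging curves show near full charge.
energy_added_kwh = (0.80 - 0.20) * 75            # 45 kWh
print(f"fast-charge time:     {energy_added_kwh / 150 * 60:.0f} min")  # ~18 min
```

The point of the sketch is simply that range scales directly with usable pack energy, and charging time with charger power, which is why energy density and fast-charging infrastructure dominate the roadmap above.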
The future of automotive battery technology is bright, with advancements in energy density, charging speed, safety features, integration with renewable energy sources, and lightweight designs. Electric vehicles are set to become even more practical, convenient, and environmentally friendly in the years to come.
The post What Is Next for Automotive Battery Technology? appeared first on ELE Times.
Semiconductor laser market growing at 9% CAGR to over $5bn in 2029
Celestial AI wins 2024 Global Semiconductor Alliance ‘Start-Up to Watch’ award
Made a 30 x 40 mm watch with a touch screen and external RTC, built around an ESP32-S3 and powered by a lithium battery or USB cable