News from the world of micro- and nanoelectronics
Latest issue of Semiconductor Today now available
ROHM’s 4th Generation SiC MOSFET Bare Chips Adopted in Three EV Models of ZEEKR from Geely
Integration in traction inverters extends the cruising range and improves performance
ROHM has announced the adoption of power modules equipped with 4th generation SiC MOSFET bare chips for the traction inverters in three models of the ZEEKR EV brand from Zhejiang Geely Holding Group (Geely), a top-10 global automaker. Since 2023, these power modules have been mass-produced at HAIMOSIC (SHANGHAI) Co., Ltd., a joint venture between ROHM and Zhenghai Group Co., Ltd., and shipped to Viridi E-Mobility Technology (Ningbo) Co., Ltd., a Tier 1 manufacturer under Geely.
Geely and ROHM have been collaborating since 2018, beginning with technical exchanges, then later forming a strategic partnership focused on SiC power devices in 2021. This led to the integration of ROHM’s SiC MOSFETs into the traction inverters of three models: the ZEEKR X, 009, and 001. In each of these EVs, ROHM’s power solutions centered on SiC MOSFETs play a key role in extending the cruising range and enhancing overall performance.
ROHM is committed to advancing SiC technology, with plans to launch 5th generation SiC MOSFETs in 2025 while accelerating market introduction of 6th and 7th generation devices. What’s more, by offering SiC in various forms, including bare chips, discrete components, and modules, ROHM is able to promote the widespread adoption of SiC technology, contributing to the creation of a sustainable society.
ZEEKR Models Equipped with ROHM’s EcoSiC
The ZEEKR X, which features a maximum output exceeding 300kW and cruising range of more than 400km despite being a compact SUV, is attracting attention even outside of China due to its exceptional cost performance. The 009 minivan features an intelligent cockpit and large 140kWh battery, achieving an outstanding maximum cruising range of 822km. And for those looking for superior performance, the flagship model, 001, offers a maximum output of over 400kW from dual motors with a range of over 580km along with a four-wheel independent control system.
Market Background and ROHM’s EcoSiC
In recent years, there has been a push to develop more compact, efficient, lightweight electric systems to expand the adoption of next-generation electric vehicles (xEVs) and achieve environmental goals such as carbon neutrality. For electric vehicles in particular, improving the efficiency of the traction inverter, a key element of the drive system, is crucial for extending the cruising range and reducing the size of the onboard battery, heightening expectations for SiC power devices.
As the world’s first supplier to begin mass production of SiC MOSFETs in 2010, ROHM continues to lead the industry in SiC device technology development. These devices are now marketed under the EcoSiC brand, encompassing a comprehensive lineup that includes bare chips, discrete components, and modules.
EcoSiC Brand
EcoSiC is a brand of devices that utilize silicon carbide (SiC), which is attracting attention in the power device field for performance that surpasses silicon (Si). ROHM independently develops technologies essential for the evolution of SiC, from wafer fabrication and production processes to packaging and quality-control methods. At the same time, we have established an integrated production system throughout the manufacturing process, solidifying our position as a leading SiC supplier.
The post ROHM’s 4th Generation SiC MOSFET Bare Chips Adopted in Three EV Models of ZEEKR from Geely appeared first on ELE Times.
Capacitor Discharger - Discharge HV Capacitors up to 450V and 1000 µF
Submitted by /u/Southern-Stay704
Renesas Unwraps MCU-Based Sensor Module for Smart Air Quality Monitoring
Hot Chips Keynote: AMD President Shares Thoughts on AI Pervasiveness
Cornell and Lit Thinking working on DARPA-funded project to develop AlN-based PiN diodes with low on-state resistance
Beaming solar power to Earth: feasible or fantasy?
It’s always interesting when we are presented with very different and knowledgeable perspectives about the feasibility of a proposed technological advance. I recently had this experience when I saw two sets of articles about the same highly advanced concept within a short time window, but with completely different assessments of its viability.
In this case, the concept is simple and has been around for a long time in science fiction and speculative stories: capture gigawatts of solar energy using orbiting structures (I hesitate to call them satellites) and then beam that energy down to Earth.
The concept has been written about for decades, is simple to describe in principle, and appears to offer many benefits with few downsides. In brief, the plan is to use huge solar panels to intercept some of the vast solar energy impinging on Earth, convert it to electricity, and then beam the resultant electrical energy to ground-based stations from where it could be distributed to users. In theory, this would be a nearly environmentally “painless” source of free energy. What’s not to like?
It’s actually more than just an “on paper” or speculative concept. There are several serious projects underway, including one at the California Institute of Technology (Caltech), which is building a very small-scale version of some of the needed components. They have been performing ground-based tests and even launched some elements into orbit for in-space evaluation in January 2023 (“In a First, Caltech’s Space Solar Power Demonstrator Wirelessly Transmits Power in Space”). The Wall Street Journal even had an upbeat article about it, “Beaming Solar Energy From Space Gets a Step Closer”.
There are many technical issues to be resolved in the real world (actually, they are “out of this world”) before any such system can be fielded. Note that the Caltech project is funded thus far by a $100 million grant, all from a single benefactor.
The Caltech Space Solar Power Project launched their Space Solar Power Demonstrator (SSPD) to test several key components of an ambitious plan to harvest solar power in space and beam the energy back to Earth. In brief, it consists of three main experiments, each tasked with testing a different key technology of the project, Figure 1.
Figure 1 Caltech’s Space Solar Power Demonstrator from their Space Solar Power Project has three key subsystems, encompassing structure, solar cells, and power transfer. Source: Caltech
The three segments are:
- Deployable on-Orbit ultraLight Composite Experiment (DOLCE): A structure measuring 6 feet by 6 feet that demonstrates the architecture, packaging scheme and deployment mechanisms of the modular spacecraft that would eventually make up a kilometer-scale constellation forming a power station, Figure 2;
Figure 2 Engineers carefully lower the DOLCE portion of the Space Solar Power Demonstrator onto the Vigoride spacecraft built by Momentus. Source: Caltech
- ALBA: A collection of 32 different types of photovoltaic (PV) cells, to enable an assessment of the types of cells that are the most effective in the punishing environment of space;
- Microwave Array for Power-transfer Low-orbit Experiment (MAPLE): An array of flexible lightweight microwave power transmitters with precise timing control focusing the power selectively on two different receivers to demonstrate wireless power transmission at distance in space.
Scaling a demonstration unit up to useable size is a major undertaking. The researchers envision the system as being designed and built as a highly modular, building-block architecture. Each spacecraft will carry a square-shaped membrane measuring roughly 200 feet on each side. The membrane is made up of hundreds or thousands of smaller units which have PV cells embedded on one side and a microwave transmitter on the other.
Each spacecraft would operate and maneuver in space on its own but also possess the ability to hover in formation and configure an orbiting power station spanning several kilometers with the potential to produce about 1.5 gigawatts of continuous power. A phased-array antenna would aim the 10-GHz power beam to a surface zone about five kilometers in diameter.
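Those two headline figures invite a quick back-of-envelope check. The sketch below is my own illustration, not taken from Caltech's documentation, and it assumes, unrealistically, that all 1.5 GW arrives uniformly across the 5-km-diameter ground zone with no conversion losses:

```c
#include <stdio.h>

int main(void) {
    /* Headline figures quoted above; everything else is a simplifying
       assumption (uniform beam, no losses). */
    const double pi       = 3.141592653589793;
    const double power_w  = 1.5e9;    /* ~1.5 GW continuous power      */
    const double zone_d_m = 5000.0;   /* ~5 km receiving-zone diameter */

    double area_m2  = pi * (zone_d_m / 2.0) * (zone_d_m / 2.0);
    double flux_wm2 = power_w / area_m2;

    printf("Receiving area: %.1f km^2\n", area_m2 / 1e6);    /* ~19.6 */
    printf("Average incident flux: %.0f W/m^2\n", flux_wm2); /* ~76   */
    /* For scale, midday sunlight is roughly 1000 W/m^2, so the ground
       station trades intensity for round-the-clock availability. */
    return 0;
}
```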
The concept is certainly ambitious. Perhaps most challenging is the very harsh reality that scaling up power-related projects from a small-scale, bench-size demonstration unit to a full-scale functioning system is a highly nonlinear process. This applies to battery storage systems, solar and wind energy harvesting, and other sources.
Experience shows that there’s an exponential increase in difficulties and issues as physical size and power levels grow; the only question is “what is that exponent value?” Still, the concept makes sense and seems so straightforward; we just have to keep moving the technology along and we’ll get there, right?
I was almost convinced, but then I saw a strong counterargument in an article in the June 2024 issue of IEEE Spectrum (“A Skeptic’s Take on Beaming Power to Earth from Space”). The article’s author, Henri Barde, joined the European Space Agency in 2007 and served as head of the power systems, electromagnetic compatibility, and space environment division until his retirement in 2017; he has worked in the space industry for nearly 30 years and has reality-based insight.
He looked at various proposed and distinctly different approaches to capturing and beaming the power, including CASSIOPeiA from Space Solar Holdings Group; SPS-ALPHA Mark-III from a former NASA physicist; Solar Power Satellite from Thales Alenia Space; and MR-SPS from the China Academy of Space Technology (there’s a brief mention of the Caltech project as well).
He discusses key attributes, presumed benefits, and, most importantly, the real obstacles to success, as well as the dollar and technical costs of overcoming those obstacles (assuming they can be overcome). These include the hundreds, if not thousands, of launches needed to get everything “up there”; the need for robotic in-space assembly and repair; fuel for station-keeping at the desired low earth orbit (LEO), medium earth orbit (MEO), or geostationary orbit (GEO); temperature extremes (there will be periods when satellites are in the dark) and the associated flexing; impacts from thousands of micrometeorites; electronic components capable of handling megawatts in space (none of which presently exist); and many more.
His conclusion is simple: it’s a major waste of resources that could be better spent on improved renewable power sources, storage, and the grid on Earth. The problem, he points out, is that beamed solar power is such an enticing concept: it’s elegant and seems to solve the energy problem so cleanly and crisply, once you figure it out.
So now I am perplexed. The sobering reality described in Barde’s “downer” article wiped out the enthusiasm I was developing for projects such as the one at Caltech. At some point, the $100 million seed money (and similar amounts at other projects) will need to be supplemented by more money, and lots of it (easily, trillions), to take any of these ideas to their conclusion, all while carrying substantial risk.
Is beamed solar power one of those attractive ideas that is actually impractical, impossible, too risky, and too costly when it meets the reality of physics, electronics, space, and more? Do we need to keep pushing it to see where it can take us?
Or will the spigot of money as well as the personal energy of its proponents eventually dry up, since it is not a project that you can do part way? After all, with a project like this one, you’re either all in or you are all out.
I know that when it comes to the paths that technology advances take, you should “never say never.” So, check back in a few decades, and we’ll see where things stand.
Related Content
- Keep solar panels clean from dust, fungus
- Lightning as an energy harvesting source?
- The other fusion challenge: harvesting the power
References
- IEEE Spectrum, “A Skeptic’s Take on Beaming Power to Earth from Space”
- IEEE Spectrum, “Space-based Solar Power: A Great Idea Whose Time May Never Come”
- IEEE Spectrum, “Powering Planes With Microwaves Is Not the Craziest Idea”
- IEEE/Caltech Technical Paper, “The Caltech Space Solar Power Demonstration One”
- Caltech, “Solar Power at All Hours: Inside the Space Solar Power Project”
- Caltech, “Space Solar Power Project Ends First In-Space Mission with Successes and Lessons”
- Caltech, “In a First, Caltech’s Space Solar Power Demonstrator Wirelessly Transmits Power in Space”
- Caltech, “Caltech to Launch Space Solar Power Technology Demo into Orbit in January”
- The Wall Street Journal, “Beaming Solar Energy From Space Gets a Step Closer”
The post Beaming solar power to Earth: feasible or fantasy? appeared first on EDN.
ROHM’s fourth-generation SiC MOSFET chips adopted in three Geely ZEEKR EV models
EPC Space unveils dynamic cross-reference tool for rad-hard MOSFET device replacement
Coherent unveils CW DFB InP lasers for silicon photonics transceivers
Understanding the Effect of Diode Reverse Recovery in Class D Amplifiers
Hot Chips Heavy Hitter: IBM Tackles Generative AI With Two Processors
Wise-integration forms Hong Kong-based subsidiary to manage Asian business development
USB 3: How did it end up being so messy?
After this blog post’s proposed topic had already been approved, but shortly before I started to write, I realized I’d recently wasted a chunk of money. I’m going to try to not let that reality “color” the content and conclusions, but hey, I’m only human…
Some background: as regular readers may recall, I recently transitioned from a Microsoft Surface Pro 5 (SP5) hybrid tablet/laptop computer:
to a Surface Pro 7+ (SP7+) successor:
Both computer generations include a right-side USB-A port; the newer model migrates from a Mini DisplayPort connector on that same side (and above the USB-A connector) to a faster and more capable USB-C replacement.
Before continuing with my tale, a review: as I previously discussed in detail six years ago (time flies when you’re having fun), bandwidth and other signaling details documented in the generational USB 1.0, USB 2.0, USB 3.x and still embryonic USB4 specifications are largely decoupled from the connectors and other physical details in the USB-A, USB-B, mini-USB and micro-USB, and latest-and-greatest USB-C (formally: USB Type-C) specs.
The signaling and physical specs aren’t completely decoupled, mind you; some USB speeds are only implemented by a subset of the available connectors, for example (I’ll cover one case study here in a bit). But the general differentiation remains true and is important to keep in mind.
Back to my story. In early June, EDN published my disassembly of a misbehaving (on MacOS, at least) USB flash drive. The manufacturer had made the following performance potential claims:
USB 3.2 High-Speed Transmission Interface
Now there is no reason to shy away from the higher cost of the USB 3.2 Gen 1 interface. The UV128 USB flash drive brings the convenience and speed of premium USB drives to budget-minded consumers.
However, benchmarking showed that it came nowhere close to 5 Gbps baseline USB 3.x transfer rates, far from the even faster 10 and 20 Gbps speeds documented in newer spec versions:
What I didn’t tell you at the time was that the results I shared were from my second benchmark test suite run-through. The first time I ran Blackmagic Design’s Disk Speed Test, I had connected the flash drive to the computer via an inexpensive (sub-$5 inexpensive, to be exact) multi-port USB 3.0 hub intermediary.
The benchmark suite ran ridiculously slowly that first time; in retrospect, I wish I had grabbed a screenshot then, too. In trying to figure out what had happened, I noticed (after doing a bunch of research; why Microsoft obscures this particular detail is beyond me) that its USB-C interface specified USB 3.2 Gen 2 10 Gbps speeds. Here’s the point where I then over-extrapolated; I assumed (incorrectly, in retrospect) that the USB-A port was managed by the same controller circuitry and therefore was capable of 10 Gbps speeds, too. And indeed, direct-connecting the flash drive to the system’s USB-A port delivered (modestly) faster results:
But since this system only includes a single integrated USB-A port, I’d still need an external hub for ongoing use. So, I dropped (here’s the “wasted a chunk of money” bit) $40 each, nearly a 10x price increase over those inexpensive USB 3.0 hubs I mentioned earlier, on the only 10 Gbps USB-A hub I could find, Orico’s M3H4-G2:
I bought three of them, actually, one for the SP7+, one for my 2018 Mac mini, and the third for my M1 Max Mac Studio. All three systems spec 10 Gbps USB-C ports; those in the latter two systems do double duty with 40 Gbps Thunderbolt 3 or 4 capabilities. The Orico M3H4-G2 isn’t powered over the USB connection, as its humble Idsonix precursor was. I had to provide the M3H4-G2 with external power for it to function, but at least Orico bundled a wall wart with it. And the M3H4-G2’s orange-dominant paint job was an…umm…“acquired taste”. But all in all, I was still feeling pretty pleased with my acquisition…
…until I went back and re-read that Microsoft-published piece, continuing a bit further in it than I had before, whereupon I found that the SP7+ USB-A port was only specified at 5 Gbps. A peek at the Device Manager report also revealed distinct entries for the USB-A and USB-C ports:
Unfortunately, my MakerHawk Makerfire USB tester only measures power, not bandwidth, so I’m going to need to depend on the Microsoft documentation as the definitive ruling.
And, of course, when I went back to the Mac mini and Mac Studio product sheets, buried in the fine print was indication that their USB-A ports were only 5 Gbps, too. Sigh.
So, what had happened the first time I tried running Blackmagic Design’s Disk Speed Test on the SP7+? My root-cause guess is a situation that I suspect at least some of you have also experienced: plug in a USB 3.x peripheral, and it incorrectly enumerates as a USB 1.0 or USB 2.0 device instead. Had I just ejected the flash drive from the USB 3.0 hub, reinserted it, and re-run the benchmarks, I suspect I would have ended up with the exact same result I got from plugging it directly into the computer, saving myself $120 plus tax in the process. Bitter? Who, me?
Here’s another thought you might now be having: why does the Orico M3H4-G2 exist at all? Good question. To be clear, USB-A optionally supports 10 Gbps USB 3 speeds, as does USB-C; the only USB-C-specific speed bin is 20 Gbps (for similar reasons, USB4 is also USB-C-only from a physical implementation standpoint). But my subsequent research confirmed that my three computers weren’t aberrations; pretty much all computers, even latest-and-greatest ones and both mobile and desktop, are 5 Gbps-only from a USB-A standpoint. Apparently, the suppliers have decided to focus their high-speed implementation attention solely on USB-C.
That said, I did find one add-in card, Startech’s PEXUSB311AC3, that implemented 10 Gbps USB-A:
I’m guessing there might also be the occasional motherboard out there that’s 10 Gbps USB-A-capable, too. You could theoretically connect the hub to a 10 Gbps USB-C system port via a USB-C-to-USB-A adapter, assuming the adapter can do 10 Gbps bidirectional transfers, too (I haven’t yet found one). And of course, two 10 Gbps USB-A-capable peripherals, such as a couple of SSD storage devices, can theoretically interact with each other through the Orico hub at peak potential speeds. But suffice it to say that I now more clearly understand why the M3H4-G2 is one-of-a-kind and therefore pricey, both in an absolute sense and versus 5 Gbps-only hub alternatives.
1,000+ words in, what’s this all have to do with the “Why is USB 3 so messy” premise of this piece? After all, the mistake was ultimately mine in incorrectly believing that my systems’ USB-A interfaces were capable of faster transfer speeds than reality afforded. The answer: go back and re-scan the post to this point. Look at both the prose and photos. You’ll find, for example:
- A USB flash drive that’s variously described as being “USB 3.0” and with a “USB 3.2 Gen 1” interface and a “USB 3.2 High-Speed Transmission Interface”
- An add-in card whose description includes both “10 Gbps” and “USB 3.2 Gen 2” phrases
- And a multi-port hub that’s “USB 3.1”, “USB 3.1 Gen2” and “10Gbps Super Speed”, depending on where in the product page you look.
What I wrote back in 2018 remains valid:
USB 3.0, released in November 2008, is once again backwards compatible with USB 1.x and USB 2.0 from a transfer rate mode(s) standpoint. It broadens the pin count to a minimum of nine wires, with the additional four implementing the two differential data pairs (one transmitter, one receiver, for full duplex support) harnessed to support the new 5 Gbps SuperSpeed transfer mode. It’s subsequently been renamed USB 3.1 Gen 1, commensurate with the January 2013 announcement of USB 3.1 Gen 2, which increases the maximum data signaling rate to 10 Gbps (known as SuperSpeed+) along with reducing the encoding overhead via a protocol change from 8b/10b to 128b/132b.
Even more recently, in the summer of 2017 to be exact, the USB 3.0 Promoter Group announced two additional USB 3 variants, to be documented in the v3.2 specification. They both leverage multi-lane operation over existing cable wires originally intended to support the Type-C connector’s rotational symmetry. USB 3.2 Gen 1×2 delivers a 10 Gbps SuperSpeed+ data rate over 2 lanes using 8b/10b encoding, while USB 3.2 Gen 2×2 combines 2 lanes and 128b/132b encoding to support 20 Gbps SuperSpeed+ data rates.
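To make the encoding-overhead point concrete, here is a rough, self-contained calculation of my own (using only the signaling rates and line codes named above) of the best-case payload bandwidth each mode leaves after line coding, before any protocol overhead:

```c
#include <stdio.h>

/* Best-case payload rate = raw signaling rate x line-code efficiency.
   Real-world throughput is lower still due to framing, flow control,
   and the devices themselves. */
int main(void) {
    struct { const char *mode; double raw_gbps; double efficiency; } modes[] = {
        { "5 Gbps SuperSpeed   (8b/10b)   ",  5.0,   8.0 / 10.0  },
        { "10 Gbps SuperSpeed+ (128b/132b)", 10.0, 128.0 / 132.0 },
        { "20 Gbps SuperSpeed+ (128b/132b)", 20.0, 128.0 / 132.0 },
    };
    for (int i = 0; i < 3; i++)
        printf("%s -> %.2f Gbps usable\n",
               modes[i].mode, modes[i].raw_gbps * modes[i].efficiency);
    return 0;   /* prints 4.00, 9.70, and 19.39 Gbps respectively */
}
```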
But a mishmash of often incomplete and/or incorrect terminology, coupled with consumers’ instinctive interpretation that “larger numbers are better”, has severely muddied the waters as to what exactly a consumer is buying and therefore should expect to receive with a USB 3-based product. In fairness, the USB Implementers Forum would have been perfectly happy had its member companies and compatibility certifiers dispensed with the whole numbers-and-suffixes rigamarole and stuck with high-level labels instead (40 Gbps and 80 Gbps are USB4-specific):
That said:
- 5 Gbps = USB 3.0, USB 3.1 Gen 1, and USB 3.2 Gen 1 (with “Gen 1” implying single-lane operation even in the absence of an “x” lane-count qualifier)
- 10 Gbps = USB 3.1 Gen 2, USB 3.2 Gen 2 (with the absence of an “x” lane-count qualifier implying single-lane operation), and USB 3.2 Gen 2×1 (the more precise alternative)
- 20 Gbps = USB 3.2 Gen 2×2 (only supported by USB-C).
So, what, for example, does “10 Gbps USB 3” mean? Is it a single-lane USB 3.1 device, with that one lane capable of 10 Gbps speed? Or is it a dual-lane USB 3.2 device with each lane capable of 5 Gbps speeds? Perhaps obviously, try to connect devices representing both these 10 Gbps implementations together and you’ll end up with…5 Gbps (cue sad trombone sound).
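One way to keep the aliases straight is to treat each label as nothing more than a key into a (per-lane speed, lane count) table. The sketch below is my own summary of the naming rundown above; the negotiation logic is deliberately simplified (real link training is more involved):

```c
#include <stdio.h>

struct usb3_alias { const char *label; double lane_gbps; int lanes; };

/* Mapping taken from the bulleted rundown above */
static const struct usb3_alias aliases[] = {
    { "USB 3.0",          5.0, 1 },
    { "USB 3.1 Gen 1",    5.0, 1 },
    { "USB 3.2 Gen 1",    5.0, 1 },
    { "USB 3.1 Gen 2",   10.0, 1 },
    { "USB 3.2 Gen 2",   10.0, 1 },
    { "USB 3.2 Gen 2x1", 10.0, 1 },
    { "USB 3.2 Gen 1x2",  5.0, 2 },  /* also marketed as "10 Gbps" */
    { "USB 3.2 Gen 2x2", 10.0, 2 },  /* 20 Gbps, USB-C only        */
};

/* Simplified negotiation: slower common per-lane speed, single lane
   unless both ends support two. */
static double negotiated_gbps(const struct usb3_alias *a,
                              const struct usb3_alias *b) {
    double lane = a->lane_gbps < b->lane_gbps ? a->lane_gbps : b->lane_gbps;
    int lanes   = a->lanes < b->lanes ? a->lanes : b->lanes;
    return lane * lanes;
}

int main(void) {
    /* A "10 Gbps" Gen 1x2 host meets a "10 Gbps" Gen 2 device... */
    printf("Negotiated: %.0f Gbps\n",
           negotiated_gbps(&aliases[6], &aliases[4]));  /* -> 5 Gbps */
    return 0;
}
```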
So, like I said, what a mess. And while I’d like to think that USB4 will fix everything, a brief scan of the associated Wikipedia page details leaves me highly skeptical. If anything, I fear that the situation will end up even worse. Let me know your thoughts in the comments.
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- USB: Deciphering the signaling, connector, and power delivery differences
- An O/S-fussy USB flash drive
- A deep dive inside a USB flash drive
- USB Power Delivery: incompatibility-derived foibles and failures
- Cutting into a conventional USB-C charger
- Checking out a USB microphone
The post USB 3: How did it end up being so messy? appeared first on EDN.
Top 10 Lithium-ion Battery Manufacturing Companies in India in 2024
The top 10 lithium-ion battery manufacturing companies in India in 2024 are as follows:
- Servotech Power Systems
Servotech Power Systems was incorporated in 2004. It is based out of New Delhi. It has its manufacturing and R&D plant in Sonipat, Haryana.
It manufactures its batteries using the latest engineering concepts and high-quality raw materials.
Its batteries are among the most reliable energy storage solutions available in India. They are known for their high efficiency and durability.
They are used in numerous applications, such as two-, three-, and four-wheelers, power back-up systems, solar power plants, offices, and factories.
It has also established a subsidiary company, Servotech Power Infrastructure, to operate charging points for electric vehicles. This subsidiary relies on the lithium-ion batteries manufactured by Servotech Power Systems.
- Amara Raja Energy & Mobility
Amara Raja Energy & Mobility is a flagship company of the famous Amara Raja Group.
It is one of the first companies in India to invest in Li-ion technology. It produces Li-ion cells, battery packs and charging solutions for batteries. They are widely used in various electric vehicles and the telecom industry.
It has established a state-of-the-art gigafactory in Telangana at a cost of Rs 9,500 crore, with a cell production capacity of 16 GWh and a battery pack capacity of 5 GWh.
It exports its quality batteries to 50 countries across the globe.
- Exide Energy Solutions Limited
It is a subsidiary of Exide Industries Limited. It was earlier called Exide Energy Private Limited (EEPL), which merged into Exide Energy Solutions Limited in March 2024.
Exide Energy Private Limited was incorporated on 29 September 2018 as a joint venture between Exide Industries Limited (EIL) and Leclanche SA (LSA), Switzerland. In November 2022, the latter exited the joint venture, and Exide Industries Limited became the sole owner of the venture.
Exide Energy Private Limited had its production plant in Prantij, which is situated in the Sabarkantha district of Gujarat. This plant is still functional.
This plant produces lithium-ion batteries with battery management systems. They are used for both electric mobility and stationary power applications and are sold under the brand name Nexcharge.
Following the merger into Exide Energy Solutions Limited, EESL is establishing a 12 GWh gigafactory in Bengaluru, Karnataka.
Once this plant becomes operational, it will further scale up the production of lithium-ion batteries.
The Li-ion batteries produced by this organisation use lithium iron phosphate (LiFePO4) as the cathode material. It is a strong choice for three reasons: high power density, very high safety, and a very long battery lifespan.
- ATLBattery Technology (India) Private Limited
It is the Indian subsidiary company of the world-famous Japanese company, Amperex Technology Limited, the world’s leading producer of lithium ion batteries. It was established in 2020. In India, it is based out of Rewari, Haryana.
It has established a 180-acre lithium-ion manufacturing plot at MT Sohna, near Gurugram. It is the largest lithium-ion manufacturing plant in India.
It produces lithium ion batteries for electric vehicles and mobile phones.
- Tata Chemicals Limited
It is a subsidiary company of the prestigious Tata Group.
It had signed an MoU with the Indian Space Research Organisation (ISRO). Under this MoU, the lithium-ion cell technology developed by ISRO’s Vikram Sarabhai Space Centre (VSSC) was transferred to Tata Chemicals.
ISRO had developed this technology for the production of lithium-ion cells for space-based applications, such as rockets, satellites, etc.
Since the transfer, Tata Chemicals Limited (TCL) has been using it to produce a wide variety of lithium-ion cells of different capacities, energies, sizes, and power densities.
It produces lithium-ion batteries using lithium carbonate (Li2CO3) as a raw material.
It has entered into partnerships with prominent Indian R&D centres such as ISRO, CSIR-CECRI, and CMET to develop lithium-ion cells indigenously.
It also runs a Li-ion battery recycling operation. Its recovery plant can recover valuable metals such as lithium, nickel, manganese, and cobalt at more than 99% purity with industry-level yields.
Its main focus is on the electric vehicle market in India.
- Okaya EV Private Limited
It is a subsidiary company of the Okaya Power group. It specialises in producing lithium-ion batteries for electric vehicles, charging, and battery swapping solutions.
It produced India’s first lithium-ion battery, named Okaya Royale, which is offered in two variants: Okaya Royale and Okaya Royale XL.
Its production process is certified with ISO 14001:2004 certification.
It is the third-largest battery manufacturer in India. Besides, it is the leading charging station manufacturer in India.
The lithium-ion batteries produced by Okaya EV Private Limited are lightweight and compact, recharge very quickly, have a longer lifespan, provide longer backup, and are almost maintenance-free, which makes them highly durable.
It specialises in the production of batteries for electric vehicles.
- Waaree Technologies Limited
It is one of the constituent Indian companies of the world-famous Waaree Group. Its parent company produces components in the energy storage, solar, and instrumentation domains.
It produces lithium-ion cells and batteries for e-rickshaws, e-bicycles, e-bikes, e-forklifts, battery energy storage systems, telecom, and uninterruptible power supplies (UPS).
It endeavours to create India’s top-notch “cell to system” technology. It primarily caters to high-quality energy storage solutions for electric utilities, energy storage systems, and renewable energy applications.
It produces four battery series: Liger, Lion, Lynx, and Lit.
- Loom Solar Private Limited
It is a six-year-old start-up, established in 2018. It is based out of Faridabad, Haryana. It is certified as per the ISO 9001-2015 certification.
It has its manufacturing plant in Faridabad, Haryana.
It manufactures lithium-ion batteries, inverters, and solar panels.
- Panasonic Life Solutions India Private Limited
It was established on 14 July 2006 as Panasonic India Private Limited. With effect from 1 August 2022, it was renamed Panasonic Life Solutions India Private Limited, bringing all businesses of the Panasonic Group in India under one roof.
It is the Indian subsidiary company of the Panasonic Group. Its parent firm is based out of Kadoma, Osaka, Japan.
Its head office in India is in Gurugram, Haryana.
It manufactures lithium-ion batteries and energy storage system using lithium ion batteries.
It manufactures lithium-ion batteries in both coin and cylindrical forms and in a wide range of sizes, so they are used in everything from small devices such as digital gadgets and laptops to large applications such as electric vehicles.
- Battrixx
It is a division of Kabra Extrusiontechnik Ltd. The latter is one of the two constituent companies of the Kolsite Group.
It manufactures lithium-ion batteries for a wide range of applications in the e-mobility sector, from electric bikes, two- and three-wheeler electric vehicles, electric cars, and electric passenger vehicles to light commercial electric vehicles and electric tractors.
Besides, it also manufactures lithium-ion batteries for electric forklifts, electric golf carts, and devices used in the marine environment.
The post Top 10 Lithium-ion Battery Manufacturing Companies in India in 2024 appeared first on ELE Times.
Baylin receives CDN$2.25m order from satellite broadcaster and services provider
Found this Telecommunications board
Submitted by /u/ElectroAmin
Component selection tool employs AI algorithms
An artificial intelligence (AI)-assisted hardware design platform enables engineers to find the right components for their design projects using machine learning and smart algorithms. It selects the ideal set of components while providing deliverables of architectural design, ECAD native schematics, bill of materials, footprints, and project information summary.
The CELUS design platform transforms technical requirements into schematic prototypes in less than an hour, allowing developers and engineers to move from concept to reality with unprecedented efficiency and precision. Moreover, with projects often comprised of anywhere from 200 to 1,000 individual components, it simplifies the complexities of electronic design and accelerates time to market for new products.
The design platform provides an automated way to transform technical requirements into schematic prototypes in record time. Source: CELUS
At a time when there is an increasing need for more efficient design processes, finding the right components for projects can be overwhelming and time-consuming. The CELUS platform streamlines the design process and provides real-time component recommendations that work.
“With more than 600 million components available to electronics designers, the task of identifying and selecting the ones right for any given project is at best a challenge,” said Tobias Pohl, co-founder and CEO of CELUS. “We developed the CELUS design platform to handle the heavy lifting and intricate details of product design to drive innovation and expand demand creation in a fraction of the time required of traditional approaches.”
“We were told that such a system was impossible, but we did it and are now expanding its reach to end users and component suppliers around the world,” Pohl added. CELUS aims to transform the $1.4 trillion component industry by aiding the circuit board design market through its unique design automation process.
While CELUS minimizes the time engineers spend identifying disparate component pieces, it also allows component suppliers to easily connect with design engineers for faster market integration and broader reach. Furthermore, this connection via engineering tools like CELUS enables component suppliers to reach developers and engineers who may not be accessible through traditional channels.
CELUS, based in Munich, Germany, is expanding the reach of its cloud-based design platform in the United States by setting up a U.S. headquarters in Austin, Texas. The company was founded by a team of mechanical, electrical, and aeronautical engineers and is backed by an advisory board of top industry experts.
Related Content
- 6 tips for choosing PCB components
- Trusting AI With Your Electronics Design Decisions
- AI-Powered Design Tool Simplifies Antenna Integration Challenge
- Proper layout and component selection control power-supply EMI
- Component selection and layout strategies for avoiding thermal EMF
The post Component selection tool employs AI algorithms appeared first on EDN.
Power management module I made
After a few years of copying and rerouting battery management designs for each project that required one, it got a bit tiring for me, so I wanted to make a small module that would cover a lot of use cases (for me at least). The primary goal was to provide a simple drop-in way to add power management features to projects, mainly on/off behavior using a switch. I got it all working using only interrupts, so the CPU sleeps most of the time for power saving. Anyway, it's all open source, so if you're into small 6-layer PCBs you can make one for yourself.
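The interrupt-plus-deep-sleep pattern described above is easy to illustrate. The following sketch is not the poster's actual firmware; it is a minimal illustration assuming an ATtiny85-class AVR with avr-libc, with placeholder pin assignments and a simple output toggle standing in for the on/off behavior:

```c
#include <avr/io.h>
#include <avr/interrupt.h>
#include <avr/sleep.h>

/* Placeholder pins: momentary switch on PB0, load-enable output on PB1 */
#define SWITCH_BIT PB0
#define LATCH_BIT  PB1

static volatile uint8_t switch_event;

ISR(PCINT0_vect) {        /* pin-change interrupt wakes the MCU          */
    switch_event = 1;     /* just flag the event; act on it after waking */
}

int main(void) {
    DDRB  |=  (1 << LATCH_BIT);   /* latch pin as output             */
    DDRB  &= ~(1 << SWITCH_BIT);  /* switch pin as input             */
    PORTB |=  (1 << SWITCH_BIT);  /* enable internal pull-up         */

    PCMSK |= (1 << PCINT0);       /* pin-change source on PB0        */
    GIMSK |= (1 << PCIE);         /* enable pin-change interrupts    */
    sei();

    set_sleep_mode(SLEEP_MODE_PWR_DOWN);

    for (;;) {
        if (switch_event) {
            switch_event = 0;
            PORTB ^= (1 << LATCH_BIT);   /* toggle the load on/off   */
        }
        cli();                           /* race-free sleep entry    */
        if (!switch_event) {
            sleep_enable();
            sei();
            sleep_cpu();                 /* sleep until an interrupt */
            sleep_disable();
        }
        sei();
    }
}
```

Debouncing and the actual load-switch control are omitted here; the point is only that the CPU spends nearly all of its time in power-down and wakes solely on the pin-change interrupt, which matches the behavior the poster describes.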