Feed aggregator

How will HBM4 impact the AI-centric memory landscape?

EDN Network - Fri, 07/12/2024 - 12:58

Just when Nvidia is prepping its Blackwell GPUs to use HBM3e memory, the JEDEC Solid State Technology Association has announced that the next version, HBM4, is near completion. HBM3e, an enhanced variant of the existing HBM3 memory, tops out at 9.8 Gbps per pin, but HBM4 is expected to exceed 10 Gbps.

HBM4, also an evolutionary step beyond the current HBM3 standard, further enhances data processing rates while maintaining essential features such as higher bandwidth, lower power consumption, and increased capacity per die and/or stack. Its features and capabilities are critical in applications that require efficient handling of large datasets and complex calculations, including generative artificial intelligence (AI), high-performance computing (HPC), high-end graphics cards, and servers.

For a start, HBM4 comes with a larger physical footprint as it introduces a doubled channel count per stack compared to HBM3. It also features different configurations that require various interposers to accommodate the differing footprints. Next, it will specify 24-Gb and 32-Gb layers with options for supporting 4-high, 8-high, 12-high and 16-high TSV stacks.
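The per-stack capacities these options imply are simple to work out. The sketch below tabulates them in Python; the layer densities and stack heights come from the spec discussion above, and the arithmetic is the only addition:

```python
# Per-stack HBM4 capacity for the layer densities (24 Gb, 32 Gb) and
# stack heights (4-, 8-, 12-, 16-high) named in the draft spec.
GBIT_PER_GBYTE = 8

def stack_capacity_gbyte(layer_gbit: int, height: int) -> float:
    return layer_gbit * height / GBIT_PER_GBYTE

for layer_gbit in (24, 32):
    for height in (4, 8, 12, 16):
        print(f"{layer_gbit} Gb x {height}-high = "
              f"{stack_capacity_gbyte(layer_gbit, height):g} GB per stack")
```

A 16-high stack of 32-Gb layers thus tops out at 64 GB per stack.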

Media reports suggest that JEDEC has eased HBM4 memory configurations by setting the package thickness for 12- and 16-layer stacks at 775 µm, as complexity rises with taller stacks. However, while HBM manufacturers such as Micron, SK hynix, and Samsung were poised to adopt hybrid bonding, the HBM4 design committee is reportedly of the view that hybrid bonding would increase pricing, which in turn would make HBM4-powered AI processors more expensive.

Hybrid bonding enables memory chip designers to stack dies more compactly by forming direct connections between layers instead of the micro-bumps used in conventional through-silicon via (TSV) stacking. However, at a 775-µm package thickness, hybrid bonding may not be needed for the HBM4 form factor.

For compatibility, the new spec will ensure that a single controller can work with both HBM3 and HBM4 if needed. The designers of the HBM4 spec have also reached an initial agreement on speed bins up to 6.4 Gbps with discussion ongoing for higher frequencies.
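Those speed bins translate directly into per-stack bandwidth. The sketch below assumes the doubled channel count yields a 2048-bit interface, double HBM3's 1024 bits; that width is an inference from the announcement, not a confirmed figure:

```python
# Per-stack bandwidth at a given per-pin speed bin, assuming a 2048-bit
# interface (double HBM3's 1024 bits, per the doubled channel count).
def stack_bandwidth_gbytes_per_s(pin_gbps: float,
                                 bus_width_bits: int = 2048) -> float:
    return pin_gbps * bus_width_bits / 8  # GB/s

print(stack_bandwidth_gbytes_per_s(6.4))   # the initially agreed bin
print(stack_bandwidth_gbytes_per_s(10.0))  # the 10+ Gbps target
```

At the agreed 6.4-Gbps bin that works out to roughly 1.6 TB/s per stack; at 10 Gbps, about 2.5 TB/s.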


The post How will HBM4 impact the AI-centric memory landscape? appeared first on EDN.

Absolute EMS and Ventiva: A Winning Partnership in Advanced Thermal Management

ELE Times - Fri, 07/12/2024 - 10:43

Absolute EMS, an electronics contract manufacturer, has partnered with Ventiva, a leading company in active cooling solutions for electronic devices, to build their revolutionary Ionic Cooling Engine (ICE). ICE is a cutting-edge cooling technology that promises to redefine thermal management in electronics. This collaboration highlights Absolute EMS’s capabilities in precision manufacturing and their commitment to providing exceptional customer service, while showcasing Ventiva’s innovative technology.

Established in 1996 and headquartered in Santa Clara, CA, Absolute EMS has built a reputation as a trusted provider of turnkey contract manufacturing services. Specializing in the entire lifecycle of product development, from New Product Introduction (NPI) to end-of-life, Absolute EMS ensures the highest quality and precision in every product. Their state-of-the-art facility offers touchless manufacturing, automated first article inspection, and in-line inspection capabilities such as 3D solder paste inspection (SPI), AOI, and X-ray technology, upholding stringent standards of excellence. “The product that Ventiva is bringing to market is a leading-edge product that will change the industry,” stated Doug Dow, COO at Absolute EMS. “We’re proud to create top-tier manufacturing processes to support their success. Our commitment to our customer relationships, excellence and innovation enables us to meet the rigorous demands of this advanced technology.”

Absolute EMS’s certifications, including ISO 13485:2016, AS9100 Rev D, and ISO 9001:2015, demonstrate their dedication to high reliability manufacturing. Their expertise spans various industries such as medical, military, industrial, networking, and engineering, providing customized quality reporting, device history record keeping, and traceability to the component level.

Ventiva, based in Silicon Valley, is known for its groundbreaking Ionic Cooling Engine (ICE®) technology. Developed over a 12-year period, ICE represents a quantum leap in thermal management by moving air without moving parts, noise, or vibration. Utilizing electrohydrodynamic (EHD) flow, ICE generates a potent “solid-state” cooling force suitable for systems with up to 30 W of thermal design power (TDP).

“Our technology addresses the limitations of conventional cooling solutions in laptops, handheld devices and other high-performance electronics. ICE provides a silent, ultra-compact, and vibration-free alternative for the AI enabled world,” explained Tim Lester, COO at Ventiva.

The partnership between Absolute EMS and Ventiva is a testament to the synergy between advanced manufacturing capabilities and cutting-edge technology. Absolute EMS’ role in manufacturing Ventiva’s ICE technology involves not only creating the electronics board that powers the device but also producing the unique blower component—a task requiring exceptional precision and flexibility.


“We chose Absolute EMS for their flexibility and willingness to help us in a new category where nobody has manufacturing experience,” added Lester. “Their support has been crucial in speeding up our production iterations and ensuring high-quality outputs.”

As the demand for more efficient, silent, and compact cooling solutions grows, the collaboration between Absolute EMS and Ventiva is poised to make significant impacts in the electronics industry. Ventiva’s ICE technology is set to replace traditional fans in various applications, starting with laptops and eventually extending to handheld devices, VR headsets, and more.

“We are excited about the potential of ICE technology and the role it will play in future electronic devices,” added Doug Dow. “Our partnership with Ventiva exemplifies our commitment to supporting innovative technologies and helping our customers succeed in competitive markets.”

The collaboration between Absolute EMS and Ventiva showcases the best of both worlds: advanced manufacturing capabilities and groundbreaking thermal management technology. Together, they are set to revolutionize the way electronic devices are cooled, paving the way for quieter, more efficient, and more compact electronics.


Microchip Unveils Industry’s Highest Performance 64-bit HPSC Microprocessor (MPU) Family for a New Era of Autonomous Space Computing

ELE Times - Fri, 07/12/2024 - 09:59

New technology ecosystem is also launched as Microchip collaborates with over a dozen system and software partners to accelerate PIC64-HPSC adoption

The world has changed dramatically in the two decades since the debut of what was then considered a trail-blazing space-grade processor used in NASA missions such as the comet-chasing Deep Impact spacecraft and the Mars Curiosity rover. A report released by the World Economic Forum estimates that the space hardware and services industry will grow at a 7% CAGR, from $330 billion in 2023 to $755 billion by 2035. To support a diverse and growing global space market with a rapidly expanding variety of computational needs, including more autonomous applications, Microchip Technology has launched the first devices in its planned family of PIC64 High-Performance Spaceflight Computing (PIC64-HPSC) microprocessors (MPUs).

Unlike previous spaceflight computing solutions, the radiation- and fault-tolerant PIC64-HPSC MPUs, which Microchip is delivering to NASA and the broader defense and commercial aerospace industry, integrate widely adopted RISC-V CPUs augmented with vector-processing instruction extensions to support Artificial Intelligence/Machine Learning (AI/ML) applications. The MPUs also offer a suite of capabilities and industry-standard interfaces and protocols not previously available for space applications. A growing ecosystem of partners is being assembled to expedite the development of integrated system-level solutions, featuring Single-Board Computers (SBCs), space-grade companion components, and a network of open-source and commercial software partners.

“This is a giant leap forward in the advancement and modernization of the space avionics and payload technology ecosystem,” said Maher Fahmi, corporate vice president, Microchip Technology’s communications business unit. “The PIC64-HPSC family is a testament to Microchip’s longstanding spaceflight heritage and our commitment to providing solutions built on industry-leading technologies and a total systems approach to accelerate our customers’ development process.”

The Radiation-Hardened (RH) PIC64-HPSC RH is designed to give autonomous missions the local processing power to execute real-time tasks such as rover hazard avoidance on the Moon’s surface, while also enabling long-duration, deep-space missions like Mars expeditions requiring extremely low-power consumption while withstanding harsh space conditions. For the commercial space sector, the Radiation-Tolerant (RT) PIC64-HPSC RT is designed to meet the needs of Low Earth Orbit (LEO) constellations where system providers must prioritize low cost over longevity, while also providing the high fault tolerance that is vital for round-the-clock service reliability and the cybersecurity of space assets.

PIC64-HPSC MPUs offer a variety of capabilities, many of which were not previously available for space computing applications, including:
  • Space-grade 64-bit MPU architecture: Includes eight SiFive RISC-V X280 64-bit CPU cores supporting virtualization and real-time operation, with vector extensions that can deliver up to 2 TOPS (int8) or 1 TFLOPS (bfloat16) of vector performance for implementing AI/ML processing for autonomous missions.
  • High-speed network connectivity: Includes a 240 Gbps Time Sensitive Networking (TSN) Ethernet switch for 10 GbE connectivity. Also supports scalable and extensible PCIe Gen 3 and Compute Express Link (CXL) 2.0 with x4 or x8 configurations and includes RMAP-compatible SpaceWire ports with internal routers.
  • Low-latency data transfers: Includes Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCEv2) hardware accelerators to facilitate low-latency data transfers from remote sensors without burdening compute performance, which maximizes compute capabilities by bringing data close to the CPU.
  • Platform-level defense-grade security: Implements defense-in-depth security with support for post-quantum cryptography and anti-tamper features.
  • High fault-tolerance capabilities: Supports Dual-Core Lockstep (DCLS) operation, WorldGuard hardware architecture for end-to-end partitioning and isolation, and an on-board system controller for fault monitoring and mitigation.
  • Flexible power tuning: Includes dynamic controls to balance the computational demands required by the multiple phases of space missions with tailored activation of functions and interfaces.

“Microchip’s PIC64-HPSC family replaces the purpose-built, obsolescence-prone solutions of the past with a high-performance and scalable space-grade compute processor platform supported by the company’s vibrant and growing development ecosystem,” said Kevin Kinsella, Architect – System Security Engineering with Northrop Grumman. “This innovative and forward-looking architecture integrates the best of the past 40-plus years of processing technology advances. By uniquely addressing the three critical areas of reliability, safety and security, we fully expect the PIC64-HPSC to see widespread adoption in air, land and sea applications.”

In 2022, NASA selected Microchip to develop a High-Performance Spaceflight Computing processor that could provide at least 100 times the computational capacity of current spaceflight computers. This key capability would advance future space missions, from planetary exploration to lunar and Mars surface missions. The PIC64-HPSC is the result of that partnership. Representatives from NASA, Microchip and industry leaders like Northrop Grumman will share insights about the HPSC technology and ecosystem at the IEEE Space Compute Conference 2024, July 15–19 in Mountain View, California:

  • Conference Keynote – Dr. Prasun Desai, Deputy Associate Administrator, Space Technology Mission Directorate, NASA: Dr. Desai will speak about the agency’s strategy for advanced computing and investment in HPSC technology.
  • HPSC Workshop, “HPSC: Redefine What’s Possible for the Future of Space Computing”: Prasun Desai will join Microchip and JPL speakers to provide an overview of HPSC program and platform. Invited aerospace industry partner Kevin Kinsella from Northrop Grumman will also share insights on the significance of HPSC for spaceflight computing. A Q&A session will follow.

Microchip’s inaugural PIC64-HPSC MPUs were launched in tandem with the company’s PIC64GX MPUs that enable intelligent edge designs in the industrial, automotive, communications, IoT, aerospace and defense segments. With the launch of its PIC64GX MPU family, Microchip has become the only embedded solutions provider actively developing a full spectrum of 8-, 16-, 32- and 64-bit solutions.

Microchip has a broad portfolio of solutions designed for the aerospace and defense market including processing with Radiation-Tolerant (RT) and Radiation-Hardened (RH) MCUs, FPGAs and Ethernet PHYs, power devices, RF products, timing, as well as discrete components from bare die to system modules. Additionally, Microchip offers a wide range of components on the Quality Products List (QPL) to better serve its customers.

Comprehensive Ecosystem

Microchip’s new PIC64-HPSC MPUs will be supported by a comprehensive space-grade ecosystem and innovation engine that encompasses flight-capable, industry-standard SBCs, a community of open-source and commercial software partners and the implementation of common commercial standards to help streamline and accelerate the development of system-level integrated solutions. Early members in the ecosystem include: SiFive, Moog, IDEAS-TEK, Ibeos, 3D PLUS, Micropac, Wind River, Linux Foundation, RTEMS, Xen, Lauterbach, Entrust and many more. For information visit Microchip’s PIC64-HPSC MPU ecosystem partners webpage.

Microchip will also offer a comprehensive PIC64-HPSC evaluation platform that incorporates the MPU, an expansion card and a variety of peripheral daughter cards.

Pricing and Availability

PIC64-HPSC samples will be available to Microchip’s early access partners in 2025. For additional information, please contact a Microchip sales representative.


A piece of history (50 bytes of magnetic memory)

Reddit:Electronics - Thu, 07/11/2024 - 21:56

Someone today showed me this and I wanted to share it somewhere. Those are 50 bytes of memory and you can count every single bit! My mind was blown when he took this out of a box. I don't know if it's the right place tho 😬

submitted by /u/PDFriender

Toolset bolsters image sensor development

EDN Network - Thu, 07/11/2024 - 21:21

ST’s hardware kits, evaluation camera modules, and software ease development with its BrightSense global-shutter image sensors. The sensors feature a 3D-stacked construction, which results in a very small die area. This allows for integration in space-limited applications, especially within the final optical module. Additionally, their MIPI-CSI-2 interface makes them well-suited for embedded vision and edge AI devices, including industrial robots, AR/VR equipment, traffic monitoring, and medical devices.

Evaluation camera modules integrate a BrightSense image sensor, lens holder, lens, and plug-and-play flex connector that allows easy swapping of sensors. The modules offer a choice of lens options and come in sizes as small as 5 mm². Also joining this image sensor ecosystem are hardware kits that enable developers to integrate the sensors with various desktop and embedded computing platforms.

Complementary software tools, available for free download on ST’s website, include a PC-based GUI and Linux drivers. These tools facilitate integration with common processing platforms, such as STM32MP microprocessors.

The BrightSense global-shutter family comprises the VD55G0, VD55G1, and VD56G3 monochrome sensors (0.38 Mpixel to 1.5 Mpixel), as well as the color VD66GY (1.5 Mpixel). Their high sensitivity enhances low-light performance and permits fast image capture without distortion.

BrightSense image sensors and supporting development tools are in production now.

STMicroelectronics

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.



Current sensors improve design efficiency

EDN Network - Thu, 07/11/2024 - 21:21

Allegro’s two new magnetic current sensors enhance system efficiency and protection compared to discrete shunt-based current sensing circuits. The ACS37220 measures current up to 200 A, while the ACS37041 measures current up to 30 A. Both Hall effect sensors are designed for applications with isolation voltage requirements below 100 V. Additionally, the ACS37041 is anticipated to be the industry’s smallest leaded magnetic current sensor.

Existing shunt solutions need multiple components, occupy significant board space, and often require extra PCB layers and heatsinks to maintain thermal performance, adding weight, size, and design complexity. The ACS37220 and ACS37041 address these challenges by providing a smaller footprint, higher efficiency, and simpler integration.

The current sensors integrate the functions of a shunt resistor, shunt amplifier, and other passive components into a single, compact package. Housed in a 4×4-mm QFN package, the ACS37220 has low internal conductor resistance of 0.1 mΩ, ensuring minimal power loss and enabling it to withstand high inrush currents. The ACS37041, with a higher conductor resistance of >1 mΩ, fits into a compact 5-pin SOT23-W package.

The ACS37220 current sensor is available now through Allegro’s distributor network. Engineering samples of the ACS37041 pre-release sensor are available upon request.

ACS37220 product page 

ACS37041 product page 

Allegro Microsystems 




PC-based scopes gain 10Base-T1S decoder

EDN Network - Thu, 07/11/2024 - 21:20

All PicoScope oscilloscopes from Pico Technology now include a serial decoder for the 10Base-T1S automotive Ethernet standard, bringing the total number of serial protocol decoders available with the free PicoScope 7 software to 40. The software is compatible with all current PicoScope models, as well as legacy models sold over the past seven years or more.

PicoScope 7 software is compatible with Windows, macOS, and Linux, offering a comprehensive suite of automotive decoders such as CAN, CAN XL, FlexRay, LIN, and now 10Base-T1S. In addition to these, the automotive version of PicoScope 7 introduces support for new vehicle and powertrain types and improved guided tests with waveform library linking.

Pico’s noise, vibration, and harshness (NVH) diagnostics application, PicoDiagnostics NVH, now supports the worldwide harmonized on-board diagnostics (WWH-OBD) protocol. Complementing the already-supported J1939 communication protocol, the app now provides an additional means to acquire speed information from heavy-duty and off-highway vehicles.

With support for 27 languages, PicoScope 7 software allows easy global collaboration. PicoScope 7 is free to download on Pico’s website.

PicoScope 7 product page

Pico Technology




Sensor enables ghost-free HDR imaging

EDN Network - Thu, 07/11/2024 - 21:20

The OG0TC global-shutter image sensor from Omnivision brings the company’s DCG high dynamic range technology to AR/VR/MR tracking cameras. Intended for eye and face tracking, the backside-illuminated sensor’s on-chip single-exposure DCG extends dynamic range up to 140 dB, ensuring images are free of ghosting and motion artifacts.

Based on a stacked-die construction, the OG0TC sensor is just 1.64×1.64 mm. It offers a resolution of 400×400 pixels with a pixel size of 2.2 µm in a 1/14.46-in. optical format. This small, low-power CMOS sensor is designed primarily for inward-facing tracking cameras. Its small form factor is key to AR/VR designs, as multiple cameras are required for tracking all aspects of the face (eyes, brows, lips).

Ultra-low power consumption is crucial for AR/VR devices. The OG0TC image sensor cuts power usage by over 40% compared to the previous-generation OG0TB, while maintaining pin-to-pin compatibility for easy upgrades and adding features like DCG technology, according to Devang Patel, Marketing Director of IoT/Emerging, Omnivision.

Offered in a 16-pin chip-scale package, the OG0TC global-shutter image sensor is now available for sampling and in mass production.

OG0TC product page

Omnivision 




64-bit MPUs advance space computing

EDN Network - Thu, 07/11/2024 - 21:20

Microchip has launched the first devices in its PIC64 High-Performance Spaceflight Computing (PIC64-HPSC) family of microprocessors. These multicore 64-bit RISC-V processors, which Microchip is delivering to NASA and the broader defense and commercial aerospace industry, employ vector-processing instruction extensions to support AI and ML. They also offer features and industry-standard interfaces not previously available for space applications.

Radiation-hardened PIC64-HPSC RH MPUs provide autonomous missions with the local processing power needed to execute real-time tasks. They can be used for rover hazard avoidance on the moon’s surface, as well as long-duration deep-space missions like Mars expeditions.

Radiation-tolerant PIC64-HPSC RT MPUs are tailored for the commercial space sector, particularly Low Earth Orbit (LEO) constellations. They balance cost-effectiveness with high fault tolerance crucial for round-the-clock service reliability and space asset cybersecurity.

The space-grade architecture of these processors includes eight SiFive RISC-V X280 64-bit CPU cores. They support virtualization and real-time operation, with vector extensions capable of delivering up to 2 TOPS (Int8) or 1 TFLOPS (Bfloat16) for autonomous missions.

PIC64-HPSC devices also provide high-speed network connectivity, low-latency data transfers, and platform-level defense-grade security. Dynamic controls manage computational demands across different phases of space missions, activating functions and interfaces as needed.

Samples of the PIC64-HPSC processors will be available to early access partners in 2025. For additional information, contact a Microchip sales representative.

PIC64-HPSC product page

Microchip Technology 




Automate battery management system (BMS) test with a digital twin

EDN Network - Thu, 07/11/2024 - 16:27

A battery management system (BMS) monitors and controls batteries in vehicles such as more-electric aircraft and electric cars. It needs to undergo rigorous tests under nominal and extreme conditions to prove its quality and integrity.

Testing with emulated battery cells is beneficial because one can safely test all kinds of conditions quickly and repeatedly without risking precious hardware. This type of hardware-in-the-loop testing simplifies quality assurance and keeps up with the pace of innovation.

Batteries are crucial for electrifying drive trains in vehicles or actuators in aircraft and ships, and BMS is a vital piece of the puzzle for controlling and monitoring the battery pack. The BMS ensures safe battery operation, effective use of its capacity, and long service life.

BMS is used in cars, aircraft, energy storage systems, and consumer electronics, among other things. It typically comprises a battery management unit (BMU), a cell monitoring unit (CMU), and a power distribution unit (PDU).

Figure 1 BMS ensures safe battery operation, effective use of its capacity, and long service life. Source: Speedgoat GmbH

The battery management unit is the main controller; connected to the cell monitoring and power distribution units, it monitors the overall state of charge (SOC) as well as cell voltages and temperatures. The cell monitoring unit is linked to the battery cells; the cell modules that form a battery pack each have a CMU to regulate the charge and discharge of individual cells and to monitor temperature and voltage.

The power distribution unit is connected to all components that draw power from the pack or feed it back in—in an electric vehicle (EV), this could be the charging system and the motor, for example.
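The BMU/CMU/PDU split described above can be sketched as a simple data model. This is an illustrative outline only; the class names and the 60 °C threshold are invented for the example, not taken from Speedgoat's or any vendor's API:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CellReading:
    voltage_v: float
    temp_c: float

@dataclass
class CMU:
    """Cell monitoring unit: reports per-cell voltage and temperature
    for the cells of one module."""
    cells: List[CellReading]

    def max_temp_c(self) -> float:
        return max(c.temp_c for c in self.cells)

@dataclass
class BMU:
    """Battery management unit: aggregates CMU data into a pack view."""
    cmus: List[CMU]
    temp_limit_c: float = 60.0  # illustrative threshold

    def pack_voltage_v(self) -> float:
        return sum(c.voltage_v for cmu in self.cmus for c in cmu.cells)

    def overtemp(self) -> bool:
        return any(cmu.max_temp_c() > self.temp_limit_c for cmu in self.cmus)

# Two 4-cell modules at 3.7 V and 25 degC per cell
pack = BMU(cmus=[CMU([CellReading(3.7, 25.0)] * 4) for _ in range(2)])
print(pack.pack_voltage_v(), pack.overtemp())
```

In a real system the BMU would act on these aggregates, for example by opening the PDU's contactors on an overtemperature condition.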

Testing battery management system (BMS)

Rigorous testing ensures that BMS fulfills its requirements, such as optimally distributing the current between the cells during charging. The BMS is checked in the nominal range and extreme situations, for example, when cells overheat, a signal fails, or a short circuit occurs. This way, it’s possible to test how BMS reacts in such cases and ensure correct functioning.

Testing a BMS is a complex task that comes with various challenges. The BMS houses several controllers, processes signals from distributed sensors, and is linked to numerous systems such as the powertrain. Testing all functionalities, configurations, and states takes significant effort and is scarcely achievable with real batteries.

As batteries age, it’s also vital to reproduce conditions repeatedly to test critical algorithms such as cell balancing and state of charge or state of health (SoH) estimation. In addition, different teams are usually responsible for the individual components during development, and their engineers are not always available.
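Coulomb counting is the textbook form of this SoC estimation, and it illustrates why reproducible conditions matter: any current-measurement error integrates over time. A minimal sketch, where the 50 Ah capacity and 99% coulombic efficiency are illustrative values, not figures for any particular cell:

```python
def update_soc(soc: float, current_a: float, dt_s: float,
               capacity_ah: float = 50.0, coulombic_eff: float = 0.99) -> float:
    """Advance state of charge by integrating current over one time step.

    Positive current = discharge. The capacity and coulombic efficiency
    are illustrative, not tied to any particular cell.
    """
    delta_ah = current_a * dt_s / 3600.0
    if current_a < 0:          # charging: not all charge is stored
        delta_ah *= coulombic_eff
    soc -= delta_ah / capacity_ah
    return min(max(soc, 0.0), 1.0)

# Discharging a full 50 Ah cell at 50 A for one hour empties it.
print(update_soc(1.0, 50.0, 3600.0))  # 0.0
```

A hardware-in-the-loop rig can replay exactly the same current profile into an estimator like this on every run, which is impossible with an aging physical pack.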

Batteries, BMS and infrastructures are also evolving, so the providers of the systems must keep up with this and react swiftly to new requirements. Finally, testing with real hardware can be dangerous; in the worst case, batteries could explode due to over-voltages or extreme temperatures.

BMS testing with digital twins

These challenges can be solved using a battery’s digital twin. A battery cell emulator can mimic the behavior of the battery by precisely emulating voltages, current levels, and temperatures. It can represent various battery pack architectures and integrate seamlessly with standard test frameworks.

Figure 2 The Battery Cell Emulator (BCE) mimics the behavior of the battery by precisely emulating voltages, current levels, and temperatures. Source: Speedgoat GmbH

Taking into account battery technology and chemistry, age, and operating temperatures, the battery cell emulator can accommodate all kinds of battery models. Tests can be conducted swiftly and safely in regular operational conditions and under faulty conditions.

The same communication protocols are used when interfacing with the actual batteries. Testing with a digital twin also facilitates testing the rest of the system, such as power distribution and charging components, motor drives and fuel cells.
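Under the hood, an emulator drives its output terminals from a cell model. The sketch below is the simplest such model, a first-order "Rint" equivalent circuit with a linear open-circuit-voltage curve; the 3.0 V to 4.2 V range and 2 mΩ resistance are illustrative, and production emulators use far richer chemistry-, age-, and temperature-dependent models, as noted above:

```python
def terminal_voltage_v(soc: float, current_a: float,
                       r_internal_ohm: float = 0.002) -> float:
    """Terminal voltage of a minimal Rint cell model.

    Open-circuit voltage is modeled as linear from 3.0 V (empty) to
    4.2 V (full); positive current = discharge, which drops the terminal
    voltage by I*R. All figures are illustrative.
    """
    ocv = 3.0 + 1.2 * soc
    return ocv - current_a * r_internal_ohm

print(terminal_voltage_v(1.0, 0.0))    # fully charged, at rest
print(terminal_voltage_v(0.5, 100.0))  # sag under a 100 A discharge
```

Fault injection then becomes a matter of perturbing the model, for example forcing the returned voltage high to emulate an over-voltage cell.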

It must be possible to reproduce the same conditions across tests to reliably verify the controllers’ behavior. In addition, a flexible test infrastructure that lets engineers continuously test functionalities and changes throughout the development process is essential for meeting novel requirements. Furthermore, covering many scenarios, including complex edge cases, is crucial to achieving complete test coverage.

Test cases are usually defined and tracked in software. Automated test procedures are important for repeating tests and comparing results efficiently. Requirements are typically managed in Simulink’s dedicated toolboxes, such as Simulink Test, or ASAM XIL-compatible third-party software tools.

The advantages of digital testing

For testing the BMS controller and its interactions with various components, the behavior of the actual battery can be precisely emulated with a digital twin. Many insights are available early, allowing the engineers to adjust designs and functions when changes are still easy to implement. Also, a battery’s digital twin does justice to fast-changing trends—it’s more flexible than hardware. It can be configured seamlessly to adapt to new conditions.

Additionally, with the digital twin, components can be tested before assembly. This saves time, and complications or design errors can be found early in the development process, improving the quality and speed of development. Likewise, development and testing costs can be saved because corrections are made at early stages rather than late in development. Finally, risk-free testing of the BMS is possible in extreme events such as collisions, faulty cells, or over-voltage.

Therefore, testing with digital twins of batteries is suitable for engineers who test and validate battery management systems and aim for a continuous and automated workflow. An attractive feature for users is that they can carry out all tests in one software environment (Simulink).

The battery model is developed in Simulink. Afterward, the tests can be performed using the same model with the hardware in the background. The flexible test infrastructure and toolboxes enable continuous, automated testing along the defined requirements.

Figure 3 All automated BMS tests can be performed in a single software environment (Simulink). Source: Speedgoat GmbH

As battery management systems ensure safe battery operation, they need to be thoroughly tested. A digital twin allows engineers to precisely emulate the batteries a BMS manages and to safely test all kinds of conditions quickly and repeatedly without risking hardware damage. Such a test system is suitable for engineers in various industries who need to conduct BMS tests continuously and automatically.

Nadja Müller is a freelance journalist specializing in digitalization.



The post Automate battery management system (BMS) test with a digital twin appeared first on EDN.

Transforming Devices into Smart Innovations: NeoMesh Sensor Modules Power Endrich Bauelemente’s Smart Fridge Concept

ELE Times - Thu, 07/11/2024 - 13:45

According to a recent Statista report, global spending on Internet of Things (IoT) products reached $805 billion in 2023, with expectations for continued growth in the coming years. For several years, Endrich Bauelemente GmbH, the German distributor for NeoCortec, has been engaged in developing smart IoT-related products, such as a concept for a smart fridge. They have incorporated NeoCortec’s ultra-low-power, scalable NeoMesh wireless sensor modules into their solutions.

One major advantage of a mesh solution like NeoMesh technology is that it requires only one central gateway to collect and transmit all sensor data to the Cloud. Connecting different parts of the network does not require repeaters or additional gateways. Zoltan Kiss, Head of the R&D Department at Endrich Bauelemente, explains, “By integrating NeoMesh sensor modules into our solution, we can easily capture necessary data and wirelessly transmit it to our own IoT ecosystem or any other suitable cloud service.” Kiss adds, “With its extremely low power consumption and ability to establish scalable local wireless networks, NeoMesh is the ideal product for our smart IoT solution.”

In partnership with NeoCortec, Endrich Bauelemente GmbH has been actively developing a smart fridge concept to enable manufacturers to incorporate intelligent features, allowing end users to monitor various parameters of their refrigerators via a mobile application.

The NeoMesh wireless sensor modules facilitate seamless and efficient integration of multiple IoT functionalities. These functionalities include monitoring temperature and humidity inside the refrigerator, as well as tracking interior light brightness and door status. Data collection on the frequency and duration of door openings can provide valuable insights for marketing or commercial purposes, especially in settings like gas stations or retail stores. One of Endrich’s initial clients, Audax Electronics in Brazil, has successfully integrated NeoMesh modules into the LED lighting units of their fridges to monitor temperature and door status. These LED units can be installed in both smart-capable and conventional refrigerators.

Key requirements for IoT sensor modules in smart devices include compact size, easy installation, independence from electrical and wired communication networks, and straightforward commissioning. Battery-powered, wireless communication technology like NeoMesh, with all necessary sensors integrated into the module, allows for installation without a specialist. “This user-friendliness is what makes our NeoMesh technology so appealing,” comments Thomas Steen Halkier, CEO of NeoCortec. Sensor measurements are wirelessly transmitted to the appropriate cloud service for data analysis via the self-forming mesh network. This eliminates the need for extensive cabling, reducing infrastructure costs and providing greater flexibility in network deployment, as sensor nodes can be installed anywhere. “Our NeoMesh technology is especially suited for wireless sensor networks where sensors don’t need to transmit data frequently and where the data payload size is small,” adds Halkier.

The NeoMesh wireless communication protocol is supported by a variety of fully integrated, pre-certified, ultra-low-power, bi-directional sensor modules. These modules come in versions for different frequency bands (868 MHz, 915 MHz, and 2.4 GHz), all preloaded with the proprietary NeoCortec protocol stack.

The post Transforming Devices into Smart Innovations: NeoMesh Sensor Modules Power Endrich Bauelemente’s Smart Fridge Concept appeared first on ELE Times.

What’s next in on-device generative AI?

ELE Times - Thu, 07/11/2024 - 13:27

Upcoming generative AI trends and Qualcomm Technologies’ role in enabling the next wave of innovation on-device

The generative artificial intelligence (AI) era has begun. Generative AI innovations continue at a rapid pace and are being woven into our daily lives to offer enhanced experiences, improved productivity and new forms of entertainment. So, what comes next? This blog post explores upcoming trends in generative AI, advancements that are enabling generative AI at the edge and a path to humanoid robots. We’ll also illustrate how Qualcomm Technologies’ end-to-end system philosophy is at the forefront of enabling this next wave of innovation on-device.

Upcoming trends and why on-device AI is key

Generative AI capabilities continue to increase in several dimensions.

Transformers, with their ability to scale, have become the de facto architecture for generative AI. An ongoing trend is transformers extending to more modalities, moving beyond text and language to enable new capabilities. We’re seeing this trend in several areas, such as in automotive for multi-camera and light detection and ranging (LiDAR) alignment for bird’s-eye-view or in wireless communications where global position system (GPS), camera and millimeter wave (mmWave) radio frequency (RF) are combined using transformers to improve mmWave beam management.

Another major trend is generative AI capabilities continuing to increase in two broad categories:

  • Modality and use case
  • Capability and key performance indicators (KPIs)

For modality and use cases, we see improvements in voice user interface (UI), large multimodal models (LMMs), agents and video/3D. For capabilities and KPIs, we see improvements for longer context window, personalization and higher resolution.

In order for generative AI to reach its full potential, bringing the capabilities of these trends to edge devices is essential for improved latency, pervasive interaction and enhanced privacy. As an example, enabling humanoid robots to interact with their environment and humans in real time requires on-device processing for immediacy and scalability.

Advancements in edge platforms for generative AI

How can we bring more generative AI capabilities to edge devices?

We are taking a holistic approach to advance edge platforms for generative AI through research across multiple vectors.

We aim to optimize generative AI models and efficiently run them on hardware through techniques such as distillation, quantization, speculative decoding, efficient image/video architectures and heterogeneous computing. These techniques can be complementary, which is why it is important to attack the model optimization and efficiency challenge from multiple angles.

Consider quantization for large language models (LLMs). LLMs are generally trained in floating-point 16 (FP16). We’d like to shrink an LLM for increased performance while maintaining accuracy. For example, reducing the FP16 model to 4-bit integer (INT4), reduces the model size by four times. That also reduces memory bandwidth, storage, latency and power consumption.

Quantization-aware training with knowledge distillation helps to achieve accurate 4-bit LLMs, but what if we need an even lower number of bits per value? Vector quantization (VQ) can help with this. VQ shrinks models while maintaining desired accuracy. Our VQ method achieves 3.125 bits per value at similar accuracy as INT4 uniform quantization, enabling even bigger models to fit within the dynamic random-access memory (DRAM) constraints of edge devices.
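The memory arithmetic behind these claims is straightforward: weight storage scales linearly with bits per value. The sketch below compares the FP16, INT4, and 3.125-bit footprints; the 7-billion-parameter model size is an illustrative assumption, not from the article.

```python
def weights_gib(num_params: float, bits_per_value: float) -> float:
    """Approximate weight storage in GiB (weights only, ignoring
    activations and the KV cache)."""
    return num_params * bits_per_value / 8 / 2**30

params = 7e9  # hypothetical 7B-parameter LLM

fp16 = weights_gib(params, 16)
int4 = weights_gib(params, 4)
vq = weights_gib(params, 3.125)  # vector quantization, per the article

print(f"FP16: {fp16:.2f} GiB")
print(f"INT4: {int4:.2f} GiB ({fp16 / int4:.0f}x smaller)")
print(f"VQ:   {vq:.2f} GiB")
```

At 3.125 bits per value, the same parameter budget shrinks by a further ~22% relative to INT4, which is what lets a larger model fit the same DRAM envelope.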

Another example is efficient video architecture. We are developing techniques to make generative video methods efficient for on-device AI. As an example, we optimized FAIRY, a video-to-video generative AI technique. In the first stage of FAIRY, states are extracted from anchor frames. In the second stage, video is edited across the remaining frames. Example optimizations include: cross-frame optimization, efficient instructPix2Pix and image/text guidance conditioning.

A path to humanoid robots

We have expanded our generative AI efforts to study LLMs and their associated use cases, and in particular the incorporation of vision and reasoning for large multimodal models (LMMs). Last year, we demonstrated a fitness coach demo at CVPR 2023, and recently investigated the ability of LMMs to reason across more complex visual problems. In the process, we achieved state-of-the-art results to infer object positions in the presence of motion and occlusion.

However, open-ended, asynchronous interaction with situated agents is an open challenge. Most solutions for LLMs right now have basic capabilities:

  • Limited to turn-based interactions about offline documents or images.
  • Limited to capturing momentary snapshots of reality in a Visual Question Answering-style (VQA) dialogue.

We’ve made progress with situated LMMs, where the model is able to process a live video stream in real time and dynamically interact with users. One key innovation was the end-to-end training for situated visual understanding — this will enable a path to humanoids.

More on-device generative AI technology advancements to come

Our end-to-end system philosophy is at the forefront of enabling this next wave of innovation for generative AI at the edge. We continue to research and quickly bring new techniques and optimizations to commercial products. We look forward to seeing how the AI ecosystem leverages these new capabilities to make AI ubiquitous and to provide enhanced experiences.

DR. JOSEPH SORIAGA
Senior Director of Technology, Qualcomm Technologies, Inc.

PAT LAWLOR
Director, Technical Marketing, Qualcomm Technologies, Inc.

 

The post What’s next in on-device generative AI? appeared first on ELE Times.

Advanced Material Handling: The Flexible Transport System (FTS) from Rexroth

ELE Times - Thu, 07/11/2024 - 12:54

The Flexible Transport System (FTS) from Bosch Rexroth is an innovative solution designed for the precise transport and positioning of materials and workpieces. It is a magnetically propelled transport platform designed to enhance pallet speed and positioning accuracy, especially for heavier loads like battery modules.

Traditional rollers, chains, or belt systems often fall short in demanding applications, but the FTS overcomes these limitations with exceptional accuracy, programmable movements, and superior speed. The non-contact drive concept ensures particle-free transport, even in vacuum environments.

FTS Features at a Glance
  • Extremely Precise

The FTS achieves remarkable positioning accuracy and high repeatability, thanks to its advanced sensors placed between individual motors and a sophisticated motion control system. This precision is critical in applications requiring meticulous material handling.

  • Individually Scalable

The system’s scalability allows it to meet various production size requirements. It can be easily expanded with multiple motors to accommodate longer production lines. The carriers are designed to handle both heavy and light objects with equal precision, making the FTS a versatile solution for diverse industries.

  • Flexibly Adaptable

Offering maximum flexibility, the FTS allows for the free programmability of all carrier movements, including I/O synchronization if needed. This adaptability facilitates quick conversions to different products, ensuring seamless transitions and reducing downtime. The mechanical components of the system are designed to integrate effortlessly with various machines.

System Description

The FTS provides the flexibility to build a system tailored to specific needs, whether as a standalone unit or integrated into an existing production line. The open system design enables carriers to transition smoothly between external conveyor belts and the FTS. Additionally, robots can be strategically placed along the tracks to perform assembly tasks in conjunction with the FTS. It features interfaces in C/C++ or PLC with standard Ethernet, and since the software operates on a standard PC, other interfaces can also be incorporated.

Hardware

The FTS’s hardware is based on the embedded control YM, offering unparalleled design freedom. This next-generation hardware is engineered to handle complex operations, and its open software architecture supports the creation of customized motion solutions that integrate seamlessly into existing automation landscapes. The compact modular multi-axis controller contains all necessary control and drive hardware, facilitating precise control and high-speed operations.

High-level programming languages enable the development of intricate motion control programs, while high-speed control loops with 32 kHz bandwidth ensure pinpoint accuracy and performance. This hardware configuration supports complex, high-precision tasks with ease.

Software

The core of the FTS’s powerful and flexible technology is its intelligent motion control system from Rexroth. This system combines high-performance hardware capable of managing complex processes with open software structures that allow for customized movements. The software integrates effortlessly into existing automation systems, providing a robust and adaptable solution for various applications.

The control platform includes advanced diagnostics, error analysis, and maintenance capabilities. It continuously monitors carrier positions, current, position errors, and motion profiles, providing real-time data visualization through its toolset. This comprehensive monitoring ensures optimal performance and facilitates prompt issue resolution, maintaining smooth and efficient operations.

 

Rexroth’s FTS stands out as a versatile, precise, and adaptable solution for advanced material transport and positioning, meeting the needs of demanding applications with unparalleled efficiency and reliability.

The post Advanced Material Handling: The Flexible Transport System (FTS) from Rexroth appeared first on ELE Times.

AI-Powered Battery System on Chip: A Masterstroke in Battery Management System

ELE Times - Thu, 07/11/2024 - 12:32

Eatron Technologies has introduced its latest breakthrough in battery management technology—a state-of-the-art AI-powered Battery Management System on Chip, developed in partnership with Syntiant. This ground-breaking solution merges Eatron’s sophisticated Intelligent Software Layer with Syntiant’s ultra-low power NDP120 Neural Decision Processor, delivering unmatched battery performance, safety, and longevity.

The AI-BMS-on-chip marks a major advancement in battery management. This powerful yet energy-efficient system unlocks an additional 10% of battery capacity and extends battery life by up to 25%. By integrating their pre-trained AI models, the solution offers state-of-health, state-of-charge, and remaining useful life assessments with remarkable accuracy right out of the box.

Key Benefits
  • Enhanced Performance: This solution optimizes available battery power by providing precise state-of-charge and health estimations.
  • Improved Safety: Early detection of potential issues through predictive diagnostics ensures operational safety and prevents failures.
  • Increased Longevity: By effectively managing battery health and usage, this solution extends the lifespan of batteries.
Real-Time Edge Processing

A standout feature of the AI-BMS-on-chip is its capability to perform real-time analysis and decision-making directly on the device. By harnessing the efficient processing capabilities of Syntiant’s NDP120, the solution operates at the edge, obviating the need for intricate cloud infrastructure. This results in reduced latency, lower power consumption, and lower overall system costs.

Versatile and Easy to Integrate

Designed for seamless integration, the AI-BMS-on-chip enhances performance, safety, and longevity across a wide range of battery-powered applications, including light mobility, industrial, and consumer electronics. In addition to expediting time-to-market, this plug-and-play solution offers customization capabilities through an intuitive toolchain, tailoring it precisely to individual applications. Existing BMS hardware can be easily upgraded to benefit from this best-in-class performance, providing a cost-effective solution for businesses striving to stay competitive.

Pioneering Collaboration: Transforming Battery Management

Eatron Technologies and Syntiant have been collaborating since 2022, merging their expertise in battery management and AI technologies. Amedeo Bianchimano, Chief Product Delivery Officer at Eatron Technologies, highlighted, “The AI-BMS-on-chip empowers the safe and efficient deployment of any battery-powered application, optimizing battery energy usage.” In agreement, Mallik P. Moturi, Chief Business Officer at Syntiant Corp., emphasized, “Through our NDP120, Eatron’s software processes all data at the edge, boosting battery life, safety, and overall performance. This makes it ideal for everything from consumer electronics to commercial vehicles.”

This collaboration represents a new chapter in battery management, providing unprecedented performance, safety, and longevity to a variety of applications. Eatron Technologies and Syntiant are proud to lead the way in this innovative field, offering cutting-edge solutions that address the evolving needs of the industry.

The post AI-Powered Battery System on Chip: A Masterstroke in Battery Management System appeared first on ELE Times.

Key Design Considerations for Offline SMPS Applications

ELE Times - Thu, 07/11/2024 - 12:12

Courtesy: Onsemi

Every electronic device that is powered from a wall outlet uses some form of offline switch-mode power supply (SMPS) that converts the AC grid voltage to the DC voltage used by the device. An offline SMPS is a switched power supply with an isolation transformer and covers a power range from a few watts to multi-kilowatt solutions. Offline SMPS are widely deployed and indispensable in providing reliable and safe power to electronic devices in applications ranging from consumer electronics, industrial power supplies, and data centers to telecom base stations.

When designing an offline SMPS there are many factors to be considered for a successful design including power level, voltages, safety requirements, size, and several more.

Understanding Offline SMPS & Popular Topologies

Fundamentally, an offline SMPS uses a two-stage conversion. First, the mains grid voltage is rectified and shaped by the first stage, the power factor corrector (PFC). The output voltage of the PFC stage is set a bit higher than the expected input peak voltage; for single-phase solutions this is usually around 380-400 VDC. Since the output of the PFC stage is a stable and relatively well-regulated DC voltage, the following DC-DC stage can be less complex. In most offline SMPS, the PFC is single-phase, but for higher-power units (multi-kilowatt) it can be 3-phase.

Figure 1: Key Elements of an Offline SMPS

The PFC stage aims to improve efficiency by reducing the apparent power in the system. It corrects the phase difference between the current and voltage (the ‘Power Factor’) to maintain as little difference as possible, as well as shaping the current waveform to be as near as it can be to a pure sinusoid, minimizing total harmonic distortion (THD).
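The relationship the paragraph above describes can be made concrete: the true power factor is the product of the displacement factor (cos φ between voltage and current fundamentals) and the distortion factor, which falls as THD rises. The sketch below uses illustrative numbers that are not from the article.

```python
import math

def true_power_factor(cos_phi: float, thd: float) -> float:
    """True PF = displacement factor * distortion factor.

    cos_phi: cosine of the phase angle between voltage and current fundamentals.
    thd: total harmonic distortion of the current as a fraction (0.05 = 5%).
    """
    return cos_phi / math.sqrt(1 + thd**2)

# Without PFC: a plain rectifier-capacitor front end draws narrow current
# pulses (illustrative: ~100% THD), so PF is poor even with little phase shift.
pf_no_pfc = true_power_factor(0.95, 1.0)

# With a boost PFC shaping the current into a near-sinusoid (illustrative: 5% THD).
pf_with_pfc = true_power_factor(0.99, 0.05)

print(f"without PFC: {pf_no_pfc:.2f}")
print(f"with PFC:    {pf_with_pfc:.2f}")
```

This shows why minimizing THD matters as much as correcting phase: halving neither alone recovers a near-unity power factor.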

The DC-DC stage (often an LLC converter) takes the PFC output and converts this to the desired voltage, bearing in mind there may be several independent outputs. This stage also includes the galvanic isolation transformer that provides safety isolation as well as level shifting the voltage. Due to the transformer’s inability to accommodate direct current, the incoming DC from the PFC stage is converted back into an alternating current and then rectified for the output.

Efficiency (the ratio between the power delivered at the output and the power consumed by the input) is a crucial parameter for any SMPS. It affects the operating cost, but more importantly, it also defines the internal losses that manifest as heat. This, in turn, determines how much cooling is required when the SMPS is operating. The higher the amount of cooling in terms of fans and/or heatsinks is needed, the larger, heavier and more expensive the solution will be.
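To see how directly efficiency drives the cooling burden, a quick calculation of the internal heat for a given output power helps; the 1 kW figure and efficiency points below are illustrative assumptions, not from the article.

```python
def loss_watts(p_out: float, efficiency: float) -> float:
    """Heat dissipated inside the SMPS: input power minus output power."""
    p_in = p_out / efficiency
    return p_in - p_out

# Illustrative 1 kW supply: a few points of efficiency change the
# heat that fans and heatsinks must remove by a factor of several.
for eta in (0.90, 0.94, 0.98):
    print(f"eta = {eta:.0%}: {loss_watts(1000, eta):.0f} W of heat")
```

Going from 90% to 98% efficiency cuts the internal dissipation by roughly five times, which is why high efficiency translates into smaller, lighter, cheaper cooling.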

Advancements in Offline SMPS Technology

Striving for the highest levels of performance, there is ongoing advancement in the technologies used within offline SMPS.

Boost PFC is nowadays commonly used for a wide range of power due to its simple structure and straightforward control strategy. The inductor current is continuous, electromagnetic interference (EMI) is lower, and the current waveform is less distorted, which leads to a better power factor. A single-phase boost PFC will have a regulated DC output of around 380 V, which will then be converted by the DC/DC converter.

Furthermore, LLC converters are becoming increasingly popular for the DC-DC stage. These resonant converters regulate their output by altering the operating frequency of the resonant tank across a relatively narrow range, thereby operating in a soft-switching mode. This improves efficiency and reduces EMI. They also operate at higher frequencies than hard-switched topologies, allowing the use of smaller passive components.
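The resonant tank's series resonant frequency, around which the LLC converter's operating range is centered, follows from the resonant inductance and capacitance. The component values below are illustrative assumptions, not from the article.

```python
import math

def llc_resonant_freq(l_r: float, c_r: float) -> float:
    """Series resonant frequency of the LLC tank: f_r = 1 / (2*pi*sqrt(Lr*Cr))."""
    return 1 / (2 * math.pi * math.sqrt(l_r * c_r))

# Illustrative tank values: 60 uH resonant inductor, 24 nF resonant capacitor.
f_r = llc_resonant_freq(60e-6, 24e-9)
print(f"f_r = {f_r / 1e3:.0f} kHz")
```

The controller then regulates the output by sweeping the switching frequency over a narrow band around this f_r, staying in the soft-switching region.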

Figure 2: A Simple LLC Converter

Synchronous or active rectification is a technique for improving efficiency and reducing conduction losses by replacing rectifier diodes with active switches. While semiconductor diodes exhibit a relatively fixed voltage drop (typically 0.5 to 1 V), MOSFET switches act as resistances and therefore can have a very low drop. If further improvements are needed, MOSFET switches can be paralleled to handle higher output currents. In such a case, the conduction losses are reduced because the effective on-resistance of the paralleled devices is the reciprocal of the sum of the reciprocals of their individual RDS(on) values; for two identical devices, it is half the RDS(on) of one.
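The parallel-resistance rule and the resulting conduction-loss saving can be sketched in a few lines; the 10 mOhm devices and 20 A current are illustrative values, not from the article.

```python
def parallel_rdson(rdson_values):
    """Effective on-resistance of paralleled MOSFETs:
    the reciprocal of the sum of the individual conductances."""
    return 1 / sum(1 / r for r in rdson_values)

def conduction_loss(i_rms: float, r_on: float) -> float:
    """Conduction loss P = I_rms^2 * R."""
    return i_rms**2 * r_on

# Illustrative: 20 A through one 10 mOhm device vs two in parallel.
single = conduction_loss(20, 0.010)
paired = conduction_loss(20, parallel_rdson([0.010, 0.010]))
print(single, paired)  # 4.0 W vs 2.0 W
```

Doubling the devices halves both the effective resistance and the conduction loss at the same current, at the cost of extra gate-drive power and board area.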

Semiconductor materials are also evolving as traditional silicon (Si) has reached its limit for further significant performance gains. New wide-bandgap (WBG) materials such as silicon carbide (SiC) are increasingly preferred in power designs for their ability to operate efficiently at higher switching frequencies and higher operating voltages.

WBG devices exhibit lower losses due to better reverse recovery, significantly contributing to enhanced conversion efficiency. As a result, and due to their ability to operate at higher temperatures, thermal mitigation requirements are reduced when using WBG devices.

onsemi Solutions

onsemi has one of the broadest portfolios of solutions for offline SMPS currently available. At the heart of the range are controllers for the PFC and DC/DC converter stages, power MOSFETs, rectifiers, and diodes. This is supported with MOSFET gate drivers (including for synchronous rectification), optocouplers, low dropout (LDO) regulators, and other devices.

Leading the way in modern high-performance devices, the range includes many SiC devices (diodes and MOSFETs) for use in the most challenging offline SMPS applications.

Using the onsemi range (and a few passive components), offline SMPS from a few watts to several kilowatts can be designed. onsemi’s experience in this area assures designers that their solution will have industry-leading performance and reliability.

Conclusion

Offline SMPS are one of the most common sub-systems, present in almost every mains-connected device. However, to create a successful design, safety, and EMI regulations must be met while performance, especially in terms of efficiency, is an ever-increasing requirement.

While several companies manufacture some of the devices necessary for these designs, few (if any) have a comprehensive range that covers all the components (excluding passives) needed to execute a complete design. There are significant benefits to sourcing components from a single supplier, including knowing that devices have been designed and tested to work together.

The post Key Design Considerations for Offline SMPS Applications appeared first on ELE Times.

TOPS of the Class: Decoding AI Performance on RTX AI PCs and Workstations

ELE Times - Thu, 07/11/2024 - 11:51

Courtesy: Nvidia

What is a token? Why is batch size important? And how do they help determine how fast AI computes?

The era of the AI PC is here, and it’s powered by NVIDIA RTX and GeForce RTX technologies. With it comes a new way to evaluate performance for AI-accelerated tasks, and a new language that can be daunting to decipher when choosing between the desktops and laptops available.

While PC gamers understand frames per second (FPS) and similar stats, measuring AI performance requires new metrics.

Coming Out on TOPS

The first baseline is TOPS, or trillions of operations per second. Trillions is the important word here — the processing numbers behind generative AI tasks are absolutely massive. Think of TOPS as a raw performance metric, similar to an engine’s horsepower rating. More is better.

Compare, for example, the recently announced Copilot+ PC lineup by Microsoft, which includes neural processing units (NPUs) able to perform upwards of 40 TOPS. Performing 40 TOPS is sufficient for some light AI-assisted tasks, like asking a local chatbot where yesterday’s notes are.

But many generative AI tasks are more demanding. NVIDIA RTX and GeForce RTX GPUs deliver unprecedented performance across all generative tasks — the GeForce RTX 4090 GPU offers more than 1,300 TOPS. This is the kind of horsepower needed to handle AI-assisted digital content creation, AI super resolution in PC gaming, generating images from text or video, querying local large language models (LLMs) and more.

Insert Tokens to Play

TOPS is only the beginning of the story. LLM performance is measured in the number of tokens generated by the model.

Tokens are the output of the LLM. A token can be a word in a sentence, or even a smaller fragment like punctuation or whitespace. Performance for AI-accelerated tasks can be measured in “tokens per second.”

Another important factor is batch size, or the number of inputs processed simultaneously in a single inference pass. As an LLM will sit at the core of many modern AI systems, the ability to handle multiple inputs (e.g. from a single application or across multiple applications) will be a key differentiator. While larger batch sizes improve performance for concurrent inputs, they also require more memory, especially when combined with larger models.

The more you batch, the more (time) you save.
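The batching effect described above can be illustrated with a simplified throughput model: while decoding is memory-bandwidth-bound, the per-step latency stays roughly flat as batch size grows, so aggregate tokens per second scales with the batch. The 20 ms step latency is an illustrative assumption, not a measured figure.

```python
def throughput_tokens_per_s(batch: int, step_latency_s: float) -> float:
    """Aggregate tokens/s when `batch` sequences each emit one token per step.

    Simplifying assumption: step latency is independent of batch size,
    which roughly holds while decoding is memory-bandwidth-bound.
    """
    return batch / step_latency_s

# Illustrative: 20 ms per decode step.
for b in (1, 4, 16):
    print(f"batch = {b:>2}: {throughput_tokens_per_s(b, 0.020):.0f} tokens/s")
```

In practice the flat-latency assumption breaks down once compute or memory capacity saturates, which is why larger VRAM (and more of it) raises the batch size at which scaling stops.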

RTX GPUs are exceptionally well-suited for LLMs due to their large amounts of dedicated video random access memory (VRAM), Tensor Cores and TensorRT-LLM software.

GeForce RTX GPUs offer up to 24GB of high-speed VRAM, and NVIDIA RTX GPUs up to 48GB, which can handle larger models and enable higher batch sizes. RTX GPUs also take advantage of Tensor Cores — dedicated AI accelerators that dramatically speed up the computationally intensive operations required for deep learning and generative AI models. That maximum performance is easily accessed when an application uses the NVIDIA TensorRT software development kit (SDK), which unlocks the highest-performance generative AI on the more than 100 million Windows PCs and workstations powered by RTX GPUs.

The combination of memory, dedicated AI accelerators and optimized software gives RTX GPUs massive throughput gains, especially as batch sizes increase.

Text-to-Image, Faster Than Ever

Measuring image generation speed is another way to evaluate performance. One of the most straightforward ways uses Stable Diffusion, a popular image-based AI model that allows users to easily convert text descriptions into complex visual representations.

With Stable Diffusion, users can quickly create and refine images from text prompts to achieve their desired output. When using an RTX GPU, these results can be generated faster than processing the AI model on a CPU or NPU.

That performance is even higher when using the TensorRT extension for the popular Automatic1111 interface. RTX users can generate images from prompts up to 2x faster with the SDXL Base checkpoint — significantly streamlining Stable Diffusion workflows.

ComfyUI, another popular Stable Diffusion user interface, added TensorRT acceleration last week. RTX users can now generate images from prompts up to 60% faster, and can even convert these images to videos using Stable Video Diffusion up to 70% faster with TensorRT.

TensorRT acceleration can be put to the test in the new UL Procyon AI Image Generation benchmark, which delivers speedups of 50% on a GeForce RTX 4080 SUPER GPU compared with the fastest non-TensorRT implementation.

TensorRT acceleration will soon be released for Stable Diffusion 3 — Stability AI’s new, highly anticipated text-to-image model — boosting performance by 50%. Plus, the new TensorRT-Model Optimizer enables accelerating performance even further. This results in a 70% speedup compared with the non-TensorRT implementation, along with a 50% reduction in memory consumption.

Of course, seeing is believing — the true test is in the real-world use case of iterating on an original prompt. Users can refine image generation by tweaking prompts significantly faster on RTX GPUs, taking seconds per iteration compared with minutes on a MacBook Pro with an M3 Max. Plus, users get both speed and security with everything remaining private when running locally on an RTX-powered PC or workstation.

The Results Are in and Open Sourced

But don’t just take our word for it. The team of AI researchers and engineers behind the open-source Jan.ai recently integrated TensorRT-LLM into its local chatbot app, then tested these optimizations for themselves.

The researchers tested its implementation of TensorRT-LLM against the open-source llama.cpp inference engine across a variety of GPUs and CPUs used by the community. They found that TensorRT is “30-70% faster than llama.cpp on the same hardware,” as well as more efficient on consecutive processing runs. The team also included its methodology, inviting others to measure generative AI performance for themselves.

From games to generative AI, speed wins. TOPS, images per second, tokens per second and batch size are all considerations when determining performance champs.

The post TOPS of the Class: Decoding AI Performance on RTX AI PCs and Workstations appeared first on ELE Times.

Canada invests $120m to support semiconductor manufacturing and commercialization

Semiconductor today - Thu, 07/11/2024 - 10:46
Through ISED (Innovation, Science and Economic Development Canada), the Canadian government is investing $120m into FABrIC (Fabrication of Integrated Components for the Internet’s Edge) network, which is a five-year project totalling over $220m to advance domestic semiconductor manufacturing and commercialization capabilities...

Ukrainian-Japanese Center of Igor Sikorsky Kyiv Polytechnic Institute at the "Country of Dreams" festival

News - Thu, 07/11/2024 - 10:34

At the invitation of the organizers, the Ukrainian-Japanese Center of Igor Sikorsky Kyiv Polytechnic Institute took part in this year's international ethno-festival "Kraina Mriy" ("Country of Dreams").

Luminus launches high-efficacy mid-power MP-5050 LEDs

Semiconductor today - Thu, 07/11/2024 - 10:24
Luminus Devices Inc of Sunnyvale, CA, USA – which designs and makes LEDs and solid-state technology (SST) light sources for illumination markets – has launched the mid-power MP-5050-240E and MP-5050-810E LEDs, which deliver what is claimed to be unmatched efficacy of 233 lumens per watt at a correlated colour temperature (CCT) of 4000K at 70 CRI (color-rendering index) and 1 Watt, exceeding the standards in outdoor and industrial lighting applications...

Accenture Acquires Cientra to Expand Silicon Design Capabilities

ELE Times - Thu, 07/11/2024 - 10:23

Accenture has acquired Cientra, a silicon design and engineering services company that offers custom silicon solutions for global clients. The terms of the acquisition were not disclosed.

Founded in 2015, Cientra is headquartered in New Jersey, U.S. and has offices in Frankfurt, Germany as well as in Bangalore, Hyderabad and New Delhi, India. The company brings consulting expertise in embedded IoT and application-specific integrated circuit (ASIC) design and verification, which augments Accenture’s silicon design experience and further enhances its ability to help clients accelerate the semiconductor innovation required to support growing data computing needs.

“Everything from data center expansion to cloud computing, wireless technologies, edge computing and the proliferation of AI is driving demand for next-generation silicon products,” said Karthik Narain, group chief executive—Technology at Accenture. “Our acquisition of Cientra is our latest move to expand our silicon design and engineering capabilities, and it underscores our commitment to helping our clients maximize value and reinvent themselves in this space.”

Cientra has deep experience in engineering, development and testing across hardware, software and networks in the automotive, telecommunications and high-tech industries. The company brings approximately 530 experienced engineers and practitioners to Accenture’s Advanced Technology Centers in India.

“Since inception, Cientra has been dedicated to building top talent and fostering continuous innovation, developing product solutions that drive value for our clients,” said Anil Kempanna, CEO, Cientra. “Joining Accenture provides exciting opportunities to expand globally and scale our capabilities to create new avenues of growth for our clients as well as our people.”

This acquisition follows the addition of Excelmax Technologies, a Bangalore, India-based semiconductor design services provider, earlier this week, and XtremeEDA, an Ottawa, Canada-based silicon design services company, in 2022.

The post Accenture Acquires Cientra to Expand Silicon Design Capabilities appeared first on ELE Times.
