Feed aggregator
IoT Smart Lighting System, Types, Technology, Products and Benefits
IoT (Internet of Things) smart lighting refers to a technology-driven lighting system that integrates traditional lighting with IoT capabilities, allowing for advanced features such as remote control, automation, energy efficiency, and personalized user experiences. These systems are connected to the internet and can be managed via smartphones, voice assistants, or central hubs. They often incorporate sensors and advanced algorithms to adjust lighting conditions based on environmental and user preferences.
What is an IoT Lighting System?
An IoT lighting system is a network of interconnected smart lighting devices designed to operate collaboratively through internet connectivity. These systems include components such as smart bulbs, luminaires, motion sensors, and control units, all communicating with each other through protocols like Wi-Fi, Zigbee, or Bluetooth. IoT lighting systems can be part of larger smart home or smart building solutions, enabling seamless integration with other IoT devices like thermostats, security cameras, or HVAC systems.
Types of IoT Smart Lighting
IoT smart lighting solutions come in various types, tailored to different applications and needs:
- Smart Bulbs: Individual bulbs that can change color, intensity, and schedules via apps or voice assistants.
- Examples: Philips Hue, Wyze Bulb.
- Smart Light Strips: Flexible lighting strips for decorative purposes, often used in architectural or ambient lighting.
- Examples: LIFX Z, Govee LED Strips.
- Smart Outdoor Lighting: Weather-resistant lighting solutions for gardens, pathways, or security purposes.
- Examples: Ring Smart Lighting, Philips Hue Outdoor.
- Connected Ceiling Fixtures: Entire luminaires with built-in IoT features for homes or offices.
- Examples: GE Cync Smart Ceiling Fixtures.
- Industrial and Commercial IoT Lighting: Large-scale lighting solutions for warehouses, factories, and office buildings, incorporating energy optimization and centralized control.
- Examples: Current by GE, SmartCast by Cree Lighting.
IoT smart lighting relies on several key technologies to function effectively:
- Wireless Communication Protocols:
- Wi-Fi: Offers direct connectivity but may consume more power.
- Zigbee: Low-power, mesh networking for reliable communication.
- Bluetooth Low Energy (BLE): Energy-efficient and suitable for localized controls.
- Sensors:
- Motion Sensors: Detect movement to activate or dim lights.
- Ambient Light Sensors: Adjust brightness based on surrounding light levels.
- Presence Sensors: Differentiate between occupied and unoccupied spaces.
- Cloud Computing: Enables remote access, data storage, and processing for features like predictive maintenance and advanced analytics.
- Edge Computing: Processes data locally for real-time adjustments, reducing latency and dependence on cloud services.
- Integration with AI and Machine Learning: Personalizes lighting based on learned user habits and preferences.
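To make the interplay of these technologies concrete, below is a minimal Python sketch of the control logic a smart light might run, combining a motion sensor with an ambient light sensor for daylight harvesting. The function, thresholds, and units are illustrative assumptions, not any vendor’s API.

```python
# A minimal sketch of sensor-driven lighting logic; names and thresholds
# are illustrative, not tied to any particular product's API.

LUX_TARGET = 300        # desired illuminance for the room, in lux
IDLE_TIMEOUT_S = 600    # turn off after 10 minutes without motion

def next_brightness(ambient_lux: float, motion: bool,
                    seconds_since_motion: float) -> float:
    """Return a dimming level in [0.0, 1.0] from sensor readings."""
    if not motion and seconds_since_motion > IDLE_TIMEOUT_S:
        return 0.0  # room unoccupied: lights off
    # Daylight harvesting: contribute only the shortfall below the target.
    shortfall = max(0.0, LUX_TARGET - ambient_lux)
    return min(1.0, shortfall / LUX_TARGET)

# Example: bright afternoon (250 lux ambient) with someone present
print(next_brightness(250.0, motion=True, seconds_since_motion=0))  # ~0.17
```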
Popular IoT smart lighting products on the market include:
- Philips Hue: A comprehensive smart lighting ecosystem including bulbs, light strips, and outdoor lights.
- LIFX Smart Bulbs: Known for vibrant colors and Wi-Fi connectivity without the need for a hub.
- Wyze Bulb: Affordable smart bulbs offering voice and app controls.
- Ring Smart Lighting: Focused on outdoor and security lighting solutions.
- SmartCast by Cree Lighting: Advanced solutions for commercial and industrial applications.
IoT smart lighting provides numerous benefits for households, businesses, and cities, making it a transformative technology for modern living and operations:
- Energy Efficiency:
- Automatically adjusts lighting based on natural light availability or room occupancy, significantly reducing energy consumption.
- LED technology combined with smart features leads to lower electricity bills (a rough worked example follows this list).
- Convenience and Automation:
- Allows remote control via apps or voice commands, eliminating the need to physically interact with switches.
- Supports customizable schedules and routines to match daily habits.
- Enhanced Security:
- Motion-activated outdoor lights deter intruders.
- Lighting schedules can mimic human activity when occupants are away, enhancing home security.
- Improved Mood and Productivity:
- Dynamic lighting options like warm tones for relaxation and bright white light for focus contribute to well-being.
- Suitable for circadian rhythm lighting, which aligns with natural daylight patterns to promote better sleep and energy levels.
- Scalability and Flexibility:
- Easy to add or replace components without significant infrastructure changes.
- Adaptable for diverse environments, from small homes to large commercial buildings.
- Cost Savings in Maintenance:
- Predictive analytics notify users of potential failures, enabling timely replacements and reducing downtime.
- Sustainability:
- Promotes eco-friendly practices through reduced energy use and longer lifespans of LED products.
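As a rough illustration of the energy-efficiency benefit above, the short calculation below compares an always-on 60 W incandescent bulb with a 9 W smart LED whose occupancy and daylight automation trims about 30% of its on-time. Every figure here (wattages, hours, tariff, trim factor) is an assumption chosen only to show the arithmetic.

```python
# Back-of-the-envelope energy comparison; every figure here is an assumption.
HOURS_PER_DAY = 6          # assumed daily use
TARIFF = 0.15              # assumed electricity price, $/kWh

incandescent_kwh = 60 / 1000 * HOURS_PER_DAY * 365            # ~131.4 kWh/yr
# Smart LED: 9 W, and occupancy/daylight automation trims ~30% of on-time.
smart_led_kwh = 9 / 1000 * HOURS_PER_DAY * 365 * (1 - 0.30)   # ~13.8 kWh/yr

savings = (incandescent_kwh - smart_led_kwh) * TARIFF
print(f"Annual savings: ~${savings:.0f}")   # roughly $18 per bulb per year
```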
IoT smart lighting represents a significant leap forward in lighting technology, combining energy efficiency, automation, and personalization to enhance living and working environments. With continuous advancements in IoT and AI, these systems are becoming increasingly sophisticated, accessible, and essential in achieving sustainability and convenience. Whether for homes, businesses, or cities, IoT smart lighting is paving the way for a brighter, smarter future.
4 Bit ALU calculator
You usually short 2&6 or 8&4 together.
Building an analog ESR meter
Starting my electronics journey! Very excited
For each click of the button, a blue LED lights up, and when the counter reaches 5, a yellow one does.
Weekly discussion, complaint, and rant thread
Open to anything, including discussions, complaints, and rants.
Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.
Reddit-wide rules do apply.
To see the newest posts, sort the comments by "new" (instead of "best" or "top").
I love electronics but this hobby is a racket!
Most retail sellers will sell you a component at 5 to 10 times the price they paid for it, just because they can, and sites like Digi-Key, Mouser, etc. will charge you an obnoxious shipping fee.
Buying from eBay, AliExpress, and other such sites almost guarantees you’ll end up with a fake component.
Any basic DIY project will end up costing you at least 5 times as much as a ready-made product with the same components.
New Photon 2 Lander!
Latest issue of Semiconductor Today now available
Malaysia’s Globetronics to manufacture POET’s optical engines
2024: The year when MCUs became AI-enabled
Artificial intelligence (AI) and machine learning (ML) technologies, once synonymous with large-scale data centers and powerful GPUs, are steadily moving toward the network edge via resource-limited devices like microcontrollers (MCUs). Energy-efficient MCU workloads are being melded with AI power to leverage audio processing, computer vision, sound analysis, and other algorithms in a variety of embedded applications.
Take the case of STMicroelectronics and its STM32N6 microcontroller, which features a neural processing unit (NPU) for embedded inference. It’s ST’s most powerful MCU and carries out tasks like segmentation, classification, and recognition. Alongside this MCU, ST offers software and tools that lower the barrier to entry for developers seeking AI-accelerated performance under real-time operating systems (RTOSes).
Figure 1 The Neural-ART accelerator in STM32N6 claims to deliver 600 times more ML performance than a high-end STM32 MCU today. Source: STMicroelectronics
Infineon, another leading MCU supplier, has also incorporated a hardware accelerator in its PSOC family of MCUs. Its NNlite neural network accelerator aims to facilitate new consumer, industrial, and Internet of Things (IoT) applications with ML-based wake-up, vision-based position detection, and face/object recognition.
Next, Texas Instruments, which calls its AI-enabled MCUs real-time microcontrollers, has integrated an NPU inside its C2000 devices to enable fault detection with high accuracy and low latency. This will allow embedded applications to make accurate, intelligent decisions in real-time to perform functions like arc fault detection in solar and energy storage systems and motor-bearing fault detection for predictive maintenance.
Figure 2 C2000 MCUs integrate edge AI hardware accelerators to facilitate smarter real-time control. Source: Texas Instruments
The models that run on these AI-enabled MCUs learn and adapt to different environments through training. That, in turn, helps systems achieve greater than 99% fault detection accuracy to enable more informed decision-making at the edge. The availability of pre-trained models further lowers the barrier to entry for running AI applications on low-cost MCUs.
Moreover, the use of a hardware accelerator inside an MCU offloads the burden of inferencing from the main processor, leaving more clock cycles to service embedded applications. This marks the beginning of a long journey for AI hardware-accelerated MCUs; for a start, it will thrust MCUs into applications that previously required MPUs, which in the embedded realm are themselves not always well suited to real-time control tasks.
Figure 3 The AI-enabled MCUs replacing MPUs in several embedded system designs could be a major disruption in the semiconductor industry. Source: STMicroelectronics
AI is clearly the next big thing in the evolution of MCUs, but AI-optimized MCUs have a long way to go. For instance, software tools, and their ease of use, must advance hand in hand with these AI-enabled MCUs, helping developers evaluate whether a given AI model can practically be embedded on an MCU. Developers should also be able to test AI models running on an MCU in just a few clicks.
The AI party in the MCU space started in 2024, and 2025 is very likely to witness more advances for MCUs running lightweight AI models.
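To give a flavor of what those software toolchains do, here is a hedged sketch using TensorFlow’s standard post-training integer quantization flow, which shrinks a trained Keras model into the int8 TFLite flatbuffer that MCU inference runtimes typically consume. The toy model and random calibration data are stand-ins for a real network and dataset.

```python
import numpy as np
import tensorflow as tf

# A toy stand-in model; in practice this is your trained network.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4),
])

def representative_data():
    # A few hundred realistic samples let the converter choose int8 scales;
    # random data stands in for them here.
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
# Force full-integer kernels so the model suits MCU integer NPUs/CPUs.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

open("model_int8.tflite", "wb").write(converter.convert())
```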
Related Content
- Smarter MCUs Keep AI at the Edge
- Profile of an MCU promising AI at the tiny edge
- 32-bit Microcontrollers Need a Major AI Upgrade
- AI algorithms on MCU demo progress in automated driving
- An MCU approach for AI/ML inferencing in battery-operated designs
The Growing Demand for Edge AI Hardware in Transforming Real-Time Data Processing
The rise of edge computing and the increasing demand for AI-driven applications have led to a significant shift in the way AI models are deployed and processed. Edge AI hardware, or AI accelerators, plays a critical role in enabling real-time deep learning inference on edge devices, allowing them to process and analyze data locally without relying on cloud computing. As industries adopt AI to solve complex problems in real-time, edge AI hardware has become a crucial component in delivering faster, more efficient, and secure AI-powered solutions.
The Need for Edge AI Hardware
Traditionally, AI workloads have been handled by powerful cloud-based systems, where massive amounts of data are transmitted, processed, and analyzed remotely. However, as the number of connected devices and the volume of data generated continues to grow, the limitations of cloud computing have become evident. Cloud-based systems struggle with issues like latency, bandwidth constraints, data privacy concerns, and the high costs of transmitting large amounts of data.
Edge AI hardware addresses these challenges by bringing the computational power directly to the devices, enabling them to make decisions and process data locally. By processing data at the edge, organizations can reduce reliance on cloud infrastructure, lower latency, improve security, and achieve more efficient energy usage, especially for battery-powered IoT devices.
What Is Edge AI Hardware?
Edge AI hardware refers to specialized devices or components designed to accelerate AI processes, particularly deep learning inference, at the edge of a network. Unlike general-purpose processors such as CPUs, AI accelerators are built to handle the unique demands of AI workloads, including the ability to efficiently process large volumes of data in real-time while minimizing power consumption.
The key function of edge AI hardware is to optimize the execution of machine learning models, enabling devices to perform tasks like image recognition, natural language processing, and autonomous decision-making without relying on the cloud for heavy computations. This is particularly important in applications where latency is a critical factor, such as autonomous vehicles, robotics, and smart cities.
The Evolution of Edge Computing and AI
As IoT devices proliferate, the need for efficient data processing has intensified. The vast amounts of data generated by these devices cannot always be efficiently handled by cloud-based systems, and this is where edge computing comes into play. The concept of edge computing involves processing data closer to where it is generated, thereby reducing the distance it needs to travel and minimizing the risk of data loss or delay.
Edge AI builds on this concept by incorporating machine learning capabilities directly into the devices. With the help of edge AI hardware, devices can process data in real-time, learn from it, and make autonomous decisions without the need for constant communication with the cloud. This capability is crucial for sectors that demand rapid decision-making, including healthcare, automotive, and industrial automation.
Benefits of Edge AI Hardware
- Reduced Latency: One of the primary benefits of edge AI hardware is the reduction in latency. When data is processed locally, there is no need to wait for it to be sent to the cloud for analysis, resulting in faster decision-making. This is particularly crucial in time-sensitive applications, such as autonomous driving, where milliseconds can make the difference between an accident and a successful maneuver.
- Improved Bandwidth Efficiency: With edge AI hardware, devices can process data locally, reducing the need for continuous communication with the cloud. This significantly lowers bandwidth usage, helping organizations save on data transmission costs. By minimizing the amount of data sent to the cloud, edge AI also helps prevent network congestion and ensures smoother operations in environments with limited bandwidth.
- Enhanced Privacy and Security: Data privacy and security are major concerns for organizations using cloud-based AI systems, especially when dealing with sensitive or personal information. Edge AI hardware reduces these risks by keeping data on the device, minimizing the chances of data breaches or interception during transmission. For applications in healthcare, finance, or surveillance, the ability to process data locally enhances trust and compliance with data protection regulations.
- Energy Efficiency: Edge AI hardware is designed to be energy-efficient, making it ideal for battery-powered devices that require long operational lifespans. AI accelerators use significantly less power than general-purpose CPUs and GPUs, ensuring that devices can run complex AI models without draining their power source. This is particularly important for applications in IoT, wearable devices, and remote sensors, where energy efficiency is paramount.
Types of Edge AI Hardware
Edge AI hardware comes in various forms, each optimized for different use cases and performance requirements. The most common types of edge AI hardware include:
- AI Accelerators: AI accelerators are specialized processors designed to speed up the inference of machine learning models. These include:
- Tensor Processing Units (TPUs): Developed by Google, TPUs are optimized for deep learning tasks and offer high computational power with low energy consumption.
- Graphics Processing Units (GPUs): GPUs, which are traditionally used for rendering graphics, are well-suited for parallel processing tasks required by AI models, especially deep learning.
- Vision Processing Units (VPUs): VPUs are designed specifically for computer vision tasks and are used in applications like smart cameras and drones.
- Field-Programmable Gate Arrays (FPGAs): FPGAs offer flexibility in terms of reconfiguration and are used for specialized AI tasks in environments where adaptability is essential.
- Application-Specific Integrated Circuits (ASICs): ASICs are custom-designed chips optimized for specific AI tasks. While they are expensive and take time to design, they are highly efficient and provide unmatched performance for specific applications.
- Edge Computing Platforms: These platforms integrate AI accelerators with computing hardware to create a complete solution for edge AI. They often include CPUs, GPUs, memory, storage, and networking capabilities, and are used in applications such as industrial automation, smart cities, and autonomous vehicles.
- Edge AI Modules: Edge AI modules combine AI accelerators with other system components to create compact, ready-to-deploy solutions. These modules are typically used in devices like smart cameras, robotics, and wearables, where space and power are limited.
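As a minimal example of putting one of these accelerators to work, the sketch below runs a compiled model through the tflite-runtime interpreter with the delegate used by Google’s Coral Edge TPU accelerators; the model path and dummy input are placeholders for a real model and sensor data.

```python
import numpy as np
import tflite_runtime.interpreter as tflite

# Model path is a placeholder; libedgetpu is the delegate library shipped
# for Google's Coral Edge TPU accelerator.
interpreter = tflite.Interpreter(
    model_path="model_int8_edgetpu.tflite",
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One frame of dummy input data in place of a real camera capture.
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()           # inference runs locally, on the accelerator
print(interpreter.get_tensor(out["index"]))
```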
Use Cases for Edge AI Hardware
The adoption of edge AI hardware is driving innovation in numerous industries. Several key use cases include:
- Autonomous Vehicles: Autonomous vehicles rely heavily on AI to process sensor data from cameras, LiDAR, and radar in real-time. Edge AI hardware enables these vehicles to make split-second decisions without the need for cloud-based processing, ensuring safety and reliability on the road.
- Robotics: Robots equipped with edge AI hardware can perform complex tasks like navigation, object recognition, and decision-making independently. This is particularly useful in industries like manufacturing, logistics, and healthcare, where robots need to operate efficiently and autonomously in dynamic environments.
- Healthcare: Edge AI hardware is transforming healthcare by enabling real-time monitoring and diagnostics. Wearable devices, such as smartwatches and fitness trackers, use edge AI to analyze biometric data, providing users with insights into their health and fitness. In medical imaging, AI accelerators enable quick image analysis, helping doctors make faster diagnoses.
- Industrial Automation: In smart factories, AI-powered robots and sensors equipped with edge AI hardware improve efficiency and reduce downtime. These devices can detect anomalies, predict maintenance needs, and automate tasks without relying on cloud infrastructure.
The Future of Edge AI Hardware
As the demand for real-time AI processing continues to grow, the development of edge AI hardware is expected to evolve rapidly. Advances in AI accelerator technologies, such as more powerful TPUs, GPUs, and custom ASICs, will enable even more sophisticated AI models to run on resource-constrained edge devices.
Additionally, the expansion of 5G networks will boost edge AI capabilities by offering the fast, reliable connectivity needed for large-scale, real-time AI processing. As edge AI continues to gain momentum, it will unlock new possibilities across industries, creating smarter, more efficient, and secure solutions for a wide range of applications.
In conclusion, edge AI hardware is revolutionizing the way AI is deployed and processed. By bringing AI capabilities to the edge of the network, organizations can reduce latency, lower bandwidth costs, improve privacy and security, and achieve energy-efficient solutions. As the demand for real-time AI grows, the role of edge AI hardware will become even more critical, enabling faster, smarter, and more secure AI applications across industries.
Transforming Industries with Artificial Intelligence in Embedded Systems
The integration of Artificial Intelligence (AI) with embedded systems is transforming industries by enabling smarter, more responsive, and autonomous devices. Embedded systems, traditionally task-specific and resource-limited, now benefit from AI’s ability to process, learn, and make decisions in real-time. This innovation enhances safety, efficiency, and user experience across domains like automotive, healthcare, agriculture, and smart homes.
What Are Embedded Systems?
Embedded systems are specialized computers designed to execute specific functions within larger systems, typically operating under real-time constraints. These systems are designed for efficiency, and they include microcontrollers, sensors, and software. They are found in everyday technologies like home appliances, cars, industrial machines, and medical devices. As AI integrates with these systems, they can handle more sophisticated tasks autonomously.
Embedded systems are small, purpose-built computers that operate within larger systems, characterized by:
- Components: Microcontrollers, sensors, and dedicated software.
- Applications: Found in everyday items like home appliances, vehicles, and medical devices.
- Features: Efficient, reliable, and designed for specific tasks with constrained resources.
Adding AI into these systems enhances their ability to handle complex operations, transforming even basic devices into intelligent systems.
Role of AI in Embedded Systems
AI in embedded systems enhances their decision-making capabilities. By processing real-time data from sensors, AI enables devices to learn from the data and make intelligent decisions autonomously. Examples include autonomous vehicles that use AI to navigate safely, and smart home devices that adjust settings based on user preferences. Although there are challenges (e.g., limited power and processing capacity), AI’s integration leads to more efficient and reliable systems.
AI enriches embedded systems by:
- Real-time Decision-Making: Analyzing sensor data to act autonomously.
- Predictive Capabilities: Preempting problems before they occur.
- Improved Accuracy: Making systems smarter and more efficient in applications like:
- Autonomous vehicles for navigation and safety.
- Smart home gadgets for personalized automation.
- Healthcare devices for diagnostics and monitoring.
Key application areas include:
- Autonomous Vehicles: Real-time object detection and navigation for self-driving cars.
- Smart Homes: Devices like thermostats and security cameras optimize user experience.
- Healthcare: Wearables and imaging systems offer enhanced diagnostics and monitoring.
- Industrial Automation: Robots improve efficiency, reduce downtime, and enhance precision.
- Agriculture: AI-driven drones and sensors optimize irrigation and yield.
- Retail & Supply Chains: Smart shelves and predictive analytics streamline operations.
- Energy Management: AI optimizes renewable energy use and reduces waste.
- Consumer Electronics: Devices offer personalized recommendations and smarter interfaces.
- Aerospace & Defense: AI powers drones and autonomous systems for critical missions.
- Environmental Monitoring: AI-equipped sensors monitor and safeguard ecosystems.
Although the integration of AI in embedded systems offers significant advantages, it also presents several challenges:
- Processing and Power Limitations: Embedded systems often lack the computational power needed for advanced AI.
- Data Security: Handling sensitive data locally requires robust encryption and security measures.
- Interoperability: Ensuring seamless communication between devices is crucial.
Despite these challenges, the opportunities are vast, especially in areas like autonomous systems, smart environments, and industrial efficiency.
Opportunities:
- The advancement of Edge AI and TinyML helps address resource limitations by enabling efficient processing directly on devices with minimal computational power.
- Rapid advancements in areas such as robotics, IoT, and sustainable energy solutions.
- Enhanced user-centric designs, such as wearable health monitors or autonomous systems.
Key enabling technologies and techniques include:
- Edge AI: Data is processed on the device, reducing latency and improving privacy.
- AIoT (AI + IoT): AIoT (Artificial Intelligence of Things) combines AI and IoT technologies to create intelligent, interconnected devices that collaborate and make data-driven decisions more effectively.
- TinyML: Tiny machine learning allows AI to operate on devices with limited resources.
- AI Hardware Accelerators: Custom chips like NPUs or TPUs optimize AI inference.
- Software Toolchains: Frameworks for training, deploying, and optimizing AI models.
- Model Optimization:
- Pruning and quantization reduce model complexity.
- Knowledge distillation transfers what larger models have learned to more compact ones.
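A minimal NumPy sketch of two of these ideas follows: magnitude pruning zeroes the smallest weights, and uniform symmetric quantization maps the survivors onto 8-bit integers. The 50% sparsity target and 8-bit width are illustrative choices, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)

# Magnitude pruning: zero out the 50% of weights with the smallest |w|.
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

# Uniform symmetric int8 quantization of the remaining weights.
scale = np.max(np.abs(pruned)) / 127.0
q = np.clip(np.round(pruned / scale), -127, 127).astype(np.int8)
dequant = q.astype(np.float32) * scale     # what inference actually uses

print("sparsity:", np.mean(pruned == 0))            # ~0.5
print("max quantization error:", np.max(np.abs(pruned - dequant)))
```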
Real-world examples include:
- Smartwatches/Fitness Trackers: Embedded AI tracks activities in real-time using sensors.
- Autonomous Drones: Use AI to independently navigate and detect obstacles in their environment.
- Medical Devices: AI helps in early detection and monitoring, improving healthcare outcomes.
- Autonomous Driving: Embedded AI processes sensor data for real-time object detection and decision-making.
Embedded AI brings several benefits:
- Bandwidth Efficiency: Reduces reliance on cloud services, lowering data transmission costs.
- Energy Efficiency: Local processing minimizes energy consumption, especially in battery-operated devices.
- Reduced Latency: Real-time data processing ensures quick decision-making, which is vital in applications like autonomous driving.
- Privacy: Enhanced because data is processed locally on the device, minimizing the potential for breaches.
Performance can be evaluated using benchmarks like MLPerf Tiny, which measures inference latency, frames per second (FPS), accuracy, and power efficiency.
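MLPerf Tiny itself is a full benchmark suite, but its core latency and throughput metrics can be approximated informally with a harness like the one below; `run_inference` is a placeholder for whatever invokes your model on the target.

```python
import time

def measure_latency(run_inference, warmup=10, iters=100):
    """Median single-inference latency (ms) and throughput (FPS)."""
    for _ in range(warmup):          # let caches, clocks, and JITs settle
        run_inference()
    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - t0) * 1e3)
    samples.sort()
    median_ms = samples[len(samples) // 2]
    return median_ms, 1000.0 / median_ms

# Example with a trivial stand-in workload:
ms, fps = measure_latency(lambda: sum(i * i for i in range(10_000)))
print(f"median latency: {ms:.2f} ms, throughput: {fps:.1f} FPS")
```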
Technical Enablers
For embedded AI to thrive, three key enablers are necessary:
- AI Hardware Accelerators: Dedicated processors designed for fast AI computations.
- Software Toolchains: Enable efficient training and deployment of AI models on embedded systems.
- Deep Neural Network Optimization: Techniques like model compression and parameter quantization help optimize performance.
Embedded AI (EAI) uses a general-purpose framework to support AI functions on devices, enabling real-time data analysis and decision-making without relying heavily on cloud computing. EAI optimizes for lower data transmission costs, better data security, and efficient real-time processing.
Applications of Embedded AI in Networking
One fascinating use case of embedded AI is AI ECN (Explicit Congestion Notification) in networks. AI dynamically modifies the network’s congestion settings in response to real-time traffic conditions, improving data flow and preventing packet loss. This use case showcases the powerful combination of AI and embedded systems in improving operational performance across sectors.
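The snippet below is a deliberately simplified sketch of the idea behind AI ECN: a controller watches queue depth and adapts the ECN marking threshold on the fly. Real implementations live in switch hardware and use trained models; every name and constant here is invented for illustration.

```python
# Toy illustration of adaptive ECN marking; real AI ECN runs inside switch
# hardware with trained models. All names and constants are invented here.

class AdaptiveEcn:
    def __init__(self, threshold_pkts=100, step=0.1):
        self.threshold = threshold_pkts
        self.step = step  # fraction by which the threshold adapts

    def update(self, queue_depth, drain_rate_pps):
        # Deep queue and slow drain -> mark earlier (lower the threshold);
        # shallow queue -> relax the threshold to keep throughput high.
        if queue_depth > self.threshold and drain_rate_pps < queue_depth:
            self.threshold *= (1 - self.step)
        elif queue_depth < 0.5 * self.threshold:
            self.threshold *= (1 + self.step)
        return queue_depth > self.threshold   # True => set the ECN bit

ecn = AdaptiveEcn()
for depth in (20, 80, 150, 200, 60):
    print(depth, ecn.update(depth, drain_rate_pps=120))
```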
In conclusion, the integration of Artificial Intelligence into embedded systems is revolutionizing industries by enabling devices to process data, learn, and make decisions in real-time. This synergy enhances the capabilities of embedded systems, transforming them from task-specific tools to intelligent, autonomous solutions that deliver improved safety, efficiency, and user-centric experiences. As advancements in hardware accelerators, software optimization, and techniques like Edge AI and TinyML continue to evolve, the opportunities for embedded AI will only expand, addressing challenges such as resource constraints and security. With its potential to reshape sectors ranging from healthcare and automotive to agriculture and networking, embedded AI stands as a cornerstone of technological progress, paving the way for smarter, more connected, and sustainable future systems.
Wide-creepage switcher improves vehicle safety
A wide-creepage package option for Power Integrations’ InnoSwitch3-AQ flyback switcher IC enhances safety and reliability in automotive applications. According to the company, the increased primary-to-primary creepage and clearance distance of 5.1 mm between the drain and source pins of the InSOP-28G package eliminates the need for conformal coating, making the IC compliant with the IEC 60664-1 reinforced isolation standard in 800-V vehicles.
The new 1700-V CV/CC InnoSwitch3-AQ devices feature an integrated SiC primary switch delivering up to 80 W of output power. They also include a multimode QR/CCM flyback controller, secondary-side sensing, and a FluxLink safety-rated feedback mechanism. This high level of integration reduces component count by half, simplifying power supply implementation. The wider drain pin enhances durability, making the ICs well-suited for high-shock and vibration environments, such as eAxle drive units.
These latest members of the InnoSwitch3-AQ family start up with as little as 30 V on the drain without external circuitry, critical for functional safety. Devices achieve greater than 90% efficiency and consume less than 15 mW at no-load. Target automotive applications include battery management systems, µDC/DC converters, control circuits, and emergency power supplies in the main traction inverter.
Prices for the 1700 V-rated InnoSwitch3-AQ switching power supply ICs start at $6 each in lots of 10,000 units. Samples are available now, with full production in 1Q 2025.
R&S boosts GMSL testing for automotive systems
Rohde & Schwarz expands testing for automotive systems that employ Analog Devices’ Gigabit Multimedia Serial Link (GMSL) technology. Designed to enhance high-speed video links in applications like In-Vehicle Infotainment (IVI) and Advanced Driver Assistance Systems (ADAS), GMSL offers a simple, scalable SerDes solution. The R&S and ADI partnership aims to assist automotive developers and manufacturers in creating and deploying GMSL-based systems.
Physical Medium Attachment (PMA) testing, compliant with GMSL requirements, is now fully integrated into R&S oscilloscope firmware, along with a suite of signal integrity tools. These include LiveEye for real-time signal monitoring, advanced jitter and noise analysis, and built-in eye masks for forward and reverse channels.
To verify narrowband crosstalk, the offering includes built-in spectrum analysis on the R&S RTP oscilloscope. In addition, cable, connector, and channel characterization can be performed using R&S vector network analyzers.
R&S will demonstrate the application at next month’s CES 2025 trade show.
Gen3 UCIe IP elevates chiplet link speeds
Alphawave Semi’s Gen3 UCIe Die-to-Die (D2D) IP subsystem enables chiplet interconnect rates up to 64 Gbps. Building on the successful tapeout of its Gen2 36-Gbps UCIe IP on TSMC’s 3-nm process, the Gen3 subsystem supports both high-yield, low-cost organic substrates and advanced packaging technologies.
At 64 Gbps, the Gen3 IP delivers over 20 Tbps/mm in bandwidth density with ultra-low power and latency. The configurable subsystem supports multiple protocols, including AXI-4, AXI-S, CXS, CHI, and CHI-C2C, enabling high-performance connectivity across disaggregated systems in HPC, data center, and AI applications.
The design complies with the latest UCIe specification and features a scalable architecture with advanced testability, including live per-lane health monitoring. UCIe D2D interconnects support a variety of chiplet connectivity scenarios, including low-latency, coherent links between compute chiplets and I/O chiplets, as well as reliable optical I/O connections.
“Our successful tapeout of the Gen2 UCIe IP at 36 Gbps on 3-nm technology builds on our pioneering silicon-proven 3-nm UCIe IP with CoWoS packaging,” said Mohit Gupta, senior VP & GM, Custom Silicon & IP, Alphawave Semi. “This achievement sets the stage for our Gen3 UCIe IP at 64 Gbps, which is on target to deliver high performance, 20-Tbps/mm throughput functionality to our customers who need the maximization of shoreline density for critical AI bandwidth needs in 2025.”
UWB radar SoC enables 3D beamforming
Hydrogen, an ultra-wideband (UWB) radar SoC from Aria Sensing, delivers 3D MIMO beamforming with programmable pulse bandwidths ranging from 500 MHz to 1.8 GHz. Its advanced waveforms support single-pulse and pulse-compression modes, enabling precise depth perception and spatial resolution. The chip optimizes signal-to-noise ratios for various detection tasks while maintaining low radiated power.
Equipped with two integrated RISC-V microprocessors, Hydrogen accommodates up to four transmitting and four receiving antenna channels with flexible and scalable array configurations to enhance cross-range resolution. Offering 1D, 2D, and 3D sensing, the SoC detects presence, position, vital signs, and gestures, serving automotive, industrial automation, and smart home markets.
“Hydrogen represents a paradigm shift in radar technology, combining cutting-edge UWB advancements with compact SoC design. We are excited to see how this innovation will redefine radar sensing applications,” said Alessio Cacciatori, Aria founder and CEO.
The Hydrogen UWB radar SoC supports multiple center frequencies for global operation without sacrificing resolution. It consumes 90 mA at 1.8 V and is housed in a 9×9-mm QFN64 package.
GPU IP powers scalable AI and cloud gaming
Vitality is VeriSilicon’s latest GPU IP architecture targeting cloud gaming, AI PCs, and both discrete and integrated graphics cards. According to the company, Vitality offers advancements in computation performance and scalability. With support for Microsoft DirectX 12 APIs and AI acceleration libraries, the GPU architecture suits performance-intensive applications and complex workloads.
Vitality integrates a configurable Tensor Core AI accelerator and 32 Mbytes to 64 Mbytes of Level 3 cache. Capable of handling up to 128 cloud gaming channels per core, it meets demands for high concurrency and image quality in cloud-based entertainment while enabling large-scale desktop gaming and Windows applications.
“The Vitality architecture GPU represents the next generation of high-performance and energy-efficient GPUs,” said Weijin Dai, chief strategy officer, executive VP and GM of VeriSilicon’s IP Division. “With over 20 years of GPU development experience across diverse market segments, the Vitality architecture is built to support the most advanced GPU APIs. Its scalability enables widespread deployment in fields such as automotive systems and mobile computing devices.”
A datasheet was not available at the time of this announcement.
Metamaterial’s mechanical maximization enhances vibration-energy harvesting
The number of ways to harvest energy that would otherwise go unused and wasted is extraordinary. To cite a few of the many examples, there’s the heat given off during almost any physical or electronic process, ambient light which is “just there,” noise, and ever-present vibration. Each of these has different attributes along with pros and cons which are fluid with respect to consistency, reliability, and, of course, useful output power in a given situation.
For example, the harvesting of vibration-sourced energy is attractive (when available) as it is unaffected by weather or terrain conditions. However, most of the many manifestations of such energy are quite small. It takes attention to detail and careful design to extract a useful amount along the energy chain from raw source to harvesting transducer.
Most vibrations in daily life are tiny and often not “focused” but spread across a wide area or volume. To overcome this significant issue, numerous conversion devices, typically piezoelectric elements, are often installed in multiple locations that are exposed to relatively large vibrations.
Addressing this issue, a research effort led by a team at KRISS (the Korea Research Institute of Standards and Science in South Korea) has developed a metamaterial that traps and amplifies micro-vibrations within small areas. The metamaterial’s behavior enhances and localizes the mechanical-energy density at the spot where a harvester is installed.
The metamaterial has a thin, flat structure roughly the size of an adult’s palm, allowing it to be easily attached to any surface where vibration occurs, Figure 1. The structure can also be easily modified to fit the object to which it will be attached. The team expects that the increase in power output will accelerate commercialization.
Figure 1 The metamaterial developed by the KRISS-led team is flat and easy to position. Source: KRISS
The metamaterial developed by KRISS traps, accumulates, and amplifies micro-vibrations within itself. This allows the generation of relatively large electrical power from the small number of piezoelectric elements used. By harvesting vibration with the developed metamaterial, the research team has succeeded in generating more than four times more electricity per unit area than conventional technologies.
Their metasurface structure can be divided into three finite regions, each with a distinct role: metasurface, phase-matching, and attaching regions. Their design used what is called “trapping” physics with carefully designed defects in structure to simultaneously achieve the focusing and accumulation of wave energy.
They validated their metasurface experimentally, with results showing amplification of the input flexural vibration amplitude by a factor of twenty. They achieved this significant amplification largely thanks to the intrinsically negligible damping of their metallic structure, Figure 2.
Figure 2 (right) Schematic of the proposed metasurface attachment and (left) a conceptual illustration of the attachment installed on a vibrating rigid structure for flexural wave energy amplification. Source: KRISS
Their phase-gradient metasurfaces (also called metagratings in the acoustic field) feature intrinsic wave-trapping behavior. (Here, the term “metasurfaces” refers to structures that diffract waves, primarily through spatially-varying phase accumulations within the constituent wave channels.)
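For a back-of-the-envelope sense of what a twenty-fold amplitude gain can mean for harvesting, note that the electrical power from a piezoelectric element scales roughly with the square of the vibration amplitude in the linear regime. The two lines below apply only that idealized scaling; they are not figures from the paper.

```python
# Idealized scaling only: piezo output power ~ amplitude**2 (linear regime).
amplitude_gain = 20                       # amplitude amplification reported
power_gain_at_focus = amplitude_gain ** 2
print(power_gain_at_focus)                # ~400x, but only at the focal spot;
# the system-level result quoted above is the >4x per-unit-area figure.
```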
Constructs, analysis, and modeling are one thing, but a proposal such as theirs requires, and is well suited to, experimental validation. Their setup used a vibration shaker and a laser Doppler vibrometer (LDV) sensor to excite and then measure the flexural vibration inside the specimen, Figure 3. For convenience, the specimen was firmly clamped to the shaker with a jig and a bolted joint rather than being attached to it directly.
Figure 3 (a) Schematic illustration and (b) photographs to demonstrate the experimental setup in order to validate the flexural-vibration amplifying performance of the fabricated metasurface attachment. Using a specially-configured jig and a bolted joint, the metasurface structure is firmly clamped to a vibration shaker. The surface region covering a unit supercell (denoted as M1) and the interfacial line (M2) between the metasurface strips and phase-matching plate are measured using laser Doppler vibrometer equipment. Source: KRISS
The shaker was set to vibrate continuously at frequencies between 3 kHz and 5 kHz, at arbitrary weak amplitudes set by a function generator and an RF power amplifier. The phase-matching plate (somewhat analogous to an impedance-matching circuit) was another essential component of the structure. It dramatically improved the amplifying performance by helping the scattered wave fields develop coherent phases within the metasurface strips in the steady state.
It would be nice to have a summary of before-and-after performance using their design. Unfortunately, their published paper is too much of a good thing: it has a large number of such graphs and tables under different conditions, but no overall summary other than a semi-quantitative image, Figure 4 (top right).
Figure 4 This conceptual illustration graphically demonstrates the nature of the vibration amplification performance of the metamaterial developed by the KRISS-led team. Source: KRISS
If you want to see more, check out their paper “Finite elastic metasurface attachment for flexural vibration amplification” published in Elsevier’s Mechanical Systems and Signal Processing. But I’ll warn you that at 32 pages, the full paper (main part, appendix, and references) is the longest I have seen by far in an academic journal!
Have you had any personal experience with vibration-based energy harvesting? Was the requisite modeling difficult and valid? Did it meet or exceed your expectations? What sort of real-world problems or issues did you encounter?
Bill Schweber is an EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features.
Related Content
- Nothing new about energy harvesting
- Clever harvesting scheme takes a deep dive, literally
- Energy harvesting gets really personal
- Lightning as an energy harvesting source?
- What’s that?…A fuel cell that harvests energy from…dirt?