Position Tracking with Bluetooth Low Energy Technology
Courtesy: Onsemi
As Bluetooth Low Energy (LE) has evolved to version 5.2 and beyond, one of the most significant advancements has been in position tracking – a technology that is used indoors to track movements and positions of assets.
Bluetooth direction-finding methods, including both connection-less and connection-oriented modes, offer versatility that allows them to be used in a wide variety of applications. This adaptability opens new possibilities in wireless communication and location services, promising exciting advancements in the future.
Figure 1: Analysis of Movement in a Retail Store, Showing Popular Routes

One of the primary markets for this technology is the retail sector, where large stores seek to better understand how customers move around the store so that they can maximize sales potential.
Beyond retail, asset tracking can also have a profound impact on industrial efficiency. It can be deployed to monitor material handling vehicles, reducing wasted time and improving efficiency. It can also be used to drive complex digital twins allowing for the accurate replication of movements in a virtual environment.
Asset tracking is not solely focused on improving efficiency; it also plays a significant role in ensuring safety. In warehouses and distribution centers, the use of tracking tags enables a safe coexistence of employees and industrial robotics, eliminating the possibility of collisions by allowing robots to track employee movements.
Basic System Design Principles

To establish a position detection system, an array of antennas is placed in a building such as a retail store, warehouse, hospital, or airport. This array allows for highly accurate position measurement.
The methodology used can be either Angle of Arrival (AoA) or Angle of Departure (AoD). While both use the same radio frequency (RF) signal measurements, the signal processing and antenna configuration is different in each case.
Figure 2: Anatomy of a Position Detection System

Typically, a system will consist of three main elements: a Bluetooth transmitter (AoA tag), a receiver/antenna array (AoA locator), and a system for calculating angle and position. To operate, the AoA tag sends a constant tone extension (CTE) signal.
This CTE signal spreads out in an expanding spherical pattern and is picked up by the antennas. Because both the wavelength/frequency of the signal and the distance between the receivers are known, relatively simple trigonometry applied to the phase difference of the signal arriving at each antenna yields the angle to the transmitter.
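The phase-to-angle trigonometry can be sketched in a few lines. This is a simplified two-antenna illustration of the relationship described above, not onsemi's implementation; the function name and example values are ours:

```python
import math

def aoa_angle(phase_diff_rad, antenna_spacing_m, wavelength_m):
    """Estimate angle of arrival (radians from broadside) from the phase
    difference measured between two antennas.

    Uses the standard relation: delta_phi = 2*pi*d*sin(theta)/lambda.
    """
    sin_theta = phase_diff_rad * wavelength_m / (2 * math.pi * antenna_spacing_m)
    return math.asin(sin_theta)

# Bluetooth LE operates around 2.4 GHz -> wavelength of roughly 0.125 m.
wavelength = 3e8 / 2.4e9
spacing = wavelength / 2            # half-wavelength spacing avoids ambiguity
angle = aoa_angle(math.pi / 4, spacing, wavelength)  # 45-degree phase shift
print(f"angle of arrival: {math.degrees(angle):.1f} degrees")
```

With half-wavelength spacing, a 45-degree measured phase shift corresponds to an arrival angle of about 14.5 degrees from broadside; a zero phase difference means the tag is directly in front of the antenna pair.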
Alternative Methods and Enhanced Accuracy

By performing the detection twice with two pairs of antennas, it is possible to triangulate the exact position of the AoA tag with a high degree of precision.
An alternative method that does not require angle measurement is trilateration. This is based upon a time-of-flight (ToF) distance measurement using the Bluetooth 5.4 channel sounding (CS) feature or ultra-wideband (UWB).
CS is also known as high accuracy distance measurement (HADM) and many consider it to be a very accurate alternative to RSSI-based distance measurement.
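To illustrate how three ToF ranges resolve to a position, here is a minimal 2D trilateration sketch. The anchor positions and the closed-form linearization are illustrative; production systems typically apply least-squares fitting over many noisy range measurements:

```python
import math

def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve for a 2D position given distances to three known anchors.

    The distances would come from a time-of-flight measurement such as
    Bluetooth channel sounding or UWB ranging.
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Subtracting pairs of circle equations removes the quadratic terms,
    # leaving two linear equations in x and y.
    a = 2 * (x2 - x1)
    b = 2 * (y2 - y1)
    c = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    d = 2 * (x3 - x2)
    e = 2 * (y3 - y2)
    f = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    x = (c * e - f * b) / (e * a - b * d)
    y = (c * d - a * f) / (b * d - a * e)
    return x, y

# Anchors at known positions; the tag is actually at (3, 4).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist(a, (3.0, 4.0)) for a in anchors]
print(trilaterate(anchors[0], dists[0], anchors[1], dists[1], anchors[2], dists[2]))
```

Given exact ranges the solver recovers the tag position; with real measurements, the residual error reflects ranging noise and anchor-geometry dilution.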
onsemi’s RSL15 AoA Solution

The RSL15 from onsemi is a Bluetooth 5.2 certified secure wireless microcontroller that is optimized for ultra-low power applications including industrial, medical and AoA. The device is based around an Arm Cortex-M33 processor running at up to 48 MHz and features encrypted security. Providing the industry’s lowest power consumption, it draws a peak current of just 4.3 mA when transmitting, falling to 36 nA in sleep mode while waiting for a GPIO wakeup. It is designed to meet the demands of a wide range of tracking applications, from retail and clinical settings to manufacturing and distribution centers.
The post Position Tracking with Bluetooth Low Energy Technology appeared first on ELE Times.
Mission NIMpossible: Decoding the Microservices That Accelerate Generative AI
Sama Bali, Senior Product Marketer for AI solutions at NVIDIA
Run generative AI NVIDIA NIM microservices locally on NVIDIA RTX AI workstations and NVIDIA GeForce RTX systems.
In the rapidly evolving world of artificial intelligence, generative AI is captivating imaginations and transforming industries. Behind the scenes, an unsung hero is making it all possible: microservices architecture.
The Building Blocks of Modern AI Applications

Microservices have emerged as a powerful architecture, fundamentally changing how people design, build and deploy software.
A microservices architecture breaks down an application into a collection of loosely coupled, independently deployable services. Each service is responsible for a specific capability and communicates with other services through well-defined application programming interfaces, or APIs. This modular approach stands in stark contrast to traditional all-in-one architectures, in which all functionality is bundled into a single, tightly integrated application.
By decoupling services, teams can work on different components simultaneously, accelerating development processes and allowing updates to be rolled out independently without affecting the entire application. Developers can focus on building and improving specific services, leading to better code quality and faster problem resolution. Such specialization allows developers to become experts in their particular domain.
Services can be scaled independently based on demand, optimizing resource utilization and improving overall system performance. In addition, different services can use different technologies, allowing developers to choose the best tools for each specific task.
A Perfect Match: Microservices and Generative AI

The microservices architecture is particularly well-suited for developing generative AI applications due to its scalability, enhanced modularity and flexibility.
AI models, especially large language models, require significant computational resources. Microservices allow for efficient scaling of these resource-intensive components without affecting the entire system.
Generative AI applications often involve multiple steps, such as data preprocessing, model inference and post-processing. Microservices enable each step to be developed, optimized and scaled independently. Plus, as AI models and techniques evolve rapidly, a microservices architecture allows for easier integration of new models as well as the replacement of existing ones without disrupting the entire application.
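The multi-step pattern can be mimicked in-process. In this hypothetical sketch, each stage stands in for an independently deployable service behind an API; the stage names and string-based contract are ours, chosen only to show how one stage can be swapped without touching the others:

```python
# Each function mirrors an independently deployable microservice; in a
# real system these would be separate processes behind HTTP/gRPC APIs
# rather than in-process calls.

def preprocess(text: str) -> str:
    # Data-preparation stage: normalize the input.
    return text.strip().lower()

def infer_stub(text: str) -> str:
    # Stand-in for a model-inference service; replace this function (or
    # the service behind it) without changing the other stages.
    return f"echo: {text}"

def postprocess(text: str) -> str:
    # Post-processing stage: format the model output.
    return text.capitalize()

def pipeline(text: str, infer=infer_stub) -> str:
    # Stages are loosely coupled: each agrees only on the str-in/str-out contract.
    return postprocess(infer(preprocess(text)))

print(pipeline("  Hello NIM  "))
```

Swapping `infer_stub` for a call to a different model service changes behavior without modifying preprocessing or post-processing, which is the modularity benefit described above.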
NVIDIA NIM: Simplifying Generative AI Deployment

As the demand for AI-powered applications grows, developers face challenges in efficiently deploying and managing AI models.
NVIDIA NIM inference microservices provide models as optimized containers to deploy in the cloud, data centers, workstations, desktops and laptops. Each NIM container includes the pretrained AI models and all the necessary runtime components, making it simple to integrate AI capabilities into applications.
NIM offers a game-changing approach for application developers looking to incorporate AI functionality by providing simplified integration, production-readiness and flexibility. Developers can focus on building their applications without worrying about the complexities of data preparation, model training or customization, as NIM inference microservices are optimized for performance, come with runtime optimizations and support industry-standard APIs.
AI at Your Fingertips: NVIDIA NIM on Workstations and PCs

Building enterprise generative AI applications comes with many challenges. While cloud-hosted model APIs can help developers get started, issues related to data privacy, security, model response latency, accuracy, API costs and scaling often hinder the path to production.
Workstations with NIM provide developers with secure access to a broad range of models and performance-optimized inference microservices.
By avoiding the latency, cost and compliance concerns associated with cloud-hosted APIs as well as the complexities of model deployment, developers can focus on application development. This accelerates the delivery of production-ready generative AI applications — enabling seamless, automatic scale out with performance optimization in data centers and the cloud.
The recently announced general availability of the Meta Llama 3 8B model as a NIM, which can run locally on RTX systems, brings state-of-the-art language model capabilities to individual developers, enabling local testing and experimentation without the need for cloud resources. With NIM running locally, developers can create sophisticated retrieval-augmented generation (RAG) projects right on their workstations.
Local RAG refers to implementing RAG systems entirely on local hardware, without relying on cloud-based services or external APIs.
Developers can use the Llama 3 8B NIM on workstations with one or more NVIDIA RTX 6000 Ada Generation GPUs or on NVIDIA RTX systems to build end-to-end RAG systems entirely on local hardware. This setup allows developers to tap the full power of Llama 3 8B, ensuring high performance and low latency.
By running the entire RAG pipeline locally, developers can maintain complete control over their data, ensuring privacy and security. This approach is particularly helpful for developers building applications that require real-time responses and high accuracy, such as customer-support chatbots, personalized content-generation tools and interactive virtual assistants.
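At its core, the retrieval step of a local RAG pipeline is nearest-neighbor search over embeddings. The toy, dependency-free sketch below illustrates the idea; the document names and vectors are made up, and a real local stack would use an embedding model and a vector database running on the workstation:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy in-memory vector store standing in for a local vector database.
docs = {
    "GPU specs": [0.9, 0.1, 0.0],
    "Return policy": [0.0, 0.2, 0.9],
    "Driver install": [0.8, 0.3, 0.1],
}

def retrieve(query_vec, k=2):
    """Return the k documents most similar to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)
    return ranked[:k]

context = retrieve([1.0, 0.2, 0.0])  # query embedding for a GPU question
print(context)
```

The retrieved documents would then be passed to the locally hosted LLM as context, keeping both the data and the inference on the workstation.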
Hybrid RAG combines local and cloud-based resources to optimize performance and flexibility in AI applications. With NVIDIA AI Workbench, developers can get started with the hybrid-RAG Workbench Project — an example application that can be used to run vector databases and embedding models locally while performing inference using NIM in the cloud or data center, offering a flexible approach to resource allocation.
This hybrid setup allows developers to balance the computational load between local and cloud resources, optimizing performance and cost. For example, the vector database and embedding models can be hosted on local workstations to ensure fast data retrieval and processing, while the more computationally intensive inference tasks can be offloaded to powerful cloud-based NIM inference microservices. This flexibility enables developers to scale their applications seamlessly, accommodating varying workloads and ensuring consistent performance.
NVIDIA ACE NIM inference microservices bring digital humans, AI non-playable characters (NPCs) and interactive avatars for customer service to life with generative AI, running on RTX PCs and workstations.
ACE NIM inference microservices for speech — including Riva automatic speech recognition, text-to-speech and neural machine translation — allow accurate transcription, translation and realistic voices.
The NVIDIA Nemotron small language model is a NIM for intelligence that includes INT4 quantization for minimal memory usage and supports roleplay and RAG use cases.
And ACE NIM inference microservices for appearance include Audio2Face and Omniverse RTX for lifelike animation with ultrarealistic visuals. These provide more immersive and engaging gaming characters, as well as more satisfying experiences for users interacting with virtual customer-service agents.
Dive Into NIM

As AI progresses, the ability to rapidly deploy and scale its capabilities will become increasingly crucial.
NVIDIA NIM microservices provide the foundation for this new era of AI application development, enabling breakthrough innovations. Whether building the next generation of AI-powered games, developing advanced natural language processing applications or creating intelligent automation systems, users can access these powerful development tools at their fingertips.
The post Mission NIMpossible: Decoding the Microservices That Accelerate Generative AI appeared first on ELE Times.
Futureproof Your Industrial Network Security
Courtesy: Moxa
Today, industrial organizations are embracing digital transformation to gain a competitive edge and boost business revenue. To achieve digital transformation, industrial operators must first address the daunting task of merging their information technology (IT) and operational technology (OT) infrastructure. However, businesses trying to streamline data connectivity for integrated IT/OT systems often encounter challenges such as insufficient performance, limited network visibility, and weaker network security from existing OT network infrastructure. Building a robust, high-performance network for daily operations that is easy to maintain requires thorough planning. In this article, we will focus on the importance of strong OT network security and provide some tips on how to strengthen cybersecurity for industrial operations.
Why Ramping Up OT Network Security Is a Must

Nowadays, industrial applications face more frequent and unprecedented cyberthreats. These threats often target critical infrastructure in different industries all across the world, including energy, transportation, and water and wastewater services. If successful, such attacks can cause significant damage to industrial organizations in the form of high recovery costs or production delays. Before building IT/OT converged networks, asset owners must define the target security level of the entire network and strengthen measures to minimize the impact of potential intrusions. Poor network security exposes critical field assets to unwanted access and allows malicious actors to breach integrated systems.
However, strengthening OT network security is not that straightforward. IT security solutions require constant updates to ensure they can protect against the latest cyberthreats. Applying these necessary updates often means interrupting network services and systems, which is something OT operations cannot afford. Operators need an OT-centric cybersecurity approach to protect their industrial networks without sacrificing network or operational uptime.
Three Major Stages of Building OT Cybersecurity

Building a secure industrial network can be done with the right approach. The key to strong cybersecurity is implementing a multi-layered defense strategy in several stages.
Stage One: Build a Solid Foundation with Secure Networking Devices

When developing secure networking infrastructure, start with choosing secure building blocks. The increasing number of cyberthreats has also led to the development of comprehensive OT network security standards. Industrial cybersecurity standards, such as NIST CSF and IEC 62443, provide security guidelines for critical assets, systems, and components. Implementing industrial cybersecurity standards and using networking devices designed around these standards provides asset owners with a solid foundation for building secure network infrastructure.
Stage Two: Deploy OT-centric Layered Protection

The idea of defense-in-depth is to provide multi-layered protection by implementing cybersecurity measures at every level to minimize security risks. In the event of an intrusion, if one layer of protection is compromised, another layer prevents the threat from further affecting the network. In addition, instant notifications for security events allow users to quickly respond to potential threats and mitigate any risk.
When deploying multi-layered network protection for OT networks and infrastructure, there are two key OT cybersecurity solutions to consider, namely industrial firewalls and secure routers.
Shield Critical Assets with Industrial Firewalls

An efficient way to protect critical field assets is using industrial firewalls to create secure network zones and defend against potential threats across the network. With every connected device being a potential target of cyberthreats, it’s important to deploy firewalls with robust traffic filtering that allow administrators to set up secure conduits throughout the network. Next-generation firewalls feature advanced security functions such as Intrusion Detection/Prevention Systems (IDS/IPS) and Deep Packet Inspection (DPI) to strengthen network protection against intrusions by proactively detecting and blocking threats.
Advanced security functions tailored for OT environments help ensure seamless communications and maximum uptime for industrial operations. For example, OT-centered DPI technology that supports industrial protocols can detect and block unwanted traffic, ensuring secure industrial protocol communications. In addition, industrial-grade IPS can support virtual patching to protect critical assets and legacy devices from the latest known threats without affecting network uptime. Designed for industrial applications, IPS provides pattern-based detection for PLCs, HMIs, and other common field site equipment.
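The zone-and-conduit idea reduces to default-deny rules plus protocol-aware inspection. The toy sketch below shows the concept only; the zone names and the read-only Modbus policy are our illustrative assumptions, not the behavior of any particular firewall product:

```python
# Toy allowlist filter in the spirit of zone-and-conduit firewalling:
# only explicitly permitted (source zone, dest zone, protocol) conduits
# pass, and a crude DPI-style check rejects Modbus write function codes.
ALLOWED_CONDUITS = {
    ("scada", "plc", "modbus"),
    ("hmi", "plc", "modbus"),
}
READ_ONLY_FUNCTION_CODES = {1, 2, 3, 4}  # Modbus read-class functions

def permit(src_zone, dst_zone, protocol, function_code):
    """Return True only for traffic on an allowed conduit carrying a
    read-only industrial-protocol operation."""
    if (src_zone, dst_zone, protocol) not in ALLOWED_CONDUITS:
        return False  # default deny between zones
    return function_code in READ_ONLY_FUNCTION_CODES

print(permit("scada", "plc", "modbus", 3))   # read holding registers: allowed
print(permit("office", "plc", "modbus", 3))  # IT zone blocked from PLCs
print(permit("hmi", "plc", "modbus", 6))     # write single register: rejected
```

Real industrial DPI engines parse full protocol payloads and match them against signature databases, but the policy structure (deny by default, then inspect what is allowed) is the same.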
Fortify Network Boundaries with Industrial Secure Routers

IT/OT converged networks require a multi-layered and complex industrial network infrastructure to transmit large amounts of data from field sites to the control center. Deploying powerful industrial secure routers between different networks can both fortify network boundaries and maintain solid network performance. Featuring built-in advanced security functions such as firewall and NAT, secure routers allow administrators to establish secure network segments and enable data routing between segments. For optimal network performance, a powerful industrial secure router features both switching and routing functions with Gigabit speeds, alongside redundancy measures for smooth intra- and inter-network communication.
The demand for remote access to maintain critical assets and networks has also been on the rise. Industrial secure routers with VPN support allow maintenance engineers and network administrators to access private networks remotely through a secure tunnel, enabling more efficient remote management.
Stage Three: Monitor the Network Status and Identify Cyberthreats

Deploying a secure industrial network is just the start of the journey towards robust cybersecurity. During daily operations, it takes a lot of time and effort for network administrators to have full network visibility, monitor traffic, and manage the countless networking devices. Implementing a centralized network management platform can provide a huge boost to operational efficiency by visualizing the entire network and simplifying device management. It also allows network administrators to focus more resources on ramping up network and device security.
In addition, a centralized network security management platform for cybersecurity solutions can boost efficiency even more. Such software allows administrators to perform mass deployments for firewall policies, monitor cyberthreats, and configure notifications for when threats occur. The right combination of cybersecurity solutions and management software offers administrators an invaluable way to monitor and identify cyberthreats with a holistic view.
Futureproof Network Security with Our Solutions

Network security is imperative for industrial network infrastructure. Moxa has translated over 35 years of industrial networking experience into a comprehensive OT-centric cybersecurity portfolio that offers enhanced security with maximum network uptime. Moxa is an IEC 62443-4-1 certified industrial connectivity and networking solutions provider. When developing our products, we adhere to the security principles of the IEC 62443-4-2 standard to ensure secure product development. Our goal is to provide our users with the tools necessary to build robust device security for their industrial applications.
To defend against increasing cyberthreats, our OT-focused cybersecurity solutions maximize uptime while protecting industrial networks from intruders. Our network management software simplifies management for networking devices and OT cybersecurity solutions, allowing administrators to monitor the network security status and manage cyberthreats with ease.
The post Futureproof Your Industrial Network Security appeared first on ELE Times.
Why the performance of your storage system matters for AI workloads?
Courtesy: Micron
A guide to understanding some key factors that influence the speed and efficiency of your data storage
Data is the lifeblood of any modern business, and how you store, access and manage it can make a dramatic difference in your productivity, profitability and competitiveness. The emergence of artificial intelligence (AI) is transforming every industry and forcing businesses to re-evaluate how they can use data to accelerate innovation and growth. However, AI training and inferencing pose unique challenges for data management and storage, as they require massive amounts of data, high performance, scalability and availability.
Not all storage systems are created equal, and there are many factors that can affect their performance. In this blog post, we will explore some of the main factors that influence storage system performance for AI and, importantly, how your choice of underlying storage media affects them.
Key attributes of AI workloads

AI workloads are data-intensive and compute-intensive, meaning that they need to process large volumes of data at high speed and with low latency. Storage plays a vital role in enabling AI workloads to access, ingest, process and store data efficiently and effectively. Some key attributes of typical AI workloads that affect storage requirements are:
- Data variety: AI workloads need to access data from multiple sources and formats, such as structured, unstructured or semi-structured data, and from various locations, such as on-premises, cloud or edge. Storage solutions need to provide fast and reliable data access and movement across different environments and platforms.
- Data velocity: AI workloads need to process data in real-time or near-real-time. Storage solutions need to deliver high throughput, low latency and consistent performance for data ingestion, processing and analysis.
- Data volume: As AI models grow in complexity and accuracy and GPU clusters grow in compute power, their storage solutions need to provide flexible and scalable capacity and performance.
- Data reliability and availability: AI workloads need to ensure data integrity, security and extremely high availability, particularly when connected to large GPU clusters that are intolerant of interruptions in data access.
Storage system performance is not a single metric but a combination of several factors that depend on the characteristics and requirements of your data, applications and data center infrastructure. Some of the most crucial factors are:
- Throughput: The rate at which your storage system can transfer data to and from the network or the host. Higher throughput can improve performance by increasing the bandwidth and reducing the congestion and bottlenecks of your data flow. The throughput is usually limited by either the network bandwidth or the speed of the storage media.
- Latency: The time it takes for your storage system to respond to a read or write request. A lower latency can improve performance by reducing GPU idle time and improving the system’s responsiveness to user inputs. The latency of mechanical devices (such as HDDs) is inherently much higher than for solid-state devices (SSDs).
- Scalability: The ability of your storage system to adapt to changes in data volume, velocity and variety. High scalability is key to enabling your storage system to grow and evolve with your business needs and goals. The biggest challenge to increasing the amount of data that your system can store and manage is maintaining performance scaling without hitting bottlenecks or storage device limitations.
- Resiliency: The ability of your storage system to maintain data integrity and availability in the event of failures, errors or disasters. Higher reliability can improve performance by reducing the frequency and impact of data corruption, loss and recovery.
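To see why throughput matters at AI scale, consider a back-of-envelope calculation of how long it takes to stream a training dataset from storage. The system-level throughput figures below are our illustrative assumptions, not vendor benchmarks:

```python
# Rough estimate: time to stream a full training dataset once from
# storage, for two hypothetical storage systems. All numbers are
# illustrative assumptions, not measured figures.
dataset_tb = 50                                   # dataset size in TB
throughput_gbps = {"hdd_array": 2.0, "ssd_array": 40.0}  # system GB/s

for system, gbps in throughput_gbps.items():
    hours = dataset_tb * 1000 / gbps / 3600       # TB -> GB, seconds -> hours
    print(f"{system}: {hours:.2f} h to stream the full dataset once")
```

Under these assumptions, the HDD-based system needs nearly seven hours per pass while the SSD-based system needs about twenty minutes; during the difference, expensive GPUs may sit idle waiting for data.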
Hard disk drives (HDDs) and solid-state drives (SSDs) are the two main types of devices employed for persistent storage in data center applications. HDDs are mechanical devices that use rotating disk platters with a magnetic coating to store data, while SSDs use solid-state flash memory chips. HDDs have been the dominant storage devices for decades; they offer the lowest cost per bit and long-term, power-off durability, but they are slower and less reliable than SSDs. SSDs offer higher throughputs, lower latencies, higher reliability and denser packaging options.
As technology advances and computing demands increase, the mechanical nature of the HDD may not allow it to keep pace in performance. There are a few options that system designers can deploy to extend the effective performance of HDD-based storage systems, such as mixing hot and cold data (hot data borrowing performance from the colder data), sharing data across many HDD spindles in parallel (increasing throughput but not improving latency), overprovisioning HDD capacity (in essence provisioning for IO and not capacity), and adding SSD caching layers for latency outliers (see the recent Micron blog by Steve Wells, “HDDs and SSDs. What are the right questions?”). These system-level solutions have limited scalability before their cost becomes prohibitive. How extendable these solutions are depends on the level of performance an application requires. For many of today’s AI workloads, HDD-based systems are falling short on scalability of performance and power efficiency.
High-capacity, SSD-based storage systems, though, can provide a less complex and more extendable solution, and they are rapidly evolving as the storage media of choice for high-performance AI data lakes at many large GPU-centric data centers. While these SSDs are more expensive than HDDs at the drive level on a cost-per-bit basis, systems built with them can have better operating costs than HDD-based systems when you consider these improvements:
- Much higher throughput
- Greater than 100 times lower latency
- Fewer servers and racks per petabyte needed
- Better reliability with longer useful lifetimes
- Better energy efficiency for a given level of performance
The capacity of SSDs is expected to grow to over 120TB in the next few years. As their capacities grow and the pricing gap between SSDs and HDDs narrows, these SSDs can become attractive alternatives for other workloads that demand higher than average performance or need much lower latency on large data sets, such as video editing and medical imaging diagnostics.
Conclusion

Storage performance is an important design criterion for systems running AI workloads. It affects system performance, scalability, data availability and overall system cost and power requirements. Therefore, it’s important that you understand the features and benefits of different storage options and select the best storage solution for your AI needs. By choosing the right storage solution, you can optimize your AI workloads and achieve your AI goals.
The post Why the performance of your storage system matters for AI workloads? appeared first on ELE Times.
Semiconductor Attributes for Sustainable System Design
Courtesy: Jay Nagle, Principal Product Marketing Engineer, Microchip Technology Inc.
Gain further insights on some of the key attributes required of semiconductors to facilitate sustainability in electronic systems design.
Semiconductor Innovations for Sustainable Energy Management

As systems design becomes more technologically advanced, the resultant volume increase in electronic content poses threats to environmental sustainability. Global sustainability initiatives are being implemented to mitigate these threats. However, with the rise of these initiatives, there is also an increasing demand for the generation of electricity. Thus, a new challenge emerges: how can we manage these increasing levels of energy consumption?
To answer the call for more electricity generation, it is essential for renewable energy sources to have increasing shares of energy production vs. fossil fuels to reduce greenhouse gas emissions. The efficiency of a renewable energy source hinges on optimizing the transfer of energy from the source to the power grid or various electrical loads. These loads include commonly utilized consumer electronics, residential appliances and large-scale battery energy storage systems. Furthermore, the electrical loads must utilize an optimal amount of power during operation to encourage efficient energy usage.
Read on to learn more about the key attributes of semiconductors that contribute to enhanced sustainability in system designs.
Integrated circuits (ICs) or application-specific integrated circuits (ASICs) used for renewable power conversion and embedded systems must have four key features: low power dissipation, high reliability, high power density and security.
Low Power Dissipation

One of the main characteristics needed in a semiconductor for sustainable design is low power consumption. This extends battery life, allowing longer operating times between recharges, which ultimately conserves energy.
There are two leading sources of semiconductor power loss. The first is static power dissipation or power consumption when a circuit is in stand-by or a non-operational state. The second source is dynamic power dissipation, or power consumption when the circuit is in an operational state.
To reduce both static and dynamic power dissipation, semiconductors are designed to minimize capacitance through their internal layout construction, operate at lower voltage levels and activate functional blocks depending on whether the device is in “deep sleep” stand-by or functional mode.
Microchip offers low power solutions that are energy efficient and reduce hazardous e-waste production.
High Reliability

The reliability of parts and the longevity of the system are key measures of semiconductor performance in sustainable system designs. Semiconductor reliability and longevity can be compromised by operation near the limits of the device’s temperature ratings, mechanical stresses, and torsion.
We use Very Thin Quad Flat No-Lead (VQFN) and Thin Quad Flat Pack (TQFP) packages to encapsulate complex layouts in small form factor packages to address these concerns. Exposed pads on the bottom surface of the VQFN package dissipate an adequate amount of heat, which helps to maintain a low junction-to-case thermal resistance when the device operates at maximum capacity. TQFP packages use gull-wing leads on low-profile height packages to withstand torsion and other mechanical stresses.
High Power Density

Power density refers to the amount of power generated per unit of die size. Semiconductors with high power densities can run at high power levels while being packaged in small footprints. This is common in silicon carbide (SiC) wide-bandgap (WBG) discretes and power modules used in solar, wind and electric-vehicle power-conversion applications.
SiC enhances power-conversion systems by allowing the system to operate at higher frequencies, reducing the size and weight of electrical passives needed to transfer the maximum amount of power from a renewable source.
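The size reduction from faster switching can be sketched with the standard buck-converter inductor-ripple relation. All electrical values here are illustrative assumptions, not Microchip specifications:

```python
def buck_inductance(v_in, v_out, ripple_a, f_sw_hz):
    """Required buck-converter inductance for a target current ripple:
    L = Vout * (1 - Vout/Vin) / (delta_I * f_sw)."""
    return v_out * (1 - v_out / v_in) / (ripple_a * f_sw_hz)

# Illustrative comparison: moving from a 50 kHz silicon design to a
# 500 kHz SiC design shrinks the required inductance (and therefore the
# magnetics) by 10x for the same ripple target.
l_si = buck_inductance(400, 48, 2.0, 50e3)
l_sic = buck_inductance(400, 48, 2.0, 500e3)
print(f"silicon design: {l_si * 1e6:.0f} uH, SiC design: {l_sic * 1e6:.1f} uH")
```

Because required inductance scales inversely with switching frequency, a 10x frequency increase permits a 10x smaller inductor, which is the size-and-weight benefit the text describes.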
Our WBG SiC semiconductors offer several advantages over traditional silicon devices, such as operation at higher temperatures and faster switching speeds. SiC devices’ low switching losses improve system efficiency, while their high power density reduces size and weight. They can also achieve a smaller footprint through reduced heat sink dimensions.
Security

Security in semiconductors is almost synonymous with longevity, as security features can enable continued reuse of existing systems. This means that the design can be operated for longer periods of time without the need for replacement or becoming outdated.
There are helpful security features that support system longevity. For example, secure and immutable boot can verify the integrity of any necessary software updates to enhance system performance or fix software bugs. Secure key storage and node authentication can protect against external attacks as well as ensure that verified code runs on the embedded design.
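As a simplified illustration of the boot-time integrity check described above (real secure boot verifies an asymmetric signature against a key held in immutable storage; this sketch substitutes a plain SHA-256 digest comparison):

```python
# Minimal sketch of the integrity check behind secure boot: compare the
# firmware image's digest to a trusted reference before allowing execution.
# Production secure boot uses asymmetric signatures and hardware key
# storage; this is a simplified stand-in.
import hashlib

def boot_allowed(firmware: bytes, trusted_digest: str) -> bool:
    return hashlib.sha256(firmware).hexdigest() == trusted_digest

image = b"application v1.2 binary"
good = hashlib.sha256(image).hexdigest()  # reference held in immutable storage

print(boot_allowed(image, good))              # unmodified image: True
print(boot_allowed(image + b"tamper", good))  # tampered image: False
```

The same check applied to a downloaded update image is what lets a long-lived design accept bug fixes and performance updates without accepting tampered code.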
The post Semiconductor Attributes for Sustainable System Design appeared first on ELE Times.
ADAS and autonomous vehicles with distributed aperture radar
The automotive landscape is evolving, and vehicles are increasingly defined by advanced driver-assistance systems (ADAS) and autonomous driving technologies. Moreover, radar is becoming increasingly popular for ADAS applications, offering multiple benefits over rival technologies such as cameras and LiDAR.
It’s a lot more affordable, and it also operates more efficiently in challenging conditions, such as in the dark, when it’s raining or snowing, or even when sensors are covered in dirt. As such, radar sensors have become a workhorse for today’s ADAS features such as adaptive cruise control (ACC) and automatic emergency braking (AEB).
However, improved radar performance is still needed to ensure reliability, safety, and convenience of ADAS functions. For example, the ability to distinguish between objects like roadside infrastructure and stationary people or animals, or to detect lost cargo on the road, are essential to enable autonomous driving features. Radar sensors must provide sufficient resolution and accuracy to precisely detect and localize these objects at long range, allowing sufficient reaction time for a safe and reliable operation.
A radar’s performance is strongly influenced by its size. A bigger sensor has a larger radar aperture, which typically offers a higher angular resolution. This delivers multiple benefits and is essential for the precise detection and localization of objects in next-generation safety systems.
Radar solutions for vehicles are limited by size restrictions and mounting constraints, however. Bigger sensors are often difficult to integrate into vehicles, and the advent of electric vehicles has resulted in front grills increasingly being replaced with other design elements, creating new constraints for the all-important front radar.
With its modular approach, distributed aperture radar (DAR) can play a key role in navigating such design and integration challenges. DAR builds on traditional radar technology, combining multiple standard sensors to create a solution that’s greater than the sum of its parts in terms of performance.
Figure 1 DAR combines multiple standard sensors to create a more viable radar solution. Source: NXP
The challenges DAR is addressing
To understand DAR, it’s worth looking at the challenges the technology needs to overcome. Traditional medium-range radar (MRR) sensors feature 12-16 virtual antenna channels. This technology has evolved into high-resolution radars, which provide enhanced performance by integrating far more channels onto a sensor, with the latest production-ready sensors featuring 192 virtual channels.
The next generation of high-resolution sensors might offer 256 virtual channels with innovative antenna designs and software algorithms for substantial performance gains. Alternative massive MIMO (M-MIMO) solutions are about to hit the market packing over 1,000 channels.
Simply integrating thousands of channels is incredibly hardware-intensive and power-hungry. Each channel consumes power and requires more chip and board area, contributing to additional costs. As the number of channels increases, the sensor becomes more and more expensive, while the aperture size remains limited by the physical realities of manufacturing and vehicle integration. Moreover, the large size and power consumption of an M-MIMO radar make it difficult to integrate into the vehicle’s front bumper.
Combining multiple radars to increase performance
DAR combines two or three MRR sensors, operated coherently to provide enhanced radar resolution. The use of two physically displaced sensors creates a large virtual aperture, enabling an enhanced azimuth resolution of 0.5 degrees or lower, which helps to separate closely spaced objects.
Figure 2 DAR enhances performance by integrating far more channels onto a sensor. Source: NXP
The image can be further improved using three sensors, enhancing elevation resolution to less than 1 degree. The higher-resolution radar helps the vehicle navigate complex driving scenarios while recognizing debris and other potential hazards on the road.
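A standard rule of thumb links achievable angular resolution to aperture size: theta ≈ lambda / D. The sketch below (illustrative, assuming the common 77 GHz automotive radar band, which the article does not specify) shows why sub-0.5-degree resolution calls for an aperture far larger than a single packaged sensor:

```python
# Back-of-envelope: aperture needed for a given azimuth resolution,
# using the rule of thumb theta ~= lambda / D, at 77 GHz (assumed band).
import math

C = 3e8   # speed of light, m/s
F = 77e9  # automotive radar band, Hz (assumption, not from the article)

def aperture_for_resolution(theta_deg):
    wavelength = C / F
    return wavelength / math.radians(theta_deg)

print(f"{aperture_for_resolution(0.5) * 100:.0f} cm")  # roughly 45 cm
```

A roughly 45 cm aperture is impractical for one packaged sensor but is easily spanned virtually by two sensors mounted that far apart on the vehicle, which is exactly the displacement DAR exploits.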
The signals from the sensors, based on an RFCMOS radar chip, are fused coherently to produce a significantly richer point cloud than has historically been practical. The fused signal is processed using a radar processor, which is specially developed to support distributed architectures.
Figure 3 Zendar is a software-driven DAR technology. Source: NXP
Zendar is a software-driven DAR technology, with system software developed for deployment in automobiles. Because the performance improvement comes from software, automakers can leverage low-cost, standard radar sensors yet attain performance that’s comparable to or better than top-of-the-line high-resolution radar counterparts.
How DAR compares to M-MIMO radars
M-MIMO is an alternative high-resolution radar solution that embraces the more traditional radar design paradigm, which is to use more hardware and more channels when building a radar system. M-MIMO radars feature between 1,000 and 2,000 channels, which is many multiples more than the current generation of high-resolution sensors. This helps to deliver increased point density, and the ability to sense data from concurrent sensor transmissions.
The resolution and accuracy performance of radar are limited by the aperture size of the sensor; however, M-MIMO radars with 1,500 channels have apertures that are comparable in size to high-resolution radar sensors with 192 channels. The aperture itself is limited by the sensor size, which is capped by manufacturing and packaging constraints, along with size and weight specifications.
As a result, even though M-MIMO solutions can offer more channels, DAR systems can outperform M-MIMO radars on angular resolution and accuracy performance because their aperture is not limited by sensor size. This offers significant additional integration flexibility for OEMs.
M-MIMO solutions are expensive because they use highly specialized and complex hardware to improve radar performance. The cost of M-MIMO systems and their inherently unscalable hardware-centric design make them impractical for everything but niche high-end vehicles.
Such solutions are also power-hungry due to their far greater channel counts and processing requirements. This drives expensive cooling measures to manage the radar’s thermal design, which in turn creates additional design and integration challenges.
More efficient, cost-effective solution
DAR has the potential to revolutionize ADAS and autonomous driving accessibility by using simple, efficient, and considerably more affordable hardware that makes it easy for OEMs to scale ADAS functionality across vehicle ranges.
Coherent combining of distributed radar is the only radar design approach where aperture size is not constrained by hardware, enabling an angular resolution lower than 0.5 degrees at significantly lower power dissipation. This is simply not possible in a large single sensor with thousands of antennas, and it’s particularly relevant considering OEM challenges with the proliferation of electric vehicles and the evolution of car design.
DAR’s high resolution helps it to differentiate between roadside infrastructure, objects, and stationary people or animals. It provides a higher probability of detection for debris on the road, which is essential for avoiding accidents, and it’s capable of detecting cars up to 350 m away, a substantial increase in detection range compared to current-generation radar solutions.
Figure 4 DAR’s high resolution provides a higher probability of detection for debris on the road. Source: NXP
Leveraging the significant detection range extension enabled by an RFCMOS radar chip, DAR also provides the ability to separate two very low radar cross section (RCS) objects such as cyclists, beyond 240 m, while conventional solutions start to fail around 100 m.
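As a rough, illustrative calculation (mine, not the article’s), the value of these longer detection ranges can be expressed as extra warning time, obtained by dividing range by closing speed:

```python
# Illustrative only: warning time gained from longer detection range,
# for an assumed closing speed of 130 km/h (not stated in the article).

def seconds_to_contact(distance_m, speed_kmh):
    return distance_m / (speed_kmh / 3.6)

speed = 130  # km/h, assumed highway closing speed
print(f"350 m: {seconds_to_contact(350, speed):.1f} s")  # ~9.7 s
print(f"100 m: {seconds_to_contact(100, speed):.1f} s")  # ~2.8 s
```

Detecting a low-RCS object at 240 m rather than 100 m more than triples the time available for the system, or the driver, to plan and respond.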
Simpler two-sensor DAR solutions can be used to enable more effective ACC and AEB systems for mainstream vehicles, with safety improvements helping OEMs to pass increasingly stringent NCAP requirements.
Perhaps most importantly for OEMs, DAR is a particularly cost-effective solution. The component sensors benefit from economies of scale, and OEMs can achieve higher autonomy levels by simply adding another sensor to the system, rather than resorting to complex hardware such as LiDAR or high-channel-count radar.
Because the technology relies on existing sensors, it’s also much more mature. Current ADAS systems are not fully reliable: they can disengage suddenly or prove unable to handle driving situations that require high-resolution radar to safely understand, plan, and respond. Drivers must therefore remain on standby, ready to take over control of the vehicle at any moment. The improvements offered by DAR will enable ADAS systems to be more capable, more reliable, and less dependent on human intervention.
Changing the future of driving
DAR’s effectiveness and reliability will help carmakers deliver enhanced ADAS and autonomous driving solutions that are more reliable than current offerings. With DAR, carmakers will be able to develop driving automation that is both safer and provides more comfortable experiences for drivers and their passengers.
For a new technology, DAR is already particularly robust as it relies on the mainstream radar sensors which have already been used in millions of cars over the past few years. As for the future, ADAS using DAR will become more trusted in the market as these systems provide comprehensive and comfortable assisted driving experiences at more affordable prices.
Karthik Ramesh is marketing director at NXP Semiconductors.
Related Content
- Radar Basics: Range, Pulse, Frequency, and More
- Is Digital Radar the Answer to ADAS Interference?
- Cameras, Radars, LiDARs: Sensing the Road Ahead
- Challenges in designing automotive radar systems
- Automated Driving Is Transforming the Sensor and Computing Market
- Implementing digital processing for automotive radar using SoC FPGAs
The post ADAS and autonomous vehicles with distributed aperture radar appeared first on EDN.
Pulsus Is a Breakthrough for PiezoMEMS Devices
Courtesy: Lam Research
- The tool enables the deposition of high-quality, highly scandium-doped AlScN films
- Features include dual-chamber configuration, degas, preclean, target library, precise laser scanning, and more
In this post, we explain how the Pulsus system works, and how it can achieve superior film quality and performance compared to conventional technologies.
PiezoMEMS devices are microelectromechanical systems that use piezoelectric materials to convert electrical energy into mechanical motion, or vice versa. They have applications in a wide range of fields, including sensors, actuators, microphones, speakers, filters, switches, and energy harvesters.
PiezoMEMS devices require high-quality thin films of piezoelectric materials, such as aluminum scandium nitride (AlScN), to achieve optimal performance. Conventional deposition technologies—think sputtering or chemical vapor deposition—face challenges in producing AlScN films with desired properties, such as composition, thickness, stress, and uniformity. These obstacles limit both the scalability and functionality of piezoMEMS devices.
Revolutionary Tech
To help overcome these challenges, Lam Research recently introduced Pulsus, a pulsed laser deposition (PLD) system that we hope will revolutionize the world of piezoMEMS applications. The addition of Pulsus PLD to the Lam portfolio further expands our comprehensive range of deposition, etch and single wafer clean products focused on specialty technologies and demonstrates Lam’s continuous innovation in this sector.
Pulsus is a PLD process module that has been optimized and integrated on Lam’s production-proven 2300 platform. It was developed to enable the deposition of high-quality AlScN films, which are essential to produce piezoMEMS devices.
A key benefit of the Pulsus system is its ability to deposit multi-element thin films, like highly scandium-doped AlScN. The intrinsic high plasma density—in combination with pulsed growth—creates the conditions to stabilize the elements in the same ratio as they arrive from the target. This control is essential for depositing materials where the functionality of the film is driven by the precise composition of the elements.
Plasma, Lasers
Local plasma allows for high local control of film specifications across the wafer, like thickness and local in-film stress. Pulsus can adjust deposition settings while the plasma “hovers” over the wafer surface. This local tuning of thickness and stress allows for high uniformities over the wafer, which is exactly what our customers are asking for. And because the plasma is generated locally, Pulsus uses targets that are much smaller than you would typically see in PVD systems. Pulsus can exchange these smaller targets, without breaking vacuum, through a target exchange module—the target library.
Pulsus uses a pulsed high-power laser to ablate a target material, in this case AlScN, and create a plasma plume. The plume expands and impinges on a substrate, where it forms a thin film.
Pulsus has a fast and precise laser optical path which, in combination with the target scanning mechanism, allows for uniform and controlled ablation of the target material. The system tightly controls plasma plume generation, wafer temperature, and pressure to achieve the desired film composition and stoichiometry.
By combining these features, Pulsus can produce high-quality films with superior performance for piezoMEMS devices. Pulsus can achieve excellent composition control, with low variation of the scandium (Sc) content across the wafer and within individual devices. It also delivers high film uniformity, with low within-wafer (WiW) and wafer-to-wafer (WtW) variation in film thickness and stress.
Breakthrough Technology
Pulsus is a breakthrough technology for AlScN deposition, which can improve film quality and performance for piezoMEMS applications. In addition, Pulsus has the potential to enhance the functionality and scalability of piezoMEMS devices. The Pulsus technology deposits AlScN films with very high Sc concentration, resulting in high piezoelectric coefficients, which drive higher device sensitivity and output. These films feature tunable stress states to enable the design of different device configurations and shapes.
Pulsus is currently in use on 200 mm wafers and is planned to expand to 300 mm wafers in the future—a move that has the potential to increase the productivity and yield of piezoMEMS devices.
The post Pulsus Is a Breakthrough for PiezoMEMS Devices appeared first on ELE Times.
eevBLAB 120 - DCS sues Youtuber for Defamation! - Threatens entire review industry
Navitas’s Q2 revenue and gross margin at higher end of guidance
4K and beyond: Trends that are shaping India’s home projector market
Sushil Motwani, founder of Aytexcel Pvt Ltd, also evaluates the change in customer preferences that is driving the growth of the home entertainment segment
Sushil Motwani, Founder of Aytexcel Pvt. Ltd. and Official India Representative of Formovie
Recent news reports indicate that a few leading companies in the home entertainment industry are in discussions with major production studios to ensure 8K resolution content, which offers extremely high-definition video quality. This means that the availability of 8K content is on the verge of becoming normative. For the modern consumer looking for the best visual experience, this is an exciting prospect.
Even though the availability of 8K content is currently minimal, many projectors boosted by technologies like Artificial Intelligence (AI) can upscale 4K content. While this upscaling cannot match the true quality of native 8K, improved versions are expected in the coming years.
In the case of 4K and beyond, devices like laser projectors are continually evolving to match user preferences. Until the COVID-19 pandemic, laser projectors were mainly used for business presentations, in the education sector and at screening centres. However, with the rise of more OTT platforms and the availability of 4K content, there has been a huge demand for home theatres, where projector screens have replaced traditional TVs.
According to Statista, the number of households in India using home entertainment systems, such as home theatres, projectors and advanced TVs, is expected to reach 26.2 million by 2028. The revenue in this segment is projected to show a compound annual growth rate (CAGR) of 3.70 per cent, resulting in an estimated market volume of US$0.7 billion by 2028.
So, what are the key trends driving the home projector market in India? Visual quality is definitely one of them. Modern consumers demand upgraded display technologies like the Advanced Laser Phosphor Display® (ALPD). This innovative display combines laser-excited fluorescent materials with multi-colour lasers, resulting in a smooth and vividly coloured display, superior to regular projectors.
Multifunctionality is another key requirement for gamers. When transitioning from PCs to projector-driven gaming, consumers look for a large screen size, preferably 120 inches and above, high resolution, low input lag, quick refresh rate and excellent detailing and contrast.
With the integration of AI and Machine Learning (ML) tools, manufacturers are developing projectors with more user-friendly features and automatic settings that adjust to surrounding light conditions based on the displayed content. AI also helps improve security features and facilitates personalised user modes, while predictive maintenance makes the devices more intuitive and efficient.
Projectors with a multifaceted interface are also a popular choice. Voice assistance features enable users to connect their large-screen setups with other smart devices. The user experience is enhanced by options such as Alexa or voice commands through Google Assistant using a Google Home device or an Android smartphone. Multiple connectivity options, including HDMI, USB, Bluetooth and Wi-Fi facilitate smooth handling of these devices. Consumers also prefer projectors with native app integrations, like Netflix, to avoid external setups while streaming content.
There is also a desire among users to avoid messy cables and additional devices, which not only affect the convenience of installation but also impact the aesthetics of the interiors. This is why Ultra Short Throw (UST) projectors, which can offer a big screen experience even in small spaces, are emerging as a top choice. Some of these projectors can throw a 100-inch projection with an ultra-short throw distance of just 9 inches from the wall.
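The 9-inch, 100-inch example above translates into a throw ratio (throw distance divided by image width) of about 0.1, roughly an order of magnitude below conventional projectors; a quick check, assuming a 16:9 image:

```python
# Sketch: throw ratio (distance / image width) for the UST example in
# the text: a 100-inch diagonal 16:9 image from 9 inches off the wall.
import math

def throw_ratio(distance_in, diagonal_in, aspect=(16, 9)):
    w, h = aspect
    width = diagonal_in * w / math.hypot(w, h)  # diagonal -> image width
    return distance_in / width

print(f"{throw_ratio(9, 100):.2f}")  # ~0.10
```

A conventional projector with a throw ratio around 1.5 would need roughly 11 feet of clear space to cast the same 100-inch image, which is exactly the installation problem UST designs remove.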
And finally, nothing can deliver a true cinematic experience like a dedicated surround sound system. But customers also want to avoid the additional setup of soundbars and subwoofers for enhanced sound. Since most movies are now supported by Dolby Atmos 7.1 sound, the home theatre segment is also looking for similar sound support. Projectors integrated with Dolby Atmos sound, powered by speakers from legendary manufacturers like Bowers & Wilkins, Yamaha, or Wharfedale, are key attractions for movie lovers and gamers.
Buyers are also looking for eye-friendly projectors equipped with features like infrared body sensors and diffuse reflection. The intelligent light-dimming and eye care technologies make their viewing experience more comfortable and reduce eye strain, especially during prolonged sessions like gaming.
The growing popularity of projectors is also attributed to the increasing focus on sustainability. Laser projectors are more energy-efficient than traditional lamp-based projectors. They use almost 50 per cent less power compared to the latter, which helps in energy savings and reduces the overall environmental impact. They are also very compact and made with sustainable and recycled materials, which minimises the logistical environmental impact and carbon footprint associated with their operation.
The post 4K and beyond: Trends that are shaping India’s home projector market appeared first on ELE Times.
Memory Leaders Rise to Meet the Storage Challenges of AI
🎥 State-of-the-art equipment from Huawei for KPI
Kyiv Polytechnic has received energy equipment from Huawei for the NN IEE institute and state-of-the-art equipment for the DATACOM laboratory at the Radio Engineering Faculty (RTF).
Client DIMM chipset reaches 7200 MT/s
A memory interface chipset from Rambus enables DDR5 client CSODIMMs and CUDIMMs to operate at data rates of up to 7200 MT/s. This product offering includes a DDR5 client clock driver (CKD) and a serial presence detect (SPD) hub, bringing server-like performance to the client market.
The DDR5 client clock driver, part number DR5CKD1GC0, buffers the clock between the host controller and the DRAMs on DDR5 CUDIMMs and CSODIMMs. It receives up to four differential input clock pairs and supplies up to four differential output clock pairs. The device can operate in single PLL, dual PLL, and PLL bypass modes, supporting clock frequencies from 1600 MHz to 3600 MHz (DDR5-3200 to DDR5-7200). An I2C/I3C sideband bus interface allows device configuration and status monitoring.
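The clock-frequency-to-data-rate mapping quoted above follows directly from double-data-rate signaling, which transfers data on both clock edges, so the data rate in MT/s is twice the clock frequency in MHz:

```python
# DDR transfers data on both clock edges, so data rate (MT/s) is twice
# the clock frequency (MHz): 1600 MHz -> DDR5-3200, 3600 MHz -> DDR5-7200.

def ddr5_data_rate(clock_mhz):
    return 2 * clock_mhz

for clk in (1600, 2400, 3600):
    print(f"{clk} MHz clock -> DDR5-{ddr5_data_rate(clk)}")
```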
Equipped with an internal temperature sensor, the SPD5118-G1B SPD hub senses and reports important data for system configuration and thermal management. The SPD hub contains 1024 bytes of nonvolatile memory arranged as 16 blocks of 64 bytes per block. Each block can be optionally write-protected via software command.
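The block organization described above can be sketched as follows. The class and method names here are illustrative, not taken from the SPD5118-G1B datasheet:

```python
# Sketch of the NVM layout described in the text: 1024 bytes arranged as
# 16 blocks of 64 bytes, with optional per-block write protection.
# Names are illustrative, not from the SPD5118-G1B datasheet.

BLOCK_SIZE = 64
NUM_BLOCKS = 16

class SpdNvm:
    def __init__(self):
        self.mem = bytearray(BLOCK_SIZE * NUM_BLOCKS)  # 1024 bytes total
        self.protected = set()  # indices of write-protected blocks

    def write(self, offset, data):
        first = offset // BLOCK_SIZE
        last = (offset + len(data) - 1) // BLOCK_SIZE
        if any(b in self.protected for b in range(first, last + 1)):
            raise PermissionError("block is write-protected")
        self.mem[offset:offset + len(data)] = data

nvm = SpdNvm()
nvm.write(0, b"\x12")       # succeeds
nvm.protected.add(0)        # protect block 0 (bytes 0-63)
try:
    nvm.write(10, b"\x34")  # offset 10 lands in block 0 -> rejected
except PermissionError as e:
    print(e)
```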
The DR5CKD1GC0 client clock driver is now sampling, while the SPD5118-G1B SPD hub is already in production. To learn more about the DDR5 client DIMM chipset, click here.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post Client DIMM chipset reaches 7200 MT/s appeared first on EDN.
Reference design trio covers EV chargers
Microchip has released three flexible and scalable EV charger reference designs for residential and commercial charging applications. These reference designs include a single-phase AC residential model, a three-phase AC commercial model that uses the Open Charge Point Protocol (OCPP) and a Wi-Fi SoC, and a three-phase AC commercial model with OCPP and a display.
The reference designs offer complete hardware design files and source code with software stacks that are tested and compliant with communication protocols, such as OCPP. OCPP provides a standard protocol for communication between charging stations and central systems, ensuring interoperability across networks and vendors.
Most of the active components for the reference designs, including the MCU, analog front-end, memory, connectivity, and power conversion, are available from Microchip. This streamlines integration and accelerates time to market for new EV charging systems.
The residential reference design is intended for home charging with a single-phase supply. It supports power up to 7.4 kW with an on-board relay and driver. The design also features an energy metering device with automatic calibration and two Bluetooth LE stacks.
The three-phase commercial reference design, aimed at high-end residential and commercial stations, integrates an OCPP 1.6 stack for network communication and a Wi-Fi SoC for remote management. It supports power up to 22 kW.
Catering to commercial and public stations, the three-phase commercial reference design with OCPP and a TFT touch-screen display supports bidirectional charging up to 22 kW.
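As a sanity check on the quoted power levels (assuming European 230 V phase and 400 V line voltages on a 32 A circuit, figures the announcement does not state), the single-phase and three-phase numbers fall out of the standard AC power relations:

```python
# Where the 7.4 kW and 22 kW figures plausibly come from, assuming
# 230 V phase / 400 V line voltage and 32 A (not stated by Microchip).
import math

def single_phase_kw(v_phase, amps):
    return v_phase * amps / 1000

def three_phase_kw(v_line, amps):
    return math.sqrt(3) * v_line * amps / 1000

print(f"{single_phase_kw(230, 32):.1f} kW")  # ~7.4 kW residential
print(f"{three_phase_kw(400, 32):.1f} kW")   # ~22.2 kW commercial
```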
To learn more about Microchip’s EV charger reference designs, click here.
The post Reference design trio covers EV chargers appeared first on EDN.
Infineon expands GaN transistor portfolio
Infineon has launched the CoolGaN Drive family, featuring single switches and half-bridges with integrated drivers for compact, efficient designs. The family includes CoolGaN Drive 700-V G5 single switches, which integrate a transistor and gate driver in PQFN 5×6 and PQFN 6×8 packages. It also offers CoolGaN Drive HB 600-V G5 devices, which combine two transistors with high-side and low-side gate drivers in a LGA 6×8 package.
Depending on the product group, CoolGaN Drive components include a bootstrap diode, loss-free current measurement, and adjustable dV/dt. They also provide overcurrent, overtemperature, and short-circuit protection.
These devices support higher switching frequencies, leading to smaller, more efficient systems with reduced BoM, lower weight, and a smaller carbon footprint. The GaN HEMTs are suitable for longer-range e-bikes, portable power tools, and lighter-weight household appliances, such as vacuums, fans, and hairdryers.
Samples of the half-bridge devices are available now. Single-switch samples will be available starting Q4 2024. For more information about Infineon’s GaN HEMT lineup, click here.
The post Infineon expands GaN transistor portfolio appeared first on EDN.
5G-enabled SBC packs AI accelerator
Tachyon, a Snapdragon-powered single-board computer (SBC) from Particle, boasts 5G connectivity and an NPU for AI/ML workloads. This credit-card-sized board provides the compute power and connectivity of a midrange smartphone in a Raspberry Pi form factor, supported by Particle’s edge-to-cloud IoT infrastructure.
At the heart of Tachyon is the Qualcomm Snapdragon QCM6490 SoC, featuring an octa-core Kryo CPU, Adreno 643 GPU, and an NPU for AI acceleration at a rate of up to 12 TOPS. The chipset also provides upstream Linux support, as well as support for Android 13 and Windows 11. Wireless connectivity includes 5G cellular and Wi-Fi 6E with on-device antennas. Ample memory and storage are provided by 4 GB of RAM and 64 GB of flash.
Tachyon has two USB-C 3.1 connectors. One of these supports DisplayPort Alt Mode, which allows the connection of a USB-C capable monitor (up to 4K). Particle also offers a USB-C hub to add USB ports, HDMI, and a gigabit Ethernet port. The computer board includes a Raspberry Pi-compatible 40-pin connector and support for cameras, displays, and PCIe peripherals connected via ribbon cables.
Tachyon is now available for pre-order on Kickstarter. Early bird prices start at $149. Shipments are expected to begin in January 2025. To learn more about the Tachyon SBC, click here.
The post 5G-enabled SBC packs AI accelerator appeared first on EDN.
AI tweaks presence sensor accuracy
Joining Aqara’s smart home sensor lineup is the FP1E, which combines mmWave technology and AI algorithms to enable precise human sensing. The FP1E, which supports Zigbee and Matter, detects human presence, even when the person is sitting or lying still.
Useful for various home automation scenarios, the FP1E detects presence up to 6 meters away and monitors rooms up to 50 square meters when ceiling-mounted. It can detect when someone leaves a room within seconds, automatically triggering actions such as turning off the lights or air conditioner.
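As a rough geometry check (my arithmetic, not an Aqara specification), the 50-square-meter ceiling-mounted coverage figure corresponds to a circle of roughly 4 m radius, comfortably inside the sensor’s 6 m detection range:

```python
# Rough check: a 50 m^2 circular coverage area implies a ~4 m radius.
import math

area_m2 = 50
radius = math.sqrt(area_m2 / math.pi)
print(f"{radius:.1f} m")  # ~4.0 m
```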
The FP1E sensor uses AI algorithms to distinguish between relevant movements and false triggers, eliminating interference from small pets, reflections, and electronics to reduce unnecessary alerts. AI learning capabilities enhance detection accuracy through continuous learning, adapting to the user’s home environment over time.
The FP1E presence sensor is now available from Aqara’s Amazon brand stores for $50, as well as from select Aqara retailers worldwide. An Aqara hub, sold separately, is required for operation.
The post AI tweaks presence sensor accuracy appeared first on EDN.