ELE Times


Mission NIMpossible: Decoding the Microservices That Accelerate Generative AI

Fri, 08/09/2024 - 13:45

Sama Bali, Senior Product Marketer for AI Solutions at NVIDIA

Run NVIDIA NIM generative AI microservices locally on NVIDIA RTX AI workstations and NVIDIA GeForce RTX systems.

In the rapidly evolving world of artificial intelligence, generative AI is captivating imaginations and transforming industries. Behind the scenes, an unsung hero is making it all possible: microservices architecture.

The Building Blocks of Modern AI Applications

Microservices have emerged as a powerful architecture, fundamentally changing how people design, build and deploy software.

A microservices architecture breaks down an application into a collection of loosely coupled, independently deployable services. Each service is responsible for a specific capability and communicates with other services through well-defined application programming interfaces, or APIs. This modular approach stands in stark contrast to traditional all-in-one architectures, in which all functionality is bundled into a single, tightly integrated application.

By decoupling services, teams can work on different components simultaneously, accelerating development processes and allowing updates to be rolled out independently without affecting the entire application. Developers can focus on building and improving specific services, leading to better code quality and faster problem resolution. Such specialization allows developers to become experts in their particular domain.

Services can be scaled independently based on demand, optimizing resource utilization and improving overall system performance. In addition, different services can use different technologies, allowing developers to choose the best tools for each specific task.

A Perfect Match: Microservices and Generative AI

The microservices architecture is particularly well-suited for developing generative AI applications due to its scalability, enhanced modularity and flexibility.

AI models, especially large language models, require significant computational resources. Microservices allow for efficient scaling of these resource-intensive components without affecting the entire system.

Generative AI applications often involve multiple steps, such as data preprocessing, model inference and post-processing. Microservices enable each step to be developed, optimized and scaled independently. Plus, as AI models and techniques evolve rapidly, a microservices architecture allows for easier integration of new models as well as the replacement of existing ones without disrupting the entire application.

NVIDIA NIM: Simplifying Generative AI Deployment

As the demand for AI-powered applications grows, developers face challenges in efficiently deploying and managing AI models.

NVIDIA NIM inference microservices provide models as optimized containers to deploy in the cloud, data centers, workstations, desktops and laptops. Each NIM container includes the pretrained AI models and all the necessary runtime components, making it simple to integrate AI capabilities into applications.

NIM offers a game-changing approach for application developers looking to incorporate AI functionality by providing simplified integration, production-readiness and flexibility. Developers can focus on building their applications without worrying about the complexities of data preparation, model training or customization, as NIM inference microservices are optimized for performance, come with runtime optimizations and support industry-standard APIs.
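
To give a feel for that industry-standard API surface, here is a minimal sketch of a chat request to a NIM inference microservice assumed to be already running on the local machine. The endpoint URL, port and model name are illustrative placeholders following the OpenAI-style convention that NIM containers expose; check your container's documentation for the exact values.

```python
import requests

# Hypothetical local NIM endpoint; adjust the host, port and path to match your container.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    # Illustrative model identifier; use the one reported by your NIM container.
    "model": "meta/llama3-8b-instruct",
    "messages": [
        {"role": "user", "content": "Summarize what a microservice is in one sentence."}
    ],
    "max_tokens": 128,
    "temperature": 0.2,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()

# OpenAI-style responses carry the generated text under choices[0].message.content.
print(response.json()["choices"][0]["message"]["content"])
```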

AI at Your Fingertips: NVIDIA NIM on Workstations and PCs

Building enterprise generative AI applications comes with many challenges. While cloud-hosted model APIs can help developers get started, issues related to data privacy, security, model response latency, accuracy, API costs and scaling often hinder the path to production.

Workstations with NIM provide developers with secure access to a broad range of models and performance-optimized inference microservices.

By avoiding the latency, cost and compliance concerns associated with cloud-hosted APIs as well as the complexities of model deployment, developers can focus on application development. This accelerates the delivery of production-ready generative AI applications — enabling seamless, automatic scale out with performance optimization in data centers and the cloud.

The recently announced general availability of the Meta Llama 3 8B model as a NIM, which can run locally on RTX systems, brings state-of-the-art language model capabilities to individual developers, enabling local testing and experimentation without the need for cloud resources. With NIM running locally, developers can create sophisticated retrieval-augmented generation (RAG) projects right on their workstations.

Local RAG refers to implementing RAG systems entirely on local hardware, without relying on cloud-based services or external APIs.

Developers can use the Llama 3 8B NIM on workstations with one or more NVIDIA RTX 6000 Ada Generation GPUs or on NVIDIA RTX systems to build end-to-end RAG systems entirely on local hardware. This setup allows developers to tap the full power of Llama 3 8B, ensuring high performance and low latency.

By running the entire RAG pipeline locally, developers can maintain complete control over their data, ensuring privacy and security. This approach is particularly helpful for developers building applications that require real-time responses and high accuracy, such as customer-support chatbots, personalized content-generation tools and interactive virtual assistants.
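
Below is a minimal local-RAG sketch under two stated assumptions: the Llama 3 8B NIM is already serving an OpenAI-style endpoint on the workstation, and a simple TF-IDF retriever from scikit-learn stands in for a production embedding model and vector database. The documents, URL and model name are illustrative.

```python
import requests
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy document store; in a real project these would come from your own data.
documents = [
    "NIM microservices package pretrained models and their runtime as containers.",
    "RTX 6000 Ada Generation GPUs provide large VRAM for local LLM inference.",
    "Retrieval-augmented generation grounds model answers in retrieved documents.",
]

# Lightweight stand-in for an embedding model plus vector database.
vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def ask(query: str) -> str:
    context = "\n".join(retrieve(query))
    payload = {
        "model": "meta/llama3-8b-instruct",  # illustrative model name
        "messages": [
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": query},
        ],
        "max_tokens": 256,
    }
    # Local NIM endpoint serving the language model.
    r = requests.post("http://localhost:8000/v1/chat/completions", json=payload, timeout=120)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

print(ask("What does a NIM container include?"))
```

Pointing the same request at a NIM endpoint hosted in the cloud or data center, while keeping retrieval local, is essentially the hybrid arrangement described below.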

Hybrid RAG combines local and cloud-based resources to optimize performance and flexibility in AI applications. With NVIDIA AI Workbench, developers can get started with the hybrid-RAG Workbench Project — an example application that can be used to run vector databases and embedding models locally while performing inference using NIM in the cloud or data center, offering a flexible approach to resource allocation.

This hybrid setup allows developers to balance the computational load between local and cloud resources, optimizing performance and cost. For example, the vector database and embedding models can be hosted on local workstations to ensure fast data retrieval and processing, while the more computationally intensive inference tasks can be offloaded to powerful cloud-based NIM inference microservices. This flexibility enables developers to scale their applications seamlessly, accommodating varying workloads and ensuring consistent performance.

NVIDIA ACE NIM inference microservices bring digital humans, AI non-playable characters (NPCs) and interactive avatars for customer service to life with generative AI, running on RTX PCs and workstations.

ACE NIM inference microservices for speech — including Riva automatic speech recognition, text-to-speech and neural machine translation — allow accurate transcription, translation and realistic voices.

The NVIDIA Nemotron small language model is a NIM for intelligence that includes INT4 quantization for minimal memory usage and supports roleplay and RAG use cases.

And ACE NIM inference microservices for appearance include Audio2Face and Omniverse RTX for lifelike animation with ultrarealistic visuals. These provide more immersive and engaging gaming characters, as well as more satisfying experiences for users interacting with virtual customer-service agents.

Dive Into NIM

As AI progresses, the ability to rapidly deploy and scale its capabilities will become increasingly crucial.

NVIDIA NIM microservices provide the foundation for this new era of AI application development, enabling breakthrough innovations. Whether building the next generation of AI-powered games, developing advanced natural language processing applications or creating intelligent automation systems, users can access these powerful development tools at their fingertips.


Futureproof Your Industrial Network Security

Fri, 08/09/2024 - 13:27

Courtesy: Moxa

Today, industrial organizations are embracing digital transformation to gain a competitive edge and boost business revenue. To achieve digital transformation, industrial operators must first address the daunting task of merging their information technology (IT) and operational technology (OT) infrastructure. However, businesses trying to streamline data connectivity for integrated IT/OT systems often encounter challenges such as insufficient performance, limited network visibility, and weak security in their existing OT network infrastructure. Building a robust, high-performance network for daily operations that is easy to maintain requires thorough planning. In this article, we will focus on the importance of strong OT network security and provide some tips on how to strengthen cybersecurity for industrial operations.

Why Ramping Up OT Network Security Is a Must

Industrial applications now face an unprecedented volume of cyberthreats. These threats often target critical infrastructure across industries worldwide, including energy, transportation, and water and wastewater services. If successful, such attacks can cause significant damage to industrial organizations in the form of high recovery costs or production delays. Before building IT/OT converged networks, asset owners must define the target security level of the entire network and strengthen measures to minimize the impact of potential intrusions. Poor network security exposes critical field assets to unwanted access and allows malicious actors to breach integrated systems.

However, strengthening OT network security is not that straightforward. IT security solutions require constant updates to ensure they can protect against the latest cyberthreats. Applying these necessary updates often means interrupting network services and systems, which is something OT operations cannot afford. Operators need an OT-centric cybersecurity approach to protect their industrial networks without sacrificing network or operational uptime.

Three Major Stages of Building OT Cybersecurity

Building a secure industrial network can be done with the right approach. The key to strong cybersecurity is implementing a multi-layered defense strategy in several stages.

Stage One: Build a Solid Foundation with Secure Networking Devices

When developing secure networking infrastructure, start with choosing secure building blocks. The increasing number of cyberthreats has also led to the development of comprehensive OT network security standards. Industrial cybersecurity standards, such as NIST CSF and IEC 62443, provide security guidelines for critical assets, systems, and components. Implementing industrial cybersecurity standards and using networking devices designed around these standards provides asset owners with a solid foundation for building secure network infrastructure.

Stage Two: Deploy OT-centric Layered Protection

The idea of defense-in-depth is to provide multi-layered protection by implementing cybersecurity measures at every level to minimize security risks. In the event of an intrusion, if one layer of protection is compromised, another layer prevents the threat from further affecting the network. In addition, instant notifications for security events allow users to quickly respond to potential threats and mitigate any risk.

When deploying multi-layered network protection for OT networks and infrastructure, there are two key OT cybersecurity solutions to consider, namely industrial firewalls and secure routers.

Shield Critical Assets with Industrial Firewalls

An efficient way to protect critical field assets is using industrial firewalls to create secure network zones and defend against potential threats across the network. With every connected device being the potential target of cyberthreats, it’s important to deploy firewalls with robust traffic filtering that allow administrators to set up secure conduits throughout the network. Next-generation firewalls feature advanced security functions such as Intrusion Detection/Prevention Systems (IDS/IPS) and Deep Packet Inspection (DPI) to strengthen network protection against intrusions by proactively detecting and blocking threats.

Advanced security functions tailored for OT environments help ensure seamless communications and maximum uptime for industrial operations. For example, OT-centered DPI technology that supports industrial protocols can detect and block unwanted traffic, ensuring secure industrial protocol communications. In addition, industrial-grade IPS can support virtual patching to protect critical assets and legacy devices from the latest known threats without affecting network uptime. Designed for industrial applications, IPS provides pattern-based detection for PLCs, HMIs, and other common field site equipment.
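
As a purely conceptual sketch of protocol-aware filtering (not Moxa's implementation), the snippet below allowlists two Modbus TCP function codes and drops everything else. The function-code values come from the public Modbus specification, while the frame handling is deliberately simplified.

```python
# Conceptual OT-centric deep packet inspection: allow only the Modbus function
# codes an operator expects from a given zone and drop the rest. Real industrial
# firewalls track full protocol state; this toy version inspects a single byte.

ALLOWED_FUNCTION_CODES = {
    0x03,  # Read Holding Registers
    0x04,  # Read Input Registers
}

def inspect_modbus_frame(frame: bytes) -> str:
    """Return 'allow' or 'drop' for a simplified Modbus TCP frame.

    A Modbus TCP frame starts with a 7-byte MBAP header, followed by the PDU
    whose first byte is the function code.
    """
    if len(frame) < 8:
        return "drop"  # malformed or truncated frame
    if frame[7] in ALLOWED_FUNCTION_CODES:
        return "allow"
    return "drop"      # e.g. writes or diagnostics not expected from this zone

# Example: a read request passes, a write request (function code 0x06) is dropped.
read_request = bytes([0, 1, 0, 0, 0, 6, 1, 0x03, 0, 0, 0, 2])
write_request = bytes([0, 2, 0, 0, 0, 6, 1, 0x06, 0, 1, 0, 5])
print(inspect_modbus_frame(read_request), inspect_modbus_frame(write_request))
```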

Fortify Network Boundaries with Industrial Secure Routers

IT/OT converged networks require a multi-layered and complex industrial network infrastructure to transmit large amounts of data from field sites to the control center. Deploying powerful industrial secure routers between different networks can both fortify network boundaries and maintain solid network performance. Featuring built-in advanced security functions such as firewall and NAT, secure routers allow administrators to establish secure network segments and enable data routing between segments. For optimal network performance, a powerful industrial secure router features both switching and routing functions with Gigabit speeds, alongside redundancy measures for smooth intra- and inter-network communication.

The demand for remote access to maintain critical assets and networks has also been on the rise. Industrial secure routers with VPN support allow maintenance engineers and network administrators to access private networks remotely through a secure tunnel, enabling more efficient remote management.

Stage Three: Monitor the Network Status and Identify Cyberthreats

Deploying a secure industrial network is just the start of the journey towards robust cybersecurity. During daily operations, it takes a lot of time and effort for network administrators to have full network visibility, monitor traffic, and manage the countless networking devices. Implementing a centralized network management platform can provide a huge boost to operational efficiency by visualizing the entire network and simplifying device management. It also allows network administrators to focus more resources on ramping up network and device security.

In addition, a centralized network security management platform for cybersecurity solutions can boost efficiency even more. Such software allows administrators to perform mass deployments for firewall policies, monitor cyberthreats, and configure notifications for when threats occur. The right combination of cybersecurity solutions and management software offers administrators an invaluable way to monitor and identify cyberthreats with a holistic view.

Futureproof Network Security with Our Solutions

Network security is imperative for industrial network infrastructure. Moxa has translated over 35 years of industrial networking experience into a comprehensive OT-centric cybersecurity portfolio that offers enhanced security with maximum network uptime. Moxa is an IEC 62443-4-1 certified industrial connectivity and networking solutions provider. When developing our products, we adhere to the security principles of the IEC 62443-4-2 standard to ensure secure product development. Our goal is to provide our users with the tools necessary to build robust device security for their industrial applications.

To defend against increasing cyberthreats, our OT-focused cybersecurity solutions maximize uptime while protecting industrial networks from intruders. Our network management software simplifies management for networking devices and OT cybersecurity solutions, allowing administrators to monitor the network security status and manage cyberthreats with ease.


Why the performance of your storage system matters for AI workloads

Fri, 08/09/2024 - 13:11

Courtesy: Micron

A guide to understanding some key factors that influence the speed and efficiency of your data storage

Data is the lifeblood of any modern business, and how you store, access and manage it can make a dramatic difference in your productivity, profitability and competitiveness. The emergence of artificial intelligence (AI) is transforming every industry and forcing businesses to re-evaluate how they can use data to accelerate innovation and growth. However, AI training and inferencing pose unique challenges for data management and storage, as they require massive amounts of data, high performance, scalability and availability.

Not all storage systems are created equal, and many factors can affect their performance. In this blog post, we will explore some of the main factors that influence storage system performance for AI and, importantly, how your choice of underlying storage media will affect them.

Key attributes of AI workloads

AI workloads are data-intensive and compute-intensive, meaning that they need to process large volumes of data at high speed and with low latency. Storage plays a vital role in enabling AI workloads to access, ingest, process and store data efficiently and effectively. Some key attributes of typical AI workloads that affect storage requirements are:

  • Data variety: AI workloads need to access data from multiple sources and formats, such as structured, unstructured or semi-structured data, and from various locations, such as on-premises, cloud or edge. Storage solutions need to provide fast and reliable data access and movement across different environments and platforms.
  • Data velocity: AI workloads need to process data in real-time or near-real-time. Storage solutions need to deliver high throughput, low latency and consistent performance for data ingestion, processing and analysis.
  • Data volume: As AI models grow in complexity and accuracy and GPU clusters grow in compute power, their storage solutions need to provide flexible and scalable capacity and performance.
  • Data reliability and availability: AI workloads need to ensure data integrity, security and extremely high availability, particularly when connected to large GPU clusters that are intolerant of interruptions in data access.
Factors that affect storage system performance

Storage system performance is not a single metric but a combination of several factors that depend on the characteristics and requirements of your data, applications and data center infrastructure. Some of the most crucial factors are:

  • Throughput: The rate at which your storage system can transfer data to and from the network or the host. Higher throughput can improve performance by increasing the bandwidth and reducing the congestion and bottlenecks of your data flow. The throughput is usually limited by either the network bandwidth or the speed of the storage media.
  • Latency: The time it takes for your storage system to respond to a read or write request. A lower latency can improve performance by reducing GPU idle time and improving the system’s responsiveness to user inputs. The latency of mechanical devices (such as HDDs) is inherently much higher than for solid-state devices (SSDs).
  • Scalability: The ability of your storage system to adapt to changes in data volume, velocity and variety. High scalability is key to enabling your storage system to grow and evolve with your business needs and goals. The biggest challenge to increasing the amount of data that your system can store and manage is maintaining performance scaling without hitting bottlenecks or storage device limitations.
  • Resiliency: The ability of your storage system to maintain data integrity and availability in the event of failures, errors or disasters. Greater resiliency can improve performance by reducing the frequency and impact of data corruption, loss and recovery.
Storage media alternatives

Hard disk drives (HDDs) and solid-state drives (SSDs) are the two main types of devices employed for persistent storage in data center applications. HDDs are mechanical devices that use rotating disk platters with a magnetic coating to store data, while SSDs use solid-state flash memory chips to store data. HDDs have been the dominant storage devices for decades. HDDs offer the lowest cost per bit and long-term, power-off durability, but they are slower and less reliable than SSDs. SSDs offer higher throughputs, lower latencies, higher reliability and denser packaging options.

As technology advances and computing demands increase, the mechanical nature of the HDD may not allow it to keep pace in performance. There are a few options that system designers can deploy to extend the effective performance of HDD-based storage systems, such as mixing hot and cold data (hot data borrowing performance from the colder data), sharing data across many HDD spindles in parallel (increasing throughput but not improving latency), overprovisioning HDD capacity (in essence provisioning for IO rather than capacity), and adding SSD caching layers for latency outliers (see Steve Wells' recent Micron blog, "HDDs and SSDs: What are the right questions?"). These system-level solutions have limited scalability before their cost becomes prohibitive. How far they can be extended depends on the level of performance an application requires. For many of today's AI workloads, HDD-based systems are falling short on scalability of performance and power efficiency.

High-capacity, SSD-based storage systems, though, can provide a less complex and more extendable solution, and they are rapidly emerging as the storage media of choice for high-performance AI data lakes at many large GPU-centric data centers. At the drive level, on a cost-per-bit basis, these SSDs are more expensive than HDDs, but at a system level, systems built with these SSDs can have better operating costs than HDD-based systems when you consider these improvements:

  • Much higher throughput
  • Greater than 100 times lower latency
  • Fewer servers and racks per petabyte needed
  • Better reliability with longer useful lifetimes
  • Better energy efficiency for a given level of performance

The capacity of SSDs is expected to grow to over 120TB in the next few years. As their capacities grow and the pricing gap between SSDs and HDDs narrows, these SSDs can become attractive alternatives for other workloads that demand higher than average performance or need much lower latency on large data sets, such as video editing and medical imaging diagnostics.
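
To make the "fewer servers and racks per petabyte" point concrete, here is a back-of-the-envelope calculation with purely hypothetical drive capacities and drive counts; substitute figures from your own configurations.

```python
# Hypothetical comparison of how many storage servers are needed for 1 PB of
# raw capacity. All figures are illustrative, not vendor specifications.

PETABYTE_TB = 1000

configs = {
    "HDD-based": {"drive_tb": 24, "drives_per_server": 24},
    "High-capacity SSD": {"drive_tb": 61, "drives_per_server": 24},
}

for name, cfg in configs.items():
    capacity_per_server = cfg["drive_tb"] * cfg["drives_per_server"]
    servers_needed = -(-PETABYTE_TB // capacity_per_server)  # ceiling division
    print(f"{name}: {capacity_per_server} TB per server -> {servers_needed} server(s) per PB")
```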

Conclusion

Storage performance is an important design criterion for systems running AI workloads. It affects system performance, scalability, data availability and overall system cost and power requirements. Therefore, it’s important that you understand the features and benefits of different storage options and select the best storage solution for your AI needs. By choosing the right storage solution, you can optimize your AI workloads and achieve your AI goals.


Semiconductor Attributes for Sustainable System Design

Fri, 08/09/2024 - 12:45

Courtesy: Jay Nagle, Principal Product Marketing Engineer, Microchip Technology Inc.

Jay Nagle, Principal Product Marketing Engineer, Microchip Technology Inc.

Gain further insights on some of the key attributes required of semiconductors to facilitate sustainability in electronic systems design.

Semiconductor Innovations for Sustainable Energy Management

As systems design becomes more technologically advanced, the resultant volume increase in electronic content poses threats to environmental sustainability. Global sustainability initiatives are being implemented to mitigate these threats. However, with the rise of these initiatives, there is also an increasing demand for the generation of electricity. Thus, a new challenge emerges: how can we manage these increasing levels of energy consumption?

To answer the call for more electricity generation, it is essential for renewable energy sources to have increasing shares of energy production vs. fossil fuels to reduce greenhouse gas emissions. The efficiency of a renewable energy source hinges on optimizing the transfer of energy from the source to the power grid or various electrical loads. These loads include commonly utilized consumer electronics, residential appliances and large-scale battery energy storage systems. Furthermore, the electrical loads must utilize an optimal amount of power during operation to encourage efficient energy usage.

Read on to learn more about the key attributes of semiconductors that contribute to enhanced sustainability in system designs.

Integrated circuits (ICs) or application-specific integrated circuits (ASICs) used for renewable power conversion and embedded systems must have four key features: low power dissipation, high reliability, high power density and security.

Low Power Dissipation

One of the main characteristics needed in a semiconductor for sustainable design is low power consumption. This extends battery life, allowing longer operating times between recharges, which ultimately conserves energy.

There are two leading sources of semiconductor power loss. The first is static power dissipation or power consumption when a circuit is in stand-by or a non-operational state. The second source is dynamic power dissipation, or power consumption when the circuit is in an operational state.

To reduce both static and dynamic power dissipation, semiconductors are developed to minimize capacitance through their internal layout construction, operate at lower voltage levels and activate functional blocks only as needed, depending on whether the device is in "deep sleep" stand-by or functional mode.
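
A simplified numeric sketch of these two loss mechanisms is shown below, using the common first-order approximations P_dynamic ≈ α·C·V²·f and P_static ≈ V·I_leakage. The component values are illustrative only, but they show why running at a lower supply voltage reduces both terms.

```python
# First-order CMOS power estimates with illustrative values.
# Dynamic power: P_dyn ~ alpha * C * V^2 * f  (activity factor, switched capacitance, voltage, frequency)
# Static power:  P_stat ~ V * I_leak          (leakage while idle)

def dynamic_power(alpha: float, c_farads: float, v_volts: float, f_hz: float) -> float:
    return alpha * c_farads * v_volts ** 2 * f_hz

def static_power(v_volts: float, i_leak_amps: float) -> float:
    return v_volts * i_leak_amps

for v in (1.2, 0.9):  # lowering the supply voltage reduces both terms
    p_dyn = dynamic_power(alpha=0.1, c_farads=2e-9, v_volts=v, f_hz=100e6)
    p_stat = static_power(v_volts=v, i_leak_amps=50e-6)
    print(f"Vdd = {v} V: dynamic ~ {p_dyn * 1e3:.1f} mW, static ~ {p_stat * 1e6:.0f} uW")
```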

Microchip offers low power solutions that are energy efficient and reduce hazardous e-waste production.

High Reliability

Part reliability and system longevity are key measures of a semiconductor's performance in sustainable system designs. Semiconductor reliability and longevity can be compromised by operation near the limits of the device's temperature ratings, mechanical stresses and torsion.

We use Very Thin Quad Flat No-Lead (VQFN) and Thin Quad Flat Pack (TQFP) packages to encapsulate complex layouts in small form-factor packages to address these concerns. Exposed pads on the bottom surface of the VQFN package dissipate heat effectively, which helps maintain a low junction-to-case thermal resistance when the device operates at maximum capacity. TQFP packages use gull-wing leads on low-profile packages to withstand torsion and other mechanical stresses.

High Power Density

Power density refers to the amount of power generated per unit of die size. Semiconductors with high power densities can run at high power levels while being packaged in small footprints. This is common in silicon carbide (SiC) wide-bandgap (WBG) discretes and power modules used in solar, wind and electric-vehicle power-conversion applications.

SiC enhances power-conversion systems by allowing the system to operate at higher frequencies, reducing the size and weight of electrical passives needed to transfer the maximum amount of power from a renewable source.

Our WBG SiC semiconductors offer several advantages over traditional silicon devices, such as running at higher temperatures and faster switching speeds. SiC devices’ low switching losses improve system efficiency while their high-power density reduces size and weight. They also can achieve a smaller footprint with reduction in heat sink dimensions.

Security

Security in semiconductors is almost synonymous with longevity, as security features can enable continued reuse of existing systems. This means that the design can be operated for longer periods of time without the need for replacement or becoming outdated.

There are helpful security features that support system longevity. For example, secure and immutable boot can verify the integrity of any necessary software updates to enhance system performance or fix software bugs. Secure key storage and node authentication can protect against external attacks as well as ensure that verified code runs on the embedded design.


Pulsus Is a Breakthrough for PiezoMEMS Devices

Fri, 08/09/2024 - 12:30

Courtesy: Lam Research

  • The tool enables the deposition of high-quality, highly scandium-doped AlScN films
  • Features include dual-chamber configuration, degas, preclean, target library, precise laser scanning, and more

In this post, we explain how the Pulsus system works, and how it can achieve superior film quality and performance compared to conventional technologies.

PiezoMEMS devices are microelectromechanical systems that use piezoelectric materials to convert electrical energy into mechanical motion, or vice versa. They have applications in a wide range of fields, including sensors, actuators, microphones, speakers, filters, switches, and energy harvesters.

PiezoMEMS devices require high-quality thin films of piezoelectric materials, such as aluminum scandium nitride (AlScN), to achieve optimal performance. Conventional deposition technologies—think sputtering or chemical vapor deposition—face challenges in producing AlScN films with desired properties, such as composition, thickness, stress, and uniformity. These obstacles limit both the scalability and functionality of piezoMEMS devices.

Revolutionary Tech 

To help overcome these challenges, Lam Research recently introduced Pulsus, a pulsed laser deposition (PLD) system that we hope will revolutionize the world of piezoMEMS applications. The addition of Pulsus PLD to the Lam portfolio further expands our comprehensive range of deposition, etch and single wafer clean products focused on specialty technologies and demonstrates Lam’s continuous innovation in this sector.

Pulsus is a PLD process module that has been optimized and integrated on Lam’s production-proven 2300 platform. It was developed to enable the deposition of high-quality AlScN films, which are essential to produce piezoMEMS devices.

A key benefit of the Pulsus system is its ability to deposit multi-element thin films, like highly scandium-doped AlScN. The intrinsic high plasma density—in combination with pulsed growth—creates the conditions to stabilize the elements in the same ratio as they arrive from the target. This control is essential for depositing materials where the functionality of the film is driven by the precise composition of the elements.

Plasma, Lasers 

Local plasma allows for high local control of film specifications across the wafer, like thickness and local in-film stress. Pulsus can adjust deposition settings while the plasma “hovers” over the wafer surface. This local tuning of thickness and stress allows for high uniformities over the wafer, which is exactly what our customers are asking for.  And because the plasma is generated locally, Pulsus uses targets that are much smaller than you would typically see in PVD systems. Pulsus can exchange these smaller targets, without breaking vacuum, through a target exchange module—the target library.

Pulsus uses a pulsed high-power laser to ablate a target material, in this case AlScN, and create a plasma plume. The plume expands and impinges on a substrate, where it forms a thin film.

Pulsus has a fast and precise laser optical path which, in combination with the target scanning mechanism, allows for uniform and controlled ablation of the target material. The Pulsus system has a high control of plasma plume generation, wafer temperature, and pressure control to achieve the desired film composition and stoichiometry.

By combining these features, Pulsus can produce high-quality films with superior performance for piezoMEMS devices. Pulsus can achieve excellent composition control, with low variation of the scandium (Sc) content across the wafer and within individual devices. It also delivers high film uniformity, with low within-wafer (WiW) and wafer-to-wafer (WtW) variation of film thickness and stress.

Breakthrough Technology 

Pulsus is a breakthrough technology for AlScN deposition, which can improve film quality and performance for piezoMEMS applications. In addition, Pulsus has the potential to enhance the functionality and scalability of piezoMEMS devices. The Pulsus technology deposits AlScN films with very high Sc concentration, resulting in high piezoelectric coefficients, which drive higher device sensitivity and output. These films feature tunable stress states to enable the design of different device configurations and shapes.

Pulsus is currently in use on 200 mm wafers and is planned to expand to 300 mm wafers in the future—a move that has the potential to increase the productivity and yield of piezoMEMS devices.


4K and beyond: Trends that are shaping India’s home projector market

Fri, 08/09/2024 - 08:57

Sushil Motwani, founder of Aytexcel Pvt Ltd, also evaluates the change in customer preferences that is driving the growth of the home entertainment segment

Sushil Motwani, Founder of Aytexcel Pvt. Ltd. and Official India Representative of Formovie

Recent news reports indicate that a few leading companies in the home entertainment industry are in discussions with major production studios to ensure 8K resolution content, which offers extremely high-definition video quality. This means that the availability of 8K content is on the verge of becoming normative. For the modern consumer looking for the best visual experience, this is an exciting prospect.


Even though the availability of 8K content is currently minimal, many projectors boosted by technologies like Artificial Intelligence (AI) can upscale 4K content. While this cannot match the true quality of the native 8K, improved versions are expected in the coming years.

In the case of 4K and beyond, devices like laser projectors are continually evolving to match user preferences. Until the COVID-19 pandemic, laser projectors were mainly used for business presentations, in the education sector and at screening centres. However, with the rise of more OTT platforms and the availability of 4K content, there has been a huge demand for home theatres, where projector screens have replaced traditional TVs.

According to Statista, the number of households in India using home entertainment systems, such as home theatres, projectors and advanced TVs, is expected to reach 26.2 million by 2028. The revenue in this segment is projected to show a compound annual growth rate (CAGR) of 3.70 per cent, resulting in an estimated market volume of US$0.7 billion by 2028.

So, what are the key trends driving the home projector market in India? Visual quality is definitely one of them. Modern consumers demand upgraded display technologies like the Advanced Laser Phosphor Display® (ALPD). This innovative display combines laser-excited fluorescent materials with multi-colour lasers, resulting in a smooth and vividly coloured display, superior to regular projectors.

Multifunctionality is another key requirement for gamers. When transitioning from PCs to projector-driven gaming, consumers look for a large screen size, preferably 120 inches and above, high resolution, low input lag, quick refresh rate and excellent detailing and contrast.

With the integration of AI and Machine Learning (ML) tools, manufacturers are developing projectors with more user-friendly features and automatic settings that adjust to surrounding light conditions based on the displayed content. AI also helps improve security features and facilitates personalised user modes, while predictive maintenance makes the devices more intuitive and efficient.

Projectors with a multifaceted interface are also a popular choice. Voice assistance features enable users to connect their large-screen setups with other smart devices. The user experience is enhanced by options such as Alexa or voice commands through Google Assistant using a Google Home device or an Android smartphone. Multiple connectivity options, including HDMI, USB, Bluetooth and Wi-Fi facilitate smooth handling of these devices. Consumers also prefer projectors with native app integrations, like Netflix, to avoid external setups while streaming content.

There is also a desire among users to avoid messy cables and additional devices, which not only affect the convenience of installation but also impact the aesthetics of the interiors. This is why Ultra Short Throw (UST) projectors, which can offer a big screen experience even in small spaces, are emerging as a top choice. Some of these projectors can throw a 100-inch projection with an ultra-short throw distance of just 9 inches from the wall.

And finally, nothing can deliver a true cinematic experience like a dedicated surround sound system. But customers also want to avoid the additional setup of soundbars and subwoofers for enhanced sound. Since most movies are now supported by Dolby Atmos 7.1 sound, the home theatre segment is also looking for similar sound support. Projectors integrated with Dolby Atmos sound, powered by speakers from legendary manufacturers like Bowers & Wilkins, Yamaha, or Wharfedale, are key attractions for movie lovers and gamers.

Buyers are also looking for eye-friendly projectors equipped with features like infrared body sensors and diffuse reflection. The intelligent light-dimming and eye care technologies make their viewing experience more comfortable and reduce eye strain, especially during prolonged sessions like gaming.

The growing popularity of projectors is also attributed to the increasing focus on sustainability. Laser projectors are more energy-efficient than traditional lamp-based projectors. They use almost 50 per cent less power, which helps in energy savings and reduces the overall environmental impact. They are also very compact and made with sustainable and recycled materials, which minimises the environmental impact of shipping them as well as the carbon footprint associated with their operation.


An Overview of Oscilloscopes and Their Industrial Uses

Thu, 08/08/2024 - 14:10

Key takeaways:

  • Oscilloscopes are primarily time-domain measurement instruments that mostly display timing-related characteristics.
  • However, mixed-domain oscilloscopes give you the best of both worlds by including built-in spectrum analyzers for frequency-domain measurements.
  • Modern oscilloscopes sport extremely sophisticated triggering and analysis features, both on-device and through remote measurement software.

After a multimeter, an oscilloscope is probably the second-most popular instrument on an engineer’s workbench. Oscilloscopes enable you to peer into the internals of electronic devices and monitor the signals they use under the hood.

What do engineers look for when using oscilloscopes? What are some innovations that these instruments have facilitated? What are some key characteristics to look for? Find out the answers to all this and more below.

What is the primary function of oscilloscopes in electronic measurements?

Oscilloscopes enable engineers to measure and visualize the amplitude of an electrical signal over time. This is also the reason they are generally considered time-domain measurement instruments. However, there are mixed-domain oscilloscopes that provide both time-domain (amplitude vs. time) and frequency-domain (power vs. frequency) measurements.

The precise characterization of waveforms is a critical diagnostic tool in every stage of an electronic product lifecycle, including cutting-edge research, prototyping, design, quality assurance, compliance, maintenance, and calibration.

Let’s look at the type of signals that are being tested with oscilloscopes in various industries to facilitate innovations and products.

What signal characteristics are verified using oscilloscopes?

When experienced electronics engineers are troubleshooting issues using oscilloscopes, they are looking for evidence of several ideal characteristics as well as problematic phenomena, depending on the type of signal and the application. Some of the common aspects and phenomena they examine are listed below:

  • Signal shape: The waveform should match the expected shape if the specification requires a square, sawtooth, or sine wave. Any deviations might indicate an issue.
  • Amplitude: The signal levels should remain within the expected range of volts without excessive fluctuations.
  • Frequency or period: The frequency or period of the signal should always remain within specified limits. Deviations from the expected frequency can lead to synchronization problems in communication and control systems.
  • Rise and fall times: For digital signals, sharp and consistent rise and fall times are essential for reliable operation. If the rise time is slower than required, it may lead to problems like data corruption, timing errors, and reduced performance in digital circuits. If it’s overly fast, it can lead to increased electromagnetic interference as well as signal integrity issues like ringing and crosstalk.
  • Jitter: Jitter is the deviation of a signal's timing from its ideal position, typically observed at significant transitions. Period jitter is the variation in the duration of individual clock periods. Cycle-to-cycle jitter is the variation in duration between consecutive clock cycles. Phase jitter is the variation in the phase of the signal with respect to a reference clock. Timing jitter is the variation in the timing of signal edges. Low jitter indicates stable signal timing, while excessive jitter may cause errors in high-speed digital communication (a short computational sketch of the period and cycle-to-cycle metrics follows this list).
  • Phase consistency: In systems with multiple signals, phase consistency between them is critical for proper synchronization.
  • Duty cycle: For pulse-width modulation signals and clock signals, the duty cycle should be as specified.
  • Noise: Noise is any unwanted disturbance that affects a signal’s amplitude, phase, frequency, or other characteristics. It should be minimal and within acceptable limits to avoid interference and degradation of the signal. Too much noise indicates poor signal integrity, possible shielding issues, or noise due to suboptimal power supply. Phase noise can affect the synchronization of communication and clock signals.
  • Harmonics and distortion: For analog signals, low harmonic distortion ensures signal fidelity.
  • Ringing: Ringing refers to oscillations after a signal transition, usually seen in digital circuits, that can lead to errors and signal integrity issues.
  • Crosstalk: Unwanted coupling from adjacent signal lines can appear as unexpected waveforms on the oscilloscope trace.
  • Drift: Changes in signal amplitude or frequency over time are indicators of instability in the power supply or other components.
  • Ground bounce: Variability in the ground potential, often visible as a noisy baseline, can be critical in fast-switching digital circuits.
  • Clipping: If the input signal amplitude exceeds the oscilloscope’s input range, the displayed waveform will be clipped, indicating a need for signal attenuation or a more appropriate input setting on the scope.
  • Direct current (DC) offsets: Unexpected DC offsets can indicate issues with the waveform generation or coupling methods.
  • Aliasing: Aliasing occurs if the oscilloscope sampling rate is too low for the signal frequency, leading to an incorrect representation of the signal.
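
For example, the period and cycle-to-cycle jitter metrics defined in the list above can be estimated from a set of captured clock-edge timestamps. The sketch below generates synthetic timestamps for a nominal 100 MHz clock with a small amount of added timing noise; on a real oscilloscope the timestamps would come from measured edge crossings.

```python
import numpy as np

# Synthetic rising-edge timestamps for a nominal 100 MHz clock (10 ns period)
# with Gaussian timing noise standing in for real measurement data.
rng = np.random.default_rng(0)
nominal_period = 10e-9
edges = np.cumsum(np.full(1000, nominal_period) + rng.normal(0, 20e-12, 1000))

periods = np.diff(edges)             # duration of each clock period
cycle_to_cycle = np.diff(periods)    # change between consecutive periods

print(f"Mean period: {periods.mean() * 1e9:.3f} ns")
print(f"RMS period jitter: {np.std(periods) * 1e12:.1f} ps")
print(f"RMS cycle-to-cycle jitter: {np.std(cycle_to_cycle) * 1e12:.1f} ps")
```
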
What types of waveforms and signals can be analyzed using an oscilloscope?

Oscilloscopes are used to verify a variety of analog signals and digital signals in many industries as explained below.

5G and 6G telecom

Figure 1: A Keysight Infiniium UXR-series real-time oscilloscope

The radio frequency (RF) signals used in telecom systems and devices must strictly adhere to specifications for optimum performance as well as regulatory compliance.

Some examples of oscilloscope use in this domain include:

  • Infiniium UXR-B series real-time oscilloscopes (RTOs) for characterizing 5G and 6G systems, including phased-array antenna transceivers and mmWave wideband analysis capable of measuring frequencies as high as 110 gigahertz (GHz) and bandwidths of as much as 5 GHz
  • development and verification of 41-GHz power amplifier chips for 5G New Radio applications
  • qualifying a 6G 100 gigabits-per-second (Gbps) 300GHz (sub-terahertz) wireless data link using a 70 GHz UXR0704B Infiniium UXR-Series RTO
Photonics and fiber optics

Oscilloscopes are extensively employed for functional and compliance testing of optical and electrical transceivers used in high-speed data center networks.

Some of the use cases are listed below:

  • Oscilloscopes, with the help of optical-to-electrical adaptors, verify characteristics like 4-level pulse-amplitude modulation (PAM4) signaling in 400G high-speed optical networks.
  • Oscilloscopes test the conformance of 400G/800G electrical data center transceivers with the Institute of Electrical and Electronics Engineers (IEEE) 802.3CK and the Optical Internetworking Forum’s (OIF) OIF-CEI-5.0 specifications.
  • Real-time oscilloscopes like the UXR-B are used to evaluate the forward error correction performance of high-speed optical network links.
Digital interfaces of consumer electronics

Oscilloscopes and arbitrary waveform generators are used together for debugging and automated testing of high-speed digital interfaces like:

  • Wi-Fi 7 networking standard
  • universal serial bus (USB)
  • mobile industry processor interface (MIPI) standards
  • peripheral component interconnect express (PCIe) buses
  • high-definition multimedia interface (HDMI)

They are also being used for testing general-purpose digital interfaces like the inter-integrated circuit (I2C), the serial peripheral interface (SPI), and more.

Automotive radars and in-vehicle networks

Figure 2: Integrated protocol decoders for automotive and other digital signals

Oscilloscopes are used for validating automotive mmWave radar chips. Additionally, oscilloscopes are extensively used for verifying automotive in-vehicle network signals like:

  • automotive Ethernet
  • controller area network (CAN)
  • FlexRay
  • local interconnect network (LIN)
Aerospace and defense

Radars for aerospace and defense uses are validated using instruments like the UXR-series oscilloscopes.

They are also used for ensuring that data communications comply with standards like the MIL-STD 1553 and ARINC 429.

Space

Oscilloscopes are being used for developing 2.65 Gbps high-speed data links to satellites.

How does an oscilloscope visually represent electrical signals?

Figure 3: Schematic of an oscilloscope

An oscilloscope’s display panel consists of a two-dimensional resizable digital grid. The horizontal X-axis represents the time base for the signal, while the vertical Y-axis represents signal amplitude in volts.

Each segment of an axis is called a division (or div). Control knobs on the oscilloscope allow the user to change the magnitude of volts or time that each div represents.

Figure 4: Visualizing a signal on an oscilloscope

Increasing this magnitude on the X-axis means more seconds or milliseconds per division. So you can view a longer capture of the signal, effectively zooming out on it. Similarly, by reducing the magnitude on the X-axis, you're able to zoom into the signal to see finer details. The maximum zoom depends on the oscilloscope's sampling rate. It's often possible to zoom in to nanosecond levels on modern oscilloscopes since they have sampling rates of several gigasamples per second.

Similarly, you can zoom in or out on the Y-axis to examine finer details of changes in amplitude.
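
As a quick worked example of these relationships, with made-up settings for a typical 10-by-8 division grid:

```python
# Illustrative front-panel settings for a 10 x 8 division display grid.
horizontal_divs = 10
vertical_divs = 8
time_per_div = 2e-3    # 2 ms per division
volts_per_div = 0.5    # 500 mV per division
sample_rate = 1e9      # 1 GSa/s

capture_window = horizontal_divs * time_per_div        # total time span on screen
full_scale_voltage = vertical_divs * volts_per_div     # total amplitude range on screen
samples_on_screen = capture_window * sample_rate       # points needed at the full sample rate

print(f"Capture window: {capture_window * 1e3:.0f} ms")
print(f"Full-scale amplitude range: {full_scale_voltage:.0f} V")
print(f"Samples to fill the screen at 1 GSa/s: {samples_on_screen:,.0f}")
```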

What are the various types of oscilloscopes?

Figure 5: Waveform acquisition using an equivalent time sampling oscilloscope

Some of the common types of oscilloscopes are:

  • Digital storage oscilloscopes (DSOs): They capture and store digital representations of analog signals, allowing for detailed analysis and post-processing. All modern scopes, including the sub-types below, are DSOs. The term differentiates them from older analog scopes that showed waveforms by firing an electron beam from a cathode ray tube (CRT) onto a phosphor-coated screen to make it glow.
  • Mixed-signal oscilloscopes (MSOs): They integrate both analog and digital channels, enabling simultaneous observation of analog signals and digital logic states. They’re useful for use cases like monitoring power management chips.
  • Mixed-domain oscilloscopes (MDOs): They combine normal time-domain oscilloscope functions with a built-in spectrum analyzer, allowing for time-correlated viewing of time-domain and frequency-domain signals.
  • Real-time oscilloscopes: They capture and process a waveform in real time as it happens, making them suitable for non-repetitive and transient signal analysis.
  • Equivalent time oscilloscopes: Equivalent time or sampling oscilloscopes are designed to capture high-frequency or fast repetitive signals by reconstructing them using equivalent time sampling. They sample a repetitive input signal at a slightly different point of time during each repetition. By piecing these samples together, they can reconstruct an accurate representation of the waveform, even one that is very high frequency.
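
The following sketch illustrates the equivalent-time idea on a synthetic repetitive signal: each trigger repetition contributes one sample taken a small extra delay later within the waveform's period, and ordering the samples by that delay reconstructs one high-resolution period. The signal frequency and rates are invented for illustration.

```python
import numpy as np

# A repetitive 5 GHz sine wave, far too fast to capture in one shot with a slow
# digitizer, is rebuilt by taking one sample per trigger at a sliding delay.
signal_freq = 5e9
signal_period = 1 / signal_freq
repetitions = 200
delay_step = signal_period / repetitions             # extra delay added each repetition

offsets = np.arange(repetitions) * delay_step        # sample instant within one period
samples = np.sin(2 * np.pi * signal_freq * offsets)  # one sample captured per repetition

# Ordering samples by offset gives an effective spacing of delay_step, i.e. an
# equivalent sample rate far above the digitizer's real-time rate.
print(f"Equivalent-time sample rate: {1 / delay_step / 1e9:.0f} GSa/s "
      f"reconstructed from {repetitions} repetitions")
```
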
How does an oscilloscope differ from other test and measurement equipment?

Oscilloscopes often complement other instruments like spectrum analyzers and logic analyzers. Some key differences between oscilloscopes and spectrum analyzers include:

  • Purpose: Oscilloscopes show how a signal changes over time by measuring its amplitude. Spectrum analyzers show how the energy of a signal is spread over different frequencies by measuring the power at each frequency.
  • Displayed information: Oscilloscopes show time-related information like rise and fall times, phase shifts, and jitter. Spectrum analyzers show frequency-related information like signal bandwidth, carrier frequency, and harmonics.
  • Uses: Oscilloscopes are extensively used for visualizing signals in real time and near real time. Spectrum analyzers are useful when frequency analysis is critical, such as in radio frequency communications and electromagnetic interference testing.

A mixed-domain oscilloscope combines oscilloscope and spectrum analyzer capabilities in a single instrument with features like fast Fourier transforms (FFT) to convert between the two domains.
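
The time-to-frequency conversion an MDO performs can be mimicked on captured samples with a standard FFT; the snippet below uses an invented two-tone test signal to show the basic operation.

```python
import numpy as np

fs = 1e6                               # 1 MSa/s sample rate
t = np.arange(0, 1e-3, 1 / fs)         # 1 ms of captured samples
# Invented test signal: a 50 kHz tone plus a weaker 120 kHz tone.
x = 1.0 * np.sin(2 * np.pi * 50e3 * t) + 0.2 * np.sin(2 * np.pi * 120e3 * t)

spectrum = np.abs(np.fft.rfft(x)) / len(x)     # single-sided magnitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1 / fs)

# Report the two strongest frequency components.
top2 = np.sort(freqs[np.argsort(spectrum)[-2:]]) / 1e3
print(f"Dominant components near {top2[0]:.0f} kHz and {top2[1]:.0f} kHz")
```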

Another complementary instrument is a logic analyzer. Both mixed-signal oscilloscopes and logic analyzers are capable of measuring digital signals. But they differ in some important aspects:

  • Analog and digital signals: An MSO can measure both analog and digital signals. However, logic analyzers only measure digital signals.
  • Number of channels: Most oscilloscopes support two to four channels and a few top out around eight. In sharp contrast, logic analyzers can support dozens to hundreds of digital signals.
  • Analysis capabilities: Oscilloscopes provide sophisticated triggering options for capturing complex analog signals. But logic analyzers can keep it relatively simple since they only focus on digital signals.
What are the key specifications to consider when choosing an oscilloscope for a specific application?

Figure 6: A Keysight UXR-B series scope

The most important specifications and features to consider when choosing an oscilloscope include:

  • Bandwidth: For analog signals, the recommended bandwidth is three times or more of the highest sine wave frequency. For digital signals, the ideal bandwidth is five times or more of the highest digital clock rate, measured in hertz (Hz), megahertz (MHz), or GHz.
  • Sample rate: This is the number of times the oscilloscope measures the signal each second. State-of-the-art oscilloscopes, like the UXR series, support up to 256 gigasamples (billion samples) per second, which works out to a measurement taken roughly every four picoseconds. The sample rate dramatically impacts the signal you see on the display. An incorrect sample rate can result in an inaccurate or distorted representation of a signal. A low sample rate can cause errors to go undetected because they can occur between collected samples. The sample rate should be at least twice the highest frequency of the signal to avoid aliasing, but a sample rate of 4-5 times the bandwidth is often recommended to precisely capture signal details (a worked sizing example follows this list).
  • Waveform update rate: A higher waveform rate increases the chances of detecting possible glitches and other infrequent events that occur during the blind time between two acquisitions.
  • Number of channels: Most use cases are mixed-signal environments with multiple analog and digital signals. Select an oscilloscope with sufficient channels for critical time-correlated measurements across multiple waveforms.
  • Effective number of bits (ENOB): ENOB says how many bits are truly useful for accurate measurements. Unlike the total analog-to-digital converter (ADC) bits, which can include some bits influenced by noise and errors, ENOB reflects the realistic performance and quality of the oscilloscope’s measurements.
  • Signal-to-noise ratio (SNR): This is the ratio of actual signal information to noise in a measurement. A higher SNR is desirable for more accurate measurements.
  • Time base accuracy: This tells you the timing accuracy in parts per billion.
  • Memory depth: This is specified as the number of data points that the scope can store in memory. It determines the longest waveforms that can be captured while measuring at the maximum sample rate.
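
As a worked example of these rules of thumb, with hypothetical numbers for debugging a 500 MHz digital clock:

```python
# Rule-of-thumb sizing for a scope to debug a hypothetical 500 MHz digital clock.
clock_rate = 500e6

min_bandwidth = 5 * clock_rate                 # "five times the highest clock rate" guideline
recommended_sample_rate = 4 * min_bandwidth    # roughly 4-5x the bandwidth

capture_time = 1e-3                            # want to capture 1 ms of activity
memory_depth = recommended_sample_rate * capture_time  # points needed at the full sample rate

print(f"Bandwidth:    >= {min_bandwidth / 1e9:.1f} GHz")
print(f"Sample rate:  >= {recommended_sample_rate / 1e9:.0f} GSa/s")
print(f"Memory depth: >= {memory_depth / 1e6:.0f} Mpts for a {capture_time * 1e3:.0f} ms capture")
```
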
What trends are emerging in oscilloscope development?

Some emerging trends in oscilloscopes and onboard embedded software are in the areas of signal analysis, automated compliance testing, and protocol decoding capabilities:

Advances in signal analysis include:

  • deep signal integrity analysis for high-speed digital applications
  • advanced statistical analysis of jitter and noise in digital interfaces in the voltage and time domains
  • analysis of high-speed PAM data signals
  • power integrity analysis to understand the effects of alternating or digital signals and DC supplies on each other
  • de-embedding of cables, probes, fixtures, and S-parameters to remove their impacts from measurements for higher accuracy

Automated compliance testing software can automatically check high-speed digital transceivers for compliance with the latest digital interface standards like USB4, MIPI, HDMI, PCIe 7.0, and more.

Comprehensive protocol decoding capabilities enable engineers to understand the digital data of MIPI, USB, automotive protocols, and more in real time.

Measure with the assurance of Keysight oscilloscopes

Figure 7: Keysight Infiniium and InfiniiVision oscilloscopes

This blog introduced several high-level aspects of oscilloscopes. Keysight provides a wide range of state-of-the-art, reliable, and proven oscilloscopes including real-time and equivalent-time scopes for lab use and handheld portable oscilloscopes for field use.

MICHELLE TATE
Product Marketing
Keysight Technologies


Best Virtual Machine Size for Self-Managed MongoDB on Microsoft Azure

Thu, 08/08/2024 - 13:52

Courtesy: Michał Prostko (Intel) and Izabella Raulin (Intel)

In this post, we explore the performance of MongoDB on Microsoft Azure, examining various Virtual Machine (VM) sizes from the D-series, as they are recommended for general-purpose needs.

Benchmarks were conducted on the following Linux VMs: Dpsv5, Dasv5, Dasv4, Dsv5, and Dsv4. They have been chosen to represent both the DS-Series v5 and DS-Series v4, showcasing a variety of CPU types. The scenarios included testing instances with 4 vCPUs, 8 vCPUs, and 16 vCPUs to provide comprehensive insights into MongoDB performance and performance-per-dollar across different compute capacities.

Our examination showed that, among instances with the same number of vCPUs, the Dsv5 instances consistently delivered the most favorable performance and the best performance-per-dollar advantage for running MongoDB.

 

MongoDB Leading in NoSQL Ranking

MongoDB stands out as the undisputed leader in the NoSQL database category, as demonstrated by the DB-Engines Ranking, with its closest competitors, Amazon DynamoDB and Databricks, trailing significantly in score. MongoDB is therefore expected to maintain its leadership position.

MongoDB Adoption in Microsoft Azure

Enterprises utilizing Microsoft Azure can opt for a self-managed MongoDB deployment or leverage the cloud-native MongoDB Atlas service. MongoDB Atlas is a fully managed cloud database service that simplifies the deployment, management, and scaling of MongoDB databases. Naturally, this convenience comes with additional costs, and it also limits flexibility: for example, you cannot choose the instance type the service runs on.

In this study, the deployment of MongoDB through self-managed environments within Azure’s ecosystem was deliberately chosen to retain autonomy and control over Azure’s infrastructure. This approach allowed for comprehensive benchmarking across various instances, providing insights into performance and the total cost of ownership associated only with running these instances.

Methodology

In the investigation into MongoDB’s performance across various Microsoft Azure VMs, the same methodology was followed as in our prior study conducted on the Google Cloud Platform. Below is a recap of the benchmarking procedures along with the tooling information necessary to reproduce the tests.

Benchmarking Software – YCSB

The Yahoo! Cloud Serving Benchmark (YCSB), an open-source benchmarking tool, is a popular benchmark for testing MongoDB’s performance. The most recent release of the YCSB package, version 0.17.0, was used.

The benchmark of MongoDB was conducted using a workload comprising 90% read operations and 10% updates, which in our view reflects the most likely distribution of operations. To carry out a comprehensive measurement and ensure robust testing of system performance, we configured the YCSB utility to populate the MongoDB database with 10 million records and execute up to 10 million operations on the dataset; this was achieved through the recordcount and operationcount properties within YCSB. To maximize CPU utilization on the selected instances and minimize the impact of other variables such as disk and network speeds, we configured each MongoDB instance with at least 12 GB of WiredTiger cache. This ensured that the entire database dataset could be loaded into the internal cache, minimizing the impact of disk access. Furthermore, 64 client threads were used to simulate concurrency. Other YCSB parameters, unless mentioned below, remained at their defaults.
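A minimal sketch of how such a run could be scripted is shown below. It assumes the standard YCSB command-line interface and the common workload properties named above (recordcount, operationcount, threadcount, readproportion, updateproportion); the MongoDB connection string and paths are placeholders, not the exact commands used in this study.

```python
import subprocess

# Hypothetical wrapper around the YCSB CLI for a 90% read / 10% update run.
YCSB = "./bin/ycsb"
MONGO_URL = "mongodb://<sut-ip>:27017/ycsb?w=1"   # placeholder SUT address

common = [
    "-P", "workloads/workloada",          # base workload; proportions overridden below
    "-p", f"mongodb.url={MONGO_URL}",
    "-p", "recordcount=10000000",         # 10 million records loaded into the SUT
]

# Phase 1: populate the database.
subprocess.run([YCSB, "load", "mongodb", "-s", *common], check=True)

# Phase 2: run the 90/10 read/update mix with 64 client threads.
subprocess.run([YCSB, "run", "mongodb", "-s", *common,
                "-p", "operationcount=10000000",
                "-p", "readproportion=0.9",
                "-p", "updateproportion=0.1",
                "-p", "threadcount=64"], check=True)
```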

Setup

Each test consisted of a pair of VMs of identical size: one VM running MongoDB v7.0.0, designated as the Server Under Test (SUT), and one VM running YCSB, designated as the load generator. Both VMs ran in the Azure West US Region as on-demand instances, and the prices from this region were used to calculate performance-per-dollar indicators.

Scenarios

MongoDB performance on Microsoft Azure was evaluated by testing various Virtual Machines from the D-series, which are part of the general-purpose machine family. These VMs are recommended for their balanced CPU-to-memory ratio and their capability to handle most production workloads, including databases, as per Azure’s documentation.

The objective of the study is to compare performance and performance-per-dollar metrics across different processors for the latest generation and its predecessor. Considering that the newer Dasv6 and Dadsv6 series are currently in preview, the v5 generation represents the latest generally available option. We selected five VM sizes that offer a broadly representative cross-section of the general-purpose D-series spectrum: Dsv5 and Dsv4 powered by Intel Xeon Scalable Processors, Dasv5 and Dasv4 powered by AMD EPYC processors, and Dpsv5 powered by Ampere Altra Arm-based processors. The testing scenarios included instances with 4, 8, and 16 vCPUs.

Challenges in VM type selection on Azure

In Microsoft Azure, a single VM size can be backed by multiple CPU families, meaning that different VMs created under the same VM size may be provisioned on different CPU types. Azure does not provide a way to specify the desired CPU during instance creation, either through the Azure Portal or the API; the CPU type can only be determined from within the operating system once the instance is created and operational. As a result, it took multiple tries to obtain matching instances, since we opted for an approach in which both the SUT and the client instance use the same CPU type. We observed that larger instances (with more vCPUs) tended to be provisioned on newer CPU generations more frequently, while smaller instances were more likely to receive older ones. Consequently, for the smaller Dsv5 and Dsv4 instances we never came across VMs with 4th Generation Intel Xeon Scalable Processors.

More details about the VM sizes used for testing are provided in Appendix A. For each scenario, a minimum of three runs were conducted. If the results showed variations exceeding 3%, an additional measurement was taken to eliminate outlier cases. This approach ensures the accuracy of the final value, which is derived as the median of the three recorded values (see the sketch below).
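The run-selection logic can be summarised with the small Python sketch below. Here run_once() is a hypothetical callable that executes one benchmark run and returns its throughput; the outlier handling shown is our reading of the procedure described above, not the exact script used.

```python
import statistics

def representative_score(run_once, tolerance=0.03):
    """Take three measurements; if they spread by more than 3%, measure once
    more, drop the outlier, and report the median of the three closest values."""
    results = sorted(run_once() for _ in range(3))
    spread = (results[-1] - results[0]) / statistics.median(results)
    if spread > tolerance:
        results = sorted(results + [run_once()])
        # Keep the tightest window of three consecutive sorted values.
        results = min((results[i:i + 3] for i in range(2)),
                      key=lambda window: window[-1] - window[0])
    return statistics.median(results)
```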

Results

The measurements were conducted in March 2024, with Linux VMs running Ubuntu 22.04.4 LTS and kernel 6.5.0 in each case. To better illustrate the differences between the individual instance types, normalized values were computed relative to the performance of the Dsv5 instance powered by the 3rd Generation Intel Xeon Scalable Processor. The raw results are shown in Appendix A.

Although both the 16 vCPU Dsv4 and Dsv5 VMs are powered by 3rd Generation Intel Xeon Scalable Processors 8370C and, moreover, share the same compute cost of $654.08/month, a discrepancy in MongoDB workload performance scores is observed, favoring the Dsv5 instance. This difference can be attributed to the fact that the tested 16 vCPU Dsv4, as a representative of the fourth generation of the D-series, is expected to be more aligned with other members of its generation (see Table 1). Analyzing the results for Dasv4 VMs versus Dasv5 VMs, both powered by the 3rd Generation AMD EPYC 7763v, a similar outcome can be noted: in each tested case, the Dasv5-series VMs outperformed the Dasv4-series VMs.

Observations:
  • Dsv5 VMs, powered by the 3rd Generation Intel Xeon Scalable Processor, offer both the most favorable performance and the best performance-per-dollar among the instances tested in each scenario (4 vCPUs, 8 vCPUs, and 16 vCPUs).
  • Dasv5 is less expensive than Dsv5, yet it provides lower performance. Therefore, the Total Cost of Ownership (TCO) favors the Dsv5 instances.
  • Dpsv5 VMs, powered by Ampere Altra Arm-based processors, have the lowest costs among the tested VM sizes. However, when comparing performance results, that type of VM falls behind, resulting in the lowest performance-per-dollar among the tested VMs.
Conclusion

The presented benchmark analysis covers MongoDB performance and performance-per-dollar across 4, 8, and 16 vCPU instances representing general-purpose family VM sizes available on Microsoft Azure and powered by various processor vendors. Results show that among the tested instances, Dsv5 VMs, powered by 3rd Generation Intel Xeon Scalable Processors, provide the best performance for the MongoDB benchmark and lead in performance-per-dollar.

Appendix A

The post Best Virtual Machine Size for Self-Managed MongoDB on Microsoft Azure appeared first on ELE Times.

An Introduction to Several Commonly used AFEs and their Schemes

Thu, 08/08/2024 - 13:38

Courtesy: Infineon

This article introduces the development of the new energy vehicle and energy storage industries, describes several cell-voltage acquisition schemes, and focuses on Infineon’s new AFE acquisition chip, the TLE9018DQK, including its use and technical characteristics.

In terms of passenger cars, in recent years, with the emergence of a new round of scientific and technological revolution and industrial transformation, the new energy vehicle industry has entered a stage of accelerated development. After years of continuous effort, China’s new energy vehicle industry has significantly raised its technical level, built out its industrial system, and greatly enhanced the competitiveness of its enterprises, achieving a “double improvement” in both market scale and development quality.

In terms of energy storage, electrochemical storage has been the fastest-growing storage method in recent years, with its share rising rapidly from 3.7% in 2018 to 7.5% in 2020. Lithium-ion battery storage offers high energy density, wide commercial application, declining unit cost, and mature technology, so it has become the dominant form of electrochemical energy storage worldwide. According to the data, by the end of 2020 lithium-ion batteries accounted for 92% of installed electrochemical storage capacity, while sodium-sulfur and lead-acid batteries accounted for 3.6% and 3.5%, respectively. At present, lithium-ion energy storage mainly uses lithium iron phosphate battery technology.

2. Introduction to Cell Acquisition Schemes

(1) Cell acquisition realized with an ADC

Whether for new energy passenger vehicles or energy storage, new requirements and standards have been placed on the BMS industry. A key technology in BMS is the acquisition and protection of cell parameters, the so-called AFE technology. In the early days there were no AFE chips, and cell voltages were basically measured one by one with ADC chips, using electronic switches to select each cell in sequence. The scheme is roughly as follows:

As shown in the figure above, the principle is to use two electronic switches: at any given time, the upper switch gates one channel and the lower switch gates one channel (the electronic switch used is TI’s MUX508IDR, and the gated channel is selected through 3 address lines). During switching, the channels gated by the upper and lower switches must be adjacent, with the upper switch on the higher-potential tap. The gated pair is measured differentially by TI’s ADS1113IDGSR ADC and converted into an I2C signal for communication with the MCU.
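To make the sequence concrete, here is a minimal sketch of one acquisition cycle on a Linux host using the smbus2 library. The register addresses and the 0x8583 single-shot configuration word follow the ADS111x datasheet defaults, set_mux_address() stands in for the board-specific GPIO code that drives the three address lines of the analog switches, and the 62.5 µV LSB corresponds to the ADS1113’s fixed ±2.048 V range. Treat this as an illustration of the principle, not the firmware of an actual product.

```python
import time
from smbus2 import SMBus

ADS1113_ADDR    = 0x48     # default I2C address (ADDR pin tied to GND)
REG_CONVERSION  = 0x00     # 16-bit conversion result register
REG_CONFIG      = 0x01     # configuration register
CFG_SINGLE_SHOT = 0x8583   # datasheet default config with OS=1 (start one conversion)

def set_mux_address(channel):
    """Hypothetical helper: drive the 3 address lines of the upper and lower
    analog switches so two adjacent cell taps reach the ADC's differential
    inputs. The real implementation is GPIO- and board-specific."""
    pass

def read_cell_voltage(bus, channel, lsb_volts=62.5e-6):
    set_mux_address(channel)
    bus.write_i2c_block_data(ADS1113_ADDR, REG_CONFIG,
                             [CFG_SINGLE_SHOT >> 8, CFG_SINGLE_SHOT & 0xFF])
    time.sleep(0.01)                   # 128 SPS default -> ~8 ms per conversion
    hi, lo = bus.read_i2c_block_data(ADS1113_ADDR, REG_CONVERSION, 2)
    raw = (hi << 8) | lo
    if raw & 0x8000:                   # two's-complement sign extension
        raw -= 1 << 16
    return raw * lsb_volts

with SMBus(1) as bus:
    for cell in range(8):              # scan eight cell taps in sequence
        print(f"cell {cell}: {read_cell_voltage(bus, cell):.4f} V")
```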

The acquisition rate and accuracy of this method are not very high; in general, an acquisition accuracy of at least 14 bits is needed to meet cell-measurement requirements. This approach is still used today for some special cells, such as nickel-cadmium, nickel-metal hydride, and lead-acid batteries, because these cells usually appear in groups. The group is treated as a single cell, and its voltage is generally greater than 5 V, such as the familiar 12 V lead-acid pack, which is where this voltage-divider style of acquisition is needed.

(2) Cell protection achieved by a single-cell protection chip

There is also a simple single-cell scheme on the market, in which each cell is independently protected by its own chip, as shown in the figure below:

The HY2112-BB chip provides over-voltage (overcharge) and under-voltage (over-discharge) protection, charge detection, and charge/discharge control. Protection is implemented mainly by driving external MOSFETs from the chip’s output drive pins.

(3) Cell protection realized by multi-cell protection chips

Multi-cell acquisition chips on the market mainly support 4, 7, 8, 12, 14, or 16 cells in series, with 12- and 14-cell devices being the most common. Typical examples are ADI’s LTC6803HG and NXP’s MC33771A.

As shown in the figure below:
Figure 3 shows BYD’s 4-string BM3451 acquisition chip scheme. Figure 4 shows ADI’s 12-string LTC6803HG acquisition scheme, and Figure 5 shows NXP’s 14-string MC33771 acquisition scheme.

In addition to cell voltage acquisition, noise filtering, self-monitoring, and integrated internal balancing MOSFETs, both of these acquisition chips also provide multi-channel temperature acquisition, and the MC33771 additionally integrates current measurement, simplifying the BMS current-sensing loop.

Fig.4 Acquisition scheme of LTC6803HG

3. Infineon’s new TLE9018DQK AFE acquisition solution

Infineon’s earlier AFE acquisition solution is mainly the TLE9012. Compared with other chips, its most important feature is an integrated stress (pressure) and temperature compensation function, which allows the chip to maintain good acquisition accuracy in special environments. Its main disadvantages are the relatively small number of cells supported in series and the resulting limitations when combining devices. The TLE9018DQK was developed on the basis of the TLE9012, and its main functional characteristics are as follows:

1. It can monitor 18 series-connected battery cells simultaneously.
2. It withstands voltages up to 120 V and has strong ESD robustness; thanks to the high withstand voltage and internal protection, it supports hot plugging without external protection.
3. An integrated stress sensor and temperature compensation with a digital compensation algorithm allow products to maintain high acquisition accuracy in a variety of complex environments.
4. Eight integrated temperature acquisition channels.
5. Passive balancing current of up to 300 mA.
6. Support for capacitive coupling and transformer coupling on the communication interface.
7. Multiple wake-up sources, so the chip can be woken in a variety of ways.
8. Automatic open-load (no-load) detection.

The peripheral pinout diagram looks like this:

The TLE9018DQK is suitable for HEV, PHEV, BEV, energy storage, and other product applications. The package is PG-FQLP-64, and the maximum permissible temperature range is –40°C to +150°C. A comparison of the TLE9018’s technical parameters with those of other chips is shown in the following figure:

Here, a cell acquisition scheme has been drawn up around the TLE9018DQK, as shown in the figure below.

Because the TLE9018DQK has its own passive-balancing MOSFETs, the external balancing circuit only needs current-limiting resistors. Considering resistor power dissipation at the maximum balancing current, two 82 Ω, 2512-package power resistors are connected in parallel to improve reliability (see the rough calculation below). The eight temperature-measurement channels are connected through 3.3 kΩ current-limiting resistors, with external 100 kΩ NTC thermistors used for temperature sensing. If not all temperature channels are used, the unused pins must be pulled down to ground.
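As a rough, illustrative check of that parallel-resistor choice (all values assumed: a fully charged cell of about 4.2 V, with the internal balancing MOSFET and trace resistance neglected), the arithmetic looks like this:

```python
cell_v = 4.2                         # assumed cell voltage during balancing (V)
r_each = 82.0                        # each external resistor (ohms)
r_eff  = r_each / 2                  # two 82 R resistors in parallel -> 41 R
i_bal  = cell_v / r_eff              # ~0.10 A balancing current
p_each = (i_bal / 2) ** 2 * r_each   # ~0.21 W in each resistor, well inside a
                                     # typical 2512 rating of about 1 W
print(f"I = {i_bal * 1000:.0f} mA, P = {p_each:.2f} W per resistor")
```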
The four interface pins IFL_L, IFL_H, IFH_L, and IFH_H connect externally to Infineon’s TLE9015 isolated-communication chip, which enables multi-level daisy-chain communication.

The ERR pin connects directly to the MCU and feeds back the chip’s current state.

The chip also provides four GPIO pins that can control external indicators, alarms, and so on; they are not used here for the time being, so they are pulled down to ground.

The post An Introduction to Several Commonly used AFEs and their Schemes appeared first on ELE Times.

TRI to display new Wafer Inspection and Metrology Solution

Thu, 08/08/2024 - 12:56

Test Research, Inc. (TRI), the leading test and inspection systems provider for the electronics manufacturing industry, will join SEMICON Taiwan held at Taipei Nangang Exhibition Center, Hall 1 – 4F from September 4 – 6, 2024. OmniMeasure, TRI’s SEMI inspection partner, will be joining TRI’s booth #N0990.

Visit TRI’s booth to learn more about AI-powered AOI solutions for the Semiconductor and Advanced Packaging Industry. TRI has SEMI Inspection solutions for Advanced WLP/PLP and SEMI Back-End Package processes.

TRI will be exhibiting the new Wafer Inspection Platform, TR7950Q SII, featuring a 25 MP camera with 2.5 μm resolution and a 12 MP camera with a 0.55 μm microlens for high-resolution 2D/3D DFF inspection, along with AI-powered inspection algorithms. The TR7950Q SII is suitable for wafer macroscopic 3D inspection and micro-measurement metrology, and can inspect Advanced WLP, wafer frames, patterned wafers, wafer bumping, WLCSP, through-silicon vias (TSV), and more. Thanks to the TSV module from OmniMeasure, it can inspect TSVs at ultra-high speeds; its TSV metrology functions include sensing TSV depth, trench depth, and oxide, nitride, PR, and PI film thickness.

TRI will also showcase the latest back-end inspection solutions, TR7007Q SII-S and TR7700Q SII-S. The TR7007Q SII-S can inspect Mini-LED, C4 bumps (~100 μm Ø), and 008004 paste inspection applications. The AI-powered 3D SEMI AOI, TR7700Q SII-S, can inspect die, wire diameters of up to 15 μm (0.6 mil), SiP, underfill, bumps, and more. The lineup will also include an X-ray Inspection Demo Station. TRI’s SEMI AXI solutions can inspect C4 bumps and Cu pillars.

OmniMeasure will display the TGV (Through Glass Via) 3D Viewer, a TGV metrology tool that employs non-contact tomography measurement to view cross-sections of the glass vias easily. The TGV viewer can also measure sidewall angles without the need for SEM.

Visit TRI’s Booth No. N0990 at SEMICON Taiwan 2024 to learn more about TRI’s SEMI applications and the latest inspection innovations for the Semiconductor and Advanced Packaging Industry.

The post TRI to display new Wafer Inspection and Metrology Solution appeared first on ELE Times.

New, powerful SoCs aid drive to e-mobility

Thu, 08/08/2024 - 12:29

Courtesy: Nordic Semiconductor

The way we get from A to B and back is rapidly evolving. At the heart of the urban transport revolution is technology-driven electric mobility (‘e-mobility’). This encompasses not only electric bikes (‘e-bikes’), electric scooters (‘e-scooters’), and other lightweight electric transport, but also electric cars that rely on the availability of electric vehicle (EV) charging infrastructure to stay in motion. Driven by smart connectivity solutions, the e-mobility market is taking transportation efficiency to the next level in cities around the world.

Shared micromobility takes commuters the last mile 

Micromobility technologies are enabling flexible, cost-effective, and eco-friendly ‘last mile’ alternatives to traditional commuting. For example, rentable e-bikes and e-scooters now make it faster, cheaper, healthier, more convenient, more efficient, and more environmentally friendly for people to travel the final part of a journey. Better yet, these solutions allow people to avoid private and public transport, reducing both traffic congestion and carbon footprints.

It’s still early days for the sector, but all signs point to sustained growth and exciting development. One report by Market Research Future forecasts the global micromobility market will expand from $114.15 billion in 2024 to $303.47 billion by 2032 at a CAGR of 13 percent during the forecast period.

Bluetooth LE the key to unlocking e-mobility potential 

Advanced low power wireless connectivity is the key to efficient e-mobility. Shared micromobility solutions require short-range wireless technologies such as Bluetooth LE to communicate between smartphones and rented transport, enabling equipment unlocking and mobile payment/subscription functionality.

By reliably and securely linking shared e-bikes and e-scooters to smartphones, for example, riders can use associated apps to not only locate and unlock the nearest machine, but also take advantage of unique features such as beginner/safe modes and the ability to check estimated travel times to help plan journeys.

Bluetooth LE is currently used in most shared bikes for communication between the bike and a linked mobile because of its ubiquitous smartphone interoperability. Other systems employ cellular IoT with Bluetooth LE as a backup connectivity technology. The low power consumption of both cellular IoT and Bluetooth LE ensures e-vehicles remain connected for long periods.

Wireless tech powers electric vehicle charging stations 

And it’s not only micromobility that’s shifting the gears of urban transport. Reliable, secure wireless connectivity also enhances the value proposition of EV charging stations. By encouraging EVs instead of conventional vehicles in city centers, carbon emissions are slashed, and everyone gets to breathe cleaner air.

Wireless connectivity enables EV charging stations to become smart. For example, data can be gathered on the availability and condition of charging sockets. This data can be relayed to a central platform for staff to respond to disruptions or problems remotely. Avoiding potential technical issues can improve availability of the charging outlet for the consumer’s benefit.

By seamlessly integrating Bluetooth LE, Wi-Fi, and cellular IoT, developers can create innovative charging solutions that meet the evolving needs of the EV industry. This is important as EV adoption is accelerating and a large fleet of reliable charging points will be needed to meet demand.

One innovative solution for increasing the number of charging points is to integrate them into smart streetlamps. Streetlamps are already connected to the mains electricity supply, offering a ready source of energy for EVs. The U.K., for example, already boasts over 8,000 streetlight and bollard charging stations. Further lamppost conversions will allow the country to greatly expand its network of over 53,000 public charging points. Given that charging point access has proved a significant barrier to EV adoption, converting streetlamps into EV charging stations will aid the rollout of EVs generally by making charging more accessible and convenient.

Nordic nRF54 Series future ready for e-mobility advancement 

Nordic Semiconductor’s latest generation SoCs provide a powerful solution for developers of innovative e-mobility applications. The nRF54H20, for example, boasts multiple Arm Cortex-M33 processors and multiple RISC-V coprocessors. Combined with 2 MB non-volatile memory and 1 MB of RAM, the nRF54H Series endows the developer with the dedicated computing power needed to run complex e-mobility applications while also keeping power consumption low to extend battery life.

As one of the most secure low power, multiprotocol SoCs on the market, the nRF54H20 is an ideal connectivity solution for e-mobility applications that demand protection of sensitive personal data used for payment, as well as safeguarding valuable e-transport assets.

Furthermore, the nRF54H20 features an integrated high-speed CAN FD controller. CAN (controller area network) is a standard bus used in vehicles for communications within their electronic systems, and integrating it into the nRF54H20 provides a powerful option for lower cost e-mobility implementations.

Tomorrow’s e-mobility solutions powered by the nRF54H20 will be even more flexible, convenient, efficient, and secure. What those solutions will look like is down to the imagination of the developer, but we can be sure that they will extend micromobility to an even wider population, resulting in cleaner, quieter, and safer cities.

The post New, powerful SoCs aid drive to e-mobility appeared first on ELE Times.

Simulating Thermal Propagation in a Battery Pack

Thu, 08/08/2024 - 12:16

Courtesy: Comsol

Picture this: A battery pack is connected to a charger and is left to recharge. The first minute passes without incident, with electricity flowing into the pack as expected. Suddenly, one battery cell experiences a short circuit and rapidly heats up, which in turn sparks a chain reaction as other cells in the pack follow suit. By the time 20 minutes have passed, the entire battery pack has been completely ruined. To explore this potentially dangerous scenario, we modelled a battery pack as it endures this rapid change.

The Risks of Batteries Going Wrong

Batteries can experience thermal runaway when they are pushed beyond their normal operating range, subjected to damage, or suffer a short circuit, like in our dramatic example above. During this process, a battery cell heats up uncontrollably and triggers adjacent cells to follow suit. When excessive heat generation is not counteracted by sufficient dissipation, the whole pack exhibits thermal runaway. This can quickly damage the entire battery pack beyond use. In worst-case scenarios, the extreme temperatures can even start fires, with potentially dire consequences.

To get insight into how this type of failure could develop and progress in prospective designs, battery designers can turn to modeling and simulation (M&S) to test their designs without damaging any materials — or themselves, for that matter — in the process. M&S makes it possible to look inside the battery pack in a way that is impossible in a lab setting, and multiphysics simulation specifically ensures that the models reflect the real-world context in which the battery pack will eventually live.

Building a Battery Pack Model in COMSOL Multiphysics

Let’s take a look at a simple pack of 20 cylindrical batteries in a 5s4p configuration. In a 5s4p configuration, 4 sets of battery cells are connected in parallel, and each set contains 5 serially connected individual battery cells. For this model example, we included two plastic holder frames to keep the batteries in their locations and fix the cell-to-cell distances. The model also has parallel connectors welded to the serial connectors, midway between the battery cylinders, and a thin plastic wrapping that encloses the whole pack. This wrapping forms a compartment of quiescent air surrounding the battery cylinders.

The modelled battery pack geometry.

The model uses the following materials from the material library in the COMSOL Multiphysics software:

  • Acrylic plastic (for the plastic holders)
  • Steel AISI 4340 (for the connectors and battery terminals)
  • Air (for the air in the compartment)

Next, let’s trigger thermal runaway in the pack! To initiate our propagation, we assume that one cell endures a short circuit early in the charging process.

Modeling Thermal Runaway

In our simulation, as soon as the short circuit is triggered (at the 1-minute mark) the maximum measured temperature within our battery pack instantaneously increases by more than 300°C. However, the average temperature only jumps moderately as just one battery cell experiences this dramatic increase in temperature. We see an incubation period during which nearby cells are warmed by our problem cell until another cell is triggered to heat up instantly.

Pack voltage and maximum battery temperature in the pack.

The threshold temperature for the remaining cells to be triggered into a thermal event is 80°C, and with the overall heat in the battery pack growing, the intervals between successive cell runaways become shorter. To simulate the loss of electrolyte and the resulting increase in internal cell resistance, the internal ohmic resistance of a battery cell is set to increase by about two orders of magnitude when a thermal event is triggered.

At the 10-minute mark, the maximum charging voltage limit has been reached and the charger is turned off. Unfortunately, this has come too late to prevent further damage, and the thermal runaway continues to propagate throughout the rest of the pack. After just a few more minutes, we have lost all 20 of our battery cells. The thermal processes have run their course by the 20-minute mark, but the average temperature of our battery pack remains at more than 350°C. Had this been a real battery pack, the modelled scenario would likely have resulted in a fire, or even an explosion.
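For readers who want to experiment with the propagation mechanism itself, the toy Python sketch below captures the qualitative behaviour described above: cells heat their neighbours, any cell crossing the 80°C threshold releases a burst of energy, and the event cascades through the pack. All coefficients are made-up illustrative values; this is not the COMSOL Multiphysics model.

```python
import numpy as np

N, DT = 20, 0.5                       # 20 cells in a row, 0.5 s time step
T = np.full(N, 25.0)                  # starting temperatures (deg C)
tripped = np.zeros(N, dtype=bool)
C_TH, K_NEIGHBOUR, K_AMBIENT = 80.0, 0.6, 0.05   # assumed heat capacity (J/K) and couplings (W/K)
E_RUNAWAY, T_TRIGGER = 40_000.0, 80.0            # assumed energy release (J) and trigger threshold

T[7] += 300.0                         # the short-circuited cell jumps by >300 deg C
tripped[7] = True

for _ in range(int(19 * 60 / DT)):    # simulate the remaining ~19 minutes
    new = (T >= T_TRIGGER) & ~tripped
    T[new] += E_RUNAWAY / C_TH        # instantaneous jump when a cell trips
    tripped |= new
    flux = K_AMBIENT * (25.0 - T)     # losses to the quiescent air
    flux[:-1] += K_NEIGHBOUR * (T[1:] - T[:-1])   # conduction to the right neighbour
    flux[1:]  += K_NEIGHBOUR * (T[:-1] - T[1:])   # conduction to the left neighbour
    T += flux * DT / C_TH

print(f"{tripped.sum()} of {N} cells tripped; mean pack temperature {T.mean():.0f} deg C")
```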

Prevent Problems Before They Arise

Batteries that have been kept too hot, operated in an unsafe way, or damaged can experience thermal runaway events. When one part of the system begins to overheat, things can rapidly devolve. By modeling these events, users can virtually test their designs and verify, for example, the effectiveness of battery management systems as well as the temperature regulation of the system in potential deployment locations. It is through this approach that thermal runaway events can be better understood and, hopefully, avoided altogether.

The post Simulating Thermal Propagation in a Battery Pack appeared first on ELE Times.

AET Displays to Launch 5 New LED Solutions in 2024, Expanding Current Range of 60+ Products in India

Thu, 08/08/2024 - 10:44

The company’s market presence in India has been steadily growing, with over 25,000 square meters of LED displays deployed to date

Building on this success, the company has set an ambitious target for 2024, aiming to achieve more than 5,000 deployments by the end of this year

The market reception for AET Displays has been overwhelmingly positive, with a surge in business inquiries not only from India but also from other APAC regions, including Malaysia, Korea, Singapore, Hong Kong, Indonesia, and Thailand

AET Displays, a renowned industry expert in fine-pitch LED displays, has announced plans to launch five new LED solutions by the end of 2024, further expanding its already impressive range of over 60 products available in the Indian market. Currently, AET Displays boasts a comprehensive product lineup tailored for both outdoor and indoor applications. The outdoor category features more than 20 SKUs, including the AEO Series, AEO Plus Series, AEO Pro Series, and AMO Series. For indoor environments, the company offers over 30 SKUs, comprising the AT Series, NT Series, KOALA Series, NX Series, and All in One Series. Additionally, AET Displays provides specialized solutions such as Flexible Screens, Transparent Series, Modules, and Rental Series, catering to unique customer requirements.

The company’s market presence in India has been steadily growing, with over 25,000 square meters of LED displays deployed to date. AET Displays has made significant inroads in more than 20 diverse sectors, including government institutions (ministries, defence, PSUs, and state entities), broadcasting and media houses, retail, education, hospitals, corporate environments, transportation hubs (airports, railways, and metro stations), outdoor and indoor advertising, NOC rooms, surveillance facilities, and the cinema industry. Notably, the government sector, broadcasting and media houses, retail and corporate clients, and the entertainment industry have contributed most to the company’s revenue stream, highlighting AET’s strong position in high-demand, high-visibility markets. Geographically, AET Displays has seen a particularly strong market presence in the South, West, and North regions of India. Building on this success, the company has set an ambitious target for 2024, aiming to achieve more than 5,000 deployments by the end of this year.

Commenting on the company’s expansion plans, Mr. Su Piow Ko, Vice President of AET Global, stated, “Our decision to introduce five new LED solutions by the end of 2024 is a direct response to the dynamic market demands and our unwavering commitment to technological leadership. The remarkable success we’ve achieved in India, coupled with growing interest from other APAC regions, validates our approach to innovation and quality. These new solutions are designed not just to meet current market needs, but to anticipate future requirements, ensuring AET Displays remains at the forefront of visual communication technology. We are confident that these new offerings, when launched, will be received with love and appreciation by our customers and partners.”

The flagship product of AET Displays, the AT 55′ COB, exemplifies the company’s technological prowess. Utilizing MIP (Mass Transfer) Technology and featuring HDMI connectivity, this versatile display offers 2K resolution in a single unit, making it ideal for a wide range of indoor applications. At the heart of AET Displays’ innovation is its cutting-edge COB (Chip on Board) technology and patented QCOB Technology. Implemented across all indoor Active LED Displays, QCOB Technology provides an IP65 rating with ingress protection on the surface, ensuring both moisture and dust resistance. This proprietary technology is part of AET Displays’ impressive portfolio of over 1,000 patents, highlighting the company’s dedication to research and development in the LED display industry.

The market reception for AET Displays has been overwhelmingly positive, with a surge in business inquiries not only from India but also from other APAC regions, including Malaysia, Korea, Singapore, Hong Kong, Indonesia, and Thailand. As the demand for active LED displays continues to rise, AET Displays is strategically positioning itself to capture a larger market share in both government and corporate sectors. To support this growth and ensure customer satisfaction, AET Displays plans to double its employee strength by the end of 2024, with a focus on enhancing after-sales and pre-sales support.

Mr. Su Piow Ko, CEO, AET Display

The post AET Displays to Launch 5 New LED Solutions in 2024, Expanding Current Range of 60+ Products in India appeared first on ELE Times.

Embedded Technology in Electronics: Powering the Future, Igniting Careers!

Thu, 08/08/2024 - 10:27

Buckle up, tech enthusiasts! We’re about to dive into the electrifying world of embedded technology in electronics – a realm where innovation meets opportunity, and where the tiniest chips spark the grandest revolutions. If you’re looking for a career that’s not just cutting-edge but blazing a trail into the future, you’ve come to the right place! In today’s interconnected world, embedded technology silently powers countless devices and systems that we interact with daily. From smart home appliances to advanced industrial machinery, embedded systems form the backbone of modern technological innovation. This article explores the world of embedded technology, its applications, and the exciting career opportunities it offers.

What is Embedded Technology?

Embedded technology refers to computer systems designed for specific functions within larger mechanical or electrical systems. Unlike general-purpose computers, embedded systems are optimized for particular tasks, often with real-time computing constraints. These systems typically consist of a microprocessor or microcontroller, memory, input/output interfaces, and software tailored to the application.

Key Characteristics of Embedded Systems:

  • Dedicated functionality
  • Real-time operation
  • Limited resources (memory, processing power)
  • Low power consumption
  • High reliability and durability
  • Often operating without human intervention
Applications Across Industries: Powering Innovation and Efficiency

Embedded systems find applications across a diverse range of industries, driving innovation, enhancing operational efficiency, and improving user experiences.

  • Automotive Electronics: Embedded systems play a pivotal role in automotive electronics, powering advanced driver assistance systems (ADAS), infotainment systems, and vehicle telematics. These systems enable intelligent features such as adaptive cruise control, collision avoidance, and autonomous driving technologies.
  • Healthcare and Medical Devices: Medical IoT devices equipped with embedded systems monitor patient health, deliver personalized treatments, and transmit critical data securely to healthcare providers. Embedded systems in medical devices ensure reliability, accuracy, and compliance with regulatory standards for patient safety.
  • Smart Home and Consumer Electronics: From smart thermostats to connected appliances, embedded systems enhance convenience, energy efficiency, and connectivity in modern homes. These systems enable seamless integration, remote monitoring, and intelligent automation for enhanced lifestyle experiences.
  • Industrial Automation and Manufacturing: Embedded systems drive automation and process control in industrial environments, optimizing production efficiency, monitoring equipment performance, and enabling predictive maintenance. Industrial IoT platforms leverage embedded systems for real-time analytics, inventory management, and supply chain optimization.
Growing Importance of Embedded Technology

As we progress towards a more connected and automated world, the demand for embedded systems continues to surge. The IoT revolution, Industry 4.0, and the push for smarter, more efficient devices are driving factors behind this growth. According to market research firm Precedence Research, the global embedded systems market reached USD 162.3 billion in 2022 and is expected to hit around USD 258.6 billion by 2032, growing at a CAGR of 4.77% during the forecast period from 2023 to 2032.

Embedded Future: A Symphony of Progress

The embedded revolution is a marathon, not a sprint. By embracing the practical realities, fostering collaboration, and continuously pushing boundaries, we can unlock the full potential of embedded systems. These tiny titans have the power to revolutionize industries, improve our lives, and create a more connected, efficient, and sustainable future. The future is embedded, and it’s an orchestra waiting to be conducted. Are you ready to pick up the baton and join the symphony?

Call to Action: Be a Part of the Embedded Revolution

The future of embedded systems is bright, and the Electronics Sector Skills Council of India (ESSCI) is committed to equipping professionals with the necessary skills to lead this revolution. ESSCI offers a range of skill development programs in IoT hardware and Embedded Full Stack for candidates who meet specific educational and experience requirements. ESSCI provides four specialized courses – Embedded Software Engineer, Embedded Product Design Engineer-Technical Lead, Embedded Full Stack IoT Analyst and IoT Hardware Analyst. These roles involve preparing comprehensive blueprints of hardware, including schematic layouts, quality verification requirements, and performing PCB testing in compliance with regulatory standards. The design documentation process ensures all details are accurately recorded. Additionally, individuals in these roles are responsible for the efficient functioning and overall performance of the systems.

Career progression in embedded technology often involves moving from junior roles to senior engineering positions, then to team lead or project manager roles. Some professionals may specialize in particular industries or technologies, while others may transition into roles such as systems architect or technical director.

Skills and Qualifications:

To succeed in the embedded technology field, professionals typically need:

  • Strong programming skills, especially in C and C++
  • Knowledge of microcontroller architectures and peripherals
  • Familiarity with real-time operating systems (RTOS)
  • Understanding of digital electronics and circuit design
  • Experience with debugging tools and techniques
  • Proficiency in version control systems like Git
  • Knowledge of communication protocols (I2C, SPI, CAN, etc.)
  • Familiarity with IoT platforms and cloud technologies
  • Problem-solving and analytical skills
  • Ability to work in cross-functional teams

In conclusion, the embedded revolution is a testament to human ingenuity. By harnessing the power of these tiny titans, we can create a future that is not only technologically advanced but also efficient, sustainable, and improves our quality of life. Join the movement, become a part of the symphony, and let’s shape the future together, one embedded system at a time.

Dr Abhilasha Gaur- Chief Executive Officer, Electronics Sector Skills Council of India

The post Embedded Technology in Electronics: Powering the Future, Igniting Careers! appeared first on ELE Times.

BoardSurfers: Leveraging Object Hierarchy for Effective Constraint Management

Thu, 08/08/2024 - 10:11

Courtesy: Cadence Systems

Allegro X Constraint Manager provides a worksheet-based environment where you define and manage constraints for all the objects in your design. In large, complex designs with various object relations, grouping objects can easily manage constraints. Grouping objects helps to assign constraints to multiple objects at once. However, assigning unique values to individual objects that are part of these group objects requires understanding constraint inheritance and precedence. For instance, constraining multiple Net Groups, which share the same constraints except for one constraint for one of the Net Groups.

Constraining design objects in the Allegro X design environment is a streamlined process. Constraints are organized in a hierarchy, which governs their flow across the objects, ensuring that the expected constraints appear at the appropriate levels in the design.

Constraint Inheritance

Constraints defined for objects at the top level of the object hierarchy are inherited by the lower-level objects, as illustrated in the following table:

For example, if you define the MIN_LINE_WIDTH constraint for a Net Class object in the Physical domain, all the objects placed below the Net Class object in the constraint hierarchy—Net Groups, Buses, Differential Pairs, XNets, Nets, Pin Pairs, Regions, and Region Classes—inherit the new value of the MIN_LINE_WIDTH constraint as well.

In Constraint Manager, assigned values appear in bold blue and inherited values appear in white text (in dark mode, which is the default).

In the illustrated example, when you update the constraint value for the Net Class, POWER_GROUP(10), the value of the MIN_LINE_WIDTH constraint is updated for all the nets under it.

Constraint Precedence

Constraints defined for the objects that are placed at a lower level in the object hierarchy take precedence over the values of the same constraints applied to higher-level objects, as illustrated in the following image:

In the following example, you can override the value of the MIN_LINE_WIDTH constraint for a Net object that already has the constraint inherited from a higher-level object in the hierarchy, such as Net Classes or Differential Pairs. The constraint value for all the higher-level objects associated with the updated net remains unchanged.

Constraints for a design must be defined at the highest level of the object hierarchy. This ensures that the constraints are consistent across all the objects in the hierarchy, as all the lower-level objects inherit the constraints. You can update the individual objects that need to be constrained differently.

Constraint Resolution

The Allegro X constraint system adheres to object precedence when resolving constraints. Constraint resolution works differently for constraints, depending on the domain. There are no default values for any electrical constraints in the Electrical domain. You can have unspecified electrical constraints for design objects, but not in the Physical, Spacing, and Same Net Spacing domains. In these domains, physical design objects—clines, shapes, pins, or vias—are considered part of a net or an XNet. Constraint Manager uses the constraint value that is set on a net or XNet object.

If the net or XNet is not constrained directly, it inherits a constraint value from a higher-level object in the constraint hierarchy that includes this net as a member. The higher-level object can be a group object, such as a Match Group, Differential Pair, Bus, or Net Class.

In this way, the Constraint Manager moves one level up to look for a constraint value. It continues this process until it finds a constraint specified on a level that includes the net as a member and uses that constraint value. If no constraint value is specified on the net or on a hierarchy level to which the net belongs, the net inherits the constraint from the design (Dsn).
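The lookup can be pictured with the small Python sketch below: each object points to its parent in the constraint hierarchy, and resolution walks upward from the net until it finds a specified value, falling back to the design (Dsn) level. This is only an illustration of the precedence rules described here, not Cadence’s actual implementation or API; the object and constraint values are made up.

```python
# Minimal sketch of the bottom-up constraint lookup described above.
class CObject:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.constraints = name, parent, {}

def resolve(obj, constraint):
    level = obj
    while level is not None:
        if constraint in level.constraints:
            return level.constraints[constraint], level.name
        level = level.parent           # move one level up the hierarchy
    return None, None                  # unspecified (possible only in the Electrical domain)

dsn = CObject("Dsn")
net_class = CObject("POWER_GROUP", parent=dsn)
net = CObject("VDD_1V8", parent=net_class)

dsn.constraints["MIN_LINE_WIDTH"] = 5.0        # design-level default (illustrative value)
net_class.constraints["MIN_LINE_WIDTH"] = 10.0 # class-level value inherited by member nets

print(resolve(net, "MIN_LINE_WIDTH"))          # (10.0, 'POWER_GROUP') - class value inherited
net.constraints["MIN_LINE_WIDTH"] = 12.0       # direct value on the net takes precedence
print(resolve(net, "MIN_LINE_WIDTH"))          # (12.0, 'VDD_1V8')
```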

Conclusion

You can leverage the constraint inheritance and precedence behavior to your advantage by grouping design objects appropriately. This can significantly aid the process of constraining design objects. Instead of defining a consistent property at multiple object levels, if the objects are properly organized in group-objects, you can simply define the constraint at the highest required level and have the rest of the objects inherit the constraint.

 

The post BoardSurfers: Leveraging Object Hierarchy for Effective Constraint Management appeared first on ELE Times.

25 years of Wi-Fi A quarter century of Broadcom innovation

Wed, 08/07/2024 - 14:25

Like the internet and computer, Wi-Fi has woven itself into the fabric of our daily lives for more than two decades. The term “Wi-Fi” – first used in 1999 – helped usher in a new era of connectivity. However, it was Steve Jobs’ iconic “One more thing” reveal at the 1999 Macworld event in New York that truly catapulted Wi-Fi into the limelight. He introduced the iBook laptop, fully equipped with Wi-Fi connectivity, marking a pivotal moment in digital communication. This event not only popularized Wi-Fi but also set it on a path to becoming the ubiquitous wireless networking technology it is today, seamlessly integrating into our homes, workplaces, schools, and public spaces around the world.

Remarkable evolution

 

Wi-Fi technology has evolved considerably over the past 25 years with each generation marked by significant innovation and improvements. In 1999, Wi-Fi was only capable of supporting up to 11 megabits per second based on the IEEE 802.11b standard. Now in its seventh generation, Wi-Fi access points can reach speeds of about 25 gigabits per second. That’s more than 2000x improvement in speed performance.

Continuous improvements to the IEEE 802.11 standards over the past two and a half decades have made Wi-Fi one of the fastest-adopted technologies in modern times. From 1999 to the early 2000s, there were no Wi-Fi enabled mobile devices, only a small number of laptops equipped with Wi-Fi connectivity. Today, Wi-Fi is one of the most prevalent technologies used all over the world, with a huge installed base of connected devices, including smartphones, tablets, PCs, and wireless access points. To drive home the point: without Wi-Fi, we would not have video streaming services for binge-watching our favorite TV shows, or ChatGPT on our computers. According to the latest IDC research, fewer than 2.5 million Wi-Fi enabled devices were shipped in 2000. By the end of 2024, the cumulative shipment of Wi-Fi enabled devices is expected to surpass 45 billion units, with an installed base of more than 20 billion units. Wi-Fi has undoubtedly become ubiquitous in everyday devices and plays an important role in today’s hyperconnected world.

The sheer growth of connected devices in the past decade has led to a massive increase in wireless data traffic, which started putting a strain on the airwaves used by these devices and limiting the actual user experience in many instances. Having the foresight to increase unlicensed spectrum access to meet the rising data demand, the U.S. Federal Communications Commission (FCC), chaired by Ajit Pai, made a monumental decision on April 23, 2020 to open up 1.2 GHz of spectrum in the 6 GHz band for Wi-Fi. The new swath of bandwidth (5.925 – 7.125 GHz) not only boosts Wi-Fi speed performance, but also reduces the uplink and downlink latency dramatically. This was quickly followed by a spate of countries opening up the 6 GHz band for unlicensed access. Today, countries accounting for over 70% of the world’s GDP have enabled the 6 GHz band, underscoring the recognition for better, faster Wi-Fi as a way of life.

Allowing Wi-Fi devices to operate in the 6 GHz band was pivotal in the evolution of Wi-Fi. This paradigm shift in wireless connectivity has enabled major advances in Wi-Fi applications and services and unlocked many new use cases, such as 16K video streaming, real-time collaboration, and wireless gaming.

Sustained continuous innovation

Since the release of IEEE 802.11b standard in 1999, Broadcom has been at the forefront of Wi-Fi development and played a major role in driving innovation and technology adoption. Broadcom has pioneered successive generations of Wi-Fi chips that have enabled countless new applications and transformed wireless experiences. Broadcom Wi-Fi chips are found in billions of devices spanning both the consumer and enterprise markets. With a steadfast commitment to innovation, Broadcom continues to push the frontiers of wireless communications, supporting our global vision of Connecting Everything and bridging the digital divide.

A few of our more notable achievements that have helped in the evolution and advancement of Wi-Fi technology are shown below. While the past 25 years of Wi-Fi has been impressive, we are excited about the possibilities and opportunities that lie ahead for Wi-Fi. We look forward to the next 25 years.

Vijay Nagarajan, Vice President, Wireless Connectivity Division, Broadcom

The post 25 years of Wi-Fi A quarter century of Broadcom innovation appeared first on ELE Times.

Only your fingers have the force

Wed, 08/07/2024 - 13:56

Courtesy: Avnet

It’s not the first time we’ve talked about the phenomenon of the ‘ghost touch’ or ‘false touch’, where a touch screen responds, seemingly without human interaction. Fortunately (or unfortunately, depending on how you view it!), there is nothing spooky going on here. There are quite a few circumstances where it could happen – electrical noise or even water spilling on the screen can trigger an unwanted response in a standard projected capacitive touch screen. Simply put, the screen just can’t tell what’s human and what is not.

If your screen is being used for tasks which have safety implications, as many of our customers do, this is far from ideal. So how do we tackle the problem?

We use force. (No, not that kind of force.)

By adding a pressure detection solution under the touch screen, we can remove all fear of false triggering. Based on electromagnetic induction, ‘force touch’ creates a waterproof, mistake-proof environment that will even allow for continuous clicking without lifting your hand. But how does this work?

Eddy Current Pressure Sensors

These sensors operate on the principle of electromagnetic induction and consist of a spiral planar coil fabricated on a printed circuit board (PCB). When an alternating current (AC) flows through the coil, it generates an alternating magnetic field around it.

If a conductive material (such as a metal target) is brought near this magnetic field, eddy currents are induced within the material. These then create an opposing magnetic field, which reduces the inductance of the sensor. The inductance changes as a function of the distance between the sensor and the conductive surface.

Why we use Eddy Current Pressure Sensors in TFT Projects

ECP sensors can measure pressure over a very large surface area. In TFT (thin-film transistor) projects, these sensors can be very useful for touchscreens or interactive displays and bring many benefits:

Accurate Measurement of Distribution Force: This means that, in a TFT display, you can easily and precisely detect variations in pressure across the screen.

Button Replacement: Eddy current sensors can even be an alternative to physical buttons. Plus, they don’t need any cutouts or holes in the display, which makes for a sleeker design.

Unaffected by debris, liquids or magnetic interference: Unlike mechanical buttons, these sensors are immune to these problematic external factors.

Into the assembly – how pressure sensing sits in the stack

Pressure sensitive coils are made on a Flexible Printed Circuit (FPC) and laminated to the metal frame and touch screen with double-sided adhesive. Micro-deformation occurs between the inductive layer and the metal frame when the touch screen surface is pressed, and the pressure-sensitive chip detects this electromagnetic change.

Fig. Pressure-sensitivity stack-up infographic

Design benefits of a Pressure Sensing Solution

This approach reduces structural design difficulties by using the metal frame for touch screen assembly and there is no need for additional sensors when you can just use the PCB metal alignment.

The pressure-sensitive chip has a built-in algorithm which directly outputs the press force level, according to the commissioning parameters, after the structure has been assembled, making it easy for our customer to adjust.

The IO port can drive LEDs directly, creating an integrated solution from pressure sensing to light output.

The force is subdivided into 1,024 levels within a 0.1 mm pressure-deformation range (see the sketch below).
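A minimal sketch of that subdivision: the micro-deformation inferred from the coil’s inductance shift is mapped onto one of 1,024 discrete force levels across the 0.1 mm range. The linear mapping is an assumption for illustration; the pressure-sensitive chip applies its own calibrated algorithm.

```python
def force_level(deflection_mm, full_scale_mm=0.1, levels=1024):
    """Quantise a measured deflection (mm) into one of 1024 force levels."""
    d = min(max(deflection_mm, 0.0), full_scale_mm)   # clamp to the sensing range
    return round(d / full_scale_mm * (levels - 1))

print(force_level(0.025))   # a quarter of full scale -> level 256
```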

Eddy Current Sensors are reliable and offer good linearity and repeatability in sizes up to 12.1”.

And finally – they even work with gloves! That’s superb news for those working in extreme environments, because eddy current sensors aren’t affected by temperature variations, unlike some other pressure sensors. In fact, you could go as far as to say that we’ve ‘forced’ out ghost touches for good.

The post Only your fingers have the force appeared first on ELE Times.

Staying Connected when always on the Move – the Communication Backbone of Mobile Robots

Wed, 08/07/2024 - 12:34

Courtesy: Analog Devices

Mobile robots consist of various technologies that must communicate with each other quickly and reliably to transmit critical messages for navigation and performing tasks, whether it’s an Autonomous Mobile Robot (AMR) or an Automated Guided Vehicle (AGV). Let’s consider the architecture of an AMR as shown:

There are several components that make up any mobile robot (such as wheel drive and encoder systems, vision inputs, inertial measurement unit (IMU) data, and battery management systems), and all of them need to communicate, usually with a main controller or main compute unit, or sometimes with decentralized units that control specific functions of the robot. Decentralizing can reduce the overhead on the main controller and also helps in time-critical applications such as perception of the environment and actuator control. Many communication methods live within the operation of a typical mobile robot, and each type of protocol has its pros and cons. In the above example there are potentially 7 different communication methods employed within the one mobile robot: GMSL, UART, CAN, Ethernet, RS-485, SPI, and RS-422. While this blog focuses on wired communication protocols, it is important to note that mobile robots typically require wireless communication as well. Wireless communication is essential for enabling mobile robots to interact with a base station and collaborate with other robots, ensuring seamless coordination and operation in dynamic environments.

Here is a quick comparison of a selection of technologies comparing their speed and latency.

As can be seen in Table 1, the parameters for the highlighted technologies vary in speed and latency; the appropriate technology needs to be chosen according to the requirements and the design itself, and will most likely involve a combination of different technologies. Operations in mobile robots typically demand near real-time speeds to function effectively. This is crucial for tasks such as obstacle avoidance, navigation, and interaction with dynamic environments, where even slight delays can impact performance and safety. The key parameters to consider for communication are performance, reliability, and scalability.

An AMR needs to be able to navigate while perceiving its surroundings to execute tasks in an efficient way, and a simple flow diagram can describe how it acts:

Both the perception and the action parts play important roles. The environment needs to be perceived for actions to be taken, and this data is usually acquired with RGB cameras, depth cameras, lidar sensors, and radar, or a combination of these. Transferring all this data to a processing unit requires a robust link with enough bandwidth and, in the case of industrial robots, resilience against interference. That critical work can be handled by protocols such as GMSL.

Gigabit Multimedia Serial Link

There is a new protocol entering the mobile robotics scene: GMSL. The protocol can transfer up to 6 Gbps of advanced driver assistance systems (ADAS) sensor data over a coax cable while simultaneously transferring power and control data over a reverse channel. It is a highly configurable serializer/deserializer (SERDES) interconnect solution that supports sensor data aggregation (video, lidar, radar, etc.), video splitting, low latency and low bit-error rates, and Power over Coax (PoC).

The topology for a GMSL application consists of the sensor, a serializer, a cable, and a deserializer on the system on chip (SoC) side.

This simplifies the mobile robot design and makes it more robust, since GMSL was designed for exactly this type of data and is optimized for high-bandwidth, low-latency transmission.

The synergy between Industrial Ethernet, GMSL, and wireless communication technologies is driving the next generation of mobile robotics. These technologies provide the robust, high-speed, and flexible communication necessary for mobile robots to operate autonomously and efficiently in various environments. As innovations continue to emerge, the capabilities of mobile robots will expand, revolutionizing industries and transforming our daily lives.

The post Staying Connected when always on the Move – the Communication Backbone of Mobile Robots appeared first on ELE Times.

Smallest-ever TI DLP display controller enables 4K UHD projectors for epic displays anywhere

Wed, 08/07/2024 - 10:50
News highlights:
  • New DLP controller is 90% smaller than the previous generation, enabling compact design for consumer applications such as lifestyle projectors, gaming projectors and augmented reality glasses.
  • Designers can replicate the experience of immersive, high-end gaming monitors in a fraction of the size with submillisecond display latency and frame rates up to 240Hz.

Texas Instruments (TI) has introduced a new display controller to enable the smallest, fastest and lowest-power 4K ultra-high-definition (UHD) projectors ever. Measuring just 9mm by 9mm, or the width of a pencil eraser, TI’s DLPC8445 display controller is the smallest of its kind while enabling a diagonal display of 100 inches or more in vivid image quality with ultra-low latency. When combined with TI’s compatible digital micromirror device (DMD), the DLP472TP, and power-management integrated circuit (PMIC) with LED driver, the DLPA3085, TI’s new controller enables designers to replicate the display experiences of high-end televisions and gaming monitors in the form of a compact projector.

For more information, see ti.com/DLPC8445.

“Immersive display entertainment is now sought out by everyday consumers, not just movie enthusiasts and gamers,” said Jeff Marsh, vice president and general manager of DLP Products at TI. “Where consumers once needed a big TV or monitor for a crisp and clear display, they can now use a lifestyle or gaming projector and transform a wall into the screen size of their choosing with 4K UHD quality. Our new controller is the latest example of how TI DLP technology is helping engineers develop epic displays for entertainment that can be taken anywhere.”

Bring big-screen gaming and projection anywhere

Lifestyle and gaming projectors are growing in popularity as consumers seek immersive experiences with their content, from movies and games to TV shows. With TI’s new DLPC8445 controller and DLP472TP DMD, designers can deliver displays that achieve submillisecond display latency, matching or exceeding the world’s most high-end gaming monitors and reducing lag time for gamers.
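
As a quick sanity check on those numbers, the sketch below computes the frame period at common refresh rates and compares it with a sub-millisecond display latency. It is plain arithmetic and assumes nothing about the DLPC8445 internals.

```python
# Frame period versus display latency: at high refresh rates, a sub-millisecond
# display latency is only a small fraction of a single frame time.
def frame_period_ms(refresh_hz: float) -> float:
    """Duration of one frame in milliseconds at the given refresh rate."""
    return 1000.0 / refresh_hz

for hz in (60, 120, 240):
    print(f"{hz:>3} Hz -> {frame_period_ms(hz):.2f} ms per frame")
# At 240 Hz a frame lasts about 4.17 ms, so a display latency under 1 ms adds
# less than a quarter of a frame of delay on top of the game's own render time.
```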

Integration of variable refresh rate (VRR) support, a first for a DLP chipset, will enable better displays for gamers by allowing designers to easily sync frame rates and eliminate lagging, image tearing and stuttering. Advanced image-correction capabilities dynamically adjust for surface imperfections, making it possible for consumers to conveniently take their gaming and viewing experience anywhere. It is also the first DLP controller designed for laser-illuminated battery-powered projectors.

To learn more, see the technical article, “Big-screen gaming anywhere: designing portable 4K UHD gaming projectors up to 240Hz.”

For over 25 years, TI DLP technology has impacted how people experience content, delivering high-resolution display and advanced light control solutions to enable vivid, crisp image quality from movie theaters to homes and even on the go. Learn more at ti.com/DLP.

Available today on TI.com

  • Preproduction quantities of the new DLPC8445 controller, DLP472TP DMD and DLPA3085 PMIC are available for purchase now on TI.com.
  • The DLPC8445 controller is the first device in the family. Future chipsets using the new controller technology will feature DMDs of different sizes and resolutions to address new trends in display applications such as augmented reality glasses.
  • Pricing for the new DLPC8445 controller starts at US$60 in 1,000-unit quantities.
  • Multiple payment and shipping options are available.

The post Smallest-ever TI DLP display controller enables 4K UHD projectors for epic displays anywhere appeared first on ELE Times.

Microchip Introduces High-Performance PCIe Gen 5 SSD Controller Family

Tue, 08/06/2024 - 10:59

Flashtec NVMe 5016 controllers are optimized to manage growing enterprise and data center workloads

The Artificial Intelligence (AI) boom and rapid expansion of cloud-based services are accelerating the need for data centers to be more powerful, efficient and highly reliable. To meet the growing market demands, Microchip Technology has released the Flashtec NVMe 5016 Solid State Drive (SSD) controller. The 16-channel, PCIe Gen 5 NVM Express® (NVMe) controller is designed to offer higher levels of bandwidth, security and flexibility.

“Data center technology must evolve to keep up with the significant advancements occurring in AI and Machine Learning (ML). Our fifth generation Flashtec NVMe controller is designed to lead the market in fulfilling the increased need for high-performance, power-optimized SSDs,” said Pete Hazen, vice president of Microchip’s data center solutions business unit. “The NVMe 5016 Flashtec PCIe controller can be deployed in data centers to facilitate effective and secure cloud computing and business-critical applications.”

The Flashtec NVMe 5016 controller is designed to support enterprise applications such as online transaction processing, financial data processing, database mining and other applications that are sensitive to latency and performance. Additionally, it serves growing AI needs with higher throughput for reading and writing the large data sets used in model training and inference, and provides the high bandwidth necessary to move large volumes of data quickly between storage and compute resources. With sequential read performance of more than 14 GB per second, the NVMe 5016 controller maximizes the usage of valuable compute resources in traditional and AI-accelerated servers under demanding workloads.

In addition to supporting the latest standard NVMe host interface, the NVMe 5016 controller is designed for a high random read performance of 3.5M IOs per second and a power profile focused on power-sensitive data center needs, delivering more than 2.5 GB of data per watt. The NVMe 5016 controller utilizes advanced node technologies and includes power management features like automatic idling of processor cores and autonomous power reduction capabilities. To support the latest Flash memory, including Quad-Level Cell (QLC), Triple-Level Cell (TLC) and Multi-Level Cell (MLC) NAND technologies, the NVMe 5016 controller provides strong Error Correction Code (ECC). All Flash management operations are performed on-chip, consuming negligible host processing and memory resources.
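
For readers who want to put those figures in context, the rough calculation below relates them to the raw bandwidth of a PCIe Gen 5 link. The x4 link width and the 4 KiB random-read transfer size are assumptions made for illustration; the announcement does not state them.

```python
# Rough sanity check on the quoted throughput figures. PCIe Gen 5 runs at
# 32 GT/s per lane with 128b/130b encoding; the x4 width and 4 KiB read size
# below are assumptions for illustration, not stated by Microchip.
GEN5_GT_PER_S = 32
ENCODING = 128 / 130

lane_gb_s = GEN5_GT_PER_S * ENCODING / 8          # ~3.94 GB/s per lane
x4_link_gb_s = 4 * lane_gb_s                      # ~15.75 GB/s raw for a x4 link
print(f"x4 Gen 5 raw bandwidth: {x4_link_gb_s:.2f} GB/s")   # 14 GB/s reads sit close to this ceiling

random_read_gb_s = 3.5e6 * 4096 / 1e9             # 3.5M IOPS at an assumed 4 KiB each
print(f"3.5M x 4 KiB IOPS: {random_read_gb_s:.2f} GB/s")    # ~14.3 GB/s, consistent with the sequential figure
```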

“Microchip’s latest Flashtec PCIe controller, utilizing advanced 6 nm process technology, addresses the power optimization requirements for demanding applications. Its flexible architecture delivers the processing power needed for cutting-edge AI workloads in a compact package,” said Greg Matson, senior vice president of strategic planning and marketing for Solidigm. “The Flashtec PCIe controller’s quality and reliability interop very well with Solidigm’s QLC NAND, ideal for meeting the increasing demand for data-intensive workloads such as AI and ML.”

“Longsys and Microchip have cultivated a strong relationship to drive the rapidly expanding enterprise SSD market,” said Huabo Cai, chairman and CEO of Longsys. “Microchip’s reliable and flexible architecture of the PCIe Flashtec products offers an excellent foundation for Longsys’ enterprise solutions with multiple advanced NAND Flash, delivering efficiency and reliability to standard or customized high-performance enterprise SSDs.”

The NVMe 5016 controller’s flexibility and scalability help reduce the total cost of ownership as advanced virtualization capabilities like single root I/O virtualization (SR-IOV), multiple physical functions and multiple virtual functions per physical function maximize the PCIe resource utilization. The consistent, programmable platform gives developers who plan to utilize Flexible Data Placement (FDP) in their SSDs the control to maximize the performance, efficiency and reliability of Flash resources on the SSD. Coupled with Microchip’s Credit Engine for dynamic allocation of resources, the NVMe 5016 controller enables reliable on-demand cloud services.

“We congratulate Microchip on its latest generation of Flashtec PCIe controllers,” said Maitry Dholakia, vice president of memory products for KIOXIA America, Inc. “The ongoing innovation of ECC in Flashtec controllers, along with their flexible architecture, enables compatibility with our advanced NAND flash products.”

“Congratulations to Microchip on the launch of their new NVMe SSD controllers,” said Dan Loughmiller, director of NAND Product Line Management and Applications Engineering at Micron. “As an industry-leading NAND supplier, our collaboration within the data center storage ecosystem enables customers who want to couple our packaged NAND solutions with their new controllers. We are excited that our work together continues to deliver compatibility with our NAND.”

As the volume of data storage expands, the risk of security threats correspondingly increases, underscoring the imperative for robust and reliable security measures. The Flashtec NVMe 5016 controller is designed to deliver enterprise-level integrity and dependability with comprehensive data protection, uninterrupted operations and safeguarding of confidential information.

Security features have been integrated into the NVMe 5016 controller to help maintain the integrity of both firmware and data throughout its lifecycle, from factory inception to retirement. These features encompass Secure Boot with a hardware Root-of-Trust; dual-signature authentication to facilitate system OEM or end-user verification; support for various security standards through diverse authentication algorithms; user data protection with encryption for both data-in-transit (link level) and data-at-rest (media level); and sophisticated key management practices. These practices adhere to stringent security protocols, including the Federal Information Processing Standard (FIPS) 140-3 Level 2 and the Trusted Computing Group (TCG) Opal standards.

In terms of data integrity and reliability, the controller features overlapping end-to-end data protection with NVMe Protection Information (NVMe PI) and single error correction and double error detection (SECDED) ECC, and advanced error correction through Adaptive LDPC. It also includes failover recovery mechanisms utilizing Redundant Array of Independent Disk (RAID) techniques, further fortifying the resilience of the storage system.
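
To illustrate the SECDED idea mentioned above, here is a textbook-style sketch of an extended Hamming(8,4) code over a single nibble: it corrects any one flipped bit and flags any two. This is a generic teaching example, not Microchip's ECC implementation, which operates over much larger codewords.

```python
# Minimal SECDED demo: extended Hamming(8,4). Corrects single-bit errors,
# detects (but does not correct) double-bit errors. Illustration only.
def secded_encode(nibble: int) -> int:
    d = [(nibble >> i) & 1 for i in range(4)]           # data bits d0..d3
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    codeword = (p1 << 0) | (p2 << 1) | (d[0] << 2) | (p3 << 3) \
             | (d[1] << 4) | (d[2] << 5) | (d[3] << 6)
    overall = bin(codeword).count("1") & 1               # overall parity bit
    return codeword | (overall << 7)

def secded_decode(word: int):
    bits = [(word >> i) & 1 for i in range(8)]
    # Recompute the three Hamming checks; the resulting syndrome is the 1-based
    # position of a single flipped bit within bits 0..6.
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)
    overall_ok = (sum(bits) & 1) == 0
    if syndrome and overall_ok:
        return None                                      # double error: detect only
    if syndrome:
        word ^= 1 << (syndrome - 1)                      # single error: flip it back
    d = [(word >> i) & 1 for i in (2, 4, 5, 6)]
    return d[0] | (d[1] << 1) | (d[2] << 2) | (d[3] << 3)

if __name__ == "__main__":
    word = secded_encode(0b1011)
    assert secded_decode(word ^ (1 << 5)) == 0b1011      # one flipped bit is corrected
    assert secded_decode(word ^ 0b11) is None            # two flipped bits are only detected
```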

Visit the Data Center Solutions page on Microchip’s website to learn more about the company’s full portfolio of data center hardware, software and development tools.

Development Tools

The Flashtec NVMe 5016 PCIe Gen 5 SSD controller is supported by an ecosystem of tools including the PM35160-KIT and PMT35161-KIT evaluation boards in various NAND options, a Software Development Kit (SDK) with PCIe-compliant front-end firmware, Microchip’s ChipLink tool for advanced debug and more.

Pricing and Availability

Microchip Flashtec NVMe 5016 controllers are available for sampling to qualified customers. Contact your Microchip salesperson for details or visit https://www.microchip.com/en-us/about/global-sales-and-distribution to find a listing of sales offices and locations near you.

The post Microchip Introduces High-Performance PCIe Gen 5 SSD Controller Family appeared first on ELE Times.
