Microelectronics world news

Building the Better SSD

ELE Times - Fri, 01/12/2024 - 11:38

Courtesy: Samsung

As the demand for both SSD memory capacity and operating speed continues to increase at a breakneck pace, so does the need to improve data storage efficiency, reduce garbage collection, and handle errors more proactively.

For a big-picture analogy, let’s compare the problems of SSD data management to the challenges of grain delivery from silo to transport to warehouse. We’ll treat bags of grain to be delivered as the bulk data to be stored on an SSD. NVMe SSD technologies allow the shipper (the data center host) to specify:

  • A way for multiple grain shippers to tag their bags so that a single transport channel can carry all without mixing them up (SR-IOV, ZNS)
  • The best place in the warehouse to store each bag of grain with other like-grains stored (Flexible Data Placement – FDP) to minimize the number of bags to reorganize (Garbage Collection – GC)
  • The number of resources applied to high-priority shipments vs low-priority ones (Performance Control).

Now let’s consider the associated problem of pest control. In ages past, the world beat a path to the door of those who built a better mousetrap. In the SSD world, that task is akin to error management.

  • Improve the trap mechanism to maximize mice caught (CECC/UECC)
  • Monitor the trap to check the number of mice caught, whether the trap is full, and whether one trap is not working as well as others (SMART/Health)
  • Track and report the most mouse-related activity possible (Telemetry)
  • Use the activity data to foresee a major pest infestation before it happens (Failure Prediction)

And then there are cross-functional issues, such as…

  • Recovering grain bags to a new storage area when the original area has been overrun (data recovery and new drive migration)

Samsung is building a better mousetrap by leading the technology world in SSD engineering.

The Samsung annual Memory Tech Day event offered several breakout sessions that uncovered our latest storage technologies. Here are the key takeaways from the computing memory solutions track.

Jung Seungjin, VP of Solution Product Engineering team, discusses SSD Telemetry.

Consider a brief history of telemetry: the concept of collecting operational data and transmitting it to a remote location for interpretation has been around for well over a century. Various forms of error logging and retrieval have been included since the beginning of modern hard drive technologies. Basic SSD-specific telemetry commands and delivery formats became standard starting with NVMe 1.3.

In more recent times, Samsung has been using its position as the leader in SSD technology to drive sophisticated and necessary telemetry additions to the spec. The benefits of Samsung’s cutting-edge research become immediately obvious. Consider, for example, Samsung Telemetry Service, an advanced tool helping enterprise customers remotely analyze and manage their devices. It guarantees the stability of data – allowing data center operators to prevent future drive failures, manage drive replacement, and migrate data.

“Through monitoring, we realized that multi-address CECC can become a UECC that can cause problems in the system in the future.”

The Telemetry presentation focuses on telemetry background, the latest improvements that Samsung is driving to add to the specification, and examples of the value they add to enable detection of drive failure. Of key interest is Samsung’s advanced machine learning-based anomaly prediction research.
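
Samsung’s actual prediction models are not described in the talk, so the following is only a rough sketch of the underlying idea: watch a drive’s correctable-error trend and flag statistical outliers before they turn into uncorrectable errors. The counters, window size, and threshold below are all hypothetical.

```python
# Illustrative only: flag drives whose correctable-error (CECC) trend looks
# anomalous. Real failure prediction (such as Samsung's) uses trained ML models
# over many telemetry fields; the window and threshold here are made up.
from statistics import mean, stdev

def anomalous_cecc_trend(cecc_counts, window=8, z_threshold=4.0):
    """Return True if the latest CECC delta is far outside the recent trend."""
    deltas = [b - a for a, b in zip(cecc_counts, cecc_counts[1:])]
    if len(deltas) <= window:
        return False                      # not enough history yet
    history, latest = deltas[-window - 1:-1], deltas[-1]
    mu, sigma = mean(history), stdev(history)
    return latest > mu + z_threshold * max(sigma, 1.0)

# Example: a drive whose error rate suddenly accelerates gets flagged.
samples = [10, 12, 13, 15, 16, 18, 19, 21, 22, 90]   # hypothetical CECC counters
print(anomalous_cecc_trend(samples))                  # True
```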

Silwan Chang, VP of Software Development team, talks about Flexible Data Placement (FDP) and the ease of its implementation to dramatically reduce Write Amplification Factor (WAF). The discussion includes a comparative analysis of various Data Placement technologies including ZNS, showcasing a use case for Samsung’s FDP technology.

The underlying limitation of NAND is that data in a NAND cell cannot be overwritten – a NAND block must be erased before new data is written to it. Data placement technology mitigates the cost of this limitation: ideal data placement can increase the performance and endurance of modern SSDs without additional H/W cost.

The host influences data placement through the Reclaim Unit (RU) handled by the SSD; knowing the most efficient size and boundaries of this basic SSD storage unit, the host can group data of similar life cycles to reduce or eliminate SSD garbage collection inefficiencies.
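
The mechanics of how a host exploits this are not spelled out above, so the following is only a conceptual sketch, assuming a hypothetical write path where each request carries a lifetime hint: data expected to die together is steered to the same placement handle so whole reclaim units can be erased without copying.

```python
# Conceptual sketch of the host-side idea behind FDP: group writes with similar
# expected lifetimes so each reclaim unit fills with data that dies together,
# minimizing garbage collection. The classes and handle numbers are invented;
# a real host would pass the chosen placement handle in the NVMe write command.
LIFETIME_CLASS_TO_HANDLE = {
    "temp":     0,   # scratch files, likely deleted within minutes
    "log":      1,   # append-only logs, deleted after rotation
    "database": 2,   # long-lived, randomly updated pages
    "cold":     3,   # archival data, rarely touched
}

def placement_handle_for(write_request):
    """Pick a placement handle from a hint attached to the write request."""
    hint = write_request.get("lifetime_hint", "cold")
    return LIFETIME_CLASS_TO_HANDLE.get(hint, LIFETIME_CLASS_TO_HANDLE["cold"])

print(placement_handle_for({"lba": 4096, "lifetime_hint": "log"}))  # 1
```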

“The best thing about an FDP SSD is that this is possible with a very small change of the system SW.”

Following up, Ross Stenfort of Meta presents Hyperscale FDP Perspectives where he shows the progression of improvements to reduce WAF:

  • Overprovisioning – allocating extra blocks to use for garbage collection
  • Trim/Deallocate host commands – telling the SSD what can safely be deleted
  • FDP – telling the SSD how to group data in order to minimize future garbage collection (see the short WAF calculation after this list).
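
For reference, WAF is conventionally computed as total NAND writes divided by host writes. The sketch below uses invented counter values purely to show the direction each technique pushes the ratio.

```python
# Write Amplification Factor (WAF) is conventionally defined as the total bytes
# the SSD writes to NAND divided by the bytes the host asked it to write.
# The sample numbers below are invented, just to show the trend.
def waf(nand_bytes_written, host_bytes_written):
    return nand_bytes_written / host_bytes_written

host = 100e12                                                     # 100 TB written by the host
print(waf(nand_bytes_written=320e12, host_bytes_written=host))    # 3.2   (heavy garbage collection)
print(waf(nand_bytes_written=140e12, host_bytes_written=host))    # 1.4   (trim/deallocate helps)
print(waf(nand_bytes_written=104e12, host_bytes_written=host))    # ~1.04 (FDP-style placement)
```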

The presentation includes a compelling workload example without and with FDP, noting that:

“Applications are not required to understand FDP to get benefits.”

In his next session, Silwan Chang continues with a discussion about the present and future of Samsung SSD virtualization technology using SR-IOV.

Efficiency has become a central focus for increasing datacenter processing capacity. With the number of datacenter CPU cores typically exceeding 100, the number of tenants (separate instances / applications) utilizing a single SSD has surged.

Virtualization provides each tenant its own private window into SSD storage space. The PCIe SR-IOV specification provided the basics for setting up a virtualized environment. With its research giving it an early lead, Samsung now has nearly a decade of experience with SR-IOV – and has identified and developed solutions for underlying security and performance issues:

  • Data Isolation – keeping data from one tenant secure from access by others, evolving from logical sharing to physically isolated partitioning
  • Performance Isolation – preventing activity by one tenant from adversely affecting performance of other tenants
  • Security Enhancement – encryption evolving from Virtual Function level to link level
  • Live Migration – moving data from one SSD to another while keeping both in active service to the datacenter host.

“To realize completely isolated storage spaces in a single SSD, we need to evolve into physical partitioning where NAND chips and even controller resources are dedicated to a namespace.”
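
On a Linux host, SR-IOV virtual functions are normally provisioned through the standard PCI sysfs interface before tenants can be mapped onto them. A minimal sketch follows; the PCI address is a placeholder and the number of VFs a drive supports is device-specific.

```python
# Rough sketch of enabling SR-IOV virtual functions (VFs) for an NVMe SSD on a
# Linux host via the standard PCI sysfs interface. The PCI address below is a
# placeholder; run as root, and check sriov_totalvfs for the device's limit.
from pathlib import Path

PF_ADDR = "0000:3b:00.0"                       # hypothetical physical function
pf = Path("/sys/bus/pci/devices") / PF_ADDR

total_vfs = int((pf / "sriov_totalvfs").read_text())
wanted = min(4, total_vfs)                     # e.g. one VF per tenant VM
(pf / "sriov_numvfs").write_text(str(wanted))  # write 0 first if changing an existing nonzero value
print(f"enabled {wanted} of {total_vfs} virtual functions on {PF_ADDR}")
```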

Sunghoon Chun, VP of Solution Development team, talks about Samsung’s ongoing development of new solutions tailored to meet the challenges of rapidly evolving PCIe interface speeds and the trend towards high-capacity products.

The key focus is higher speeds at lower active power – goals that tend to pull in opposite directions.

Samsung targets lower active-power in two main ways:

  • Designing lower power components by adding power rails to boost the efficiency of the voltage regulator
  • Introducing power-saving features to optimize the interaction between components, such as by modifying firmware to favor lower-power SRAM utilization over DRAM.

The higher speed target brings with it higher temperatures, which Samsung addresses with:

  • Form factor conversion to accommodate higher thermal dissipation for power demands going from 25W to 40W
  • Use of more effective and novel case construction materials and design techniques
  • Thermal management solutions using immersion cooling that yield strong experimental gains.

“The goal is to continue efforts to create a perfect SSD, optimized for use in immersion cooling systems over the next few years in line with the trend of the times.”

In summary, this presentation track reveals the Samsung SSD strategy for customer success.

  • Dramatically reduce WAF by taking advantage of Samsung’s advanced Flexible Data Placement technology
  • Vastly increase virtualization efficiency using Samsung’s performance regulation and space partitioning technology to maximize the processing capacity for each core of the multi-core datacenter CPU
  • Achieve significantly higher operating speeds while both reducing power and increasing heat dissipation by using Samsung’s novel design and packaging techniques
  • Remotely analyze and manage devices to virtually eliminate data loss and its crippling downtime through the innovative Samsung Telemetry Service.

Can Your Vision AI Solution Keep Up with Cortex-M85?

ELE Times - Fri, 01/12/2024 - 11:24

Kavita Char | Principal Product Marketing Manager | Renesas

Vision AI – or computer vision – refers to technology that allows systems to sense and interpret visual data and make autonomous decisions based on an analysis of this data. These systems typically have camera sensors for acquisition of visual data that is provided as input activation to a neural network trained on large image datasets to recognize images. Vision AI can enable many applications like industrial machine vision for fault detection, autonomous vehicles, face recognition in security applications, image classification, object detection and tracking, medical imaging, traffic management, road condition monitoring, customer heatmap generation and so many others.

In my previous blog, Power Your Edge AI Application with the Industry’s Most Powerful Arm MCUs, I discussed some of the key performance advantages of the powerful RA8 Series MCUs with the Cortex-M85 core and Helium that make them ideally suited for voice and vision AI applications. As discussed there, the availability of higher-performance MCUs, along with lightweight neural network models better suited to the resource-constrained MCUs used in endpoint devices, is enabling these kinds of edge AI applications.

In this blog, I will discuss a vision AI application built on the new RA8D1 graphics-enabled MCUs featuring the same Cortex-M85 core and use of Helium to accelerate the neural network. RA8D1 MCUs provide a unique combination of advanced graphics capabilities, sensor interfaces, large memory and the powerful Cortex-M85 core with Helium for acceleration of the vision AI neural networks, making them ideally suited for these vision AI applications.

Graphics and Vision AI Applications with RA8D1 MCUs

Renesas has successfully demonstrated the performance uplift delivered by Helium in various AI/ML use cases, showing significant improvement over a Cortex-M7 MCU – more than 3.6x in some cases.

One such use case is a people detection application developed in collaboration with Plumerai, a leading provider of vision AI solutions. This camera-based AI solution has been ported and optimized for the Helium-enabled Arm Cortex-M85 core, successfully demonstrating both the performance as well as the graphics capabilities of the RA8D1 devices.

Accelerated with Helium, the application achieves a 3.6x performance uplift vs. Cortex-M7 core and 13.6 fps frame rate, a strong performance for an MCU without hardware acceleration. The demo platform captures live images from an OV7740 image-sensor-based camera at 640×480 resolution and presents detection results on an attached 800×480 LCD display. The software detects and tracks each person within the camera frame, even if partially occluded, and shows bounding boxes drawn around each detected person overlaid on the live camera display.

Figure 1: Renesas People Detection AI Demo Platform, showcased at Embedded World 2023

Plumerai people detection software uses a convolutional neural network with multiple layers, trained on over 32 million labeled images. The layers that account for the majority of the total latency – the Conv2D and fully connected layers, as well as the depthwise convolution and transpose convolution layers – are Helium accelerated.

The camera module provides images in YUV422 format, which are converted to RGB565 format for display on the LCD screen. The 2D graphics engine integrated on the RA8D1 resizes and converts the RGB565 image to ABGR8888 at a resolution of 256×192 for input to the neural network. The software then converts the ABGR8888 data to the neural network model’s input format and runs the people detection inference function. The graphics LCD controller and 2D drawing engine on the RA8D1 render the camera input to the LCD screen, draw bounding boxes around detected people, and present the frame rate. The people detection software uses roughly 1.2MB of flash and 320KB of SRAM, including the memory for the 256×192 ABGR8888 input image.
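
A quick back-of-envelope check, assuming the standard 2 bytes per pixel for YUV422 and RGB565 and 4 bytes per pixel for ABGR8888, shows how the quoted memory figures fit together:

```python
# Back-of-envelope check on the buffer sizes implied by the pipeline described
# above (bytes per pixel: YUV422 = 2, RGB565 = 2, ABGR8888 = 4).
def buffer_bytes(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel

camera_yuv422  = buffer_bytes(640, 480, 2)   # 614,400 bytes from the OV7740
lcd_rgb565     = buffer_bytes(800, 480, 2)   # 768,000 bytes for the display
model_abgr8888 = buffer_bytes(256, 192, 4)   # 196,608 bytes (~192 KB) model input

print(camera_yuv422, lcd_rgb565, model_abgr8888)
# The ~192 KB network input buffer appears to be the largest single item inside
# the quoted 320 KB SRAM figure for the people detection software.
```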

Figure 2: People Detection AI application on the RA8D1 MCU

Benchmarking compared the latency of Plumerai’s people detection solution with that of the same neural network running on TFMicro using Arm’s CMSIS-NN kernels. Additionally, for the Cortex-M85, the performance of both solutions with Helium (MVE) disabled was also benchmarked. This benchmark data shows pure inference performance and does not include latency for the graphics functions, such as image format conversions.

Figure 3: The Renesas people detection demo based on the RA8D1 demonstrates a performance uplift of 3.6x over the Cortex-M7 core
Figure 4: Inference performance of 13.6 fps @ 480 MHz using RA8D1 with Helium enabled

This application makes optimal use of all the resources available on the RA8D1:

  • High-performance 480 MHz processor
  • Helium for neural network acceleration
  • Large flash and SRAM for storage of model weights and input activations
  • Camera interface for capture of input images/video
  • Display interface to show the people detection results

Renesas has also demonstrated multi-modal voice and vision AI solutions based on the RA8D1 devices that integrate visual wake words and face detection and recognition with speaker identification. RA8D1 MCUs with Helium can significantly improve neural network performance without the need for any additional hardware acceleration, thus providing a low-cost, low-power option for implementing AI and machine learning use cases.

Getting Started with Large Language Models for Enterprise Solutions

ELE Times - Fri, 01/12/2024 - 11:06

ERIK POUNDS | Nvidia

Large language models (LLMs) are deep learning models with hundreds of billions of parameters, trained on Internet-scale datasets. LLMs can read, write, code, draw, and augment human creativity to improve productivity across industries and solve the world’s toughest problems.

LLMs are used in a wide range of industries, from retail to healthcare, and for a wide range of tasks. They learn the language of protein sequences to generate new, viable compounds that can help scientists develop groundbreaking, life-saving vaccines. They help software programmers generate code and fix bugs based on natural language descriptions. And they provide productivity co-pilots so humans can do what they do best—create, question, and understand.

Effectively leveraging LLMs in enterprise applications and workflows requires understanding key topics such as model selection, customization, optimization, and deployment. This post explores the following enterprise LLM topics:

  • How organizations are using LLMs
  • Use, customize, or build an LLM?
  • Begin with foundation models
  • Build a custom language model
  • Connect an LLM to external data
  • Keep LLMs secure and on track
  • Optimize LLM inference in production
  • Get started using LLMs

Whether you are a data scientist looking to build custom models or a chief data officer exploring the potential of LLMs for your organization, read on for valuable insights and guidance.

How organizations are using LLMs

Figure 1. LLMs are used to generate content, summarize, translate, classify, answer questions, and much more

LLMs are used in a wide variety of applications across industries to efficiently recognize, summarize, translate, predict, and generate text and other forms of content based on knowledge gained from massive datasets. For example, companies are leveraging LLMs to develop chatbot-like interfaces that can support users with customer inquiries, provide personalized recommendations, and assist with internal knowledge management.

LLMs also have the potential to broaden the reach of AI across industries and enterprises and enable a new wave of research, creativity, and productivity. They can help generate complex solutions to challenging problems in fields such as healthcare and chemistry. LLMs are also used to create reimagined search engines, tutoring chatbots, composition tools, marketing materials, and more.

Collaboration between ServiceNow and NVIDIA will help drive new levels of automation to fuel productivity and maximize business impact. Generative AI use cases being explored include developing intelligent virtual assistants and agents to help answer user questions and resolve support requests, as well as using generative AI for automatic issue resolution, knowledge-base article generation, and chat summarization.

A consortium in Sweden is developing a state-of-the-art language model with NVIDIA NeMo Megatron and will make it available to any user in the Nordic region. The team aims to train an LLM with a whopping 175 billion parameters that can handle all sorts of language tasks in the Nordic languages of Swedish, Danish, Norwegian, and potentially Icelandic.

The project is seen as a strategic asset, a keystone of digital sovereignty in a world that speaks thousands of languages across nearly 200 countries. To learn more, see The King’s Swedish: AI Rewrites the Book in Scandinavia.

The leading mobile operator in South Korea, KT, has developed a billion-parameter LLM using the NVIDIA DGX SuperPOD platform and NVIDIA NeMo framework. NeMo is an end-to-end, cloud-native enterprise framework that provides prebuilt components for building, training, and running custom LLMs.

KT’s LLM has been used to improve the understanding of the company’s AI-powered speaker, GiGA Genie, which can control TVs, offer real-time traffic updates, and complete other home-assistance tasks based on voice commands.

Use, customize, or build an LLM?

Organizations can choose to use an existing LLM, customize a pretrained LLM, or build a custom LLM from scratch. Using an existing LLM provides a quick and cost-effective solution, while customizing a pretrained LLM enables organizations to tune the model for specific tasks and embed proprietary knowledge. Building an LLM from scratch offers the most flexibility but requires significant expertise and resources.

NeMo offers a choice of several customization techniques and is optimized for at-scale inference of models for language and image applications, with multi-GPU and multi-node configurations. For more details, see Unlocking the Power of Enterprise-Ready LLMs with NVIDIA NeMo.

NeMo makes generative AI model development easy, cost-effective, and fast for enterprises. It is available across all major clouds, including Google Cloud as part of their A3 instances powered by NVIDIA H100 Tensor Core GPUs to build, customize, and deploy LLMs at scale. To learn more, see Streamline Generative AI Development with NVIDIA NeMo on GPU-Accelerated Google Cloud.

To quickly try generative AI models such as Llama 2 directly from your browser with an easy-to-use interface, visit NVIDIA AI Playground.

Begin with foundation models

Foundation models are large AI models trained on enormous quantities of unlabeled data through self-supervised learning. Examples include Llama 2, GPT-3, and Stable Diffusion.

The models can handle a wide variety of tasks, such as image classification, natural language processing, and question-answering, with remarkable accuracy.

These foundation models are the starting point for building more specialized and sophisticated custom models. Organizations can customize foundation models using domain-specific labeled data to create more accurate and context-aware models for specific use cases.

Foundation models can generate an enormous number of unique responses from a single prompt because they produce a probability distribution over all items that could follow the input and then choose the next output randomly from that distribution. The randomization is amplified by the model’s use of context: each time the model generates a probability distribution, it considers the last generated item, which means each prediction influences every prediction that follows.
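
As a toy illustration of that loop – a distribution over candidate tokens, a random draw, and the draw fed back as context – here is a minimal sketch with a fake scoring function standing in for a real model:

```python
# Toy illustration of the sampling loop described above: the model produces a
# probability distribution over possible next tokens, one token is drawn at
# random, and the choice is fed back in as context for the next step.
# The "model" here is a fake scoring function over a tiny vocabulary.
import numpy as np

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def fake_logits(context):
    """Stand-in for a real LLM forward pass; scores depend on the context."""
    rng = np.random.default_rng(abs(hash(tuple(context))) % (2**32))
    return rng.normal(size=len(VOCAB))

def generate(prompt, steps=5, temperature=1.0):
    tokens = list(prompt)
    for _ in range(steps):
        logits = fake_logits(tokens) / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()                      # softmax -> distribution
        tokens.append(np.random.choice(VOCAB, p=probs))
    return " ".join(tokens)

print(generate(["the"]))   # different runs give different continuations
```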

NeMo supports NVIDIA-trained foundation models as well as community models such as Llama 2, Falcon LLM, and MPT. You can experience a variety of optimized community and NVIDIA-built foundation models directly from your browser for free on NVIDIA AI Playground. You can then customize the foundation model using your proprietary enterprise data. This results in a model that is an expert in your business and domain.

Build a custom language model

Enterprises will often need custom models to tailor language processing capabilities to their specific use cases and domain knowledge. Custom LLMs enable a business to generate and understand text more efficiently and accurately within a certain industry or organizational context. They empower enterprises to create personalized solutions that align with their brand voice, optimize workflows, provide more precise insights, and deliver enhanced user experiences, ultimately driving a competitive edge in the market.

NVIDIA NeMo is a powerful framework that provides components for building and training custom LLMs on-premises, across all leading cloud service providers, or in NVIDIA DGX Cloud. It includes a suite of customization techniques from prompt learning to parameter-efficient fine-tuning, to reinforcement learning through human feedback (RLHF). NVIDIA also released a new, open customization technique called SteerLM that allows for tuning during inference.

When training an LLM, there is always the risk of it becoming “garbage in, garbage out.” A large percentage of the effort is acquiring and curating the data that will be used to train or customize the LLM.

NeMo Data Curator is a scalable data-curation tool that enables you to curate trillion-token multilingual datasets for pretraining LLMs. The tool allows you to preprocess and deduplicate datasets with exact or fuzzy deduplication, so you can ensure that models are trained on unique documents, potentially leading to greatly reduced training costs.
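
NeMo Data Curator’s internals aren’t shown here, but the exact-deduplication idea it implements can be sketched in a few lines: hash a normalized form of each document and keep only the first occurrence. Fuzzy deduplication (MinHash-style matching of near-duplicates) is considerably more involved and is omitted.

```python
# Minimal sketch of exact deduplication, the simpler of the two approaches
# mentioned above: hash each normalized document and keep only the first copy.
import hashlib

def dedupe_exact(documents):
    seen, unique = set(), []
    for doc in documents:
        digest = hashlib.sha256(" ".join(doc.split()).lower().encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

docs = ["Hello   world", "hello world", "Something else entirely"]
print(dedupe_exact(docs))   # the second document is dropped as a duplicate
```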

Connect an LLM to external data

Connecting an LLM to external enterprise data sources enhances its capabilities. This enables the LLM to perform more complex tasks and leverage data that has been created since it was last trained.

Retrieval Augmented Generation (RAG) is an architecture that provides an LLM with the ability to use current, curated, domain-specific data sources that are easy to add, delete, and update. With RAG, external data sources are processed into vectors (using an embedding model) and placed into a vector database for fast retrieval at inference time.
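
A skeletal version of that flow, with a placeholder embedding function and an in-memory list standing in for a real embedding model and vector database, looks roughly like this:

```python
# Skeleton of the RAG flow described above. `embed` is a placeholder for a real
# embedding model and the "vector database" is just an in-memory list; in
# production these would be an actual embedding service and a vector DB.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: hash words into a small fixed-size vector."""
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

documents = [
    "NVIDIA NeMo is a framework for building custom generative AI models.",
    "RAG retrieves relevant documents and adds them to the prompt at inference time.",
]
index = [(doc, embed(doc)) for doc in documents]        # "ingest" step

def retrieve(query: str, k: int = 1):
    q = embed(query)
    scored = sorted(index, key=lambda item: float(q @ item[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

question = "How does retrieval augmented generation work?"
context = "\n".join(retrieve(question))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)   # this assembled prompt would then be sent to the LLM
```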

In addition to reducing computational and financial costs, RAG increases accuracy and enables more reliable and trustworthy AI-powered applications. Accelerating vector search is one of the hottest topics in the AI landscape due to its applications in LLMs and generative AI.

Keep LLMs on track and secure

To ensure an LLM’s behavior aligns with desired outcomes, it’s important to establish guidelines, monitor its performance, and customize as needed. This involves defining ethical boundaries, addressing biases in training data, and regularly evaluating the model’s outputs against predefined metrics, often in concert with a guardrails capability. For more information, see NVIDIA Enables Trustworthy, Safe, and Secure Large Language Model Conversational Systems.

To address this need, NVIDIA has developed NeMo Guardrails, an open-source toolkit that helps developers ensure their generative AI applications are accurate, appropriate, and safe. It provides a framework that works with all LLMs, including OpenAI’s ChatGPT, to make it easier for developers to build safe and trustworthy LLM conversational systems that leverage foundation models.

Keeping LLMs secure is of paramount importance for generative AI-powered applications. NVIDIA has also introduced accelerated Confidential Computing, a groundbreaking security feature that mitigates threats while providing access to the unprecedented acceleration of NVIDIA H100 Tensor Core GPUs for AI workloads. This feature ensures that sensitive data remains secure and protected, even during processing.

Optimize LLM inference in production

Optimizing LLM inference involves techniques such as model quantization, hardware acceleration, and efficient deployment strategies. Model quantization reduces the memory footprint of the model, while hardware acceleration leverages specialized hardware like GPUs for faster inference. Efficient deployment strategies ensure scalability and reliability in production environments.
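
As a generic illustration of why quantization shrinks the footprint (TensorRT-LLM’s FP8 path is hardware-specific and not reproduced here), symmetric int8 weight quantization stores 1-byte integers plus a scale factor instead of 4-byte floats:

```python
# Illustration of why quantization shrinks the memory footprint: symmetric
# int8 quantization stores one scale per tensor plus 1-byte weights instead of
# 4-byte float32 weights. This shows only the generic idea, not a production path.
import numpy as np

def quantize_int8(weights: np.ndarray):
    scale = np.abs(weights).max() / 127.0 or 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float):
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)               # one toy weight matrix
q, scale = quantize_int8(w)
print(w.nbytes / 1e6, "MB fp32 ->", q.nbytes / 1e6, "MB int8")    # ~67.1 -> ~16.8
print("max abs error:", float(np.abs(w - dequantize(q, scale)).max()))
```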

NVIDIA TensorRT-LLM is an open-source software library that supercharges LLM inference on NVIDIA accelerated computing. It enables users to convert their model weights into a new FP8 format and compile their models to take advantage of optimized FP8 kernels with NVIDIA H100 GPUs. TensorRT-LLM can accelerate inference performance by 4.6x compared to NVIDIA A100 GPUs. It provides a faster and more efficient way to run LLMs, making them more accessible and cost-effective.

These custom generative AI processes involve pulling together models, frameworks, toolkits, and more. Many of these tools are open source, requiring time and energy to maintain development projects. The process can become incredibly complex and time-consuming, especially when trying to collaborate and deploy across multiple environments and platforms.

NVIDIA AI Workbench helps simplify this process by providing a single platform for managing data, models, resources, and compute needs. This enables seamless collaboration and deployment for developers to create cost-effective, scalable generative AI models quickly.

NVIDIA and VMware are working together to transform the modern data center built on VMware Cloud Foundation and bring AI to every enterprise. Using the NVIDIA AI Enterprise suite and NVIDIA’s most advanced GPUs and data processing units (DPUs), VMware customers can securely run modern, accelerated workloads alongside existing enterprise applications on NVIDIA-Certified Systems.

Get started using LLMs

Getting started with LLMs requires weighing factors such as cost, effort, training data availability, and business objectives. In most circumstances, organizations should evaluate the trade-offs between using existing models, customizing them with domain-specific knowledge, and building custom models from scratch. Choosing tools and frameworks that align with specific use cases and technical requirements is important, including those listed below.

The Generative AI Knowledge Base Chatbot lab shows you how to adapt an existing AI foundation model to accurately generate responses for your specific use case. This free lab provides hands-on experience with customizing a model using prompt learning, ingesting data into a vector database, and chaining all components to create a chatbot.

NVIDIA AI Enterprise, available on all major cloud and data center platforms, is a cloud-native suite of AI and data analytics software that provides over 50 frameworks, including the NeMo framework, pretrained models, and development tools optimized for accelerated GPU infrastructures. You can try this end-to-end enterprise-ready software suite with a free 90-day trial.

NeMo is an end-to-end, cloud-native enterprise framework for developers to build, customize, and deploy generative AI models with billions of parameters. It is optimized for at-scale inference of models with multi-GPU and multi-node configurations. The framework makes generative AI model development easy, cost-effective, and fast for enterprises. Explore the NeMo tutorials to get started.

NVIDIA Training helps organizations train their workforce on the latest technology and bridge the skills gap by offering comprehensive technical hands-on workshops and courses. The LLM learning path developed by NVIDIA subject matter experts spans fundamental to advanced topics that are relevant to software engineering and IT operations teams. NVIDIA Training Advisors are available to help develop customized training plans and offer team pricing.

Summary

As enterprises race to keep pace with AI advancements, identifying the best approach for adopting LLMs is essential. Foundation models help jumpstart the development process. Using key tools and environments to efficiently process and store data and customize models can significantly accelerate productivity and advance business goals.

2024 Predictions in storage, technology, and the world, part 1: the AI hype is real!

ELE Times - Fri, 01/12/2024 - 09:47

JEREMY WERNER | Micron

Over the past 100 years, driven by the introduction of ever more connective technologies enabling richer communications and lower-latency information transfer, the world has grown closer than ever.

This increased connectivity has led to fantastic benefits for many people around the world, lifting people out of poverty, increasing information availability, revolutionizing business and education, connecting people with like-minded citizens of Earth no matter where they may be, and shining spotlights on injustices around the world that we can tackle as a human species, among myriad other benefits. But there have been downsides that are often lamented as we age and look back fondly on less connected times.

Our privacy has eroded as we are now traceable and trackable — from our phone locations to our online search history. Our ability to sustain concentration for tasks that require significant time and effort has diminished due to the nature of our always-on, always-reachable connectivity. Also, some of the worst human traits are brought forth through the power of social media and of often-misleading information that is difficult or impossible to discern as fact or fiction, leading to hate, jealousy, greed, gluttony and self-loathing.

These technologies have remade the world and the world’s economy through the introduction of new capabilities including mass production in a global interconnected supply chain, which is driving productivity gains. Now that the information revolution has transformed the world, we sit on the cusp of another great revolution as we enter the Age of Intelligence1, undoubtedly greater than any we’ve seen in the history of humankind – built on the shoulders of the giant leaps that humans, as the world’s ultimate social and ingenious beings, have taken in the past.

Now, on to my first prediction.

Prediction 1: The AI hype is REAL and will change the world forever

Like all technologies as they first take off, questions abound about whether they are real or hype. Many technologies are hyped, only to flounder for years before becoming mainstream; others catch the momentum and take off, never looking in the rearview mirror; and some fade into the annals of history, a distant memory in nostalgia, the ever-common one-hit wonder.

Gartner writes about this in its famous Hype Cycle – and I think it’s a good way to look at where new technologies stand. One of the most common questions I get is, “Is the AI boom hype?” and my answer is, “Unequivocally not hype!” Now, it’s possible that the fine people (or algorithmic trading supercomputers these days) on Wall Street will fade the trade of AI companies as growth inevitably tames. But the impact that AI will have on our lives, on the future of the data center and personal devices, on the future of memory and storage technology, and on the growth rate of IT spending will be tremendous — and we are just at the very beginning of what is possible!

The introduction of ChatGPT and the other more than 100-billion-parameter large language models (LLMs) kicked off the generative AI revolution, although neural networks, deep learning and artificial intelligence (AI) have been in use for decades in fields such as image recognition and advertising recommendation engines. But something about the latest LLM AI capabilities makes them seem different than what came before – more capable, more intelligent, more thoughtful, more human? And these capabilities are advancing at an accelerating pace – especially as all the world’s largest companies race to monetize and productize the LLM-based applications that will change the world forever.

Let me provide a few examples of new near-term, medium-term, and long-term capabilities and applications that will reshape the world as we know it. In the process, they will reshape the need for faster, larger, more secure, and more capable memory, storage, networking, and compute devices, with a special focus on data creation, storage, and analytics from these new applications. Whether these technologies go mainstream today, tomorrow or in 20 years, the race to deploy them starts NOW and Micron is at the heart of all the innovation and ramping capabilities.

The basics: near-term capabilities guaranteed to explode in the next two to three years

Most of these technologies are applications that will be run in the data center and accessed remotely through a phone or PC by the consumer, or run in the backbone of business applications to speed time to market for new product development, gain insights on how companies are performing to drive improvements, uncover areas of savings and productivity gain, and bring these companies closer to their customers by enhancing their understanding of their customers’ desires and connecting them with the products that will interest them.

  • General generative AI – Want to create a new logo for your company, draw a funny picture for a friend, or express your ideas in art? Maybe write a blog or piece of marketing collateral, find or create a legal agreement template, brainstorm ideas for team building events, review the flow of your presentation and make suggestions to wow the audience – or even touch up your slides and presentation for you?

It’s all possible and it’s real, here and now, and the rollout into Office365 and Google Docs is happening, gated primarily by integrating these capabilities into applications, users learning how to use the new capabilities (that is, adoption), and the compute power on the backend supporting these new capabilities. (Note that rolling out that compute power will benefit memory and storage demand.)

  • Video chat monitoring – Need real-time language translation for cross-border meetings with team members fluent in different languages? Tired of taking meeting minutes and want an automated summary — including key points, attendees, and action items — to be saved to the location of your choice and sent out after your meeting? These are just a couple examples of the capabilities in trial or being developed already.
  • Code generation – The average compensation of a software engineer in the U.S. is about $155,0002. Code generation empowers entrepreneurs and creators by giving them the ability to program without needing to know how to code. It can also transform an experienced coder or software engineer into a super engineer, enhancing their productivity by an average of 55%, according to one study.3

At Micron we’ve been deploying early prototypes of AI coding tools for our software engineers, from IT and product development to test and validation. And even early tools — not tools trained bespoke on our data specifically — are showing huge promise to drive software developer productivity. One simple example that most software programmers will appreciate: the AI software automatically generated and inserted highly accurate comments for the code we were writing. This simple task saved our engineers up to 20% of their time while enhancing the consistency, quality, and readability of our code for others assigned to or joining projects.

  • Entrepreneurship and business partners – Have a new idea but don’t know where to get started? Your favorite generative AI assistant has your back. Tell ChatGPT or other generative AI tools you want to start a business together and it’s your new business partner! Explain your idea and ask for a business plan, a roadmap and a step-by-step guide on how to realize your dream. You’ll be amazed at what an enthusiastic and capable business partner you’ve found. It’s not perfect but is any co-worker?

Medium-term technologies that will disrupt trillion-dollar industries in the next three to seven years

Most of these technologies require some complex problems to be solved, including government regulations for safety reasons or new physical capabilities to be developed. These dependencies will inevitably delay the introduction of what is possible as they are added into the existing physical world built for humans and their imperfections.

  • Autonomous driving – Remember the hype this new technology got around 2021? Uber and Lyft stock soared on the belief that their platforms would provide the robo-taxi fleet for the rapid transition into autonomous vehicles. But indeed Level 5 (fully autonomous) cars have fallen somewhat into the trough of disillusionment. The reasons for the delay are many – underestimation of the complexity and computing power required to make split second decisions, the variance of the driving, road and weather conditions, the complexity of the moral and ethical decision-making, and societal and regulatory questions such as who is liable in the event of an accident or how you prioritize saving the lives of passengers or pedestrians when no perfect decision exists. Accidents happen, right? But we will figure these issues out, and eventually most vehicles on the road will be capable of full autonomy. And this will have an enormous impact on the amount of memory and storage in a car as the average L5 vehicle in 2030 will use approximately 200 times the amount of NAND used by a typical L2+/L3 vehicle today. Multiply that by approximately 122 million4 vehicles in 2030 and you see an increase in demand for NAND in automotive applications reliant on AI of a whopping 500 exabytes! That’s over half the amount of NAND expected to be produced in 2024.
  • Healthcare – Artificial intelligence is transforming healthcare in many ways, including radiology scans and cancer detection. AI algorithms can analyze images from MRI scans to predict the presence of an IDH1 gene mutation in brain tumors or find prostate cancer when it’s present and dismiss anything that may be mistaken for cancer4. Researchers are using machine learning to build tools in the realm of cancer detection and diagnosing, potentially catching tumors or lesions that doctors could miss5. AI is also being used to help detect lung cancer tumors in computed tomography scans, with the AI deep learning tool outperforming radiologists in detecting lung cancer6. And AI will bring the best practices and procedures to patients around the world, especially in locations lacking the quantity and quality of top doctors, which is likely to massively improve outcomes.
  • Personal AI assistant – Movies and books have been written — from Awaken Online7 to Her8 — romanticizing the idea of a personal AI assistant always with you, capable of truly understanding your desires, preferences, and needs. Imagine being able to give vague instructions like find me something to eat, plan my vacation for me, create my to-do list, or help me choose an outfit today. These are all within the realm of possibility but require privacy and performance that is likely best delivered locally instead of from the cloud. The training and retraining of these models may happen on more powerful servers, but the inferencing/running of the model and your private data is likely to be resident on your phone or PC of the future. This means massive increases in local storage (NAND/SSD) and memory (DRAM) in future personal devices.
  • Video training – How about a virtual avatar of your boss, trained on their capabilities and thought processes, to review your work and provide feedback, or give advice that is close to what they would actually deliver, or a video of your favorite leader or scientist or celebrity who could come to a school and interact with the students in an authentic and thoughtful manner? Training on video and the compute power necessary to scale hyperrealistic advanced digital AI avatars are costly endeavors compared to still image or text generation, but they’re technologically viable once costs come down and investment scales into the next wave of generative models.
  • Policing and law enforcement – Artificial intelligence has the potential to transform the field of policing and law enforcement, especially in video surveillance. AI can help detect and prevent crimes, identify and track suspects, and provide evidence and insights for investigations. However, the use of AI also raises ethical and social issues, such as the balance between government monitoring and individual privacy rights, the risk of government tyranny and abuse of power, and the impact of AI on human dignity and civil liberties. Different countries have different approaches and regulations on how to use AI for video surveillance, reflecting their cultural and political values. For example, the U.S. prioritizes individual privacy and limits the use of facial recognition and other biometric technologies by law enforcement agencies. On the other hand, Britain and China allow more state surveillance and use AI to monitor public spaces, traffic, and social media for crime prevention and social control. These contrasting examples show that society must weigh the benefits and risks of AI in video surveillance and decide how to regulate and oversee its use in a democratic and transparent manner. So, while the technology exists for much of this use today, the sticky ethical questions and subsequent regulations are likely to take longer before they fully disrupt this industry.

Longer-term technologies that will create multitrillion-dollar industries in the next 10-plus years

  • Home-assistant robotics – The aging population in the United States is facing a number of challenges when it comes to eldercare. The need for caregivers will increase significantly as the population ages. However, the supply of eldercare is not keeping up with the demand. The shortage of workers in the eldercare industry is a nationwide dilemma, with millions of older adults unable to access the affordable care and services that they so desperately need. According to the Bureau of Labor Statistics, the employment of home health and personal care aides is projected to grow 22% from 2022 to 2032, much faster than the average for all occupations. About 684,600 openings for home health and personal care aides are projected each year, on average, over the decade9. Meanwhile, according to the CDC, 66% of U.S. households (86.9 million homes) own a pet, with dogs being the most popular pet in the U.S. (65.1 million U.S. households own a dog), followed by cats (46.5 million households)10. In 2022, Americans spent $5.8 billion on pet care services, including pet sitting, dog walking, grooming, and boarding.11

And over one million home burglaries occur annually in the U.S.; that’s one every 25.7 seconds!12 Home-assistant robots with AI embedded into their capabilities could help seniors or disabled people maintain their independence, protect our homes when we are out, or take care of our pets when we travel, as well as assisting in myriad other helpful ways such as cooking or cleaning. Eventually Isaac Asimov’s vision of intelligent and helpful robots is likely to become a reality.

  • Battle bots and revolutionized warfare – Artificial intelligence is likely to transform modern warfare in unprecedented ways, creating new opportunities and challenges for humanity. AI could be a means to peace, discouraging warfare by enhancing deterrence, reducing casualties, and enabling humanitarian interventions. However, AI could also be a dangerous tool in the hands of an evil dictator, increasing the scale, speed, and unpredictability of violence, lowering the threshold for conflict, and undermining human rights and accountability. AI could enable the development and deployment of new weapons and systems — such as drones, microscopic hordes, and robots — that could autonomously operate on the battlefield, with or without human supervision. These technologies could have significant implications for the ethics and laws of war, as well as the security and stability of the world order. Therefore, it is imperative that governments around the world navigate the ethical implications of AI in warfare, cooperate to establish norms and regulations that ensure the responsible and peaceful use of AI, and (hopefully) drive our planet to peace and shared prosperity.
  • The new hire – Why work when you could get your AI robot to go to work for you? At some point in the future, we have the opportunity for more leisure time and socialization as the mundane tasks in life can be managed by superintelligent robots – as individuals or hive beings. How will society choose to share this wealth among its citizens? Will we allow only a few who invent the technology to benefit or will all humankind have their quality of life lifted? What will we do with all the time we find on our hands, and what does it mean for the values that many of us hold in high esteem like working hard and learning about new things if we won’t have as broad an opportunity to apply them? Lots of questions with many ethical and societal challenges that must be worked out and reimagined from how the world operates today. We may be worried about AI taking our jobs, but maybe we can move to a three- or four-day workweek and spend more time enjoying the fruits of our labor through the help of our trusty AI assistants!

BoardSurfers: Reusing AWR Microwave Office RF Blocks in Allegro PCB Designs

ELE Times - Fri, 01/12/2024 - 08:59

While RF circuits might appear complex at first glance, with the right tools, you can incorporate RF designs into your PCB projects effortlessly and confidently.

This blog post will delve deep into the AWR Microwave Office to Allegro RF Design flow. The foundation of this design flow is Cadence Unified Library, which is used to exchange data between AWR Microwave Office and Allegro PCB Design applications. Cadence Unified Library contains all the necessary information to design an RF schematic and a layout in AWR Microwave Office, including PCB technology, manufacturable components, and vias. AWR Process Design Kit (PDK) is generated from Cadence Unified Library and used by AWR Microwave Office to capture the RF schematic and the layout. The RF design is exported as a single container (.asc) file from AWR Microwave Office.

Let’s go through the design flow tasks to bring an RF design created in AWR Microwave Office into Allegro System Capture and Allegro PCB Editor and reuse these RF designs.

Importing RF Design into Allegro System Capture

To import an RF design into Allegro System Capture, do the following:

  • Choose File – Import – MWO RF Design.
  • In the file browser that opens, browse to the location of the .asc file exported from AWR Microwave Office.

The RF design is imported as a block that can be used to create schematic blocks in Allegro System Capture.

To mark the block as a reuse block, do the following:

  • Select the RF block, right-click, and choose Export to Reuse Layout.
  • In the Options form, set the input layout field to the path of the board file used for generating Cadence Unified Library.

After the export process is completed, a new board file is generated with connectivity information.

Importing RF Design into Allegro PCB Editor

To import the RF design into the layout design, perform the following steps in Allegro PCB Editor:

  • Open the board created in the previous step.
  • Choose File – Import – Cadence Unified Library.

After the import process is completed, the RF layout is placed in the design canvas.

Creating RF Design Module

Saving the RF layout as a module helps you create multiple PCB designs with the same RF logic. To create a module in Allegro PCB Editor, do the following:

  • Choose the Tools – Create Module menu command.

  • Select the entire RF layout intended for inclusion in the module by drawing a rectangular boundary, then click anywhere on the design canvas.
  • Specify a name in the Save As file browser and click Save to save the module (.mdd) file.

Reusing RF Blocks in Existing PCB Designs

If marked for physical and logical reuse, the RF block can be instantiated as a reused RF block in an existing schematic design. When this schematic design is transferred to Allegro PCB Editor, the RF modules can be reused in a larger PCB. To instantiate the RF blocks in an existing schematic project, perform the following:

  • Right-click the RF block name in Allegro System Capture and choose Place as Schematic Block.

The packaging options appear when you place the block.

  • Select the Physical Reuse Block check box. This step is essential to link the schematic to the reuse RF module.
  • Repeat the above steps to place multiple instances of the RF Block.
  • Complete the schematic design.
  • Use the Export to Layout option to complete the packaging process.

Conclusion

The tightly integrated AWR Microwave Office-Allegro PCB solution is a step ahead of traditional flows in ensuring first-time-right verification and manufacturing of RF modules in the context of a real PCB. The key value lies in a shift-left approach where the RF section is designed using real manufacturing parts and PCB technology, thereby eliminating the recapture and verification of the RF block in the later stages of the PCB design process.

Advanced motor control systems improve motor control performance

ELE Times - Fri, 01/12/2024 - 08:46

Courtesy: Arrow Electronics

Electric motors are widely used in various industrial, automotive, and commercial applications. Motors are controlled by drivers, which regulate their torque, speed, and position by altering the input power. High-performance motor drivers can enhance efficiency and enable faster and more precise control. This article introduces modern motor control system architectures and various motor control solutions offered by ADI.

A modern intelligent motor control system with a multi-chip architecture

With the advancement of technology, motor control systems are evolving towards greater intelligence and efficiency. Advanced motor control systems integrate control algorithms, industrial networks, and user interfaces, thus requiring more processing power to execute all tasks in real-time. Modern motor control systems typically employ a multi-chip architecture, utilizing a Digital signal processor (DSP) for motor control algorithms, Field Programmable Gate Array (FPGA) for high-speed I/O and networking protocols, and microprocessors for handling executive control.

With the emergence of System-on-chip (SoC) devices, such as the Xilinx Zynq All Programmable SoC, which combines the flexibility of a CPU with the processing power of an FPGA, designers are finally able to consolidate motor control functions and other processing tasks within a single device. Control algorithms, networking, and other processing-intensive tasks are offloaded to the programmable logic, while supervisory control, system monitoring and diagnostics, user interfaces, and debugging are handled by the processing unit. The programmable logic can include multiple parallel working control cores to achieve multi-axis machines or multiple control systems.

In recent years, driven by modeling and simulation tools like MathWorks Simulink, model-based design has evolved into a complete design workflow, from model creation to implementation. Model-based design changes the way engineers and scientists work, shifting design tasks from the lab and the field to the desktop. Now, the entire system, including the plant and controllers, can be modeled, allowing engineers to fine-tune controller behavior before deploying it in the field. This can reduce the risk of damage, accelerate system integration, and reduce dependence on equipment availability. Once the control model is completed, the Simulink environment can automatically convert it into C and HDL code that is run by the control system, saving time and avoiding manual coding errors.

A complete development environment that enables higher motor control performance leverages the Xilinx Zynq SoC for controller implementation, MathWorks Simulink for model-based design and automatic code generation, and ADI’s Intelligent Drives Kit for rapid prototyping of drive systems.

An advanced motor control system comprehensively manages control, communication, and user interface tasks

An advanced motor control system must comprehensively handle control, communication, and user interface tasks, each of which has different processing bandwidth requirements and real-time constraints. To achieve such a control system, the chosen hardware platform must be robust and scalable to accommodate future system improvements and expansions. The Zynq All Programmable SoC, which integrates a high-performance processing system with programmable logic, offers exceptional parallel processing capabilities, real-time performance, fast computation, and flexible connectivity. This SoC includes two Xilinx analog-to-digital converters (XADC) for monitoring the system or external analog sensors.

Simulink is a block diagram environment that supports multi-domain simulation and model-based design, making it ideal for simulating systems with both control algorithms and plant models. Motor control algorithms regulate parameters such as speed and torque for precise positioning and other purposes. Evaluating control algorithms through simulation is an efficient way to determine whether a motor control design is suitable, reducing the time and cost of expensive hardware testing.
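
The controllers in this flow are built as Simulink models, but the kind of loop being simulated can be sketched in a few lines of Python: a discrete PI speed controller driving a crude first-order motor model. All gains and motor constants below are invented for illustration, not taken from the kit.

```python
# Minimal illustration of the kind of loop such a system simulates: a discrete
# PI speed controller driving a crude first-order motor model. All gains and
# motor constants are invented for the example.
def simulate(setpoint_rpm=1500.0, kp=0.02, ki=0.4, dt=0.001, steps=3000):
    speed, integral, inertia, friction, gain = 0.0, 0.0, 0.05, 0.02, 50.0
    for _ in range(steps):
        error = setpoint_rpm - speed
        integral += error * dt
        voltage = kp * error + ki * integral          # PI control law
        # first-order motor model: torque ~ voltage, minus viscous friction
        accel = (gain * voltage - friction * speed) / inertia
        speed += accel * dt
    return speed

print(round(simulate(), 1))   # settles close to the 1500 rpm setpoint
```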

Choosing the right hardware for prototyping is a significant step in the design process. The ADI Intelligent Drives Kit facilitates rapid prototyping. It supports rapid and efficient prototyping for high-performance motor control and dual-channel Gigabit Ethernet industrial networking connectivity.

The ADI Intelligent Drives Kit includes a set of Simulink controller models, the complete Xilinx Vivado framework, and the ADI Linux infrastructure, which streamline all steps needed for designing a motor control system, from simulation to prototyping, and eventual implementation in production systems.

The Linux software and HDL infrastructure provided by ADI for the Intelligent Drives Kit, together with tools from MathWorks and Xilinx, are well-suited for prototyping motor control applications. They also include production-ready components that can be integrated into the final control system, reducing the time and cost required from concept to production.

Modulators and differential amplifiers to support motor control applications

ADI offers a range of modulators, differential amplifiers, instrumentation amplifiers, and operational amplifiers solutions for motor control applications.

The AD7401 is a second-order sigma-delta (Σ-Δ) modulator that utilizes ADI’s on-chip digital isolator technology, providing a high-speed 1-bit data stream from an analog input signal. The AD7401 is powered with a 5V supply and can accept differential signals in the range of ±200 mV (±320 mV full-scale). The analog modulator continuously samples the analog input signal, eliminating the need for an external sample-and-hold circuitry. The input information is encoded in the output data stream, which can achieve a data rate of up to 20 MHz. The device features a serial I/O interface and can operate on either a 5V or 3V supply (VDD2).

The digital isolation of the serial interface is achieved by integrating high-speed CMOS technology with monolithic air core transformers, providing superior performance compared to traditional optocouplers and other isolation components. The device includes an on-chip reference voltage and is also available as the AD7400, which uses an internal clock. The AD7401 suits AC motor control and data acquisition applications, and can serve as an alternative to an ADC paired with an opto-isolator.
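
In practice, that 1-bit stream is turned back into multi-bit samples by a decimation filter, commonly a third-order sinc (sinc3) filter implemented in the FPGA fabric and clocked with the modulator. The behavioral C model below only illustrates that arithmetic; it is not ADI reference code, and the decimation ratio is an arbitrary example.

```c
/* Behavioral C model of a third-order sinc (sinc^3) decimation filter, the
 * kind of logic typically placed in programmable logic to turn a 1-bit
 * sigma-delta stream into multi-bit samples. Illustrative model only; a
 * real implementation would live in HDL. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define DECIMATION 256u   /* output rate = modulator clock / 256 (illustrative) */

typedef struct {
    int64_t acc1, acc2, acc3;   /* integrators, clocked at the modulator rate */
    int64_t dly1, dly2, dly3;   /* comb (differentiator) delays, decimated rate */
    uint32_t count;
} sinc3_t;

/* Feed one modulator bit (0 or 1); returns 1 when *out holds a new sample. */
static int sinc3_push(sinc3_t *f, int bit, int64_t *out)
{
    f->acc1 += bit ? 1 : -1;    /* treat the 1-bit stream as +1/-1 */
    f->acc2 += f->acc1;
    f->acc3 += f->acc2;

    if (++f->count < DECIMATION)
        return 0;
    f->count = 0;

    /* Three cascaded differentiators running at the decimated rate. */
    int64_t d1 = f->acc3 - f->dly1;  f->dly1 = f->acc3;
    int64_t d2 = d1 - f->dly2;       f->dly2 = d1;
    int64_t d3 = d2 - f->dly3;       f->dly3 = d2;
    *out = d3;                       /* full scale is roughly DECIMATION^3 */
    return 1;
}

int main(void)
{
    sinc3_t f = {0};
    int64_t sample;
    /* Fake a bitstream with ~75% ones, i.e. an input near half of positive full scale. */
    for (int i = 0; i < 10000; i++)
        if (sinc3_push(&f, (rand() % 4) != 0, &sample))
            printf("%lld\n", (long long)sample);
    return 0;
}
```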

The AD8207 is a single-supply differential amplifier designed for amplifying small differential voltages in the presence of large common-mode voltages. It operates on a 3.3V to 5V single supply and features an input common-mode voltage range from -4V to +65V when using a 5V supply. The AD8207 comes in an 8-lead SOIC package and is ideal for applications like solenoid valve and motor control where large input PWM common-mode voltages are common.

The AD8207 exhibits excellent DC performance with low drift. Its offset drift is typically less than 500 nV/°C, and gain drift is typically less than 10 ppm/°C. It’s well-suited for bidirectional current sensing applications and features two reference pins, V1 and V2, which allow users to easily offset the device’s output to any voltage within the supply voltage range. By connecting V1 to V+ and V2 to the GND pin, the output is set to half-scale. Grounding both reference pins provides unipolar output starting near ground voltage. Connecting both reference pins to V+ provides unipolar output starting near the V+ voltage. Applying an external low-impedance voltage to V1 and V2 allows for other output offsets.
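
As a worked example of the half-scale configuration described above, the snippet below converts the amplifier’s output voltage back into a signed shunt current. The gain and shunt values are placeholders, so treat this as a sketch and take the actual fixed gain from the AD8207 datasheet.

```c
/* Back-of-the-envelope conversion from output voltage to shunt current for
 * the half-scale (bidirectional) configuration, i.e. V1 tied to V+ and V2
 * to GND so the output idles at VCC/2. Gain and shunt values here are
 * placeholders; confirm the actual gain on the AD8207 datasheet. */
#include <stdio.h>

#define VCC     5.0     /* supply voltage, volts */
#define GAIN    20.0    /* assumed fixed gain, V/V -- verify on the datasheet */
#define R_SHUNT 0.001   /* shunt resistance, ohms (illustrative) */

/* Positive result = current in one direction, negative = the other. */
static double shunt_current(double v_out)
{
    double v_diff = (v_out - VCC / 2.0) / GAIN;  /* remove offset, undo gain */
    return v_diff / R_SHUNT;                     /* Ohm's law across the shunt */
}

int main(void)
{
    printf("%.1f A\n", shunt_current(3.5));  /* 3.5 V out -> +50 A with these values */
    return 0;
}
```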


Low-noise and low-distortion instrumentation amplifiers and operational amplifiers

The AD8251 is a digitally programmable-gain instrumentation amplifier featuring GΩ-level input impedance, low output noise, and low distortion. It is suitable for interfacing with sensors and driving high-speed analog-to-digital converters (ADCs). It offers a 10 MHz bandwidth, -110 dB total harmonic distortion (THD), and a fast maximum settling time of 785 ns to 0.001% accuracy. Guaranteed offset drift and gain drift are 1.8 µV/°C and 10 ppm/°C (G = 8), respectively.

In addition to its wide input common-mode voltage range, the device has a high common-mode rejection capability of 80 dB (G = 1, DC to 50 kHz). The combination of precision DC performance and high-speed capabilities makes the AD8251 an excellent choice for data acquisition applications. Moreover, this monolithic solution simplifies design and manufacturing and enhances the performance of test and measurement instrumentation through tightly matched internal resistors and amplifiers.

The AD8251 user interface includes a parallel port where users can set the gain in two different ways. One method is to use the WR input to latch a 2-bit word sent over the bus. The other is to use the transparent gain mode, where the gain is determined by the logic level states at the gain port.
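
The snippet below sketches what latching a gain word via the WR input might look like from a host processor. The GPIO helpers, pin names, strobe timing, and code-to-gain mapping are all assumptions made for illustration; consult the AD8251 datasheet for the real interface details.

```c
/* Illustrative sketch of driving the AD8251's parallel gain port from a host
 * MCU or the Zynq processing system. The GPIO functions are stubs standing
 * in for a board support package, and the strobe timing shown is an
 * assumption -- check the datasheet before using anything like this. */
#include <stdint.h>
#include <stdio.h>

/* Hypothetical pin identifiers and stubbed board-support functions; on real
 * hardware these would drive actual GPIO registers. */
enum { PIN_A0, PIN_A1, PIN_WR_N };

static void gpio_write(int pin, int level) { printf("pin %d -> %d\n", pin, level); }
static void delay_ns(unsigned ns) { (void)ns; /* busy-wait or timer on real HW */ }

/* Latch a 2-bit gain word using the WR input (latched gain mode). */
static void ad8251_set_gain_code(uint8_t code)
{
    gpio_write(PIN_A0, code & 0x1);
    gpio_write(PIN_A1, (code >> 1) & 0x1);
    delay_ns(20);              /* setup time before the strobe (assumed) */
    gpio_write(PIN_WR_N, 0);   /* strobe WR to latch the word */
    delay_ns(20);
    gpio_write(PIN_WR_N, 1);
}

int main(void)
{
    ad8251_set_gain_code(0x3); /* one of the four possible 2-bit codes */
    return 0;
}
```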

The AD8251 is available in a 10-lead MSOP package and is rated over the -40°C to +85°C temperature range. It is well-suited for applications with strict size and packaging density requirements, including data acquisition, biomedical analysis, and testing and measurement.

The AD8646 is a 24 MHz, rail-to-rail, dual-channel operational amplifier. Its companions, the AD8647 (dual-channel) and AD8648 (quad-channel), are rail-to-rail input and output, single-supply amplifiers offering low input offset voltage, wide signal bandwidth, and low voltage and current noise. The AD8647 also features low-power shutdown.

The AD8646 series combines a 24 MHz bandwidth with low offset, low noise, and extremely low input bias current, making these amplifiers suitable for a variety of applications. Circuits such as filters, integrators, photodiode amplifiers, and high-impedance sensor interfaces benefit from this combination of characteristics, while the wide bandwidth and low distortion also serve AC applications well. The high output drive capability of the AD8646/AD8647/AD8648 makes them good choices for audio line driving and other low-impedance loads, with the AD8646 and AD8648 also suited to automotive applications.

The AD8646 series features rail-to-rail input and output swing capabilities, enabling design engineers to buffer CMOS ADCs, DACs, ASICs, and other wide-output swing devices in single-supply systems.

The ADA4084 family of 30 V, low-noise, rail-to-rail I/O, low-power operational amplifiers comprises the single-channel ADA4084-1, dual-channel ADA4084-2, and quad-channel ADA4084-4, all rated over the -40°C to +125°C industrial temperature range. The ADA4084-1 comes in 5-lead SOT-23 and 8-lead SOIC packages; the ADA4084-2 is available in 8-lead SOIC, 8-lead MSOP, and 8-lead LFCSP packages; and the ADA4084-4 is offered in 14-lead TSSOP and 16-lead LFCSP packages.

The ADA4084-2 supports rail-to-rail input and output with low power consumption of 0.625 mA per amplifier (±15 V, typical). It offers a gain bandwidth product of 15.9 MHz (AV = 100, typical), a unity-gain crossover frequency of 9.9 MHz (typical), and a -3 dB closed-loop bandwidth of 13.9 MHz (±15 V, typical), while providing low offset voltage of 100 μV (SOIC, maximum), unity-gain stability, a high slew rate of 4.6 V/µs (typical), and low noise of 3.9 nV/√Hz (1 kHz, typical).

Conclusion

Modern motor control systems, built with FPGA-based platforms and tools from MathWorks, Xilinx, and ADI, can deliver more efficient and precise motor control solutions. By integrating MathWorks’ model-based design and code generation tools with the powerful Xilinx Zynq SoC and ADI’s isolation, power, signal conditioning, and measurement solutions, the design, validation, testing, and implementation of motor drive systems can be more efficient than ever before, improving motor control performance and shortening time to market. ADI’s Intelligent Drives Kit provides an excellent prototyping environment to expedite system evaluation and help motor control projects get started quickly. Interested customers are encouraged to learn more.

The post Advanced motor control systems improve motor control performance appeared first on ELE Times.

3 Common Challenges Stopping Your Production Line

ELE Times - Fri, 01/12/2024 - 08:23

The efficiency of production lines is crucial for any successful hardware product development. However, several common challenges can significantly derail these processes. This article examines major operational efficiency issues and explores how manual, disjointed workflows, outdated documentation, and a lack of transparent design decisions can adversely affect manufacturing. Do you face these problems, too? Let’s find out!

Modern Design: The Era of Accelerated Product Development

Before focusing on the challenges mentioned above, let’s first look at a few industry trends and how hardware products are being developed to understand the topic’s complexity better.

Firstly, you can observe an undeniable surge in the intelligence of devices. Modern hardware is not just about physical components; it’s about embedding sophisticated intelligence into every machine. This evolution demands technical prowess and a strategic approach to design and development.

Secondly, the product development timelines have sped up. Remember the 1980s, when launching a new car model took 54 to 60 months? Fast forward to the 2020s, and this timeframe has dramatically shrunk to just 18 to 22 months, sometimes even less. This acceleration is dictated by a necessity to stay competitive and calls for an agile development process where multiple workstreams progress in parallel, demanding rapid iteration and tight collaboration across various engineering disciplines and business functions. The key to success here lies in using simulation and digitization to address issues before they manifest in the physical product.

However, something prevents hardware development teams from responding to these trends, namely the data and technology gap in electronics development. Even with Product Data Management (PDM) systems or Product Lifecycle Management (PLM) tools, discrepancies persist between software and mechanical domains. While tools like Altium Designer facilitate schematic and layout capture, the rest of the process often relies on inefficient, manual methods like PDFs, emails, and paper printouts. This disjointed approach leads to outdated component libraries, misaligned software-hardware integration, and delayed manufacturers’ involvement in the process, resulting in designs that may not be production-ready.

This disconnection extends to procurement, which, at the end of a design process, often inherits incomplete parts lists and discovers that components are unavailable or unaffordable. Mechanical engineers face hours of manual file exchanges, leading to fit and enclosure issues, while engineering managers, product managers, and system architects operate with limited visibility. This fragmented approach is costly and inefficient, underscoring the urgent need for a cohesive digital infrastructure in electronics development.

3 Core Challenges Affecting Operational Efficiency

As we explore the world of manufacturing and product development, it’s crucial to address three core challenges that significantly impact operational efficiency:

  • Time
  • Quality
  • Risk

Time: The Race Against the Clock

Our current workflows often suffer from being manual and siloed. Vital information becomes trapped within individual departments, lost in fragmented toolsets and local files. Fragmentation and disjointed communication channels make it challenging to decipher design intents and manage data efficiently. It’s like trying to piece together a complex puzzle without having all the pieces in hand.

This situation often leads to inefficient handling of critical design information, such as component lead times and end-of-life notices, which are essential for timely and successful product launches. We’ve all experienced how prolonged processes can hinder new releases and negatively impact our time-to-market. Such delays mean you’re at risk of losing your competitive edge. So, how do you turn these challenges into a smooth workflow and transform the way you handle time from a potential blocker into a strategic advantage?

The answer lies in enhancing connectivity across the processes. Start by implementing cross-functional collaboration to enable a free flow of information between departments. This approach helps break down data silos, ensuring everyone works with the latest data, thereby minimizing rework and fostering iterative improvements.

Next, shift your focus to efficient component selection. By putting the right systems in place, you can manage component information effectively and be sure every part of your design is available, compliant, and optimized for specific needs.

Finally, enhance your workflow visibility and management. When you can see the entire landscape of your project, you can collaborate more effectively, make informed decisions, and manage your processes with precision.

Quality: The Cornerstone of Customer Satisfaction

Quality is the foundation of customer trust and satisfaction. Yet, despite our best efforts, defects and quality issues can slip through, jeopardizing the product and your reputation. Why does this happen? Because most of your documentation is static, it often lacks context and is siloed from the design data it supports. This can lead to misinterpretation and a reliance on outdated information, a recipe for errors that only become apparent after production, resulting in waste and rework.

A typical day in a board-mounting department reveals several issues. Determining the quality of an electric board from its image alone is challenging without additional context. To make an informed decision, you need access to design information, part lists, ordering data, datasheets, identification of designators, analysis of nets, and test results. However, this information often resides in disparate systems, necessitating time-consuming searches and interpretation. This process, known as a ‘media break,’ is evident in nearly every stage of the board mounting assembly line, yet it often goes unnoticed.

The key to overcoming this challenge lies in leveraging the background provided by your design data, transitioning to digital documentation, and automating its management. Doing so ensures that your documents are always up-to-date and offer the context for your designs. It’s not just about having the correct data; it’s about understanding it within the framework of your entire design.

You can also introduce interactive data validation and verification processes. These systems reduce your reliance on human-based checks, which, while important, are prone to error. With automated checks, you can catch potential issues before they escalate. For example, you verify a design before it enters the reflow oven rather than after a flawed product has been fully assembled. This proactive strategy ensures quality is embedded in every stage of your design and manufacturing process.

Integrating advanced technologies like augmented microscopy points to further improvements in PCB manufacturing. This leap forward promises to enhance quality control by optimizing performance, accuracy, quality, and consistency while reducing operational costs.

Risk: From Reactive to Proactive

Lastly, let’s look at compliance. The challenges we face here are multifaceted. You need to prove accountability in every aspect of your design and manufacturing, which requires a deep understanding of the impact of design changes: the ‘where’ and the ‘scope.’ Without this, you risk the integrity of your products and the trust of your clients.

A lack of transparency and predictability in your operations hinders your project management and decision-making. If the ‘why’ behind your design decisions goes undocumented, it leads to confusion and potential non-compliance, the consequences of which can be severe, ranging from penalties to, in the worst cases, businesses having to shut their doors.

The solution? Establishing a system of digital traceability. A transparent system for documenting design decisions gives you a clear record that supports your rationale and ensures adherence to standards, along with an explicit audit trail from conception to production and a clear view of how every design decision influences the final product.

Implementing automated verification can help you track your project’s progress, solidify your compliance framework, anticipate risks, and make informed decisions. This way, you transform risk management from a reactive to a proactive strategy, staying in control even in the face of uncertainties. Integrating your validation processes with compliance measures makes ‘where used’ visibility and risk management a part of the design journey, not just afterthoughts.

Lena Weglarz | Altium

The post 3 Common Challenges Stopping Your Production Line appeared first on ELE Times.

Nokia and Rohde & Schwarz Win FCC Certification for Drone Network

AAC - Fri, 01/12/2024 - 02:00
Following successful collaboration with Rohde & Schwarz to secure FCC certification, Nokia Drone Networks has launched in North America.

ATE system tests wireless BMS

EDN Network - Thu, 01/11/2024 - 21:11

Rohde & Schwarz, with technology from Analog Devices, has developed an automated test system for wireless battery management systems (wBMS). The collaboration aims to help the automotive industry adopt wBMS technology and realize its many advantages over wired battery management systems.

The ATE setup performs essential calibration of the wBMS module, as well as receiver, transmitter, and DC verification tests. It covers the entire wBMS lifecycle, from the development lab to the production line. The system comprises the R&S CMW100 radio communication tester, WMT wireless automated test software, and the ExpressTSVP universal test and measurement platform.

R&S and Analog Devices also worked together to develop a record and playback solution for RF robustness testing of the wBMS. During several test drives in various complex RF environments, the R&S FSW signal and spectrum analyzer monitored the RF spectrum and sent it to the IQW wideband I/Q data recorder. For playback of the recorded spectrum profiles, the IQW was connected to the SMW200A vector signal generator.

Analog Devices’ complete wBMS solution, currently in production across multiple EV platforms, complies with the strictest cybersecurity requirements of ISO/SAE 21434 CAL 4. In addition, its RF performance and robustness maximize battery capacity and lifetime values.

Rohde & Schwarz 

Analog Devices



The post ATE system tests wireless BMS appeared first on EDN.

UWB RF switch aids automotive connectivity

EDN Network - Thu, 01/11/2024 - 21:11

A 50-Ω SPDT RF switch from pSemi, the automotive-grade PE423211, covers bandwidths ranging from 300 MHz to 10.6 GHz. The part can be used in Bluetooth LE, ultra-wideband (UWB), ISM, and WLAN 802.11 a/b/g/n/ac/ax applications. Its suitability for BLE and UWB makes the switch particularly useful for secure car access, telematics, sensing, infotainment, in-cabin monitoring systems, and general-purpose switching.

Qualified to AEC-Q100 Grade 2 requirements, the PE423211 operates over a temperature range of -40°C to +105°C. The device combines low power, high isolation, and wide broadband frequency support in a compact 6-lead, 1.6×1.6-mm DFN package. It consumes less than 90 nA and provides ESD performance of 2000 V at HBM levels and 500 V at CDM levels.

The RF switch is manufactured on the company’s UltraCMOS process, a silicon-on-insulator technology. It also leverages HaRP technology enhancement, which reduces gate lag and insertion loss drift.

The PE423211 RF switch is sampling now, with production devices expected in late 2024. A datasheet for the switch was not available at the time of this announcement.

PE423211 product page

pSemi



The post UWB RF switch aids automotive connectivity appeared first on EDN.

Quectel unveils low-latency Wi-Fi 7 modules

EDN Network - Thu, 01/11/2024 - 21:11

The first entries in Quectel’s Wi-Fi 7 module family, the FGE576Q and FGE573Q, deliver fast data rates and low latency for real-time response. Both modules offer Wi-Fi 7 and Bluetooth 5.3 connectivity for use in a diverse range of applications, including smart homes, industrial automation, healthcare, and transportation.

The FGE576Q provides a data rate of up to 3.6 Gbps and operates on dual Wi-Fi bands simultaneously: 2.4 GHz and 5 GHz or 2.4 GHz and 6 GHz. The FGE573Q operates at a maximum data rate of 2.9 Gbps. Devices feature 4K QAM and multi-link operation (MLO), which enables routers to use multiple wireless bands and channels concurrently when connected to a Wi-Fi 7 client. With Bluetooth 5.3 integration, each module supports LE audio and a maximum data rate of 2 Mbps, as well as BLE long-range capabilities.

Housed in 16×20×1.8-mm LGA packages, the FGE576Q and FGE573Q operate over a temperature range of -20°C to +70°C. Quectel also offers Wi-Fi/Bluetooth antennas in various formats for use with these modules.

FGE576Q product page

FGE573Q product page

Quectel Wireless Solutions



The post Quectel unveils low-latency Wi-Fi 7 modules appeared first on EDN.

Wi-Fi 7 SoCs garner Wi-Fi Alliance certification

EDN Network - Thu, 01/11/2024 - 21:11

MaxLinear’s Wi-Fi 7 SoC with integrated triband access point has been certified by the Wi-Fi Alliance and selected as a Wi-Fi Certified 7 test bed device. Certification ensures that devices interoperate seamlessly and deliver the high-performance features of the Wi-Fi 7 standard.

The test bed employs the MxL31712 SoC, with the triband access point capable of operating at 2.4 GHz, 5 GHz, and 6 GHz. Well-suited for high-density environments, the access point includes the advanced features of 4K QAM, multi-link operation (MLO), multiple resource units (MRU) and puncturing, MU-MIMO, OFDMA, advanced beamforming, and power-saving enhancements.

MaxLinear’s Wi-Fi Certified 7 SoC family, comprising the triband MxL31712 and dual-band MxL31708, is based on the upcoming IEEE 802.11be standard and delivers peak throughput of 11.5 Gbps on 6-GHz (6E) spectrum. The MxL31712 accommodates up to 12 spatial streams, while the MxL31708 handles up to 8 spatial streams.

To learn more about the Wi-Fi 7 SoCs, click here.

MaxLinear



The post Wi-Fi 7 SoCs garner Wi-Fi Alliance certification appeared first on EDN.

6-DoF inertial sensor improves machine control

EDN Network - Thu, 01/11/2024 - 21:10

The SCH16T-K01 inertial sensor from Murata combines an XYZ-axis gyroscope and XYZ-axis accelerometer in a robust SOIC package. Based on the company’s capacitive 3D-MEMS process, the device achieves centimeter-level accuracy in machine dynamics and position sensing, even in harsh environments.

The SCH16T-K01 provides an angular rate measurement range of ±300°/s and an acceleration measurement range of ±8 g. A redundant digital accelerometer channel offers a dynamic range of up to ±26 g, improving resistance to saturation and vibration. Gyro bias instability is typically 0.5°/h. According to the company, the component exhibits excellent linearity and offset stability over the entire operating temperature range of -40°C to +110°C.
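
As a generic illustration of how a gyro-plus-accelerometer part like this is typically used, the sketch below fuses the two measurements into a tilt estimate with a textbook complementary filter. It is not Murata reference code, and the sample rate and blend factor are arbitrary.

```c
/* Generic complementary-filter sketch: fuse gyro rate and accelerometer
 * readings from a 6-DoF IMU into a pitch estimate. Textbook technique,
 * not vendor reference code; DT and ALPHA are illustrative. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define DT     0.001   /* sample period, s (1 kHz, illustrative) */
#define ALPHA  0.98    /* weight given to the integrated gyro estimate */

/* Update pitch estimate (degrees) from gyro rate (deg/s) and accel (g). */
static double fuse_pitch(double pitch_deg, double gyro_y_dps,
                         double acc_x_g, double acc_z_g)
{
    double pitch_acc = atan2(acc_x_g, acc_z_g) * 180.0 / M_PI; /* gravity reference */
    double pitch_gyro = pitch_deg + gyro_y_dps * DT;           /* short-term gyro */
    return ALPHA * pitch_gyro + (1.0 - ALPHA) * pitch_acc;     /* blend the two */
}

int main(void)
{
    double pitch = 0.0;
    for (int i = 0; i < 1000; i++)               /* 1 s of synthetic, static data */
        pitch = fuse_pitch(pitch, 0.0, 0.17, 0.98);
    printf("pitch estimate: %.2f deg\n", pitch); /* converges toward ~10 deg */
    return 0;
}
```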

Other features of the industrial sensor include a SafeSPI V2.0 digital interface, self-diagnostics, and options for output interpolation and decimation. Housed in a 12×14×3-mm, 24-pin SOIC plastic package, the SCH16T-K01 is suitable for lead-free soldering and SMD mounting.

SCH16T-K01 product page

Murata



The post 6-DoF inertial sensor improves machine control appeared first on EDN.

Andes Introduces RISC-V Out-of-Order Superscalar Multicore Processor

AAC - Thu, 01/11/2024 - 20:00
The new CPU features the company’s first out-of-order architecture for higher instruction throughput, better performance, and faster processing speeds.

First Solar inaugurates $700m, 3.3GW PV module manufacturing plant in India

Semiconductor today - Thu, 01/11/2024 - 18:05
Cadmium telluride (CdTe) thin-film photovoltaic (PV) module maker First Solar Inc of Tempe, AZ, USA says that its new facility in Tamil Nadu, India, the country’s first fully vertically integrated solar manufacturing plant, has been inaugurated by Dr T R B Rajaa (Minister for Industries, Promotions and Commerce of the Government of Tamil Nadu) in a ceremony attended by Eric Garcetti (the US Ambassador to India) and Scott Nathan, CEO of the US International Development Finance Corporation (DFC)...

The 2024 CES: It’s “AI everywhere”, if you hadn’t already guessed

EDN Network - Thu, 01/11/2024 - 17:23

This year’s CES officially runs from today (as I write these words), Tuesday, January 9 through Friday, January 12. So why, you might ask, am I committing my coverage to cyber-paper on Day 1, only halfway through it, in fact? That’s because CES didn’t really start just today. The true official kickoff, at least for media purposes, was Sunday evening’s CES Unveiled event, which is traditionally reminiscent of a Japanese subway car, or if you prefer, a Las Vegas Monorail.

Yesterday was Media Day, where the bulk of the press releases and other announcement paraphernalia was freed from its prior corporate captivity for public perusal.

And some companies “jumped the gun”, announcing last week or even prior to the holidays, in attempting to get ahead of the CES “noise”. So, the bulk of the news is already “in the wild”; all that’s left is for the huddled masses at the various Convention Centers and other CES-allotted facilities to peruse it as they aimlessly wander zombie-like from booth to booth in search of free tchotchkes (can you tell how sad I am to not be there in person this year? Have I mentioned the always-rancid restrooms yet? Or, speaking of which, the wastewater-suggestive COVID super-spreader potential? Or…). Plus, it enables EDN to get my writeup up on the website and in the newsletters earlier than would otherwise be the case. I’ll augment this piece with comments and/or do follow-on standalone posts if anything else notable arrives before end-of-week.

AI (nearly) everywhere

The pervasiveness of AI wasn’t a surprise to me, and likely wasn’t to you, either. Two years ago, after all, I put to prose something that I’d much earlier believed was inevitable, ever since I saw an elementary live demo of deep learning-based object recognition (accelerated by the NVIDIA GPU in his laptop) from Yann LeCun, Director of AI Research at Facebook and a professor at New York University, at the May 2014 Embedded Vision Summit.

One year later (and one year ago), I amped up my enthusiasm in discussing generative AI in its myriad implementation forms, a topic which I revisited just a few months ago. And just about a week ago, I pontificated on the exploding popularity of AI-based large language models. It takes a while for implementation ideas to turn into prototypes, not to mention for them to further transition to volume production (if they make it that far at all, that is), so this year’s CES promised to be the “fish or cut bait” moment for companies run by executives who’d previously only been able to shoehorn the “AI” catchphrase into every earnings briefing and elevator pitch.

So this week we got, among other things, AI-augmented telescopes (a pretty cool idea, actually, says this owner of a conventional Schmidt-Cassegrain scope with an 8” primary mirror). We got (I’m resisting inserting a fecal-themed adjective here, but only barely) voice-controllable bidet seats, although as I was reminded of in doing the research for this piece, the concept isn’t new, just the price point (originally ~$10,000, now “only” ~$2,000, although the concept still only makes me shudder). And speaking of fecund subjects, AI brings us “smart” cat doors that won’t allow Fluffy to enter your abode if it’s carrying a recently killed “present” in its mouth. Meow.

Snark aside, I have no doubt that AI will also sooner-or-later deliver a critical mass of tangibly beneficial products. I’ll save further discussion of the chips, IP cores, and software that fundamentally enable these breakthroughs for a later section. For now, I’ll just highlight one technology implementation that I find particularly nifty: AI-powered upscaling. Graphics chips have leveraged conventional upscaling techniques for a while now, for understandably beneficial reasons: they can harness a lower-performance polygons-to-pixels “engine” (along with employing less dedicated graphics memory) than would otherwise be needed to render a given resolution frame, then upscale the pixels before sending them to the screen. Dedicated-function upscaling devices (first) and integrated upscaling ICs in TVs (later) have done the same thing for TVs, as long-time readers may recall, again using conventional “averaging” and other approaches to create the added intermediary pixels between “real” ones.

But over the past several years, thanks to the massive, function-flexible parallelism now available in GPUs, this upscaling is increasingly now being accomplished using more intelligent deep learning-based algorithms, instead. And now, so too with TVs. This transition is, I (perhaps simplistically) believe, fundamentally being driven by necessity. TV suppliers want to sell us ever-larger displays. But regardless of how many pixels they also squeeze into each panel, the source material’s resolution isn’t increasing at the same pace…4K content is still the exception, not the norm, and especially if you sit close and/or if the display is enormous, you’re going to see the individual pixels if they’re not upscaled and otherwise robustly processed.
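
For reference, the toy example below shows what the conventional “averaging” style of upscaling amounts to: a 2x bilinear interpolation of a tiny grayscale tile. Deep learning-based upscalers replace this fixed arithmetic with a learned model; this sketch is only here to show the baseline they are competing against.

```c
/* Toy example of conventional "averaging" upscaling: 2x bilinear
 * interpolation of a small grayscale image. Purely illustrative. */
#include <stdio.h>

#define W 4
#define H 4

/* Sample the source with bilinear weighting at fractional coordinates. */
static double bilinear(const unsigned char src[H][W], double x, double y)
{
    int x0 = (int)x, y0 = (int)y;
    int x1 = x0 + 1 < W ? x0 + 1 : x0;
    int y1 = y0 + 1 < H ? y0 + 1 : y0;
    double fx = x - x0, fy = y - y0;
    double top = src[y0][x0] * (1 - fx) + src[y0][x1] * fx;
    double bot = src[y1][x0] * (1 - fx) + src[y1][x1] * fx;
    return top * (1 - fy) + bot * fy;
}

int main(void)
{
    const unsigned char src[H][W] = {
        { 10,  40,  40, 10},
        { 40, 200, 200, 40},
        { 40, 200, 200, 40},
        { 10,  40,  40, 10},
    };
    /* 2x upscale: each output pixel maps back to a fractional source position. */
    for (int oy = 0; oy < 2 * H; oy++) {
        for (int ox = 0; ox < 2 * W; ox++)
            printf("%4.0f", bilinear(src, ox / 2.0, oy / 2.0));
        printf("\n");
    }
    return 0;
}
```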

See-through displays: pricey gimmick or effective differentiator?

Speaking of TVs…bigger (case study: TCL’s 115” monstrosity), thinner, faster-refreshing (case study: LG’s 480 Hz refresh-rate OLED…I’ll remind readers of my longstanding skepticism regarding this particular specification, recently validated by Vizio’s class action settlement) and otherwise “better” displays were as usual rife around CES. But I admittedly was surprised by another innovation, which LG’s suite reportedly most pervasively exemplified, with Samsung apparently a secondary participant: transparent displays. I’m a bit embarrassed to admit this, but so-called “See-through Displays” (to quote Wikipedia vernacular) have apparently been around for a few years now; this is the first time they’ve hit my radar screen.

Admittedly, they neatly solve (at least somewhat) a problem I identified a while back; ever-larger displays increasingly dominate the “footprint” of the room they’re installed in, to the detriment of…oh…furniture, or anything else that the room might otherwise also contain. A panel that can be made transparent (with consequent degradation of contrast ratio, dynamic range, and other image quality metrics, but you can always re-enable the solid background when those are important) at least creates the illusion of more empty room space. LG’s prototypes are OLED-based and don’t have firm prices (unless “very expensive” is enough to satisfy you) or production schedules yet. Samsung claims its MicroLED-based alternative approach is superior but isn’t bothering to even pretend that what it’s showing are anything but proof-of-concepts.

High-end TV supplier options expand and abound

Speaking of LG and Samsung…something caught my eye amidst the flurry of news coming through my various Mozilla Thunderbird-enabled RSS feeds this week. Roku announced a new high-end TV family, implementing (among other things) the aforementioned upscaling and other image enhancement capabilities. What’s the big deal, and what’s this got to do with LG and Samsung? Well, those two were traditionally the world’s largest LCD TV panel suppliers, by a long shot. But nowadays, China’s suppliers are rapidly expanding in market share, in part because LG and Samsung are instead striving to move consumers to more advanced display technologies, such as the aforementioned OLED and microLED, along with QLED (see my post-2019 CES coverage for more details on these potential successors).

LG and Samsung manufacture not only display panels but also TVs based on them, of course, and historically they’d likely be inclined to save the best panels for themselves. But now, Roku is (presumably) being supplied by Chinese panel manufacturers who don’t (yet, at least) have the brand name recognition to be able to sell their own TVs to the US and other Western markets. And Roku apparently isn’t afraid (or maybe it’s desperation?) to directly challenge other TV suppliers such as LG and Samsung, which it had previously aspired to have as partners integrating support for its streaming platform. Interesting.

Premium smartphones swim upstream

Speaking of aspiring for the high end…a couple of weeks ago, I shared my skepticism regarding any near-term reignition of new smartphone sales. While I’m standing by that premise in a broad sense, there is one segment of the market that seemingly remains healthy, at least comparatively: premium brands and models. Thereby explaining, for example, Qualcomm’s latest high-end Qualcomm Snapdragon 8 Gen 3 SoC platform, unveiled last October. And similarly explaining the CES-launched initial round of premium smartphones based on the Snapdragon 8 Gen 3 and competitive chipsets from companies like Apple and MediaTek.

Take, for example, the OPPO Find X7 Ultra. Apple’s iPhone 15 Pro Max might have one periscope lens, but OPPO’s new premium smartphone has two! Any sarcasm you might be sensing is intentional, by the way…that said, keep in mind that I’m one of an apparently dying breed of folks who’s still fond of standalone cameras, and that I also take great pride in not acquiring the latest-and-greatest smartphones (or brand-new ones at all, for that matter).

Wi-Fi gets faster and more robust…and slower but longer distance

Speaking of wireless communications…Wi-Fi 7 (aka IEEE 802.11be), the latest version of the specification from the Wi-Fi Alliance, was officially certified this week. Predictably, as with past versions of the standard, manufacturers had jumped the gun and began developing and sampling chipsets (and systems based on them) well ahead of this time; hopefully all the equipment already out there based on “draft” specs will be firmware-upgradeable to the final version. In brief, Wi-Fi 7 builds on Wi-Fi 6 (aka IEEE 802.11ax), which had added support for both MU-MIMO and OFDMA, and Wi-Fi 6e, which added support for the 6 GHz license-exempt band, with several key potential enhancements:

  • Wider channels: up to 80 MHz in the 5 GHz band (vs 20 MHz initially) and up to 320 MHz in the 6 GHz band (vs 160 MHz previously)
  • Multi-link operation: the transmitter-to-receiver connection can employ multiple channels in multiple bands simultaneously, for higher performance and/or reliability
  • Higher QAM levels for denser data packing: 4K-QAM, versus 1,024-QAM with Wi-Fi 6 and 256-QAM in Wi-Fi 5.

The key word in all of this, of course, is “potential”. The devices on both ends of the connection must both support Wi-Fi 7, first and foremost, otherwise it’ll down-throttle to a lower version of the standard. Wide channel usage is dependent on spectrum availability, and the flip side of the coin is also relevant: its usage may also adversely affect other ISM-based devices. And QAM level relevance is fundamentally defined by signal strength and contending interference sources…i.e., 4K-QAM is only relevant at close range, among other factors.
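
To put rough numbers on that “potential”: ignoring coding rate, guard intervals, spatial streams, and real-world interference, the two headline changes alone scale the ideal peak rate by about 2.4x relative to a 160 MHz, 1,024-QAM Wi-Fi 6E link, as the back-of-the-envelope sketch below shows.

```c
/* Rough back-of-the-envelope arithmetic only: how the two headline Wi-Fi 7
 * changes scale peak PHY rate relative to a Wi-Fi 6E link, ignoring coding
 * rate, guard intervals, spatial streams, and real-world interference. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double bw_scale  = 320.0 / 160.0;           /* 320 MHz vs 160 MHz channels */
    double qam_scale = log2(4096) / log2(1024); /* 12 bits/symbol vs 10 */
    printf("channel-width factor: %.1fx\n", bw_scale);
    printf("4K-QAM factor:        %.1fx\n", qam_scale);
    printf("combined (ideal):     %.1fx\n", bw_scale * qam_scale);
    return 0;
}
```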

That said, Wi-Fi’s slower but longer range sibling, Wi-Fi HaLow (aka IEEE 802.11ah), which also had its coming-out party at CES this year, is to me actually the more interesting wireless communication standard. The key word here is “standard”. Long-time readers may remember my earlier discussions of my Blink outdoor security camera setup. Here’s a relevant excerpt from the premier post in the series:

A Blink system consists of one or multiple tiny cameras, each connected both directly to a common router or to an access point intermediary (and from there to the Internet) via Wi-Fi, and to a common (and equally diminutive) Sync Module control point (which itself then connects to that same router or access point intermediary via Wi-Fi) via a proprietary “LFR” long-range 900 MHz channel.

The purpose of the Sync Module may be non-intuitive to those of you who (like me) have used standalone cameras before…until you realize that each camera is claimed to be capable of running for up to two years on a single set of two AA lithium cells. Perhaps obviously, this power stinginess precludes continuous video broadcast from each camera, a “constraint” which also neatly preserves both available LAN and WAN bandwidth. Instead, the Android or iOS smartphone or tablet app first communicates with the Sync Module and uses it to initiate subsequent transmission from a network-connected camera (generic web browser access to the cameras is unfortunately not available, although you can also view the cameras’ outputs from either a standalone Echo Show or Spot, or a Kindle Fire tablet in Echo Show mode).

In summary, WiFi HaLow takes that “proprietary “LFR” long-range 900 MHz channel” and makes it industry-standard. One of the first Wi-Fi HaLow products to debut this week was Abode Systems’ Edge Camera, developed in conjunction with silicon partner Morse Micro and software partner Xailent, which will enter production later this quarter at $199.99 and touts a 1.5 mile broadcast range and one year of operating life from its integrated 6,000 mAh rechargeable Li-ion battery. The broader implications of the technology for IoT and other apps are intriguing.

Does Matter (along with Thread, for that matter) matter?

Speaking of networking…the Matter smart home communication standard, built on the foundation of the Thread wireless protocol (which shares its IEEE 802.15.4 radio underpinnings with Zigbee), had no shortage of associated press releases and product demos in Las Vegas this week. But to date, its implementation has been underwhelming (leading to a scathing but spot-on recent diatribe from The Verge, among other pieces), both in comparison to its backers’ rosy projections and its true potential.

Not that any of this was a surprise to me, alas. Consider that the fundamental premise of Matter and Thread was to unite the now-fragmented smart home device ecosystem exemplified by, for example, the various Belkin WeMo devices currently residing in my abode. If you’re an up-and-coming startup in the space, you love industry standards, because they lower your market-entry barriers versus larger, more established competitors. Conversely, if you’re one of those larger, more established suppliers, you love barriers to entry for your competitors. Therefore the lukewarm-at-best (and more frequently, nonexistent or flat-out broken) embrace of Matter and Thread by legacy smart home technology and product suppliers (for which, to be precise, and as my earlier Blink example exemplifies, conventional web browser access, vs a proprietary app, is even a bridge too far).

I’ll have more to say on Matter and Thread in a dedicated-topic post to come. But suffice it to say that I’m skeptical about their long-term prospects, albeit only cautiously so. I just don’t know what it might take to break the logjam that understandably prevents competitors from working together, in spite of the reality that a rising tide often does end up lifting all boats…or if you prefer, it’s often better to get a slice of a large pie versus the entirety of a much smaller pie. I’d promise to turn metaphors off at this point, but then there’s the title of the next section…

The Apple-ephant in the room

Speaking of standards…Apple, as far as I know, has never had a show floor, hospitality suite or other formal presence at CES, although I’m sure plenty of company employees attend, scope out competitors’ wares and meet with suppliers (and of course, there are plenty of third-party iPhone case suppliers and the like showing off their latest-and-greatest). That said, Apple still regularly casts a heavy pall over the event proceedings by virtue of its recently announced, already-public upcoming and rumored planned product and service offerings. Back in 2007, for example, the first-generation iPhone was all that anyone could talk about. And this year, it was the Vision Pro headset, which Apple announced on Monday (nothing like pre-empting CES, eh?) would be open for pre-sale beginning next week, with shipments starting on February 2.

The thematic commonality with the first iPhone commercial was, I suspect, not by accident.

What’s the competitive landscape look like? Well, in addition to Qualcomm’s earlier mentioned Snapdragon 8 Gen 3 SoC for premium smartphones, the company more recently (a few days ago, to be precise) unveiled a spec-bumped “+” variant of its XR2 Gen 2 SoC for mixed-reality devices, several of which were on display at the show. There was, for example, the latest-generation XREAL augmented reality (AR) glasses, along with an upcoming (and currently unnamed) standalone head-mounted display (HMD) from Sony. The latter is particularly interesting to me…it was seemingly (and likely obviously) rushed to the stage to respond to Apple’s unveil, for one thing. Sony’s also in an interesting situation, because it first and foremost wants to preserve its lucrative game console business, for which it already offers several generations of VR headsets as peripherals (thereby explaining why I earlier italicized “standalone”). Maybe that’s why development partner Siemens is, at least for now, positioning it as intended solely for the “industrial metaverse”?

The march of the semiconductors

Speaking of ICs…in addition to the announcements I’ve already mentioned, the following vendors (and others as well; these are what caught my eye) released chips and/or software packages:

The rest of the story

I’m a few words shy of 3,000 at this point, and I’m not up for incurring Aalyia’s wrath, so I’ll only briefly mention other CES 2024 announcements and trends that particularly caught my eye:

And with that, pushing beyond 3,100 words (and pushing my luck with Aalyia in the process) I’ll sign off. Sound off with your thoughts in the comments, please!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post The 2024 CES: It’s “AI everywhere”, if you hadn’t already guessed appeared first on EDN.

Renesas to acquire GaN device maker Transphorm for $339m

Semiconductor today - Thu, 01/11/2024 - 15:00
Transphorm Inc of Goleta, CA, USA is to be acquired by a subsidiary of Renesas Electronics Corp of Tokyo, Japan for $5.10 per share in cash (a premium of about 35% to Transphorm’s closing stock price on 10 January, and about 56% to the volume-weighted average price over the last 12 months and 78% to that over the last six months). The transaction values Transphorm at about $339m...
