ELE Times

Latest product and technology information from electronics companies in India

Intel Accelerates AI Integration in Automotive Industry with Strategic Moves and Open Standards

Fri, 01/12/2024 - 14:25

In a groundbreaking announcement, Intel Corporation reveals its comprehensive plan to infuse artificial intelligence (AI) into the automotive landscape, solidifying its commitment to the industry's transformative shift towards electric vehicles (EVs). The company outlines its "AI everywhere" strategy and announces an agreement to acquire Silicon Mobility, a leading fabless silicon and software company specializing in Systems-on-Chips (SoCs) for intelligent electric vehicle energy management.

Key Developments:

  • Acquisition of Silicon Mobility: Intel agrees to acquire Silicon Mobility, aligning with its sustainability goals and addressing critical energy-management needs in the EV sector.
  • AI-Enhanced SoCs: Intel introduces a new family of AI-enhanced software-defined vehicle SoCs. Zeekr is the pioneering Original Equipment Manufacturer (OEM) to adopt these chips for advanced generative AI-driven in-vehicle experiences.
  • Open UCIe-Based Chiplet Platform: Intel commits to deliver the industry’s first open Universal Chiplet Interconnect Express (UCIe)-based platform for Software-Defined Vehicles (SDVs) in collaboration with imec, ensuring rigorous quality and reliability for automotive applications.
  • Industry-Defining Standards: Intel takes the lead in chairing a new international standard for EV power management, emphasizing the company’s role in steering the industry towards sustainable and efficient electric vehicles.

Intel’s Whole Vehicle Approach:

Jack Weast, Vice President and General Manager of Intel Automotive, emphasizes the company’s holistic strategy, stating, “Intel is taking a ‘whole vehicle’ approach to solving the industry’s biggest challenges. Driving innovative AI solutions across the vehicle platform will help the industry navigate the transformation to EVs.”

AI-Enhanced SDV SoCs Unveiled:

The new family of AI-enhanced SoCs from Intel addresses the critical need for power and performance scalability. These chips draw from Intel’s AI PC roadmap, enabling advanced in-vehicle AI use cases like driver and passenger monitoring. A live demo showcased the simultaneous operation of 12 advanced workloads, highlighting the potential for consolidating legacy electronic control unit (ECU) architecture for improved efficiency and scalability.

Zeekr Takes the Lead:

Intel’s SDV SoCs will debut in Geely’s Zeekr brand, making it the first OEM to leverage Intel’s latest technology. The collaboration ensures forward compatibility on Intel systems, allowing Zeekr to scale and upgrade services to meet evolving customer demands for next-gen experiences.

Open Standards for a Sustainable Future:

Intel collaborates with SAE International to establish a committee delivering an automotive standard for Vehicle Platform Power Management (J3311). Inspired by proven power management techniques, the new standard aims to enhance energy efficiency and sustainability across all EVs. The committee includes industry representation from Stellantis, HERE, and Monolithic Power Systems, and is open to additional industry participation.

This strategic leap by Intel signifies a pivotal moment in the automotive industry, where cutting-edge AI technology meets the growing demand for sustainable and intelligent electric vehicles.


Explore the challenges and opportunities of bi-directional charging and EVs

Fri, 01/12/2024 - 13:40

Courtesy: Avnet

Anyone with a recently built car will probably have one or more buttons that they don’t use or don’t even know how to use. “Off-road” mode for an SUV used for the school run might be an example.

Another button in electric vehicles (EVs) that could soon rouse some curiosity is one enabling bi-directional charging, or more correctly "bi-directional power transfer" (BPT) in the on-board battery charger. This is a feature of EVs that makes energy stored in the battery available for purposes other than traction. Another generic term is V2X, or "Vehicle to (something)" power transfer.

Given widespread concerns about how long it takes to add range to an EV from the available charge points, it might seem odd that the ability to drain that charge away again would be seen as advantageous. There are good reasons, though, why it can be, and we'll look at these now.

Why do it?

The reason often given for BPT in EVs is that utility providers will give credit for energy returned to the grid to enable “peak shaving,” or providing an energy buffer for peak demands on the grid. Otherwise, utilities would have to bring extra power sources online at high cost to avoid outages. Timing is the issue here. EVs will often be “plugged in” overnight to charge, sometimes at a lower tariff, and this is when excess energy might be available. Peak demands are typically during the day when industry is operating. Of course, when EV adoption is ubiquitous, EVs will become a peak demand themselves as people return from work and plug in.

Those with solar power will be familiar with the principle of "feed-in" to the grid. But remember that after the capital expense, solar energy is effectively free, so any credit is significant. Energy in EV batteries must be bought initially, so the benefit is the difference between debit and credit, which might be small or even negative. However, estimates show that a typical EV owner could save around $420 each year, or $4,000 over the vehicle's lifetime.
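
As a rough illustration of where such a saving could come from, the sketch below works through the debit-versus-credit arithmetic in Python. Every figure in it (tariffs, round-trip efficiency, exported energy, days of participation) is an assumption chosen purely for illustration, not data from this article; with these particular numbers the result happens to land near the $420-per-year estimate quoted above.

# Rough illustration of V2G "buy low, sell high" arithmetic.
# All tariff and usage figures below are assumptions for illustration only.

off_peak_price = 0.10   # $/kWh paid to charge overnight (assumed)
feed_in_credit = 0.30   # $/kWh credited for energy returned at peak (assumed)
round_trip_eff = 0.85   # charger plus battery round-trip efficiency (assumed)
energy_per_day = 10.0   # kWh exported to the grid per day (assumed)
days_per_year = 250     # days the EV is plugged in and participating (assumed)

# Each exported kWh requires buying slightly more energy to cover round-trip losses.
cost_per_kwh_exported = off_peak_price / round_trip_eff
margin_per_kwh = feed_in_credit - cost_per_kwh_exported

annual_saving = margin_per_kwh * energy_per_day * days_per_year
print(f"Margin per exported kWh: ${margin_per_kwh:.3f}")
print(f"Estimated annual saving: ${annual_saving:.0f}")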

This hardly offsets the current extra cost of an EV over an internal combustion engine, but the owner might take the wider view to include environmental benefits and overall cost to society. With the increasing adoption of intermittent alternative energy sources such as solar and wind, EV batteries are seen as a valuable reserve to draw on by the utilities, making their argument for feed-in more compelling.

Another use for energy returned from the battery is V2H, or "Vehicle to Home." This could include groups of homes and even small businesses in a microgrid. Here, local alternative energy sources are combined with a grid connection and the energy storage in EVs to potentially provide independence from the grid, for cost savings and resilience to power outages, provided the EV is parked and connected. If not, then a wall-mounted battery might be needed. Forward-looking property developers could include this in new builds; the capital cost of retrofitting the infrastructure with the appropriate safety features is high.

A simpler “Vehicle to Load” arrangement might be a standard AC power socket on the EV, which could power hand tools outside or lighting when camping. Also, as battery-powered vehicles are adopted in the construction industry, the V2L AC power source will become an alternative to polluting and noisy diesel generators. The ability to provide power at emergency scenes from heavy-duty vehicles is also seen as a potential asset. This was done at the Fukushima nuclear plant disaster in March 2011.

Regulatory pressures

Whether or not the use cases described can be justified today, authorities have taken the longer view and urged the automotive industry to include BPT in EVs. California, for example, got some way toward approving a bill in the state legislature that would require all new EVs sold to include BPT. This was dropped after opposition over the extra cost it was expected to add to an EV.

Other countries have initiatives such as the UK’s “Electric vehicle smart charging action plan” to help meet the UK’s 2050 climate change targets. Still, the plan identifies V2H as a challenge.

Major European auto manufacturers are including BPT in their roadmaps. In the U.S., GM has announced that all its EVs will be bi-directional by 2026, and Tesla has said its EVs will be by 2025. To support its introduction, BPT is now embedded in the ISO 15118-20 standard, which specifies the communications interface and charging infrastructure needed.

What’s available?

The first thing to note is that BPT is (at least for now) only applicable to the vehicle's on-board charger. Fast DC chargers are overwhelmingly situated at public stations and serve users who need to add range quickly to continue a journey. Bi-directional DC chargers have been rolled out in trials, but most applications are, and will be, AC wall boxes, mostly Level 1 up to around 2kW and some Level 2 up to around 22kW. Note, though, that the return power is often pegged to a low rate (as low as 2.2kW for the MG ZS EV, for example, and only up to 3.6kW for several makes), at about half of the battery charge rates.

An analysis of an online EV database by IEEE, dated September 2023, identified just nine EVs available with bi-directional chargers, offering a mix of DC and AC outputs.

Implementation of bi-directional power transfer

One of the reasons BPT is promoted is that it is a perceived added value that is relatively low cost to implement in a vehicle. The development of all the security, safety and control electronics and software is certainly complex, and this is why BPT-enabled EVs might initially be more expensive, to recoup those costs, but the final hardware in the vehicle is little different.

The reason for this is that on-board chargers have been continuously developed to be more efficient, for the fastest charge time, and smaller and lighter, so as not to adversely affect range. This is achieved in part by replacing power diodes in various positions with MOSFETs, either silicon or silicon carbide, which drop a lower voltage and therefore dissipate less power. This saves energy and reduces heatsink size, weight and cost. The MOSFETs can then also be configured to act as switches, reversing their function from rectification to power switching.

The outline principle is shown in the diagram. The circuit at the top is uni-directional, whereas the circuit at the bottom uses MOSFETs as synchronous rectifiers for bi-directional operation. The totem-pole PFC stage with diodes now becomes an inverter, and the symmetry of the isolated dual active bridge (DAB) DC-DC stage is apparent, allowing power flow in either direction depending on the control scheme.

The four MOSFETs in the primary of the DAB converter act as diodes in the reverse power-flow direction, either by simply using their body diodes with the channels off or, for best efficiency, by being driven as synchronous rectifiers. The second circuit has become common anyway for best efficiency in uni-directional applications, so it is easily modified for reverse power flow simply by changing the gate-drive control algorithms.
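
To underline that it is the control scheme, not extra hardware, that sets the direction of power flow in the DAB stage, here is a minimal Python sketch that evaluates the commonly quoted first-order phase-shift power equation for a dual active bridge. The component values and phase shifts are assumptions for illustration only; a real on-board charger would use its own parameters and a more detailed converter model.

import math

# First-order power-transfer equation for a phase-shifted dual active bridge (DAB):
#   P = n * V1 * V2 * phi * (1 - |phi| / pi) / (2 * pi * f_sw * L)
# The sign of the bridge phase shift phi sets the power-flow direction.
# All component values below are assumptions for illustration only.

def dab_power(v1, v2, phi, f_sw, series_l, n=1.0):
    """Approximate DAB power transfer in watts for a phase shift phi (radians)."""
    return n * v1 * v2 * phi * (1 - abs(phi) / math.pi) / (2 * math.pi * f_sw * series_l)

V_DC_LINK = 400.0   # V, PFC output / DC link voltage (assumed)
V_BATTERY = 380.0   # V, battery voltage reflected to the primary (assumed)
F_SW = 100e3        # Hz, switching frequency (assumed)
L_SERIES = 30e-6    # H, transformer leakage / series inductance (assumed)

for phi_deg in (30, -30):
    p = dab_power(V_DC_LINK, V_BATTERY, math.radians(phi_deg), F_SW, L_SERIES)
    direction = "grid -> battery (charging)" if p > 0 else "battery -> grid (discharging)"
    print(f"phi = {phi_deg:+d} deg  ->  P = {p / 1000:.1f} kW  ({direction})")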

With the momentum of climate protection legislation behind it, bi-directional power transfer in EVs is a feature that will become common. Unlike some features in modern cars, the button that enables it could be one we eventually appreciate for its cost and environmental benefits.


Generative AI in 2024: The 6 most important consumer tech trends for next year

Fri, 01/12/2024 - 13:15

Qualcomm executives reveal key trends in AI, consumer technology and more for the future.

Not that long ago, the banana-peel-and-beer-fueled DeLorean in “Back to the Future” was presented as comedy. Yet today, 10% of cars are electric-powered.1 Just a year ago, conversing with a computer in true natural language was science fiction, but we know now that the next generation will not know life without a personal AI assistant.

Generative AI was the undisputed game-changer across nearly every industry, and we will undoubtedly continue to feel its impact next year.

One of the reasons I love working at Qualcomm is that I am surrounded by inventors and business leaders who are developing and deploying the leading edge AI, high performance, low-power computing and connectivity technologies poised to deliver intelligent computing everywhere.

Exactly how generative AI and other technology trends will continue to play out next year, of course, no one can completely know. But as we close out 2023, I was interested in understanding what our executives here at Qualcomm thought would be the key trends of 2024. Here is what I heard.


  1. AI PCs will drive a laptop replacement “super cycle”

The PC market is set to experience a transformative shift in 2024, fueled by a “super cycle” of laptop replacements with the convergence of AI advancements for PCs.

Morgan Stanley predicts a drastic shift, with 40% of laptops due for replacement in 2024, expected to rise to 65% by 2025.2

“We anticipate a market-defining “super cycle” in the PC starting in 2024, where the need for new laptops and the advancement of AI will drive a new era of PCs,” says Qualcomm Technologies’ Senior Vice President and GM of Compute and Gaming, Kedar Kondap, adding,

“This innovation is not just an evolution in the PC market, but a revolution, driving the demand for AI PCs forward and reshaping the computing experience for businesses and consumers into the new year.”

You only have to look at what Microsoft is doing to know that these intelligent PCs and AI assistants, like Copilot, are coming.

Unapologetic plug/reminder: In October, at Snapdragon Summit, the likes of Microsoft, HP, Lenovo and Dell stood with us as we announced how we’re enabling the AI PC with Snapdragon X Elite, built to take on AI tasks.



  2. Generative AI will move from the cloud to personal devices

The generative AI conversation in 2023 was predominantly about the cloud, but privacy, latency and cost will increasingly be choke points that on-device AI capabilities can help solve.

“As generative AI becomes more integrated in our lives, our personal devices like our smartphones, PCs, vehicles, and even IoT devices will become the hubs for multi-modal generative AI models,” noted Qualcomm Technologies’ Senior Vice President & General Manager of Technology Planning & Edge Solutions, Durga Malladi.

Not only does it make sense to do many AI tasks on-device, but doing so also broadens access to these capabilities for both consumers and enterprises.

“This transition will usher in next-level privacy-focused, personalized AI experiences to consumers and enterprises, and cut down cloud costs for developers,” added Malladi. “With large generative AI multi-modal models running on devices, the shift from cloud-based to hybrid or on-device AI is inevitable.”


  3. Your smartphone will become even more indispensable

As generative AI capabilities are brought onto the smartphone, personal AI assistants will evolve into indispensable companions, continuously learning from our daily lives to provide tailored experiences.

“Smartphones, our most personal devices, are poised to leverage multi-modal generative AI models and combine on-device sensor data,” said Qualcomm Technologies’ Senior Vice President & General Manager of Mobile Handset, Chris Patrick. He added,

“Your on-device AI assistant will evolve from generic responses to personalized, informative outcomes.”

Applications leveraging large language models (LLMs) and visual models will use sensor data such as health, location and hyperlocal information to deliver personalized, meaningful content.

Patrick added, “By using different modalities, these AI assistants will enable natural engagement and be able to process and generate text, voice, images and even videos, solely on-device. This will bring next-level user experience to the mainstream while addressing the escalating costs of cloud-based AI.”

Another unapologetic plug/reminder: Also at Snapdragon Summit, we demonstrated on-device personalization on our new Snapdragon 8 Gen 3 to enable this market need.


  4. Creatives will get more creative

Deeper integration of AI in the creative and marketing process is inevitable.

“Generative AI is changing how we learn, how we play and how we work,” said Qualcomm Incorporated’s Chief Marketing Officer, Don McGuire, adding, “Not only is Qualcomm one of the largest companies enabling this technology, but as the CMO, I’m deploying the tools throughout the marketing organization.

“As a result, we’re seeing an increase in productivity level, time-to-market and efficiency, so the team can spend more time on strategy and creative collaboration, and less on time-consuming, repetitive tasks.

“It’s not about replacing people but augmenting and enhancing their capabilities.”

With access to vast amounts of data, generative AI can make suggestions and provide valuable insights. It enables marketers to target specific audiences more effectively and gives us the ability to produce highly personalized content across various mediums.


  5. Consumers will push for open multi-device ecosystems

The adoption of open ecosystems will empower consumers with the freedom to select the best devices from a variety of brands that fit their specific needs.

This increased interoperability will drive innovation and enhance consumer experiences as brands compete on a level playing field, striving to outperform one another and deliver superior products.

“Consumers will be the driving force behind device makers opening their ecosystems, demanding enhanced communication and functionality across devices,” says Qualcomm Technologies’ Senior Vice President & General Manager, Mobile, Compute & XR, Alex Katouzian.

“With the recent announcement of Apple’s rich communication services messaging integration, and technologies like Link to Windows and Snapdragon Seamless experiences becoming more widespread, there’s a growing push for interoperability across brands and platforms,” he adds. “This shift towards open ecosystems will empower consumers with greater choice, enabling them to select the best device for their specific needs.”


  6. Mixed Reality will redefine your world

In 2024, mixed reality, virtual reality and extended reality (XR) will make their way into the mainstream as technologies once reserved for enthusiasts become integrated into consumer products.

Qualcomm Technologies’ Vice President and GM of XR Hugo Swart says,

“XR is entering a stage of rapid progress, thanks to the widespread adoption of mixed reality capabilities, smaller devices and the advancement of spatial computing.”

Affordable hardware options, such as Meta’s Quest 3 and the Ray-Ban Meta smart glasses, are just the beginning of what’s to come.

Generative AI will play a crucial role in improving and scaling XR experiences, democratizing three-dimensional (3D) content generation through new tools and creating more realistic and engaging virtual environments.

Voice interfaces powered by generative AI will provide a natural and intuitive way to interact with XR devices, while personal assistants and lifelike 3D avatars, also powered by generative AI, will become increasingly prevalent in the XR space.


Using Secure IC Devices to Maintain the Integrity of DC Current Metering Data

Fri, 01/12/2024 - 12:59

Courtesy: Brette Mullenaux | Microchip

This blog post explores how our secure Integrated Circuit (IC) devices play a critical role in ensuring the credibility of Direct Current (DC) metering applications.

Enhancing Trust in DC Metering Technology

Direct Current (DC) metering is essential in various industries, including data centers, communications, transportation, industrial and renewable energy. The proliferation of DC circuits, especially in applications like Electric Vehicle (EV) charging stations, has led to an increased demand for reliable and trustworthy DC metering technology. Unlike AC metering, which may overlook losses that occur from AC-to-DC conversion, DC metering ensures the accurate measurement of energy consumption.

However, ensuring the credibility and integrity of DC metering data is a significant challenge. As global standards and regulations evolve to standardize metering results, the need for authentic and secure measurements becomes paramount. This is where our secure Integrated Circuit (IC) devices play a crucial role in ensuring the trustworthiness of DC metering applications.

The Importance of Reliable DC Metering

DC metering plays a critical role in applications such as Level 3 (and above) DC fast charging. In these scenarios, end users need to pay for the precise amount of energy they receive. AC metering may not provide accurate results due to the losses incurred during the AC-to-DC conversion process. Therefore, the use of DC metering is essential for billing transparency and fairness.

Global standards, like the German Eichrecht standard, are being developed to ensure that DC metering measurements are authentic and trustworthy. These standards require end users to have the means to validate the authenticity of energy measurements; this is where our secure IC devices come into play.

Challenges in Ensuring DC Metering Security

DC metering systems typically include a microcontroller (MCU) that is responsible for logic, LCD displays and communication protocols. While these systems often use MCUs from reputable suppliers, the security aspect is often implemented in the software. However, software-based security can expose DC meters to vulnerabilities that could compromise the credibility of the measurements.

Secure IC Devices: A Solution for Reliable DC Metering

We offer a wide range of solutions, including reference designs, that cater to vertical markets where DC metering is vital. One notable example is the market for EV chargers, where reliable measurements are crucial for accurate billing.

Our TA100, ATECC608 and ECC204 devices are specifically designed to address the security challenges in DC metering applications. These devices provide robust hardware-level protection for private keys and support ECC P-256 ECDSA sign operations in hardware. By leveraging our CryptoAuthentication library, DC metering vendors can efficiently implement secure signing of JSON-formatted data.

OCMF: Ensuring Authenticity and Integrity

In the context of DC metering, the Open Charge Metering Format (OCMF) often comes into play, particularly in reference to the Eichrecht standard in Germany. OCMF is a JSON format that includes energy measurements and a valid ECC signature. This format allows end users to verify the authenticity of measurements by using the corresponding public key. Additionally, the German Eichrecht standard mandates that DC meters include a small display accessible to users for transparency and validation.

Our secure IC devices, including the TA100, ATECC608 and ECC204, ensure that private keys are securely stored in hardware, making it challenging for hackers to compromise the integrity of DC metering data. Implementing JSON data signing using these devices is straightforward thanks to the high-level APIs provided by our CryptoAuthentication library.
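
The signing and verification flow can be illustrated in plain software. The Python sketch below, using the cryptography package, signs a simplified JSON measurement record with ECDSA on the P-256 curve and verifies it with the public key. It only illustrates the cryptographic principle: the record fields are made up rather than the full OCMF schema, and in a real meter the private key never leaves the secure IC, with the sign operation delegated to the device through the CryptoAuthentication library.

import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Software-only illustration of ECDSA P-256 signing of a JSON measurement record.
# In a real DC meter the private key stays inside the secure IC; the field names
# below are simplified and are NOT the full OCMF schema.

private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

record = {"meter_id": "DEMO-0001", "energy_kwh": 17.42, "timestamp": "2024-01-12T12:59:00Z"}
payload = json.dumps(record, separators=(",", ":"), sort_keys=True).encode()

signature = private_key.sign(payload, ec.ECDSA(hashes.SHA256()))

# An end user (or transparency software) verifies the record with the public key.
try:
    public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
    print("Measurement signature is valid")
except InvalidSignature:
    print("Measurement record has been tampered with")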

Conclusion

In an era where the credibility of DC metering data is crucial for various applications, our secure IC devices provide a robust solution for safeguarding private keys and ensuring the authenticity and integrity of measurements. By embracing hardware-level security, DC metering vendors can meet evolving standards and regulations while offering end users transparent and trustworthy billing. As DC metering continues to expand across different industries, our contribution to enhancing security in this field is invaluable.


Text-to-SQL Generation Using Fine-tuned LLMs on Intel GPUs (XPUs) and QLoRA

Fri, 01/12/2024 - 12:14

Courtesy: Rahul Unnikrishnan Nair | Intel

The landscape of AI and natural language processing has dramatically shifted with the advent of Large Language Models (LLMs). This shift is characterized by advancements like Low-Rank Adaptation (LoRA) and its more advanced iteration, Quantized LoRA (QLoRA), which have transformed the fine-tuning process from a compute-intensive task into an efficient, scalable procedure.

Generated with Stable Diffusion XL using the prompt: “A cute laughing llama with big eyelashes, sitting on a beach with sunglasses reading in gibili style”

The Advent of LoRA: A Paradigm Shift in LLM Fine-Tuning

LoRA represents a significant advancement in the fine-tuning of LLMs. By introducing trainable adapter modules between the layers of a large pre-trained model, LoRA focuses on refining a smaller subset of model parameters. These adapters are low-rank matrices, significantly reducing the computational burden and preserving the valuable pre-trained knowledge embedded within LLMs. The key aspects of LoRA include:

  • Low-Rank Matrix Structure: Shaped as (r x d), where ‘r’ is a small rank hyperparameter and ‘d’ is the hidden dimension size. This structure ensures fewer trainable parameters.
  • Factorization: The adapter matrix is factorized into two smaller matrices, enhancing the model’s function adaptability with fewer parameters (see the numerical sketch after the figure below).
  • Scalability and Adaptability: LoRA balances the model’s learning capacity and generalizability by scaling adapters with a parameter α and incorporating dropout for regularization.
Figure: Left: Integration of LoRA adapters into the model. Right: Deployment of LoRA adapters with a foundation model as a task-specific model library
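
As a minimal numerical sketch of the structure described above, the snippet below forms the effective weight W + (alpha / r) * B A with NumPy and counts the trainable parameters. The matrix sizes and hyperparameters are arbitrary illustrative values, not taken from this article.

import numpy as np

# Minimal numerical sketch of a LoRA adapter update: W_eff = W + (alpha / r) * B @ A.
# Shapes and scaling follow the usual LoRA formulation; the sizes are arbitrary.

d, r, alpha = 1024, 8, 16          # hidden size, adapter rank, scaling hyperparameter

W = np.random.randn(d, d)          # frozen pre-trained weight (not updated)
A = np.random.randn(r, d) * 0.01   # trainable low-rank factor, shape (r x d)
B = np.zeros((d, r))               # trainable low-rank factor, initialized to zero

W_eff = W + (alpha / r) * B @ A    # effective weight used in the forward pass

lora_params = A.size + B.size
print(f"Trainable LoRA parameters: {lora_params:,} "
      f"({100 * lora_params / W.size:.2f}% of the frozen matrix)")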

Quantized LoRA (QLoRA): Efficient Finetuning on Intel Hardware

QLoRA advances LoRA by introducing weight quantization, further reducing memory usage. This approach enables the fine-tuning of large models, such as the 70B Llama 2, on hardware like Intel’s Data Center GPU Max Series 1100 with 48 GB VRAM. QLoRA’s main features include:

  • Memory Efficiency: Through weight quantization, QLoRA substantially reduces the model’s memory footprint, crucial for handling large LLMs.
  • Precision in Training: QLoRA maintains high accuracy, crucial for the effectiveness of fine-tuned models.
  • On-the-Fly Dequantization: It involves temporary dequantization of quantized weights for computations, focusing only on adapter gradients during training.

Fine-Tuning Process with QLoRA on Intel Hardware

The fine-tuning process starts with setting up the environment and installing the necessary packages: bigdl-llm for model loading, peft for LoRA adapters, Intel Extension for PyTorch for training on Intel dGPUs, transformers for fine-tuning and datasets for loading the dataset. We will walk through the high-level process of fine-tuning a large language model (LLM) to improve its capabilities. As an example, I use the task of generating SQL queries from natural-language input, but the focus here is on general QLoRA fine-tuning. For detailed explanations, check out the full notebook on the Intel Developer Cloud, which takes you from setting up the required Python packages through loading the model, fine-tuning and running inference with the fine-tuned LLM to generate SQL from text.

Model Loading and Configuration for Fine-Tuning

The foundational model is loaded in a 4-bit format using bigdl-llm, significantly reducing memory usage. This step is crucial for fine-tuning large models like the 70B Llama 2 on Intel hardware.

from bigdl.llm.transformers import AutoModelForCausalLM
import torch

# Loading the model in a 4-bit format for efficient memory usage
model = AutoModelForCausalLM.from_pretrained(
    "model_id",  # Replace with your model ID
    load_in_low_bit="nf4",
    optimize_model=False,
    torch_dtype=torch.float16,
    modules_to_not_convert=["lm_head"],
)
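
The snippet above covers loading the base model; attaching the LoRA adapters themselves is typically done through the peft package mentioned earlier. The following is a minimal sketch of that step using peft's standard LoraConfig API. The rank, alpha, dropout and target module names are illustrative assumptions (target modules vary by model architecture), and the QLoRA flow in the actual notebook may wrap this step differently.

from peft import LoraConfig, get_peft_model

# Minimal sketch of attaching LoRA adapters with peft; the rank, alpha, dropout
# and target module names below are illustrative and depend on the model used.
lora_config = LoraConfig(
    r=8,                                  # adapter rank
    lora_alpha=16,                        # scaling factor alpha
    lora_dropout=0.05,                    # dropout on the adapters for regularization
    target_modules=["q_proj", "v_proj"],  # attention projections (varies by model)
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # confirms only the adapter weights are trainable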

Learning Rate and Stability in Training

Selecting an optimal learning rate is critical in QLoRA fine-tuning to balance training stability and convergence speed. This decision is vital for effective fine-tuning outcomes, as a higher learning rate can lead to instability and cause the training loss to drop abnormally to zero after a few steps.

from transformers import TrainingArguments

# Configuration for training
training_args = TrainingArguments(
    output_dir="./outputs",  # checkpoint directory (commonly required; path is illustrative)
    learning_rate=2e-5,      # Optimal starting point; adjust as needed
    per_device_train_batch_size=4,
    max_steps=200,
    # Additional parameters...
)

During the fine-tuning process, there is a notable rapid decrease in the loss after just a few steps, which then gradually levels off, reaching a value near 0.6 at approximately 300 steps, as seen in the graph below:

Figure: Training loss curve during QLoRA fine-tuning

Text-to-SQL Conversion: Prompt Engineering

With the fine-tuned model, we can convert natural-language queries into SQL commands, a vital capability in data analytics and business intelligence. To fine-tune the model, we must carefully convert the data into structured prompts like the one below, forming an instruction dataset with Input, Context and Response fields:

# Function to generate structured prompts for Text-to-SQL tasks
def generate_prompt_sql(input_question, context, output=""):
    return f"""You are a powerful text-to-SQL model. Your job is to answer questions about a database. You are given a question and context regarding one or more tables.

You must output the SQL query that answers the question.

### Input:
{input_question}

### Context:
{context}

### Response:
{output}"""
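
For example, a single training record might be rendered like this (the question, schema and query below are made up purely for illustration):

# Example call with a made-up question and table schema
prompt = generate_prompt_sql(
    input_question="How many employees joined after 2020?",
    context="CREATE TABLE employees (id INT, name TEXT, join_year INT)",
    output="SELECT COUNT(*) FROM employees WHERE join_year > 2020",
)
print(prompt)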

Diverse Model Options

The notebook supports an array of models, each offering unique capabilities for different fine-tuning objectives:

  • NousResearch/Nous-Hermes-Llama-2-7b
  • NousResearch/Llama-2-7b-chat-hf
  • NousResearch/Llama-2-13b-hf
  • NousResearch/CodeLlama-7b-hf
  • Phind/Phind-CodeLlama-34B-v2
  • openlm-research/open_llama_3b_v2
  • openlm-research/open_llama_13b
  • HuggingFaceH4/zephyr-7b-beta

Enhanced Inference with QLoRA: A Comparative Approach

The true test of any fine-tuning process lies in its inference capabilities. In this implementation, the inference stage not only demonstrates the model’s proficiency in task-specific applications but also allows for a comparative analysis between the base and fine-tuned models. This comparison sheds light on the effectiveness of the LoRA adapters in enhancing the model’s performance for specific tasks.

Model Loading for Inference:

For inference, the model is loaded in a low-bit format, typically 4-bit, using the bigdl-llm library. This approach drastically reduces the memory footprint, making it possible to run multiple high-parameter-count LLMs on a single piece of resource-optimized hardware such as Intel’s Data Center GPU Max 1100. The following code snippet illustrates the model-loading process for inference:

from bigdl.llm.transformers import AutoModelForCausalLM
import torch

# Loading the model for inference
model_for_inference = AutoModelForCausalLM.from_pretrained(
    "finetuned_model_path",  # Path to the fine-tuned model
    load_in_4bit=True,       # 4-bit loading
    optimize_model=True,
    use_cache=True,
    torch_dtype=torch.float16,
    modules_to_not_convert=["lm_head"],
)

Running Inference: Comparing Base vs Fine-Tuned Model

Once the model is loaded, we can perform inference to generate SQL queries from natural language inputs. This process can be conducted on both the base model and the fine-tuned model, allowing users to directly compare the outcomes and assess the improvements brought about by fine-tuning with QLoRA:

# Generating a SQL query from a text prompt
text_prompt = generate_prompt_sql(...)  # fill in the question and context

# Base Model Inference (tokenization and decoding omitted for brevity)
base_model_sql = base_model.generate(text_prompt)
print("Base Model SQL:", base_model_sql)

# Fine-Tuned Model Inference
finetuned_model_sql = finetuned_model.generate(text_prompt)
print("Fine-Tuned Model SQL:", finetuned_model_sql)

Following just a 15-minute training session, the fine-tuned model demonstrates enhanced proficiency in generating SQL queries that more accurately reflect the given questions, compared to the base model. With additional training steps, we can anticipate further improvements in the model’s response accuracy:

Figure: Fine-tuned model SQL generation for a given question and context

Figure: Base model SQL generation for the same question and context

LoRA Adapters: A Library of Task-Specific Enhancements

One of the most compelling aspects of LoRA is its ability to act as a library of task-specific enhancements. These adapters can be fine-tuned for distinct tasks and then saved. Depending on the requirement, a specific adapter can be loaded and used with the base model, effectively switching the model’s capabilities to suit different tasks. This adaptability makes LoRA a highly versatile tool in the realm of LLM fine-tuning.

Check out the notebook on Intel Developer Cloud

We invite AI practitioners and developers to explore the full notebook on the Intel Developer Cloud (IDC). IDC is the perfect environment to experiment with and explore the capabilities of fine-tuning LLMs using QLoRA on Intel hardware. Once you log in to the Intel Developer Cloud, go to the “Training Catalog” and, under “Gen AI Essentials” in the catalog, you will find the LLM fine-tuning notebook.

Conclusion: QLoRA’s Impact and Future Prospects

QLoRA, especially when implemented on Intel’s advanced hardware, represents a significant leap in LLM fine-tuning. It opens up new avenues for leveraging massive models in various applications, making fine-tuning more accessible and efficient.


Keeping the Pace of Moore’s Law through Industry Collaboration

Fri, 01/12/2024 - 11:48

By Regan Mills, VP and GM, SOC product marketing, Teradyne

It’s no secret: the semiconductor industry is at a crossroads.

In the past, our industry could rely on Moore’s Law and Dennard scaling to continuously advance each new generation of semiconductors. But that’s no longer the case as unprecedented challenges, such as the physical limitations of scaling, have altered this once-linear path. At the same time, new trends in computing — including advanced packaging techniques (e.g., chiplets) and increased demand for more powerful processing — are making devices more complex. Add a significant skills shortage to the mix, and it’s clear that we’re on the precipice of the next evolution in semiconductor design and manufacturing.

The rising complexity of semiconductor devices is unprecedented.

How do we fix it? By seeking out methods and techniques that further optimize our existing solutions and processes in ways that move the industry forward.

Collaboration, A Part of the Solution

The various stages of the semiconductor lifecycle — design, fabrication and testing — have traditionally operated in silos, with limited sharing of information. Instead of directly sharing raw data and real results with one another, the information has been abstracted into specification and data sheets.

For example, a chip designer may have simulated their original design in detail. However, instead of directly sharing the simulation results with other groups, they’ll conventionally distill that information into a specification sheet — which is the only information that is passed down the line.

And that’s problematic because many times specification sheets don’t capture all the granular detail, so significant information is lost. Because this lack of transparency obscures important details, it’s been difficult for the semiconductor industry to fully optimize designs and processes.

The Role of Data

Fortunately, change is in progress. Advanced analytics platforms that rely on sophisticated ML and AI models are enabling every part of the semiconductor value chain to take advantage of new methods for analyzing and acting on the vast amounts of data available during the design and manufacturing process.

On one hand, sharing can work in the forward direction, with each subsequent stage in the lifecycle receiving data from the previous stage. If testing groups could access simulation results, they’d be better informed on the tolerances and margins required for their test setups. This would produce more accurate and reliable data, resulting in higher-quality devices and a positive impact on yield.

Sharing feedback is also integral to collaboration. Consider what happens when a product fails in the field. Here, sharing lifetime and diagnostic data from the device could help to identify which stages in the lifecycle led to the failure. This feedback could then be integrated to improve processes, leading to better-designed and higher-quality end devices.

By way of analogy, imagine a fleet of electric vehicles (EVs), each equipped with advanced sensors that collect data on battery life, motor efficiency and overall vehicle performance under various conditions. If one EV in the fleet experiences a failure, the EV’s communications system could share the data collected up to the point of failure with the automotive manufacturer. This shared information would allow the manufacturer to diagnose the cause of the failure, whether it’s a flaw in battery design, an issue with the electric motor or another problem in a different subsystem.

In the same way, the semiconductor industry can leverage data to identify flaws in processes, the resolution of which will lead to quality, yield and efficiency gains across the board.

Climate for Collaboration

It’s clear that sharing information has huge potential for improving processes in the semiconductor industry. Fortunately, the climate for collaboration has never been better. With governments around the world supporting their own versions of CHIPS Acts, funding and resources in the semiconductor industry are at an all-time high. This groundswell gives the entire semiconductor industry the chance to benefit from the momentum.

Simply interfacing, however, is not enough. We need well-defined standards, whether standard file formats for different aspects of the semiconductor lifecycle or a standard means of sharing data, that allow every player in the value chain to maintain its differentiation and competitive edge while enabling interoperability.

Peripheral Component Interconnect Express (PCIe) is a great example of how standards can improve not only technical performance and efficiency but also differentiation within a well-defined system. PCIe, a high-speed serial bus standard, is the common interface between motherboards and PC hardware and peripherals. PCIe provides lower latency and higher data transfer rates than previous standards, such as PCI. Industry adoption of this standard has ensured that companies can create differentiated products based on the intended application with the confidence that the components will be interoperable.

The semiconductor industry needs to continue this evolution, prioritizing the development of new data standards that will benefit the semiconductor manufacturing ecosystem as a whole.

An open architecture is a secure conduit to off-the-shelf data analytics solutions, improving the speed and efficiency of semiconductor test.

Change is already underway. SEMI’s Smart Data-AI Initiative exemplifies a new approach to industry collaboration as it aims to provide a framework for sharing data among different functions within a fab.

“The global semiconductor industry is projected to reach $1 trillion by 2030, according to a 2022 report from McKinsey & Company, but this will not happen on ‘auto-pilot.’ To accomplish this, we will need to continue the pace of innovation to make billions of increasingly complex microelectronic devices — all of which must be tested for performance, reliability and other metrics before they reach their target application,” said Dr. Pushkar Apte, strategic technical advisor, SEMI. “If we are to maintain high performance and quality on such a massive scale, we need to embrace the strategic integration of data analytics, machine learning and AI in semiconductor manufacturing processes. SEMI’s Smart Data-AI Initiative provides a platform to drive value-creation from data and AI that are specific to the semiconductor ecosystem. The initiative enables pre-competitive collaboration through the entire ecosystem to accelerate innovation while preserving the integrity of an individual company’s IP.”

With this foundation beginning to take shape, how do we handle the resultant influx of data analytics?

At Teradyne, we facilitate sharing through analytics solutions that are based on an open architecture. This approach lets our customers easily integrate off-the-shelf data analytics solutions from third-party companies with our testers. And because our architecture is agnostic, customers can also use the same open architecture with their home-grown analytics solutions. The choice is theirs.

Beyond Moore’s

The physical aspects of Moore’s Law are decelerating, but that doesn’t necessitate a slowdown in semiconductor advancements. In this era where collaboration is taking on an increasingly important role in the semiconductor industry, the opportunity for new paradigms is plentiful, but it’s up to us to evolve the way the industry works together.


Building the Better SSD

Fri, 01/12/2024 - 11:38

Courtesy: Samsung

As the demand for both SSD memory capacity and operating speed continues to increase at a breakneck pace, so does the need to improve data storage efficiency, reduce garbage collection, and handle errors more proactively.

For a big-picture analogy, let’s compare the problems faced with SSD data to the challenges of grain delivery from silo to transportation to warehouse. We’ll consider bags of grain to be delivered as bulk data to be stored on an SSD. NVMe SSD technologies allow the shipper (Data Center host) to specify:

  • A way for multiple grain shippers to tag their bags so that a single transport channel can carry all without mixing them up (SR-IOV, ZNS)
  • The best place in the warehouse to store each bag of grain with other like-grains stored (Flexible Data Placement – FDP) to minimize the number of bags to reorganize (Garbage Collection – GC)
  • The number of resources applied to high-priority shipments vs low-priority ones (Performance Control).

Now let’s consider the associated problem of pest control. In ages past, the world beat a path to the door of those who built a better mousetrap. In the SSD world, that task is akin to error management.

  • Improve the trap mechanism to maximize mice caught (CECC/UECC)
  • Monitor the trap to check the number of mice caught, whether the trap is full, and whether one trap is not working as well as others (SMART/Health)
  • Track and report the most mouse-related activity possible (Telemetry)
  • Use the activity data to foresee a major pest infestation before it happens (Failure Prediction)

And then there are cross-functional issues, such as…

  • Recovering grain bags to a new storage area when the original area has been overrun (data recovery and new drive migration)

Samsung is building a better mousetrap by leading the technology world in SSD engineering.

Samsung’s annual Memory Tech Day event offered several breakout sessions that uncovered our latest storage technologies. Here are the key takeaways from the computing memory solutions track.

Jung Seungjin, VP of the Solution Product Engineering team, discusses SSD telemetry.

Consider a brief history of telemetry: the concept of collecting operations data and transmitting it to a remote location for interpretation has been around for well over a century. Various forms of error logging and retrieval have been included since the beginning of modern hard drive technologies. Basic SSD-specific telemetry commands and delivery formats became standard starting with NVMe 1.3.

In more recent times, Samsung has been using its position as the leader in SSD technology to drive sophisticated and necessary telemetry additions to the spec. The benefits of Samsung’s cutting-edge research become immediately obvious. Consider, for example, Samsung Telemetry Service, an advanced tool helping enterprise customers remotely analyze and manage their devices. It guarantees the stability of data – allowing data center operators to prevent future drive failures, manage drive replacement, and migrate data.

“Through monitoring, we realized that multi-address CECC can become a UECC that can cause problems in the system in the future.”

The Telemetry presentation focuses on telemetry background, the latest improvements that Samsung is driving to add to the specification, and examples of the value they add to enable detection of drive failure. Of key interest is Samsung’s advanced machine learning-based anomaly prediction research.

Silwan Chang, VP of the Software Development team, talks about Flexible Data Placement (FDP) and the ease of its implementation to dramatically reduce Write Amplification Factor (WAF). The discussion includes a comparative analysis of various data placement technologies, including ZNS, showcasing a use case for Samsung’s FDP technology.

The underlying limitation of NAND is that data in a NAND cell cannot be overwritten – a NAND block must be erased before new data can be written. Data placement technology overcomes this limitation because ideal data placement can increase the performance and endurance of modern SSDs without additional hardware cost.

The host influences data placement through the Reclaim Unit (RU) handled by the SSD; knowing the most efficient size and boundaries of this basic SSD storage unit, the host can group data of similar life cycles to reduce or eliminate SSD garbage-collection inefficiencies.
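
As a reminder of what is being optimized: write amplification factor (WAF) is the ratio of total NAND writes (host writes plus garbage-collection copies) to host writes. The short sketch below plugs made-up numbers into that definition to show how grouping similar-lifetime data shrinks the garbage-collection term; the figures are assumptions for illustration, not Samsung measurements.

# Write amplification factor (WAF) = total NAND writes / host writes.
# The terabyte figures below are made up purely to illustrate the ratio.

host_writes_tb = 100.0

gc_copies_mixed_tb = 150.0    # hot and cold data mixed in the same blocks (assumed)
gc_copies_grouped_tb = 15.0   # similar-lifetime data grouped per Reclaim Unit (assumed)

waf_mixed = (host_writes_tb + gc_copies_mixed_tb) / host_writes_tb
waf_grouped = (host_writes_tb + gc_copies_grouped_tb) / host_writes_tb
print(f"WAF without placement hints: {waf_mixed:.2f}")
print(f"WAF with lifetime grouping:  {waf_grouped:.2f}")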

“The best thing about an FDP SSD is that this is possible with a very small change of the system SW.”

Following up, Ross Stenfort of Meta presents Hyperscale FDP Perspectives, where he shows the progression of improvements to reduce WAF:

  • Overprovisioning – allocating extra blocks to use for garbage collection
  • Trim/Deallocate host commands – telling the SSD what can safely be deleted
  • FDP – telling the SSD how to group data in order to minimize future garbage collection.

The presentation includes a compelling workload example without and with FDP, noting that:

“Applications are not required to understand FDP to get benefits.”

In his next session, Silwan Chang continues with a discussion about the present and future of Samsung SSD virtualization technology using SR-IOV.

Efficiency has become a central focus for increasing datacenter processing capacity. With the number of datacenter CPU cores typically exceeding 100, the number of tenants (separate instances / applications) utilizing a single SSD has surged.

Virtualization provides each tenant its own private window into SSD storage space. The PCIe SR-IOV specification provided the basics for setting up a virtualized environment. With its research giving it an early lead, Samsung now has nearly a decade of experience with SR-IOV – and has identified and developed solutions for underlying security and performance issues:

  • Data Isolation – keeping data from one tenant secure from access by others, evolving from logical sharing to physically isolated partitioning
  • Performance Isolation – preventing activity by one tenant from adversely affecting performance of other tenants
  • Security Enhancement – encryption evolving from Virtual Function level to link level
  • Live Migration – moving data from one SSD to another while keeping both in active service to the datacenter host.

“To realize completely isolated storage spaces in a single SSD, we need to evolve into physical partitioning where NAND chips and even controller resources are dedicated to a namespace.”

Sunghoon Chun, VP of the Solution Development team, talks about Samsung’s ongoing development of new solutions tailored to meet the challenges of rapidly evolving PCIe interface speeds and the trend towards high-capacity products.

The key focus is higher speeds at lower active power, requirements that tend to pull against each other.

Samsung targets lower active-power in two main ways:

  • Designing lower power components by adding power rails to boost the efficiency of the voltage regulator
  • Introducing power-saving features to optimize the interaction between components, such as by modifying firmware to favor lower-power SRAM utilization over DRAM.

The higher speed target brings with it higher temperatures, which Samsung addresses with:

  • Form factor conversion to accommodate higher thermal dissipation for power demands going from 25W to 40W
  • Use of more effective and novel case construction materials and design techniques
  • Thermal management solutions using immersion cooling that yield strong experimental gains.

“The goal is to continue efforts to create a perfect SSD, optimized for use in immersion cooling systems over the next few years in line with the trend of the times.”

In summary, this presentation track reveals the Samsung SSD strategy for customer success.

  • Dramatically reduce WAF by taking advantage of Samsung’s advanced Flexible Data Placement technology
  • Vastly increase virtualization efficiency using Samsung’s performance regulation and space partitioning technology to maximize the processing capacity for each core of the multi-core datacenter CPU
  • Achieve significantly higher operating speeds while both reducing power and increasing heat dissipation by using Samsung’s novel design and packaging techniques
  • Remotely analyze and manage devices to virtually eliminate data loss and its crippling downtime through the innovative Samsung Telemetry Service.


Can Your Vision AI Solution Keep Up with Cortex-M85?

Fri, 01/12/2024 - 11:24

Kavita Char | Principal Product Marketing Manager | Renesas

Vision AI – or computer vision – refers to technology that allows systems to sense and interpret visual data and make autonomous decisions based on an analysis of that data. These systems typically have camera sensors to acquire visual data, which is fed as the input activation to a neural network trained on large image datasets to recognize images. Vision AI can enable many applications: industrial machine vision for fault detection, autonomous vehicles, face recognition in security applications, image classification, object detection and tracking, medical imaging, traffic management, road condition monitoring, customer heat-map generation and many others.

In my previous blog, Power Your Edge AI Application with the Industry’s Most Powerful Arm MCUs, I discussed some of the key performance advantages of the powerful RA8 Series MCUs with the Cortex-M85 core and Helium that make them ideally suited for voice and vision AI applications. As discussed there, the availability of higher-performance MCUs, along with thin neural network models better suited to the resource-constrained MCUs used in endpoint devices, is enabling these sorts of edge AI applications.

In this blog, I will discuss a vision AI application built on the new RA8D1 graphics-enabled MCUs, featuring the same Cortex-M85 core and using Helium to accelerate the neural network. RA8D1 MCUs provide a unique combination of advanced graphics capabilities, sensor interfaces, large memory and the powerful Cortex-M85 core with Helium acceleration for vision AI neural networks, making them well suited for these vision AI applications.

Graphics and Vision AI Applications with RA8D1 MCUs

Renesas has successfully demonstrated the performance uplift with Helium in various AI/ML use cases, showing significant improvement over a Cortex-M7 MCU – more than 3.6x in some cases.

One such use case is a people detection application developed in collaboration with Plumerai, a leading provider of vision AI solutions. This camera-based AI solution has been ported and optimized for the Helium-enabled Arm Cortex-M85 core, successfully demonstrating both the performance as well as the graphics capabilities of the RA8D1 devices.

Accelerated with Helium, the application achieves a 3.6x performance uplift versus a Cortex-M7 core and a 13.6 fps frame rate, a strong result for an MCU without hardware acceleration. The demo platform captures live images from an OV7740 image-sensor-based camera at 640×480 resolution and presents detection results on an attached 800×480 LCD display. The software detects and tracks each person within the camera frame, even if partially occluded, and shows bounding boxes drawn around each detected person overlaid on the live camera display.

Figure 1: Renesas People Detection AI Demo Platform, showcased at Embedded World 2023

Plumerai’s people detection software uses a convolutional neural network with multiple layers, trained on over 32 million labeled images. The layers that account for the majority of the total latency are Helium-accelerated, such as the Conv2D and fully connected layers, as well as the depthwise convolution and transpose convolution layers.

The camera module provides images in YUV422 format, which are converted to RGB565 for display on the LCD screen. The 2D graphics engine integrated on the RA8D1 resizes and converts the RGB565 data to ABGR8888 at a resolution of 256×192 for input to the neural network. The software then converts the ABGR8888 format to the neural network model’s input format and runs the people-detection inference function. The graphics LCD controller and 2D drawing engine on the RA8D1 are used to render the camera input to the LCD screen, as well as to draw bounding boxes around detected people and present the frame rate. The people detection software uses roughly 1.2MB of flash and 320KB of SRAM, including the memory for the 256×192 ABGR8888 input image.
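
To make that pre-processing chain concrete, here is a host-side Python sketch of equivalent steps. On the RA8D1 the color conversion and resize are performed by the 2D graphics engine in hardware, the intermediate ABGR8888 step is skipped here for brevity, and the final normalization is an assumption, since the exact input format of Plumerai's model is not published.

import numpy as np

# Host-side sketch of the pre-processing chain described above. On the RA8D1 the
# color conversion and resize are done by the 2D graphics engine; the normalization
# step below is an assumption, not Plumerai's published input specification.

def rgb565_to_rgb888(frame565):
    """Expand an (H, W) uint16 RGB565 frame to an (H, W, 3) uint8 RGB888 image."""
    r = ((frame565 >> 11) & 0x1F).astype(np.uint8) << 3
    g = ((frame565 >> 5) & 0x3F).astype(np.uint8) << 2
    b = (frame565 & 0x1F).astype(np.uint8) << 3
    return np.stack([r, g, b], axis=-1)

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize, standing in for the MCU's hardware scaler."""
    in_h, in_w = img.shape[:2]
    ys = np.arange(out_h) * in_h // out_h
    xs = np.arange(out_w) * in_w // out_w
    return img[ys[:, None], xs]

camera_frame = np.zeros((480, 640), dtype=np.uint16)   # placeholder RGB565 frame
rgb = rgb565_to_rgb888(camera_frame)                   # 640x480, 3 channels
model_input = resize_nearest(rgb, 192, 256)            # 256x192 as quoted above
model_input = model_input.astype(np.float32) / 255.0   # assumed normalization
print(model_input.shape)                               # (192, 256, 3)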

Figure 2: People Detection AI application on the RA8D1 MCU

Benchmarking was done to compare the latency of Plumerai’s people detection solution as well as the same neural network running with TFMicro using Arm’s CMSIS-NN kernels. Additionally, for the Cortex-M85, the performance of both solutions with Helium (MVE) disabled was also benchmarked. This benchmark data shows pure inference performance and does not include latency for the graphics functions, such as image format conversions.

Figure 3: The Renesas people detection demo based on the RA8D1 demonstrates a performance uplift of 3.6x over the Cortex-M7 core

Figure 4: Inference performance of 13.6 fps @ 480 MHz using RA8D1 with Helium enabled

This application makes optimal use of all the resources available on the RA8D1:

  • High-performance 480 MHz processor
  • Helium for neural network acceleration
  • Large flash and SRAM for storage of model weights and input activations
  • Camera interface for capture of input images/video
  • Display interface to show the people detection results

Renesas has also demonstrated multi-modal voice and vision AI solutions based on the RA8D1 devices that integrate visual wake words and face detection and recognition with speaker identification. RA8D1 MCUs with Helium can significantly improve neural network performance without the need for any additional hardware acceleration, thus providing a low-cost, low-power option for implementing AI and machine learning use cases.


Getting Started with Large Language Models for Enterprise Solutions

Fri, 01/12/2024 - 11:06

ERIK POUNDS | Nvidia

Large language models (LLMs) are deep learning models with hundreds of billions of parameters, trained on Internet-scale datasets. LLMs can read, write, code, draw, and augment human creativity to improve productivity across industries and help solve the world’s toughest problems.

LLMs are used in a wide range of industries, from retail to healthcare, and for a wide range of tasks. They learn the language of protein sequences to generate new, viable compounds that can help scientists develop groundbreaking, life-saving vaccines. They help software programmers generate code and fix bugs based on natural language descriptions. And they provide productivity co-pilots so humans can do what they do best—create, question, and understand.

Effectively leveraging LLMs in enterprise applications and workflows requires understanding key topics such as model selection, customization, optimization, and deployment. This post explores the following enterprise LLM topics:

  • How organizations are using LLMs
  • Use, customize, or build an LLM?
  • Begin with foundation models
  • Build a custom language model
  • Connect an LLM to external data
  • Keep LLMs secure and on track
  • Optimize LLM inference in production
  • Get started using LLMs

Whether you are a data scientist looking to build custom models or a chief data officer exploring the potential of LLMs for your organization, read on for valuable insights and guidance.

How organizations are using LLMs

Figure 1. LLMs are used to generate content, summarize, translate, classify, answer questions, and much more

LLMs are used in a wide variety of applications across industries to efficiently recognize, summarize, translate, predict, and generate text and other forms of content based on knowledge gained from massive datasets. For example, companies are leveraging LLMs to develop chatbot-like interfaces that can support users with customer inquiries, provide personalized recommendations, and assist with internal knowledge management.

LLMs also have the potential to broaden the reach of AI across industries and enterprises and enable a new wave of research, creativity, and productivity. They can help generate complex solutions to challenging problems in fields such as healthcare and chemistry. LLMs are also used to create reimagined search engines, tutoring chatbots, composition tools, marketing materials, and more.

Collaboration between ServiceNow and NVIDIA will help drive new levels of automation to fuel productivity and maximize business impact. Generative AI use cases being explored include developing intelligent virtual assistants and agents to help answer user questions and resolve support requests, as well as using generative AI for automatic issue resolution, knowledge-base article generation, and chat summarization.

A consortium in Sweden is developing a state-of-the-art language model with NVIDIA NeMo Megatron and will make it available to any user in the Nordic region. The team aims to train an LLM with a whopping 175 billion parameters that can handle all sorts of language tasks in the Nordic languages of Swedish, Danish, Norwegian, and potentially Icelandic.

The project is seen as a strategic asset, a keystone of digital sovereignty in a world that speaks thousands of languages across nearly 200 countries. To learn more, see The King’s Swedish: AI Rewrites the Book in Scandinavia.

The leading mobile operator in South Korea, KT, has developed a billion-parameter LLM using the NVIDIA DGX SuperPOD platform and NVIDIA NeMo framework. NeMo is an end-to-end, cloud-native enterprise framework that provides prebuilt components for building, training, and running custom LLMs.

KT’s LLM has been used to improve the understanding of the company’s AI-powered speaker, GiGA Genie, which can control TVs, offer real-time traffic updates, and complete other home-assistance tasks based on voice commands.

Use, customize, or build an LLM?

Organizations can choose to use an existing LLM, customize a pretrained LLM, or build a custom LLM from scratch. Using an existing LLM provides a quick and cost-effective solution, while customizing a pretrained LLM enables organizations to tune the model for specific tasks and embed proprietary knowledge. Building an LLM from scratch offers the most flexibility but requires significant expertise and resources.

NeMo offers a choice of several customization techniques and is optimized for at-scale inference of models for language and image applications, with multi-GPU and multi-node configurations. For more details, see Unlocking the Power of Enterprise-Ready LLMs with NVIDIA NeMo.

NeMo makes generative AI model development easy, cost-effective, and fast for enterprises. It is available across all major clouds, including Google Cloud as part of their A3 instances powered by NVIDIA H100 Tensor Core GPUs to build, customize, and deploy LLMs at scale. To learn more, see Streamline Generative AI Development with NVIDIA NeMo on GPU-Accelerated Google Cloud.

To quickly try generative AI models such as Llama 2 directly from your browser with an easy-to-use interface, visit NVIDIA AI Playground.

Begin with foundation models

Foundation models are large AI models trained on enormous quantities of unlabeled data through self-supervised learning. Examples include Llama 2, GPT-3, and Stable Diffusion.

The models can handle a wide variety of tasks, such as image classification, natural language processing, and question-answering, with remarkable accuracy.

These foundation models are the starting point for building more specialized and sophisticated custom models. Organizations can customize foundation models using domain-specific labeled data to create more accurate and context-aware models for specific use cases.

Foundation models generate an enormous number of unique responses from a single prompt by generating a probability distribution over all items that could follow the input and then choosing the next output randomly from that distribution. The randomization is amplified by the model’s use of context. Each time the model generates a probability distribution, it considers the last generated item, which means each prediction impacts every prediction that follows.
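
The toy Python sketch below illustrates this sampling loop: a stand-in scoring function (not a real LLM) produces logits, softmax turns them into a probability distribution, and the next token is drawn at random and appended to the context so that it influences every later step. All names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def toy_next_token_logits(context: list[int], vocab_size: int) -> np.ndarray:
    """Stand-in for a real model: scores depend on the running context."""
    return rng.normal(size=vocab_size) + 0.1 * len(context)

def generate(prompt: list[int], steps: int = 10, vocab_size: int = 50) -> list[int]:
    context = list(prompt)
    for _ in range(steps):
        probs = softmax(toy_next_token_logits(context, vocab_size))
        next_token = rng.choice(vocab_size, p=probs)   # random draw, not argmax
        context.append(int(next_token))                # feeds back into the next step
    return context

print(generate([1, 2, 3]))
```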

NeMo supports NVIDIA-trained foundation models as well as community models such as Llama 2, Falcon LLM, and MPT. You can experience a variety of optimized community and NVIDIA-built foundation models directly from your browser for free on NVIDIA AI Playground. You can then customize the foundation model using your proprietary enterprise data. This results in a model that is an expert in your business and domain.

Build a custom language model

Enterprises will often need custom models to tailor ‌language processing capabilities to their specific use cases and domain knowledge. Custom LLMs enable a business to generate and understand text more efficiently and accurately within a certain industry or organizational context. They empower enterprises to create personalized solutions that align with their brand voice, optimize workflows, provide more precise insights, and deliver enhanced user experiences, ultimately driving a competitive edge in the market.

NVIDIA NeMo is a powerful framework that provides components for building and training custom LLMs on-premises, across all leading cloud service providers, or in NVIDIA DGX Cloud. It includes a suite of customization techniques from prompt learning to parameter-efficient fine-tuning, to reinforcement learning through human feedback (RLHF). NVIDIA also released a new, open customization technique called SteerLM that allows for tuning during inference.
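
As a rough illustration of the idea behind parameter-efficient fine-tuning (this is not NeMo's API), the NumPy sketch below shows a LoRA-style low-rank adapter: the pretrained weight matrix stays frozen while only two small matrices would be trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight matrix of a single layer (d_out x d_in).
W = rng.normal(size=(512, 512))

# Low-rank adapter: only A and B are trained, so the number of trainable
# parameters drops from 512*512 to 2*512*r.
r = 8
A = rng.normal(scale=0.01, size=(r, 512))   # (r x d_in)
B = np.zeros((512, r))                      # (d_out x r), zero-initialized

def adapted_forward(x: np.ndarray) -> np.ndarray:
    # Effective weight is W + B @ A; the base model output is unchanged
    # at initialization because B is zero.
    return x @ (W + B @ A).T

x = rng.normal(size=(4, 512))
print(adapted_forward(x).shape)   # (4, 512)
```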

When training an LLM, there is always the risk of it becoming “garbage in, garbage out.” A large percentage of the effort is acquiring and curating the data that will be used to train or customize the LLM.

NeMo Data Curator is a scalable data-curation tool that enables you to curate trillion-token multilingual datasets for pretraining LLMs. The tool allows you to preprocess and deduplicate datasets with exact or fuzzy deduplication, so you can ensure that models are trained on unique documents, potentially leading to greatly reduced training costs.
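
The sketch below is not the Data Curator API; it is a minimal, hypothetical Python example of the simplest form of this idea, exact deduplication by hashing lightly normalized text.

```python
import hashlib

def dedupe_exact(documents: list[str]) -> list[str]:
    """Drop exact duplicates by hashing lightly normalized text."""
    seen: set[str] = set()
    unique: list[str] = []
    for doc in documents:
        key = hashlib.sha256(" ".join(doc.lower().split()).encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

docs = ["Hello   world", "hello world", "Different text"]
print(dedupe_exact(docs))   # the two "hello world" variants collapse to one
```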

Connect an LLM to external data

Connecting an LLM to external enterprise data sources enhances its capabilities. This enables the LLM to perform more complex tasks and leverage data that has been created since it was last trained.

Retrieval Augmented Generation (RAG) is an architecture that provides an LLM with the ability to use current, curated, domain-specific data sources that are easy to add, delete, and update. With RAG, external data sources are processed into vectors (using an embedding model) and placed into a vector database for fast retrieval at inference time.
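
A minimal sketch of the retrieval step is shown below. The embedding function and documents are stand-ins, and a plain NumPy array plays the role of the vector database; a production RAG pipeline would use a real embedding model and vector store.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in for a real embedding model: a deterministic pseudo-embedding."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).normal(size=dim)

# Index time: embed the documents and keep the vectors (a real system would
# store them in a vector database for fast retrieval).
documents = [
    "Return policy: items can be returned within 30 days.",
    "Shipping: standard delivery takes 3-5 business days.",
    "Warranty: hardware is covered for two years.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 1) -> list[str]:
    """Cosine-similarity search over the stored document vectors."""
    q = embed(query)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return [documents[i] for i in np.argsort(-sims)[:k]]

# Inference time: the retrieved text is prepended to the prompt sent to the LLM.
query = "How long do I have to send something back?"
prompt = f"Answer using this context:\n{retrieve(query)}\n\nQuestion: {query}"
print(prompt)
```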

In addition to reducing computational and financial costs, RAG increases accuracy and enables more reliable and trustworthy AI-powered applications. Accelerating vector search is one of the hottest topics in the AI landscape due to its applications in LLMs and generative AI.

Keep LLMs on track and secure

To ensure an LLM’s behavior aligns with desired outcomes, it’s important to establish guidelines, monitor its performance, and customize as needed. This involves defining ethical boundaries, addressing biases in training data, and regularly evaluating the model’s outputs against predefined metrics, often in concert with a guardrails capability. For more information, see NVIDIA Enables Trustworthy, Safe, and Secure Large Language Model Conversational Systems.

To address this need, NVIDIA has developed NeMo Guardrails, an open-source toolkit that helps developers ensure their generative AI applications are accurate, appropriate, and safe. It provides a framework that works with all LLMs, including OpenAI’s ChatGPT, to make it easier for developers to build safe and trustworthy LLM conversational systems that leverage foundation models.
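
The sketch below is a generic illustration of the guardrails idea, screening both the user input and the model output against a policy before anything is returned. It is not NeMo Guardrails' configuration language, and the policy check and the llm_call parameter are placeholders.

```python
BLOCKED_TOPICS = ("medical advice", "legal advice")

def violates_policy(text: str) -> bool:
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_chat(user_message: str, llm_call) -> str:
    # Input rail: refuse before spending any model compute.
    if violates_policy(user_message):
        return "Sorry, I can't help with that topic."
    answer = llm_call(user_message)
    # Output rail: screen the model's response as well.
    if violates_policy(answer):
        return "Sorry, I can't share that response."
    return answer

# llm_call is a placeholder for any chat model client.
print(guarded_chat("Summarize our returns policy.", lambda m: f"Echo: {m}"))
```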

Keeping LLMs secure is of paramount importance for generative AI-powered applications. NVIDIA has also introduced accelerated Confidential Computing, a groundbreaking security feature that mitigates threats while providing access to the unprecedented acceleration of NVIDIA H100 Tensor Core GPUs for AI workloads. This feature ensures that sensitive data remains secure and protected, even during processing.

Optimize LLM inference in production

Optimizing LLM inference involves techniques such as model quantization, hardware acceleration, and efficient deployment strategies. Model quantization reduces the memory footprint of the model, while hardware acceleration leverages specialized hardware like GPUs for faster inference. Efficient deployment strategies ensure scalability and reliability in production environments.
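
As a simple illustration of the memory-footprint side of quantization (not TensorRT-LLM's FP8 path), the sketch below applies symmetric per-tensor int8 quantization to a weight matrix and reports the resulting size and reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)
weights_fp32 = rng.normal(scale=0.05, size=(1024, 1024)).astype(np.float32)

# Symmetric per-tensor int8 quantization: store int8 values plus one scale factor.
scale = np.abs(weights_fp32).max() / 127.0
weights_int8 = np.clip(np.round(weights_fp32 / scale), -127, 127).astype(np.int8)

# Dequantize on the fly at inference time.
weights_deq = weights_int8.astype(np.float32) * scale

print("memory (MB):", weights_fp32.nbytes / 1e6, "->", weights_int8.nbytes / 1e6)
print("max abs error:", float(np.abs(weights_fp32 - weights_deq).max()))
```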

NVIDIA TensorRT-LLM is an open-source software library that supercharges LLM inference on NVIDIA accelerated computing. It enables users to convert model weights into a new FP8 format and compile their models to take advantage of optimized FP8 kernels on NVIDIA H100 GPUs. TensorRT-LLM can accelerate inference performance by 4.6x compared with NVIDIA A100 GPUs. It provides a faster and more efficient way to run LLMs, making them more accessible and cost-effective.

These custom generative AI processes involve pulling together models, frameworks, toolkits, and more. Many of these tools are open source, requiring time and energy to maintain development projects. The process can become incredibly complex and time-consuming, especially when trying to collaborate and deploy across multiple environments and platforms.

NVIDIA AI Workbench helps simplify this process by providing a single platform for managing data, models, resources, and compute needs. This enables seamless collaboration and deployment for developers to create cost-effective, scalable generative AI models quickly.

NVIDIA and VMware are working together to transform the modern data center built on VMware Cloud Foundation and bring AI to every enterprise. Using the NVIDIA AI Enterprise suite and NVIDIA’s most advanced GPUs and data processing units (DPUs), VMware customers can securely run modern, accelerated workloads alongside existing enterprise applications on NVIDIA-Certified Systems.

Get started using LLMs

Getting started with LLMs requires weighing factors such as cost, effort, training data availability, and business objectives. In most circumstances, organizations should evaluate the trade-offs between using an existing model, customizing it with domain-specific knowledge, and building a custom model from scratch. It is important to choose tools and frameworks that align with specific use cases and technical requirements, including those listed below.

The Generative AI Knowledge Base Chatbot lab ‌shows you how to adapt an existing AI foundational model to accurately generate responses for your specific use case. This free lab provides hands-on experience with customizing a model using prompt learning, ingesting data into a vector database, and chaining all components to create a chatbot.

NVIDIA AI Enterprise, available on all major cloud and data center platforms, is a cloud-native suite of AI and data analytics software that provides over 50 frameworks, including the NeMo framework, pretrained models, and development tools optimized for accelerated GPU infrastructures. You can try this end-to-end, enterprise-ready software suite with a free 90-day trial.

NeMo is an end-to-end, cloud-native enterprise framework for developers to build, customize, and deploy generative AI models with billions of parameters. It is optimized for at-scale inference of models with multi-GPU and multi-node configurations. The framework makes generative AI model development easy, cost-effective, and fast for enterprises. Explore the NeMo tutorials to get started.

NVIDIA Training helps organizations train their workforce on the latest technology and bridge the skills gap by offering comprehensive technical hands-on workshops and courses. The LLM learning path developed by NVIDIA subject matter experts spans fundamental to advanced topics that are relevant to software engineering and IT operations teams. NVIDIA Training Advisors are available to help develop customized training plans and offer team pricing.

Summary

As enterprises race to keep pace with AI advancements, identifying the best approach for adopting LLMs is essential. Foundation models help jumpstart the development process. Using key tools and environments to efficiently process and store data and customize models can significantly accelerate productivity and advance business goals.

The post Getting Started with Large Language Models for Enterprise Solutions appeared first on ELE Times.

2024 Predictions in storage, technology, and the world, part 1: the AI hype is real!

Fri, 01/12/2024 - 09:47

JEREMY WERNER | Micron

Over the past 100 years, driven by the introduction of ever more connective technologies that enable richer communications and lower-latency information transfer, the world has grown closer than ever.

This increased connectivity has led to fantastic benefits for many people around the world, lifting people out of poverty, increasing information availability, revolutionizing business and education, connecting people with like-minded citizens of Earth no matter where they may be, and shining spotlights on injustices around the world that we can tackle as a human species, among myriad other benefits. But there have been downsides that are often lamented as we age and look back fondly on less connected times.

Our privacy has eroded as we are now traceable and trackable, from our phone locations to our online search history. Our ability to sustain concentration on tasks that require significant time and effort has diminished due to our always-on, always-reachable connectivity. And some of the worst human traits are amplified by social media and by misleading information that is difficult or impossible to distinguish from fact, often leading to hate, jealousy, greed, gluttony, and self-loathing.

These technologies have remade the world and the world’s economy through the introduction of new capabilities including mass production in a global interconnected supply chain, which is driving productivity gains. Now that the information revolution has transformed the world, we sit on the cusp of another great revolution as we enter the Age of Intelligence[1], undoubtedly greater than any we’ve seen in the history of humankind – built on the shoulders of the giant leaps that humans, as the world’s ultimate social and ingenious beings, have taken in the past.

Now, on to my first prediction.

Prediction 1: The AI hype is REAL and will change the world forever

Like all technologies as they first take off, questions abound about whether they are real or hype. Many technologies are hyped, only to flounder for years before becoming mainstream; others catch the momentum and take off, never looking in the rearview mirror; and some fade into the annals of history, a distant memory in nostalgia, the ever-common one-hit wonder.

Gartner writes about this in its famous Hype Cycle – and I think it’s a good way to look at where new technologies stand. One of the most common questions I get is, “Is the AI boom hype?” and my answer is, “Unequivocally not hype!” Now, it’s possible that the fine people (or algorithmic trading supercomputers these days) on Wall Street will fade the trade of AI companies as growth inevitably tames. But the impact that AI will have on our lives, on the future of the data center and personal devices, on the future of memory and storage technology, and on the growth rate of IT spending will be tremendous — and we are just at the very beginning of what is possible!

The introduction of ChatGPT and the other more than 100-billion-parameter large language models (LLMs) kicked off the generative AI revolution, although neural networks, deep learning and artificial intelligence (AI) have been in use for decades in fields such as image recognition and advertising recommendation engines. But something about the latest LLM AI capabilities makes them seem different than what came before – more capable, more intelligent, more thoughtful, more human? And these capabilities are advancing at an accelerating pace – especially as all the world’s largest companies race to monetize and productize the LLM-based applications that will change the world forever.

Let me provide a few examples of new near-term, medium-term, and long-term capabilities and applications that will reshape the world as we know it. In the process, they will reshape the need for faster, larger, more secure, and more capable memory, storage, networking, and compute devices, with a special focus on data creation, storage, and analytics from these new applications. Whether these technologies go mainstream today, tomorrow or in 20 years, the race to deploy them starts NOW and Micron is at the heart of all the innovation and ramping capabilities.

The basics: near-term capabilities guaranteed to explode in the next two to three years

Most of these technologies are applications that will run in the data center and be accessed remotely by consumers through a phone or PC, or run in the backbone of business applications to speed time to market for new product development, generate insights on how companies are performing to drive improvements, uncover areas of savings and productivity gains, and bring companies closer to their customers by enhancing their understanding of their customers’ desires and connecting them with the products that will interest them.

  • General generative AI – Want to create a new logo for your company, draw a funny picture for a friend, or express your ideas in art? Maybe write a blog or piece of marketing collateral, find or create a legal agreement template, brainstorm ideas for team building events, review the flow of your presentation and make suggestions to wow the audience – or even touch up your slides and presentation for you?

It’s all possible and it’s real, here and now, and the rollout into Office365 and Google Docs is happening, gated primarily by integrating these capabilities into applications, users learning how to use the new capabilities (that is, adoption), and the compute power on the backend supporting these new capabilities. (Note that rolling out that compute power will benefit memory and storage demand.)

  • Video chat monitoring – Need real-time language translation for cross-border meetings with team members fluent in different languages? Tired of taking meeting minutes and want an automated summary — including key points, attendees, and action items — to be saved to the location of your choice and sent out after your meeting? These are just a couple examples of the capabilities in trial or being developed already.
  • Code generation – The average compensation of a software engineer in the U.S. is about $155,000[2]. Code generation empowers entrepreneurs and creators by giving them the ability to program without needing to know how to code. It can also transform an experienced coder or software engineer into a super engineer, enhancing their productivity by an average of 55%, according to one study.[3]

At Micron, we’ve been deploying early prototypes of AI coding tools for our software engineers, from IT and product development to test and validation. Even these early tools, which have not been trained specifically on our data, are showing huge promise for driving software developer productivity. One simple example that most software programmers will appreciate: the AI software automatically generated and inserted highly accurate comments for the code we were writing. This simple task saved our engineers up to 20% of their time while enhancing the consistency, quality, and readability of our code for others assigned to or joining projects.

  • Entrepreneurship and business partners – Have a new idea but don’t know where to get started? Your favorite generative AI assistant has your back. Tell ChatGPT or other generative AI tools you want to start a business together and it’s your new business partner! Explain your idea and ask for a business plan, a roadmap and a step-by-step guide on how to realize your dream. You’ll be amazed at what an enthusiastic and capable business partner you’ve found. It’s not perfect but is any co-worker?

Medium-term technologies that will disrupt trillion-dollar industries in the next three to seven years

Most of these technologies require some complex problems to be solved, including government regulations for safety reasons or new physical capabilities to be developed. These dependencies will inevitably delay the introduction of what is possible as they are added into the existing physical world built for humans and their imperfections.

  • Autonomous driving – Remember the hype this new technology got around 2021? Uber and Lyft stock soared on the belief that their platforms would provide the robo-taxi fleet for the rapid transition into autonomous vehicles. But indeed Level 5 (fully autonomous) cars have fallen somewhat into the trough of disillusionment. The reasons for the delay are many – underestimation of the complexity and computing power required to make split second decisions, the variance of the driving, road and weather conditions, the complexity of the moral and ethical decision-making, and societal and regulatory questions such as who is liable in the event of an accident or how you prioritize saving the lives of passengers or pedestrians when no perfect decision exists. Accidents happen, right? But we will figure these issues out, and eventually most vehicles on the road will be capable of full autonomy. And this will have an enormous impact on the amount of memory and storage in a car as the average L5 vehicle in 2030 will use approximately 200 times the amount of NAND used by a typical L2+/L3 vehicle today. Multiply that by approximately 122 million[4] vehicles in 2030 and you see an increase in demand for NAND in automotive applications reliant on AI of a whopping 500 exabytes! That’s over half the amount of NAND expected to be produced in 2024.
  • Healthcare – Artificial intelligence is transforming healthcare in many ways, including radiology scans and cancer detection. AI algorithms can analyze images from MRI scans to predict the presence of an IDH1 gene mutation in brain tumors or find prostate cancer when it’s present and dismiss anything that may be mistaken for cancer[4]. Researchers are using machine learning to build tools in the realm of cancer detection and diagnosing, potentially catching tumors or lesions that doctors could miss[5]. AI is also being used to help detect lung cancer tumors in computed tomography scans, with the AI deep learning tool outperforming radiologists in detecting lung cancer[6]. And AI will bring the best practices and procedures to patients around the world, especially in locations lacking the quantity and quality of top doctors, which is likely to massively improve outcomes.
  • Personal AI assistant – Movies and books have been written — from Awaken Online[7] to Her[8] — romanticizing the idea of a personal AI assistant always with you, capable of truly understanding your desires, preferences, and needs. Imagine being able to give vague instructions like find me something to eat, plan my vacation for me, create my to-do list, or help me choose an outfit today. These are all within the realm of possibility but require privacy and performance that is likely best delivered locally instead of from the cloud. The training and retraining of these models may happen on more powerful servers, but the inferencing/running of the model and your private data is likely to be resident on your phone or PC of the future. This means massive increases in local storage (NAND/SSD) and memory (DRAM) in future personal devices.
  • Video training – How about a virtual avatar of your boss, trained on their capabilities and thought processes, to review your work and provide feedback, or give advice that is close to what they would actually deliver, or a video of your favorite leader or scientist or celebrity who could come to a school and interact with the students in an authentic and thoughtful manner? Training on video and the compute power necessary to scale hyperrealistic advanced digital AI avatars are costly endeavors compared to still image or text generation, but they’re technologically viable once costs come down and investment scales into the next wave of generative models.
  • Policing and law enforcement – Artificial intelligence has the potential to transform the field of policing and law enforcement, especially in video surveillance. AI can help detect and prevent crimes, identify and track suspects, and provide evidence and insights for investigations. However, the use of AI also raises ethical and social issues, such as the balance between government monitoring and individual privacy rights, the risk of government tyranny and abuse of power, and the impact of AI on human dignity and civil liberties. Different countries have different approaches and regulations on how to use AI for video surveillance, reflecting their cultural and political values. For example, the U.S. prioritizes individual privacy and limits the use of facial recognition and other biometric technologies by law enforcement agencies. On the other hand, Britain and China allow more state surveillance and use AI to monitor public spaces, traffic, and social media for crime prevention and social control. These contrasting examples show that society must weigh the benefits and risks of AI in video surveillance and decide how to regulate and oversee its use in a democratic and transparent manner. So, while the technology exists for much of this use today, the sticky ethical questions and subsequent regulations are likely to take longer before they fully disrupt this industry.

Longer-term technologies that will create multitrillion-dollar industries in the next 10-plus years

  • Home-assistant robotics – The aging population in the United States is facing a number of challenges when it comes to eldercare. The need for caregivers will increase significantly as the population ages. However, the supply of eldercare is not keeping up with the demand. The shortage of workers in the eldercare industry is a nationwide dilemma, with millions of older adults unable to access the affordable care and services that they so desperately need. According to the Bureau of Labor Statistics, the employment of home health and personal care aides is projected to grow 22% from 2022 to 2032, much faster than the average for all occupations. About 684,600 openings for home health and personal care aides are projected each year, on average, over the decade[9]. Meanwhile, according to the CDC, 66% of U.S. households (86.9 million homes) own a pet, with dogs being the most popular pet in the U.S. (65.1 million U.S. households own a dog), followed by cats (46.5 million households)[10]. In 2022, Americans spent $5.8 billion on pet care services, including pet sitting, dog walking, grooming, and boarding.[11]

And over one million home burglaries occur annually in the U.S.; that’s one every 25.7 seconds![12] Home-assistant robots with AI embedded into their capabilities could help seniors or disabled people maintain their independence, protect our homes when we are out, or take care of our pets when we travel, as well as assisting in myriad other helpful ways such as cooking or cleaning. Eventually Isaac Asimov’s vision of intelligent and helpful robots is likely to become a reality.

  • Battle bots and revolutionized warfare – Artificial intelligence is likely to transform modern warfare in unprecedented ways, creating new opportunities and challenges for humanity. AI could be a means to peace, discouraging warfare by enhancing deterrence, reducing casualties, and enabling humanitarian interventions. However, AI could also be a dangerous tool in the hands of an evil dictator, increasing the scale, speed, and unpredictability of violence, lowering the threshold for conflict, and undermining human rights and accountability. AI could enable the development and deployment of new weapons and systems — such as drones, microscopic hordes, and robots — that could autonomously operate on the battlefield, with or without human supervision. These technologies could have significant implications for the ethics and laws of war, as well as the security and stability of the world order. Therefore, it is imperative that governments around the world navigate the ethical implications of AI in warfare, cooperate to establish norms and regulations that ensure the responsible and peaceful use of AI, and (hopefully) drive our planet to peace and shared prosperity.
  • The new hire – Why work when you could get your AI robot to go to work for you? At some point in the future, we have the opportunity for more leisure time and socialization as the mundane tasks in life can be managed by superintelligent robots – as individuals or hive beings. How will society choose to share this wealth among its citizens? Will we allow only a few who invent the technology to benefit or will all humankind have their quality of life lifted? What will we do with all the time we find on our hands, and what does it mean for the values that many of us hold in high esteem like working hard and learning about new things if we won’t have as broad an opportunity to apply them? Lots of questions with many ethical and societal challenges that must be worked out and reimagined from how the world operates today. We may be worried about AI taking our jobs, but maybe we can move to a three- or four-day workweek and spend more time enjoying the fruits of our labor through the help of our trusty AI assistants!

The post 2024 Predictions in storage, technology, and the world, part 1: the AI hype is real! appeared first on ELE Times.

BoardSurfers: Reusing AWR Microwave Office RF Blocks in Allegro PCB Designs

Fri, 01/12/2024 - 08:59

While RF circuits might appear complex at first glance, with the right tools, you can incorporate RF designs into your PCB projects effortlessly and confidently.

This blog post will delve into the AWR Microwave Office to Allegro RF design flow. The foundation of this design flow is the Cadence Unified Library, which is used to exchange data between the AWR Microwave Office and Allegro PCB Design applications. The Cadence Unified Library contains all the information necessary to design an RF schematic and layout in AWR Microwave Office, including the PCB technology, manufacturable components, and vias. An AWR Process Design Kit (PDK) is generated from the Cadence Unified Library and used by AWR Microwave Office to capture the RF schematic and layout. The RF design is then exported from AWR Microwave Office as a single container (.asc) file.

Let’s go through the design flow tasks to bring an RF design created in AWR Microwave Office into Allegro System Capture and Allegro PCB Editor and reuse these RF designs.

Importing RF Design into Allegro System Capture

To import an RF design into Allegro System Capture, do the following:

  • Choose File – Import – MWO RF Design.
  • In the file browser that opens, browse to the location of the .asc file exported from AWR Microwave Office.

The RF design is imported as a block that can be used to create schematic blocks in Allegro System Capture.

To mark the block as a reuse block, do the following:

  • Select the RF block, right-click, and choose Export to Reuse Layout.
  • In the Options form, set the input layout field to the path of the board file used to generate the Cadence Unified Library.

After the export process is completed, a new board file is generated with connectivity information.

Importing RF Design into Allegro PCB Editor

To import the RF design into the layout design, perform the following steps in Allegro PCB Editor:

  • Open the board created in the previous step.
  • Choose File – Import – Cadence Unified Library.

After the import process is completed, the RF layout is placed in the design canvas.

Creating RF Design Module

Saving the RF layout as a module helps you create multiple PCB designs with the same RF logic. To create a module in Allegro PCB Editor, do the following:

  • Choose the Tools – Create Module menu command.
  • Select the entire RF layout intended for inclusion in the module by drawing a rectangular boundary, then click anywhere on the design canvas.
  • Specify a name in the Save As file browser and click Save to save the module (.mdd) file.

Reusing RF Blocks in Existing PCB Designs

If marked for physical and logical reuse, the RF block can be instantiated as a reused RF block in an existing schematic design. When this schematic design is transferred to Allegro PCB Editor, the RF modules can be reused in a larger PCB. To instantiate the RF blocks in an existing schematic project, perform the following:

  • Right-click the RF block name in Allegro System Capture and choose Place as Schematic Block.

The packaging options appear when you place the block.

  • Select the Physical Reuse Block check box. This step is essential to link the schematic to the reuse RF module.
  • Repeat the above steps to place multiple instances of the RF Block.
  • Complete the schematic design.
  • Use the Export to Layout option to complete the packaging process.

Conclusion

The tightly integrated AWR Microwave Office-Allegro PCB solution is a step ahead of traditional flows in ensuring first-time-right verification and manufacturing of RF modules in the context of a real PCB. The key value lies in a shift-left approach, where the RF section is designed using real manufacturing parts and PCB technology, eliminating the recapture and verification of the RF block in the later stages of the PCB design process.

The post BoardSurfers: Reusing AWR Microwave Office RF Blocks in Allegro PCB Designs appeared first on ELE Times.

Advanced motor control systems improve motor control performance

Fri, 01/12/2024 - 08:46

Courtesy: Arrow Electronics

Electric motors are widely used in various industrial, automotive, and commercial applications. Motors are controlled by drivers, which regulate their torque, speed, and position by altering the input power. High-performance motor drivers can enhance efficiency and enable faster and more precise control. This article introduces modern motor control system architectures and various motor control solutions offered by ADI.

A modern intelligent motor control system with a multi-chip architecture

With the advancement of technology, motor control systems are evolving toward greater intelligence and efficiency. Advanced motor control systems integrate control algorithms, industrial networks, and user interfaces, and thus require more processing power to execute all tasks in real time. Modern motor control systems typically employ a multi-chip architecture: a digital signal processor (DSP) for motor control algorithms, a field-programmable gate array (FPGA) for high-speed I/O and networking protocols, and a microprocessor for executive control.

With the emergence of System-on-chip (SoC) devices, such as the Xilinx Zynq All Programmable SoC, which combines the flexibility of a CPU with the processing power of an FPGA, designers are finally able to consolidate motor control functions and other processing tasks within a single device. Control algorithms, networking, and other processing-intensive tasks are offloaded to the programmable logic, while supervisory control, system monitoring and diagnostics, user interfaces, and debugging are handled by the processing unit. The programmable logic can include multiple parallel working control cores to achieve multi-axis machines or multiple control systems.

In recent years, driven by modeling and simulation tools like MathWorks Simulink, model-based design has evolved into a complete design workflow, from model creation to implementation. Model-based design changes the way engineers and scientists work, shifting design tasks from the lab and the field to the desktop. Now, the entire system, including the plant and controllers, can be modeled, allowing engineers to fine-tune controller behavior before deploying it in the field. This can reduce the risk of damage, accelerate system integration, and reduce dependence on equipment availability. Once the control model is completed, the Simulink environment can automatically convert it into C and HDL code that is run by the control system, saving time and avoiding manual coding errors.

A complete development environment that enables higher motor control performance leverages the Xilinx Zynq SoC for controller implementation, MathWorks Simulink for model-based design and automatic code generation, and ADI’s Intelligent Drives Kit for rapid prototyping of drive systems.

An advanced motor control system comprehensively manages control, communication, and user interface tasks

An advanced motor control system must comprehensively handle control, communication, and user interface tasks, each of which has different processing bandwidth requirements and real-time constraints. To achieve such a control system, the chosen hardware platform must be robust and scalable to accommodate future system improvements and expansions. The Zynq All Programmable SoC, which integrates a high-performance processing system with programmable logic, offers exceptional parallel processing capabilities, real-time performance, fast computation, and flexible connectivity. This SoC includes two Xilinx analog-to-digital converters (XADC) for monitoring the system or external analog sensors.

Simulink is a block diagram environment that supports multi-domain simulation and model-based design, making it ideal for simulating systems with both control algorithms and plant models. Motor control algorithms adjust parameters such as speed, torque, and others for precise positioning and other purposes. Evaluating control algorithms through simulation is an efficient way to determine if the motor control design is suitable, reducing the time and cost of expensive hardware testing once suitability is determined.

Choosing the right hardware for prototyping is a significant step in the design process. The ADI Intelligent Drives Kit facilitates rapid prototyping. It supports rapid and efficient prototyping for high-performance motor control and dual-channel Gigabit Ethernet industrial networking connectivity.

The ADI Intelligent Drives Kit includes a set of Simulink controller models, the complete Xilinx Vivado framework, and the ADI Linux infrastructure, which streamline all steps needed for designing a motor control system, from simulation to prototyping, and eventual implementation in production systems.

The Linux software and HDL infrastructure provided by ADI for the Intelligent Drives Kit, together with tools from MathWorks and Xilinx, are well-suited for prototyping motor control applications. They also include production-ready components that can be integrated into the final control system, reducing the time and cost required from concept to production.

Modulators and differential amplifiers to support motor control applications

ADI offers a range of modulators, differential amplifiers, instrumentation amplifiers, and operational amplifiers solutions for motor control applications.

The AD7401 is a second-order sigma-delta (Σ-Δ) modulator that utilizes ADI’s on-chip digital isolator technology, providing a high-speed 1-bit data stream from an analog input signal. The AD7401 is powered with a 5V supply and can accept differential signals in the range of ±200 mV (±320 mV full-scale). The analog modulator continuously samples the analog input signal, eliminating the need for an external sample-and-hold circuitry. The input information is encoded in the output data stream, which can achieve a data rate of up to 20 MHz. The device features a serial I/O interface and can operate on either a 5V or 3V supply (VDD2).

The digital isolation of the serial interface is achieved by integrating high-speed CMOS technology with monolithic air core transformers, providing superior performance compared to traditional optocouplers and other components. The device includes an on-chip reference voltage and is also available as the AD7400 with an internal clock. The AD7401 is suitable for applications in AC motor control, data acquisition systems, and as an alternative to ADCs combined with opto-isolators.
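
In a typical system, the AD7401's 1-bit output stream is decimated by a digital filter (often a sinc3 filter implemented in an FPGA or MCU peripheral) to recover multi-bit samples. The Python sketch below illustrates the principle with a plain boxcar average over the ones density; it is a conceptual illustration under that assumption, not the filter recommended in the device documentation.

```python
import numpy as np

def decode_bitstream(bits: np.ndarray, full_scale_mv: float = 320.0,
                     decimation: int = 256) -> np.ndarray:
    """Recover approximate analog values from a sigma-delta 1-bit stream.

    The ones density of the stream tracks the input: all ones ~ +full scale,
    all zeros ~ -full scale. A plain boxcar average is used here for clarity;
    real designs typically use a sinc3 decimation filter.
    """
    usable = len(bits) - len(bits) % decimation
    blocks = bits[:usable].reshape(-1, decimation)
    ones_density = blocks.mean(axis=1)              # 0.0 .. 1.0
    return (2.0 * ones_density - 1.0) * full_scale_mv

# Example: a stream that is 60% ones corresponds to roughly +64 mV at 320 mV full scale.
stream = (np.random.default_rng(0).random(4096) < 0.6).astype(np.uint8)
print(decode_bitstream(stream)[:4])
```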

The AD8207 is a single-supply differential amplifier designed for amplifying large differential voltages in the presence of large common-mode voltages. It operates on a 3.3V to 5V single supply and features an input common-mode voltage range from -4V to +65V when using a 5V supply. The AD8207 comes in an 8-lead SOIC package and is ideal for applications like electromagnetic valve and motor control where large input PWM common-mode voltages are common.

The AD8207 exhibits excellent DC performance with low drift. Its offset drift is typically less than 500 nV/°C, and its gain drift is typically less than 10 ppm/°C. It is well suited for bidirectional current sensing applications and features two reference pins, V1 and V2, which allow users to easily offset the device’s output to any voltage within the supply voltage range. Connecting V1 to V+ and V2 to the GND pin sets the output to half-scale. Grounding both reference pins provides a unipolar output starting near ground. Connecting both reference pins to V+ provides a unipolar output starting near the V+ voltage. Applying an external low-impedance voltage to V1 and V2 allows for other output offsets.
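
As a worked illustration of how the reference pins set the operating point, the sketch below models the output as gain times the differential input plus the average of V1 and V2, which is consistent with the half-scale and unipolar configurations described above. The gain value of 20 V/V is an assumption for this example; consult the AD8207 datasheet for the actual fixed gain and reference behavior.

```python
def ad8207_output(v_diff_mv: float, v1: float, v2: float, gain: float = 20.0) -> float:
    """Approximate output of a bidirectional current-sense difference amplifier.

    The output offset is modeled as the average of the two reference pins, so
    V1 = V+ and V2 = GND centers the output at half-scale for bidirectional
    sensing. gain=20 is an assumed value; check the AD8207 datasheet.
    """
    v_offset = (v1 + v2) / 2.0
    return gain * (v_diff_mv / 1000.0) + v_offset

# Half-scale configuration on a 5 V supply: V1 = 5 V, V2 = 0 V -> 2.5 V offset.
print(ad8207_output(+10.0, 5.0, 0.0))   # +10 mV shunt drop -> ~2.7 V
print(ad8207_output(-10.0, 5.0, 0.0))   # -10 mV (reverse current) -> ~2.3 V
```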

Low-noise and low-distortion instrumentation amplifiers and operational amplifiers

The AD8251 is a digitally programmable-gain instrumentation amplifier with features including GΩ-level input impedance, low output noise, and low distortion. It is suitable for interfacing with sensors and driving high-speed analog-to-digital converters (ADCs). It has a 10 MHz bandwidth, -110 dB total harmonic distortion (THD), and a fast settling time of 785 ns to 0.001% accuracy (maximum). Guaranteed offset drift and gain drift are 1.8 µV/°C and 10 ppm/°C (G = 8), respectively.

In addition to its wide input common-mode voltage range, the device has a high common-mode rejection capability of 80 dB (G = 1, DC to 50 kHz). The combination of precision DC performance and high-speed capabilities makes the AD8251 an excellent choice for data acquisition applications. Moreover, this monolithic solution simplifies design and manufacturing and enhances the performance of test and measurement instrumentation through tightly matched internal resistors and amplifiers.

The AD8251 user interface includes a parallel port through which users can set the gain in two different ways. One method is to use the WR input to latch a 2-bit word sent over the bus. The other is to use the transparent gain mode, where the gain is determined by the logic-level states at the gain port.
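
Purely as an illustration, the sketch below drives the gain pins through hypothetical GPIO helpers; the A1/A0-to-gain mapping and the helper function names are assumptions for the example, not taken from the datasheet.

```python
# Hypothetical GPIO helpers (set_pin, pulse_low) stand in for whatever MCU or
# test-equipment API actually drives the A0, A1, and WR lines.
GAIN_CODES = {1: (0, 0), 2: (0, 1), 4: (1, 0), 8: (1, 1)}  # assumed A1/A0 mapping

def set_pga_gain(gain: int, set_pin, pulse_low) -> None:
    """Latch a gain code into a digitally programmable gain amplifier.

    In latched mode the 2-bit word on A1/A0 is captured on a WR pulse; in
    transparent mode WR is held low and the pins set the gain directly.
    """
    a1, a0 = GAIN_CODES[gain]
    set_pin("A1", a1)
    set_pin("A0", a0)
    pulse_low("WR")          # latch the new gain code

# Example with print-based stand-ins for real GPIO calls.
set_pga_gain(8, lambda p, v: print(f"{p}={v}"), lambda p: print(f"pulse {p}"))
```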

The AD8251 is available in a 10-lead MSOP package and is rated over the -40°C to +85°C temperature range. It is well-suited for applications with strict size and packaging density requirements, including data acquisition, biomedical analysis, and testing and measurement.

The AD8646 is a 24 MHz, rail-to-rail, dual-channel operational amplifier. The AD8647 and AD8648 are dual-channel and quad-channel, rail-to-rail input and output, single-supply amplifiers with features such as low input offset voltage, wide signal bandwidth, and low input voltage and current noise. The AD8647 also features low-power shutdown.

The AD8646 series combines a 24 MHz bandwidth with low offset, low noise, and extremely low input bias current characteristics, making these amplifiers suitable for a variety of applications. Devices such as filters, integrators, photodiode amplifiers, and high-impedance sensors can benefit from this combination of characteristics. The wide bandwidth and low distortion characteristics are beneficial for AC applications. The high output drive capability of AD8646/AD8647/AD8648 makes them ideal choices for driving audio line drivers and other low-impedance applications, with AD8646 and AD8648 suitable for automotive applications.

The AD8646 series features rail-to-rail input and output swing capabilities, enabling design engineers to buffer CMOS ADCs, DACs, ASICs, and other wide-output swing devices in single-supply systems.

The ADA4084-2 is a dual, 30 V, low-noise, rail-to-rail I/O, low-power operational amplifier, offered alongside the single-channel ADA4084-1 and quad-channel ADA4084-4. All are rated over the -40°C to +125°C industrial temperature range. The single-channel ADA4084-1 comes in 5-lead SOT-23 and 8-lead SOIC packages; the dual-channel ADA4084-2 is available in 8-lead SOIC, 8-lead MSOP, and 8-lead LFCSP packages; and the ADA4084-4 is offered in 14-lead TSSOP and 16-lead LFCSP packages.

The ADA4084-2 supports rail-to-rail input/output with low power consumption of 0.625 mA per amplifier (±15 V, typical), a gain-bandwidth product of 15.9 MHz (AV = 100, typical), a unity-gain crossover frequency of 9.9 MHz (typical), and a -3 dB closed-loop bandwidth of 13.9 MHz (±15 V, typical), while providing a low offset voltage of 100 μV (SOIC, maximum), unity-gain stability, a high slew rate of 4.6 V/µs (typical), and low noise of 3.9 nV/√Hz (1 kHz, typical).

Conclusion

Modern motor control systems, combined with tools and platforms from MathWorks, Xilinx, and ADI, can help achieve more efficient and precise motor control solutions. By integrating MathWorks’ model-based design and code generation tools with the powerful Xilinx Zynq SoC and ADI’s isolation, power, signal conditioning, and measurement solutions, the design, validation, testing, and implementation of motor drive systems can be more efficient than ever before, improving motor control performance and shortening time to market. ADI’s Intelligent Drives Kit provides an excellent prototyping environment to expedite system evaluation and help motor control projects get started quickly. Interested customers are encouraged to learn more.

The post Advanced motor control systems improve motor control performance appeared first on ELE Times.

3 Common Challenges Stopping Your Production Line

Fri, 01/12/2024 - 08:23

The efficiency of production lines is crucial for any successful hardware product development. However, several common challenges can significantly derail these processes. This article examines major operational efficiency issues and explores how manual, disjointed workflows, outdated documentation, and a lack of transparent design decisions can adversely affect manufacturing. Do you face these problems, too? Let’s find out!

Modern Design: The Era of Accelerated Product Development

Before focusing on the challenges mentioned above, let’s first look at a few industry trends and how hardware products are being developed to understand the topic’s complexity better.

Firstly, you can observe an undeniable surge in the intelligence of devices. Modern hardware is not just about physical components; it’s about embedding sophisticated intelligence into every machine. This evolution demands technical prowess and a strategic approach to design and development.

Secondly, the product development timelines have sped up. Remember the 1980s, when launching a new car model took 54 to 60 months? Fast forward to the 2020s, and this timeframe has dramatically shrunk to just 18 to 22 months, sometimes even less. This acceleration is dictated by a necessity to stay competitive and calls for an agile development process where multiple workstreams progress in parallel, demanding rapid iteration and tight collaboration across various engineering disciplines and business functions. The key to success here lies in using simulation and digitization to address issues before they manifest in the physical product.

However, something prevents hardware development teams from responding to these trends, namely the data and technology gap in electronics development. Even with Product Data Management (PDM) systems or Product Lifecycle Management (PLM) tools, discrepancies persist between software and mechanical domains. While tools like Altium Designer facilitate schematic and layout capture, the rest of the process often relies on inefficient, manual methods like PDFs, emails, and paper printouts. This disjointed approach leads to outdated component libraries, misaligned software-hardware integration, and delayed manufacturers’ involvement in the process, resulting in designs that may not be production-ready.

This disconnection extends to procurement, which, at the end of the design process, is often left coping with incomplete parts lists only to find that components are unavailable or unaffordable. Mechanical engineers face hours of manual file exchanges, leading to fit and enclosure issues, while engineering managers, product managers, and system architects operate with limited visibility. This fragmented approach is costly and inefficient, underscoring the urgent need for a cohesive digital infrastructure in electronics development.

3 Core Challenges Affecting Operational Efficiency

As we explore the world of manufacturing and product development, it’s crucial to address three core challenges that significantly impact operational efficiency:

  • Time
  • Quality
  • Risk

Time: The Race Against the Clock

Our current workflows often suffer from being manual and siloed. Vital information becomes trapped within individual departments, lost in fragmented toolsets and local files. Fragmentation and disjointed communication channels make it challenging to decipher design intents and manage data efficiently. It’s like trying to piece together a complex puzzle without having all the pieces in hand.

This situation often leads to inefficient handling of critical design information, such as component lead times and end-of-life notices, which are essential for timely and successful product launches. We’ve all experienced how prolonged processes can hinder new releases and negatively impact our time-to-market. Such delays mean you’re at risk of losing your competitive edge. So, how do you turn these challenges into a smooth workflow and transform the way you handle time from a potential blocker into a strategic advantage?

The answer lies in enhancing connectivity across the processes. Start by implementing cross-functional collaboration to enable a free flow of information between departments. This approach helps break down data silos, ensuring everyone works with the latest data, thereby minimizing rework and fostering iterative improvements.

Next, shift your focus to efficient component selection. By putting the right systems in place, you can manage component information effectively and be sure every part of your design is available, compliant, and optimized for specific needs.

Finally, enhance your workflow visibility and management. When you can see the entire landscape of your project, you can collaborate more effectively, make informed decisions, and manage your processes with precision.

Quality: The Cornerstone of Customer Satisfaction

Quality is the foundation of customer trust and satisfaction. Yet, despite our best efforts, defects and quality issues can slip through, jeopardizing both the product and your reputation. Why does this happen? Because most of your documentation is static, it often lacks context and is siloed from the design data it supports. This can lead to misinterpretation and a reliance on outdated information–a recipe for errors that only become apparent after production, resulting in waste and rework.

A typical day in a board-mounting department reveals several issues. Determining the quality of an electric board from its image alone is challenging without additional context. To make an informed decision, you need access to design information, part lists, ordering data, datasheets, identification of designators, analysis of nets, and test results. However, this information often resides in disparate systems, necessitating time-consuming searches and interpretation. This process, known as a ‘media break,’ is evident in nearly every stage of the board mounting assembly line, yet it often goes unnoticed.

The key to overcoming this challenge lies in leveraging the background provided by your design data, transitioning to digital documentation, and automating its management. Doing so ensures that your documents are always up-to-date and offer the context for your designs. It’s not just about having the correct data; it’s about understanding it within the framework of your entire design.

You can also introduce interactive data validation and verification processes. These systems reduce your reliance on human-based checks, which, while important, are prone to error. With automated checks, you can catch potential issues before they escalate. For example, you verify a design before it enters the reflow oven rather than after a flawed product has been fully assembled. This proactive strategy ensures quality is embedded in every stage of your design and manufacturing process.

Integrating advanced technologies like augmented microscopy suggests further improvements in PCB manufacturing. This leap forward promises to enhance quality control by optimizing performance, accuracy, quality, and consistency while reducing operational costs.

Risk: From Reactive to Proactive

Lastly, let’s look at compliance. The challenges we face here are multifaceted. You need to prove accountability in every aspect of your design and manufacturing, which requires a deep understanding of the impact of design changes the ‘where’ and the ‘scope.’ Without this, you risk the integrity of your products and the trust of your clients.

A lack of transparency and predictability in your operations hinders your project management and decision-making. If the ‘why’ behind your design decisions goes undocumented, it leads to confusion and potential non-compliance, the consequences of which can be severe, ranging from penalties to, in the worst cases, businesses having to shut their doors.

The solution? Establishing a system of digital traceability. Having a transparent system for documenting design decisions means you have a clear record that supports your rationale and ensures adherence to standards, giving you an explicit audit trail from conception to production and a clear understanding of how every design decision influences the final product.

Implementing automated verification can help you track your project’s progress, solidify your compliance framework, anticipate risks, and make informed decisions. This way, you transform risk management from a reactive to a proactive strategy, staying in control even in the face of uncertainties. Integrating your validation processes with compliance measures makes ‘where used’ visibility and risk management a part of the design journey, not just afterthoughts.

LENA WEGLARZA | Altium

The post 3 Common Challenges Stopping Your Production Line appeared first on ELE Times.

Anritsu Collaborates with ASUS to Validate IEEE 802.11be (Wi-Fi 7) 320 MHz RF Performance Testing

Thu, 01/11/2024 - 12:39

Wireless Connectivity Test Set MT8862A Enables Flexible and Fast Advanced RF Measurement for Wi-Fi 7 320 MHz

Anritsu and ASUS have announced a partnership to validate performance testing for the latest wireless communications standard, IEEE 802.11be (Wi-Fi 7), at 320 MHz. The series of tests used the Anritsu Wireless Connectivity Test Set (WLAN Tester) MT8862A in Network Mode with the ASUS ROG Phone 8 series smartphones.

The IEEE 802.11be standard incorporates innovative technologies, including a 320 MHz bandwidth, 4096 QAM modulation and Multiple RUs, which require comprehensive evaluation of RF performance. Anritsu's Wireless Connectivity Test Set (WLAN Tester) MT8862A is designed to measure the TRx RF performance of IEEE 802.11a/b/g/n/ac/ax/be WLAN devices across the 2.4 GHz, 5 GHz and 6 GHz bands. It supports performance evaluation as defined by the IEEE 802.11 standard and Over-The-Air (OTA) performance tests according to specifications defined by the Cellular Telecommunications and Internet Association (CTIA). With its Network Mode and Direct Mode, it offers flexible testing of the RF TRx characteristics of WLAN devices (such as Tx power, modulation accuracy, and Rx sensitivity), tailored to match the measurement environment.
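For context on what 'modulation accuracy' means in such TRx tests, error vector magnitude (EVM) compares the measured constellation points against the ideal reference symbols and reports the average deviation. The sketch below is a generic illustration of that calculation, not the MT8862A's measurement algorithm.

```python
import numpy as np

# Generic EVM illustration (not the MT8862A's algorithm): compare measured
# constellation points against ideal reference symbols.

def evm_percent(measured: np.ndarray, reference: np.ndarray) -> float:
    """RMS EVM in percent, normalised to the reference constellation power."""
    error_power = np.mean(np.abs(measured - reference) ** 2)
    ref_power = np.mean(np.abs(reference) ** 2)
    return 100.0 * np.sqrt(error_power / ref_power)

# Toy example: ideal QPSK symbols with a small amount of additive noise.
rng = np.random.default_rng(0)
ideal = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=10_000) / np.sqrt(2)
noisy = ideal + (rng.normal(scale=0.02, size=ideal.shape)
                 + 1j * rng.normal(scale=0.02, size=ideal.shape))
print(f"EVM = {evm_percent(noisy, ideal):.2f} %")
```

Higher-order modulation such as 4096 QAM leaves very little room for error, which is why accurate modulation analysis across the full 320 MHz bandwidth matters for Wi-Fi 7 devices.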

“The ASUS ROG Phone series is dedicated to delivering exceptional performance, making the achievement of ultra-high-speed wireless connectivity technology crucial,” said Alvin Liao, Director of ASUS Wireless Communications R&D. “Anritsu has been an indispensable partner to us, consistently providing superior test solutions in the realm of IEEE 802.11be, which has been guiding our Wi-Fi 7 technology evolution and injecting significant momentum into our technological advancements.”

“The MT8862A network mode is equipped with a unique data rate control algorithm that allows users to specify data rates for transmission measurements,” said Ivan Chen, General Manager of Anritsu Taiwan. “We are proud of the continuous trust ASUS places in Anritsu’s verification of its devices’ advanced features. This collaboration once again demonstrates Anritsu’s capability to provide leading-edge technology, enabling Wi-Fi 7 product manufacturers to shorten their product development time and continue to play a pivotal role in developing next-generation communication devices.”

The post Anritsu Collaborates with ASUS to Validate IEEE 802.11be (Wi-Fi 7) 320 MHz RF Performance Testing appeared first on ELE Times.

Top 10 Data Science Companies in India – ELE Times

Wed, 01/10/2024 - 14:26

In the ever-evolving landscape of data science, professionals seek platforms that not only offer jobs but also opportunities to apply skills meaningfully and contribute to groundbreaking innovations. As industries recognize the transformative power of data analytics, certain companies actively create environments empowering data scientists. Here’s a curated list of top companies offering exciting opportunities for data scientists:

1. Accenture: Pioneering Data-Driven Innovation

Accenture leads in data-driven solutions, embracing cutting-edge technologies like machine learning and artificial intelligence. The Accenture Analytics division, powered by predictive analytics technology, offers an enriching experience for data analysts. Joining Accenture means becoming part of a global network comprising data scientists and analysts, contributing to transformative projects in spaces like Accenture Innovation Centers and Accenture Labs.

2. Fractal Analytics: Shaping Tomorrow’s Analytics

Established in 2000, Fractal Analytics is a leading analytics service provider with Fortune 500 clients in technology, insurance, and retail. Fostering a culture of innovation, Fractal encourages data scientists to craft bespoke analytics strategies. For those seeking a workplace where innovation is at the heart, Fractal Analytics provides an exciting and forward-looking journey.

3. Swiggy: Data-Driven Excellence in Food Delivery

As India’s premier convenience commerce platform, Swiggy is a leader in food delivery and is rapidly expanding its technology teams. With a tech-first approach to logistics and a solution-first attitude, Swiggy relies on robust machine learning technology, processing gigabytes of data daily. Joining Swiggy means contributing to the transformation of customer experiences through quick, easy, and dependable delivery services.

4. LatentView Analytics: Strategic Insights for Global Clients

LatentView Analytics challenges data scientists to approach projects with a comprehensive, 360-degree view. Assisting clients in making informed investment decisions, predicting revenue sources, and anticipating product trends, LatentView Analytics provides an intellectually stimulating environment.

5. Tiger Analytics: Unleashing the Power of Data

Established in 2011 and headquartered in the USA, Tiger Analytics is a top data analytics company offering diverse analysis options. With partnerships with industry giants, Tiger Analytics swiftly became a preferred destination for organizations seeking comprehensive data solutions.

6. Genpact: Nurturing Data Science Excellence

Genpact operates with a vast team of data scientists under a centralized hub model, with an emphasis on enhancing the client experience. Initiatives like the Machine Learning Incubator underscore the company's commitment to cultivating data scientists into highly skilled professionals.

7. TheMathCompany: Multinational Excellence in Data Analytics

Collaborating with Fortune 500 companies, TheMathCompany enhances analytics capabilities using a cutting-edge platform. For data scientists seeking impactful projects, TheMathCompany offers a global stage.

8. Mu Sigma: Leading in Decision Science and Analytics Solutions

Based in Chicago, Mu Sigma is a leading provider of decision science and analytics solutions. With a global presence, Mu Sigma invites data scientists to shape decision science through data analysis and improvement.

9. IBM: A Century of Global Technology Solutions

A stalwart since 1911, IBM delivers consulting and global technology solutions worldwide. For data scientists wanting to gather, integrate, and manage substantial amounts of data, IBM India offers a legacy of innovation.

10. Oracle: Pioneering IT Services and Data Analytics

Founded in 1977, Oracle is a renowned IT company offering software, IT services, and data analytics. Utilizing machine learning, Oracle’s data analytics program assists firms in making data-driven decisions. Oracle is poised to rank among the largest data analytics companies globally.

In conclusion, these ten companies represent the pinnacle of opportunities for data scientists, each offering a unique environment and challenges to propel careers forward. Whether interested in pioneering research, impactful collaborations, or contributing to industry transformation, these companies provide diverse avenues in the dynamic field of data science.

 

The post Top 10 Data Science Companies in India – ELE Times appeared first on ELE Times.

Honeywell’s Game-Changing Partnership: Upgrading Commercial Buildings with Smart Connectivity, No Rewiring Needed

Wed, 01/10/2024 - 13:46

In a groundbreaking announcement at CES 2024, Honeywell revealed a strategic Memorandum of Understanding to revolutionize commercial building digitization. The partnership, formed with Analog Devices, Inc. (ADI), aims to explore the integration of digital connectivity technologies into existing infrastructure, eliminating the need for rewiring and offering cost-effective solutions for building management systems.

The move comes as a response to the challenges posed by outdated and inefficient commercial buildings in the United States, with a majority constructed before the year 2000. According to the U.S. Energy Information Administration (EIA), these structures contribute to increased energy consumption and lack the technological advancements required for efficient data transmission.

The collaboration with ADI introduces new technology to building management systems, allowing real-time decision-making for energy consumption reduction. By leveraging ADI’s single-pair Ethernet (T1L) and software configurable input/output (SWIO) solutions, Honeywell aims to provide a seamless upgrade to building networks without significant upfront investments or extensive remodelling.

Martin Cotter, Senior Vice President of Industrial and Multi Markets and President of ADI EMEA region, expressed excitement about expanding ADI technologies into building management systems. He emphasized the potential for reducing energy consumption, saving costs, improving resiliency, and meeting emissions reduction goals.

Suresh Venkatarayalu, Honeywell’s Chief Technology Officer, highlighted the revolutionary nature of the collaboration, stating that it offers building owners the ability to enhance their wiring infrastructure with minimal upfront investment, reduced labour, and environmental impact.

ADI’s single-pair Ethernet technology enables long-reach Ethernet connectivity, utilizing existing building wiring to reduce installation time and costs. This solution complements existing Ethernet connectivity in building management systems, fostering enhanced connectivity from the edge to the cloud and optimizing asset utilization.

Moreover, ADI’s solutions simplify product complexities, allowing Honeywell to build a single version of the product adaptable to various needs. This approach facilitates future-proofed control and automation, accommodating building renovations or changing requirements.

Honeywell’s move towards adopting ADI’s innovative technologies marks a significant step in addressing the challenges faced by commercial buildings, offering a pathway to smart, efficient, and cost-effective digitization without the need for extensive overhauls.

The post Honeywell’s Game-Changing Partnership: Upgrading Commercial Buildings with Smart Connectivity, No Rewiring Needed appeared first on ELE Times.

STMicroelectronics announces new organization

Wed, 01/10/2024 - 12:21
  • New organization to deliver enhanced product development innovation and efficiency, faster time-to-market, and customer focus by end market
  • Company re-organized in two Product Groups, split in four Reportable Segments
  • New application marketing focus by end market across all Regions to complement existing sales and marketing organization

STMicroelectronics, a global semiconductor leader serving customers across the spectrum of electronics applications, is announcing today its new organization, effective February 5th, 2024.

“We are re-organizing our Product Groups to further accelerate our time-to-market and speed of product development innovation and efficiency. This will enable us to increase value extraction from our broad and unique product and technology portfolio. In addition, we are getting even closer to our customers with an application marketing organization by end market which will boost our ability to complement our product offering with complete system solutions” said Jean-Marc Chery, President and CEO of STMicroelectronics. “This is an important step in the development of our established strategy, in line with our value proposition to all stakeholders and with the business and financial ambitions we set back in 2022”.

Moving from three to two Product Groups to further enhance product development innovation and efficiency, and time-to-market

The two new Product Groups will be:

  • Analog, Power & Discrete, MEMS and Sensors (APMS), led by Marco Cassis, ST President and member of the Executive Committee; and
  • Microcontrollers, Digital ICs and RF products (MDRF), led by Remi El-Ouazzane, ST President and member of the Executive Committee.

The APMS Product Group will include all ST analog products, including Smart Power solutions for automotive; all ST Power & Discrete product lines including Silicon Carbide products; MEMS and Sensors.

APMS will include two Reportable Segments: Analog products, MEMS and Sensors (AM&S); Power and discrete products (P&D).

The MDRF Product Group will include all ST digital ICs and microcontrollers, including automotive microcontrollers; RF, ADAS, and Infotainment ICs. MDRF will include two Reportable Segments: Microcontrollers (MCU); Digital ICs and RF Products (D&RF).

Concurrent with this new organization, Marco Monti, ST President of the former Automotive and Discrete Product Group, will leave the Company.

To complement the existing sales and marketing organization, a new application marketing organization by end market will be implemented across all ST Regions, as part of the Sales & Marketing organization led by Jerome Roux, ST President and member of the Executive Committee. This will provide ST customers with end-to-end system solutions based on the Company's product and technology portfolio. The application marketing organization will cover the following four end markets:

  • Automotive
  • Industrial Power and Energy
  • Industrial Automation, IoT and AI
  • Personal Electronics, Communication Equipment and Computer Peripherals.

The current regional Sales & Marketing organization remains unchanged.

The post STMicroelectronics announces new organization appeared first on ELE Times.

Top 10 Robotics Startups in India

Wed, 01/10/2024 - 12:16

Robots can process things in larger quantities, faster, more efficiently, and with much better accuracy. They don't need breaks and never get bored. Bringing this promise closer to reality, the Indian robotics start-up ecosystem is growing denser and is building robots in every possible sector.

We have listed the top 10 robotics startups in India that are solving some of the most critical, real-world problems of society and industry through their unique robots and automation solutions.

Genrobotics

Based out of Thiruvananthapuram, Genrobotics is on a mission to solve a very critical problem, manual scavenging, with its flagship robot, Bandicoot, the world's first robotic scavenger. The company has sold more than 300 robots that are reliable, safe, and affordable. Bandicoot comes with important features such as:

  • Precise and surgical cleaning
  • More grabbing area
  • More reachability in every corner
  • Compact design for portability
Ati Motors

Ati is a Bengaluru-based industrial robotics start-up that develops electric autonomous robots for effective and convenient transport of cargo in warehouses and factories. Its highly distinctive robot, Sherpa Tug, can transport trolley payloads of up to 1000 kg. It comes with a swappable battery that takes around 2 hours to charge and runs for around 8 hours, and it can be integrated with factory management information systems and warehouse management systems. Ati's robots are in use with 18+ customers across around 31 factory plants. Its product portfolio includes:

  • Sherpa Tug
  • Sherpa Lite
  • Sherpa Pivot
Addverb

Addverb is a pioneer in developing robots that pick, sort, and store products. As of today, Addverb is helping more than 100 businesses leverage technology to automate factories and warehouses. Their in-house product line includes:

  • Mobile Robots – autonomous mobile robot, sorting robot, multi-carton picking robot, vertical sortation robot, rail-guided vehicle
  • Automated Storage and Retrieval System (ASRS) – carton shuttle, mother-child shuttle, pallet shuttle, multi-level shuttle, crane ASRS
  • Person-to-Goods – pick-to-light, pick-by-voice
  • Software – warehouse management system, warehouse execution system, warehouse control system, fleet management system 
SVAYA Robotics

Svaya Robotics is headquartered in Hyderabad and develops industrial and collaborative robots with technology on par with global offerings. Its product line includes the SR-L3, SR-L6, SR-L10, SR-L12, and SR-L16, which come with salient features like advanced motor control, built-in force sensing, easy configuration and reconfiguration between applications, and built-in redundant safety.

Svaya provides a full-stack technology platform that makes human-robot interaction simple. Its focus on the digital twin approach provides total visibility into robot workflows. Built-in sensing combined with machine vision and AI enhances the usability, flexibility, and scalability of the robots even in unstructured environments. Svaya has also collaborated with DRDO to develop India's first quadruped robot and exoskeleton.

Niqo Robotics

Niqo Robotics is a leading Bengaluru-based agri-tech start-up that is bringing an AI-powered robotics revolution to agriculture. It builds robots that make spraying simpler and offer a technology-driven alternative to the unsustainable blanket-spraying technique used by most farmers in India. With NIQO RoboSpray's selective spraying, enabled by real-time AI-assisted computer vision, chemical usage drops by 60% and excessive spraying on the soil is limited. As per reports, 500+ farmers in Maharashtra and Karnataka have adopted the technology so far.

Gridbots Technologies

Gridbots is an Ahmedabad-based indigenous robot manufacturer that develops robots for use cases across industries. Their first-ever development was an underwater robot that could be used inside water tanks for cleaning. Gridbots’ product portfolio includes robots for sectors like – industrial automation, defence, robotic services, and machine vision. 

Sastra Robotics

Sastra is a Kochi-based start-up that builds robotic arms used for testing electronic devices. Many product companies spend an average of 256 days on rigorous testing of equipment and devices; Sastra's robotic arms have been able to pull that number down to just 15 days. Starting with a collaboration with Bosch to test Bosch's car stereos, the company now has 20+ clients including Honeywell, HCL, and Tech Mahindra. Its services span industries such as automotive, banking, aviation, medical, mobile phones, and consumer electronics.

EyeROV 

EyeROV works in the marine robotics space and is headquartered in Kochi, Kerala. Its robots have successfully inspected 40+ underwater assets so far. Its Remotely Operated Vehicles (ROVs) provide inspection services for dams, oil and gas, shipping, bridges, ports, and more. The robots are also effective for search and rescue operations, and research organizations can use EyeROV's vehicles as an underwater platform for testing sensors or for oceanographic studies.

Product Catalogue:

  • EyeROV TUNA
  • EyeROV iBOAT ALPHA
  • EyeROV NEOPIA UW-50
  • EVAP (EyeROV Visualisation and Analytics Platform)
Miko

Miko builds AI-powered robots for kids, intended to deliver interactive companionship. The bots can identify kids' faces and sense their mood, and with their human-like interactivity they engage kids through mindful games, songs, or teaching a subject. Miko has been in business since 2016 and launched Miko3 in 2018; the robot is sold in around 140 countries worldwide.

Mukunda Foods

Mukunda Foods started as a food service provider and gradually ventured into developing kitchen automation solutions. Most Indian food chains are yet to be automated, and Mukunda is solving this very problem. As of today, Mukunda offers 6 kitchen automation bots, including Dosamatic, and has sold more than 3000 bots across 22+ countries.

The post Top 10 Robotics Startups in India appeared first on ELE Times.

HMS Networks releases Raspberry Pi Adapter Board – further simplifying the integration of the Anybus CompactCom

Wed, 01/10/2024 - 09:37

HMS Networks has launched the Raspberry Pi adapter board, providing industrial device manufacturers with a simplified method to test and evaluate the Anybus CompactCom, a ready-made communication interface that connects devices to any industrial network. While previous adapter boards were designed for testing Anybus CompactCom modules with STM32 or NXP (formerly Freescale) microcontroller platforms, this new adapter board is specifically tailored for use with the Raspberry Pi.

The new adapter board provides:
  • Compatibility with the widely popular Raspberry Pi.
  • Easy installation and usage.
  • Full compatibility with the free-to-download Anybus Host Application Example Code (HAEC).
Andreas Stillborg, Anybus Embedded Product Manager at HMS Networks, explains,
“The Raspberry Pi is incredibly popular, with over 45 million units in use around the world. Many of our customers already own a Raspberry Pi and are familiar with it. Therefore, we were keen to develop an adapter board that enables our customers to easily use the Raspberry Pi to test and evaluate Anybus CompactCom.”

The Raspberry Pi adapter board is fully compatible with the free-to-download Anybus Host Application Example Code (HAEC). This code includes a reference port designed for the Raspberry Pi, which customers can use with the adapter board and an Anybus CompactCom module to quickly start their embedded development project.
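For readers unfamiliar with how a Raspberry Pi typically talks to an add-on board, the sketch below shows generic SPI access from Python using the spidev package. It is purely illustrative: it is not the Anybus Host Application Example Code, and the bus number, chip select, clock speed, and payload bytes are placeholders rather than CompactCom protocol details.

```python
import spidev  # generic Linux SPI access on the Raspberry Pi

# Illustrative only: generic SPI plumbing, not the Anybus HAEC and not the
# CompactCom protocol. Bus/device numbers, clock speed, and payload bytes
# are placeholders; consult the HAEC documentation for the real interface.

spi = spidev.SpiDev()
spi.open(0, 0)                # SPI bus 0, chip select 0 (adapter wiring may differ)
spi.max_speed_hz = 1_000_000  # placeholder clock rate
spi.mode = 0                  # SPI mode is interface-specific

tx = [0x00, 0x00, 0x00, 0x00]  # placeholder frame
rx = spi.xfer2(tx)             # full-duplex transfer: send tx, receive rx
print("received:", [hex(b) for b in rx])

spi.close()
```

In practice, the HAEC reference port is intended to take care of this kind of low-level plumbing so that developers can start from a working example.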

The post HMS Networks releases Raspberry Pi Adapter Board – further simplifying the integration of the Anybus CompactCom appeared first on ELE Times.

Team Group Unveils T-FORCE GE PRO PCIe 5.0 SSD, Redefining High-Performance Storage

Tue, 01/09/2024 - 14:09

In a significant leap forward in the realm of high-performance storage solutions, Team Group Inc. has introduced its latest innovation, the T-FORCE GE PRO PCIe 5.0 SSD. Positioned at the forefront of storage technology, this cutting-edge product exemplifies the brand’s unwavering commitment to excellence in product research and development.

Designed to revolutionize the gaming and performance-driven user experience, the T-FORCE GE PRO harnesses the power of the PCIe Gen 5 x4 interface and the NVMe 2.0 standard, delivering unparalleled storage speeds. With a focus on meeting the demands of gamers and users seeking peak performance from their storage devices, this next-generation SSD brings several key features to the forefront.

Key Features:

  • SSD Core: InnoGrit’s 12nm IG5666 controller
  • Controller Features: Multi-core and energy-efficient
  • NAND Flash Type: High-performance 2,400 MT/s NAND flash
  • NAND Flash Capabilities: Supports DRAM and SLC (single-level cell flash) caching
  • SSD Read Speeds: Up to 14,000 MB/s

Beyond raw performance, the T-FORCE GE PRO PCIe 5.0 SSD incorporates smart thermal regulation technology, ensuring optimal performance and longevity. This technology automatically adjusts performance based on internal temperature sensors. Additionally, the inclusion of 4K LDPC technology guarantees impeccable data transfer accuracy, enhancing the SSD’s reliability and stability. Team Group’s patented S.M.A.R.T. monitoring software allows users to easily monitor the health of their SSD, ensuring peace of mind and prolonged usage.
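S.M.A.R.T. health data itself is not vendor-specific: on Linux, for instance, an NVMe drive's attributes can be read with the smartmontools utility, as in the hedged sketch below. This is generic smartctl usage, not Team Group's monitoring software, and the device path and available JSON fields vary by system, drive, and smartctl version.

```python
import json
import subprocess

# Illustrative only: read NVMe health data via smartmontools' JSON output.
# This is generic smartctl usage, not Team Group's S.M.A.R.T. software.

def read_nvme_health(device: str = "/dev/nvme0") -> dict:
    out = subprocess.run(
        ["smartctl", "-a", "--json", device],
        capture_output=True, text=True, check=False,
    )
    data = json.loads(out.stdout)
    # Field names can differ across versions, so access them defensively.
    return data.get("nvme_smart_health_information_log", {})

health = read_nvme_health()
print("temperature (C):", health.get("temperature"))
print("percentage used (%):", health.get("percentage_used"))
```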

Recognizing the pivotal role of cooling in maintaining peak SSD performance, Team Group has developed a range of cooling solutions tailored to the demands of Gen 5 SSDs. Options include advanced graphene heat sinks, copper tube aluminium fin SSD air coolers, and even all-in-one liquid coolers exclusively designed for SSDs. These cooling solutions ensure that the T-FORCE GE PRO PCIe 5.0 SSD remains at optimal temperatures, enabling users to enjoy sustained high-speed performance without compromise.

The world will have its first glimpse of the T-FORCE GE PRO PCIe 5.0 SSD during CES 2024 at ASUS’ new product showcase. Pre-orders are set to open on February 9, 2024, with availability on Amazon and Newegg in North America and Amazon Japan.

Team Group’s latest innovation marks a significant stride in redefining the landscape of high-performance storage, promising a transformative experience for gamers and performance enthusiasts alike.

The post Team Group Unveils T-FORCE GE PRO PCIe 5.0 SSD, Redefining High-Performance Storage appeared first on ELE Times.
