Feed aggregator

Top 10 Reinforcement Learning Companies in India

ELE Times - 3 hours 5 min ago

Reinforcement learning (RL), a subfield of machine learning in which agents learn by interacting with their surroundings, is gaining significant popularity in India’s quickly developing AI ecosystem. RL is being used in a variety of areas, including financial modeling, smart energy grids, and autonomous systems. Indian businesses are using RL to innovate and create scalable solutions that are on par with international standards, rather than merely adopting it. This article explores the top 10 reinforcement learning companies in India:

  1. Tata Consultancy Services (TCS)

A global IT leader, TCS focuses on integrating RL into supply chain optimization, autonomous systems, and intelligent automation. Its AI laboratories work on adaptive algorithms that learn from changing environments in logistics, manufacturing, and operations to improve decision making. The company also uses its platform TCS iON to apply RL to the fields of education and skill development, employing gamified and tailored learning to increase motivation and achieve better educational results.

  2. Infosys

The company’s AI-first initiative, led by the Infosys Topaz platform, is driving rapid advances in Reinforcement Learning (RL). The platform’s robotics, enterprise automation, and conversational AI are improved by RL and RLHF (Reinforcement Learning from Human Feedback). Combining and integrating these technologies enables the creation of adaptive, scalable, and self-learning enterprise solutions, such as automated fraud detection systems, predictive analytics, and enhanced customer care.

  3. Wipro

Wipro is currently engaging with Reinforcement Learning (RL) to upgrade automation, simulation, and intelligent systems across multiple sectors. The company utilizes RL in industrial automation and flight simulation, employing adaptive learning models to improve control mechanisms and decision-making procedures. Wipro’s investigations also extend to scalable RL methodologies for manufacturing and financial services, which facilitate more intelligent resource allocation and operational forecasting.

  4. HCL Technologies

HCL Technologies is continuously refining the applications of Reinforcement Learning (RL) across various focus areas, including cybersecurity, workforce analytics, and education. In workforce analytics, HCLTech uses RL for the customization of learning pathways and the prediction of talent development, enabling companies to match employee evolution with their strategic objectives. Their partnership with Pearson brings even greater value in the education sector, where RL-driven adaptive learning systems customize services to the learners and enhance the mastery of skills.

  5. ValueCoders

ValueCoders is an Indian software company specializing in adaptive smart system software development for healthcare, finance, and education sectors. They use computer vision, reinforcement learning, and MLOps to ease decision automation, enhance personalization, and boost system performance over time for their clients.

  6. Locus

Locus is a leading supply chain and logistics company that focuses on streamlining and automating supply chain operations with the use of reinforcement learning (RL). With Locus, businesses can enhance delivery route planning, delivery scheduling, and resource allocation. This allows companies to better control and reduce costs, increase operational efficiency, and respond more effectively to fluctuating demand and traffic conditions.

  7. Mad Street Den

Mad Street Den blends reinforcement learning and computer vision through its Vue.ai platform to enhance personalized retail experiences. Its adaptive systems are designed to optimize merchandising, styling, and customer engagement for global fashion and e-commerce brands.

  8. Arya.ai

With a deep focus on reinforcement learning and deep neural networks, Arya.ai builds autonomous decision systems. Its real-time adaptive SaaS products for the finance, insurance, and robotics industries address fraud detection, claims automation, and smart underwriting.

  9. Infilect

Infilect uses visual intelligence platforms to implement RL in retail. Their technologies optimize pricing, merchandising, and shelf availability using RL-driven analytics, which helps brands lower stockouts and increase in-store compliance.

  10. Flutura Decision Sciences

Flutura Decision Sciences applies artificial intelligence and reinforcement learning in its industrial Internet of Things platform, Cerebra, serving major industries such as oil and gas, chemicals, and heavy machinery. With Flutura, these industries can improve asset performance, anticipate failures, and minimize downtime. Cerebra delivers diagnostics and prognostics for digital twins of complex systems, supported by physics models, heuristics, and machine learning.

Conclusion:

With smart healthcare, smart agriculture, and smart city systems, autonomous systems powered by reinforcement learning are ready to take off, marking the beginning of the AI revolution. With the development of edge AI and quantum computing, real-time decision-making will be dominated by RL. Due to the culture of innovation, availability of skilled resources, and the country’s bold vision, India has the potential to lead the world in adaptive intelligent systems in the upcoming years.

The post Top 10 Reinforcement Learning Companies in India appeared first on ELE Times.

KPI joins the national "Table of Memory" campaign

Новини - 5 hours 10 min ago
kpi Fri, 08/29/2025 - 12:33

🌻 We remember each and every one who defends us in this war. Those who give their lives so that we can continue studying, hug our loved ones, and make plans. KPI…

Currently working on an electronics library

Reddit:Electronics - 8 hours 6 min ago

Fusion360 does not have the best libraries available, so I decided to start building an electronics library for all the boards/components that came with my Arduino starter kit (plus a Pico). Once I finish this, I plan on adding many other components that aren't available in Fusion.

submitted by /u/teslah3

Nuvoton Technology Unveils Upgraded NuMicro M2354 MCU: Enhanced Security and Compact Footprint for Server, IoT, and Edge

ELE Times - 8 hours 33 min ago

High Security Integration, Low Power, and Small Package, Providing Cost-Effective RoT

Nuvoton Technology released the upgraded NuMicro M2354, tailored for applications such as server RoT, smart city, IoT, and smart metering.

NuMicro M2354 is an Arm TrustZone microcontroller based on the Armv8-M architecture and powered by the Arm Cortex-M23 CPU, designed to enhance IoT security. It is suitable for long-term confidentiality requirements and highly sensitive data protection scenarios.

The M2354 operates at frequencies up to 96 MHz, offers a wide operating voltage range of 1.7V to 3.6V, and a broad operating temperature range of -40°C to +105°C. The power consumption is 89.3 μA/MHz in LDO mode and 39.6 μA/MHz in DC-DC mode. The Standby Power-down mode consumes less than 2 µA, and the Deep Power-down mode without VBAT consumes less than 0.1 µA, effectively extending the device’s battery life and meeting the needs of long-term IoT operation.
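To put the standby figure in perspective, a rough battery-life estimate can be made from the quoted current alone. The calculation below is a sketch: the CR2032 cell and its 225 mAh nominal capacity are assumptions for illustration, not part of the Nuvoton announcement, and real designs must also account for self-discharge and active-mode duty cycles.

```python
def battery_life_years(capacity_mah: float, current_ua: float) -> float:
    """Ideal battery life in years, ignoring self-discharge and wake-ups."""
    hours = capacity_mah * 1000.0 / current_ua  # mAh -> uAh, then divide by uA
    return hours / (24 * 365)

# Standby Power-down mode at < 2 uA on an assumed 225 mAh coin cell:
life = battery_life_years(225, 2.0)
print(f"~{life:.1f} years")
```

Even this idealized bound (roughly a dozen years) shows why sub-2-µA standby modes matter for long-term IoT deployments.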

For Secure FOTA, the M2354 has built-in dual-bank Flash Memory of up to 1024 KB and 256 KB of SRAM. In addition to supporting eXecute-Only-Memory (XOM) to prevent code theft, it also integrates a cryptographic hardware accelerator that supports FIPS PUB 197/180/180-2/180-4 and NIST SP 800-38A, as well as a hardware key store to protect against side-channel and fault injection attacks. In terms of secure boot mechanism, the upgraded M2354 supports the Root of Trust architecture based on DICE, implemented in Mask ROM, and supports ECDSA P-521. This feature automatically generates a unique device identity and establishes a chain of trust during boot, effectively verifying firmware version and preventing firmware rollback and tampering attacks. Furthermore, M2354 is compliant with PSA Level 3 and SESIP Level 3 security certifications, which meet the demands of the EU’s Cyber Resilience Act (CRA).

M2354 supports a wide range of peripherals, including CAN, USB 2.0 full-speed OTG, PWM, UART, SPI/I2S, Quad-SPI, I²C, and RTC.

M2354 also integrates several analog components, including analog comparators, ADC, and DAC.

The package options include LQFP-48, LQFP-64, and LQFP-128. The upgraded M2354 also offers a compact WLCSP49 package. With support of the SPDM (Security Protocol and Data Model) secure communication protocol, the upgraded M2354 is well-suited for Root of Trust applications in server motherboards and daughterboards.

The post Nuvoton Technology Unveils Upgraded NuMicro M2354 MCU: Enhanced Security and Compact Footprint for Server, IoT, and Edge appeared first on ELE Times.

Event-based vision comes to Raspberry Pi 5

EDN Network - 16 hours 51 min ago

A starter kit from Prophesee enables low-power, high-speed event-based vision on the Raspberry Pi 5 single-board computer. Based on the GenX320 Metavision event-based vision sensor, the kit accelerates development of real-time neuromorphic vision applications for drones, robotics, industrial automation, security, and surveillance. The camera module connects directly to the Raspberry Pi 5 via a MIPI CSI-2 (D-PHY) interface.

Consuming less than 50 mW, the 1/5-in. GenX320 sensor provides 320×320-pixel resolution with an event rate equivalent to ~10,000 fps. It offers >140-dB dynamic range and sub-millisecond latency (<150 µs at 1,000 lux).

Software resources include OpenEB, the open-source core of Prophesee’s Metavision SDK, with Python and C++ API support. Drivers, data recording, replay, and visualization tools can be found on GitHub.

The GenX320 starter kit is available for pre-order through Prophesee and authorized distributors. The Raspberry Pi 5 board is sold separately.

GenX320 starter kit product page

Prophesee

The post Event-based vision comes to Raspberry Pi 5 appeared first on EDN.

MCUs drive LCD and capacitive touch

EDN Network - 16 hours 51 min ago

Renesas’ RL78/L23 16-bit MCUs provide segment LCD control and capacitive touch sensing for responsive HMIs in smart home appliances, consumer electronics, and metering systems. Running at 32 MHz, these low-power MCUs include 512 KB of dual-bank flash memory, enabling seamless over-the-air firmware updates.

The MCUs offer an active current of 109 µA/MHz and a standby current as low as 0.365 µA, with a fast 1‑µs wakeup time. With a wide voltage range of 1.6 V to 5.5 V, they can operate directly from 5‑V power supplies commonly used in home appliances and industrial systems.

The reference mode of the integrated LCD controller reduces display power by approximately 30% compared to the RL78/L1X series. A snooze mode sequencer (SMS) enables dynamic segment updates without CPU intervention, further enhancing energy efficiency.

Development tools for the RL78/L23 include the Smart Configurator and QE for Capacitive Touch, which simplify system design and firmware setup. Renesas also provides the RL78/L23 Fast Prototyping Board, compatible with the Arduino IDE, and a capacitive touch evaluation system for hardware testing and validation.

RL78/L23 MCUs are available now from the Renesas website or distributors.

RL78/L23 product page 

Renesas Electronics 

The post MCUs drive LCD and capacitive touch appeared first on EDN.

Wireless SoC raises AI efficiency at the edge

EDN Network - 16 hours 51 min ago

The Apollo510B wireless SoC from Ambiq combines a 48-MHz dedicated network coprocessor with a Bluetooth LE 5.4 radio for power-efficient edge AI. Its Arm Cortex-M55 CPU, enhanced with Helium vector processing and Ambiq’s turboSPOT dynamic scaling, delivers up to 30× greater AI efficiency and 16× faster performance than Cortex-M4 devices. 

With 64 KB each of instruction and data cache, 3.75 MB of RAM, and 4 MB of embedded nonvolatile memory, the Apollo510B provides fast, real-time processing. Its 2D/2.5D GPU handles vector graphics, while SPI, I²C, UART, and high-speed USB 2.0 support flexible sensor and device connections. High-fidelity audio is enabled via a low-power ADC and stereo digital microphone PDM interfaces.

Apollo510B also integrates secureSPOT 3.0 and Arm TrustZone, enabling secure boot, firmware updates, and protection of data exchange across connected devices. These features make the device well-suited for always-on, intelligent applications such as wearables, smart glasses, remote patient monitoring, asset tracking, and industrial automation.

The Apollo510B SoC will be available in fall 2025.

Apollo510B product page 

Ambiq Micro

The post Wireless SoC raises AI efficiency at the edge appeared first on EDN.

Instruments work together to ensure design integrity

EDN Network - 16 hours 51 min ago

Smart Bench Essentials Plus is an enhanced set of Keysight test instruments offering improved precision and reliability. The core instruments—a power supply, waveform generator, digital multimeter, and oscilloscope—meet industry and safety standards such as ISO/IEC 17025, IEC 61010, and CSA. All instruments are managed from a single PC via PathWave BenchVue software, simplifying test automation and workflows.

According to Keysight, Smart Bench Essentials Plus delivers 10× higher DMM resolution, 5× greater waveform generator bandwidth, 4× more power supply capacity, and 64× higher oscilloscope vertical resolution over the previous series. Development engineers can test, troubleshoot, and qualify electronic designs while leveraging these benefits:

  • Reduce measurement errors with Truevolt technology in a 6.5-digit dual-display digital multimeter.
  • Generate accurate waveforms with Trueform technology in a 100-MHz waveform/function generator.
  • Deliver reliable, responsive power with a 400-W, four-channel DC power supply.
  • Capture even the smallest signals with a portable four-channel oscilloscope featuring a custom ASIC and 14-bit ADC.

Instruments have intuitive, color-coded interfaces and standardized menus to improve productivity. Built-in graphical charting tools make it easy to visualize and analyze test results.

To learn more about the Smart Bench Essentials Plus portfolio and request a bundled quote, click here.

Keysight Technologies 

The post Instruments work together to ensure design integrity appeared first on EDN.

AEC-Q100 LED driver delivers dynamic effects

EDN Network - 16 hours 51 min ago

Diodes’ AL5958Q matrix LED driver integrates a 48-channel constant-current source and 16 N-channel MOSFET switches for automotive dynamic lighting. Two cascade-connected drivers support up to 32 scans, well-suited for narrow-pixel mini- and micro-LED displays that use multiple RGB LEDs to deliver animated lighting effects and information.

The AEC-Q100 qualified driver employs multiplex pulse density modulation (M-PDM) control to raise the refresh rate of dynamic scanning systems without increasing the grayscale clock frequency or introducing EMI. Built-in matrix display command functions reduce processing overhead on the local MCU. These functions include automatic black-frame insertion, ghost elimination, and suppression of shorted-pixel caterpillars.

Operating from a 3-V to 5-V input, the AL5958Q’s 48 constant-current outputs supply up to 20 mA per LED channel string. Current accuracy between channels and matching across devices is typically ±1.5%.

The AL5958Q LED driver costs $1.60 each in lots of 2500 units.

AL5958Q product page

Diodes

The post AEC-Q100 LED driver delivers dynamic effects appeared first on EDN.

New Quantum Research Points Toward Practical Computing and Security

AAC - Thu, 08/28/2025 - 20:00
Three recent research efforts highlight how the quantum field is moving from laboratory experiments to scalable, commercial-ready technologies.

Mixed signals, on a power budget: Intelligent low-power analog in MCUs

EDN Network - Thu, 08/28/2025 - 18:20

It goes without saying that battery-powered devices are sensitive to power draw, especially during periods of inactivity. One such use case is in sensor nodes or portable sensors—these devices passively monitor a specific condition. When the threshold is exceeded, they trigger an alarm or log the event for further analysis. Since most devices incorporate some form of microcontroller (MCU), selecting an MCU with intelligent analog peripherals can reduce the Bill of Materials (BOM) by performing the same functions as a discrete device, while potentially saving power by disabling the analog functionality when not needed.

To demonstrate these features, we built two demos on the PIC16F17576 microcontroller family. One demo aims to use as little power as possible while detecting temperature changes, while the other utilizes the embedded op-amps to dynamically adjust the gain based on the input signal.

Power consumption

Let’s start at the top—power consumption. No matter how you slice it, all roads will lead to the same basic tenets:

  • Keep VDD as low as possible
  • Minimize oscillator frequency
  • Turn off all unused peripherals and external circuits, when possible, and as much as possible
  • Avoid floating nodes on digital I/O

Beyond this advice, it becomes a lot more application-specific. For instance, most op-amps and ADCs don’t have an OFF switch. This is where intelligent analog peripherals fit into designs.

The “intelligent” part of their name is derived from the fact that they can be controlled in software. While most analog peripherals would not be considered power hungry, when optimizing battery life, every little bit of current matters, and generally, an integrated analog peripheral has a higher quiescent current draw than the equivalent discrete device due to process limitations.

However, there are special low-power peripherals that allow for ultra-low power operation, even when enabled all the time. For instance, the Low Power Voltage Reference (VREFLP) and Low Power Analog Comparator (CMPLP) in the PIC16F17576 family of MCUs draw minimal power but can trigger interrupts to wake the CPU if action is needed.

For devices without these lower power peripherals, another peripheral available in PIC MCUs is the Analog Peripheral Manager (APM). The APM is a specialized counter that can toggle power ON/OFF to the analog peripherals while allowing the CPU to remain continuously in sleep.

If an event occurs, requiring intervention from the CPU, the peripherals can generate an interrupt to wake the device. This avoids having to perform the following sequence: wake the CPU, power on the peripherals, check the results, perform an action, shut down the peripherals, and return to deep sleep.

Low-power demo

The objective of the low-power demo is to demonstrate the new CMPLP and VREFLP as a temperature alarm. This application could be used for cold asset tracking to log when an event over the expected temperature occurs. For the demo implementation, we designed a circuit to detect when a person touches the thermistor(s), causing a rise in temperature.

Figure 1 A finished low-power demo prototype that detects the temperature rise that occurs when a person touches the thermistor(s).

Theory of operation

This circuit is composed of two PIC16F17576 MCUs; one device acts as the device under test (DUT) while the other handles power measurement and display.

Power measurement and display

To measure the minuscule amount of current pulled by the MCU DUT, it was important to design a circuit that could perform high-side current sensing while also being capable of maintaining the power supply at 1.8 V, which is the lowest recommended operating voltage for this device family. For reference, the minimum operating voltage is 1.62 V, which provides a 10% margin on the power supply before the device is out of specified operating conditions.

To measure the quiescent current of the MCU and low-power analog peripherals, a precision 1:1 current mirror IC was used to supply current to the DUT (Figure 2). This IC has a settable compliance output limit, but the tolerancing and ranging on the internal reference was not acceptable for our purposes, so we overdrive the integrated circuit with an external 1.8-V reference (MCP1501-18E) to avoid having to calibrate each unit individually.

Figure 2 The high-side current circuit to measure the minuscule amount of current pulled by the MCU DUT, and 1.8-V DUT power supply.

This ensures the power rail for the DUT is as close as possible to 1.8 V. Guard rings and planes are placed on the PCB to minimize the leakage current of this rail as much as possible. The 1:1 current output goes through a sense resistor, and then a differential measurement of the voltage at the resistor is performed with a 24-bit delta-sigma ADC (MCP3564R) with an external 2.048-V voltage reference (MCP1501-20E). This is shown in Figure 3. The resulting measurement is then displayed on the OLED screen attached to the board.

Figure 3 The ADC implementation where the differential measurement of the voltage at the resistor is performed with a 24-bit delta-sigma ADC with an external 2.048-V voltage reference.
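The conversion from a differential ADC reading to a displayed current can be sketched as follows. The 2.048-V reference comes from the article; the sense-resistor value, the assumption of a signed ±2^23 code range, and the unity PGA gain are illustrative placeholders, not the board's actual configuration.

```python
VREF = 2.048          # external voltage reference, volts (MCP1501-20E)
FULL_SCALE = 1 << 23  # assumed signed 24-bit converter: codes span +/-2^23
R_SENSE = 10_000.0    # ohms -- hypothetical sense-resistor value
GAIN = 1.0            # assumed PGA gain

def code_to_current_ua(code: int) -> float:
    """Differential ADC code -> sense voltage -> DUT current in microamps."""
    v_sense = (code / FULL_SCALE) * (VREF / GAIN)
    return (v_sense / R_SENSE) * 1e6

# With these assumed values, a ~2.2 uA draw develops 22 mV across the
# sense resistor, i.e. a code of about 22e-3 / 2.048 * 2^23.
```

Picking R_SENSE is the usual trade-off: a larger resistor improves µA-level resolution but increases the burden voltage on the measured rail.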

A (good) problem we discovered late in the process was that the current measurement in this configuration is so stable, it looks hard-coded on the display. Thankfully, this can be easily disproved by gently touching the DUT’s decoupling capacitors with a finger or other slightly conductive object and observing the change in measured current.

DUT

The DUT device performs a simple but crucial role in detecting temperature changes with as little power consumption as possible. For this, CMPLP and VREFLP are used together with the Peripheral Pin Select (PPS) system to output the state of the CMPLP without waking the CPU.

In an actual application, CMPLP’s output edge (LOW ↔ HIGH) would be used to wake the CPU to perform some action like logging a temperature event or sounding an alarm.

Using the high-side current measurement circuit designed, we found the current of the microcontroller in this state is ~2.2 to 2.4 μA, but there is room for a tiny bit of extra power savings.

VREFLP comprises two separate subsystems: a low-power 1-V reference and a low-power DAC. This application uses the slightly more power-hungry low-power DAC instead of the fixed 1-V reference because the temperature change from physical contact is very small, and the system must recalibrate the threshold on startup to account for environmental variance. In an application where a few degrees of tolerance are acceptable, using the 1-V reference would save a few fractions of a microamp.

Notably, this demo does not use the APM because the APM requires an oscillator to remain active, consuming a little bit more power (~2.8 μA) than simply leaving these ultra-low power modules on. In a situation where multiple analog peripherals are being used, such as the integrated op-amps, ADC, etc., the APM would provide significant savings in power.

Dynamic gain

Another feature of intelligent analog peripherals is the ability to adjust on the fly. In some cases, a signal may have a large dynamic range that is tricky to measure without clipping.

Clipping a signal is usually considered undesirable, as waveform information about the signal is lost. A simple example of this is a microphone: whispering requires a high gain while shouting requires a low gain. With a fixed gain, designers pick the worst (reasonable) conditions to avoid signal clipping, but this, in turn, reduces the signal resolution.

A way around this problem is to use embedded op-amps. These op-amps aren’t going to outmatch the high-end op-amps, but they are often comparable to general-purpose ones.

And, in many cases, the integrated op-amps contain built-in resistor networks that allow the op-amp(s) to adjust the circuit gain as needed. This requires no extra components or specialized circuitry as it’s already integrated into the die.

Dynamic gain demo

One of the main use cases for the integrated op-amps inside MCUs is to dynamically switch gains depending on how strong the signal is. This is often performed to avoid clipping the signal when the signal strength is high.

This application creates a simple demonstration of this use case by amplifying the output of a pressure sensor and displaying it visually on an LED bar graph.

Figure 4 A dynamic gain demo that amplifies the output of a pressure sensor and displays it visually on an LED bar graph.

Theory of operation

Pressure sensor

The pressure sensor in this application changes resistance depending on the amount of pressure applied. This resistor is used as part of a resistor divider network to generate an output signal from 0 to 2 V. Since both the discrete op-amp and the integrated op-amp have high-input impedances, the two circuits can share the same signal without loading down the network.
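The divider behavior described above can be modeled in a few lines. The component values here are illustrative assumptions chosen to reproduce the 0-to-2-V output range, not the demo's actual parts.

```python
V_SUPPLY = 5.0      # volts -- the demo runs from the USB +5-V supply
R_FIXED = 15_000.0  # ohms -- hypothetical fixed leg of the divider

def divider_out(r_sensor: float) -> float:
    """Output voltage with the pressure-dependent resistor on the low side."""
    return V_SUPPLY * r_sensor / (r_sensor + R_FIXED)

# If pressure sweeps r_sensor from 0 to 10 kOhm, the output sweeps
# from 0 V up to 5 * 10k / (10k + 15k) = 2 V.
```

Because the divider output feeds high-impedance op-amp inputs, the divider current is set only by the resistor values, which is why both amplifiers can share the node without loading it.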

Dynamic gain circuit

The PIC16F17576 MCU has four op-amps, with two of them containing integrated resistor ladders. These ladders have eight steps, plus an additional option for unity gain (1x), for a total of nine options. Alternatively, resistors or other components can be connected to the I/O pins to assign an arbitrary gain or function, if desired.

In this demo, the MCU’s op-amp is switched between a gain of 2x (LOW) and 4x (HIGH) at runtime depending on the measured signal.

In most applications, when the signal strength is low, the gain would be HIGH. However, it is worth noting that in this demo, the inverse is true. This is purely for visual reasons; otherwise, the clipping condition would have more lights ON and thus appear “better” than the dynamic gain version at a glance. As the gain of the embedded op-amps is set up in software, it was easily reconfigured to match the desired behavior.
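The runtime gain selection can be sketched as a threshold rule with hysteresis: back off before the amplified signal clips, and raise the gain only once the signal is comfortably small. The thresholds and hysteresis band below are illustrative assumptions, not values from the demo firmware, and the rule is written in the conventional direction (low signal gets the high gain).

```python
FULL_SCALE_V = 4.096   # FVR-derived ADC reference mentioned in the article
GAINS = (2.0, 4.0)     # the two resistor-ladder settings used in the demo

def select_gain(v_in: float, current_gain: float) -> float:
    """Pick a gain that keeps v_in * gain inside the ADC range."""
    high_thresh = 0.9 * FULL_SCALE_V   # back off before clipping
    low_thresh = 0.4 * FULL_SCALE_V    # hysteresis: re-raise gain only when low
    v_out = v_in * current_gain
    if v_out > high_thresh and current_gain == GAINS[1]:
        return GAINS[0]                # signal getting large -> reduce gain
    if v_out < low_thresh and current_gain == GAINS[0]:
        return GAINS[1]                # signal small again -> restore gain
    return current_gain
```

The gap between the two thresholds prevents the gain from chattering when the signal hovers near a single switching point.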

Measurement and display

The PIC16F17576 MCU also performs the measurement of both op-amp outputs to display on the LED bar graph. The internal Fixed Voltage Reference (FVR) is used to generate a stable 4.096 V from the +5-V (USB) supply for conversions. MCP23017 I2C I/O expanders are used to drive the LEDs of the display. 

Putting it all together

Adjusting the circuit gain without any external circuitry greatly simplifies designs where there are large signal ranges. These peripherals, of course, will not replace high-performance op-amps, ADCs, DACs, or voltage references, but embedded analog peripherals are a good way to handle signals that require some conditioning but aren’t particularly sensitive. This, coupled with low power functionality, makes them a useful tool to reduce circuit complexity, time to market, and ultimately the BOM in your design.

Robert Perkel is an application engineer for Microchip Technology. In this role, he develops technical content such as App Notes, contributed articles, and videos. He is also responsible for analyzing use cases of peripherals and the development of code examples and demonstrations. Perkel is a graduate of Virginia Tech, where he earned a Bachelor of Science degree in Computer Engineering.

Related Content

The post Mixed signals, on a power budget: Intelligent low-power analog in MCUs appeared first on EDN.

Toshiba launches 650V third-generation SiC MOSFETs in TOLL package

Semiconductor today - Thu, 08/28/2025 - 18:16
Toshiba Electronic Devices & Storage Corp of Kawasaki, Japan has launched three 650V silicon carbide (SiC) MOSFETs equipped with its latest third-generation SiC MOSFET chips and housed in general-purpose surface-mount TOLL packages...

SweGaN, Ericsson, Saab and Chalmers collaborate on 6G GaN power amplifier project

Semiconductor today - Thu, 08/28/2025 - 17:18
SweGaN AB of Linköping, Sweden — a manufacturer of custom gallium nitride on silicon carbide (GaN-on-SiC) epitaxial wafers, based on proprietary growth technology — is coordinator of the project ‘GaN6G+: Unlocking Performance and Efficiency in Future 6G Power Amplifiers’, partnered by communications network provider Ericsson, defense and security company Saab and Chalmers University of Technology in Gothenburg, Sweden. Funded by Swedish innovation agency Vinnova and lasting from September 2025 to August 2027, the project aims to revolutionize GaN-based power amplifier (PA) technology for next-generation 6G networks...

📰 Газета "Київський політехнік" № 29-30 за 2025 (.pdf)

Новини - Thu, 08/28/2025 - 15:57
📰 Газета "Київський політехнік" № 29-30 за 2025 (.pdf)
Image
Інформація КП чт, 08/28/2025 - 15:57
Текст

Вийшов 29-30 номер газети "Київський політехнік" за 2025 рік

Post-quantum cryptography (PQC) knocks on MCU doors

EDN Network - Thu, 08/28/2025 - 15:16

An MCU facilitating real-time control in motor control and power conversion applications incorporates post-quantum cryptography (PQC) requirements for firmware protection outlined in the Commercial National Security Algorithm (CNSA) Suite 2.0. These MCUs also support Platform Security Architecture (PSA) Level 3 compliance.

PSA Certified Level 3 is an Internet of Things (IoT) security standard that focuses on robust protection against software and hardware attacks on a chip’s root of trust. It provides an independently evaluated and validated environment that can securely house and execute the PQC algorithms.

Figure 1 PQC encompasses the replacement of Elliptic Curve Cryptography (ECC)-based asymmetric cryptography as well as increasing the size of Advanced Encryption Standard (AES) keys and Secure Hash Algorithm (SHA) sizes. Source: Infineon

“By adopting both PSA Certified Level 3 and PQC compliance with other regulations, companies can proactively address current and future cyber threats,” said Erik Wood, senior director of cryptography and product security at Infineon Technologies. He is responsible for defining the security requirements of Infineon MCUs.

Quantum computers, which promise to be exponentially faster than classical computers on certain problems, are still under development. However, cybercriminals can collect encrypted data now and decrypt it later using quantum computers. That calls for futureproofing of current systems to ensure that companies remain secure as quantum computing technologies advance.

Enter PQC, a collection of cryptographic algorithms designed to be secure against attacks from powerful quantum computers. In MCUs, which mainly use cryptography during boot-time and run-time operations, it commands significant changes in security architecture amid evolving regulations.

For instance, MCU’s memory size is a key design consideration. “More memory size is required because encryption keys are longer,” Wood said. “The certificate size is different because the signatures of these certificates are much bigger.”
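The scale of that growth is easy to illustrate with published parameter sizes. The comparison below is illustrative; the article does not name specific algorithms. The ML-DSA figures are the byte sizes published in FIPS 204, and the ECDSA P-256 figures are the raw (uncompressed, unencoded) sizes.

```python
# Public-key and signature sizes in bytes: classical ECDSA P-256 vs.
# two lattice-based ML-DSA (FIPS 204) parameter sets.
sizes = {
    # name: (public key, signature)
    "ECDSA P-256": (64, 64),
    "ML-DSA-44":   (1312, 2420),
    "ML-DSA-87":   (2592, 4627),
}

pk_ecc, sig_ecc = sizes["ECDSA P-256"]
for name, (pk, sig) in sizes.items():
    print(f"{name:12s} pk={pk:5d} B  sig={sig:5d} B  "
          f"(~{sig / sig_ecc:.0f}x the ECDSA signature)")
```

A certificate chain that once fit in a few kilobytes can grow by an order of magnitude or more, which is exactly the memory and bus-throughput pressure Wood describes.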

Figure 2 PSOC Control C3 MCU’s embedded security provides stringent protection against quantum-based attacks on critical systems. Source: Infineon

Next comes the throughput shortfall. “While certificates are currently transferred through an I2C bus, the throughput falls short with PQC use,” he added. “Now you need to have three I3C buses.” Wood said that the industry is even debating whether every MCU will have a USB port in four years.

In other words, integrating PQC into MCUs will entail a primary upgrade of cryptographic algorithms. Next come memory upgrades, and finally, interface upgrades will follow.

Wood claimed that Infineon is the first MCU supplier to have integrated and ported PQC algorithms. “We offer an integrated library already hooked up to the accelerators for peak optimization and performance in a PSA-3 level device.”

Related Content

The post Post-quantum cryptography (PQC) knocks on MCU doors appeared first on EDN.

Reinforcement Learning Definition, Types, Examples and Applications

ELE Times - Thu, 08/28/2025 - 14:43

Reinforcement Learning (RL), unlike other machine learning (ML) paradigms, notably supervised learning, has an agent learning to act optimally within a given environment, one step at a time. At each step, it receives feedback in the form of a reward or a penalty. The goal is to learn a policy, a strategy for selecting actions that maximizes the total reward over a given time horizon. There are no labeled input-output pairs to fit (as in traditional supervised learning), so RL agents must balance exploring unknown actions to discover their worth against exploiting known good actions to maximize reward.

Reinforcement Learning History:

Reinforcement learning traces back to behaviourism in early twentieth-century psychology, which framed learning as a trial-and-error process driven by rewards and punishments. This idea was later formalised in computer science as mathematical models that paved the way for optimisation and machine learning algorithms. Reinforcement learning resembles optimisation methods in which the objective function is not given explicitly but is instead revealed through trial and error.

How does reinforcement learning work:

Reinforcement learning trains an agent to interact with an environment in order to improve its decision-making. The agent observes the current state and performs an action; after each action, it receives feedback in the form of a reward or penalty, which it uses to adjust its future behaviour.
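
This loop can be sketched in a few lines of Python. The toy “line world” environment below is an illustrative assumption, not from the article: states 0 to 4, actions move left or right, and a reward of +1 is paid on reaching state 4.

```python
import random

def step(state, action):
    """Toy environment dynamics: action is -1 (left) or +1 (right)."""
    next_state = max(0, min(4, state + action))
    reward = 1.0 if next_state == 4 else 0.0    # feedback for this action
    return next_state, reward, next_state == 4  # done when the goal is reached

# one episode of agent-environment interaction with a random policy
random.seed(0)
state, total_reward = 2, 0.0
for _ in range(20):                             # step limit for the episode
    action = random.choice((-1, +1))            # the agent picks an action
    state, reward, done = step(state, action)   # the environment responds
    total_reward += reward                      # the agent accumulates feedback
    if done:
        break
```

A learning agent would replace the random choice with a rule that improves from the rewards it has seen.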

Types of Reinforcement Learning:

  1. Value-Based Reinforcement Learning

In this method, the agent learns a value function that predicts the reward for performing an action in a particular state; Q-learning is the best-known example. In Q-learning, the agent updates its Q-values according to the rewards it receives and acts to maximize them.
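
A minimal tabular Q-learning sketch on an assumed five-state toy environment (the environment and hyperparameters are illustrative, not from the article):

```python
import random

N_STATES, ACTIONS = 5, (-1, +1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1    # learning rate, discount, exploration

# Q[(state, action)] estimates the total future reward of that choice
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Line world: move left/right, +1 reward on reaching the last state."""
    ns = max(0, min(N_STATES - 1, state + action))
    return ns, (1.0 if ns == N_STATES - 1 else 0.0), ns == N_STATES - 1

random.seed(0)
for _ in range(200):                               # training episodes
    state, done = random.randrange(N_STATES - 1), False
    while not done:
        # epsilon-greedy: explore occasionally, otherwise exploit best Q-value
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-update: move Q toward reward + discounted best value of next state
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state
```

After training, the greedy policy (pick the action with the higher Q-value) moves toward the rewarding state.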

  2. Policy-Based Reinforcement Learning

Policy-based methods focus on learning the policy itself, which is the set of rules mapping states to actions, instead of estimating value functions. This is crucial in cases with complex or continuous action spaces. Methods like REINFORCE and Proximal Policy Optimization (PPO) are good examples of algorithms that follow this paradigm.
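
A policy-gradient sketch in the spirit of REINFORCE, on a two-armed bandit (the reward probabilities, step size, and episode count are illustrative assumptions):

```python
import math
import random

prefs = [0.0, 0.0]                # policy parameters, one preference per action
ALPHA = 0.1                       # step size
TRUE_MEANS = [0.2, 0.8]           # hidden reward probability of each arm

def softmax(p):
    """Turn preferences into action probabilities."""
    e = [math.exp(x) for x in p]
    return [x / sum(e) for x in e]

random.seed(1)
for _ in range(2000):
    probs = softmax(prefs)
    a = 0 if random.random() < probs[0] else 1          # sample from the policy
    reward = 1.0 if random.random() < TRUE_MEANS[a] else 0.0
    # policy-gradient update: for a softmax policy,
    # grad log pi(a) is (1 - pi(i)) for the chosen arm and -pi(i) otherwise
    for i in range(2):
        grad = (1.0 - probs[i]) if i == a else -probs[i]
        prefs[i] += ALPHA * reward * grad
```

The probability mass shifts toward the better-paying arm without ever estimating a value function, which is the defining trait of policy-based methods.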

  3. Model-Based Reinforcement Learning

This refers to methods that build a model of the environment to predict the next state and reward given the current state and action. Using this model, the agent can plan ahead before acting. While this approach is sample-efficient, it can be difficult to implement correctly.

  4. Actor-Critic Methods

These hybrid methods combine the strengths of value-based and policy-based approaches. The actor updates the policy based on feedback from the critic, which evaluates the action taken. This results in more stable and efficient learning, especially in complex environments.
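
Combining the two: a tabular one-step actor-critic sketch on the same kind of toy line world (all numbers are illustrative). The critic’s TD error tells the actor whether an action turned out better or worse than expected:

```python
import math
import random

N, ACTIONS = 5, (-1, +1)
ALPHA_V, ALPHA_PI, GAMMA = 0.2, 0.2, 0.9
V = [0.0] * N                              # critic: state-value estimates
prefs = [[0.0, 0.0] for _ in range(N)]     # actor: action preferences per state

def softmax(p):
    e = [math.exp(x) for x in p]
    return [x / sum(e) for x in e]

def step(s, a):
    ns = max(0, min(N - 1, s + a))
    return ns, (1.0 if ns == N - 1 else 0.0), ns == N - 1

random.seed(0)
for _ in range(500):
    s, done = 0, False
    while not done:
        probs = softmax(prefs[s])
        ai = 0 if random.random() < probs[0] else 1
        ns, r, done = step(s, ACTIONS[ai])
        # critic: TD error = how much better/worse things went than predicted
        td = r + (0.0 if done else GAMMA * V[ns]) - V[s]
        V[s] += ALPHA_V * td
        # actor: reinforce the taken action in proportion to the TD error
        for i in range(2):
            grad = (1.0 - probs[i]) if i == ai else -probs[i]
            prefs[s][i] += ALPHA_PI * td * grad
        s = ns
```

The TD error acts as a lower-variance learning signal than raw episode returns, which is why actor-critic training is typically more stable.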

Applications of Reinforcement Learning:

  1. Self-Driving Cars

Self-driving cars use reinforcement learning to understand their surroundings. They identify the best routes, change lanes, avoid obstacles, and optimize their overall driving.

  2. Automated Machines

Automated machines use reinforcement learning to master new skills like walking, picking up objects, and assembling them. As they encounter new items and tasks, they refine their behaviour over time.

  3. Medicine

Personalized treatment is now possible because of reinforcement learning, which enables adaptive treatment plans for individual patients. It is also useful for optimizing clinical trials and managing chronic illness.

  4. Investment

In portfolio management and trading, reinforcement learning systems make investment decisions by evaluating prevailing market patterns and adjusting strategies to improve returns.

  5. Recommendation Systems

Reinforcement learning is also used to improve recommendation systems. As users interact with content, the system learns their preferences and dynamically adjusts its suggestions, making the platform more personalized and engaging.

Reinforcement Learning Examples:

Reinforcement learning is now applied across numerous fields. In game playing, RL enabled breakthroughs like AlphaGo, which mastered Go through self-play, and AlphaZero, which extended the approach to chess. In autonomous driving, self-driving cars use RL to make decisions such as lane changes and obstacle avoidance by learning from real and simulated environments. In robotics, RL helps machines learn tasks like walking, grasping, and assembling by adapting to physical feedback. In finance, RL algorithms optimize trading strategies and portfolio management by analyzing market data. And in recommendation systems, platforms like Netflix and Amazon use RL to suggest content or products based on user behavior, enhancing engagement and satisfaction.

Reinforcement Learning Advantages:

Reinforcement learning is adaptive and goal-driven, which makes it effective in environments that change constantly and require little supervision. Learning is guided by rewards or feedback: the agent improves its behavior over time through interaction with the environment.

Conclusion:

Among intelligent systems, reinforcement learning is already a remarkable advance, and it is bound to become more so. With more processing power and increasingly sophisticated algorithms, RL will drive innovation on a scale that is hard to anticipate. Self-learning autonomous agents and machines that collaborate with humans are only the beginning; personalized medicine, self-improving robots, and adaptive learning systems will all lean on RL technologies. These technologies will not merely adapt to the world but actively shape it.

The post Reinforcement Learning Definition, Types, Examples and Applications appeared first on ELE Times.

Infineon drives industry transition to Post-Quantum Cryptography on PSOC Control microcontrollers

ELE Times - Thu, 08/28/2025 - 13:52

Infineon Technologies AG announced that its microcontrollers (MCUs) in the new PSOC Control C3 Performance Line family are compliant with Post-Quantum Cryptography (PQC) requirements for firmware protection outlined in the Commercial National Security Algorithm (CNSA) Suite 2.0. The MCUs also support PSA (Platform Security Architecture) Level 3 compliance. By complying with both standards, Infineon’s PSOC Control C3 Performance Line meets the security needs of a wide range of industrial applications and eases their transition to increased security in the PQC era.

“With the PSOC Control C3 family, we are setting a new standard for security in industrial microcontrollers, building on decades of proven experience in MCUs and secured electronic systems,” said Steve Tateosian, SVP and General Manager, IoT, Consumer and Industrial MCUs, Infineon Technologies. “Infineon is committed to meeting and evolving industry requirements for MCU embedded security that provides stringent protection against quantum-based attacks on critical systems.”

Changes in security architecture for the PQC era include the replacement of Elliptic Curve Cryptography (ECC) based asymmetric cryptography with quantum-resistant algorithms, as well as increases in Advanced Encryption Standard (AES) key sizes and Secure Hash Algorithm (SHA) hash sizes. The algorithms and implementation guidelines provided by CNSA 2.0 help facilitate a smoother transition to Post-Quantum Cryptography.

About PSOC Control C3 family

The PSOC Control C3 family of MCUs provides real-time control for motor control and power conversion applications. New MCUs of the PSOC Control C3 Performance Line enable system performance at high switching frequencies and increase control loop bandwidth. That is achieved with proprietary autonomous hardware accelerators as well as high-resolution, high-performance analog peripherals. The family supports systems designed with wide-bandgap switches while achieving best-in-class control loop frequencies, accuracy, and efficiency for applications such as data centers, telecom, solar, and electric vehicle (EV) charging systems.

Specific security features include support for Leighton-Micali Signatures (LMS), an efficient hash-based post-quantum firmware-verification algorithm integrated with SHA-2 hardware acceleration for peak performance. To maximize ease of use, Infineon’s Edge Protect Tools and ModusToolbox will support everything a customer needs to provision LMS keys, along with options for hybrid post-quantum cryptography in which customers may use both LMS and ECC to sign firmware updates that can be verified by Infineon chips.

The post Infineon drives industry transition to Post-Quantum Cryptography on PSOC Control microcontrollers appeared first on ELE Times.

Decision Tree Learning Definition, Types, Examples and Applications

ELE Times - Thu, 08/28/2025 - 12:12

Decision Tree Learning is a type of supervised machine learning used in classification as well as regression problems. It tries to mimic real-world decision making by representing decisions and their possible outcomes in the form of a tree. Each internal node in the tree denotes a test on a feature, each branch denotes an outcome of the test, and the leaf node gives the final decision. It is easy to understand, requires no complex data preprocessing, and is visually very informative.

Decision tree learning history:

The concept of decision trees has roots in decision analysis and logic, but their formal application in machine learning began in the 1980s. The ID3 algorithm, developed by Ross Quinlan in 1986, was one of the first major breakthroughs in decision tree learning. It introduced the use of information gain as a criterion for splitting nodes. This was followed by C4.5, an improved version of ID3, and CART (Classification and Regression Trees), developed by Breiman et al., which used the Gini index and supported both classification and regression tasks. These algorithms laid the foundation for modern decision tree models used today.

How does decision tree learning work:

Decision tree learning splits data into progressively smaller subsets and organizes them in the form of a tree, based on the values of the data’s features. At the root node, the algorithm selects the feature deemed most informative by a criterion such as Gini impurity or entropy. As mentioned earlier, internal nodes represent decision rules. This process continues until the data is sufficiently partitioned or a stopping condition is met, resulting in leaf nodes that represent final predictions or classifications. The tree structure makes it easy to interpret and visualize how decisions are made step by step.
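
The impurity-driven split selection described above can be sketched as follows; the tiny loan dataset is invented for illustration:

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    """Find the threshold on one feature that minimizes weighted impurity."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue                      # skip splits that leave a side empty
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# feature: income in $1000s; label: loan approved?
incomes = [20, 35, 40, 60, 75, 90]
approved = ["no", "no", "no", "yes", "yes", "yes"]
print(best_split(incomes, approved))      # → (40, 0.0): a perfectly pure split
```

A full tree builder would apply this search recursively to each resulting subset until a stopping condition is met.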

Types of Decision Trees:

  1. Classification Trees

These are utilized when the dependent variable is categorical. Such trees assist in categorizing the dataset into specific categories (e.g., spam and non-spam). Each split aims to enhance class separation based on certain features.

  2. Regression Trees

These trees are used when the dependent variable is continuous. Rather than assigning categories, they produce numerical predictions (e.g., house prices), with splits chosen to minimize prediction error.
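
For regression, the same threshold search applies with squared error in place of impurity; the house-size data below is made up for illustration:

```python
def sse(values):
    """Sum of squared errors around the mean, the usual regression-tree criterion."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def best_regression_split(xs, ys):
    """Pick the threshold that minimizes the summed squared error of both sides."""
    best = (None, float("inf"))
    for t in sorted(set(xs))[:-1]:        # last value would leave the right side empty
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = sse(left) + sse(right)
        if score < best[1]:
            best = (t, score)
    return best

# feature: house size in sq. m; target: price in $1000s
sizes = [50, 60, 80, 100, 120]
prices = [150.0, 160.0, 170.0, 300.0, 320.0]
print(best_regression_split(sizes, prices))   # → (80, 400.0)
```

Each leaf of the finished tree then predicts the mean target value of the training samples that reach it.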

Examples of Decision Tree Learning:

  • Email Filtering: Marking emails as spam or not using keywords and sender details.
  • Loan Approval: Deciding loan approval using income, credit score, and employment status.
  • Medical Diagnosis: Identifying a disease with the help of symptoms and test results.
  • Weather Prediction: Predicting rain using humidity, temperature, and wind speed.

Applications of Decision Tree Learning:

  1. Finance

Decision trees analyze customer data and transaction behavior for credit scoring, fraud detection, and risk management.

  2. Healthcare

With the use of medical records and test outcomes, they aid in disease diagnosis, treatment suggestions, and patient outcome predictions.

  3. Marketing

Segmenting customers, predicting buying behavior, and optimizing campaign strategies based on demographic and behavioral data.

  4. Retail

Forecasting sales, managing inventory, and personalizing product recommendations.

  5. Education

Predicting student performance, dropout risk, and tailoring learning paths based on academic data.

Decision Tree Learning Advantages:

Decision tree learning has numerous benefits, which explain its widespread use in machine learning. It is simple to grasp and analyze because the tree structure resembles human decision-making and can be easily visualised. It can process both numerical and categorical data without advanced preprocessing or feature scaling. Decision trees are relatively robust to outliers and missing data, and they can model non-linear patterns. They also capture feature interactions naturally through their hierarchical splits, which makes them powerful yet user-friendly.

Conclusion:

Decision tree learning is maturing into a dynamic, real-time intelligence technique: processing complex data, providing direction to autonomous systems, and enabling accountable decision-making across sectors. In time, these trees may become self-optimizing systems that reason, explain their decisions, and coexist with human cognition, serving as an ethical and intellectual foundation of future AI.

The post Decision Tree Learning Definition, Types, Examples and Applications appeared first on ELE Times.
