EDN Network

Voice of the Engineer
Address: https://www.edn.com/

How to implement MQTT on a microcontroller

Tue, 01/20/2026 - 16:28

One of the original and most important reasons Message Queuing Telemetry Transport (MQTT) became the de facto protocol for the Internet of Things (IoT) is its ability to connect and control devices that are not directly reachable over the Internet.

In this article, we’ll discuss MQTT in an unconventional way. Why does it exist at all? Why is it popular? If you’re about to implement a device management system, is MQTT the best fit, or are there better alternatives?

Figure 1 This is how incoming connections are blocked. Source: Cesanta Software

In real networks—homes, offices, factories, and cellular networks—devices typically sit behind routers, network address translation (NAT) gateways, or firewalls. These barriers block incoming connections, which makes traditional client/server communication impractical (Figure 1).

However, as shown in the figure below, even the most restrictive firewalls usually allow outgoing TCP connections.

Figure 2 Even the most restrictive firewalls usually allow outgoing TCP connections. Source: Cesanta Software

MQTT takes advantage of this: instead of requiring the cloud or the user to initiate a connection into the device, the device initiates an outbound connection to a publicly visible MQTT broker. Once this outbound connection is established, the broker becomes a communication hub, enabling control, telemetry, and messaging in both directions.

Figure 3 This is how devices connect out but servers never connect in. Source: Cesanta Software

This simple idea—devices connect out, servers never connect in—solves one of the hardest networking problems in IoT: how to reach devices that you cannot address directly.

To summarize:

  • The device opens a long-lived outbound TCP connection to the broker.
  • Firewalls and NAT gateways allow outbound connections and maintain their state.
  • The broker becomes the “rendezvous point” accessible to all.
  • The server or user publishes messages to the broker; the device receives them over its already-open connection.
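
To make this concrete, here is a minimal sketch of the device side of that handshake: open an outbound TCP connection and send an MQTT 3.1.1 CONNECT packet. It uses POSIX sockets for illustration (an MCU would push the same bytes through lwIP or a modem's socket API), and the broker hostname and client ID are placeholders, not anything taken from the article.

    /* Minimal sketch (assumptions: placeholder broker hostname and client ID;
       POSIX sockets stand in for an MCU's lwIP or modem socket API).
       Step 1 of the pattern above: connect out, then send MQTT 3.1.1 CONNECT. */
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    static int tcp_connect(const char *host, const char *port) {
      struct addrinfo hints = {0}, *res = NULL;
      int fd = -1;
      hints.ai_socktype = SOCK_STREAM;
      if (getaddrinfo(host, port, &hints, &res) != 0) return -1;
      fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
      if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        close(fd);
        fd = -1;
      }
      freeaddrinfo(res);
      return fd;
    }

    int main(void) {
      const char *client_id = "device-0001";                /* placeholder ID     */
      int fd = tcp_connect("broker.example.com", "1883");   /* placeholder broker */
      if (fd < 0) return 1;

      /* CONNECT = fixed header, protocol name "MQTT", level 4 (3.1.1),
         clean-session flag, 60 s keepalive, then a length-prefixed client ID. */
      uint8_t pkt[128];
      size_t id_len = strlen(client_id), n = 0;
      pkt[n++] = 0x10;                          /* control packet type: CONNECT   */
      pkt[n++] = (uint8_t) (10 + 2 + id_len);   /* remaining length (fits 1 byte) */
      const uint8_t vh[10] = {0, 4, 'M', 'Q', 'T', 'T', 4, 0x02, 0, 60};
      memcpy(&pkt[n], vh, sizeof(vh));
      n += sizeof(vh);
      pkt[n++] = (uint8_t) (id_len >> 8);       /* client ID length, big-endian   */
      pkt[n++] = (uint8_t) (id_len & 0xff);
      memcpy(&pkt[n], client_id, id_len);
      n += id_len;

      write(fd, pkt, n);   /* the broker answers with CONNACK on the same socket */
      close(fd);
      return 0;
    }

In a real firmware this socket would stay open for the lifetime of the device session; everything else (PUBLISH, SUBSCRIBE, keepalive pings) travels over it.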

Publish/subscribe

Every MQTT message is carried inside a binary frame with a very small header, typically only a few bytes. These headers contain a command code—called a control packet type—that defines the semantic meaning of the frame. MQTT defines only a handful of these commands, including:

  • CONNECT: Initiates a session with the broker.
  • PUBLISH: Sends a message to a named topic.
  • SUBSCRIBE: Registers interest in one or more topics.
  • PINGREQ/PINGRESP: Keep-alive messages that maintain the connection.
  • DISCONNECT: Ends the session cleanly.

Because the headers are small and fixed in structure, parsing them on a microcontroller (MCU) is fast and predictable. The payload that follows these headers can be arbitrary data, from sensor readings to structured messages.
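
As a rough illustration of why that parsing is cheap, the sketch below decodes an MQTT 3.1.1 fixed header: the control packet type sits in the high nibble of the first byte, and the remaining length is a variable-length field of one to four bytes carrying seven bits each.

    #include <stddef.h>
    #include <stdint.h>

    /* Decode an MQTT fixed header. The control packet type is the high nibble
       of byte 0; the "remaining length" is 1-4 bytes of 7 bits each, where a
       set MSB means another length byte follows. Returns the number of header
       bytes consumed, or 0 if the buffer is too short or the field malformed. */
    static size_t mqtt_fixed_header(const uint8_t *buf, size_t len,
                                    uint8_t *type, uint32_t *remaining) {
      uint32_t value = 0, mult = 1;
      size_t i = 1;
      if (len < 2) return 0;
      *type = buf[0] >> 4;   /* 1 = CONNECT, 3 = PUBLISH, 8 = SUBSCRIBE, ... */
      for (;;) {
        if (i >= len || i > 4) return 0;   /* need more data, or >4 length bytes */
        value += (uint32_t) (buf[i] & 0x7f) * mult;
        mult *= 128;
        if ((buf[i++] & 0x80) == 0) break;
      }
      *remaining = value;
      return i;
    }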

So, the publish/subscribe pattern works like this: a device publishes a message to a topic (a string such as factory/line1/temp). Other devices subscribe to topics they care about. The broker delivers messages to all subscribers of each topic.

Figure 4 The publish/subscribe model decouples senders and receivers. Source: Cesanta Software

As shown above, the model decouples senders and receivers in three important ways:

  • In time: Publishers and subscribers do not need to be online simultaneously.
  • In space: Devices never need to know each other’s IP addresses.
  • In message flow: Many-to-many communication is natural and scalable.

For small IoT devices, the publish/subscribe model removes networking complexity while enabling structured, flexible communication. Combined with MQTT’s minimal framing overhead, it achieves reliable messaging even on low-bandwidth or intermittent links.

Request/response over MQTT

MQTT was originally designed as a broadcast-style protocol, where devices publish telemetry to shared topics and any number of subscribers can listen. This publish/subscribe model is ideal for sensor networks, dashboards, and large-scale IoT systems where data fan-out is needed. However, MQTT can also support more traditional request/response interactions—similar to calling an API—by using a simple topic-based convention.

To implement request/response, each device is assigned two unique topics, typically embedding the device ID:

  • Request topic (RX): devices/DEVICE_ID/rx, used by the server or controller to send commands to the device.
  • Response topic (TX): devices/DEVICE_ID/tx, used by the device to send results back to the requester.

When the device receives a message on its RX topic, it interprets the payload as a command, performs the corresponding action, and publishes the response on its TX topic. Because MQTT connections are persistent and outbound from the device, this pattern works even for devices behind NAT or firewalls.

This structure effectively recreates a lightweight RPC-style workflow over MQTT. The controller sends a request to a specific device’s RX topic; the device executes the task and publishes a response to its TX topic. The simplicity of topic naming allows the system to scale cleanly to thousands or millions of devices while maintaining separation and addressing.

This makes it easy to implement remote device control over MQTT. A practical choice is to use JSON-RPC to frame the request/response payloads, as sketched below.
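
A minimal sketch of the device-side handler might look like the following. The mqtt_publish() stub, the device ID, and the led.on method are placeholders, and the string matching stands in for a real JSON-RPC parser; the sketch only illustrates the rx-in, tx-out convention described above.

    #include <stdio.h>
    #include <string.h>

    /* Stub standing in for the MQTT library's publish call (placeholder). */
    static void mqtt_publish(const char *topic, const char *payload) {
      printf("PUBLISH %s <- %s\n", topic, payload);
    }

    /* Called when a message arrives on devices/<ID>/rx. The payload is treated
       as a JSON-RPC request; the reply goes out on devices/<ID>/tx. The strstr()
       match and the hard-coded "id" stand in for a real JSON parser that would
       dispatch on "method" and echo the request's "id". */
    static void on_rx_message(const char *device_id, const char *payload) {
      char tx_topic[64], response[160];
      snprintf(tx_topic, sizeof(tx_topic), "devices/%s/tx", device_id);

      if (strstr(payload, "\"method\": \"led.on\"") != NULL) {
        /* ... drive the GPIO here ... */
        snprintf(response, sizeof(response), "{\"id\": 1, \"result\": true}");
      } else {
        snprintf(response, sizeof(response),
                 "{\"id\": 1, \"error\": {\"code\": -32601, \"message\": \"method not found\"}}");
      }
      mqtt_publish(tx_topic, response);
    }

    int main(void) {
      /* Simulate the controller publishing a command to the device's rx topic. */
      on_rx_message("device-0001", "{\"id\": 1, \"method\": \"led.on\"}");
      return 0;
    }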

Secure connectivity

MQTT includes basic authentication features such as username/password and transport layer security (TLS) encryption, but the protocol itself offers very limited isolation between clients. Once a client is authenticated, it can typically subscribe to wildcard topics and receive all messages published on the broker. Also, it can publish to any topic, potentially interfering with other devices.

Because MQTT does not define fine-grained access control in its standard, many vendors implement non-standard extensions to ensure proper security boundaries. For example, AWS IoT attaches per-client access control lists (ACLs) tied to X.509 certificates, restricting exactly which topics a device may publish or subscribe to. Similar policy frameworks exist in EMQX, HiveMQ, and other enterprise brokers.

In practice, production systems must rely on these vendor-specific mechanisms to enforce strong authorization and prevent devices from accessing each other’s data.

MQTT implementation on a microcontroller

MCUs are ideal MQTT clients because the protocol is lightweight and designed for low-bandwidth, low-RAM environments. Implementing MQTT on an MCU typically involves integrating three components: a TCP/IP stack (Wi-Fi, Ethernet, or cellular), an MQTT library, and application logic that handles commands and telemetry.

After establishing a network connection, the device opens a persistent outbound TCP session to an MQTT broker and exchanges MQTT frames—CONNECT, PUBLISH, and SUBSCRIBE—using only a few kilobytes of memory. Most implementations follow an event-driven model: the device subscribes to its command topic, publishes telemetry periodically, and maintains the connection with periodic ping messages. With this structure, even small MCUs can participate reliably in large-scale IoT systems.

An example of a fully functional but tiny MQTT client can be found in the Mongoose repository: mqtt-client.
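
As a rough illustration of that event-driven structure, here is a sketch loosely based on the Mongoose 7.x API. Exact option-struct fields, string-member names, and handler signatures vary between Mongoose releases, so treat it as an outline rather than drop-in code; the broker URL, device ID, and topics are placeholders (HiveMQ's public broker is used only because the tutorial below mentions it).

    #include "mongoose.h"

    static const char *s_url = "mqtt://broker.hivemq.com:1883";   /* placeholder broker */

    /* Event handler: subscribe once the broker acknowledges the session, then
       answer each incoming command with a publish on the tx topic. Field names
       and handler arity follow recent Mongoose 7.x and may differ per release. */
    static void fn(struct mg_connection *c, int ev, void *ev_data) {
      if (ev == MG_EV_MQTT_OPEN) {
        struct mg_mqtt_opts sub = {.topic = mg_str("devices/device-0001/rx"), .qos = 1};
        mg_mqtt_sub(c, &sub);
      } else if (ev == MG_EV_MQTT_MSG) {
        struct mg_mqtt_message *mm = (struct mg_mqtt_message *) ev_data;
        (void) mm;   /* mm->topic / mm->data carry the incoming topic and payload */
        struct mg_mqtt_opts pub = {.topic = mg_str("devices/device-0001/tx"),
                                   .message = mg_str("{\"result\": \"ok\"}"), .qos = 1};
        mg_mqtt_pub(c, &pub);
      }
    }

    int main(void) {
      struct mg_mgr mgr;
      struct mg_mqtt_opts opts = {.clean = true, .keepalive = 60,
                                  .client_id = mg_str("device-0001")};
      mg_mgr_init(&mgr);
      mg_mqtt_connect(&mgr, s_url, &opts, fn, NULL);   /* outbound, NAT-friendly */
      for (;;) mg_mgr_poll(&mgr, 1000);   /* keepalive PINGREQs handled by the library */
    }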

WebSocket server: An alternative

If all you need is a clean way for your devices to talk to your back-end, MQTT can feel like bringing a whole toolbox just to tighten one screw. JSON-RPC over WebSocket keeps things minimal: devices open a WebSocket, send tiny JSON-RPC method calls, and get direct responses. No brokers, no topic trees, and no QoS semantics to wrangle.

The nice part is how naturally it fits into a modern back-end. The same service handling the WebSocket connections can also expose a familiar REST API. That REST layer becomes the human- and script-friendly interface, while JSON-RPC over WebSocket stays as the fast “device side” protocol.

The back-end basically acts as a bridge: REST in, RPC out. This gives you all the advantages of REST—a massive ecosystem of tools, gateways, authentication systems, monitoring, and automation—without forcing your devices to speak it.

Figure 5 This is how the REST to JSON-RPC over WebSocket bridge architecture looks. Source: Cesanta Software

This setup also avoids one of MQTT’s classic security footguns, where a single authenticated client can accidentally gain visibility or access to messages from the entire fleet just by subscribing to the wrong topic pattern.

With a REST/WebSocket bridge, every device connection is isolated, and authentication happens through well-understood web mechanisms like JWTs, mTLS, API keys, OAuth, or whatever your infrastructure already supports. It’s a much more natural fit for modern access control models.
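
For the device side of such a bridge, a minimal sketch using Mongoose's WebSocket client (the same library referenced earlier) could look like the following. The back-end URL, device ID, and JSON-RPC method names are placeholders, a production build would use wss:// with TLS configured, and handler signatures may differ slightly between Mongoose releases.

    #include <string.h>
    #include "mongoose.h"

    static const char *s_url = "ws://backend.example.com/ws";   /* placeholder back-end */

    static void fn(struct mg_connection *c, int ev, void *ev_data) {
      if (ev == MG_EV_WS_OPEN) {
        /* Socket is up: announce the device with a JSON-RPC notification. */
        const char *hello = "{\"method\": \"hello\", \"params\": {\"id\": \"device-0001\"}}";
        mg_ws_send(c, hello, strlen(hello), WEBSOCKET_OP_TEXT);
      } else if (ev == MG_EV_WS_MSG) {
        struct mg_ws_message *wm = (struct mg_ws_message *) ev_data;
        (void) wm;   /* wm->data carries the back-end's JSON-RPC request */
        /* A real device would parse "method"/"id" and act; here we just answer. */
        const char *reply = "{\"id\": 1, \"result\": \"ok\"}";
        mg_ws_send(c, reply, strlen(reply), WEBSOCKET_OP_TEXT);
      }
    }

    int main(void) {
      struct mg_mgr mgr;
      mg_mgr_init(&mgr);
      mg_ws_connect(&mgr, s_url, fn, NULL, NULL);   /* single persistent, outbound socket */
      for (;;) mg_mgr_poll(&mgr, 1000);
    }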

Beyond typical MQTT setup

This article offers a fresh look at IoT communication, going beyond the typical MQTT setup. It explains why MQTT is great for devices behind NAT/firewalls (devices only connect out to the broker) and highlights that the protocol’s lack of fine-grained access control can create security headaches. It also outlines an alternative solution: JSON-RPC over a single persistent WebSocket connection.

For a practical application demo of these MQTT principles, see the video tutorial that explains how to implement an MQTT client on an MCU and build a web UI that displays MQTT connection status, provides connect/disconnect control, and lets you publish MQTT messages to any topic.

In this step-by-step tutorial, we use an STM32 Nucleo-F756ZG development board with Mongoose Wizard—though the same method applies to virtually any other MCU platform—and a free HiveMQ Public Broker. This tutorial is suitable for anyone working with embedded systems, IoT devices, or the STM32 development stack who is looking to integrate MQTT networking and a lightweight web UI dashboard into their firmware.

Sergey Lyubka is co-founder and technical director of Cesanta Software Ltd. He is known as the author of the open-source Mongoose Embedded Web Server and Networking Library (https://mongoose.ws), which has been on the market since 2004 and has over 12k stars on GitHub.


Windows 10: Support hasn’t yet ended after all, but Microsoft’s still a fickle-at-best friend

Mon, 01/19/2026 - 23:59

Bowing to user backlash, Microsoft eventually relented and implemented a one-year Windows 10 support-extension scheme. But (limited duration) lifelines are meaningless if they’re DOA.

Back in November, within my yearly “Holiday Shopping Guide for Engineers”, the first suggestion in my list was that you buy you and yours Windows 11-compatible (or alternative O/S-based) computers to replace existing Windows 10-based ones (specifically ones that aren’t officially Windows 11-upgradable, that is). Unsanctioned hacks to alternatively upgrade such devices to Windows 11 do exist, but echoing what I first wrote last June (where I experimented for myself, but only “for science”, mind you), I don’t recommend relying on them for long-term use, even assuming the hardware-hack attempt is successful at all, that is:

The bottom line: any particular system whose specifications aren’t fully encompassed by Microsoft’s Windows 11 requirements documentation is fair game for abrupt no-boot cutoff at any point in the future. At minimum, you’ll end up with a “stuck” system, incapable of being further upgraded to newer Windows 11 releases, therefore doomed to fall off the support list at some point in the future. And if you try to hack around the block, you’ll end up with a system that may no longer reliably function, if it even boots at all.

A mostly compatible computing stable

Fortunately, all of my Windows-based computers are Windows 11-compatible (and already upgraded, in fact), save for two small form factor systems, one (Foxconn’s nT-i2847, along with its companion optical drive), a dedicated-function Windows 7 Media Center server:

(mine are white, and no, the banana’s not normally a part of the stack):

and the other, an XCY X30, largely retired but still hanging around to run software that didn’t functionally survive the Windows 10-to-11 transition:

And as far as I can recall, all of the CPUs, memory DIMMs, SSDs, motherboards, GPUs and other PC building blocks still lying around here waiting to be assembled are Windows 11-compliant, too.

One key exception to the rule

My wife’s laptop, a Dell Inspiron 5570 originally acquired in late 2019, is a different matter:

Dell’s documentation initially indicated that the Inspiron 5570 was a valid Windows 11 upgrade candidate, but the company later backtracked due to partner Microsoft’s increasingly-over-time stingy CPU and TPM requirements. Our secondary strategy was to delay its demise by a year by taking advantage of one of Microsoft’s Windows 10 Extended Support Update (ESU) options. For consumers, there initially were two paths, both paid: spending $30 or redeeming 1,000 Microsoft Rewards points, although both ESU options covered up to 10 devices (presumably associated with a common Microsoft account). But in spite of my repeated launching of the Windows Update utility over a several-month span, it stubbornly refused to display the ESU enrollment section necessary to actualize my extension aspirations for the system:

My theory at the time was that although the system was registered under my wife’s personal Microsoft account, she’d also associated it with a Microsoft 365 for Business account for work email and such, and it was therefore getting caught by the more complicated corporate ESU license “net”. So, I bailed on the ESU aspiration and bought her a Dell 16 Plus as a replacement, instead:

That I’d done (and to be precise, seemingly had to do) this became an even more bitter already-swallowed pill when Microsoft subsequently added a third, free consumer ESU option, involving backup of PC settings in prep for the delayed Windows 11 migration to still come a year later:

Belated success, and a “tinfoil hat”-theorized root cause-and-effect

And then the final insult to injury arrived. At the beginning of October, a few weeks prior to the Windows 10 baseline end-of-support date, I again checked Windows Update on a lark…and lo and behold, the long-missing ESU section was finally there (and I then successfully activated it on the Inspiron 5570). Nothing had changed with the system, although I had done a settings backup a few weeks earlier in a then-fruitless attempt to coax the ESU to reactively appear. That said, come to think of it, we also had just activated the new system…were I a conspiracy theorist (which I’m not, but just sayin’), I might conclude that Microsoft had just been waiting to squeeze another Windows license fee out of us (a year earlier than otherwise necessary) first.

To that last point, and in closing, a reality check. At the end of the day, “all” we did was to a) buy a new system a year earlier than I otherwise likely would have done, and b) delay the inevitable transition to that new system by a year. And given how DRAM and SSD prices are trending, delaying the purchase by a year might have resulted in an increased cash outlay, anyway. On the other hand, the CPU would likely have been a more advanced model than the one we ended up with, too. So…🤷‍♂️

A “First World”, albeit baffling, problem, I’m blessed to be able to say in summary. How did your ESU activation attempts go? Let me (and your fellow readers) know in the comments: thanks as always in advance!

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


Handheld enclosures target harsh environments

Mon, 01/19/2026 - 19:48

Rolec’s handCASE (IP 66/IP 67) handheld enclosures for machine control, robotics, and defense electronics can now be specified with a choice of lids and battery options.

These rugged diecast aluminum enclosures are ideal for industrial and military applications in which devices must survive challenging environments but also be comfortable to hold for long periods.

Rolec’s handCASE (IP 66/IP 67) handheld enclosures. (Source: Rolec USA)

The rugged handCASE can be specified with or without a battery compartment (4 × AA or 2 × 9 V). Two versions are available: S with an ergonomically bevelled lid, and R with a narrow-edged lid to maximize space. Both tops are recessed to protect a membrane keypad or front plate. Inside, there are threaded screw bosses for PCBs or mounting plates.

The enclosures are available in three sizes: 3.15″ × 7.09″ × 1.67″, 3.94″ × 8.66″ × 1.67″ and 3.94″ × 8.66″ × 2.46″. As standard, Version S features a black (RAL 9005) base with a silver metallic top, while Version R is fully painted in light gray (RAL 7035).

Custom colors are available on request. They include weather-resistant powder coatings (F9) with WIWeB approvals and camouflage colors for military applications. These coatings are also available in a wet painted finish. They meet all military requirements, including the defense equipment standard VG 95211.

Options and accessories include a shoulder strap, a holding clip and wall bracket, and a corrosion-proof coating in azure blue (RAL 5009).

Rolec can supply handCASE fully customized. Services include CNC machining, engraving, RFI/EMI shielding, screen and digital printing, and assembly of accessories.

For more information, view the Rolec website: https://Rolec-usa.com/en/products/handcase#top


AI’s insatiable appetite for memory

Mon, 01/19/2026 - 09:22

The term “memory wall” was first coined in the mid-1990s when researchers from the University of Virginia, William Wulf and Sally McKee, co-authored “Hitting the Memory Wall: Implications of the Obvious.” The research presented the critical bottleneck of memory bandwidth caused by the disparity between processor speed and the performance of dynamic random-access memory (DRAM) architecture.

These findings introduced the fundamental obstacle that engineers have spent the last three decades trying to overcome. The rise of AI, graphics, and high-performance computing (HPC) has only served to increase the magnitude of the challenge.

Modern large language models (LLMs) are being trained with over a trillion parameters, requiring continuous access to data and petabytes of bandwidth per second. Newer LLMs in particular demand extremely high memory bandwidth for training and for fast inference, and the growth rate shows no signs of slowing with the LLM market size expected to increase from roughly $5 billion in 2024 to over $80 billion by 2033. And the growing gap between CPU and GPU performance, memory bandwidth, and latency is unmistakable.

The biggest challenge posed by AI training is in moving these massive datasets between the memory and processor, and here, the memory system itself is the biggest bottleneck. As compute performance has increased, memory architectures have had to evolve and innovate to keep pace. Today, high-bandwidth memory (HBM) is the most efficient solution for the industry’s most demanding applications like AI and HPC.

History of memory architecture

In the 1940s, the von Neumann architecture was developed, and it became the basis for computing systems. In this control-centric design, a program’s instructions and data are stored in the computer’s memory. The CPU fetches instructions and data sequentially, creating idle time while the processor waits for them to return from memory. The rapid evolution of processors and the relatively slower improvement of memory eventually created the first system memory bottlenecks.

Figure 1 Here is a basic arrangement showing how processor and memory work together. Source: Wikipedia

As memory systems evolved, memory bus widths and data rates increased, enabling higher memory bandwidths that improved this bottleneck. The rise of graphics processing units (GPUs) and HPC in the early 2000s accelerated the compute capabilities of systems and brought with them a new level of pressure on memory systems to keep compute and memory systems in balance.

This led to the development of new DRAMs, including graphics double data rate (GDDR) DRAMs, which prioritized bandwidth. GDDR was the dominant high-performance memory until AI and HPC applications went mainstream in the 2000s and 2010s, when a newer type of DRAM was required in the form of HBM.

Figure 2 The above chart highlights the evolution of memory over more than two decades. Source: Amir Gholami

The rise of HBM for AI

HBM is the solution of choice to meet the demands of AI’s most challenging workloads, with industry giants like Nvidia, AMD, Intel, and Google utilizing HBM for their largest AI training and inference work. Compared to standard double-data rate (DDR) or GDDR DRAMs, HBM offers higher bandwidth and better power efficiency in a similar DRAM footprint.

It combines vertically stacked DRAM chips with wide data paths and a new physical implementation where the processor and memory are mounted together on a silicon interposer. This silicon interposer allows thousands of wires to connect the processor to each HBM DRAM.

The much wider data bus enables more data to be moved efficiently, boosting bandwidth, reducing latency, and improving energy efficiency. While this newer physical implementation comes at a greater system complexity and cost, the trade-off is often well worth it for the improved performance and power efficiency it provides.

The HBM4 standard, which JEDEC released in April of 2025, marked a critical leap forward for the HBM architecture. It increases bandwidth by doubling the number of independent channels per device, which in turn allows more flexibility in accessing data in the DRAM. The physical implementation remains the same, with the DRAM and processor packaged together on an interposer that allows more wires to transport data compared to HBM3.

While HBM memory systems remain more complex and costlier to implement than other DRAM technologies, the HBM4 architecture offers a good balance between capacity and bandwidth that offers a path forward for sustaining AI’s rapid growth.

AI’s future memory need

With LLMs growing at a rate of 30% to 50% year over year, memory technology will continue to be challenged to keep up with the industry’s performance, capacity, and power-efficiency demands. As AI continues to evolve and find applications at the edge, power-constrained applications like advanced AI agents and multimodal models will bring new challenges such as thermal management, cost, and hardware security.

The future of AI will continue to depend as much on memory innovation as it will on compute power itself. The semiconductor industry has a long history of innovation, and the opportunity that AI presents provides compelling motivation for the industry to continue investing and innovating for the foreseeable future.

Steve Woo is a memory system architect at Rambus. He is a distinguished inventor and a Rambus fellow.

Special Section: AI Design


Zero maintenance asset tracking via energy harvesting

Fri, 01/16/2026 - 15:00

Real-time tracking of assets has enabled both supply chain digitalization and operational efficiency leaps. These benefits, driven by IoT advances, have proved transformational. As a result, the market for asset-tracking systems for transportation and logistics firms is set to triple, reaching USD 22.5 billion by 2034¹. And, if we look across all sectors, the asset tracking market is forecasted to grow at a CAGR of 15%, reaching USD 51.2 billion by 2030².

However, the ability for firms to maximize the benefits of asset tracking is being constrained by the finite power limitations of a single component, the battery. Reliance on batteries has a number of disadvantages. In addition to the battery cost, battery replacement across multiple locations increases operational costs and demands considerable time and effort.

At the same time, batteries can cause system-wide vulnerabilities. When a tag’s battery unexpectedly fails, for example, a tracked item can effectively disappear from the network and the corresponding data is no longer collected. This, in turn, leads to supply chain disruptions and bottlenecks, sometimes even production line downtime, and reduces the very efficiencies the IoT-based system was designed to deliver (Figure 1).

Figure 1 Real-time tracking of assets is transforming logistics operations, enabling supply chain digitalization and unlocking major efficiency gains.

Battery maintenance

A “typical” asset tracking tag will implement two core functions: location and communications. For long-distance shipping, GPS will primarily be used as the location identifier. In a logistics warehouse, GPS coverage can be poor, but Wi-Fi scanning remains an option. Other efficient systems include FSK or BLE beacons, Wirepas mesh, or Quuppa’s angle of arrival (AoA).

For data communication, several protocols are possible:

  • BLE if the assets remain indoors
  • LTE-M if global coverage is a key requirement, and the assets are outdoors
  • LoRaWAN if seamless indoor and outdoor coverage is needed, as this can use private, public, community, and satellite networks, with some of them offering native multi-country coverage.

Sensors can also improve functionality and efficiency. For example, an accelerometer can be added to identify when a tag moves and then initiate a wake-up. Other sensors can determine a package’s status and condition. In the case of energy harvesting, the power management chip can indicate the amount of energy that is available. Therefore, the behavior of the device can also be adapted to this information. The final important component on the board of an asset tracker will be an energy-efficient MCU.

The stated battery life of a 15-dollar tag will often be overestimated, mainly because of radio protocol behaviors. But even if the battery cost itself is limited, the replacement cost can be estimated at around 50 dollars once man-hours are factored in.

An alternative tag based on the latest energy harvesting technology might have an initial cost of around 25 dollars, but with no batteries to replace, its total cost over a decade remains essentially the same, whereas even a single battery replacement already pushes a 15-dollar tag above that level.

For example, in the automotive industry, manufacturers transport parts using large reusable metal racks. Each manufacturer will use tens of thousands of these, each valued at around 500 dollars. We have been told that, because of scanning errors and mismanagement, up to 10 percent go missing each year.

By equipping racks with tags powered from harvested energy, companies can create an automated inventory system. This results in annual OPEX savings that can be in the order of millions of dollars, a return on investment within months, and lower CAPEX since fewer racks are required for the same production volume.

Self-powered tracking

Unlike battery-powered asset trackers, Ambient IoT tags use three core blocks to supply energy to the system: the harvester, an energy storage element, and a power management IC. Together, these enable energy to be harvested as efficiently as possible.

Energy sources can range from RF through thermoelectric to vibration, but for many logistics and transport applications, the most readily available and most commonly used source is light. And this will be natural (solar) or ambient, depending on whether the asset being tracked spends most of its life outdoors (e.g., a container) or indoors (e.g., a warehouse environment).

For outdoor asset trackers on containers or vehicles, significant energy can be harvested from direct sunlight using traditional photovoltaic (PV) amorphous silicon panels. When space is limited, monocrystalline silicon technology provides a higher power density and still works well indoors. For indoor light levels, in addition to the traditional amorphous silicon, there are three additional technologies that become available and cost-effective for these use cases.

  • Organic photovoltaic (OPV) cells can provide up to twice the power density of amorphous silicon. Furthermore, the flexibility of these PV cells allows for easy mechanical implementation on the end device.
  • Dye-sensitized solar cells bring even higher power densities and exhibit low degradation levels over time, but they are sometimes limited by the requirement for a glass substrate, which prevents flexibility.
  • Perovskite PV cells also reach similar power densities as dye-sensitized solar cells, with the possibility of a flexible substrate. However, these have challenges related to lead content and aging.

Before selecting a harvester, an evaluation of the PV cell should be undertaken. This should combine both laboratory measurements and real-world performance tests, along with an assessment of aging characteristics (to ensure that the lifetime of the PV cell exceeds the expected end-of-life of the tracker) and mechanical integration into the casing. The manufacturer chosen to supply the technology should also be able to support large-scale deployments.

When it comes to energy storage, such a system may require either a small, rechargeable chemical-based battery or a supercapacitor. Alternatively, there is the lithium capacitor (a hybrid of the two). Each has distinct characteristics regarding energy density and self-discharge. The right choice will depend on a number of factors, including the application’s required operating temperature and longevity.

Finally, a power management IC (PMIC) must be chosen. This provides the interface between the PV cell and the storage element, and manages the energy flow between the two, something that needs to be done with minimal losses. The PMIC should be optimized to maximize the lifespan of the energy storage element, protecting it from overcharging and overdischarging, while delivering a stable, regulated power output to the tag’s application electronics (Figure 2).

For an indoor industrial environment, where ambient light levels can be low, there is the risk of the storage element becoming fully depleted. It is therefore crucial that the PMIC can perform a cold start in these conditions, when only a small amount of energy is available.

In developing the most appropriate system for a given asset tracking application, it is important to undertake a power budget analysis, which considers both the energy consumed by the application and the energy available for harvesting. Given the size of the device and its power consumption, it is relatively straightforward to determine, for any given PV cell technology, the hours of light per day and the luminosity (lux level) needed for the device to run autonomously by harvesting more energy over a 24-hour period than it consumes.
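
As a simple illustration of that budget check, the sketch below compares the energy a tag consumes per day with the energy a small PV cell might harvest. Every figure in it (sleep and active currents, transmission count, lux level, cell area, and the microwatt-per-lux conversion) is an invented placeholder for illustration, not a measured value for any real tag.

    #include <stdio.h>

    /* Rough daily power-budget check for a light-harvesting tag. Every number
       here is an invented placeholder for illustration, not a measured value. */
    int main(void) {
      /* Consumption side */
      double sleep_uA   = 2.0;    /* average sleep current, microamps            */
      double active_mA  = 15.0;   /* MCU + radio current while transmitting, mA  */
      double tx_per_day = 48.0;   /* transmissions per day                       */
      double tx_seconds = 1.5;    /* wake-up + airtime per transmission, seconds */
      double vcc        = 3.0;    /* supply voltage, volts                       */

      double consumed_mWh = (sleep_uA / 1000.0 * 24.0 +                     /* sleep  */
                             active_mA * tx_per_day * tx_seconds / 3600.0)  /* active */
                            * vcc;

      /* Harvesting side: indoor PV output scales roughly with lux and cell area.
         The 1 uW/cm^2 per 100 lux figure is a ballpark placeholder. */
      double lux = 500.0, cell_cm2 = 10.0, hours_lit = 10.0;
      double harvested_mWh = (lux / 100.0) * 1.0 * cell_cm2 * hours_lit / 1000.0;

      printf("consumed  %.2f mWh/day\n", consumed_mWh);
      printf("harvested %.2f mWh/day -> %s\n", harvested_mWh,
             harvested_mWh > consumed_mWh ? "autonomous" : "needs more light or storage");
      return 0;
    }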

The storage element size is also critical as it determines how long the device can operate without any power at the source. And even if power consumption is too high to make it fully autonomous, the application of energy harvesting can be used to significantly extend battery life.

Figure 2 e-peas has worked with several leading tracking system developers, including MOKO SMART (top), Minew (left), inVirtus (center), and Jeng IoT (right), to implement energy harvesting in asset trackers. Source: e-peas

Examples of energy-harvested tracking systems

Companies such as inVirtus, Jeng IoT, Minew, and MOKO SMART, all leaders in developing logistics and transportation tracking systems, have already started transitioning to energy-harvesting-powered asset trackers. And notably, these devices are delivering significant returns in complex logistical environments.

Minew’s device, for example, implements Epishine’s ultra-thin solar cells to create a credit card-sized asset tracker. MOKO SMART’s L01A-EH is a BLE-based tracker with a three-axis accelerometer and temperature and humidity sensors. These tags, which can be placed on crates to track their journey through a production process, give precise data on lead times and dwell times at each station. This allows monitoring of efficiency and the highlighting of bottlenecks in the system.

A good example of such benefits can be found at Thales, where the inVirtus EOSFlex Beacon battery-free tag is being used. The company has cited a saving of 30 minutes on tracking part movements when monitoring work orders, after switching to a system in which each work order is digitally linked to a tagged box. Because each area of the factory corresponds to a specific task, the tag’s indoor location provides accurate manufacturing process monitoring.

Additionally, the system saves time by selecting the highest priority task and activating a blinking LED on the corresponding box. It also improves both lead time prediction accuracy and scheduling adherence—the alignment between the planned schedule and actual work progress.

The tags have also been used to locate measurement equipment shared by multiple divisions, and Thales has reported savings of up to two hours when locating these pieces of equipment. This is a critical difference as each instance of downtime represents a major cost, and without this tracking, the company would incur significant maintenance delays that could stop the production line.

Additionally, one aviation manufacturer that is also using this approach to track the work orders has improved scheduling adherence from 30% up to 90%.

Ultimately, energy harvesting in logistics is not simply about eliminating batteries, but about building more resilient, predictable, and cost-effective supply chains. Perpetually powered tracking systems provide constant and reliable visibility, allow for more accurate lead-time predictions, better resource planning, and a significant reduction in the operational friction caused by lost or untraceable assets.

Pierre Gelpi graduated from École Polytechnique in Paris and obtained a Master’s degree from the University of Montreal in Canada. He has 25 years of experience in the telecommunications industry. He began his career at Orange Labs, where he spent eight years working on radio technologies and international standardization. He then served for five years as Technical Director for large accounts at Orange Business Services. After Orange, he joined Siradel, where he led sales and customer operations for wireless network planning and smart city projects, notably in Chile. He subsequently co-founded the first SaaS-based radio planning tool dedicated to IoT.
In 2016, he joined Semtech, where he was responsible for LoRa business development in the EMEA region, driving demand creation to accelerate market growth, particularly in the track-and-trace segment. He joined e-peas in 2024 to lead Sales in EMEA and to promote the vision of unlimited battery life.
References:

  1. “Real Time Location Systems in Transportation and Logistics Market Outlook Report 2025-2034,” Yahoo! Finance. https://uk.finance.yahoo.com/news/real-time-location-systems-transportation-150900694.html?guccounter=2
  2. “Asset Tracking Market Size & Share: Industry Report, 2030,” Grand View Research. https://www.grandviewresearch.com/industry-analysis/asset-tracking-market-report


AI workloads demand smarter SoC interconnect design

Fri, 01/16/2026 - 11:39

Artificial intelligence (AI) is transforming the semiconductor industry from the inside out, redefining not only what chips can do but how they are created. This impacts designs from data centers to the edge, including endpoint applications such as autonomous driving, drones, gaming systems, robotics, and smart homes. As complexity pushes beyond the limits of conventional engineering, a new generation of automation is reshaping how systems come together.

Instead of manually placing every switch, buffer, and timing pipeline stage, engineers can now use automation algorithms to generate optimal network-on-chip (NoC) configurations directly from their design specifications. The result is faster integration and shorter wirelengths, driving lower power consumption and latency, reduced congestion and area, and a more predictable outcome.

Below are the key takeaways of this article about AI workload demands in chip design:

  1. AI workloads have made existing SoC interconnect design impractical.
  2. Intelligent automation applies engineering heuristics to generate and optimize NoC architectures.
  3. Physically aware algorithms enhance timing closure, reduce power consumption, and shorten design cycles.
  4. Network topology automation is enabling a new class of AI system-on-chips (SoCs).

 

Machine learning guides smarter design decisions

As SoCs become central to AI systems, spanning high-performance computing (HPC) to low-power devices, the scale of on-chip communication now exceeds what traditional methods can manage effectively. Integrating thousands of interconnect paths has created data-movement demands that make automation essential.

Engineering heuristics analyze SoC specifications, performance targets, and connectivity requirements to make design decisions. This automation optimizes the resulting interconnect for throughput and latency within the physical constraints of the device floorplan. While engineers still set objectives such as bandwidth limits and timing margins, the automation engine ensures the implementation meets those goals with optimized wirelengths, resulting in lower latency and power consumption and reduced area.

This shift marks a new phase in automation. Decades of learned engineering heuristics are now captured in algorithms that are designing silicon that enables AI itself. By automatically exploring thousands of variations, NoC automation determines optimal topology configurations that meet bandwidth goals within the physical constraints of the design. This front-end intelligence enables earlier architectural convergence and provides the stability needed to manage the growing complexity of SoCs for AI applications.

Accelerating design convergence

In practice, automation generates and refines interconnect topologies based on system-level performance goals, eliminating the need for laborious repeated manual engineering adjustments, as shown in Figure 1. These automation capabilities enable rapid exploration and convergence of multiple different design configurations, shortening NoC iteration times by up to 90%. The benefits compound as designs scale, allowing teams to evaluate more options within a fixed schedule.

Figure 1 Automation replaces manual NoC generation, reducing power and latency while improving bandwidth and efficiency. Source: Arteris

Equally important, automation improves predictability. Physically aware algorithms recognize layout constraints early, minimizing congestion and improving timing closure. Teams can focus on higher-level architectural trade-offs rather than debugging pipeline delays or routing conflicts late in the flow.

AI workloads place extraordinary stress on interconnects. Training and inference involve moving vast amounts of data between compute clusters and high-bandwidth memory, where even microseconds of delay can affect throughput. Automated topology optimization keeps traffic flowing smoothly to maintain consistent operation under heavy loads.

Physical awareness drives efficiency

In 3-nm technologies and beyond, routing wire parasitics are a significant factor in energy use. Automated NoC generation incorporates placement and floorplan awareness, optimizing wirelength and minimizing congestion to improve overall power efficiency.

Physically guided synthesis accelerates final implementation, allowing designs to reach timing closure faster, as Figure 2 illustrates. This approach provides a crucial advantage as interconnects now account for a large share of total SoC power consumption.

Figure 2 Smart NoC automation optimizes wirelength, performance, and area, delivering faster topology generation and higher-capacity connectivity. Source: Arteris

The outcome is silicon optimized for both computation and data movement. Automation enables every signal to take the best route possible within physical and electrical limits, maximizing utilization and overall system performance.

Additionally, automation delivers measurable gains in AI architectures. For example, in data centers, automated interconnect optimization manages multi-terabit data flows among heterogeneous processors and high-bandwidth memory stacks.

At the edge, where latency and battery life are critical, automation enables SoCs to process data locally without relying on the cloud. Across both environments, interconnect fabric automation ensures that systems meet escalating computational demands while remaining within realistic power envelopes.

Automation in designing AI

Automation has become both the architect and the workload. Automated systems can be used to explore multiple design options, optimize for power and performance simultaneously, and reuse verified network templates across derivative products. These advances redefine productivity, allowing smaller engineering teams to deliver increasingly complex SoCs in less time.

By embedding intelligence into the design process, automation transforms the interconnect from a passive conduit into an active enabler of AI performance. The result is a new generation of optimized silicon, where the foundation of computing evolves in step with the intelligence it supports.

Automation has become indispensable for next-generation SoCs, where the pace of architectural change exceeds traditional design capacity. By combining data analysis, physical awareness, and adaptive heuristics, engineers can build systems that are faster, leaner, and more energy efficient. These qualities define the future of AI computing.

Rick Bye is director of product management and marketing at Arteris, overseeing the FlexNoC family of non-coherent NoC IP products. Previously, he was a senior product manager at Arm, responsible for a demonstration SoC and compression IP. Rick has extensive product management and marketing experience in semiconductors and embedded software.

Special Section: AI Design

