Microelectronics world news
Partners bring centimeter-level GNSS to IoT

Quectel is bundling its Real-Time Kinematic (RTK)-capable GNSS modules and antennas with Swift Navigation’s Skylark RTK correction service. Together, the hardware and service enable centimeter-level positioning accuracy for mass-market IoT applications and streamline RTK adoption.

Partnering with Swift allows Quectel to deliver optimized solutions for specific applications, helping equipment manufacturers navigate the complexities of RTK adoption. The Quectel RTK Correction Solution supports a wide range of use cases, including robotics, automotive, micro-mobility, precision agriculture, surveying, and mining. Swift’s Skylark provides multi-constellation, multi-frequency RTK corrections with broad geographic coverage across North America, Europe, and Asia-Pacific.
The global RTK offering ensures consistent compatibility and performance across regions, supporting quad-band GNSS RTK modules such as the LG290P, LG580P, and LG680P, as well as the dual-band LC29H series. These modules maintain exceptional RTK accuracy even in challenging environments. Quectel complements its hardware with full-stack services, including engineering support, precision antenna provisioning, and tuning.
The post Partners bring centimeter-level GNSS to IoT appeared first on EDN.
Multiprotocol firmware streamlines LoRa IoT design

Semtech’s Unified Software Platform (USP) for its LoRa Plus transceivers enables multiprotocol IoT deployments on a single hardware platform. It manages LoRaWAN, Wireless M-Bus, Wi-SUN FSK, and proprietary protocols, eliminating the need for protocol-specific hardware variants.

LoRa Plus LR20xx transceivers integrate 4th-generation LoRa IP that supports both terrestrial and non-terrestrial networks across sub-GHz, 2.4-GHz ISM, and licensed S-bands. The LoRa USP provides a unified firmware ecosystem for multiprotocol operation on various MCU platforms through open-source environments such as Zephyr. It also offers backward-compatible build options for Gen 2 SX126x and Gen 3 LR11xx devices.
LoRa USP succeeds LoRa Basics Modem as Semtech’s multiprotocol firmware platform. Both platforms share the same set of APIs, ensuring a seamless transition to the USP version. USP supports both bare-metal and Zephyr OS implementations.
The post Multiprotocol firmware streamlines LoRa IoT design appeared first on EDN.
Designer’s guide: PMICs for industrial applications

Power management integrated circuits (PMICs) are an essential component in the design of any power supply. Their main function is to integrate several complex features, such as switching and linear power regulators, electrical protection circuits, battery monitoring and charging circuits, energy-harvesting systems, and communication interfaces, into a single chip.
Compared with a solution based on discrete components, PMICs greatly simplify development of the power stage, reducing the number of components required, accelerating validation, and thereby shortening the design’s time to market. In addition, PMICs qualified for specific applications, such as automotive or industrial, are commercially available.
In industrial and industrial IoT (IIoT) applications, PMICs address key power challenges such as high efficiency, robustness, scalability, and flexibility. The use of AI techniques is being investigated to improve PMIC performance, with the aim of reducing power losses, increasing energy efficiency, and reducing heat dissipation.
Achieving high efficiency
Industrial and IIoT applications require multiple power rails with different voltage and current requirements. Logic processing components, such as microcontrollers (MCUs) and FPGAs, require very low voltages, while peripherals, such as GPIOs and communication interfaces, require voltages of 3.3 V, 5 V, or higher.
These requirements are now met by multichannel PMICs, which integrate switching buck, boost, or buck-boost regulators, as well as one or more linear regulators, typically of the low-dropout (LDO) type, and power switches, very useful for motor control. Switching regulators offer very high efficiency but generate electromagnetic noise related to the charging and discharging process of the inductor.
LDO regulators, which achieve high efficiency only when the output voltage differs slightly from the input voltage to the converter, are instead suitable for low-noise applications such as sensors and, more generally, where analog voltages with very low amplitude need to be managed.
Besides multiple power rails, industrial and IIoT applications require solutions with high efficiency. This requirement is essential for prolonging battery life, reducing heat dissipation, and saving space on the printed-circuit board (PCB) by using fewer components.
To achieve high efficiency, one of the first parameters to consider is the quiescent current (IQ), which is the current that the PMIC draws when it is not supplying any load, while keeping the regulators and other internal functions active. A low IQ value reduces power losses and is essential for battery-powered applications, enabling longer battery operation.
PMICs are now commercially available that integrate regulators with very low IQ values, on the order of microamps or less. However, a low IQ value should not compromise transient response, another parameter to consider for efficiency. Transient response, or response time, indicates the time required by the PMIC to adapt to sudden load changes, such as when switching from no load to an active load. In general, depending on the specific application, it is advisable to find the right compromise between these two parameters.
Nordic Semiconductor’s nPM2100 (Figure 1) is an example of a low-power PMIC. Integrating an ultra-efficient boost regulator, the nPM2100 provides a very low IQ, addressing the needs of various battery-powered applications, including Bluetooth asset tracking, remote controls, and smart sensors.
The boost regulator can be powered from an input range of 0.7 to 3.4 V and provides an output voltage in the range of 1.8 V to 3.3 V, with a maximum output current of 150 mA. It also integrates an LDO/load switch that provides up to 50-mA output current with an output voltage in the range of 0.8 V to 3.0 V.
The nPM2100’s regulator offers an IQ of 150 nA and achieves up to 95% power conversion efficiency at 50 mA and 90.5% efficiency at 10 µA. The device also has a 35-nA low-current ship mode that allows it to be transported without removing the installed battery. Multiple options are available for waking the device from this low-power state.
An ultra-low-power wakeup timer is also available. This is suitable for timed wakeups, such as Bluetooth LE advertising performed by a sensor that remains in an idle state for most of the time. In this hibernate state, the maximum current absorbed by the device is 200 nA.
Another relevant parameter that helps to increase efficiency is dynamic voltage and frequency scaling (DVFS).
When powering logic devices built with CMOS technology, such as common MCUs, processors, and FPGAs, a distinction can be made between static and dynamic power consumption. While the former is simply the product of the supply voltage and the idle current, dynamic power is expressed by the following formula:
Pdynamic = C × VCC² × fSW
where C is the load capacitance, VCC is the supply voltage applied to the device, and fSW is the switching frequency. This formula shows that the dissipated power has a quadratic relationship with voltage and a linear relationship with frequency. The DVFS technique works by reducing these two electrical parameters, adapting them to the dynamic requirements of the load.
Consider now a sensor that transmits data sporadically and for short intervals, or an industrial application, such as a data center’s board running AI models. By reducing both voltage and frequency when they are not needed, DVFS can optimize power management, enabling significant improvements in energy efficiency.
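A quick numeric sketch of the formula above shows the leverage DVFS has (the capacitance, voltage, and frequency values below are purely illustrative):

```c
#include <assert.h>

/* Illustrative only: CMOS dynamic power, Pdynamic = C × VCC² × fSW */
double dynamic_power(double c_load, double vcc, double fsw) {
  return c_load * vcc * vcc * fsw;
}
```

With C = 1 nF, dropping from 1.2 V / 200 MHz to 0.6 V / 100 MHz cuts dynamic power by a factor of eight: 4× from the quadratic voltage term and 2× from the linear frequency term.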
NXP Semiconductors’ PCA9460 is a 13-channel PMIC specifically designed for low-power applications. It supports the i.MX 8ULP ultra-low-power processor family, providing four high-efficiency 1-A step-down regulators, four VLDOs, one SVVS LDO, and four 150-mΩ load switches, all enclosed in a 7 × 6-bump-array, 0.4-mm-pitch WSCSP42 package.
The four buck regulators offer an ultra-low IQ of 1.5 μA at low-power mode and 5.5 μA at normal mode, while the four LDOs achieve an IQ of 300 nA. Two buck regulators support smart DVFS, enabling the PMIC to always set the right voltage on the processors it is powering. This feature, enabled through specific pins of the PMIC, minimizes the overall power consumption and increases energy efficiency.
Energy harvesting
The latest generation of PMICs has introduced the possibility of harvesting energy from sources such as light, heat, vibration, and radio waves, opening up new scenarios for systems used in industrial and IIoT environments. This capability is particularly important in IIoT and wireless devices, where maintaining a continuous power source for long periods is a significant challenge.
Nexperia’s NEH71x0 low-power PMIC (Figure 2) is a full power management solution integrating advanced energy-harvesting features. Harvesting energy from ambient power sources, such as indoor and outdoor PV cells, kinetic (movement and vibrations), piezo, or a temperature gradient, this device allows designers to extend battery life or recharge batteries and supercapacitors.
With an input power range from 15 μW to 100 mW, the PMIC achieves an efficiency up to 95%, features an advanced maximum power-point tracking block that uses a proprietary algorithm to deliver the highest output to the storage element, and integrates an LDO/load switch with a configurable output voltage from 1.2 V to 3.6 V.
Reducing the bill of materials and PCB space, the NEH71x0 eliminates the need for an external inductor, offering a compact footprint in a 4 × 4-mm QFN28 package. Typical applications include remote controls, smart tags, asset trackers, industrial sensors, environmental monitors, tire pressure monitors, and any other IIoT application.
Figure 2: Nexperia’s NEH71x0 energy-harvesting PMIC can convert energy with an efficiency of up to 95%. (Source: Nexperia)
PMICs for AI and AI in PMICs
To meet the growing demand for power in the industrial sector and data centers, Microchip Technology Inc. has introduced the MCP16701, a PMIC specifically designed to power high-performance logic devices, such as Microchip’s PIC64GX microprocessors and PolarFire FPGAs. The device integrates eight 1.5-A buck converters that can be connected in parallel, four 300-mA LDOs, and a controller for driving external MOSFETs.
The MCP16701 offers a small footprint of 8 × 8 mm in a VQFN package (Figure 3), enabling a 48% reduction in PCB area and a 60% reduction in the number of components compared with a discrete solution. All converters, which can be connected in parallel to achieve a higher output current, share the same inductor.
A unique feature of this PMIC is its ability to dynamically adjust the output voltage on all converters in steps of 12.5 mV or 25 mV, with an accuracy of ±0.8% over the temperature range. This flexibility allows designers to precisely adjust the voltage supplied to loads, optimizing energy efficiency and system performance.
Figure 3: Microchip’s MCP16701 enables engineers to fine-tune power delivery, improving system efficiency and performance. (Source: Microchip Technology Inc.)
As in many areas of modern electronics, AI techniques are also being studied and introduced in the power management sector. This area of study is referred to as cognitive power management. PMICs, for example, can use machine-learning techniques to predict load evolution over time, adjusting the output voltage value in real time.
Tools such as PMIC.AI, developed by AnDAPT, use AI to optimize PMIC architecture and component selection, while Alif Semiconductor’s autonomous intelligent power management (aiPM) tool dynamically manages power based on AI workloads. These solutions enable voltage scaling, increasing system efficiency and extending battery life.
The post Designer’s guide: PMICs for industrial applications appeared first on EDN.
Basic design equations for three precision current sources

A frequently encountered category of analog system component is the precision current source. Many good designs are available, but concise and simple arithmetic for choosing the component values necessary to tailor them to specific applications isn’t always provided. I guess some designers feel such tedious details are just too trivially obvious to merit mentioning. But I sometimes don’t feel that.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Here are some examples I think some folks might find useful. I hope they won’t feel too terribly obvious, trivial, or tedious.
The circuit in Figure 1 is versatile and capable of high performance.
Figure 1 A simple high-accuracy current source that can source current with better than 1% accuracy.
With suitable component choices, this circuit can source current with better than 1% accuracy, handle Q1 drain currents from <1 mA to >10 A, and work with power supply voltages (Vps) from <5 V to >100 V.
Here are some helpful hints for resistor values, resistor wattages, and safety zener D1. First note
- Vps = power supply voltage
- R1(W), Q1(W), and R2(W) = respective component power dissipation
- Id = Q1 drain current in amps
Adequate heat sinking for Q1(W) is assumed. Also assumed:
Vps > Q1 (Vgs ON voltage) + 1.24 + R1*100µA
The design equations are as follows:
- R1 = (Vps – 1.24)/1mA
- R1(W) = R1/1E6
- Q1(W) = (Vps – Vload – 1.24)*Id
- R2 = 1.24/Id
- R2(W) = 1.24 Id
- R2 precision 1% or better at the temperature produced by the R2(W) heat dissipation above
- D1 is needed only if Vps > 15V
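As a sanity check, the design equations above can be wrapped in a small function; this is a sketch using the variable names above, where the 1.24-V term is the LM4041 reference voltage:

```c
#include <assert.h>

/* Sketch of the Figure 1 design math; 1.24 V is the LM4041 reference.
   Inputs in volts/amps, outputs in ohms/watts. */
typedef struct { double R1, R1_W, Q1_W, R2, R2_W; } CSrcDesign;

CSrcDesign design_current_source(double Vps, double Vload, double Id) {
  CSrcDesign d;
  d.R1 = (Vps - 1.24) / 1e-3;         /* biases the reference at ~1 mA  */
  d.R1_W = d.R1 * 1e-6;               /* P = I²R with I = 1 mA          */
  d.Q1_W = (Vps - Vload - 1.24) * Id; /* headroom across Q1 times Id    */
  d.R2 = 1.24 / Id;                   /* current-setting sense resistor */
  d.R2_W = 1.24 * Id;                 /* P = V × I across R2            */
  return d;
}
```

For Vps = 15 V, Vload = 5 V, and Id = 1 A, this gives R1 ≈ 13.76 kΩ, R2 = 1.24 Ω, and roughly 8.8 W dissipated in Q1, confirming the need for a heat sink.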
Figure 2 substitutes an N-channel MOSFET for Figure 1’s Q1 and an anode-referenced 431 regulator chip in place of the cathode-referenced LM4041 to produce a very similar current sink. Its design equations are identical.

Figure 2 A simple, high-accuracy current sink uses identical design math.
Okay, okay, I can almost hear the (very reasonable) objection that, for these simple circuits, the design math really was pretty much tedious, trivial, and obvious.
So I’ll finish with a much less obvious and more creative example from frequent contributor Christopher Paul’s DI “Precision, voltage-compliant current source.”
Taking parts parameters from Christopher Paul’s Figure 3, we can define:
- Vs = chosen voltage across the R3R4 divider
- V5 = voltage across R5
- Id = chosen application-specific M1 drain current
Then:
- Vs = 5V
- V5 = 5V – 0.65V = 4.35V
- R5 = 4.35V/150µA = 30kΩ
- I4 = Id – 290µA
- R3 = 1.24/I4
- R4 = (Vs – 1.24)/I4 = 3.76/I4
- R3(W) = 1.24 I4
- R4(W) = 3.76 I4
- M1(W) = Id(Vs – Vd)
For example, if Id = 50 mA and Vps = 15 V, then:
- I4 = 49.7 mA
- R5 = 30 kΩ
- R4 = 75.7 Ω
- R3 = 25.2 Ω
- R3(W) = 1.24 I4 = 100 mW
- R4(W) = 3.76 I4 = 200 mW
- M1(W) = 500 mW
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- A precision, voltage-compliant current source
- LM4041 voltage regulator impersonates precision current source
- Simple, precise, bi-directional current source
- A high-performance current source
- Precision programmable current sink
The post Basic design equations for three precision current sources appeared first on EDN.
CSconnected announces £1m third call to Supply Chain Development Programme
Microchip introduces edge-enabling LAN866x 10BASE-T1S Ethernet for SDVs
As the automotive industry transitions to zonal architectures for in-vehicle networking, designers face increasing challenges in connecting a growing number of sensors and actuators. Traditional approaches often rely on microcontrollers and custom software for each network node, resulting in greater system complexity, higher costs and longer development cycles. To overcome these obstacles, Microchip Technology introduced its LAN866x family of 10BASE-T1S endpoint devices with Remote Control Protocol (RCP), extending Ethernet connectivity to the very edge of in-vehicle networks and enabling the vision of Software Defined Vehicles (SDVs).
The LAN866x endpoints simplify network integration by serving as bridges that translate Ethernet packets directly to local digital interfaces. Unlike conventional solutions, these endpoints are software-less, eliminating node-specific software programming while streamlining silicon usage and physical footprint. With support for the standards-based RCP, the endpoints enable centralized control of edge nodes for data streaming and device management. By utilizing a 10BASE-T1S multidrop topology, this solution supports an all-Ethernet zonal architecture that helps reduce cabling, software integration effort, and cost.
By removing the need for software development at every node, the LAN866x endpoints reduce both hardware and engineering costs, accelerate deployment timelines, and simplify system architecture. The endpoints are well-suited for critical automotive applications such as lighting (interior, front, and rear lamps), audio systems, and a wide range of control functions. In these applications, the endpoints bridge Ethernet data directly to local digital interfaces, controlling LED drivers for lighting, transmitting audio data to and from microphones and speakers, and controlling sensors and actuators over the network.
“With the addition of these RCP endpoint devices, Microchip’s Single Pair Ethernet product line empowers designers to realize a true all-Ethernet architecture for Software-Defined Vehicles,” said Charlie Forni, corporate vice president of Microchip’s networking and communications business unit. “We are committed to delivering innovative solutions and supporting our customers with global technical expertise, comprehensive documentation and development tools to further reduce design complexity and help them bring vehicles to market faster.”
The post Microchip introduces edge-enabling LAN866x 10BASE-T1S Ethernet for SDVs appeared first on ELE Times.
Tower extends 300mm wafer bonding technology across SiPho and SiGe BiCMOS
Ascent Solar and NovaSpark to team on lightweight power solutions for drones and terrestrial defense applications
NUBURU and Tekne forge renewed partnership
Bought a few sizes beefier than expected. Just look at that so-called wire. They are like a wood nail.
Caliber Interconnects Accelerates Complex Chiplet and ATE Hardware Design with Cadence Allegro X and Sigrity X Solutions
Caliber Interconnects Pvt. Ltd., announced that it has achieved accelerated turnaround times and first-time-right outcomes for complex chiplet and Automated Test Equipment (ATE) hardware projects. The company has refined its proprietary design and verification workflow, which integrates powerful Cadence solutions to optimize performance, power, and reliability from the earliest stages of design.
Caliber’s advanced methodology significantly enhances the efficiency and precision of designing high-complexity IC packages and dense PCB layouts. By leveraging the Cadence Allegro X Design Platform for PCB and advanced package designs, which features sub-drawing management and auto-routing, Caliber’s teams can work in parallel across various circuit blocks, compressing overall project timelines by up to 80 percent. This streamlined framework is reinforced by a rigorous in-house verification process and custom automation utilities developed using the Allegro X Design Platform’s SKILL-based scripting, ensuring consistent quality and compliance with design rules.
To meet the demands of next-generation interconnects operating at over 100 Gbps, Caliber’s engineers utilize Cadence’s Sigrity X PowerSI and Sigrity X PowerDC solutions. These advanced simulation tools allow the team to analyze critical factors such as signal loss, crosstalk, and power delivery network (PDN) impedance. By thoroughly evaluating IR drop, current density, and Joule heating, Caliber can confidently deliver design signoff, reducing the risk of costly respins and speeding time to market for its customers.
“Our team has elevated our engineering leadership by creating a disciplined workflow that delivers exceptional quality and faster turnaround times for our customers across the semiconductor ecosystem,” said Suresh Babu, CEO of Caliber Interconnects. “Integrating Cadence’s advanced design and simulation environment into our proprietary methodology empowers us to push the boundaries of performance and reliability in complex chiplet and ATE hardware design.”
The post Caliber Interconnects Accelerates Complex Chiplet and ATE Hardware Design with Cadence Allegro X and Sigrity X Solutions appeared first on ELE Times.
How to limit TCP/IP RAM usage on STM32 microcontrollers

The TCP/IP functionality of a connected device uses dynamic RAM allocation because of the unpredictable nature of network behavior. For example, if a device serves a web dashboard, we cannot control how many clients might connect at the same time. Likewise, if a device communicates with a cloud server, we may not know in advance how large the exchanged messages will be.
Therefore, limiting the amount of RAM used by the TCP/IP stack improves the device’s security and reliability, ensuring it remains responsive and does not crash due to insufficient memory.
Microcontroller RAM overview
On microcontrollers, it is common for available memory to reside in several non-contiguous regions. Each region can have different cache characteristics, performance levels, or power properties, and certain peripheral controllers may only support DMA operations to specific memory areas.
Let’s take the STM32H723ZG microcontroller as an example. Its datasheet, in section 3.3.2, defines embedded SRAM regions:

Here is an example linker script snippet for this microcontroller, generated by STM32CubeMX:

Ethernet DMA memory
We can clearly see that RAM is split into several regions. The STM32H723ZG device includes a built-in Ethernet MAC controller that uses DMA for its operation. It’s important to note that the DMA controller is in domain D2, meaning it cannot directly access memory in domain D1. Therefore, the linker script and source code must ensure that Ethernet DMA data structures are placed in domain D2; for example, in RAM_D2.
To achieve this, first define a section in the linker script and place it in the RAM_D2 region:

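A minimal sketch of such a linker-script section (the section name is hypothetical and must match the name used in the driver code; RAM_D2 must match a memory region defined earlier in the script):

```
/* Place Ethernet DMA descriptors and buffers into D2 SRAM */
.eth_dma_section (NOLOAD) :
{
  . = ALIGN(32);
  *(.EthDmaSection)   /* must match the section name used in C code */
  . = ALIGN(32);
} >RAM_D2
```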
Second, the Ethernet driver source code must place the respective data structures in that section. It may look like this:

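A hedged sketch of that placement (the buffer name and sizes are illustrative; real drivers use the HAL’s Ethernet descriptor types, and the section name must match the one defined in the linker script):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical buffers, forced into .EthDmaSection (placed in RAM_D2) */
__attribute__((section(".EthDmaSection"), aligned(32)))
static uint8_t eth_dma_rx_buf[4][1536];

size_t eth_rx_buf_size(void) { return sizeof(eth_dma_rx_buf); }
```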
Heap memory
The next important part is the microcontroller’s heap memory. The standard C library provides two basic functions for dynamic memory allocation:

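Those two functions are malloc() and free(); a minimal check-the-return-value sketch:

```c
#include <assert.h>
#include <stdlib.h>

/* Allocation can fail on a constrained device -- always check for NULL */
int alloc_demo(void) {
  void *p = malloc(64);
  if (p == NULL) return -1;  /* out of memory */
  free(p);
  return 0;
}
```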
Typically, ARM-based microcontroller SDKs ship with the ARM GCC compiler, which includes the Newlib C library. Like many C libraries, Newlib has a concept of “syscalls”: low-level routines that the user can override and that the standard C functions call. In our case, the malloc() and free() standard C routines call the _sbrk() syscall, which firmware code can override.
This is typically done in the syscalls.c or sysmem.c file, and may look like this:

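The generated file itself is target-specific; the following host-testable sketch captures the idea, with a static array standing in for the linker-defined heap region (on the target, the bounds come from linker symbols, and the function is named _sbrk()):

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

/* Host-side stand-in for the linker-provided heap region */
static char heap_region[4096];
static char *heap_ptr = heap_region;

/* Firmware would name this _sbrk(); renamed here for the host demo */
void *sbrk_demo(ptrdiff_t incr) {
  if (heap_ptr + incr > heap_region + sizeof(heap_region)) {
    errno = ENOMEM;
    return (void *) -1;  /* the single region is exhausted */
  }
  char *prev = heap_ptr;
  heap_ptr += incr;      /* move the program break */
  return prev;
}
```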
As we can see, _sbrk() operates on a single memory region:

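That region is typically delimited by linker-script symbols (the exact names vary between toolchains; _end and _estack are common in CubeMX projects):

```c
extern uint8_t _end;     /* first address after .bss: start of the heap */
extern uint8_t _estack;  /* top of RAM, shared with the stack           */
```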
That means such an implementation cannot span several RAM regions. There are more advanced implementations, such as FreeRTOS’s heap_5.c, which can use multiple RAM regions and provides the pvPortMalloc() and pvPortFree() functions.
In any case, standard C functions malloc() and free() provide heap memory as a shared resource. If several subsystems in a device’s firmware use dynamic memory and their memory usage is not limited by code, any of them can potentially exhaust the available memory. This can leave the device in an out-of-memory state, which typically causes it to stop operating.
Therefore, the solution is to have every subsystem that uses dynamic memory allocation operate within a bounded memory pool. This approach protects the entire device from running out of memory.
Memory pools
The idea behind a memory pool is to split a single shared heap—with a single malloc and free—into multiple “heaps” or memory pools, each with its own malloc and free. The pseudo-code might look like this:

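A minimal bump-pointer sketch of such a pool (real pool allocators, such as the FreeRTOS heap implementations or o1heap, also support freeing individual blocks):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef struct {
  uint8_t *base;
  size_t size;
  size_t offset;
} MemPool;

void pool_init(MemPool *p, void *base, size_t size) {
  p->base = (uint8_t *) base;
  p->size = size;
  p->offset = 0;
}

void *pool_malloc(MemPool *p, size_t n) {
  n = (n + 7u) & ~(size_t) 7u;               /* keep 8-byte alignment     */
  if (p->offset + n > p->size) return NULL;  /* bounded: fail, never grow */
  void *ptr = p->base + p->offset;
  p->offset += n;
  return ptr;
}

void pool_reset(MemPool *p) { p->offset = 0; } /* release everything at once */
```

Because pool_malloc() fails instead of growing, a misbehaving subsystem can exhaust only its own pool, never the whole device.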
The next step is to make each firmware subsystem use its own memory pool. This can be achieved by creating a separate memory pool for each subsystem and using the pool’s malloc and free functions instead of the standard ones.
In the case of a TCP/IP stack, this would require all parts of the networking code—driver, HTTP/MQTT library, TLS stack, and application code—to use a dedicated memory pool. This can be tedious to implement manually.
RTOS memory pool API
Some RTOSes provide a memory pool API. For example, Zephyr provides memory heaps:

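A sketch of a dedicated networking heap using that API (assumes a Zephyr build environment; the 16-KB size is illustrative):

```c
#include <zephyr/kernel.h>

K_HEAP_DEFINE(net_pool, 16 * 1024);  /* statically reserved networking pool */

void *net_alloc(size_t n) {
  /* Returns NULL immediately when the pool is exhausted */
  return k_heap_alloc(&net_pool, n, K_NO_WAIT);
}

void net_release(void *p) {
  k_heap_free(&net_pool, p);
}
```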
The other example of an RTOS that provides memory pools is ThreadX:

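A comparable sketch using ThreadX byte pools (assumes a ThreadX build environment; names and sizes are illustrative):

```c
#include "tx_api.h"

static TX_BYTE_POOL net_pool;
static UCHAR net_pool_mem[16 * 1024];

void net_pool_setup(void) {
  tx_byte_pool_create(&net_pool, "net", net_pool_mem, sizeof(net_pool_mem));
}

void *net_alloc(ULONG n) {
  void *p = NULL;
  /* Fail (return NULL) instead of blocking when the pool is exhausted */
  if (tx_byte_allocate(&net_pool, &p, n, TX_NO_WAIT) != TX_SUCCESS) return NULL;
  return p;
}

void net_release(void *p) {
  tx_byte_release(p);
}
```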
Using external allocator
The other alternative is to use an external allocator. There are many implementations available. Here are some notable ones:
- umm_malloc is specifically designed to work with the ARM7 embedded processor, but it should work on many other 32-bit processors, as well as 16- and 8-bit processors.
- o1heap is a highly deterministic constant-complexity memory allocator designed for hard real-time high-integrity embedded systems. The name stands for O(1) heap.
Example: Mongoose and O1Heap
The Mongoose embedded TCP/IP stack makes it easy to limit its memory usage because Mongoose uses its own functions, mg_calloc() and mg_free(), to allocate and release memory. The default implementation uses the C standard library functions calloc() and free(), but Mongoose allows users to override these functions with their own implementations.
We can preallocate memory for Mongoose at firmware startup, for example 50 KB, and use the o1heap library to manage that preallocated block, implementing mg_calloc() and mg_free() on top of it. Here are the exact steps:
- Fetch o1heap.c and o1heap.h into your source tree
- Add o1heap.c to the list of your source files
- Preallocate memory chunk at the firmware startup

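A sketch of the preallocation step (assumes o1heap’s o1heapInit(base, size) signature and O1HEAP_ALIGNMENT macro; the names and the 50-KB size follow the example above):

```c
#include <stdint.h>
#include "o1heap.h"

/* Statically reserved pool for the TCP/IP stack */
static uint8_t net_mem[50 * 1024] __attribute__((aligned(O1HEAP_ALIGNMENT)));
static O1HeapInstance *net_heap;

void net_mem_init(void) {  /* call once at firmware startup */
  net_heap = o1heapInit(net_mem, sizeof(net_mem));
}
```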
- Implement mg_calloc() and mg_free() using o1heap and preallocated memory chunk

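A sketch of the override step (the exact override mechanism and build flag depend on the Mongoose version; consult its documentation). Because o1heapAllocate() does not zero memory, mg_calloc() must do so explicitly:

```c
#include <string.h>
#include "o1heap.h"

extern O1HeapInstance *net_heap;  /* the pool initialized at startup */

void *mg_calloc(size_t cnt, size_t size) {
  void *p = o1heapAllocate(net_heap, cnt * size);
  if (p != NULL) memset(p, 0, cnt * size);  /* calloc() semantics: zeroed */
  return p;                                 /* NULL when the pool is full */
}

void mg_free(void *ptr) {
  if (ptr != NULL) o1heapFree(net_heap, ptr);
}
```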
You can see the full implementation procedure in the video linked at the end of this article.
Avoid memory exhaustion
This article provides information on the following design aspects:
- Understand STM32’s complex RAM layout
- Ensure Ethernet DMA buffers reside in accessible memory
- Avoid memory exhaustion by using bounded memory pools
- Integrate the o1heap allocator with Mongoose to enforce TCP/IP RAM limits
By isolating the network stack’s memory usage, you make your firmware more stable, deterministic, and secure, especially in real-time or resource-constrained systems.
If you would like to see a practical application of these principles, see the complete tutorial, including a video with a real-world example, which describes how RAM limiting is implemented in practice using the Mongoose embedded TCP/IP stack. This video tutorial provides a step-by-step guide on how to use Mongoose Wizard to restrict TCP/IP networking on a microcontroller to a preallocated memory pool.
As part of this tutorial, a real-time web dashboard is created to show memory usage in real time. The demo uses an STM32 Nucleo-F756ZG board with built-in Ethernet, but the same approach works seamlessly on other architectures too.
Sergey Lyubka is the co-founder and technical director of Cesanta Software Ltd. He is known as the author of the open-source Mongoose Embedded Web Server and Networking Library, which has been on the market since 2004 and has over 12K stars on GitHub. Sergey tackles the issue of making embedded networking simpler to access for all developers.
Related Content
- Developing Energy-Efficient Embedded Systems
- Can MRAM Get EU Back in the Memory Game?
- An MCU test chip embeds 10.8 Mbit STT-MRAM memory
- How MCU memory dictates zone and domain ECU architectures
- Breaking Through Memory Bottlenecks: The Next Frontier for AI Performance
The post How to limit TCP/IP RAM usage on STM32 microcontrollers appeared first on EDN.
New Vishay Intertechnology Silicon PIN Photodiode for Biomedical Applications
Vishay Intertechnology, Inc. introduced a new high speed silicon PIN photodiode with enhanced sensitivity to visible and infrared light. Featuring a compact 3.2 mm by 2.0 mm top-view, surface-mount package with a low 0.6 mm profile, the Vishay Semiconductors VEMD8083 features high reverse light current and fast response times for improved performance in biomedical applications such as heart rate and blood oxygen monitoring.
The device offers a smaller form factor than previous-generation solutions, allowing for integration into compact wearables, such as smart rings, and consumer health monitoring devices. However, while its chip size is reduced, the photodiode’s package is optimized to support a large radiant sensitive area of 2.8 mm², which enables high reverse light current of 11 μA at 525 nm, 14 μA at 660 nm, and 16 μA at 940 nm.
The VEMD8083’s high sensitivity is especially valuable in biomedical applications like photo plethysmography (PPG), where it detects variations in blood volume and flow by measuring light absorption or reflection from blood vessels. Accurate detection in these scenarios is essential for diagnosing and monitoring conditions such as cardiovascular disease.
Pin-to-pin compatible with competing solutions, the device detects visible and near-infrared radiation over a wide spectral range from 350 nm to 1100 nm. For high sampling rates, the VEMD8083 offers fast rise and fall times of 30 ns and a diode capacitance of 50 pF. The photodiode features a ±60° angle of half-sensitivity and an operating temperature range of -40°C to +85°C.
RoHS-compliant, halogen-free, and Vishay Green, the device provides a moisture sensitivity level (MSL) of 3 in accordance with J-STD-020 for a floor life of 168 hours.
Samples and production quantities of the VEMD8083 are available now.
The post New Vishay Intertechnology Silicon PIN Photodiode for Biomedical Applications appeared first on ELE Times.
PCB I found in the recycling center
Thought it looked cool.
So I’m working on this stupid thing…
This is more of a vent, I guess. Maybe it’s because my workbench is in such disarray; my home office is trashed, so I started doing work in the downstairs dining room and fucked that place up, too. Wreaking havoc around the house, and the other half isn’t having it, lol. I’m trying to work on this board and wasn’t thinking about serviceability. Only after everything was done, I was like, “oh shit, this thing better work.” Got everything wired up and proper, did point-to-point verification with a multimeter, and resolved shorts on the 5V bus, but come to find out, when powered on, the ESP32 is not working as expected. Everything is point-to-point soldered. So I need to rebuild this stupid thing from scratch, but the proper way: using wire-wrap techniques and socketing the ESP32 and logic-level-converter boards. FYI, this board is meant to drive a HUB75 RGB panel with text/graphics from a Raspberry Pi’s UART interface. Prototype-wise it’s working, as you can see in the background; now I’m trying to put everything on this perfboard, as it is meant to be displayed in the open. The ESP32 is also driving 8 x MAX7219 8x8 LED matrices. This is an effort to build a thing centered around AI/LLMs. My idea/concept got everyone in the AI community in an uproar, so I’m making an art piece out of it.
My family says I(18) live in a workshop.
Submitted by /u/Ready_Rain_2646
CreeLED sues Promier Products and Tractor Supply
SK keyfoundry accelerating development of SiC-based power semiconductor technology
Infineon’s CoolGaN technology used in Enphase’s new IQ9 solar microinverter
Predictive maintenance at the heart of Industry 4.0

In the era of Industry 4.0, manufacturing is no longer defined solely by mechanical precision; it’s now driven by data, connectivity, and intelligence. Yet downtime remains one of the most persistent threats to productivity. When a machine unexpectedly fails, the impact ripples across the entire digital supply chain: Production lines stop, delivery schedules are missed, and teams scramble to diagnose the issue. For connected factories running lean operations, even a short interruption can disrupt synchronized workflows and compromise overall efficiency.
For decades, scheduled maintenance has been the industry’s primary safeguard against unplanned downtime. Maintenance was rarely data-driven but rather scheduled at rigid intervals based on estimates (in essence, educated guesses). Now that manufacturing is data-driven, maintenance should be data-driven as well.
Time-based, or ISO-guided, maintenance can’t fully account for the complexity of today’s connected equipment because machine behaviors vary by environment, workload, and process context. The timing is almost never precisely correct. This approach risks failing to detect problems that flare up before scheduled maintenance, often leading to unexpected downtime.
In addition, scheduled maintenance can never account for faulty replacement parts or unexpected environmental impacts. Performing maintenance before it is necessary is also inefficient, leading to unneeded downtime, expenses, and resource allocation. Maintenance should be performed only when the data says it is necessary, and not before; predictive maintenance ensures that it is.
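As a minimal sketch of this data-driven trigger (the asset names, vibration limit, and 1,000-hour service interval below are illustrative assumptions, not figures from the article), a condition-based check fires only when a measured signal crosses a limit, regardless of elapsed runtime:

```python
from dataclasses import dataclass

@dataclass
class AssetReading:
    hours_run: float         # elapsed runtime since last service
    vibration_rms_g: float   # measured RMS vibration, in g

def needs_maintenance(reading: AssetReading, vib_limit_g: float = 0.5) -> bool:
    """Trigger maintenance on measured condition, not elapsed hours.

    Note that hours_run plays no part in the decision: that is the point
    of condition-based triggering versus a fixed schedule.
    """
    return reading.vibration_rms_g >= vib_limit_g

# A healthy asset well past a nominal 1,000-hour service interval
healthy = AssetReading(hours_run=1400.0, vibration_rms_g=0.12)
# A degrading asset long before that interval
failing = AssetReading(hours_run=300.0, vibration_rms_g=0.61)

print(needs_maintenance(healthy))  # False: no service needed yet
print(needs_maintenance(failing))  # True: fault precedes the schedule
```

A time-based scheme would have serviced the healthy asset unnecessarily and missed the failing one entirely; the condition-based check inverts both outcomes.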
To realize the promise of smart manufacturing, maintenance must evolve from a reactive (or static) task into an intelligent, autonomous capability, which is where Industry 4.0 becomes extremely important.
From scheduled service to smart systems
Industry 4.0 is defined by convergence: the merging of physical assets with digital intelligence. Predictive maintenance represents this convergence in action. Moving beyond condition-based monitoring, AI-enabled predictive maintenance systems use active AI models and continuous machine learning (ML) to recognize early indicators of equipment failure, and to alert stakeholders, before those indicators trigger costly downtime.
The most advanced implementations deploy edge AI directly to the individual asset on the factory floor. Rather than sending massive data streams to the cloud for processing, these AI models analyze sensor data locally, where it’s generated. This not only reduces latency and bandwidth use but also ensures real-time insight and operational resilience, even in low-connectivity environments. In an Industry 4.0 context, edge intelligence is critical for achieving the speed, autonomy, and adaptability that smart factories demand.
AI-enabled predictive maintenance systems use AI models and continuous ML to detect early indicators of equipment failure before they trigger costly downtime. (Source: Adobe AI Generated)
Edge intelligence in Industry 4.0
Traditional monitoring solutions often struggle to keep pace with the volume and velocity of modern industrial data. Edge AI addresses this by embedding trained ML models directly into sensors and devices. These models continuously analyze vibration, temperature, and motion signals, identifying patterns that precede failure, all without relying on cloud connectivity.
Because the AI operates locally, insights are delivered instantly, enabling a near-zero-latency response. Over time, the models adapt and improve, distinguishing between harmless deviations and genuine fault signatures. This self-learning capability not only reduces false alarms but also provides precise fault localization, guiding maintenance teams directly to the source of a potential issue. The result is a smarter, more autonomous maintenance ecosystem aligned with Industry 4.0 principles of self-optimization and continuous learning.
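The self-learning behavior described above can be sketched as a running-statistics detector; the learning rate, threshold, and warm-up values below are illustrative assumptions, not TDK SensEI's or any vendor's actual algorithm. The model adapts its baseline to harmless deviations and flags only readings that break the learned pattern:

```python
import math

class EdgeAnomalyDetector:
    """Tracks an exponentially weighted mean and variance of a sensor
    signal and flags readings that deviate by more than k standard
    deviations from the learned baseline."""

    def __init__(self, alpha: float = 0.05, k: float = 4.0, warmup: int = 20):
        self.alpha = alpha    # learning rate for the running statistics
        self.k = k            # deviation threshold, in standard deviations
        self.warmup = warmup  # samples to learn before flagging anything
        self.n = 0
        self.mean = None
        self.var = 0.0

    def update(self, x: float) -> bool:
        self.n += 1
        if self.mean is None:         # first sample seeds the baseline
            self.mean = x
            return False
        diff = x - self.mean
        std = math.sqrt(self.var)
        anomaly = self.n > self.warmup and std > 0 and abs(diff) > self.k * std
        if not anomaly:               # adapt only to normal behavior
            self.mean += self.alpha * diff
            self.var = (1 - self.alpha) * (self.var + self.alpha * diff * diff)
        return anomaly

det = EdgeAnomalyDetector()
baseline = [1.0, 1.02, 0.98, 1.01, 0.99, 1.03, 0.97, 1.0] * 5  # 40 normal samples
flags = [det.update(v) for v in baseline]
print(any(flags))       # harmless variation is learned, not flagged
print(det.update(2.5))  # a genuine outlier trips the detector
```

Because anomalous readings are excluded from the running statistics, a fault does not get absorbed into the baseline, which is what lets the detector keep separating "harmless deviation" from "fault signature" over time.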
Building a future-ready predictive maintenance framework
To be truly future-ready for Industry 4.0, a predictive maintenance platform must seamlessly integrate advanced intelligence with intuitive usability. It should offer effortless deployment, compatibility with existing infrastructure, and scalability across diverse equipment and facilities. Features such as plug-and-play setup and automated model deployment minimize the load on IT and operations teams. Customizable sensitivity settings and severity-based analytics enable tailored alerting aligned with the criticality of each asset.
Scalability is equally vital. As manufacturers add or reconfigure production assets, predictive maintenance systems must seamlessly adapt, transferring models across machines, lines, or even entire facilities. Hardware-agnostic solutions offer the flexibility required for evolving, multivendor industrial environments. The goal is not just predictive accuracy but a networked intelligence layer that connects all assets under a unified maintenance framework.
Real-world impact across smart industries
Predictive maintenance is a cornerstone of digital transformation across manufacturing, energy, and infrastructure. In smart factories, predictive maintenance monitors robotic arms, elevators, lift motors, conveyors, CNC machines, and more, targeting the most critical assets in connected production lines. In energy and utilities, it safeguards turbines, transformers, and storage systems, preventing performance degradation and ensuring safety. In smart buildings, it monitors HVAC systems and elevators, giving advance notice of needed maintenance or replacement for assets that are often hard to monitor and whose unexpected downtime causes significant discomfort and lost productivity.
The diversity of these applications underscores an Industry 4.0 truth: Interoperability and adaptability are as important as intelligence. Predictive maintenance must be able to integrate into any operational environment, providing actionable insights regardless of equipment age, vendor, or data format.
Intelligence at the industrial edge
The edgeRX platform from TDK SensEI, for example, embodies the next generation of Industry 4.0 machine-health solutions. Combining industrial-grade sensors, gateways, dashboards, and cloud interfaces into a unified system, edgeRX delivers immediate visibility into machine health. Deployed in minutes, it begins collecting data right away, builds ML models in the cloud, and deploys them back to the sensor device for real-time inference at the edge.
By processing data directly on-device, edgeRX eliminates the latency and energy costs of cloud-based analytics. Its ruggedized, IP67-rated hardware and long-life batteries make it ideal for demanding industrial environments. Most importantly, edgeRX learns continuously from each machine’s unique operational profile, providing precise, actionable insights that support smarter, faster decision-making.
TDK SensEI’s edgeRX advanced machine-health-monitoring platform (Source: TDK SensEI)
The road to autonomous maintenance
As Industry 4.0 continues to redefine manufacturing, predictive maintenance is emerging as a key enabler of self-healing, data-driven operations. EdgeRX transforms maintenance from a scheduled obligation into a strategic function—one that is integrated, adaptive, and intelligent.
Manufacturers evaluating their digital strategies should ask:
- Can we remotely and simultaneously monitor and alert on all of our assets?
- Are our automated systems capturing early, subtle indicators of failure?
- Can our current solutions scale with our operations?
- Are insights available in real time, where decisions are made?
If the answer is no, it’s time to rethink what maintenance means in the context of Industry 4.0. Predictive, edge-enabled AI solutions don’t just prevent downtime; they drive the autonomy, efficiency, and continuous improvement that define the next industrial revolution.
The post Predictive maintenance at the heart of Industry 4.0 appeared first on EDN.