Feed aggregator

Co-packaged optics accelerating towards commercialization

Semiconductor today - Tue, 04/08/2025 - 14:26
Engineered substrate manufacturer Soitec of Bernin, near Grenoble, France says that it welcomes recent industry steps to accelerate development and commercialization of co-packaged optics (CPO) solutions for data centers...

Overcome the Current Challenges of Indirect Time-of-Flight (iToF) Technology with Technological Advancements

ELE Times - Tue, 04/08/2025 - 13:53

Courtesy: onsemi

One secret behind the success of modern industrial automation is the power of 3D vision. Traditional 2D sensors provide only flat images, which limits their effectiveness in applications like device inspection. They can read a barcode that may contain an item’s dimensions, but they cannot independently gauge true shape and size, or spot dents, defects or irregularities. In addition, 2D readings are at the mercy of lighting conditions, which may obscure or distort important areas of interest.

These constraints can be overcome with depth sensing, which processes the Z-axis in 3D, much like human vision. Depth cameras can gauge the fullness of an object, perform precise inspections on devices, and even detect subtle facial features for applications such as access control. Thanks to these capabilities, 3D vision is a game-changer across industries – from defense and aerospace to medical, automotive and micro-technology. Whether it’s obstacle detection, facial recognition, self-driving or robotic assistants, depth sensing is the key to modern industrial automation.

Depth sensing, whatever the type, relies on active or passive visual perception. Passive depth sensing requires highly calibrated stereo sensors and parallax, very similar to the human eye. Active sensing emits a light beam towards the target and uses the reflected energy to determine depth. This requires an energy emitter, but offers advantages like penetrating clouds/smoke, 24/7 operation and more deterministic behavior.

There are several active depth sensing techniques: direct time-of-flight (dToF), indirect time-of-flight (iToF), structured light and active stereo. Indirect time-of-flight uses the phase shift between the transmitted and received signals to calculate distance – it’s very accurate and the underlying hardware is simple.
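As a rough illustration of the phase-shift principle, the sketch below recovers depth from the four correlation samples a typical 4-phase iToF sensor captures. The 100-MHz modulation frequency and the sample values are assumed examples, not parameters of any onsemi part.

```python
# Hedged sketch of 4-phase iToF depth recovery; the modulation frequency and
# the sample values are assumed examples, not parameters of any real sensor.
import math

C = 299_792_458.0   # speed of light, m/s
F_MOD = 100e6       # assumed modulation frequency, Hz

def itof_depth(a0: float, a1: float, a2: float, a3: float) -> float:
    """Depth from correlation samples taken at 0°, 90°, 180° and 270° offsets."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)  # recovered phase shift
    return C * phase / (4 * math.pi * F_MOD)              # d = c * phi / (4 * pi * f)

# The unambiguous range is c / (2 * f) = 1.5 m at 100 MHz; beyond that the phase
# wraps, which is why practical sensors combine several modulation frequencies.
print(f"depth = {itof_depth(0.9, 0.4, 0.1, 0.6):.3f} m")
```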

In this blog you will learn how onsemi’s latest family addition, Hyperlux ID, has made significant advances in iToF technology and how these advances can be utilized to improve depth sensing in current industrial and commercial applications.

Existing iToF Technology Constraints Limit Widespread Adoption

iToF sensing lies at the heart of many applications. One such popular application is face recognition as seen on various smartphones. However, this access control feature can only function at close range. Other applications that use iToF include machine vision (MV), robotics, augmented reality/virtual reality (AR/VR), biometrics and patient monitoring. Currently, these applications are restricted to indoor use at close range (<5 m) with stationary objects that do not require high resolution. Several challenges restrict the potential scope of iToF technology, among them motion, the overhead and complexity of the hardware and data-processing architecture, and the need for meticulous calibration.

These significant hurdles force engineers either to implement complex and expensive 3D solutions to obtain depth, or simply to forgo depth information altogether. With remarkable innovations, onsemi introduces the Hyperlux ID family, which enables the benefits of iToF without the previously noted restrictions. Hyperlux ID’s iToF implementation can now enable more widespread adoption of this important technology.

Detailing the Hyperlux ID Advances

Onsemi’s Hyperlux ID sensing family initially consists of two 1.2-megapixel (MP) iToF products, the AF0130 and AF0131. This family provides advanced sensor performance and development in four critical areas:

  1. Receiving reliable depth information with moving objects
  2. Achieving optimal resolution/depth distance with high accuracy
  3. Reducing cost and size
  4. Decreasing calibration time

Each of the aforementioned areas and improvements is detailed further below.

Motion Artifacts

To enable more widespread adoption, iToF sensors need to function well with moving objects, so they can produce accurate results without motion artifacts. As mentioned, iToF sensing relies on light reflections captured at four or more different phases to calculate depth. Nearly all existing iToF solutions in the marketplace do not capture and process these phases simultaneously, which causes issues with moving objects. Designed with a unique proprietary integration and readout structure, the Hyperlux ID depth sensor uses a global shutter with on-chip storage and real-time processing to enable fast-moving object capture in applications such as conveyor belt operation, robot arms, surveillance, collision avoidance, attachment detection and more.

iToF Applications in Warehouse

Increased Resolution = Higher Accuracy and Expanded Depth

Most iToF sensors on the market today have only VGA resolution, which hinders their accuracy and, in turn, limits their applications. One reason VGA is more prevalent is the complex phase capture and data-intensive processing mentioned prior. In contrast, the Hyperlux ID sensors are designed with 1.2 MP resolution (1280×960) using a high-performance 3.5 µm back-side illuminated (BSI) pixel. As a product of its increased resolution over VGA, the Hyperlux ID sensor offers the additional critical advantage of an expanded depth range. Additionally, at closer distances high-precision accuracy is provided and wider-angle optics can be used.

With higher resolution, the Hyperlux ID sensors also deliver improved quantum efficiency and reduced depth jitter. Taken together, these enhancements mean new applications for iToF sensors where high resolution and expanded depth are paramount, such as gesture recognition, quality control/inspection and access control.

Existing iToF solutions (left) vs onsemi’s new advanced Hyperlux ID iToF (right)

Longer Range

As a product of increased resolution, the Hyperlux ID depth sensor can measure depth over a much greater range compared to other iToF sensors currently available. While current iToF offerings have an indoor range of less than 10 meters, the Hyperlux ID iToF sensor family can reach up to 30 meters. The use of a high-performance global shutter pixel enables the full sensor array to align closely to the active infrared lighting, which in turn limits noise from other infrared sources such as common indoor lights and, most challenging of all, the sun.

Easier Calibration and Development

Accurately recording and calculating phase differences in iToF sensors requires precise calibration, an extremely time-consuming process. To ease this, we have developed a proprietary method that makes Hyperlux ID sensors easier to calibrate and thus faster to set up.

To aid in development, onsemi has constructed an easy-to-use development kit that includes a baseboard, a sensor head board and a laser board. The kit can be used both indoors and outdoors with a range of 0.5–30 meters. It can produce depth maps, 3D point clouds, and phase-out and depth-out data from an image.

Additionally, by using spread-spectrum techniques, many iToF sensors (and other infrared-enabled devices) can be used in the same system without worrying about interference from other devices.

onsemi’s iToF Sensors Do More for Less

iToF sensors are excellent at making accurate 3D depth measurements, which has won them a solid place in industrial and commercial applications. With remarkable improvements in performance and design simplification, onsemi’s Hyperlux ID depth sensors open a new world of applications for iToF depth sensing.

Compared to iToF sensors on the market today, Hyperlux ID depth sensors work more effectively with objects in motion, outdoors and at greater distances. In addition, due to their novel design, Hyperlux ID depth sensors are more cost-effective, take up less board real estate and are easier to work with.

The Hyperlux ID family of depth sensors consists of two products: the AF0130 and AF0131. The AF0130 includes built-in depth processing, while the AF0131 does not, for customers who prefer to use their own algorithms.

The post Overcome the Current Challenges of Indirect Time-of-Flight (iToF) Technology with Technological Advancements appeared first on ELE Times.

Comptek launches Kontrox LASE 16 for industrial-scale edge-emitting laser facet passivation

Semiconductor today - Tue, 04/08/2025 - 13:43
Comptek Solutions Oy of Turku, Finland (which specializes in III-V compound semiconductor quantum surface engineering) has launched its industrial-grade Kontrox LASE 16. Following significant advances in optimizing Kontrox technology for passivating edge-emitting lasers, the Kontrox LASE 16 is designed to transform laser facet processing, providing manufacturers with a more efficient and cost-effective solution for enhanced laser diode performance...

Empower industrial IoT through integrated connectivity, precise positioning and value-added services with a new modem lineup from Qualcomm

ELE Times - Tue, 04/08/2025 - 13:13

Three new modems, purpose-built for IoT, bring an industry-first iSIM, cloud services and connectivity on NB-IoT and Cat 1bis networks for ubiquitous coverage.

The industrial Internet of Things (IIoT) is rapidly transforming industries, enabling businesses to achieve greater efficiency, productivity and visibility. However, deploying successful IIoT applications requires reliable connectivity, accurate positioning and cost-effective solutions. Three new modems from Qualcomm Technologies are purpose-built to address far-ranging use cases across industrial applications through an industry-first integrated SIM (iSIM), and LTE connectivity on Narrowband IoT (NB-IoT) and Cat 1bis networks, for coverage even in challenging signal environments.

The Qualcomm E41 4G Modem-RF

The Qualcomm E41 4G Modem-RF evolves IoT device capabilities by bringing integrated connectivity through an industry-first GSMA pre-certified iSIM. It allows device manufacturers to simplify the manufacturing process by reducing the need for additional parts and multiple models of the same device, helping accelerate time to market, since devices can be remotely provisioned to the desired network once manufactured. The E41 4G Modem-RF is also purpose-built for use with the Qualcomm Aware Platform so enterprises, OEMs, ODMs and developers can easily build, deploy and scale cloud-connected devices that can be tailored to solve various industrial challenges across businesses, through value-added, cloud-based services.

The Qualcomm E51 4G Modem-RF and Qualcomm E52 4G Modem-RF

Continuing the mission of advancing cellular connectivity for everyone and across every device, Qualcomm is proudly introducing a new generation of modem solutions for IoT, optimized for use on NB-IoT and Cat 1bis networks. Both the Qualcomm E51 4G Modem-RF and the Qualcomm E52 4G Modem-RF feature a highly integrated design that allows for power and cost optimizations for device manufacturers. These two low-power solutions contain an integrated power management unit, support for RF communications, and a rich array of peripherals.

The former of these two solutions also removes the need for dedicated GPS hardware through cloud-based GPS positioning services, further helping device manufacturers save on device costs, while reducing positioning error in open sky and dense urban environments. Regardless of which modem ODMs and OEMs choose, they can rest assured they can utilize low-power connectivity and intelligent power management capabilities, and NB-IoT or Cat 1bis connectivity, making these modems ideal for ultra-low power connectivity across a range of IoT devices including smart meters, smart city devices, intelligent parking solutions, healthcare devices, wearable devices, IP cameras, point-of-sale terminals and more.

Integrated global connectivity

The Qualcomm E41 4G Modem-RF and Qualcomm E52 4G Modem-RF are both Cat 1bis solutions that represent advancements in IIoT connectivity, including a breakthrough on the former of these modems, which features an industry-first, GSMA pre-certified iSIM solution that can be programmed during manufacturing or remotely via a SIM provisioning service. This will enable devices to more readily connect to a variety of cellular networks across the globe, thereby making it easier than ever for ODMs, OEMs, MNOs and MVNOs to integrate connectivity on devices across networks.

The potential applications for the E41 4G Modem-RF span across a variety of IoT devices, including smart meters that are placed in remote areas that have historically required frequent battery replacements or manual readings. Now, those meters can operate more efficiently by using integrated connectivity and remote management to send readings proactively over the air, and alert remote decision-makers when maintenance is needed.

Positioning in any environment

IoT devices are deployed in a variety of environments, including where location technologies have traditionally been challenged, such as indoor areas like warehouses and retail stores. The E41 4G Modem-RF uses several positioning techniques to address the needs of industrial IoT applications, including in these difficult signal environments, using ambient signals from existing Wi-Fi access points and cellular towers. Positioning can be achieved either directly through the modem, or through Qualcomm Aware Positioning Services, which adds cloud-based positioning services and available GNSS assistance, when paired with the all-new optional dual-band GNSS receiver, the Qualcomm QCG110. This is an ideal solution for positioning devices in open-sky environments that require precise positioning, using multiple constellations, in a power-conscious way.

With its variety of positioning technologies, the E41 4G Modem-RF provides a robust solution for IIoT applications including asset tracking and fleet management, energy and utilities, retail and mobile network operators, powering continuous asset visibility, monitoring and management capabilities even in the most challenging conditions.

Cost-conscious design

All three new modems will help device manufacturers simplify the development process and reduce the time and costs to develop devices through a highly integrated design architecture. Because the E41 4G Modem-RF incorporates iSIM technology directly into the hardware design, it reduces the total cost of assembling a device, since the cost of the SIM card is included in the modem. OEMs are able to develop a single device model that can be remotely programmed to work in different regions around the globe, transforming the traditional manufacturing model in which it has been necessary to build multiple models of the same device, each using a different SIM, to work with different connectivity providers across regions. By utilizing the E41 4G Modem-RF’s compact design, businesses can unlock the full potential of IIoT without compromising on quality or performance, and reduce design complexity.

Powering a variety of industrial uses

The capabilities of all three modems unlock a wide variety of possibilities across smart wearables in warehousing, industrial handheld devices in retail, smart metering in energy and utilities, guidance for autonomous robots across retail, warehouses and more.

In the energy and utilities sector, example uses for all three of these modems include:

  • Improved operational efficiency and energy distribution at a localized grid level, with reduced costs through less manual intervention.
  • Long-lasting asset control capabilities for vital infrastructure, such as electric meters, through precise data collection and remote management capabilities.
  • High-temperature support that allows devices to be deployed and used in the harsh environments typical of the energy and utilities space.
  • Support for IP cameras, wearable devices, smart meters and industrial handheld devices.

In the retail sector, examples of solutions the E41 4G Modem-RF can power include:

  • Real-time inventory management and security-focused payment processing to point-of-sale systems and industrial handheld devices.
  • On-device AI capabilities and advanced security surveillance functionality on IP cameras with real-time alerts and remote monitoring capabilities.

For autonomous robots in manufacturing, logistics and retail applications, the E41 4G Modem-RF provides:

  • Precise positioning and connectivity, delivering efficient navigation and automation.
  • Low-latency and security-focused processing for enhanced reliability during use.

At its core, the integrated and compact design of these three modems supports a wide range of IoT applications that demand both precise, low-power positioning and seamless connectivity, within a single, versatile design that can be selected depending on the target application, empowering businesses across multiple industries to achieve growth and seize new opportunities.

The post Empower industrial IoT through integrated connectivity, precise positioning and value-added services with a new modem lineup from Qualcomm appeared first on ELE Times.

👍 Conference of the Labor Collective of Igor Sikorsky Kyiv Polytechnic Institute

News - Tue, 04/08/2025 - 12:49

On April 17, 2025, the conference of the labor collective of Igor Sikorsky Kyiv Polytechnic Institute will take place at the KPI Center for Culture and Arts.

The agenda is posted on the trade union committee website.

Optimize power and wakeup latency in swift response vision systems – Part 2

EDN Network - Tue, 04/08/2025 - 12:42

Part 1 of this article series provided a detailed overview of a trigger-based vision system for embedded applications. It also delved into latency measurements of this swift response vision system while explaining latency-related design strategy and measurement methods. Now, Part 2 provides a detailed treatment of optimizing power consumption and wakeup latency of this embedded vision system.

In Linux, power management is a key feature that allows the system to enter various sleep states to conserve energy when the system is idle or in a low-power state. These sleep states are typically categorized into “suspend” (low-power modes) and “hibernate” (suspend to disk) modes that are part of the Advanced Configuration and Power Interface (ACPI) specification. Below are the main Linux sleep states.

Figure 1 Here is a highlight of Linux sleep states. Source: eInfochips

  • Wakeup (Idle): System fully active; CPU and components fully powered, used when the device is actively in use; high power consumption, no resume time needed.
  • Deep sleep (Suspend-to-RAM): CPU and motherboard components mostly disabled, RAM refreshed, used for deeper low-power states to save energy; low power consumption varying by C-state, fast resume time (milliseconds).
  • System sleep (Suspend-to-Idle): CPU frozen, RAM in self-refresh mode, shallow sleep state for low-latency, responsive applications (for example, network requests); low power consumption, higher than hibernate, fast resume time (milliseconds).
  • Hibernate (Suspend-to-Disk): Memory saved to disk, system powered off, used for deep power savings over long periods (for instance, laptops); almost zero power consumption, slow resume time (requires reading from disk).

Suspend-to-RAM (STR) offers a good balance, as it powers down most of the system but keeps RAM active (in self-refresh mode) for a quick resume, making it suitable for devices needing quick wakeups and energy savings. Hibernate, on the other hand, saves more power by writing the system’s state to disk and powering down completely, at the cost of slower wakeup times.

Qualcomm’s chips, especially those found in Linux embedded devices, support two power-saving modes to help optimize battery life and improve efficiency. These power-saving modes are typically controlled through the system’s firmware, the operating system, and specific hardware components. Here are the main power-saving modes supported by Qualcomm-based chipsets:

  • Suspend to RAM (STR)
  • Suspend to Idle (S2Idle)

Suspend mode is triggered by writing “mem” or “freeze” to /sys/power/state, as the sketch below illustrates.
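A minimal sketch of this trigger from user space, assuming root privileges and the standard sysfs paths:

```python
# Minimal sketch: trigger suspend from user space via sysfs (requires root).
from pathlib import Path

state = Path("/sys/power/state")
print("Supported sleep states:", state.read_text().split())  # e.g. ['freeze', 'mem']

# "mem" selects Suspend-to-RAM, "freeze" selects Suspend-to-Idle.
# The write blocks until the system resumes from a wakeup source.
state.write_text("mem")
print("System resumed")
```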

Figure 2 Here is how the source flow looks when the device enters sleep and wakes up. Source: eInfochips

As the device goes into suspend modes, it performs the following tasks:

  • Check whether the suspend type is valid or not
  • Notify user-space applications that the device is going into a sleep state
  • Freeze the console logs
  • Freeze kernel threads and buses, and disable non-wakeup interrupts
  • Disable the non-boot CPUs (CPU 1-7) and put RAM into self-refresh mode
  • Keep the device in the sleep state until a wakeup signal is received

Once the device receives the wakeup interrupt or trigger, it resumes in the reverse order of the suspend sequence.

While the system is suspended, the current consumption of the Aikri QRB4210 system on module (SoM) is around ~7 mA at a 3.7-V supply voltage. Below is the waveform of the current drawn by the system on module.

Figure 3 Here is how the current consumption looks while the Aikri QRB4210 is in suspend mode. Source: eInfochips

Camera sensor power modes

Camera sensors are designed to support multiple power modes such as:

  • Streaming mode
  • Suspend mode
  • Standby mode

Each mode has distinct power consumption and latency. Latency varies by power-saving level and sensor state. Based on the use case, ensure the camera uses the most efficient mode for its function, especially while the system is in a power-saving mode like deep sleep or standby. This ensures balanced performance and power efficiency while maintaining quick reactivation.

In GStreamer, the pipeline manages data flow through various processing stages. These stages align with the GStreamer state machine, marking points in the pipeline’s lifecycle. The four main states are NULL, READY, PAUSED and PLAYING, each indicating the pipeline’s status and controlling data and event flow. Here’s a breakdown of each of the stages (or states) in a GStreamer pipeline:

Figure 4 The above image outlines GStreamer’s pipeline stages. Source: eInfochips

  1. Null
  • This is the initial state of the pipeline, and it represents an inactive or uninitialized state. The pipeline is not doing any work in this state. All elements in the pipeline are in their NULL state as well.
  • In this state, the master clock (MCLK) from the processor to the camera sensor is not active; the camera sensor is in reset state and the current consumption by the camera is almost zero.
  2. Ready
  • In this state, the pipeline is ready to be configured but has not yet started processing any media. It’s like a preparation phase before actual playback or processing starts.
  • GStreamer performs sanity checks and verifies plugin compatibility for the given pipeline.
  • Resources can be allocated (for example, memory buffers and device initialization).
  • GStreamer entering this state does not impact MCLK’s state or reset signal. If GStreamer enters from the NULL state to the READY state, the MCLK remains inactive. On the other hand, if it enters the READY state from the PLAYING state, the MCLK remains active.
  • The current consumption in the READY state depends on the previous state; this behavior can be further optimized.
  3. Paused
  • This state indicates that the pipeline is set up and ready to process media but is not actively playing yet. It’s often used when preparing for playback or streaming while maintaining control over when processing starts.
  • All elements in the pipeline are initialized and ready to start processing media.
  • Like the READY state, the current consumption in the PAUSED state depends on the previous state, so some optimization in the camera stack can help reduce the power consumption during this state.
  4. Playing
  • The PLAYING state represents the pipeline’s fully active state, where data is being processed and media is either being rendered to the screen, played back through speakers, or streamed to a remote system.
  • MCLK is active and the camera sensor is out of reset. The current consumption is highest in this state as all camera sensor data is being captured and passed through the pipeline.

To minimize the wakeup latency of the camera sensor while maintaining the lowest sleep current, the GStreamer pipeline should be put in the NULL state when the system is suspended. To understand the power consumption due to MCLK and RESET signal assertion, below is a comparison of the current consumption in the NULL and READY states of the GStreamer pipeline while the QRB4210 is in the suspended state.

Figure 5 Current consumption shown while GStreamer is in NULL state and QRB4210 is in suspend mode at ~7 mA. Source: eInfochips

Figure 6 Current consumption shown while GStreamer is in READY state and QRB4210 is in suspend mode at ~30 mA. Source: eInfochips

While the camera pipeline is in the NULL state, the QRB4210 system on module draws ~7 mA, equivalent to the current drawn by the system on module in the suspended state with no camera connected. When the camera pipeline is in the READY state, the QRB4210 system on module draws around ~30 mA. The above oscilloscope snapshots show the waveforms of the consumed current. All currents are measured at a 3.7-V supply voltage for the QRB4210 system on module.
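For reference, a minimal sketch of moving a pipeline between these states with the PyGObject GStreamer bindings is shown below; the videotestsrc pipeline string is a stand-in, not the actual QRB4210 camera pipeline.

```python
# Hedged sketch of the state transitions discussed above, using the PyGObject
# GStreamer bindings; videotestsrc stands in for the real camera pipeline.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.parse_launch("videotestsrc ! fakesink")

# Before suspend: NULL releases the sensor (MCLK stopped, sensor held in
# reset), giving the lowest sleep current at the cost of a longer wakeup.
pipeline.set_state(Gst.State.NULL)

# ... system suspends here and later resumes ...

# After wakeup: PLAYING re-enables MCLK, takes the sensor out of reset and
# restarts capture; get_state() blocks until the transition completes.
pipeline.set_state(Gst.State.PLAYING)
pipeline.get_state(Gst.CLOCK_TIME_NONE)
```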

Latency measurement results

Latency was measured between two trigger events: the first occurs when the device wakes up and receives the interrupt at the application processor, and the second occurs when the first frame becomes available in DDR after the image signal processor (ISP) runs.

As mentioned earlier in Part 1, the scenario is simulated using a bash script that puts the device into suspend mode and wakes the QRB4210 platform using the RTC wake alarm.

We collected the camera wakeup latency by changing the camera state from PLAYING to READY and from PLAYING to NULL. In each scenario, three different use cases are exercised: recording the camera stream to eMMC, recording the camera stream to an SD card, and previewing the camera stream on the display. The resulting latency is as follows:

  • Camera state in READY

Table 1 Latency measurements are shown in READY state. Source: eInfochips

  • Camera state in NULL

Table 2 Latency measurements are shown in NULL state. Source: eInfochips

The minimum, maximum, and average values presented in the above tables have been derived by running each scenario for 100 iterations.

Apart from measuring the latency numbers programmatically, below are the results measured using the GPIO toggle operation between two reference events while switching the camera state from READY to PLAYING.

Table 3 Latency measurements are conducted using GPIO. Source: eInfochips

Now refer to the following oscilloscope images for different scenarios used in the GPIO toggle measurement method.

Figure 7 GPIO toggle measurements are conducted while recording into eMMC at 410.641 ms. Source: eInfochips

Figure 8 GPIO toggle measurements are conducted while recording into SD card at 382.037 ms. Source: eInfochips

Figure 9 GPIO toggle measurements are conducted during preview on display at 359.153 ms. Source: eInfochips

Trade-off between current consumption and wakeup latency

Based on the simulated results, we see that current consumption and wakeup latency trade off against each other.

The consolidated readings show that a camera pipeline in the READY state consumes more current while it takes less time to wake up. On the other hand, if the camera pipeline is in the NULL state, it consumes less current but takes more time to wake up. Refer to the table below for average data readings.

Table 4 The above data shows trade-off between current consumption and wakeup latency. Source: eInfochips

All latency data is measured between the reception of the wakeup IRQ at the application processor and the availability of the frame in DDR after the wakeup. It does not include the time taken by a motion detection sensor to sense and generate an interrupt for the application processor. Generally, the time taken by a motion detection sensor is negligible compared to the numbers mentioned above.

Future scope

To reduce the current consumption of a device in the sleep state, you can follow the steps below:

  • Disable redundant peripherals and I/O ports.
  • Prevent avoidable wakeups by ensuring that peripherals don’t resume from sleep unnecessarily.
  • Disable or mask unwanted wakeup triggers or subsystems that can wake the device from a sleep state.
  • Use camera standby (register retaining) mode so that MCLK can be stopped, or its frequency can be reduced.
  • Enable LCD display only when preview use case is running.

To optimize wakeup latency, follow the guidelines below:

  • Make use of the camera standby mode to further optimize latency to generate the first frame.
  • Reduce camera sensor frame size to optimize frame scan time and ISP processing time.
  • Disable redundant system services.
  • Trigger camera captures from a lower-level interface rather than using GStreamer.

Trigger-based cameras offer an efficient solution for capturing targeted events, reducing unnecessary operation, and managing resources effectively. They are a powerful tool in applications where specific, event-driven image or video capture is needed.

By conducting experiments on the Aikri QRB4210 platform and making minimal optimizations to the Linux operating system, it’s possible to replicate or create a robust trigger-based camera system, achieving ~400-500 ms latency with minimal current consumption.

Jigar Pandya—a solution engineer at eInfochips, an Arrow company—specializes in board bring-up, board support package porting, and optimization.

Priyank Modi—a hardware design engineer at eInfochips, an Arrow company—has worked on various Aikri projects to enhance technical capabilities.

Related content

The post Optimize power and wakeup latency in swift response vision systems – Part 2 appeared first on EDN.

Budget execution for 2024

News - Tue, 04/08/2025 - 08:56

University budget revenues for 2024 exceeded the previous year’s figure by 12.5% and totaled UAH 2,450.6 million.

The (more) modern drone: Which one(s) do I now own?

EDN Network - Mon, 04/07/2025 - 18:10

Last September, I detailed why I’d decided to hold onto the first-gen DJI Mavic Air drone that I’d bought back in mid-2021 (and DJI had introduced in January 2018), a decision which then prompted me to both resurrect its long-drained batteries and acquire a Remote ID module to get it copacetic with current FAA usage regulations, as subsequently mentioned in October:

Within both blog posts, however, I intentionally alluded to (but didn’t delve into detail on) the newer drone that I’d also purchased to accompany it, aside from dropping hints that it offered (sneak peek: as-needed enabled) integrated Remote ID support and weighed (sneak peek: sometimes) less than 250 grams. That teasing wasn’t (just) to drive you nuts: to do the topic justice would necessitate a blog post all its own. That time is now, and that blog post is this one.

Behold DJI’s Mini 3 Pro, originally introduced in May 2022 and shown here with its baseline RC-N1 controller:

I bought mine (two of them, actually, as it turned out) roughly two years post-intro, in late June (from eBay) and early July (from Lensrentals) of last year. By that time, the Mini 4 Pro successor, unveiled in September 2023, had already been out for nearly a year. So, why did I pick its predecessor? The two drone generations look identical; they take the same batteries, propellers and other parts, and fit into the same cases. And as far as image capture goes, the sensors are identical as well: 48 Mpixel (effective) 1/1.3″ CMOS.

What’s connected to the image sensors, however, leads to one of several key differences between the two generations. The Mini 3 Pro captures video at up to 4K resolution at a 60-fps peak frame rate. The improved ISP (image signal processor) in the Mini 4 Pro, conversely, also captures video at 4K resolution, but this time up to a 100-fps frame rate. Dim-light image quality is also improved, along with the available capture-format options, now also encompassing both pre-processed HDR and post-processed D-LOG. And the camera now rotates a full 90° vertical for TikTok- and more general smartphone viewing-friendly portrait orientation video frames.

Speaking of cameras, what about the two drones’ collision avoidance systems? The DJI Mini 3 Pro has cameras both front and rear for collision avoidance purposes, along with another pointing downward to (for example) aid in landing. The Mini 4 Pro replaces them with four fisheye-lens cameras (at front, rear and both sides) for collision avoidance all around the drone as well as above it, further augmented by two downward facing cameras for stereo distance and a LiDAR sensor, the latter enhancing after-dark sensing and discerning distance-to-ground when the terrain is featureless. By the way, the rumored upcoming DJI Mini 5 Pro further bolsters the drone’s LiDAR facilities, if the leaked images are true and not just Photoshop-created fakes.

The final notable difference involves the contrasting wireless protocols used by both drones to communicate with and stream live video to the user’s controller and, if used, goggles. The Mini 3 Pro leverages DJI’s O3 transmission system, with an estimated range of 12 km while streaming live 1080p 30 fps video. With the Mini 4 Pro and its more advanced O4 system, conversely, the wirelessly connected range increases to an estimated 20 km. Two important notes here:

  • The controllers for the Mini 3 Pro also support the longer-range (15 km) and higher frame rate (1080p 60 fps) O3+ protocol used by larger DJI drones such as the Mavic 3
  • Unfortunately, however, the DJI Mini 4 Pro is not backwards compatible with the O3 and O3+ protocols, so although I’ll be able to reuse my batteries and the like if I do a drone-generation upgrade in the future, I’ll need to purchase new controllers and goggles for it.

That all said, why did I still go with the Mini 3 Pro? The core reason was cost. In assessing the available inventory of used drone equipment, the bulk of the options I found were at both ends of the spectrum: either in like-new condition, or egregiously damaged by past accidents. But given that the Mini 3 Pro had been in the market nearly 1.5 years longer, its available used inventory was much more sizeable. I was able to find two pristine Mini 3 Pro examples for a combined price tag less than that of a single like-new (far from brand new) Mini 4 Pro. And the money saved also afforded me the ability to purchase two used upgraded integrated-display controllers, the mainstream RC and high-end RC Pro, the latter running full-blown Android.

Although enhancements such as higher-quality video, more advanced object detection and longer range are nice, they’re not essential in my currently elementary use case, particularly counterbalanced against the fiscal savings I obtained by going prior-gen. The DJI Mini 4 Pro’s expanded-scope collision avoidance might be useful when flying the drone side-to-side for panning purposes, for example, or through a grove of trees, neither of which I see myself doing much if any of, at least for a while. And considering that after 12 km the drone will probably already be out of sight, combined with the alternative ability to record even higher quality video to local drone microSD storage, O4 transmission system support also isn’t a necessity for me.

Speaking of batteries (plenty of spares which I now also own, along with associated chargers, and refresh-charge them every two months to keep them viable) and range, let’s get to the drone’s earlier-alluded Remote ID facilities. The Mini 3 Pro (therefore also Mini 4 Pro) has two battery options: a standard 2453 mAh model that, as conveniently stamped right on it to answer enforcement agency inquiries, keeps the drone just below the 250-gram threshold:

and a “Plus” 3850 mAh model that weighs ~50% more (121 grams vs 80.5 grams). The DJI Mini 3 Pro has built-in Remote ID support, negating the need for an add-on module (which, if installed, would push total weight above 249 grams, even using a standard battery). But here’s the slick bit: when the drone detects that a standard battery is in use, it disables Remote ID transmission, both because the FAA doesn’t require it and to address user privacy concerns, given that scanning facilities are available to the masses, not just to regulatory and enforcement entities.

I’ve admittedly been too busy post-purchase to use the drone gear much yet, but I’m looking forward to harassing the neighbors 😉 (kidding!) with it in the future. I’ve also acquired a Goggles Integra set and a RC Motion 2 Controller, both gently used from Lensrentals:

to test out FPV (first-person view) flying, and even an LTE cellular dongle for remote-locale Internet access to the RC Pro controller (unfortunately, such dongles reportedly can’t also be used on the drone itself, at least in the US, for alternative long-range controller connectivity):

And finally, I’ve acquired used examples of the Goggles Racing Edition Set (Adorama) and OcuSync Air System (eBay) for the Mavic Air, again for FPV testing purposes:

Stay tuned for more on all of this if (hopefully more accurately, when) I get time to actualize my drone gear testing aspirations. Until then, let me know your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post The (more) modern drone: Which one(s) do I now own? appeared first on EDN.

The 2024 declaration campaign has been completed

News - Mon, 04/07/2025 - 16:49

This year, almost 627,000 people performing state or local government functions submitted their declarations. This is an important indicator: despite all the challenges, a significant number of officials have once again confirmed their readiness to act openly and transparently.

[draft] Collective Agreement of the National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” for the period from April 2025 to April 2030

News - Mon, 04/07/2025 - 13:30

The collective agreement has been concluded in accordance with current legislation, including the Laws of Ukraine “On Collective Agreements and Accords”, “On Education”, “On Higher Education”, “On Remuneration of Labor”, “On Labor Protection”, “On Vacations”, “On Trade Unions, Their Rights and Guarantees of Activity”, the Labor Code of Ukraine, the General and Sectoral Agreements, and others.

PhotonDelta and Silicon Catalyst collaborate to drive innovation for early-stage photonic startups

Semiconductor today - Mon, 04/07/2025 - 13:21
Photonic chips industry accelerator PhotonDelta of Eindhoven, the Netherlands (which connects and collaborates with an ecosystem of photonic chip technology organizations worldwide) has established a strategic collaboration with Silicon Catalyst, the world’s only incubator and accelerator focused on the global semiconductor industry. Over the last five years, PhotonDelta has raised nearly $1.5bn to accelerate the commercialization of the photonics industry...

Optical inspection system for a complete 3D sintering paste check

ELE Times - Mon, 04/07/2025 - 13:17

Future-proof quality assurance for power electronics through sintering paste inspection with multi-line SPI

Higher operating temperatures, thinner interconnection layers, 10 times the longevity – the advantages of sintering pastes over solder pastes have long been recognized in the field of power electronics. Not least for this reason, sintering pastes are preferred in system-critical technologies such as “green energy” and e-mobility. Here, for example, IGBTs have become the central component in converters for all types of electric drives: wind turbines, solar power generation, battery charging – hardly any future technology would be conceivable without the “all-rounder” sintering paste. However, sinter paste printing is more prone to errors than solder paste printing. Furthermore, defects are more difficult to detect and rectify – critical failures in the field are the result. To avoid this, GÖPEL electronic has now added an inspection system specifically for sintering paste to its Multi Line platform.

The Multi Line SPI is a cost-effective 3D inline system for automated inspection of sintering paste. Based on the Multi Line platform, it is a customized solution for small and medium-sized companies with high quality standards; it can also be used for solder pastes. The telecentric 3D camera module inspects solder and sinter paste without shadows for shape, area, coplanarity, height, bridges, volume and X/Y offset. Equipped with two digital fringe projectors for shadow-free 3D image capture, it has a resolution of 15 µm/pixel, a height measurement accuracy of 1 µm and a height resolution of 0.2 µm. This means that measurement values can be obtained precisely and repeatably.

Generation of an inspection program for sinter paste inspection takes only a few minutes: CAD data or a reference layout is sufficient. Users who already use GÖPEL electronic software for programming SMD, THT or CCI systems can learn the additional sinter paste functions with little training. In addition, the data import, verification and statistics software are identical to the other inspection systems from GÖPEL electronic. This is where the platform concept of the Multi Line series really pays off: the uniform, powerful operating and evaluation software across all devices reduces training and programming effort, enabling flexible and optimized staff deployment planning.

The post Optical inspection system for a complete 3D sintering paste check appeared first on ELE Times.

Keysight and SAMEER Collaborate to Advance 6G and Healthcare Innovation in India

ELE Times - Mon, 04/07/2025 - 13:09
  • Collaboration brings together expertise and cutting-edge technology to drive innovation
  • Provides essential 6G research infrastructure to bolster the ‘Made in India’ vision

Keysight Technologies, Inc. announces it has signed a Memorandum of Understanding with the Society for Applied Microwave Electronics Engineering & Research (SAMEER), a premier R&D organisation under the Ministry of Electronics and Information Technology, Government of India, to drive healthcare and 6G innovation across India. As part of the collaboration, Keysight and SAMEER have proposed to create a healthcare center of excellence along with a research lab to drive 6G communication research.

In order to scale up the Indian presence in 6G and other critical areas such as medical electronics, there is an urgent need to build strong expertise and an ecosystem in India. Keysight and SAMEER, a government research institute under the Ministry of Electronics & Information Technology, will work together to address this. Building on existing work, the collaboration will focus on several key technology areas to meet the growing demand for innovation in both strategic and civilian applications.

Under the MoU, Keysight will enable SAMEER to develop and demonstrate fully functional labs that support 6G research and development across various India institutions. Plans also include establishing a center of excellence for healthcare focused on advancing magnetic resonance imaging (MRI) technologies. Together, the two organizations will work on driving innovation and supporting the ‘Made in India’ initiative which is designed to generate local growth and development.

Dr P. Hanumantha Rao, Director General at SAMEER, said: “We are leading 6G research in India after the successful demonstration of our end-to-end 5G stack along with IIT Madras. The proposed collaboration with Keysight will enhance this further and enable Indian research and academic institutions to get access to next-generation technologies.”

SAMEER’s contributions in healthcare include a fully indigenous linear accelerator for cancer therapy and a fully functional, affordable 1.5T MRI. The MoU will enable Keysight to leverage the products developed by SAMEER for democratization across India and to continue research by complementing each other’s capabilities.

Sudhir Tangri, Vice President of Asia Pacific Sales and Country General Manager of India at Keysight said: “Establishing a center of excellence and building 6G research areas is a critical step towards driving innovation in India. Through this collaboration we are proud to provide the much-needed infrastructure and technology that will empower future research across healthcare and other sectors. SAMEER is a leader in its field, and we are excited to work together to accelerate our 6G and healthcare vision.”

The post Keysight and SAMEER Collaborate to Advance 6G and Healthcare Innovation in India appeared first on ELE Times.

Our university will cooperate with the Lithuanian company Teltonika Networks

News - Mon, 04/07/2025 - 12:44

🇺🇦🇱🇹 Teltonika Networks specializes in the development and manufacture of high-quality networking equipment for the industrial Internet of Things and innovative solutions in the field of telecommunications.

Igor Sikorsky Kyiv Polytechnic Institute took part in the Space for Ukraine forum

News - Mon, 04/07/2025 - 12:40

🛰 The Space for Ukraine forum was organized by the Ministry of Defence of Ukraine. Participants included representatives of government and business, the commander of the French Space Command, the commander of the UK Space Command, and leading specialists in space technologies.

Marktech launches MWIR LEDs in TO-can packages for harsh environments

Semiconductor today - Mon, 04/07/2025 - 11:52
Marktech Optoelectronics Inc of Latham, NY, USA, a vertically integrated designer and manufacturer of optoelectronics components and assemblies, has released a new series of mid-wave infrared (MWIR) LEDs in rugged, hermetically sealed TO-can packages, engineered to meet the rigorous demands of gas sensing and chemical analysis in industrial environments...

NUBURU signs joint pursuit agreement with defense company

Semiconductor today - Mon, 04/07/2025 - 11:41
Marking a milestone in its strategic expansion into the defense sector, NUBURU Inc of Centennial, CO, USA — which was founded in 2015 and develops and manufactures high-power industrial blue lasers — has signed a joint pursuit agreement (JPA) with a defense-tech company. This follows completion of an initial 20% acquisition interest in a defense and security hub, announced on 12 March...

ST and Innoscience sign GaN power device technology development and manufacturing agreement

Semiconductor today - Mon, 04/07/2025 - 11:28
STMicroelectronics of Geneva, Switzerland and InnoScience (Suzhou) Technology Holding Co Ltd - which manufactures gallium nitride (GaN) power chips on 8” silicon wafers - have signed an agreement on GaN technology development and manufacturing, leveraging the strengths of each company to enhance GaN power solutions and supply chain resilience...

A design platform for swift vision response system – Part 1

EDN Network - Mon, 04/07/2025 - 11:27

Trigger-based vision systems in embedded applications are used in various domains to automate responses based on visual input, typically in real-time. These systems detect specific conditions or events—for example, motion and object recognition or pattern detection—and trigger actions accordingly.

Key applications include:

  • Surveillance and security: Detecting motion or unauthorized individuals to trigger alarms or recording.
  • Robotics: Identifying and manipulating objects, triggering robotic actions like picking, or sorting based on visual cues.
  • Traffic monitoring: Triggering traffic light changes or fines when specific conditions like running a red light are detected.
  • Forest monitoring: Trigger-based vision systems can be highly effective in forest environments for a range of applications, including wildlife monitoring, forest fire detection, illegal logging prevention, animal detection, trail cameras, and more.
  • Military and defense: Vision systems used in drones, surveillance systems, and military robots for threat detection and target identification.

These systems leverage camera technologies combined with environmental sensors and AI-based image processing to automate monitoring tasks, detect anomalies, and trigger timely responses. For instance, in wildlife monitoring, vision systems can identify animals in remote areas, while in forest fire detection, thermal and optical cameras can spot early signs of fire or smoke.

Low wakeup latency in trigger-based systems is crucial for ensuring fast and efficient responses to external events such as sensor activations, button presses, and equivalent events. These systems rely on triggers to initiate specific actions, and minimizing latency ensures that the system can respond instantly to these stimuli. This ability of a device to quickly wake up when triggered allows the device to remain in a low-power state for a longer time. The longer a device stays in a low-power state, the more efficiently it conserves energy.

In summary, low wakeup latency improves a system’s responsiveness, reliability, scalability and energy efficiency, making it indispensable in applications that depend on timely event handling and quick reactions to triggers.

The Aikri platform developed by eInfochips validates this concept. The platform is based on Qualcomm’s QRB4210 chipset and runs an OpenEmbedded-based Linux distribution.

To simulate a real-life trigger scenario, the Aikri platform is put into a low-power state using a shell script and is woken up by a real-time clock (RTC) alarm. The latency between the wakeup interrupt and the frame-reception interrupt at the dual-data-rate (DDR) memory has been measured at around ~400 ms to ~500 ms. Subsequent sections discuss the measurement setup and approach at length.

Aikri platform: Setup details

  1. Hardware setup

The Aikri platform is used to simulate the use case. The platform is based on Qualcomm’s QRB4210 chipset and demonstrates diverse interfaces for this chipset.

The current scope uses only a subset of the interfaces available on the platform; refer to the following block diagram.

Figure 1 The block diagram shows hardware peripherals used in the module. Source: eInfochips

The QRB4210 system-on-module (SoM) contains Qualcomm’s QRB4210 application processor, which connects to DDR RAM, embedded multimedia card (eMMC) as storage, Wi-Fi, and power management integrated circuit (PMIC). The display serial interface (DSI)-based display panel is connected to the DSI connector available on the Aikri platform.

Similarly, the camera daughterboard is connected to the CSI0 port of the platform. The camera daughtercard contains an IMX334 camera module. The camera sensor outputs 3864×2180 at 30 frames per second over four lanes of the camera serial interface (CSI) port.

The DSI panel is built around the OTM1901 LCD. This LCD panel supports a 1920×1080 output resolution. Four lanes of the DSI port are used to transfer video data from the application processor to the LCD panel. The PMIC available on the QRB4210 SoM contains the RTC hardware. While the application processor is in low-power mode, the RTC inside the PMIC remains active, driven by the sleep clock.

  2. Software setup

The QRB4210 application processor runs an OpenEmbedded-based Linux distribution using Linux kernel version 5.4.210. The default distribution is trimmed down to reduce wakeup latency while retaining necessary features. A bash script is used to simulate the low-power mode entry and wakeup scenario.

The Weston server generates display graphics, and GStreamer captures frames from the camera sensor. Wakeup latency is measured by taking timer readings from the Linux kernel when the relevant interrupt service routines are called.

Latency measurement: Procedure overview

To simulate the minimal-latency wakeup use case, a shell-based script is run on the Aikri platform. The script automates the simulation of a trigger-based low-latency vision system on the Aikri QRB4210 module.

Below is the flow of the script performed on the QRB4210 platform, starting from device bootup to measuring latency.

Figure 2 Test script flow spans from device bootup to latency measurement. Source: eInfochips

The above diagram showcases the operational flow of the script, beginning with the device bootup, where the system initializes its hardware and software. After booting, the device enters the active state, signifying that it’s fully operational and ready for further tasks, such as keeping Wi-Fi configured in an inactive state and probing the camera to check its connection and readiness.

Additionally, it configures the GStreamer pipeline for a 1280×960@30 fps stream resolution. The camera sensor registers are also configured at this stage based on the best-match resolution mode. During this exercise, 3840×2160@30 fps is the selected resolution for the IMX334 camera sensor. Once the camera is confirmed as configured and functional, the device moves to the camera reconfigure step, where it adjusts the camera stream settings like stop/start.

The next step is to set the RTC wake alarm, followed by putting the device into suspend mode. In this state, the device waits for the RTC alarm to wake it up. Once the alarm triggers, the device transitions to the wakeup state and starts the camera stream.

The device then waits for the first frame to arrive in DDR and measures the latency between capturing the frame and device wakeup Interrupt Request (IRQ). After measuring latency, the device returns to the active state, where it remains ready for further actions.

The process then loops back to the camera reconfigure step, repeating the sequence of actions until the script is stopped externally. This loop allows the device to continuously monitor the camera, measure latency, and conserve power during inactive periods, ensuring efficient operation; a condensed sketch of the loop follows.
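Assuming rtcwake and gst-launch-1.0 are available on the target, the loop might be condensed as below; the videotestsrc pipeline is illustrative, not the actual IMX334 camera pipeline.

```python
# Hedged sketch of the script loop; rtcwake arms the RTC alarm and suspends,
# and videotestsrc stands in for the actual IMX334 camera pipeline.
import subprocess
import time

def run_iteration(sleep_seconds: int = 10) -> None:
    # Reconfigure the camera stream (stop/start) before suspending.
    subprocess.run(
        ["gst-launch-1.0", "videotestsrc", "num-buffers=30", "!", "fakesink"],
        check=True,
    )
    # Arm the RTC wake alarm and enter Suspend-to-RAM; returns after resume.
    subprocess.run(["rtcwake", "-m", "mem", "-s", str(sleep_seconds)], check=True)
    # The wakeup-IRQ-to-frame-in-DDR latency itself is read from kernel
    # timestamps taken in the interrupt service routines, not from user space.
    time.sleep(1)  # settle before the next iteration

while True:  # repeat until the script is stopped externally
    run_iteration()
```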

Latency measurement strategy

While the device is in a suspended state and the RTC alarm triggers, the time between two key events is measured: the wakeup interrupt and the reception of the first frame from the camera sensor into the DDR buffer. The latency data is measured in three different scenarios, as outlined below:

  • When the camera is in the preview mode
  • When recording the camera stream to eMMC
  • When recording the camera stream to the SD card

Figure 3 Camera pipeline is shown in the preview mode. Source: eInfochips

Figure 4 Camera pipeline is shown in the recording mode. Source: eInfochips

As shown in the above figures, after the DDR receives the frame, it moves to the offline processing engine (OPE) before returning to the DDR. From there, the display subsystem previews the camera sensor data. In the recording use case, the data is transferred from DDR to the encoder and then stored in the storage. Once the frame is available in DDR, it ensures that it’s either stored in the storage or previewed on the display.

Depending on the processor CPU occupancy, it may take a few milliseconds to process the frame, based on the GStreamer pipeline and the selected use case. Therefore, while measuring latency, we consider the second polling point to be when the frame is available in the DDR, not when it’s stored or previewed.

Since capturing the trigger event is crucial, minimizing latency when capturing the first frame from the camera sensor is essential. The frame is considered available in the DDR when the thin front-end (TFE) completes processing the first frame from the camera.

Latency measurement methods

In the Linux kernel, there are several APIs available for pinpointing an event and time measurement, each offering varying levels of precision and specific use cases. These APIs enable tracking of time intervals, measuring elapsed time, and managing system events. Below is a detailed overview of the commonly used time measurement APIs in the Linux kernel:

  • ktime_get_boottime: Provides the current “time since boot” in a ktime_t value, expressed in nanoseconds.
  • get_jiffies_64: Returns the current jiffy count, which represents the number of ticks since the system booted; elapsed time must be calculated from the tick rate.

Jiffies don’t advance during the suspend state, while the boottime clock keeps running even in sleep mode. Additionally, ktime_t offers time measurements in nanoseconds, making it highly precise compared to jiffies.

  1. Usage of GPIO toggle method for latency measurement

To get a second level of certainty, a GPIO toggle-based method is also employed in the measurement. It creates a positive or negative pulse when a GPIO is toggled between two reference events. The pulse width can be measured on an oscilloscope, signifying the latency between the two events.

When the device wakes up, the GPIO value is set to zero, and once the camera driver receives the frame in the DDR, the GPIO value is set to one. This way the GPIO signal forms a negative pulse. Measuring the pulse width on an oscilloscope gives the latency between the wakeup interrupt and the frame-available interrupt; a userspace illustration of the idea appears below.
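Note that the real toggling happens inside the kernel interrupt handlers; purely as an illustration of the pulse technique, a userspace sketch using the libgpiod v1 Python bindings (the chip name and line offset are arbitrary examples) could look like this:

```python
# Userspace illustration of the pulse-width idea only; the real measurement
# toggles the GPIO inside kernel interrupt handlers. Assumes the libgpiod v1
# Python bindings; the chip name and line offset are arbitrary examples.
import time
import gpiod

chip = gpiod.Chip("gpiochip0")
line = chip.get_line(17)  # example free GPIO line
line.request(consumer="latency-probe", type=gpiod.LINE_REQ_DIR_OUT, default_vals=[1])

line.set_value(0)  # reference event 1 (e.g. wakeup IRQ): pulse goes low
time.sleep(0.4)    # stand-in for the interval being measured
line.set_value(1)  # reference event 2 (e.g. frame lands in DDR): pulse ends

# The width of the resulting negative pulse on an oscilloscope equals the
# latency between the two reference events.
```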

  2. Usage of RTC alarm as wakeup source

The RTC keeps ticking on the sleep clock even when the processor goes into low-power mode; it continuously maintains time and triggers a wake alarm when the set time is reached. This wakes the system or initiates a scheduled task. The alarm can be set in seconds from the Unix epoch or relative to the current time.

On Linux, tools like rtcwake and the /sys/class/rtc/rtc0/wakealarm file are used for configuration. The system can wake from power-saving modes like suspend-to-RAM or hibernation for tasks like backups or updates. This feature is useful for automation but may require time zone adjustments as the RTC stores time in UTC.

  • The RTC wake alarm is set by specifying a time in seconds in sysfs or using tools like rtcwake (see the sketch after this list).
  • It works even when the system is in a low-power state like suspend or hibernation.
  • To clear the alarm, write a value of zero to the wake alarm file.
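As a concrete sketch of the sysfs route just described (the 30-second delay is an arbitrary example; root privileges are required):

```python
# Hedged sketch: program the RTC wake alarm via sysfs, then suspend.
# Equivalent in effect to `rtcwake -m mem -s 30`; requires root.
import time
from pathlib import Path

wakealarm = Path("/sys/class/rtc/rtc0/wakealarm")

wakealarm.write_text("0")                         # clear any stale alarm
wakealarm.write_text(str(int(time.time()) + 30))  # absolute time, seconds since epoch
# The node also accepts a relative form: wakealarm.write_text("+30")

Path("/sys/power/state").write_text("mem")        # suspend; resumes when alarm fires
```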

A typical trigger-based system receives triggers from external sources, such as an external co-processor or the external environment. In the simulation script, the RTC wakeup alarm acts as the trigger for the QRB4210 application processor, standing in for such an external event.

Jigar Pandya—a solution engineer at eInfochips, an Arrow company—specializes in board bring-up, board support package porting, and optimization.

Priyank Modi—a hardware design engineer at eInfochips, an Arrow company—has worked on various Aikri projects to enhance technical capabilities.

Editor’s Note: The second part of this article series will further expand into wakeup latency and power consumption of this trigger-based vision system.

Related content

The post A design platform for swift vision response system – Part 1 appeared first on EDN.

Crane remote repair

Reddit:Electronics - Mon, 04/07/2025 - 07:30

Couple weeks ago I had one of the bigger oofs of my life: my crane remote fell off the back of my truck in deep sand, I missed it during my walk-around, and I backed over it with my 30,000 lb truck. Dang.

A new replacement from the IMT dealer would have been $2,550. The remote is an Omnex t150 made by Eaton; they made them in a variety of different configurations, and as luck would have it I could not find a used one set up like mine. Upon closer inspection, the board and switch panel for the remote were intact. The housing, proportional control switch and the ESD were done though. I rigged a toggle switch to the ESD circuit and was able to connect the radio to my crane receiver and activate the crane functions (minus the proportional solenoid on the hydraulics, because that switch was wrecked).

I went on eBay and managed to find a t150 that was for a different machine than mine, but the housing was the same. The board and the switch front plate were different. I figured I could switch it all over to the new remote and use the ESD that came with it. The hardest part was safely removing my board from the old housing; it was potted in there with epoxy. Using a heat gun, an X-Acto knife, diagonal cutters on the housing, and patience, I got the board out, plugged it into a power supply and tested its connection with my crane receiver again before moving forward.

I was less careful with the other board as I would not be using it; got it right out. One thing that was different: on my old remote, the power from the battery pack on the housing came around from behind the board and plugged into a connection on the top side of the PCB, whilst the new remote had wires soldered to the back. I cut the pigtail connection out of the old remote, soldered it to the wires on the new one, and then checked to make sure I had proper battery voltage. I potted the new board in, replaced two bent toggle switches on the front panel with two good ones off the parts remote, made new gaskets for it all, assembled everything and tested it out! It works! And I have a fresh remote now.

Only bummer was the ESD button on the new remote did not function properly. It's an open-when-depressed switch, closed when pulled, and when pulled the connection was intermittent. I modified the old one to work temporarily and I just ordered a new one of those. All in all I am glad I saved over $2,000.

submitted by /u/BlackfootMechanical


Subscribe to the Department of Electronic Engineering feed aggregator