Feed aggregator
Semiconductor industry strategy 2025: Semiconductors at the heart of software-defined products

Electronics are everywhere. As daily life becomes more digital and more devices become software defined and interconnected, the prevalence of electronics will inevitably rise. Semiconductors are what makes this all possible. So, it is no surprise that the entire semiconductor industry is on a path to being a $1 trillion market by 2030.
While accelerating demand will help semiconductors reach impressive gains, many chip makers may be held back by the costs of semiconductor design and manufacturing. Already, building a cutting-edge fab costs about $19 billion, and the design of each chip is around a $500 million investment on average. With AI integration on the rise in consumer devices also fueling growth, companies will need to push the boundaries of their electronic design and manufacturing processes to cost-effectively supply chips at optimal performance and environmental efficiency.
Ensuring that the semiconductor industry continues its aggressive growth will require organizations to approach fab commissioning and operation, as well as chip design, with a new, more collaborative strategy. The three pillars of this strategy are:
- Collaborative semiconductor business platform
- Software-defined semiconductor enabled for software-defined products
- The comprehensive digital twin
First pillar: Collaborative semiconductor business platform
Creating next-generation semiconductors is expensive yet necessary as more products begin to rely heavily on software. Ensuring maximum efficiency within a business will be imperative. Consequently, many chip makers are striving to create metrics-driven environments for semiconductor lifecycle optimization. Typically, companies use antiquated methods to track roles and responsibilities, causing them to rely on information that can be weeks old. As a result, problem solving can become inefficient, negatively impacting the product lifecycle.
Chip makers must upgrade to a truly metrics-driven business platform that enables real-time analysis and facilitates the management of the entire process, from new product introduction through design and verification to final product delivery. By using semiconductor lifecycle management as the foundation and accessing the wealth of data generated during design and manufacturing, companies can take control of their new product introduction processes and have integrated traceability throughout the product lifecycle.
Figure 1 Semiconductor lifecycle optimization is driven by real-time metrics analysis, enabling seamless collaboration from design to final product delivery. Source: Siemens
With this collaborative business platform in place, businesses can know the status of their teams at any point during a project. For example, the design team can take advantage of real-time data to have an accurate status of the project at any time, without relying on manually generated status reports with weeks-old data. Meanwhile, manufacturing can focus on both the front and back ends of IC manufacturing planning with predictability based on actual data. Once all of this is in place, companies can feasibly build AI metrics analysis and a business intelligence platform on top of it.
Second pillar: Software-defined semiconductor for the software-defined product (SDP)
Software is increasingly being used to define the customer experience with a product, Figure 2. Because of this, SDPs will become increasingly central to the evolution of the semiconductor industry. And as AI and ML workloads continue to drive requirements, the traditional boundaries between hardware and software will blur.
Figure 2 Software-defined products are driving the evolution of semiconductors, as AI and ML blur the lines between hardware and software for enhanced innovation and efficiency. Source: Vertigo3d
The convergence of software and hardware will force the semiconductor industry to rethink everything from design methodologies to verification processes. Success in this new landscape will require semiconductor companies to position themselves as enablers of software innovation through holistic co-optimization approaches. No longer will hardware and software teams work in siloed environments; they will become a holistic engineering team that works together to optimize products.
Improved product optimization from integrated teams works in tandem with the industry’s trend toward purpose-built compute platforms to handle the software workload. Consumers are already seeking out customizable chips and they will continue to do so in even greater numbers as general-purpose processors lag expectations. Simultaneously, companies are already creating specialized parts for their products. Apple has several different processors for its host of products; this will become even more important as software becomes more crucial to the functionality of a product.
Supporting software-defined products impacts not only the semiconductors that run the software but also everything from semiconductor design through ECAD, E/E, and MCAD design. Chip makers need to create environments where they can handle these types of products, capture the requirements correctly, and then drive all requirements into every design domain to develop the product correctly moving forward.
Third pillar: The comprehensive digital twin
Part of creating improved environments to better fabricate next-generation semiconductors is making sure that the process remains affordable. To combat production costs that are likely to rise, semiconductor companies should lean into digitalization and leverage the comprehensive digital twin for both semiconductor design and fabrication.
The comprehensive and physics-based digital twin (cDT) addresses the challenge of weaving together the disparate engineering and process groups needed to design and create tomorrow's SW-defined semiconductor. To enable all these players to interact early and often, the cDT incorporates the mechanical, electronic, electrical, semiconductor, software, and manufacturing domains to fully capture today's smart products and processes.
Specifically, the cDT merges the real and digital worlds by creating a set of consistent digital models representing different facets of the design that can be used throughout the entire product and production lifecycle and the supply chain, Figure 3. Now it is possible to do more virtually before committing to expensive prototypes or physically commissioning a fab. The result is higher quality products while meeting aggressive cost, timeline and sustainability goals.
Figure 3 The comprehensive digital twin merges real and digital worlds, enabling faster product introductions, higher yields, and improved sustainability by simulating and optimizing semiconductor design and production processes. Source: Siemens
In design, this “shift-left” provides a physics-based virtual environment for all the engineering teams to interact and create, simulate, and improve product designs. Design and manufacturing iterations in the virtual world happen quickly and consume few resources outside of the engineer’s brain power, enabling them to explore a broader design space. Then in production, it empowers companies to virtually evaluate and optimize production lines, commission machines, and examine entire factories or networks of factories to improve production speed, efficiency, and sustainability. It can analyze and act on real data from the fab and then use that wealth of data for AI metrics analysis.
Businesses can also leverage the cDT to virtualize the entire product process design for the SW-defined product. This digital twin enables manufacturers to simulate and optimize everything from initial design concepts to manufacturing processes and final product integration, which dramatically reduces development cycles and improves outcomes. Companies can verify and test changes earlier in the design process while keeping teams across disciplines in sync and on track, leading to enhanced design exploration and optimization. And since sustainability starts at design, the digital twin can help chip makers meet sustainability metrics by enabling them to choose components that have lower carbon footprints, more thermal tolerance, and reduced power requirements.
The comprehensive digital twin for the semiconductor ecosystem helps businesses manage the complexities of the SDP as well as mechanical and production requirements while bolstering efficiency. Benefits of the digital twin include:
- Faster new product introductions: Virtualizing the entire semiconductor ecosystem allows faster time to yield. Along with the quest to pursue “More than Moore,” creating a virtual environment for heterogeneous packaging allows for early verification and optimization of advanced packaging techniques.
- Faster path to higher yields: Simulating the production process makes enhancing IC quality easier, enabling workers to enact changes dynamically on the shop floor to quickly achieve higher yields for greater profitability.
- Traceability and zero defects: It is now possible to update the digital twin of both the product and production in tandem with their real-world counterparts, enabling manufacturers to diagnose issues and detect anomalies before they happen in the pursuit of zero defects.
- Dynamic planning and scheduling: Since the digital twin provides an adaptive comparison between the physical and digital counterparts, it can detect disturbances within systems and trigger rescheduling in a timely manner.
Creating next-generation semiconductors is expensive. Yet, chip manufacturers must continue to develop and fabricate new designs that require ever-more advanced fabrication technology to efficiently create semiconductors for tomorrow’s software-defined products. To handle the changing landscape, businesses within the semiconductor industry will need to rely on the comprehensive digital twin and adopt a collaborative semiconductor business platform that enables them to partner both inside and outside of the industry.
The emergence of collaborative alliances within the semiconductor industry as well as across related industries will break down traditional organizational boundaries, enabling unprecedented levels of cooperation across and beyond the semiconductor industry. The result will be extraordinary innovation that leverages collective expertise and capabilities. Already, well-established semiconductor companies have begun partnering to move forward in this rapidly evolving ecosystem. When Tata Group wanted to build fabs in India, Analog Devices, Tata Electronics, and Tata Motors signed an agreement that allows Tata to use Analog Devices' chips in applications like electric vehicles and network infrastructure. At the same time, Analog Devices will be able to take advantage of Tata's plants to fab its next-generation chips.
And this is just one example of the many innovative collaborations starting to emerge. The marketplace is now moving toward cooperation and partnerships that have never existed before across different industries to develop the technology and capabilities needed to move forward. To ease this transition, the semiconductor industry needs a cross-industry collaboration environment that will facilitate these strategic partnerships.
Michael Munsey is the Vice President of Electronics & Semiconductors for Siemens Digital Industries Software. In this role, Munsey is responsible for setting the strategic direction for the company with a focus on helping customers drive unprecedented growth and innovation in the semiconductor and electronics industries through digital transformation.
Munsey began his career as a designer at IBM more than 35 years ago and has the distinction of contributing to products that are currently in use on two planets: Earth and Mars, the latter courtesy of his work on the Mars Rover.
Before joining Siemens in 2021, Munsey spent his career working in positions of increasing responsibility across the semiconductor and electronics industries where he did everything from leading cross-functional teams to driving product creation and executing business development in new regions to setting the vision for corporate strategy. Munsey holds a BSEE in Electrical and Electronics Engineering from Tufts University.
Related Content
- CES 2025: A Chat with Siemens EDA CEO Mike Ellow
- Shift in electronic systems design reshaping EDA tools integration
- EDA toolset parade at TSMC’s U.S. design symposium
- Overcoming challenges in electronics design landscape
New cardboard Star Wars droid with Raspberry Pi Pico W
Improving DRAM Performance Using Dual Work-Function Metal Gate (DWMG) Structures
Courtesy: Lam Research
Gate-induced drain leakage (GIDL) presents a major challenge in scaling DRAM technology.
DRAM serves as the backbone of modern computing, enabling devices ranging from smartphones to high-performance servers. As the demand accelerates for higher density and lower power consumption in memory devices, innovation in reducing DRAM leakage currents and enhancing performance becomes essential. One significant challenge in scaling DRAM technology is GIDL, a primary source of standby charge loss. This article explores how a DWMG structure in DRAM buried word-line (BWL) can mitigate GIDL. By leveraging a full-scale process integration model that supports electrical analysis, we demonstrate how this approach reduces leakage current while maintaining robust device performance.
The Challenge of GIDL in Modern DRAM
GIDL is primarily caused by band-to-band tunneling (BTBT) at the drain junction under high electric field conditions. This phenomenon not only increases off-state leakage currents but also degrades memory state retention time in DRAM cells, particularly as feature sizes shrink below 20 nm [1].
Factors such as thinner gate oxides and higher doping concentrations exacerbate GIDL, creating a synergistic effect that makes it a critical problem in designing low-power, high-density DRAM [2].
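For background, BTBT generation is commonly approximated with a Kane-type expression (a standard textbook form, not taken from this article):

$$ G_{\mathrm{BTBT}} = A\,E^{2}\exp\!\left(-\frac{B}{E}\right) $$

where E is the local electric field and A, B are material-dependent parameters. Because the field appears in the exponent, the thinner oxides and higher doping mentioned above raise the field only modestly but increase tunneling, and hence GIDL, disproportionately.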

The Solution
The introduction of a dual work-function metal gate structure provides a compelling solution to this challenge. By segmenting the buried word-line gate into regions with distinct work functions, the electric field along the channel is precisely controlled. Examples of some dual work-function metal gate structures are shown in Figure 2.
This structure suppresses BTBT generation, thereby reducing GIDL without compromising drive current or threshold voltage (Vt). As a result, this design is well-suited for advanced DRAM nodes [4,5].

DWMG Alignment with Industry Trends
The DWMG approach aligns with broader semiconductor trends emphasizing advanced gate designs and channel engineering. Our study applies this innovation to DRAM technology, addressing GIDL challenges while preserving key performance metrics. Similar methods have been successfully implemented in FinFETs [6] and tunnel FETs [7] to reduce leakage and improve subthreshold slopes.
Leveraging Process Integration Modeling for Insights
Our process integration modeling platform (SEMulator3D) with built-in electrical analysis capabilities played a pivotal role in evaluating the DWMG design. This tool allowed us to:
- Simulate the full process flow of a DRAM cell array, from active area formation to capacitor integration (Figure 3a).
- Focus on the BWL transistor by extracting and refining a specific transistor for electrical characterization (Figure 3b–d).
- Analyze the interactions between process parameters—such as gate work-function, oxide thickness, and doping profiles—and their impact on electrical performance.
This simulation framework provided a holistic view of integration challenges and revealed the effectiveness of DWMG in reducing current leakage.

DWMG Design and Simulation Results
The DWMG structure is realized by splitting the gate into upper and lower regions with distinct work functions; upper-region metal gate work functions of 3.5 eV, 4.1 eV, and 4.7 eV were evaluated (Figure 4). The device simulation includes models for doping- and field-dependent mobility, Shockley-Read-Hall (SRH) generation/recombination, and trap-assisted band-to-band tunneling effects.
The drift-diffusion equation is solved to obtain Idrain vs. Vgate curves, both in the linear and saturation regimes. The substrate current is measured (virtually) to determine the GIDL leakage amount.
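For context, the drift-diffusion formulation referenced above solves the standard electron current and continuity equations (textbook notation; the article itself does not spell these out):

$$ J_n = q\,\mu_n\, n\, E + q\, D_n \nabla n, \qquad \frac{\partial n}{\partial t} = \frac{1}{q}\,\nabla \cdot J_n + G_n - R_n $$

where the generation term G_n includes the SRH and trap-assisted BTBT contributions listed above; sweeping Vgate and integrating the substrate-terminal current yields the (virtual) GIDL measurement.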

Key results include the following:
- Leakage reduction (Figure 5): Placing the low work-function metal in the upper gate and the high work-function metal in the lower gate creates a more relaxed electric field distribution than a uniform work-function gate, which suppresses BTBT at the drain junction and in turn reduces leakage current.

- Preserved device performance (Figure 6): Despite the GIDL reduction (I_substrate), critical IV characteristics in both linear (Idlin_Vg) and saturation (Idsat_Vg) regimes remain intact when using the DWMG, ensuring reliable operation during read and write cycles.

- Process dependency (Figure 7): Gate oxide thickness and doping concentration significantly influence performance. For instance, thinner oxides improve field control but increase BTBT risk due to the reduced barrier width. Similarly, higher doping improves modulation capabilities but exacerbates BTBT by increasing the electric field intensity, which accelerates tunneling processes.

Advantages of Combining Device Electrical Analysis with Process Integration Modeling
Performing device electrical analysis during process integration modeling can enable the following types of advanced analyses that identify design-technology trade-offs:
- Electrical pathfinding: This type of analysis can be used to rapidly explore combinations of gate work-functions, oxide thicknesses, and doping profiles to pinpoint optimal designs. This approach has the potential to minimize the cost and time of physical experiments while reducing risks associated with late-stage failures.
- Variability analysis: Statistical simulations can identify the impact of process variations—such as gate oxide non-uniformity and doping fluctuations—on GIDL and IV characteristics. This type of analysis highlights critical design margins and has the potential to provide feedback on process optimization (such as active area formation) from very early process development stages.
The Future of DWMG and DRAM
The dual work-function metal gate (DWMG) is a robust, scalable solution for mitigating GIDL in DRAM technology. By optimizing the electric field distribution, this design effectively reduces leakage currents while maintaining critical IV performance. Process integration modeling combined with electrical analysis capabilities is instrumental in demonstrating the ability to reduce leakage current using DWMG, offering a comprehensive framework for addressing design and integration challenges.
Future research efforts could include:
- Integrating DWMG designs with high-k dielectrics or advanced junction engineering to further enhance leakage control.
- Assessing the impact of scaling trends, such as smaller metal pitches and EUV lithography, on DWMG performance.
- Developing predictive models for variability in advanced DRAM nodes.
EconoDUAL(TM) Power Kit: Powering up commercial and agricultural vehicles
Courtesy: Infineon
As electric vehicles continue to gain traction in the agricultural, commercial, and construction sectors, the demand for efficient and reliable power systems grows. High-voltage traction systems ensure these vehicles operate effectively under heavy loads and demanding conditions such as 60,000 hours of operation time, up to 1.5 million km, as well as low FIT rates. Infineon's EconoDUAL(TM) 250 kW Power Kit is a prime example that meets the evolving needs of inverter systems in commercial and agricultural vehicles.
This 250 kW three-phase inverter power kit is designed for eCAVs with an 800 V battery, addressing the increasing demand for reliable and efficient solutions. It provides a consistent platform for developers working on eCAVs, offering numerous benefits, including a fast time to market via its system solution, and a flexible design with scalable module currents up to 900 A nominal and an easy migration path toward higher voltage classes and SiC technology.
Key features
- High-Power Output: specifically designed for 800 V traction-inverter systems in eCAVs.
- Accurate current measurement: It integrates our XENSIV TLE4973 Hall coreless current sensors in a compact and easy-to-mount Swoboda universal current sensor module.
- Custom Design Elements: The kit includes specially designed DC-link capacitors and a liquid-cooling system to maintain performance in challenging operating conditions.
- Component Integration: It features three FF900R12ME7 EconoDUAL(TM)3 IGBT7 power modules and 1ED3321MC12N EiceDRIVER gate drivers, ensuring compatibility and ease of assembly.
The EconoDUAL(TM) Power Kit includes three industrial-grade EconoDUAL(TM) 3 IGBT7 modules capable of handling high currents efficiently, as well as gate drivers mounted on gate drive boards with booster stages that ensure reliable operation in demanding applications. Additionally, this kit is equipped with an integrated cooling system, which prevents overheating and ensures thermal stability, and is optimized for 800 V systems, with all components, including busbars and capacitors, specifically tailored for high-voltage operation.
Application Development in commercial and agricultural vehicles
The EconoDUAL(TM) Power Kit provides essential tools for addressing the challenges of designing and developing eCAVs. It is particularly suitable for light- and medium-duty vehicles such as eBuses and medium-duty eTrucks, while also being applicable to other vehicle types like construction equipment and agricultural vehicles. Its integrated design and advanced components help streamline prototyping and development processes. Additionally, our 32-bit AURIX microcontrollers can be used to enhance the overall system design and ensure functional safety up to the highest ASIL D level. AURIX microcontrollers also offer an integrated DS-ADC (delta-sigma ADC) to enable digital calculation of resolver position, eventually replacing an external resolver IC (e.g., Tamagawa) and reducing system complexity.
The microcontroller selection tree can be found below:
The XENSIV TLE4973 current sensor is based on core-less technology. It is highly accurate over temperature and lifetime due to its high linearity, stray-field robustness, and lack of hysteresis. There is also no need for a magnetic concentrator or a shield, enabling space optimization and design flexibility.
Broadcom drives mass adoption of software-defined vehicles with expanded Ethernet switch portfolio
Courtesy : Broadcom
Broadcom's portfolio of automotive Ethernet switches is built not only for today's automotive network but is also scalable for the network of the future.
Automakers have used Broadcom's standard automotive switches for more than a decade to route data between various sensors, processing units, and actuators within the vehicle. As automakers transition from domain-based to zonal architectures, pre-planning allows the architectures to scale to newer features and benefits.
Software-defined vehicles, or SDVs, have the connectivity and processing power to secure, monitor, upgrade, and update vehicle capabilities. The software for different computing functions, such as driver assist, infotainment, body control, and instrumentation, can all be distributed across different boards and processors. Sensor data can flow to multiple zones/boards versus being directly connected. It is the scalability of Ethernet hardware that allows an SDV to be improved after purchase. So, what features should you look for in a switch to support SDVs?

The first item to examine is the type of system on chip, or SoC, that is being used for compute processing in your architecture. New classes of automotive SoCs allow application processing, real-time processing, AI compute, and safety functionality in a single device. Zonal and central compute electronic control units (ECUs) can take advantage of these scalable SoCs. These SoCs have multiple multi-gigabit interfaces to the network to gather and transmit all the data they need to process. For example, AI models for autonomous drive systems can be updated to improve camera recognition and safety. As new software features are added, the amount of data sent over these SoC interfaces will increase. Just as the SoCs are optimized and designed to scale over time to handle larger compute and network needs, the Ethernet network must be designed from the start to support future needs. The automotive Ethernet switch must support multiple connections to the SoCs at the maximum line rate needed. The switch should also be able to support the scalability of each interface from 1Gbps to 10Gbps. If the SoC supports PCIe interfaces with virtualization, then the switch needs to support virtualization as well.
As the software feature workloads get distributed between compute devices, there will be a need for network performance optimizations and time-sensitive provisioning. SDVs will collect data across the network for data analytics and health monitoring. The Ethernet switches will use their packet filters to monitor specific traffic flows at line rate. Captured motor efficiency data, Ethernet network health, and autonomous drive data for AI model improvement can all traverse the Ethernet backbone to the car’s cloud connection. Dynamic configuration of the automotive Ethernet switches allows the automaker to scale the needed resources efficiently over time. Automotive Ethernet switches need to have the bandwidth scalability and timing control to handle future network needs.
As port count requirements for an ECU increase, the automotive Ethernet switch chip must be able to handle all the ports with a single die. A switch chip that uses more than one smaller switch die in a single package can cause numerous issues. The stacking or cascading switch cores have higher latency as the Ethernet packets must be stored and forwarded through each switch die. The high-speed interface between these embedded dies becomes a bottleneck for traffic that must flow from a port on one die to the port on the other die. Time synchronization becomes trickier as multiple gPTP protocol stacks are run inside the single package. Scalability is a key feature enabled effectively with a monolithic die based switch.
As mentioned in our blog, "Securing software-defined vehicles with zonal E/E architectures," protecting SDVs using zonal electrical/electronic architectures is critically important. The SDV architecture requires a multilayer security approach. The switches need to boot authenticated images securely, and they must allow only authenticated images to be loaded during over-the-air updates. Since software-based protection is challenging at faster Ethernet speeds, MACsec packet authentication and encryption allows line-rate protection in hardware at speeds up to 10 Gbps. In addition, both DoS protection and packet filtering are needed in hardware. Additional levels of protection can be taken in hardware that are unique to an automotive network architecture. An automotive network is fixed, unlike an SMB or enterprise Ethernet network. A port on the switch connected to a RADAR will always be connected to that RADAR in every car. If the unique address of the RADAR on an Ethernet packet is ever seen ingressing on another port, then it is known that someone is spoofing that address, and the port should be quarantined. The same can be said if a second address is seen on the RADAR port, as there should only be one device connected to that port. The security features should be implemented by dedicated hardware in the switch, with software running on the internal processor subsystem handling any exceptions. This enables all of the security functionality at line rate and makes the intrusion detection and prevention software clients efficient and effective.
50G Auto Ethernet Switch Portfolio Expansion
In 2022, Broadcom unveiled the 50G automotive Ethernet switch product family to meet automakers' needs and enable the future of SDVs. To drive mass adoption of SDVs, Broadcom is expanding the product family with a new cost-optimized 11-port version, the BCM89581MT. This device is a single-die, lower-power, smaller-port-count, 50G automotive Ethernet switch. To provide scalable flow of traffic, the BCM89581MT has multiple interfaces capable of 10Gbps connections to the latest SoCs and multi-gigabit automotive Ethernet PHYs. The high-speed interfaces can be 2.5G SGMII, USXGMII, PCIe Gen 4 single lane or XFI. This addition to Broadcom's automotive Ethernet switch portfolio will allow for smaller port count central compute or zonal ECUs to fit into the SDV architecture. Broadcom's automotive SDK can be seamlessly ported across the different family members.
The BCM89581MT enables original equipment manufacturers (OEMs) to realize the full network potential for smaller cost-optimized ECUs. With advanced security, scalable connections to SoCs, advanced time-synchronized networking features, and a full-feature SDK, the BCM89581MT easily allows the OEM to take advantage of the SDV features they need. Broadcom will showcase its expanded portfolio of 50G automotive Ethernet switches, including the new BCM89581MT, at the 2025 Automotive Ethernet Congress in Munich from February 18th-20th. Stop by our booth to learn more about our latest offerings and how our expanded portfolio of automotive Ethernet switch chips enable next-generation software-defined vehicles.
Co-packaged optics accelerating towards commercialization
Addressing the Current Challenges of Indirect Time-of-Flight (iToF) Technology with Technological Advancements
Courtesy: onsemi
One secret behind the success of modern industrial automation is the power of 3D vision. Traditional 2D sensors can only provide flat images, creating limitations in their effectiveness in applications like device inspection. They can read a barcode which may contain the items’ dimensions but cannot independently gauge true shape and size, or any potential dents, defects or irregularities. In addition, 2D readings are at the mercy of lighting conditions, which may obfuscate or distort important areas of interest.
These constraints can be overcome with depth sensing, which processes the Z-axis in 3D, much like human vision. Now, depth cameras can tell the fullness of an object, perform precise inspections on devices, and even detect subtle facial features for applications such as access control. Thanks to these capabilities, 3D vision is a game-changer across industries – from defense and aerospace to medical, automotive and micro-technology. Whether it's obstacle detection, facial recognition, self-driving or robotic assistants, depth sensing is the key to modern industrial automation.
Depth sensing, whatever the type, relies on active or passive optical techniques. Passive depth sensing requires highly calibrated stereo sensors and parallax, very similar to the human eye. Active sensing emits a light beam toward the target and uses the reflected energy to determine depth. This requires an energy emitter but offers advantages like penetrating clouds/smoke, 24/7 operation, and more deterministic behavior.
There are several active depth-sensing techniques: direct time-of-flight (dToF), indirect time-of-flight (iToF), structured light, and active stereo. Indirect time-of-flight uses the phase shift between the transmitted and received signals to calculate distance; it is very accurate, and the underlying hardware is simple.
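For background (a standard iToF relationship, not stated explicitly in this article), the distance d follows from the measured phase shift Δφ and the modulation frequency f_mod:

$$ d = \frac{c}{2 f_{\mathrm{mod}}} \cdot \frac{\Delta\varphi}{2\pi} $$

where c is the speed of light. The factor of two accounts for the round trip, and distances beyond c/(2 f_mod) alias, which is one reason extending range (discussed below) is nontrivial.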
In this blog, you will learn how onsemi's latest family addition, Hyperlux ID, has made significant advances in iToF technology and how these advances can be utilized to improve depth sensing in current industrial and commercial applications.
Existing iToF Technology Constraints Reduce Widespread Adoption
iToF sensing lies at the heart of many applications. One such popular application is face recognition, as seen on various smartphones. However, this access control feature can only function at close range. Other applications that use iToF include machine vision (MV), robotics, augmented reality/virtual reality (AR/VR), biometrics, and patient monitoring. Currently, these applications are restricted to indoor use at close range (<5 m) with stationary objects that do not require high resolution. Several challenges restrict the potential scope of iToF technology. Among these are motion, the overhead and complexity of the hardware and data processing architecture, and the need for meticulous calibration.
These significant hurdles force engineers either to implement complex and expensive 3D solutions to obtain depth, or simply to forgo depth information altogether. With remarkable innovations, onsemi introduces the Hyperlux ID family, which enables the benefits of iToF without the previously noted restrictions. Hyperlux ID's iToF implementation can now enable a more widespread adoption of this important technology.
Detailing the Hyperlux ID Advances
Onsemi's Hyperlux ID sensing family initially consists of two 1.2-megapixel (MP) iToF products, the AF0130 and AF0131. This family provides advanced sensor performance and development in four critical areas:
- Receiving reliable depth information with moving objects
- Achieving optimal resolution/depth distance with high accuracy
- Reducing cost and size
- Decreasing calibration time
Each of the aforementioned areas and improvements is further detailed below.
Minimizing Motion Artifacts
To enable more widespread adoption, iToF sensors need to function well with moving objects so that they can produce accurate results without motion artifacts. As mentioned, iToF sensing relies on light reflections captured across four or more different phases to calculate depth. Nearly all existing iToF sensing solutions in the marketplace do not capture and process these phases simultaneously, which causes issues with moving objects. Designed with a unique proprietary integration and readout structure, the Hyperlux ID depth sensor uses a global shutter with on-chip storage and real-time processing to enable fast-moving object capture in applications such as conveyor belt operation, robot arms, surveillance, collision avoidance, attachment detection, and more.

Most iToF sensors on the market today have only VGA resolution, which hinders their accuracy and, in turn, limits their applications. One reason VGA is more prevalent is the complex phase capture and data-intensive processing mentioned earlier. In contrast, the Hyperlux ID sensors are designed with 1.2 MP resolution (1280×960) using a high-performance 3.5 μm back-side illuminated (BSI) pixel. As a product of its increased resolution over VGA, the Hyperlux ID sensor offers the additional critical advantage of expanded depth range. At closer distances, high-precision accuracy is provided and wider-angle optics can be used.
With higher resolution, the Hyperlux ID sensors also deliver improved quantum efficiency and reduced depth jitter. Taken together, these enhancements mean new applications for iToF sensors where high resolution and expanded depth are paramount, such as gesture recognition, quality control/inspection and access control.

As a product of increased resolution, the Hyperlux ID depth sensor can measure depth over a much greater range compared to other iToF sensors currently available. While current iToF offerings have an indoor range of less than 10 meters, the Hyperlux ID iToF sensor family can reach up to 30 meters. The use of a high-performance global shutter pixel enables the full sensor array to closely align with the active infrared lighting, which in turn limits noise from other infrared sources such as common indoor lights and, most challenging of all, the sun.
Easier Calibration and Development
Accurately recording and calculating phase differences in iToF sensors requires precise calibration, an extremely time-consuming process. To ease this, we have developed a proprietary method that makes Hyperlux ID sensors easier to calibrate and thus faster to set up.
To aid in development, onsemi has constructed an easy-to-use development kit that includes a baseboard, a head sensorboard and a laser board. The kit can be used both indoors and outdoors with a range of 0.5 – 30 meters. It can produce depth maps, 3D point clouds, phase-out and depth-out data from an image.
Additionally, by using spread-spectrum techniques, many iToF sensors (and other infrared-enabled devices) can be used in the same system without worrying about interference from other devices.
onsemi's iToF Sensors Do More for Less
iToF sensors are excellent at making accurate 3D depth measurements, which have won them a solid place in industrial and commercial applications. With remarkable improvements in performance and design simplification, onsemi's Hyperlux ID depth sensors open a new world of applications for iToF depth sensing.
Compared to iToF sensors on the market today, Hyperlux ID depth sensors work more effectively with objects in motion, outdoors and at greater distances. In addition, due to their novel design, Hyperlux ID depth sensors are more cost-effective, take up less board real estate and are easier to work with.
The Hyperlux ID family of depth sensors consists of two products: the AF0130 and AF0131. The AF0130 includes built-in depth processing while the AF0131 does not, for customers who prefer to use their own original algorithms.
Comptek launches Kontrox LASE 16 for industrial-scale edge-emitting laser facet passivation
Empower industrial IoT through integrated connectivity, precise positioning and value-added services with a new modem lineup from Qualcomm
Three new modems, purpose-built for IoT, bring an industry-first iSIM, cloud services and connectivity on NB-IoT and Cat 1bis networks for ubiquitous coverage.
The industrial Internet of Things (IIoT) is rapidly transforming industries, enabling businesses to achieve greater efficiency, productivity and visibility. However, deploying successful IIoT applications requires reliable connectivity, accurate positioning and cost-effective solutions. Three new modems from Qualcomm Technologies are purpose-built to address far-ranging use cases across industrial applications through an industry-first integrated SIM (iSIM), and LTE connectivity on Narrowband IoT (NB-IoT) and Cat 1bis networks, for coverage even in challenging signal environments.
The Qualcomm E41 4G Modem-RF
The Qualcomm E41 4G Modem-RF evolves IoT device capabilities by bringing integrated connectivity through an industry-first GSMA pre-certified iSIM. It offers device manufacturers the ability to simplify the device manufacturing process by reducing the need for additional parts and multiple models of the same device, helping accelerate the time to market of commercial devices, since those devices can be remotely provisioned to the desired network once manufactured through integrated connectivity capabilities. The E41 4G Modem-RF is also purpose-built for use with the Qualcomm Aware Platform so enterprises, OEMs, ODMs and developers can easily build, deploy and scale cloud-connected devices that can be tailored to solve various industrial challenges across businesses, through value-added, cloud-based services.
The Qualcomm E51 4G Modem-RF and Qualcomm E52 4G Modem-RF
Continuing the mission of advancing cellular connectivity for everyone and across every device, Qualcomm is proudly introducing a new generation of modem solutions for IoT, optimized for use on NB-IoT and Cat 1bis networks. Both the Qualcomm E51 4G Modem-RF and the Qualcomm E52 4G Modem-RF feature a highly integrated design that allows for power and cost optimizations for device manufacturers. These two low-power solutions contain an integrated power management unit, support for RF communications, and a rich array of peripherals.
The former of these two solutions also removes the need for dedicated GPS hardware through cloud-based GPS positioning services, further helping device manufacturers save on device costs, while reducing positioning error in open sky and dense urban environments. Regardless of which modem ODMs and OEMs choose, they can rest assured they can utilize low-power connectivity and intelligent power management capabilities, and NB-IoT or Cat 1bis connectivity, making these modems ideal for ultra-low power connectivity across a range of IoT devices including smart meters, smart city devices, intelligent parking solutions, healthcare devices, wearable devices, IP cameras, point-of-sale terminals and more.

The Qualcomm E41 4G Modem-RF and Qualcomm E52 4G Modem-RF are both Cat 1bis solutions that represent advancements in IIoT connectivity, including a breakthrough on the former of these modems, which features an industry-first, GSMA pre-certified iSIM solution that can be programmed during manufacturing or remotely via a SIM provisioning service. This will enable devices to more readily connect to a variety of cellular networks across the globe, thereby making it easier than ever for ODMs, OEMs, MNOs and MVNOs to integrate connectivity on devices across networks.
The potential applications for the E41 4G Modem-RF span across a variety of IoT devices, including smart meters that are placed in remote areas that have historically required frequent battery replacements or manual readings. Now, those meters can operate more efficiently by using integrated connectivity and remote management to send readings proactively over the air, and alert remote decision-makers when maintenance is needed.

IoT devices are deployed in a variety of environments, including where location technologies have traditionally been challenged, such as indoor areas like warehouses and retail stores. The E41 4G Modem-RF uses several positioning techniques to address the needs of industrial IoT applications, including in these difficult signal environments, using ambient signals from existing Wi-Fi access points and cellular towers. Positioning can be achieved either directly through the modem, or through Qualcomm Aware Positioning Services, which adds cloud-based positioning services and available GNSS assistance, when paired with the all-new optional dual-band GNSS receiver, the Qualcomm QCG110. This is an ideal solution for positioning devices in open-sky environments that require precise positioning, using multiple constellations, in a power-conscious way.
With its variety of positioning technologies, the E41 4G Modem-RF provides a robust solution for IIoT applications including asset tracking and fleet management, energy and utilities, retail and mobile network operators, powering continuous asset visibility, monitoring and management capabilities even in the most challenging conditions.

All three new modems will help device manufacturers simplify the development process and reduce the time and costs to develop devices through a highly integrated design architecture. Because the E41 4G Modem-RF incorporates iSIM technology directly into the hardware design, it reduces the total cost of assembling a device, since the cost of SIM card is included in the modem. OEMs are able to develop a single device model that can be remotely programmed to work in different regions around the globe and transform the traditional manufacturing model where it’s been necessary to build multiple models of the same device, each using a different SIM, to work with different connectivity providers across regions. By utilizing the E41 4G Modem-RF’s compact design, businesses can unlock the full potential of IIoT without compromising on quality or performance, and reduce design complexity.

The capabilities of all three modems unlock a wide variety of possibilities across smart wearables in warehousing, industrial handheld devices in retail, smart metering in energy and utilities, guidance for autonomous robots across retail, warehouses and more.
In the energy and utilities sector, example uses for all three of these modems include:
- Improved operational efficiency and energy distribution on a localized grid level with reduced costs through less manual intervention.
- Long-lasting asset control capabilities for vital infrastructure, such as electric meters, through precise data collection and remote management capabilities.
- High temperature support allows devices to be deployed and used in harsh environments that are typical of energy and utilities space.
- IP cameras, wearable devices, smart meters and industrial handheld devices.
In the retail sector, examples of solutions the E41 4G Modem-RF can power include:
- Real-time inventory management and security-focused payment processing to point-of-sale systems and industrial handheld devices.
- On-device AI capabilities and advanced security surveillance functionality on IP cameras with real-time alerts and remote monitoring capabilities.
For autonomous robots in manufacturing, logistics and retail applications, the E41 4G Modem-RF provides:
- Precise positioning and connectivity, delivering efficient navigation and automation.
- Low-latency and security-focused processing for enhanced reliability during use.
At its core, the integrated and compact design of these three modems supports a wide range of IoT applications that demand both precise, low-power positioning and seamless connectivity, within a single, versatile design that can be selected depending on the target application, empowering businesses across multiple industries to achieve growth and seize new opportunities.
👍 Conference of the labor collective of Igor Sikorsky Kyiv Polytechnic Institute
The conference of the labor collective of Igor Sikorsky Kyiv Polytechnic Institute will take place on April 17, 2025, at the KPI Center for Culture and Arts.
Optimize power and wakeup latency in swift response vision systems – Part 2

Part 1 of this article series provided a detailed overview of a trigger-based vision system for embedded applications. It also delved into latency measurements of this swift response vision system while explaining latency-related design strategy and measurement methods. Now, Part 2 provides a detailed treatment of optimizing power consumption and wakeup latency of this embedded vision system.
In Linux, power management is a key feature that allows the system to enter various sleep states to conserve energy when the system is idle or in a low-power state. These sleep states are typically categorized into “suspend” (low-power modes) and “hibernate” (suspend to disk) modes that are part of the Advanced Configuration and Power Interface (ACPI) specification. Below are the main Linux sleep states.
Figure 1 Here is a highlight of Linux sleep states. Source: eInfochips
- Wakeup (Idle): System fully active; CPU and components fully powered, used when the device is actively in use; high power consumption, no resume time needed.
- Deep sleep (Suspend-to-RAM): CPU and motherboard components mostly disabled, RAM refreshed, used for deeper low-power states to save energy; low power consumption varying by C-state, fast resume time (milliseconds).
- System sleep (Suspend-to-Idle): CPU frozen, RAM in self-refresh mode, shallow sleep state for low-latency, responsive applications (for example, network requests); low power consumption, higher than hibernate, fast resume time (milliseconds).
- Hibernate (Suspend-to-Disk): Memory saved to disk, system powered off, used for deep power savings over long periods (for instance, laptops); almost zero power consumption, slow resume time (requires reading from disk).
Suspend-to-RAM (STR) offers a good balance, as it powers down most of the system but keeps RAM active (self-refresh mode) for a quick resume, making it suitable for devices needing quick wakeups and energy savings. Hibernate, on the other hand, saves more power by writing the system's state to disk and powering down completely, but results in slower wakeup times.
Qualcomm’s chips, especially those found in Linux embedded devices, support two power-saving modes to help optimize battery life and improve efficiency. These power-saving modes are typically controlled through the system’s firmware, the operating system, and specific hardware components. Here are the main power-saving modes supported by Qualcomm-based chipsets:
- Suspend to RAM (STR)
- Suspend to Idle (S2Idle)
Suspend mode is triggered by writing "mem" (STR) or "freeze" (S2Idle) to /sys/power/state.
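As a minimal sketch (assuming root privileges on a kernel built with the corresponding sleep-state support), the write described above can be issued from user space as follows:

```python
# Minimal sketch: enter Suspend-to-RAM from user space (requires root).
# Writing "freeze" instead of "mem" selects Suspend-to-Idle (S2Idle).
with open("/sys/power/state", "w") as f:
    f.write("mem")  # this call blocks until the system resumes
```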
Figure 2 The source flow when the device enters sleep and wakes up. Source: eInfochips
As the device goes into suspend modes, it performs the following tasks:
- Check whether the suspend type is valid or not
- Notify user space applications that device is going into sleep state
- Freeze the console logs
- Freeze kernel threads and buses, and disable non-wakeup interrupts
- Disable the non-boot CPUs (CPU 1-7) and put RAM into self-refresh mode
- Keep the device in the sleep state until a wakeup signal is received
Once the device receives the wakeup interrupt or trigger, it starts resuming the device in reverse order while suspending the device.
While the system is suspended, the current consumption of the Aikri QRB4210 system on module (SoM) comes to around ~7 mA at a 3.7-V supply voltage. Below is the waveform of the current drawn by the system on module.
Figure 3 Current consumption while the Aikri QRB4210 is in suspend mode. Source: eInfochips
Camera sensor power modes
Camera sensors are designed to support multiple power modes such as:
- Streaming mode
- Suspend mode
- Standby mode
Each mode has distinct power consumption and latency. Latency varies by power-saving level and sensor state. Based on use case, ensure the camera uses the most efficient mode for its function, especially while the system is in power saving mode like deep sleep or standby. This ensures balanced performance and power efficiency while maintaining quick reactivation.
In GStreamer, the pipeline manages data flow through various processing stages. These stages align with the GStreamer state machine, marking points in the pipeline’s lifecycle. The four main states are NULL, READY, PAUSED and PLAYING, each indicating the pipeline’s status and controlling data and event flow. Here’s a breakdown of each of the stages (or states) in a GStreamer pipeline:
Figure 4 The above image outlines GStreamer’s pipeline stages. Source: eInfochips
- Null
- This is the initial state of the pipeline, and it represents an inactive or uninitialized state. The pipeline is not doing any work in this state. All elements in the pipeline are in their NULL state as well.
- In this state, the master clock (MCLK) from the processor to the camera sensor is not active; the camera sensor is in reset state and the current consumption by the camera is almost zero.
- Ready
- In this state, the pipeline is ready to be configured but has not yet started processing any media. It’s like a preparation phase before actual playback or processing starts.
- GStreamer performs sanity check and plugin compatibility for the given pipeline.
- Resources can be allocated (for example, memory buffers and device initialization).
- GStreamer entering this state does not impact MCLK’s state or reset signal. If GStreamer enters from the NULL state to the READY state, the MCLK remains inactive. On the other hand, if it enters the READY state from the PLAYING state, the MCLK remains active.
- The current consumption in the READY state depends on the previous state; this behavior can be further optimized.
- Paused
- This state indicates that the pipeline is set up and ready to process media but is not actively playing yet. It’s often used when preparing for playback or streaming while maintaining control over when processing starts.
- All elements in the pipeline are initialized and ready to start processing media.
- Like the READY state, the current consumption in the PAUSED state depends on the previous state, so some optimization in the camera stack can help reduce the power consumption during this state.
- Playing
- The PLAYING state represents the pipeline’s fully active state, where data is being processed and media is either being rendered to the screen, played back through speakers, or streamed to a remote system.
- MCLK is active and the camera sensor is out of reset. The current consumption is highest in this state as all camera sensor data is being captured and passed through the pipeline.
To minimize wakeup latency of the camera sensor while maintaining the lowest sleep current, GStreamer pipeline should be put in the NULL state when the system is suspended. To understand the power consumption due to MCLK and RESET signals assertion, below is the comparison of current consumption between the NULL state of GStreamer pipeline and the READY state of GStreamer pipeline while QRB4210 is in the suspended state.
Figure 5 Current consumption shown while GStreamer is in NULL state and QRB4210 is in suspend mode at ~7 mA. Source: eInfochips
Figure 6 Current consumption shown while GStreamer is in READY state and QRB4210 is in suspend mode at ~30 mA. Source: eInfochips
While the camera is in the NULL state, the QRB4210 system on module draws a current of ~7mA, which is equivalent to the current drawn by the system on module in the suspended state when no camera is connected. When the camera is in the READY state, the QRB4210 system on module draws a current of around ~30 mA. The above oscilloscope snapshot shows the waveforms of the consumed current. All the measured currents are at 3.7-V supply voltage for the QRB4210 system on module.
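As a sketch of how this policy might be wired up (using the PyGObject GStreamer bindings; the pipeline string is illustrative, and the actual source/sink element names depend on the platform's camera stack), the pipeline can be parked in NULL on suspend and restarted on wakeup:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Illustrative preview pipeline; substitute the elements provided by
# your platform's camera and display stack.
pipeline = Gst.parse_launch("v4l2src ! videoconvert ! autovideosink")

def on_suspend_notification():
    # NULL releases the sensor: MCLK stops and the sensor is held in
    # reset, giving the lowest suspend current (~7 mA in the test above).
    pipeline.set_state(Gst.State.NULL)

def on_wakeup_trigger():
    # PLAYING re-initializes the sensor and restarts ISP processing;
    # this is the transition whose latency is measured below.
    pipeline.set_state(Gst.State.PLAYING)
```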
Latency measurement results
Latency was measured between two trigger events: the first occurs when the device wakes up and receives the interrupt at the application processor, and the second occurs when the first frame becomes available in the DDR after image signal processor (ISP) runs.
As mentioned earlier in Part 1, the scenario is simulated using a bash script that puts the device into suspend mode and then wakes the QRB4210 platform from sleep using the RTC wake alarm.
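A minimal sketch of such a test loop, assuming the util-linux rtcwake utility is available on the target, might look like this:

```python
import subprocess
import time

# Each iteration arms the RTC alarm and enters Suspend-to-RAM in one
# rtcwake call; the system resumes automatically when the alarm fires.
for _ in range(100):  # the tables below were derived from 100 iterations
    subprocess.run(["rtcwake", "-m", "mem", "-s", "10"], check=True)
    time.sleep(2)  # illustrative settle time before the next iteration
```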
We have collected the camera wakeup latency by changing the camera state from PLAYING to READY and from PLAYING to NULL. In each scenario, three different use cases are followed, which are recording camera stream into eMMC, recording camera stream into SD card, and previewing camera stream to display. The resulting latency is as follows:
- Camera state in READY
Table 1 Latency measurements are shown in READY state. Source: eInfochips
- Camera state in NULL
Table 2 Latency measurements are shown in NULL state. Source: eInfochips
The minimum, maximum, and average values presented in the above tables have been derived by running each scenario for 100 iterations.
Apart from measuring the latency numbers programmatically, below are the results measured using the GPIO toggle operation between two reference events while switching the camera state from READY to PLAYING.
Table 3 Latency measurements are conducted using GPIO. Source: eInfochips
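For reference, here is a sketch of how GPIO instrumentation can bracket the READY-to-PLAYING transition for the oscilloscope (this continues the pipeline sketch above; the legacy sysfs GPIO interface and pin number 42 are assumptions):

```python
def write_sysfs(path, value):
    # Helper for the legacy sysfs GPIO interface
    with open(path, "w") as f:
        f.write(value)

# One-time setup of the measurement pin (pin number is hypothetical)
write_sysfs("/sys/class/gpio/export", "42")
write_sysfs("/sys/class/gpio/gpio42/direction", "out")

# First reference event: raise the GPIO just before requesting PLAYING
write_sysfs("/sys/class/gpio/gpio42/value", "1")
pipeline.set_state(Gst.State.PLAYING)

# Second reference event: drive the GPIO low from the callback that sees
# the first frame in DDR (e.g., an appsink "new-sample" handler), so the
# oscilloscope captures the full wakeup-to-first-frame interval.
```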
Now refer to the following oscilloscope images for different scenarios used in the GPIO toggle measurement method.
Figure 7 GPIO toggle measurements are conducted while recording into eMMC at 410.641 ms. Source: eInfochips
Figure 8 GPIO toggle measurements are conducted while recording into SD card at 382.037 ms. Source: eInfochips
Figure 9 GPIO toggle measurements are conducted during preview on display at 359.153 ms. Source: eInfochips
Trade-off between current consumption and wakeup latency
Based on the simulated results, we see that current consumption and wakeup latency trade off against each other.
The consolidated readings show that a camera pipeline in the READY state consumes more current while it takes less time to wake up. On the other hand, if the camera pipeline is in the NULL state, it consumes less current but takes more time to wake up. Refer to the table below for average data readings.
Table 4 The data shows the trade-off between current consumption and wakeup latency. Source: eInfochips
All latency data is measured between the reception of the wakeup IRQ at the application processor and the availability of the frame in DDR after the wakeup. It does not include the time taken by a motion detection sensor to sense and generate an interrupt for the application processor. Generally, the time taken by a motion detection sensor is negligible compared to the numbers mentioned above.
Future scope
To further reduce the current consumption of a device in the sleep state, you can follow the steps below:
- Disable redundant peripherals and I/O ports.
- Prevent avoidable wakeups by ensuring that peripherals don’t resume from sleep unnecessarily.
- Disable or mask unwanted wakeup triggers or subsystems that can wake the device from the sleep state (see the sketch after this list).
- Use the camera standby (register-retaining) mode so that MCLK can be stopped or its frequency reduced.
- Enable the LCD display only when the preview use case is running.
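As a concrete example of the wakeup-masking step, most Linux devices expose a power/wakeup attribute in sysfs; writing "disabled" there keeps that device from resuming the system. A minimal sketch (the device path is a placeholder, as real paths are platform-specific):

```c
#include <stdio.h>

int main(void)
{
    /* Placeholder path: substitute the sysfs node of the peripheral
     * whose wakeup capability should be masked. */
    const char *path =
        "/sys/devices/platform/soc/example.usb/power/wakeup";

    FILE *f = fopen(path, "w");
    if (!f) { perror("fopen"); return 1; }
    fputs("disabled", f);   /* "enabled" restores the wakeup source */
    fclose(f);
    return 0;
}
```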
To optimize wakeup latency, follow the guidelines below:
- Use the camera standby mode to shorten the time to generate the first frame.
- Reduce the camera sensor frame size to shorten frame scan time and ISP processing time.
- Disable redundant system services.
- Trigger camera captures from a lower-level interface rather than through GStreamer (see the sketch below).
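On that last point, talking to V4L2 directly removes GStreamer's state-machine overhead from the wakeup path. A heavily condensed sketch of the standard V4L2 single-frame capture sequence, assuming the platform exposes the camera as a plain video node (the device node, resolution, and pixel format are all assumptions):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);       /* placeholder node */
    if (fd < 0) { perror("open"); return 1; }

    /* Assumed frame size and pixel format. */
    struct v4l2_format fmt = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE };
    fmt.fmt.pix.width       = 1280;
    fmt.fmt.pix.height      = 720;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_NV12;
    if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0) { perror("S_FMT"); return 1; }

    /* One memory-mapped buffer suffices for a single trigger capture. */
    struct v4l2_requestbuffers req = { .count = 1,
        .type = V4L2_BUF_TYPE_VIDEO_CAPTURE, .memory = V4L2_MEMORY_MMAP };
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0) { perror("REQBUFS"); return 1; }

    struct v4l2_buffer buf = { .type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
        .memory = V4L2_MEMORY_MMAP, .index = 0 };
    if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) { perror("QUERYBUF"); return 1; }
    void *mem = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, buf.m.offset);
    if (mem == MAP_FAILED) { perror("mmap"); return 1; }

    if (ioctl(fd, VIDIOC_QBUF, &buf) < 0) { perror("QBUF"); return 1; }
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_STREAMON, &type) < 0) { perror("STREAMON"); return 1; }

    /* Blocks until the first frame lands in the buffer. */
    if (ioctl(fd, VIDIOC_DQBUF, &buf) < 0) { perror("DQBUF"); return 1; }
    printf("captured %u bytes\n", buf.bytesused);

    ioctl(fd, VIDIOC_STREAMOFF, &type);
    munmap(mem, buf.length);
    close(fd);
    return 0;
}
```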
Trigger-based cameras offer an efficient solution for capturing targeted events, reducing unnecessary operation, and managing resources effectively. They are a powerful tool in applications where specific, event-driven image or video capture is needed.
By conducting experiments on the Aikri QRB4210 platform and making minimal optimizations to the Linux operating system, it’s possible to build a robust trigger-based camera system that achieves ~400-500 ms wakeup latency with minimal current consumption.
Jigar Pandya—a solution engineer at eInfochips, an Arrow company—specializes in board bring-up, board support package porting, and optimization.
Priyank Modi—a hardware design engineer at eInfochips, an Arrow company—has worked on various Aikri projects to enhance technical capabilities.
Related content
- The State of Machine Vision
- What Is Machine Vision All About?
- Processors, Sensors Drive Embedded Vision
- Shaping the Scene for Vision Standardization
- Embedded Vision: Giving Machines the Power of Sight
The post Optimize power and wakeup latency in swift response vision systems – Part 2 appeared first on EDN.
Budget execution for 2024
University budget revenues for 2024 exceeded the previous year's figure by 12.5% and totaled UAH 2,450.6 million.
The (more) modern drone: Which one(s) do I now own?

Last September, I detailed why I’d decided to hold onto the first-gen DJI Mavic Air drone that I’d bought back in mid-2021 (and DJI had introduced in January 2018), a decision which then prompted me to both resurrect its long-drained batteries and acquire a Remote ID module to get it copacetic with current FAA usage regulations, as subsequently mentioned in October:
Within both blog posts, however, I intentionally alluded to (but didn’t delve into detail on) the newer drone that I’d also purchased to accompany it, aside from dropping hints that it offered (sneak peek: as-needed enabled) integrated Remote ID support and weighed (sneak peek: sometimes) less than 250 grams. That teasing wasn’t (just) to drive you nuts: to do the topic justice would necessitate a blog post all its own. That time is now, and that blog post is this one.
Behold DJI’s Mini 3 Pro, originally introduced in May 2022 and shown here with its baseline RC-N1 controller:
I bought mine (two of them, actually, as it turned out) roughly two years post-intro, in late June (from eBay) and early July (from Lensrentals) of last year. By that time, the Mini 4 Pro successor, unveiled in September 2023, had already been out for nearly a year. So, why did I pick its predecessor? The two drone generations look identical; they take the same batteries, propellers and other parts, and fit into the same cases. And as far as image capture goes, the sensors are identical as well: 48 Mpixel (effective) 1/1.3″ CMOS.
What’s connected to the image sensors, however, leads to one of several key differences between the two generations. The Mini 3 Pro captures video at up to 4K resolution at a 60-fps peak frame rate. The improved ISP (image signal processor) in the Mini 4 Pro, conversely, also captures video at 4K resolution, but this time up to a 100-fps frame rate. Dim-light image quality is also improved, along with the available capture-format options, now also encompassing both pre-processed HDR and post-processed D-LOG. And the camera now rotates a full 90° vertical for TikTok- and more general smartphone viewing-friendly portrait orientation video frames.
Speaking of cameras, what about the two drones’ collision avoidance systems? The DJI Mini 3 Pro has cameras both front and rear for collision avoidance purposes, along with another pointing downward to (for example) aid in landing. The Mini 4 Pro replaces them with four fisheye-lens cameras (at front, rear and both sides) for collision avoidance all around the drone as well as above it, further augmented by two downward facing cameras for stereo distance and a LiDAR sensor, the latter enhancing after-dark sensing and discerning distance-to-ground when the terrain is featureless. By the way, the rumored upcoming DJI Mini 5 Pro further bolsters the drone’s LiDAR facilities, if the leaked images are true and not just Photoshop-created fakes.
The final notable difference involves the contrasting wireless protocols used by both drones to communicate with and stream live video to the user’s controller and, if used, goggles. The Mini 3 Pro leverages DJI’s O3 transmission system, with an estimated range of 12 km while streaming live 1080p 30 fps video. With the Mini 4 Pro and its more advanced O4 system, conversely, the wirelessly connected range increases to an estimated 20 km. Two important notes here:
- The controllers for the Mini 3 Pro also support the longer-range (15 km) and higher frame rate (1080p 60 fps) O3+ protocol used by larger DJI drones such as the Mavic 3
- Unfortunately, however, the DJI Mini 4 Pro is not backwards compatible with the O3 and O3+ protocols, so although I’ll be able to reuse my batteries and the like if I do a drone-generation upgrade in the future, I’ll need to purchase new controllers and goggles for it.
That all said, why did I still go with the Mini 3 Pro? The core reason was cost. In assessing the available inventory of used drone equipment, the bulk of the options I found were at both ends of the spectrum: either in like-new condition, or egregiously damaged by past accidents. But given that the Mini 3 Pro had been in the market nearly 1.5 years longer, its available used inventory was much more sizeable. I was able to find two pristine Mini 3 Pro examples for a combined price tag less than that of a single like-new (far from brand new) Mini 4 Pro. And the money saved also afforded me the ability to purchase two used upgraded integrated-display controllers, the mainstream RC and high-end RC Pro, the latter running full-blown Android.
Although enhancements such as higher-quality video, more advanced object detection, and longer range are nice, they’re not essential in my currently elementary use case, particularly counterbalanced against the fiscal savings I obtained by going prior-gen. The DJI Mini 4 Pro’s expanded-scope collision avoidance might be useful when flying the drone side-to-side for panning purposes, for example, or through a grove of trees, neither of which I see myself doing much if any of, at least for a while. And considering that after 12 km the drone will probably already be out of sight, combined with the alternative ability to record even higher quality video to local drone microSD storage, O4 transmission system support also isn’t a necessity for me.
Speaking of batteries (I now also own plenty of spares, along with associated chargers, and refresh-charge them every two months to keep them viable) and range, let’s get to the drone’s earlier-alluded Remote ID facilities. The Mini 3 Pro (and therefore also the Mini 4 Pro) has two battery options: a standard 2453 mAh model that, as conveniently stamped right on it to answer enforcement agency inquiries, keeps the drone just below the 250-gram threshold:
and a “Plus” 3850 mAh model that weighs ~50% more (121 grams vs. 80.5 grams). The DJI Mini 3 Pro has built-in Remote ID support, negating the need for an add-on module (which, if installed, would push total weight above 249 grams, even with a standard battery). But here’s the slick bit: when the drone detects that a standard battery is in use, it disables Remote ID transmission, both because the FAA doesn’t require it and to address user privacy concerns, given that scanning facilities are available to the masses, not just to regulatory and enforcement entities.
I’ve admittedly been too busy post-purchase to use the drone gear much yet, but I’m looking forward to harassing the neighbors (kidding!) with it in the future. I’ve also acquired a Goggles Integra set and a RC Motion 2 Controller, both gently used from Lensrentals:
to test out FPV (first-person view) flying, and even an LTE cellular dongle for remote-locale Internet access to the RC Pro controller (unfortunately, such dongles reportedly can’t also be used on the drone itself, at least in the US, for alternative long-range controller connectivity):
And finally, I’ve acquired used examples of the Goggles Racing Edition Set (Adorama) and OcuSync Air System (eBay) for the Mavic Air, again for FPV testing purposes:
Stay tuned for more on all of this if (hopefully more accurately, when) I get time to actualize my drone gear testing aspirations. Until then, let me know your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Diagnosing and resuscitating a set of DJI drone batteries
- Teardown: DJI Spark drone
- Oh little drone, how you have grown…
- Drone regulation and electronic augmentation
- Keep your drone flying high with the right circuit protection design
The post The (more) modern drone: Which one(s) do I now own? appeared first on EDN.
The 2024 declaration campaign has concluded
This year, almost 627,000 people who perform the functions of the state or local self-government filed their declarations. This is an important indicator: despite all the challenges, a significant number of officials once again confirmed their readiness to act openly and transparently.
[draft] Collective Agreement of the National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute" for the period from April 2025 to April 2030
The collective agreement has been concluded in accordance with current legislation, including the Laws of Ukraine "On Collective Agreements and Accords", "On Education", "On Higher Education", "On Remuneration of Labor", "On Labor Protection", "On Vacations", "On Trade Unions, Their Rights and Guarantees of Activity", the Labor Code of Ukraine, the General and Sectoral Agreements, and other acts.
PhotonDelta and Silicon Catalyst collaborate to drive innovation for early-stage photonic startups
Optical inspection system for a complete 3D sintering paste check
Future-proof quality assurance for power electronics through sintering paste inspection with multi-line SPI
Higher operating temperatures, thinner interconnection layers, 10 times the longevity: the advantages of sintering pastes over solder pastes have long been recognized in the field of power electronics. Not least for this reason, sintering pastes are preferred in system-critical technologies such as green energy and e-mobility. Here, for example, IGBTs have become the central component in converters for all types of electric drives: wind turbines, solar power generation, battery charging; hardly any future technology would be conceivable without the "all-rounder" sintering paste. However, sinter paste printing is more prone to errors than solder paste printing. Furthermore, defects are more difficult to detect and rectify, and the result can be critical failures in the field. To avoid this, GÖPEL electronic has now added an inspection system specifically for sintering paste to its Multi Line platform.
The Multi Line SPI is a cost-effective 3D inline system for automated inspection of sintering paste. Based on the Multi Line platform, it is a customized solution for small and medium-sized companies with high quality standards; it can also be used for solder pastes. The telecentric 3D camera module inspects solder and sinter paste without shadows for shape, area, coplanarity, height, bridges, volume, and X/Y offset. Equipped with two digital fringe projectors for shadow-free 3D image capture, it offers a resolution of 15 µm/pixel, a height measurement accuracy of 1 µm, and a height resolution of 0.2 µm. This means that measurement values can be obtained precisely and repeatably.
Generating an inspection program for sinter paste takes only a few minutes: CAD data or a reference layout is sufficient. Users who already use GÖPEL electronic software for programming SMD, THT, or CCI systems can learn the additional sinter paste functions with little training. In addition, the data import, verification, and statistics software is identical to that of the other inspection systems from GÖPEL electronic. This is where the platform concept of the Multi Line series really pays off: the uniform, powerful operating and evaluation software across all devices reduces training and programming effort, enabling flexible and optimized staff deployment planning.
The post Optical inspection system for a complete 3D sintering paste check appeared first on ELE Times.
Keysight and SAMEER Collaborate to Advance 6G and Healthcare Innovation in India
- Collaboration brings together expertise and cutting-edge technology to drive innovation
- Provides essential 6G research infrastructure to bolster the ‘Made in India’ vision
Keysight Technologies, Inc. has announced that it signed a Memorandum of Understanding with the Society for Applied Microwave Electronics Engineering & Research (SAMEER), a premier R&D organisation under the Ministry of Electronics and Information Technology, Government of India, to drive healthcare and 6G innovation across India. As part of the collaboration, Keysight and SAMEER have proposed to create a healthcare center of excellence along with a research lab to drive 6G communication research.
To scale up the Indian presence in 6G and other critical areas such as medical electronics, there is an urgent need to build strong expertise and an ecosystem in India. Keysight and SAMEER will work together to address this. Building on existing work, the collaboration will focus on several key technology areas to meet the growing demand for innovation in both strategic and civilian applications.
Under the MoU, Keysight will enable SAMEER to develop and demonstrate fully functional labs that support 6G research and development across various India institutions. Plans also include establishing a center of excellence for healthcare focused on advancing magnetic resonance imaging (MRI) technologies. Together, the two organizations will work on driving innovation and supporting the ‘Made in India’ initiative which is designed to generate local growth and development.
Dr P. Hanumantha Rao, Director General at SAMEER, said: “We are leading 6G research in India after the successful demonstration of our end-to-end 5G stack along with IIT Madras. The proposed collaboration with Keysight will enhance this further and enable Indian research and academic institutions to get access to next-generation technologies.”
SAMEER’s contributions in healthcare include a fully indigenous linear accelerator for cancer therapy and a fully functional, affordable 1.5T MRI. The MoU will enable Keysight to leverage the products developed by SAMEER for democratization across India and to continue research by complementing each other’s capabilities.
Sudhir Tangri, Vice President of Asia Pacific Sales and Country General Manager of India at Keysight said: “Establishing a center of excellence and building 6G research areas is a critical step towards driving innovation in India. Through this collaboration we are proud to provide the much-needed infrastructure and technology that will empower future research across healthcare and other sectors. SAMEER is a leader in its field, and we are excited to work together to accelerate our 6G and healthcare vision.”
The post Keysight and SAMEER Collaborate to Advance 6G and Healthcare Innovation in India appeared first on ELE Times.
Our university will cooperate with the Lithuanian company Teltonika Networks
🇺🇦🇱🇹 Teltonika Networks specializes in the development and production of high-quality networking equipment for the industrial Internet of Things and innovative solutions in telecommunications.
Igor Sikorsky Kyiv Polytechnic Institute took part in the Space for Ukraine forum
🛰 The Space for Ukraine forum was organized by the Ministry of Defence of Ukraine. Participants included representatives of government and business, the commander of the French Space Command, the commander of the UK Space Command, and leading specialists in space technologies.