Feed aggregator

BluGlass gives update on September-quarter activities and results

Semiconductor today - Thu, 11/06/2025 - 11:50
BluGlass Ltd of Silverwater, Australia — which develops and manufactures gallium nitride (GaN) blue laser diodes based on its proprietary low-temperature, low-hydrogen remote-plasma chemical vapor deposition (RPCVD) technology — has provided the following update and financial report for its fiscal first-quarter 2026 (to end-September 2025)...

Igor Sikorsky Kyiv Polytechnic Institute and Lviv Polytechnic deepen cooperation

News - Thu, 11/06/2025 - 11:46

During a visit to KPI, representatives of Lviv Polytechnic learned about our university's experience in digitalizing administrative processes and automating the accounting of material and technical resources.

The shift from Industry 4.0 to 5.0

EDN Network - Thu, 11/06/2025 - 02:35
Ten-year humanoid robot hardware market forecast (2025–2035).

The future of the global industry will be defined by the integration of AI with robotics and IoT technologies. AI-enabled industrial automation will transform manufacturing and logistics across automotive, semiconductors, batteries, and beyond. IDTechEx predicts that the global sensor market will reach $255 billion by 2036, with sensors for robotics, automation, and IoT poised as key growth markets.

From edge AI and IoT sensors for connected devices and equipment (Industry 4.0) to collaborative robots, or cobots (Industry 5.0), technology innovations are central to future industrial automation solutions. As industry megatrends and enabling technologies increasingly overlap, it’s worth evaluating the distinct value propositions of Industry 4.0 and Industry 5.0, as well as the roadmap for key product adoption in each.

Sensor and robotics technology roadmap for Industry 4.0 and Industry 5.0 (Source: IDTechEx)

What are Industry 4.0 and Industry 5.0?

Industry 4.0 emerged in the 2010s with IoT and cloud computing, transforming traditionally logic-controlled automated production systems into smart factories. Miniaturized sensors and industrial robotics enable repetitive tasks to be automated in a controlled and predictable manner. IoT networking, cloud processing, and real-time data management unlock productivity gains in smart factories through efficiency improvements, downtime reductions, and optimized supply chain integration.

Industry 4.0 technologies have gained significant traction in many high-volume, low-mix product markets, including consumer electronics, automotive, logistics, and food and beverage. Industrial robots have been key to automation in many sectors, excelling at tasks such as material handling, palletizing, and quality inspection in manufacturing and assembly applications.

If Industry 4.0 is characterized by cyber-physical systems, then Industry 5.0 is all about human-robot collaboration. Collaborative and humanoid robots better accommodate changing tasks and facilitate safer, more natural interaction with human operators—areas where traditional robots struggle.

Cobots are designed to work closely with humans without the need for direct control. AI models trained on tailored, application-specific datasets are employed to make cobots fully autonomous, with self-learning and intelligent behaviors.

The distinction between Industry 4.0 and Industry 5.0 technologies is ambiguous, particularly as products in both categories increasingly integrate AI. Nevertheless, technology innovations continue to enable the next generation of Industry 4.0 and Industry 5.0 products.

Intelligent sensors for Industry 4.0

In 2025, the big trend within Industry 4.0 is the move from connected to intelligent industrial systems using AI. AI models built and trained on real operational data are being embedded into sensors and IoT solutions to automate decision-making and offer predictive functionality. Edge AI sensors, digital twinning, and smart wearable devices are all key enabling technologies promising to boost productivity.

Edge-AI-enabled sensors are hitting the market, employing on-board neural processor units with AI models to carry out data inference and prediction on endpoint devices. Edge AI cameras capable of image classification, segmentation, and object detection are being commercialized for machine vision applications. Sony’s IMX500 edge AI camera module has seen early adoption in retail, factory, and logistics markets, while Cognex’s AI-powered 3D vision system gains traction for in-line quality inspection in EV battery and PCB manufacturing.

With over 15% of production costs arising from equipment failure in many industries, edge AI sensors monitoring equipment performance and automating maintenance can mitigate risks. Analog Devices, STMicroelectronics, TDK, and Siemens all now offer in-sensor or co-packaged machine-learning vibration and temperature sensors for industrial predictive maintenance. Predictive maintenance has been slow to take off, however, with industrial equipment suppliers and infrastructure service providers (rail, wind, and marine assets) being early adopters.
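As a rough illustration of the kind of on-sensor decision such products automate (a minimal sketch in plain Python, not based on any specific vendor's firmware; the sample data and 50% margin are hypothetical, and commercial edge-AI parts use trained ML models rather than a fixed threshold):

import math

def rms(window):
    """Root-mean-square of one window of accelerometer samples."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def needs_maintenance(baseline_windows, new_window, margin=1.5):
    """Flag the asset when the new vibration RMS exceeds the learned baseline
    by a hypothetical 50% margin."""
    baseline = sum(rms(w) for w in baseline_windows) / len(baseline_windows)
    return rms(new_window) > margin * baseline

# Hypothetical accelerometer windows (arbitrary units)
healthy = [[0.9, -1.1, 1.0, -0.8], [1.0, -0.9, 1.1, -1.0]]
print(needs_maintenance(healthy, [2.1, -2.3, 2.0, -1.9]))  # prints True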

Simulating and modeling industrial operational environments is becoming more feasible and valuable as sensor data volume grows. Digital twins can be built using camera and position sensor data collected on endpoint devices. Digital twins enable performance simulation and maintenance forecasting to maximize productivity and minimize operational downtime. Proof-of-concept use cases include remote equipment operation, digital staff training, and custom AI model development.

Beyond robotics and automation, industrial worker safety is still a challenge. The National Safety Council estimates that the total cost of U.S. work injuries was $177 billion in 2023, with high incident rates in construction, logistics, agriculture, and manufacturing industries.

Smart personal protection equipment with temperature, motion, and gas sensors can monitor worker activity and environmental conditions, giving managers oversight to ensure safety. Wearable IoT skin patches offering hydration and sweat analysis are also emerging in the mining and oil and gas industries, reducing risk by proactively addressing the physiological and cognitive effects of dehydration.

Human-robot collaboration for Industry 5.0

Industry 4.0 relies heavily on automation, making it ideal for high-volume, low-mix manufacturing. As the transition to Industry 5.0 takes place, warehouse operators are seeking greater flexibility in their supply chains to support low-volume, high-mix production.

A defining aspect of Industry 5.0 is human-robot collaboration, with cobots being a core component of this concept. Humanoid robots are also designed to work alongside humans, aligning them with Industry 5.0 principles. However, as of late 2025, their technology and safety standards are still developing, so in most factory settings, they are deployed with physical separation from human workers.

Ten-year humanoid robot hardware market forecast (2025–2035) (Source: IDTechEx)

Humanoid robots, widely perceived as embodied AI, are projected to grow rapidly over the next 10 years. IDTechEx forecasts that the humanoid robot hardware market is set to take off in 2026, growing to reach $25 billion by 2035. This surge is fueled by major players like Tesla and BYD, who plan a more than tenfold expansion in humanoid deployment in their factories between 2025 and 2026.

As of 2025, despite significant hype around humanoid robots, there are still limited real-world applications where they fit. Among industrial applications, the automotive and logistics sectors have attracted the most interest. In the short- to mid-term, the automotive industry is expected to lead humanoid adoption, driven by the historic success of automation, large-scale production demands, and stronger cost-negotiation power.

Lightweight and slow-moving cobots, designed to work next to human operators without physical separation, have also gained significant momentum in recent years. Cobots are ideal options for small and mid-sized enterprises due to their low cost, small footprint, ease of programming, flexibility, and low power consumption.

Cobots could tackle a key industry pain point: the risk of shutdown to entire production lines when a single industrial robot malfunctions, due to the need to ensure human operators can safely enter robot working zones for inspection. Cobots could be an ideal solution to mitigate this, as they can work closely and flexibly with human operators.

The most compelling application of cobots is in the automotive industry for assembly, welding, surface polishing, and screwing. Cobots are also attractive in high-mix, low-volume production industries such as food and beverage.

Limited technical capabilities and high costs currently restrict wider cobot adoption. However, alternative business models are emerging to address these challenges, including cobot-as-a-service and try-first-and-buy-later models.

Outlook for Industry X.0

AI, IoT, and robotics are mutually enabling technologies, with industrial automation applications positioned firmly within this nexus and poised to capitalize on advancements.

Key challenges for Industry X.0 technologies are long return-on-investment (ROI) timelines and bespoke application requirements. Industrial IoT sensor networks take an average of two years to generate returns, while humanoid robots in warehouses require 18 months of pilot testing before broader use. However, economies-of-scale cost reductions and supporting infrastructure can ease ROI concerns, while long-term productivity gains will also offset high upfront costs.

The next generation of industrial IoT technology will leverage AI to deliver productivity improvements through greater device intelligence and automated decision-making. With IDTechEx forecasting that humanoid and cobot adoption will take off by the end of the decade, the 2030s are set to be defined by Industry 5.0.

The post The shift from Industry 4.0 to 5.0 appeared first on EDN.

New Arduino Nesso N1 Appears in FCC Filing With Full Schematics Ahead of Release

Reddit:Electronics - Thu, 11/06/2025 - 02:14

FCC ID: 2AN9S-TPX00227

Arduino’s upcoming Nesso N1 has appeared in a recent FCC filing, offering one of the most detailed looks at the device so far. Although the board has been announced, it has not yet reached retail, and the filing confirms that development is nearing completion. The documents include complete schematics, which is uncommon and provides an unusually transparent view of the design.

The Nesso N1 is based on an ESP32-C6 controller with support for Wi-Fi, Bluetooth Low Energy, and LoRa at 915 MHz. It includes a 1.14-inch color touchscreen, detachable antennas, a BMI270 motion sensor, Grove and Qwiic expansion ports, and a built-in 200 mAh battery for portable use. Internal and external photos show a compact layout focused on prototyping flexibility.

submitted by /u/Electrical-Plum-751

Infineon and SolarEdge collaborate on high-efficiency power infrastructure for AI data centers

Semiconductor today - Wed, 11/05/2025 - 20:43
Infineon Technologies AG of Munich, Germany and smart energy technology firm SolarEdge Technologies Inc of Milpitas, CA, USA are collaborating to advance SolarEdge’s solid-state transformer (SST) platform for next-generation AI and hyperscale data centers. The collaboration focuses on the joint design, optimization and validation of a modular 2–5MW SST building block. It combines Infineon’s silicon carbide (SiC) switching technology with SolarEdge’s proven power-conversion and control topology set to deliver >99% efficiency, supporting the global shift towards high-efficiency, DC-based data-center infrastructure...

Veeco receives Propel300 MOCVD system order from GaN-on-Si power semiconductor IDM

Semiconductor today - Wed, 11/05/2025 - 20:28
Epitaxial deposition and process equipment maker Veeco Instruments Inc of Plainview, NY, USA has received an order for a Propel300 metal-organic chemical vapor deposition (MOCVD) system from a “major power semiconductor integrated device manufacturer” (IDM) for gallium nitride (GaN) epitaxy on 300mm silicon (Si) wafers...

A precision, voltage-compliant current source

EDN Network - Wed, 11/05/2025 - 17:00
A simple current source

It has long been known that the simple combination of a depletion-mode MOSFET (and before these were available, a JFET) and a resistor made a simple, serviceable current source such as that seen on the right side of Figure 1.

Figure 1 Current versus voltage characteristics of a DN2540 depletion mode MOSFET and the circuit of a simple current source made with one, both courtesy of Microchip.

Wow the engineering world with your unique design: Design Ideas Submission Guide

This is evident from the figure’s left side, which shows the drain current versus drain voltage characteristics for various gate-source voltages of a DN2540 MOSFET. Once the drain voltage rises above a certain point, further increases cause only very slight rises in drain current (not visible on this scale). This simple circuit might suffice for many applications, except for the fact that the VGS required for a specific drain current will vary over temperature and production lots. Something else is needed to produce a drain current with any degree of precision.

Alternative current source circuits

And so, we might turn to something like the circuits of Figure 2.

Figure 2 A current source with a more predictable current, left (IXYS) and a voltage regulator which could be employed as a current source with a more predictable current, right (TI). Source: IXYS and Texas Instruments

In these circuits, we see members of the ‘431 family regulating MOSFET source and BJT emitter voltages. The Texas Instruments circuit on the right demonstrates the need for an oscillation-prevention capacitor, and my experience has been that this is also needed with the IXYS circuit on the left.

Although RL1, RS, and R1 pass precise, well-regulated currents to the transistors in their respective circuits, resistors RB and R do not. RB’s current is subject to a not well-controlled VGS, and R’s is affected by whatever variations there might be in VBATT.

The MOSFET circuit is a true two-terminal current source, so a load can be connected in series with the current source at its positive or negative terminal. But then the load is always subjected to the poorly-controlled RB current.

The BJT is part of a three-terminal circuit, and for a load to avoid the VBATT-influenced current through R, it could only be connected between VBATT and the BJT collectors. Even so, variations in VBATT could produce currents, which lead to voltages that are not entirely rejected at the TLA431 cathode, and so would produce uncontrolled currents in the BJTs and therefore in the load.

A true two-terminal current source

Figure 3 addresses these limitations in circuit performance. In analyzing it, as always, I rely on datasheet maximum and minimum values whenever they are available, but resort to and state that I’m employing typical values when they are not.

Figure 3 This circuit delivers predictable currents to U1 and M1 and therefore to a load. It’s a true two-terminal current source which accommodates load connection to both low and high side.

U1 establishes 1.24 · ( 1 + R4 / R3 ) volts at VS and adds a current of VS / (R4 + R3) to the MOSFET drain.

An additional drain current comes from:

2 · ( VS – VBE(Q2) ) / ( R2 + R5 )

The “2” is due to the fact that R2 and R1 currents are identical (discounting the Early effect on Q1). The current through R1 is nearly constant regardless of the value of VGS. This current provides what U1 needs to operate.
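As a quick numerical sketch of those two expressions (plain Python; the resistor values and the 0.65 V VBE below are hypothetical placeholders, not the values used in Figure 3):

def drain_current(r2, r3, r4, r5, v_ref=1.24, v_be=0.65):
    """Total DC drain current per the two expressions above: the R3/R4 string
    current plus twice the (VS - VBE) / (R2 + R5) current. v_be is a typical
    base-emitter drop; all resistor values are placeholders."""
    vs = v_ref * (1 + r4 / r3)
    i_total = vs / (r3 + r4) + 2 * (vs - v_be) / (r2 + r5)
    return vs, i_total

vs, i_total = drain_current(r2=1e3, r3=1.24e3, r4=3.76e3, r5=28e3)
print(f"VS = {vs:.2f} V, drain current = {i_total * 1e3:.2f} mA")  # 5.00 V, 1.30 mA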

The precision of the total DC current through the load is limited by the tolerances of R1 through R5, the U1 reference’s accuracy, and the value of the BJT’s temperature-dependent VBE drop. (U1’s maximum feedback reference current over its operating temperature is a negligible 1 µA.)

U1 requires a minimum of 100 µA to operate, so R5 is chosen to provide it with 150 µA. Per its On Semi datasheet, at this current and over Q1’s operating temperature range, the 2N3906’s typical VCE saturation voltage is 50 mV. Add that to the 15 mV drop across R1 for a total of 65 mV, which is the smallest achievable VSG value.

Accordingly, we are some small but indeterminate amount shy of the maximum drain current guaranteed for the part (at 25°C, 25 V VDS, and 0 V VGS only) by its datasheet. At the other extreme, under otherwise identical conditions, a VGS of -3.5 V will guarantee a drain current of less than 10 µA. For such a current, U1 and the circuit as a whole will operate properly at a VS of 5 VDC.

Higher temperatures might require a more negative VGS by a maximum of -4.5 mV/°C and, therefore, possibly larger values of VS and, accordingly, of R5. This would be to ensure that U1’s cathode voltage remains above 1.24 V under all conditions.

D2 is selected for a Zener voltage which, when added to D1’s voltage drop, is greater than VS, but is less than the lesser of the maximum allowed cathode-anode voltage of U1 (18 V) and the maximum allowed VGS of M1 (20 V). D1‘s small capacitance shields the rest of the circuit from the Zener capacitance, which might otherwise induce oscillations. The diodes are probably not needed, but they provide cheap protection. Neither passes current or affects circuit performance during normal operation. C1 ensures stable operation.

U1 strives to establish a constant voltage at VS regardless of the DC and AC voltage variations of the unregulated supply V1. Working against it in descending order of impact are the magnitude of the conductance of the R3 + R4 resistor string, U1‘s falling loop gain with frequency, and M1’s large Rds and small Cds. Still, the circuit built around the 400-V VDS-capable M1 achieves some surprisingly good results in the test circuit of Figure 4.

Figure 4 Circuit used to test the impedance of the Figure 3 current source.

Table 1 and Figure 5 list and display some measurements. Impedances in megohms are calculated using the formula RLOAD · 10^(-dB(VLOAD/VGEN)/20) / 1E6.

Table 1 Impedances of the current source of Figure 3 at various frequencies, evaluated using the circuit of Figure 4.

Figure 5 Plotted curves of Figure 3 current source impedance from the data in Table 1.
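As a quick numerical check of that formula (the 1 kΩ load and the 80 dB attenuation figure here are hypothetical, not taken from Table 1):

def source_impedance_megohm(r_load_ohm, atten_db):
    """Z [megohm] = RLOAD * 10^(-dB(VLOAD/VGEN)/20) / 1E6, where atten_db is
    the measured 20*log10(VLOAD/VGEN), a negative number."""
    return r_load_ohm * 10 ** (-atten_db / 20) / 1e6

# Hypothetical reading: 1 kOhm load and an 80 dB drop from VGEN to VLOAD
print(source_impedance_megohm(1_000, -80.0))  # prints 10.0 (megohms)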

Observations

There are several conclusions that can be drawn from the curves in Figure 5. The major one is that at low frequencies, the AC impedance Z is roughly inversely proportional to current. A more insightful way to express this is that Z is proportional to R3 + R4, which sets the current. With larger resistance, current variations produce larger voltages for the ‘431 IC to use for regulation; that is, there’s more gain available in the circuit’s feedback loop to increase impedance.

Another phenomenon is that in the 1 and 10-mA current curves, the impedance rises much more quickly as frequency increases above 1 kHz. This is consistent with the fact that the TLVH431B gain is more or less flat from DC to 1 kHz and falls thereafter. The following phenomenon masks this effect somewhat at the higher 100 mA current.

Finally, at all currents, there is an advantage to operating at higher values of VDS. This is especially apparent at the highest current, 100 mA. And this is consistent with the fact that for the characteristic curves of the DN2540 MOSFET seen in Figure 1, higher VDS voltages are required at higher currents before the curves become horizontal.

Precision current source

A precision, high-impedance, moderate-to-high voltage-compliant current source has been introduced. Its two-terminal nature means that a load in series with it can be connected to the source’s positive or negative end. Unlike earlier designs, the ‘431 regulator IC’s operating current is independent of both the source’s supply voltage and its MOSFET’s VGS voltage. The result is a more predictable DC current as well as higher AC impedances than would otherwise be obtainable.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content

The post A precision, voltage-compliant current source appeared first on EDN.

Back EMF and electric motors: From fundamentals to real-world applications

EDN Network - Wed, 11/05/2025 - 16:33

Let us begin this session by revisiting a nostalgic motor control IC—the AN6651—designed for rotating speed control of compact DC motors used in tape recorders, record players, and similar devices.

The figure below shows the AN6651’s block diagram and a typical application circuit, both sourced from a 1997 Panasonic datasheet. These retouched visuals offer a glimpse into the IC’s internal architecture and its practical role in analog motor control.

Figure 1 Here is the block diagram and application circuit of the AN6651 motor control IC. Source: Panasonic

Luckily, for those still curious to give it a try, the UTC AN6651—today’s counterpart to the legacy AN6651—is readily available from several sources.

Before we dive deeper, here is a quick question—why did I choose to begin with the AN6651? It’s simply because this legacy chip elegantly controls motor speed using back electromotive force (EMF) feedback—a clever analog technique that keeps rotation stable without relying on external sensors.

In analog systems, this approach is especially elegant: the IC monitors the voltage generated by the motor itself (its back EMF), which is proportional to speed. By adjusting the drive current to maintain a target EMF, the chip effectively regulates motor speed under varying loads and supply conditions.

And yes, this post dives into back EMF (BEMF) and electric motors. Let’s get started.

Understanding back EMF in everyday motors

A spinning motor also acts like a generator, as its coils moving through magnetic fields induce an opposing voltage called back EMF. This back EMF reduces the current flowing through the motor once it’s up to speed.

At that point, only enough current flows to overcome friction and do useful work—far less than the surge needed to get it spinning. Actually, it takes very little time for the motor to reach operating speed—and for the current to drop from its high initial value.

This self-regulating behaviour of back EMF is central to motor efficiency and protection. As the mechanical load rises and the motor begins to slow, back EMF decreases, allowing more current to flow and generate the required torque. Under light or no-load conditions, the motor speeds up, increasing back EMF and limiting current draw.

This dynamic ensures that the motor adjusts its power consumption based on demand, preventing excessive current that could overheat the windings or damage components. In essence, back EMF reflects motor speed and actively stabilizes performance, a principle rooted in classical DC motor theory.
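In equation form (standard brushed DC motor relations, where ke is the back-EMF constant, ω the shaft speed, and R the winding resistance; none of these symbols come from the AN6651 datasheet):

Eb = ke · ω
I = (Vsupply – Eb) / R = (Vsupply – ke · ω) / R

A drop in ω therefore lowers Eb and raises I (and torque), while a rise in ω does the opposite, which is exactly the self-regulation described above.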

It’s worth noting that back EMF plays a critical role as a natural current limiter during normal motor operation. When motor speed drops—whether due to a brownout or excessive mechanical loading—the resulting reduction in back EMF allows more current to flow through the windings.

However, if left unchecked, this surge can lead to overheating and permanent damage. Maintaining adequate speed and load conditions helps preserve the protective function of back EMF, ensuring safe and efficient motor performance.

Armature feedback method in motion control

Armature feedback is a form of self-regulating (passive) speed control that uses back EMF and has been employed for decades in audio tape transport mechanisms, luxury toys, and other purpose-built devices. It remains widely used in low-cost motor control systems where precision sensors or encoders are impractical.

This approach leverages the motor’s ability to act as a generator: as the motor rotates, it produces a voltage proportional to its speed. Like any generator, the output also depends on the strength of the magnetic field flux.

Now let’s take a quick look at how to measure back EMF using a minimalist hardware setup.

Figure 2 The above blueprint presents a minimalist hardware setup for measuring the back EMF of a DC motor. Source: Author

Just to elaborate, when the MOSFET is ON, current flows from the power supply through the motor to ground, during which back EMF cannot be measured. When the MOSFET is OFF, the motor’s negative terminal floats, allowing back EMF to be measured. A microcontroller can generate the required PWM signal to drive the MOSFET.

Likewise, its onboard analog-to-digital converter (ADC) can measure the back EMF voltage relative to ground for further processing. Note that since the ADC measures voltage relative to ground, a lower input value corresponds to a higher back EMF.

That is, measuring the motor’s speed using back EMF involves two alternating steps: first, run the motor for a brief period; then, remove the drive signal. Due to inertia in the motor and mechanical system, the rotor continues to spin momentarily, and this coasting phase provides a window to sample the back EMF voltage and estimate the motor’s rotational speed.

The reference signal can then be routed to the PWM section, where the drive power is fine-tuned to maintain steady motor operation.

Still, in most cases, since the PWM driver outputs armature voltage as pulses, back EMF can also be measured during the intervals between those pulses. Note that when the transistor switches off, a strong inductive spike is generated, and the recirculation current flows through the antiparallel flyback diode. Therefore, a brief delay is needed to allow the back EMF voltage to settle before measurement.
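Putting those pieces together, here is a minimal, hardware-agnostic Python sketch of the measure-and-adjust loop. The pwm_set_duty and adc_read_volts helpers are hypothetical placeholders for whatever MCU API is actually used, and the supply voltage, delays, target, and gain are purely illustrative:

import time

V_SUPPLY = 12.0      # motor supply voltage (hypothetical)
SETTLE_S = 0.0005    # wait for the flyback spike to decay before sampling
TARGET_BEMF = 6.0    # back EMF corresponding to the desired speed (hypothetical)
KP = 0.02            # proportional gain for the duty-cycle correction

def pwm_set_duty(duty):
    """Placeholder for the MCU PWM driver (0.0-1.0 duty on the low-side MOSFET)."""

def adc_read_volts():
    """Placeholder for the MCU ADC reading the motor's floating negative terminal."""
    return 6.5  # dummy value so the sketch runs stand-alone

duty = 0.5
for _ in range(200):
    pwm_set_duty(duty)       # drive phase: MOSFET switching, motor powered
    time.sleep(0.02)
    pwm_set_duty(0.0)        # coast phase: the negative terminal floats
    time.sleep(SETTLE_S)     # let the inductive spike recirculate through the diode
    bemf = V_SUPPLY - adc_read_volts()   # lower ADC reading = higher back EMF
    duty += KP * (TARGET_BEMF - bemf)    # nudge drive power toward the target speed
    duty = min(max(duty, 0.0), 1.0)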

Notably, a high-side P-channel MOSFET can be used as a motor driver transistor instead of a low-side N-channel MOSFET. Likewise, discrete op-amps—rather than dedicated ICs—can also govern motor speed, but that is a topic for another day.

And while this is merely a blueprint, its flexibility allows it to be readily adapted for measuring back EMF—and thus the RPM—of nearly any DC motor. With just a few tweaks, this low-cost approach can support a wide range of motor control applications—sensorless, scalable, and easy to implement. Naturally, it takes time, technical skill, and a bit of patience—but you can master it.

Back EMF and the BLDC motor

Back EMF in BLDC motors acts like a built-in feedback system, helping the motor regulate its speed, boost efficiency, and support smooth sensorless control. The shape of this feedback signal depends on how the motor is designed, with trapezoidal and sinusoidal waveforms being the most common.

While challenges like low-speed control and waveform distortion can arise, understanding and managing back EMF effectively opens the door to unlocking the full potential of BLDC motors in everything from fans to drones to electric vehicles.

So, what are the key effects of back EMF in BLDC motors? Let us take a closer look:

  • Design influence: The shape of the back EMF waveform—trapezoidal or sinusoidal—directly affects control strategy, acoustic noise, and how smoothly the motor runs. Trapezoidal designs suit simpler, cost-effective controllers, while sinusoidal profiles offer quieter, more refined motion.
  • Position estimation: Back EMF is widely used in sensorless control algorithms to estimate rotor position.
  • Speed control: Back EMF is directly tied to rotor speed, making it a reliable signal for regulating motor speed without external sensors.
  • Speed limitation: Back EMF eventually balances the supply voltage, limiting further acceleration unless voltage is increased.
  • Current modulation: As the motor spins faster, back EMF increases, reducing the effective voltage across the windings and limiting current flow.
  • Torque impact: Since back EMF opposes the applied voltage, it affects torque production. At high speeds, stronger back EMF draws less current, resulting in lower torque.
  • Efficiency optimization: Aligning commutation with back EMF waveform improves performance and reduces losses.
  • Regenerative braking: In some systems, back EMF is harnessed during braking to feed energy back into the power supply or battery, a valuable feature in electric vehicles and battery-powered devices where efficiency matters.

Oh, I nearly skipped over a few clever tricks that make BLDC motor control even more efficient. One of them is back EMF zero crossing—a sensorless technique where the controller detects when the voltage of an unpowered phase crosses zero and uses it to time commutation events without physical sensors. To avoid false triggers from electrical noise or switching artifacts, this signal often needs debouncing, either through filtering or timing thresholds.
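As a rough illustration of the debounce idea (plain Python with a made-up sample stream rather than real ADC data; the 6 V half-bus level and three-sample debounce count are arbitrary):

def detect_zero_crossings(samples, v_half_bus, debounce=3):
    """Return sample indices where the floating-phase voltage crosses half the
    bus voltage and stays on the new side for `debounce` consecutive samples,
    rejecting brief noise or switching blips."""
    crossings = []
    confirmed_side = samples[0] > v_half_bus
    candidate_side, run = confirmed_side, 0
    for i, v in enumerate(samples):
        side = v > v_half_bus
        if side == confirmed_side:
            candidate_side, run = confirmed_side, 0
        else:
            if side == candidate_side:
                run += 1
            else:
                candidate_side, run = side, 1
            if run >= debounce:
                confirmed_side = side
                crossings.append(i - debounce + 1)  # approximate crossing sample
                run = 0
    return crossings

# Hypothetical sampled phase voltage (V); 6 V is half the bus, 5.9 V is a noise blip
phase = [11, 10, 9, 8, 7, 6.2, 5.9, 6.1, 5.5, 5.0, 4.2, 3.5, 3.0, 2.5]
print(detect_zero_crossings(phase, v_half_bus=6.0))  # prints [8]

A confirmed crossing is then typically used to schedule the next commutation step, commonly about 30 electrical degrees later.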

But this method does not work at startup, when the rotor is not spinning fast enough to generate usable back EMF. That is where open-loop acceleration comes in: the motor is driven with fixed timing until it reaches a speed where back EMF becomes detectable and closed-loop control can take over.

For smoother and more precise performance, field-oriented control (FOC) goes a step further. It transforms motor currents into a rotating reference frame, enabling accurate torque and flux control. Though traditionally used in permanent magnet synchronous motors (PMSMs), FOC is increasingly applied to sinusoidal BLDC motors for quieter, more refined motion.

A vast number of ICs nowadays make sensorless motor control feel like a walk in the park. As an example, below you will find the application schematic of the DRV10983 motor IC, which elegantly integrates power MOSFETs for driving a three-phase sensorless BLDC motor.

Figure 3 Application schematic of the DRV10983 chip, illustrating its function as a three-phase sensorless motor driver with integrated power MOSFETs. Source: Texas Instruments

That wraps things up for now. I’ve talked too much, but there is plenty more to uncover. If this did not quench your thirst, stay tuned—more insights are brewing.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Back EMF and electric motors: From fundamentals to real-world applications appeared first on EDN.

EPC launches 3-phase BLDC motor drive inverter for robot joints and UAVs

Semiconductor today - Wed, 11/05/2025 - 12:17
Efficient Power Conversion Corp (EPC) of El Segundo, CA, USA — which makes enhancement-mode gallium nitride on silicon (eGaN) power field-effect transistors (FETs) and integrated circuits for power management applications — has launched the EPC91120, a high-performance 3-phase brushless DC (BLDC) motor drive inverter optimized for humanoid robot joints. Featuring EPC’s EPC23102 ePower Stage IC, the EPC91120 delivers what is claimed to be superior efficiency, high power density, and precise motion control in a compact 32mm-diameter footprint designed to integrate directly within robotic motor assemblies...

Trilateral meeting for energy efficiency

News - Wed, 11/05/2025 - 11:52

Igor Sikorsky Kyiv Polytechnic Institute, 🇩🇪 GIZ (the German Agency for International Cooperation), and the Fincord-Polytech Science Park are joining forces to develop the energy sector.

Beyond the current smart grid management systems

EDN Network - Wed, 11/05/2025 - 09:07

Modernizing the electric grid involves more than upgrading control systems with sophisticated software—it requires embedding sensors and automated controls across the entire system. It’s not only the digital brains that manage the network but also the physical devices, like the motors that automate switch operations, which serve as the system’s hands.

Only by integrating sensors and robust controls throughout the entire grid can we fully realize the vision of a smart, flexible, high-capacity, efficient, and reliable power infrastructure.

Source: Bison

The drive to modernize the power grid

The need for increased capacity and greater flexibility is driving the modernization of the power grid. The rapid electrification of transportation and HVAC systems, combined with the rise of artificial intelligence (AI) technologies, is placing unprecedented demands on the energy network.

To meet these challenges, the grid must become more dynamic, capable of supporting new technologies while optimizing efficiency and ensuring reliability.

Integrating distributed energy resources (DERs), such as rooftop solar panels, battery storage, and wind farms, adds further complexity. So, advanced fault detection, self-healing capabilities, and more intelligent controls are essential to managing these resources effectively. Grid-level energy storage solutions, like battery buffers, are also critical for balancing supply and demand as the energy landscape evolves.

At the same time, the grid must address the growing need for resilience. Aging infrastructure, much of it built decades ago, struggles to meet today’s energy demands. Upgrading these outdated systems is vital to ensuring reliability and avoiding costly outages that disrupt businesses and communities.

The increasing frequency of climate-related disasters, including hurricanes, wildfires, and heat waves, highlights the urgency of a resilient grid. Therefore, modernizing the grid to withstand and recover from extreme weather events is no longer optional; it’s essential for the stability of our energy future.

The challenges posed by outdated infrastructure and climate-related disasters are accelerating the adoption of advanced technologies like Supervisory Control and Data Acquisition (SCADA) systems and Advanced Distribution Management Systems (ADMS). These innovations enhance grid visibility, allowing operators to monitor and manage energy flow in real time. This level of control is crucial for quickly addressing disruptions and preventing widespread outages.

Additionally, ADMS makes the grid smarter and more efficient by leveraging predictive analytics. ADMS can forecast energy demand, identify potential issues before they occur, and optimize the flow of electricity across the grid. It also supports conditional predictive maintenance, allowing utilities to address equipment issues proactively based on real-time data and usage patterns.

The key to successful digitization: Fully integrated systems

Smart grids follow the dynamics of the overall global shift toward digitization, aligning with advancements in Industry 4.0, where smart factories go beyond advanced software and analytics. It’s a complete system that integrates IoT sensors, robotics, and distributed controls throughout the production line, creating a setup that’s more productive, flexible, and transparent.

By offering real-time visibility into the production process and component conditions, these automated systems streamline operations, minimize downtime, boost productivity, lower labor costs, and enhance preventive maintenance.

Similarly, smart grids operate as fully integrated systems that rely heavily on a network of advanced sensors, controls, and communication technologies.

Devices such as phasor measurement units (PMUs) provide real-time monitoring of electrical grid stability. Other essential sensors include voltage and current transducers, power quality transducers, and temperature sensors, which monitor key parameters to detect and prevent potential issues. Smart meters also provide two-way communication between utilities and consumers, enabling real-time energy usage tracking, dynamic pricing, and demand response capabilities.

The role of motorized switch operators in grid automation

Among the various distributed components in today’s modern grid infrastructure, motorized switch operators are some of the most critical. These devices automate switchgear functions, eliminating the need for manual operation of equipment such as circuit breakers, load break switches, air- and SF6-insulated disconnects, and medium- or high-voltage sectionalizers.

By automating these processes, motorized switch operators enhance precision, speed, and safety. They reduce the risk of human error and ensure smoother grid operations. Moreover, these devices integrate seamlessly with SCADA and ADMS, enabling real-time monitoring and control for improved efficiency and reliability across the grid.

Motorized switch operators aren’t just valuable for supporting the smart grid; they also offer practical business benefits on their own, even without smart grid integration. Automating switch operations eliminates the need to send out trucks and personnel every time a switch needs to be operated. This saves significant time, reduces service disruptions, and lowers fleet operation and labor costs.

Motorized switch operators also improve safety. During storms or emergencies, sending crews to remote or hazardous locations can be dangerous. Underground vaults, for example, can flood, turning them into high-voltage safety hazards. Automating these tasks ensures that switches can be operated without putting workers at risk.

The importance of a reliable motor and gear system

When automating switchgear operation, the reliability of the motor and gear system is crucial. These components must perform flawlessly every time, ensuring consistent operation in all conditions, from routine use to extreme situations like storms or grid emergencies.

Given that the switchgear in power grids is designed to operate reliably for decades, motor operators must be engineered with exceptional durability and dependability to ensure they surpass these long-term performance requirements.

Standard off-the-shelf motors often fail to meet the specific demands of medium- and high-voltage switchgear systems. General-purpose motors are typically not engineered to withstand extreme environmental conditions or the high number of operational cycles required in the power grid.

On the other hand, utilities need to modernize infrastructure without expanding vault sizes, and switchgear OEMs want to enhance functionality without altering layouts. A “drop-in” solution offers a seamless and straightforward way to integrate advanced automation into existing systems, saving time, reducing costs, and minimizing downtime.

To meet the unique challenges of medium- and high-voltage switchgear, motor and gear systems must balance two critical constraints—compact size and limited amperage—while still delivering exceptional performance in speed and torque.

Here’s why these attributes matter:

  • Compact size: Space is at a premium in power grid applications, especially for retrofits where manual switchgear is being converted to automated systems. So, motors must fit within the existing contours and confined spaces of switchgear installations. Even for new equipment, utilities demand compact designs to avoid costly expansions of service vaults or installation areas.
  • Limited amperage draw: Motors often need to operate on as little as 5 amps, far less than what’s typical for other applications. Developing a motor and gear system that performs reliably within such constraints is essential to ensuring compatibility with power grid environments.
  • High speed: Fast operation is critical for the safe and effective functioning of switchgear. The ability to open and close switches rapidly minimizes the risk of dangerous electrical arcs, which can cause severe equipment damage, pose safety hazards, and lead to cascading power grid failures.
  • High torque: Overcoming the significant spring force of switchgear components requires motors with high torque. This ensures smooth and consistent operation, even under demanding conditions.

The challenge lies in meeting all four of these requirements. Compact size and low amperage requirements often compromise the speed and torque needed for reliable performance. That’s why motor and gear systems must be specifically engineered and rigorously tested to meet the stringent demands of medium- and high-voltage switchgear applications. Only purpose-built solutions can provide the durability, efficiency, and reliability required to support the long-term stability of the power grid.

Meeting environmental and installation demands

Beyond size, power, and performance considerations, motor and gear systems for medium- and high-voltage switchgear must also meet stringent environmental and installation requirements.

For example, these systems are often exposed to extreme weather conditions, requiring watertight designs to ensure durability in harsh environments. This is especially critical for applications where switchgear is housed in underground vaults that may be prone to flooding or moisture intrusion. Additionally, using specialized lubrication that performs well in both high and low temperature extremes is essential to maintain reliability and efficiency.

Equally important is the ease of installation. Rotary motors provide a significant advantage over linear actuators in this regard. Linear actuators require precise calibration, a process that is time-consuming, labor-intensive, and potentially error-prone; rotary motors eliminate this complexity. Their straightforward setup not only reduces installation time but also enhances reliability by eliminating the need for manual adjustments.

To address the diversity of designs in switchgear systems produced by various OEMs, it is essential to work with a motor and gear manufacturer capable of delivering customized solutions. Retrofits often demand a tailored approach due to the unique configurations and requirements of different equipment. Partnering with a company that not only offers bespoke solutions but also has deep expertise in power grid applications is critical.

Future-proofing systems with reliable automation

Automating switchgear operation is a vital step in advancing the modernization of power grids, forming a critical component of smart grid development. Reliable, high-performance motor operators enhance operational efficiency and ensure longevity, providing a solid foundation for evolving power systems.

No matter where a utility is in its modernization journey, investing in durable and efficient motorized switch operators delivers lasting value. This forward-thinking approach not only enhances current operations but also ensures systems are ready to adapt and evolve as modernization advances.

Gary Dorough has advanced from sales representative to sales director for the Western United States and Canada during his 25-year stint at Bison, an AMETEK business. His experience includes 30 years of utility industry collaboration on harmonics mitigation and 15 years developing automated DC motor operators for medium-voltage switchgear systems.

Related Content

The post Beyond the current smart grid management systems appeared first on EDN.

Evolving Priorities in Design Process of Electronic Devices

ELE Times - Wed, 11/05/2025 - 08:06

One of the earliest and most crucial stages of electronic device production is the design stage. It encompasses the creative, manual, and technical facets incorporated into an electronic device. The design stage allows manufacturers and developers to convert a textual system definition into a detailed and functional prototype before mass production. Almost all the functional requirements of an electronic device are addressed at the design stage itself.

Considering the bill of materials (BOM) and design for manufacturability (DFM) is crucial at this stage to maintain or improve quality while keeping the cost, expected features, and performance in check.

Fundamentals of the Design Process

  • Prior to investing in materials required for manufacturing, it is essential to establish a list of requirements. This helps the manufacturer understand the features required in the product. Similarly, it is essential to conduct thorough market research to identify market gaps and consumer requirements and to develop products that address consumer needs. A successful product is one that fulfills what the market of similar products lacks.
  • Subsequently, after the conceptualization is complete, the focus shifts to creating a design proposal and project plan. This defines the projected expenses involved in the manufacturing process and an approximate timeline, along with other design and manufacturing process segments.
  • A final electronic device comprises several small components, such as microcontrollers, displays, sensors, and memory, to name a few. Advances in technology allow designers to leverage advanced software such as electronic computer-aided design (ECAD) or electronic design automation (EDA) tools to create the schematic diagram. These help reduce the scope for error and act as catalysts for the design process.
  • Eventually, the detailed schematic proves beneficial for the next step, where it is transformed into a PCB layout.

Growing Trends in the Design Process

  • Advances in nanotechnology and microfabrication techniques have evolved the design process to allow further miniaturization with increased integration on chips. Design engineers can now add more features than before on a single chip while also reducing its size.
  • Present-day electronic designs increasingly incorporate renewable energy as the industry shifts away from fossil fuels. This change has forced designers to rethink the design of electronic devices while incorporating advanced features such as IoT-enabled efficiency.
  • The growing demand for sustainable devices has equally affected the design process, which now needs to include features that reduce greenhouse gas emissions as well as energy consumption. This has influenced electronics designers to modify power converters and motor drives to reduce energy loss and increase efficiency.
  • Contemporary products also require the integration of automation and robotics in both industrial and consumer electronic devices. The design process therefore has to assimilate these requirements to maintain the longevity of the device and allow easy adoption of advanced technology. The same goes for integration with artificial intelligence, a fast-growing trend that is bound to prove monumental in simplifying the design, operation, and use of electronic devices.
  • The major challenge in the design process is not the integration of such features but their human-friendly integration. Any feature in a device can fail to fulfill its purpose if it is not user-friendly; hence, the task falls on design engineers to make access to these features easy and durable.
  • Apart from features and structural innovation, the design process for upcoming electronic devices has also undergone a change in the materials used. Newer, flexible devices have shifted the dynamics from rigid circuit boards to flexible substrates and conductive polymers. Electronic designers are now compelled to account for mechanical flexibility in their layouts.

Simplifying the Process for Complex Designs

As the need for miniaturisation and integration grows, the complexity of the design follows suit. However, advancements in software and applications have simplified the process, allowing designers to experiment with more creative ideas without compromising on timelines and costs.
While ECAD is one such innovation that has now been adopted extensively, other EDA tools include:

  • SPICE: This is a simulation tool used to analyse and predict a circuit's behaviour under different conditions before building a physical prototype. It helps identify and fix potential issues in advance.
  • OptSim: This software tool allows designers to evaluate and optimize the performance of optical links within a sensor design, predicting how light will behave through components like lenses, fibres, and detectors.

Conclusion
Designing electronic devices is a dynamic process that requires engineers to stay up to date with industry and market trends. As automation, robotics, and artificial intelligence gain a strong hold in electronics, their integration into the design process is inevitable. The design process is a vital and non-linear stage in manufacturing that often continues even after testing, for refinement and then for documentation and certification.

The post Evolving Priorities in Design Process of Electronic Devices appeared first on ELE Times.

New Radiation-Tolerant, High-Reliability Communication Interface Solution for Space Applications

ELE Times - Wed, 11/05/2025 - 07:43
Microchip Technology announced the release of its Radiation-Tolerant (RT) ATA6571RT CAN FD Transceiver, a high-reliability communication solution designed specifically for space applications. This advanced transceiver supports flexible data rates up to 5 Mbps, making it well-suited for space systems such as satellites and spacecraft that require robust and efficient data transmission.
The ATA6571RT transceiver offers significant advantages over traditional CAN solutions, which are typically limited to a 1 Mbps communication bandwidth. With the ability to handle bit rates up to 5 Mbps and support for larger payloads of up to 64 bytes per frame, the ATA6571RT enhances efficiency and reduces bus load. Backward compatible with classic CAN, the ATA6571RT offers a smooth transition for existing systems.
Additionally, its Cyclic Redundancy Check (CRC) mechanism provides enhanced error detection, increasing reliability for safety-critical applications. The ATA6571RT is designed for space applications including platform data handling, propulsion system control, sensor bus control, robotics, on-board computers for nanosatellites and more. For easy integration at the PCB level, this RT device remains pin-distribution compatible with the original Commercial-Off-The-Shelf (COTS) plastic or ceramic versions.
“The ATA6571RT transceiver offers a cost-effective, size-optimized and power-efficient device designed to meet the stringent demands of space environments,” said Leon Gross, corporate vice president of Microchip’s aerospace and defense business.
The ATA6571RT transceiver is designed to withstand harsh space conditions with its resistance to Single-Event Effects (SEE) and Total Ionizing Dose (TID). It also features low power management with local and remote wake-up support, as well as short-circuit and overtemperature protection.

The post New Radiation-Tolerant, High-Reliability Communication Interface Solution for Space Applications appeared first on ELE Times.

A tutorial on instrumentation amplifier boundary plots—Part 1

EDN Network - Wed, 11/05/2025 - 05:11

In today’s information-driven society, there’s an ever-increasing preference to measure phenomena such as temperature, pressure, light, force, voltage and current. These measurements can be used in a plethora of products and systems, including medical diagnostic equipment, home heating, ventilation and air-conditioning systems, vehicle safety and charging systems, industrial automation, and test and measurement systems.

Many of these measurements require highly accurate signal-conditioning circuitry, which often includes an instrumentation amplifier (IA), whose purpose is to amplify differential signals while rejecting signals common to the inputs.

The most common issue when designing a circuit containing an IA is the misinterpretation of the boundary plot, also known as the common mode vs. output voltage, or VCM vs. VOUT plot. Misinterpreting the boundary plot can cause issues, including (but not limited to) signal distortion, clipping, and non-linearity.

Figure 1 depicts an example where the output of an IA such as the INA333 from Texas Instruments has distortion because the input signal violates the boundary plot (Figure 2).

Figure 1 Instrumentation amplifier output distortion is caused by VCM vs. VOUT violation. Source: Texas Instruments

Figure 2 This is how VOUT is limited by VCM. Source: Texas Instruments

This series about IAs will explain common- versus differential-mode signaling, basic operation of the traditional three-operational-amplifier (op amp) topology, and how to interpret and calculate the boundary plot.

This first installment will cover the common- versus differential-mode voltage and IA topologies, and show you how to derive the internal node equations and transfer function of a three-op-amp IA.

The IA topologies

While there are a variety of IA topologies, the traditional three-op-amp topology shown in Figure 3 is the most common and therefore will be the focus of this series. This topology has two stages: input and output. The input stage is made of two non-inverting amplifiers. The non-inverting amplifiers have high input impedance, which minimizes loading of the signal source.

Figure 3 A traditional three-op-amp IA. Source: Texas Instruments

The gain-setting resistor, RG, allows you to select any gain within the operating region of the device (typically 1 V/V to 1,000 V/V). The output stage is a traditional difference amplifier. The ratio of R2 to R1 sets the gain of the difference amplifier. The balanced signal paths from the inputs to the output yield an excellent common-mode rejection ratio (CMRR). Finally, the output voltage, VOUT, is referenced to the voltage applied to the reference pin, VREF.

Even though three-op-amp IAs are the most popular topology, other topologies, such as the two-op-amp design, offer unique benefits (Figure 4). This topology has high input impedance and single-resistor-programmable gain. But since the signal path to the output for each input (V+IN and V-IN) is slightly different, this topology degrades CMRR performance, especially over frequency. Therefore, this type of IA is typically less expensive than the traditional three-op-amp topology.

Figure 4 The schematic shows a two-op-amp IA. Source: Texas Instruments

The IA shown in Figure 5 has a two-op-amp IA input stage. The third op amp, A3, is the output stage, which applies gain to the signal. Two external resistors set the gain. Because of the imbalanced signal paths, this topology also has degraded CMRR performance (<90 dB). Therefore, devices with this topology are typically less expensive than traditional three-op-amp IAs.

Figure 5 A two-op-amp IA is shown with output gain stage. Source: Texas Instruments

While the aforementioned topologies are the most prevalent, there are several unique IAs, including current mirror, current feedback, and indirect current feedback.

Figure 6 depicts the current mirror topology. This type of IA is preferable because it enables an input common-mode range that extends to both supply voltage rails, also known as rail-to-rail input. However, this benefit comes at the expense of bandwidth. Compared to two-op-amp IAs, this topology yields better CMRR performance (100 dB or greater). Finally, this topology requires two external resistors to set the gain.

Figure 6 The current mirror topology. Source: Texas Instruments

Figure 7 shows a simplified schematic of the current feedback topology. This topology leverages super-beta transistors (Q1 and Q2) to buffer the input signal and force it across the gain-setting resistor, RG. The resulting current flows through R1 and R2, which create voltages at the outputs of A1 and A2. The difference amplifier, A3, then rejects the common-mode signal.

Figure 7 Simplified schematic displays the current feedback topology. Source: Texas Instruments

This topology is advantageous because super-beta transistors yield a low input offset voltage, offset voltage drift, input bias current, and input noise (current and voltage).

Figure 8 depicts the simplified schematic of an indirect current feedback IA. This topology has two transconductance amplifiers (gm1 and gm2) and an integrator amplifier (gm3). The differential input voltage is converted to a current (IIN) by gm1. The gm2 stage converts the feedback voltage (VFB-VREF) into a current (IFB). The integrator amplifier matches IIN and IFB by changing VOUT, thereby adjusting VFB.

Figure 8 This schematic highlights the indirect current feedback topology. Source: Texas Instruments

One significant difference compared to the previous topology is how the common-mode signal is rejected. In current feedback IAs (and similar architectures), the output-stage difference amplifier, A3, rejects the common-mode signal. Indirect current feedback IAs, however, reject the common-mode signal immediately at the input (gm1). This provides excellent CMRR performance at DC and over frequency, independent of gain.

CMRR performance also does not degrade if there is impedance on the reference pin (unlike other traditional IAs). Finally, this topology requires two resistors to set the gain, which can deliver excellent gain performance across temperature if the resistors have well-matched drift behavior.
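
As a hedged sketch of the loop just described: the integrator drives VOUT until IIN equals IFB, so with matched transconductances (gm1 = gm2, an assumption) and the two gain-setting resistors forming a feedback divider from VOUT to VREF with its tap at VFB (the labels R1 and R2 here are illustrative), the ideal result is:

g_{m1} V_D = g_{m2}\,(V_{FB} - V_{REF}), \qquad V_{FB} - V_{REF} = (V_{OUT} - V_{REF})\,\frac{R_2}{R_1 + R_2} \quad\Rightarrow\quad V_{OUT} = V_{REF} + \left(1 + \frac{R_1}{R_2}\right) V_D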

Common- and differential-mode voltage

The common-mode voltage is the average voltage at the inputs of a differential amplifier. A differential amplifier is any amplifier (including op amps, difference amplifiers and IAs) that amplifies a differential signal while rejecting the common-mode voltage.

In the simplest representation, a single source, VD, drives the non-inverting terminal relative to the inverting terminal, which connects to a constant voltage, VCM. Figure 9 depicts a more realistic definition of the input signal, in which two voltage sources, each with half the magnitude of VD, represent the differential signal. Performing Kirchhoff’s voltage law around the input loop proves that the two representations are equivalent.

Figure 9 The above schematic shows an alternate definition of common- and differential-mode voltages. Source: Texas Instruments
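
Written in terms of the input pin voltages, the definitions illustrated in Figure 9 are:

V_{CM} = \frac{V_{+IN} + V_{-IN}}{2}, \qquad V_D = V_{+IN} - V_{-IN} \quad\Rightarrow\quad V_{+IN} = V_{CM} + \frac{V_D}{2}, \quad V_{-IN} = V_{CM} - \frac{V_D}{2}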

Three-op-amp IA analysis

Understanding the boundary plot requires an understanding of three-op-amp IA fundamentals. Figure 10 depicts a traditional three-op-amp IA with an input signal applied and with the input and output nodes of A1, A2 and A3 labeled.

Figure 10 A three-op-amp IA is shown with input signal and node labels. Source: Texas Instruments

Equation 1 depicts the overall transfer function of the circuit in Figure 10 and defines the gain of the input stage, GIS, and the gain of the output stage, GOS. Notice that the common-mode voltage, VCM, does not appear in the output-voltage equation, because an ideal IA completely rejects common-mode input signals.
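
In the ideal case, with the input-stage feedback resistors labeled RF and the gain-setting resistor RG (the labeling used in the derivation below), the transfer function takes the standard form:

V_{OUT} = G_{IS}\,G_{OS}\,V_D + V_{REF}, \qquad G_{IS} = 1 + \frac{2R_F}{R_G}, \qquad G_{OS} = \frac{R_2}{R_1}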

Noninverting amplifier input stage

Figure 11 depicts a simplified circuit that enables the derivation of node voltages VIA1 and VOA1.

Figure 11 The schematic shows a simplified circuit for VIA1 and VOA1. Source: Texas Instruments

Equation 2 calculates VIA1:
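
Assuming the input convention implied by the virtual-short discussion that follows (A1’s non-inverting input driven by V-IN = VCM − VD/2), the ideal op amp forces:

V_{IA1} = V_{-IN} = V_{CM} - \frac{V_D}{2}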

The analysis for VOA1 simplifies by applying the input-virtual-short property of ideal op amps. The voltage that appears at the RG pin connected to the inverting terminal of A2 is the same as the voltage at V+IN. Superposition results are shown in Equation 3, which simplifies to Equation 4.
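
With the far end of RG held at V+IN = VCM + VD/2 by A2’s virtual short (the assumption noted above), superposition gives, in the spirit of Equations 3 and 4:

V_{OA1} = V_{IA1}\left(1 + \frac{R_F}{R_G}\right) - V_{+IN}\,\frac{R_F}{R_G} = V_{CM} - \frac{V_D}{2}\left(1 + \frac{2R_F}{R_G}\right)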

Applying a similar analysis to A2 (Figure 12) yields Equation 5, Equation 6 and Equation 7.

Figure 12 This is a simplified circuit for VIA2 and VOA2. Source: Texas Instruments
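
Under the same labeling assumptions, the mirrored analysis for A2 yields:

V_{IA2} = V_{+IN} = V_{CM} + \frac{V_D}{2}, \qquad V_{OA2} = V_{IA2}\left(1 + \frac{R_F}{R_G}\right) - V_{-IN}\,\frac{R_F}{R_G} = V_{CM} + \frac{V_D}{2}\left(1 + \frac{2R_F}{R_G}\right)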

Difference amplifier output stage

Figure 13 shows that A3, R1 and R2 make up the difference amplifier output stage, whose transfer function is defined in Equation 8.

Figure 13 The above schematic displays the difference amplifier input (VDIFF). Source: Texas Instruments
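
For an ideal, matched difference amplifier, and taking VDIFF = VOA2 − VOA1 (an assumed sign convention consistent with the superposition analysis later), the standard transfer function is:

V_{OUT} = \frac{R_2}{R_1}\left(V_{OA2} - V_{OA1}\right) + V_{REF} = G_{OS}\,V_{DIFF} + V_{REF}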

Equation 9, Equation 10 and Equation 11 use the equations for VOA1 and VOA2 to derive VDIFF in terms of the differential input signal, VD, as well as RF and the gain-setting resistor, RG.
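
Subtracting the two input-stage outputs derived above, the common-mode term cancels, leaving:

V_{DIFF} = V_{OA2} - V_{OA1} = V_D\left(1 + \frac{2R_F}{R_G}\right)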

Substituting Equation 11 for VDIFF in Equation 8 yields Equation 12, which is the same as Equation 1.

In most IAs, the gain of the output stage is 1 V/V, in which case Equation 12 simplifies to Equation 13.
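
Combining the previous two results gives the general form and its unity-output-stage simplification (assuming the resistor labeling above):

V_{OUT} = \frac{R_2}{R_1}\left(1 + \frac{2R_F}{R_G}\right) V_D + V_{REF} \qquad\xrightarrow{\;R_2 \,=\, R_1\;}\qquad V_{OUT} = \left(1 + \frac{2R_F}{R_G}\right) V_D + V_{REF}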

Figure 14 is used to determine the equations for nodes VOA3 and VIA3.

Figure 14 This diagram highlights difference amplifier internal nodes. Source: Texas Instruments

As shown in Equation 14, the voltage at node VOA3 is simply the output voltage, VOUT.

Applying superposition, as shown in Equation 15, yields the equation for VIA3. The voltage at the non-inverting node of A3 sets the amplifier’s common-mode voltage; therefore, only VOA2 and VREF affect VIA3.

Since GOS=R2/R1, Equation 15 can be rewritten as Equation 16:
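
Assuming VOA2 drives the non-inverting node of A3 through R1, with R2 returning to VREF (the arrangement implied above), superposition gives the divider relation and its GOS form:

V_{IA3} = V_{OA2}\,\frac{R_2}{R_1 + R_2} + V_{REF}\,\frac{R_1}{R_1 + R_2} = \frac{G_{OS}\,V_{OA2} + V_{REF}}{1 + G_{OS}}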

Part 2 highlights

The second part of this series will use the equations from the first part to plot each internal amplifier’s input common-mode and output-swing limitation as a function of the IA’s common-mode voltage.

Peter Semig is an applications manager in the Precision Signal Conditioning group at Texas Instruments (TI). He received his bachelor’s and master’s degrees in electrical engineering from Michigan State University in East Lansing, Michigan.


The post A tutorial on instrumentation amplifier boundary plots—Part 1 appeared first on EDN.

Please, don't hurt me!

Reddit:Electronics - Wed, 11/05/2025 - 03:39

Tonight I've sawn a TO-220 insulated MOSFET so it can fit where I want.

This is a stereo audio amplifier for my car, and that MOSFET will switch the whole module on with the electric antenna signal.

submitted by /u/ZealousidealAngle476

MACOM agrees exclusive license to manufacture products based on HRL’s 40nm T3L GaN-on-SiC process

Semiconductor today - Tue, 11/04/2025 - 22:03
MACOM Technology Solutions Inc of Lowell, MA, USA (which designs and makes RF, microwave, analog and mixed-signal and optical semiconductor technologies) has entered into an agreement to license and manufacture the proprietary 40nm T3L gallium nitride-on-silicon carbide (GaN-on-SiC) process technology of HRL Laboratories LLC of Malibu, CA, USA (a corporate R&D lab co-owned by The Boeing Company and General Motors)...

ADI upgrades its embedded development platform for AI

EDN Network - Tue, 11/04/2025 - 21:45
ADI's CodeFusion Studio 2.0 for AI development.

Analog Devices, Inc. simplifies embedded AI development with its latest CodeFusion Studio release, offering a new bring-your-own-model capability, unified configuration tools, and a Zephyr-based modular framework for runtime profiling. The upgraded open-source embedded development platform delivers advanced abstraction, AI integration, and automation tools to streamline development and deployment on ADI’s processors and microcontrollers (MCUs).

CodeFusion Studio 2.0 is now the single entry point for development across all ADI hardware, supporting 27 products today, up from five when it was first introduced in 2024.

Jason Griffin, ADI’s managing director, software and AI strategy, said the release of CodeFusion Studio 2.0 is a major leap forward in ADI’s developer-first journey, bringing an open extensible architecture across the company’s embedded ecosystem with innovation focused on simplicity, performance, and speed.

CodeFusion Studio 2.0 streamlines embedded AI development. (Source: Analog Devices Inc.)

A major goal of CodeFusion Studio 2.0 is to help teams move faster from evaluation to deployment, Griffin said. “Everything from SDK [software development kit] setup and board configuration to example code deployment is automated or simplified.”

Griffin calls it a “complete evolution of how developers build on ADI technology,” by unifying embedded development, simplifying AI deployment, and providing performance visibility in one cohesive environment. “For developers and customers, this means faster design cycles, fewer barriers, and a shorter path from idea to production.”

A unified platform and streamlined workflow

CodeFusion Studio 2.0, based on Microsoft’s Visual Studio Code, features a built-in model compatibility checker, performance profiling tools, and optimization capabilities. The unified configuration tools reduce complexity across ADI’s hardware ecosystem.

The new Zephyr-based modular framework enables runtime AI/ML workload profiling, offering layer-by-layer analysis and integration with ADI’s heterogeneous platforms. This eliminates toolchain fragmentation, which simplifies ML deployment and reduces complexity, Griffin noted.

“One of the biggest challenges that developers face with multicore SoCs [system on chips] is juggling multiple IDEs [integrated development environments], toolchains, and debuggers,” Griffin explained. “Each core, whether Arm, DSP [digital signal processor], or MPU [microprocessor], comes with its own setup, and that fragmentation slows teams down.”

“In CodeFusion Studio 2.0, that changes completely,” he added. “Everything now lives in a single unified workspace. You can configure, build, and debug every core from one environment, with shared memory maps, peripheral management, and consistent build dependencies. The result is a streamlined workflow that minimizes context switching and maximizes focus, so developers spend less time on setup and more time on system design and optimization.”

CodeFusion Studio System Planner is also updated to support multicore applications and expanded device compatibility. It now includes interactive memory allocation, improved peripheral setup, and streamlined pin assignment.

CodeFusion Studio 2.0 adds interactive memory allocation. (Source: Analog Devices Inc.)

The growing complexity in managing cores, memory, and peripherals in embedded systems is becoming overwhelming, Griffin said. The system planner gives “developers a clear graphical view of the entire SoC, letting them visualize cores, assign peripherals, and define inter-core communication all in one workspace.”

In addition, with cross-core awareness, the environment validates shared resources automatically.

Another challenge is system optimization, which is addressed with multicore profiling tools, including the Zephyr AI profiler, system event viewer, and ELF file explorer.

“Understanding how the system behaves in real time, and finding where your performance can improve is where the Zephyr AI profiler comes in,” Griffin said. “It measures and optimizes AI workflows across ADI hardware from ultra-low-power edge devices to high-performance multicore systems. It supports frameworks like TensorFlow Lite Micro and TVM, profiling latency, memory and throughput in a consistent and streamlined way.”

Griffin said the system event viewer acts like a built-in logic analyzer, letting developers monitor events, set triggers, and stream data to see exactly how the system behaves. It’s invaluable for analyzing synchronization and timing across cores, he said.

The ELF file explorer provides a graphical map of memory and flash usage, helping teams make smarter optimized decisions.

CodeFusion Studio 2.0 also gives developers the ability to download SDKs, toolchains, and plugins on demand, with optional telemetry for diagnostics and multicore support.

Doubling down on AI

CodeFusion Studio 2.0 simplifies the development of AI-enabled embedded systems with support for complete end-to-end AI workflows. This enables developers to bring their own models and deploy them in ADI’s range of processors from low-power edge devices to high-performance DSPs.

“We’ve made the workflow dramatically easier,” Griffin said. “Developers can now import, convert, and deploy AI models directly to ADI hardware. No more stitching together separate tools. With the AI deployment tools, you can assign models to specific cores, verify compatibility, and profile performance before runtime, ensuring every model runs efficiently on the silicon right from the start.”

Manage AI models with CodeFusion Studio 2.0 from import to deployment. (Source: Analog Devices Inc.)

Easier debugging

CodeFusion Studio 2.0 also adds new integrated debugging features that bring real-time visibility across multicore and heterogeneous systems, enabling faster issue resolution, shorter debug cycles, and more intuitive troubleshooting in a unified debug experience.

One of the toughest parts of embedded development is debugging multicore systems, Griffin noted. “Each core runs its own firmware on its own schedule, often with its own toolchain, making full visibility a challenge.”

CodeFusion Studio 2.0 solves this problem, he said. “Our new unified debug experience gives developers real-time visibility across all cores—CPUs, DSPs, and MPUs—in one environment. You can trace interactions, inspect shared resources, and resolve issues faster without switching between tools.”

Developers spend more than 60% of their time debugging, Griffin said, and ADI wanted to address this challenge and reduce that time sink.

CodeFusion Studio 2.0 now includes core dump analysis and advanced GDB integration, which includes custom JSON and Python scripts for both Windows and Linux with multicore support.

A big advance is debugging with multicore GDB core dump analysis and RTOS awareness working together in one intelligent, uniform experience, Griffin said.

“We’ve added core dump analysis, built around Zephyr RTOS, to automatically extract and visualize crash data; it helps pinpoint root causes quickly and confidently,” he continued. “And the new GDB toolbox provides advanced scripting, performance tracing and automation, making it the most capable debugging suite ADI has ever offered.”

The ultimate goal is to accelerate development and reduce risk for customers, which is what the unified workflows and automation provide, he added.

Future releases are expected to focus on deeper hardware-software integration, expanded runtime environments, and new capabilities, targeting growing developer requirements in physical AI.

CodeFusion Studio 2.0 is now available for download. Other resources include documentation and community support.

The post ADI upgrades its embedded development platform for AI appeared first on EDN.

32-bit MCUs deliver industrial-grade performance

EDN Network - Tue, 11/04/2025 - 21:04
GigaDevice's GD32F503/505 32-bit MCUs.

GigaDevice Semiconductor Inc. launches a new family of high-performance GD32 32-bit general-purpose microcontrollers (MCUs) for a range of industrial applications. The GD32F503/505 32-bit MCUs expand the company’s portfolio based on the Arm Cortex-M33 core. Applications include digital power supplies, industrial automation, motor control, robotic vacuum cleaners, battery management systems, and humanoid robots.

GigaDevice's GD32F503/505 32-bit MCUs. (Source: GigaDevice Semiconductor Inc.)

Built on the Arm v8-M architecture, the GD32F503/505 series offers flexible memory configurations, high integration, and built-in security functions, and features an advanced digital signal processor, hardware accelerator and a single-precision floating-point unit. The GD32F505 operates at a frequency of 280 MHz, while the GD32F503 runs at 252 MHz. Both devices achieve up to 4.10 CoreMark/MHz and 1.51 DMIPS/MHz.

The series offers up to 1024 KB of Flash and 192 KB of SRAM. Users can allocate code-flash, data-flash, and SRAM locations through scatter loading based on their specific application, tailoring memory resources to their requirements, GigaDevice said.

The GD32F503/505 series also integrates a set of peripheral resources, including three analog-to-digital converters with a sampling rate of up to 3 MSPS (supporting up to 25 channels), one fast comparator, and one digital-to-analog converter. For connectivity, it supports up to three SPIs, two I2Ss, two I2Cs, three USARTs, two UARTs, two CAN-FDs, and one USBFS interface.

The timing system features one 32-bit general-purpose timer, five 16-bit general-purpose timers, two 16-bit basic timers, and two 16-bit PWM advanced timers. This translates into precise and flexible waveform control and robust protection mechanisms for applications such as digital power supplies and motor control.

The operating voltage range of the GD32F503/505 series is 2.6 V to 3.6 V, and it operates over the industrial-grade temperature range of -40°C to 105°C. It also offers three power-saving modes for maximizing power efficiency.

These MCUs also provide high-level ESD protection with contact discharge up to 8 kV and air discharge up to 15 kV. Their HBM/CDM immunity is stable at 4,000 V/1,000 V even after three zap tests, demonstrating reliability margins that exceed conventional standards for sectors such as industrial and home appliances, GigaDevice said.

In addition, the MCUs provide multi-level protection of code and data, supporting firmware upgrades, integrity and authenticity verification, and anti-rollback checks. Device security includes a secure boot and secure firmware update platform, along with hardware security features such as user secure storage areas. Other features include a built-in hardware security engine integrating SHA-256 hash algorithms, AES-128/256 encryption algorithms, and a true random number generator. Each device has a unique independent UID for device authentication and lifecycle management.

A multi-layered hardware security mechanism is centered around multi-channel watchdogs, power and clock monitoring, and hardware CRC. In addition, the GD32F5xx series’ software test library is certified to the German IEC 61508 SC3 (SIL 2/SIL 3) for functional safety. The series provides a complete safety package, including key documents such as a safety manual, FMEDA report, and safety self-test library.

The GD32 MCUs feature a full-chain development ecosystem. This includes the free GD32 Embedded Builder IDE, GD-LINK debugging, and the GD32 all-in-one programmer. Tool providers such as Arm, Keil, IAR, and SEGGER also support this series, covering compilation, development, and trace debugging.

The GD32F503/505 series is available in several package types, including LQFP100/64/48, QFN64/48/32, and BGA64. Samples are available, along with datasheets, software libraries, ecosystem guides, and supporting tools. Development boards are available on request. Mass production is scheduled to start in December. The series will be available through authorized distributors.

The post 32-bit MCUs deliver industrial-grade performance appeared first on EDN.

I accidentally made a teardown museum

Reddit:Electronics - Tue, 11/04/2025 - 20:58

Found out that the FCC basically lets you peek inside almost any device that emits RF energy. I looked into a few cool products, then spent a bit too much time combing through filings, which ended up becoming a huge photo set. Here are a few examples!

submitted by /u/benlolly04

Board-to-board connectors reduce EMI

EDN Network - Tue, 11/04/2025 - 20:49
Molex's quad-row board-to-board connectors.

Molex LLC develops a quad-row shield connector, claiming the industry’s first space-saving, four-row signal pin layout with a metal electromagnetic interference (EMI) shield. The quad-row board-to-board connector achieves up to a 25-dB reduction in EMI compared to a non-shielded quad-row solution. These connectors are suited for space-constrained applications such as smart watches and other wearables, mobile devices, AR/VR applications, laptops, and gaming devices.

Shielding protects connectors from external electromagnetic noise, such as that from nearby components and far-field devices, which can cause signal degradation, data errors, and system faults. The quad-row connectors eliminate the need for external shielding parts and grounding by incorporating the EMI shield, which saves space and simplifies assembly. This also improves reliability and signal integrity.

Molex's quad-row board-to-board connectors. (Source: Molex LLC)

The quad-row board-to-board connectors, first released in 2020, offer a 30% size reduction over dual-row connector designs, Molex said. The space-saving staggered-circuit layout positions pins across four rows at a signal contact pitch of 0.175 mm.

Targeting EMI challenges at 2.4 GHz to 6 GHz and higher, the quad-row layout with the added EMI shield mitigates both electromagnetic and radio-frequency (RF) interference, as well as signal-integrity issues that create noise.

The quad-row shield meets stringent EMI/EMC standards to lower regulatory testing burdens and speed product approvals, Molex said.

The new design also addresses the most significant requirements related to signal interference and incremental power, including how best to achieve 80 times the signal connections and four times the power delivery of a single-pin connector, Molex said.

The quad-row connectors offer a 3-A current rating to meet customer requirements for high power in a compact design. Other specifications include a voltage rating of 50 V, a dielectric withstanding voltage of 250 V, and a rated insulation resistance of 100 megohms.

Samples of the quad-row shield connectors are available now to support custom inquiries. Commercial availability will follow in the second quarter of 2026.

The post Board-to-board connectors reduce EMI appeared first on EDN.
