Feed aggregator

A team of KPI students is among the winners of the "Safe Future" hackathon

News - Wed, 12/03/2025 - 14:28

The capital of Ukraine hosted the all-Ukrainian hackathon "Safe Future: a concept for a VR/AR simulator for training deminers", organized by the Ministry of Economy, Environment and Agriculture of Ukraine together with the national platform Demine Ukraine and the Centre for Humanitarian Demining...

Big opportunity for India to fill the 700,000-worker shortage in the chip industry: IESA chief Ashok Chandak

ELE Times - Wed, 12/03/2025 - 13:55

Speaking at the CNBC-TV18 and Moneycontrol UP Tech Next Electronics and Semiconductor Summit on December 2 in Lucknow, Ashok Chandak, president of the India Electronics and Semiconductor Association (IESA), stated that the global chip industry faces a deficit of 700,000 workers by 2030, and that India can seize this as an opportunity to fill the urgent gap.

Highlighting this opportunity, Chandak also pointed to the existing lack of a skills-training curriculum. He suggested a two-step model: updating technical curricula to meet future needs, and building manufacturing-related training programmes as India scales chip production.

He added that IESA has already begun discussions with institutes on curriculum reform.

As the industry continues to advance, the required skill set also needs to be updated and modified accordingly. The assimilation of AI and machine learning with technologies like digital twins and AR/VR has opened up the potential for India's large population of engineers and scientists to fulfil the demand as India strides ahead on its own India Semiconductor Mission (ISM), wherein the first 'Made in India' chip is expected to roll out by the end of December 2025.

The post Big opportunity for India to fill the 700,000-worker shortage in the chip industry: IESA chief Ashok Chandak appeared first on ELE Times.

👍 Join in setting a new record of Ukraine

News - Wed, 12/03/2025 - 13:49

Last year, KPI students set the national record for the third time for the largest number of antennas produced in 24 hours.

We are not stopping, and we invite you to join in updating the record of Ukraine once again!

I spent several hours learning a 7-segment display to show this to my coworker.

Reddit:Electronics - Wed, 12/03/2025 - 12:41

Used a 5V regulator, 2 buttons and 2 NPN transistors to control the shared segment.

I am still learning; this was my first attempt at a project without copying a YouTube tutorial.


Cree LED and SANlight partner on high-efficiency horticulture lighting

Semiconductor today - Wed, 12/03/2025 - 11:24
A partnership has been announced in which J Series products of Cree LED Inc of Durham, NC, USA (a Penguin Solutions brand) will be used in the new STIXX-Series luminaires of SANlight GmbH of Schruns, Austria, which specializes in LED lighting solutions for both commercial and home gardening applications...

Outages Won’t Wait: Why Grid Modernization Must Move Faster

ELE Times - Wed, 12/03/2025 - 11:18

Courtesy: Keysight Technologies

On November 18, a routine click on a link recommended by my browser's AI overview yielded a glaring "internal server error" (Figure 1). The Cloudflare outage disrupted connectivity on various platforms, including ChatGPT, Canva, and X. Undaunted, the online community had a field day once services were restored, flooding their feeds with humorous outage memes.

Figure 1. Data center downtime can cause a host of end-user disruptions.

On a more serious note, data center and internet outages are no laughing matter, impacting businesses from online shopping to cryptocurrency exchanges. While the November outage at Cloudflare was attributed to configuration errors, another outage two years earlier was due to a power failure at one of its data centers in Oregon. Cloudflare is not alone in its outage woes. In fact, power failures outweigh network and IT issues when it comes to disrupting online user experiences.

Data from the 2025 Uptime Institute Global Data Center Survey shows that although 50% of data centers experienced at least one impactful outage over the past three years, down from 53% in 2024 (see Figure 2), power issues remain the top cause.

Figure 2. Grid modernization is key to addressing power issues causing data center outages.

Just a few years ago, electric vehicles (EVs) were deemed the new energy guzzlers of the decade, only to be rapidly overtaken by data centers. From crypto mining to "morph my cat to holiday mode" image-generation prompts, each click adds strain to the power grid, not to mention the heat generated.

Figure 3. Meta AI’s response when asked how much energy it used to turn my homebody kitty into a cool cat on vacation.

Why must grid modernization happen sooner rather than later?

Data centers currently consume almost five times as much electricity as electric vehicles collectively, but both markets are expected to see a rise in demand for power in the coming years. In developed countries, power grids are already feeling the strain from these new energy guzzlers. Grid modernization must happen sooner rather than later to buffer the impact of skyrocketing electricity demand from both data centers and the EV market, to ensure the power grid’s resilience, stability, and security. Without swift upgrades, older grids are at risk of instability, outages, and bottlenecks as digital infrastructure and EV adoption accelerate.

What does grid modernization entail?

Grid modernization requires a strategic overhaul of legacy power infrastructure at the energy, communications, and operations levels, as illustrated in Figure 4. Existing energy infrastructure must be scalable and be able to incorporate and integrate renewable and distributed energy resources (DERs). Bi-directional communication protocols must continue to evolve to enable real-time data exchange between power-generating assets, energy storage systems, and end-user loads.

This transformation demands compliance with rigorous interoperability standards and cybersecurity frameworks to ensure seamless integration across heterogeneous systems, while safeguarding grid reliability and resilience against operational and environmental stresses.

Figure 4. Grid modernization impacts a complex, interconnected energy ecosystem that must be thoroughly tested and validated to ensure grid reliability and resilience.

Towards Grid Resilience

Grid modernization can significantly reduce both data center outages and power shortages for EV charging, although the impact will depend on how fast the power infrastructure gets upgraded. The modernized grid will employ advanced sensors, automated controls, and predictive analytics to detect and isolate faults quickly. This will further reduce the number of data center outages due to power issues and mitigate the dips in power currently plaguing some cities’ EV charging infrastructure. As the world powers on with increasing load demands, our grid energy community must work together to plan, validate, and build a resilient grid.

Keysight can help you with your innovations for this exciting grid transformation. Our design validation and testing solutions span inverter-based resources (IBRs) and distributed energy resources (DERs), tools enabling systems integration and deployment, and operations.


onsemi and Innoscience sign MoU to collaborate on speeding global rollout of GaN power portfolio

Semiconductor today - Wed, 12/03/2025 - 11:15
Intelligent power and sensing technology firm onsemi of Scottsdale, AZ, USA and China-based Innoscience (Suzhou) Technology Holding Co Ltd — which manufactures gallium nitride (GaN) power chips on 200mm silicon wafers — have signed a non-binding memorandum of understanding (MoU) to evaluate opportunities to accelerate deployment of GaN power devices, starting with 40–200V, and significantly broaden customer adoption...

The Unsung Hero: How Power Electronics is Fueling the EV Charging Revolution

ELE Times - Wed, 12/03/2025 - 08:12

The electrifying shift towards Electric Vehicles (EVs) often dominates headlines, with talk of battery range, colossal Gigafactories, and the race to deploy charging stations. Yet behind the spectacle of a simple "plug-in" lies an unheralded, decisive force: power electronics. Every single charging point, from a home unit to a highway beast, is fundamentally a high-voltage, high-efficiency energy conversion machine.

It is power electronics that determines the triad of performance critical to mass adoption: how efficiently, how safely, and how quickly energy moves from the grid into the vehicle’s battery. EV charging is not just about supplying electricity; it’s about converting, controlling, and conditioning power with near-perfect precision.

The Hidden Complexity of Converting Power

For the user, charging is seamless. For the engineer, the act of connecting a cable triggers a tightly choreographed sequence of power processing. An EV battery, which is DC-based, requires regulated DC power at a precise voltage and current profile. Since the public grid supplies AC (Alternating Current), a sophisticated conversion stage is mandatory. This conversion occurs in one of two places:

  1. Onboard Charger (OBC) for AC Charging: The charging station itself is simple, providing raw AC power. The vehicle's OBC handles the conversion from AC to the regulated DC needed by the battery. Charging speed is thus limited by the OBC's rating.
  2. DC Fast Charger (DCFC) for DC Charging: The station handles the entire conversion process, delivering high-power DC directly to the battery. This allows for speeds from 50kW up to 400kW or more, effectively eliminating range anxiety for long-distance travel.
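
As a rough illustration of why the conversion location matters, charging time scales inversely with the charger's power rating. A minimal sketch, assuming a hypothetical 60 kWh battery and ignoring charge taper and conversion losses:

```python
# Rough charge-time comparison (hypothetical 60 kWh pack; taper and
# losses ignored, so these are best-case figures).
BATTERY_KWH = 60.0

def hours_to_full(power_kw: float, battery_kwh: float = BATTERY_KWH) -> float:
    """Idealized time to charge from empty at constant power."""
    return battery_kwh / power_kw

obc_hours = hours_to_full(7.4)     # typical single-phase AC onboard charger
dcfc_hours = hours_to_full(150.0)  # mid-range DC fast charger

print(f"AC via OBC: {obc_hours:.1f} h")          # ~8.1 h
print(f"DC fast:    {dcfc_hours * 60:.0f} min")  # ~24 min
```

The 7.4 kW and 150 kW ratings are illustrative assumptions, but the ratio makes the point: the same pack fills roughly twenty times faster when the conversion moves off-board.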

Inside the Charger: The Power Stages

Now consider the high-power DCFC, where power electronics is the key protagonist, executing a meticulous multi-stage architecture:

  1. AC-DC Power Factor Correction (PFC) Stage: Incoming AC from the three-phase grid is first rectified into DC. Crucially, active PFC circuits shape the input current waveform to be purely sinusoidal and in phase with the voltage. This is not just for efficiency; it is essential for grid stability, ensuring low Total Harmonic Distortion (THD) and preventing adverse effects on other loads connected to the same grid segment. This stage establishes a stable DC link voltage.
  2. DC-DC High-Frequency Conversion Stage: This is the heart of the fast charger. A high-frequency, isolated converter takes the DC link voltage and steps it up or down to precisely match the varying voltage requirements of the EV battery (which changes dynamically during the charging cycle). Topologies like the Phase-Shift Full Bridge (PSFB) or the Dual Active Bridge (DAB) converter are chosen for their ability to handle high power, achieve high efficiency, and, in the case of DAB, support bidirectional power flow.
  3. Output Filtering and Control: The final DC output passes through filters to remove ripple. Real-time digital controllers—often high-speed Digital Signal Processors (DSPs)—continuously monitor the battery’s voltage, current, and temperature, adjusting the DC-DC stage’s switching duty cycle every microsecond to adhere strictly to the battery’s requested charging profile.
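
The control behavior in stage 3 can be sketched as a simple feedback loop: measure current, compare against the BMS-requested setpoint, nudge the duty cycle. The gains, setpoint, and first-order plant model below are all invented for illustration, not taken from any real charger:

```python
# Toy PI current loop: adjust duty cycle so measured battery current
# tracks the BMS-requested setpoint. Plant and gains are illustrative.
kp, ki = 0.02, 0.5
dt = 1e-6                # 1 us control period, per the microsecond cadence
integral = 0.0
duty = 0.5
current = 0.0            # measured battery current (A)
setpoint = 100.0         # BMS-requested charging current (A)

for _ in range(200_000):  # simulate 0.2 s
    error = setpoint - current
    integral += error * dt
    # PI output around a 0.5 feedforward bias, clamped to a valid duty.
    duty = min(max(kp * error + ki * integral + 0.5, 0.0), 1.0)
    # Toy first-order plant: current follows duty with a 0.5 ms lag.
    current += (duty * 250.0 - current) * dt / 5e-4

print(f"duty={duty:.3f}, current={current:.1f} A")
```

A real charger controller layers current, voltage, and temperature loops and strict fault limits on top of this, but the tracking structure is the same.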

The SiC-GaN Turning Point

The revolution in charging speed, size, and efficiency is inseparable from the emergence of Wide Bandgap (WBG) Semiconductors: Silicon Carbide (SiC) and Gallium Nitride (GaN). These materials possess a wider energy bandgap than traditional silicon, enabling them to operate at higher voltages, higher temperatures, and significantly higher switching frequencies. Let's look at how these materials change the game for EV charging.

  • SiC in High-Power Chargers: SiC MOSFETs have become the industry standard for fast chargers in the 30 kW–350 kW range. They boast a breakdown voltage up to 1700 V and substantially lower switching losses compared to silicon IGBTs or MOSFETs. This is critical because reduced losses mean less energy wasted as heat, which translates to:
    • Higher System Efficiency: Reducing the operational cost for the charging network operator.
    • Reduced Cooling Requirements: Simplifying the thermal management system, a crucial factor in India’s high ambient temperatures.
    • Smaller Component Size: Operating at higher frequencies allows for smaller, lighter passive components (like inductors and transformers), leading to denser, more compact charging cabinets.
  • GaN in High-Frequency Systems: GaN devices excel in extremely high-frequency switching, often used in auxiliary power supplies, high-density AC-DC stages, and compact onboard chargers. Their extremely low gate charge and fast switching characteristics allow for even lighter magnetics and smaller overall designs than SiC, pushing the boundaries of power density.

The combined adoption of SiC for the main power stages and GaN for high-frequency auxiliary and lower-power segments represents the current state-of-the-art in charging technology design.
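
The efficiency argument can be made concrete with a back-of-envelope loss model: per-switch loss is conduction loss plus switching energy times switching frequency. All device numbers below are invented for illustration; real losses depend on the specific parts and operating point:

```python
# Toy loss model for one power switch. Values are illustrative only.
def total_loss_w(i_rms: float, r_on: float,
                 e_sw_per_cycle: float, f_sw: float) -> float:
    conduction = i_rms**2 * r_on        # I^2 * R conduction loss (W)
    switching = e_sw_per_cycle * f_sw   # E_sw * f switching loss (W)
    return conduction + switching

# Hypothetical Si IGBT vs SiC MOSFET at the same 30 A operating point.
si  = total_loss_w(30, 0.040, 2e-3, 20e3)     # Si at 20 kHz, high E_sw
sic = total_loss_w(30, 0.025, 0.2e-3, 100e3)  # SiC at 100 kHz, low E_sw

print(f"Si: {si:.1f} W   SiC: {sic:.1f} W")
```

Even with these made-up figures the pattern matches the article's point: the SiC device runs at five times the frequency (shrinking the magnetics) while dissipating less total heat.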

Smart Power: Digital Control and Bidirectionality

A modern EV charger is far more than a simple power converter; it is a complex, intelligent electronic system. The digital controllers (DSPs and microcontrollers) not only manage the power stages but also the critical communications and safety protocols.

Embedded Control Systems

These control systems operate with microsecond-level precision, handling the generation of Pulse Width Modulation (PWM) signals for the power switches, monitoring multiple feedback loops (voltage, current, temperature), and executing complex thermal management algorithms. They are the guardians of safety, instantly detecting and shutting down fault events like overcurrent or ground faults.
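
A minimal sketch of the PWM bookkeeping such a controller performs each cycle: the commanded duty cycle becomes a timer compare count. The timer clock and switching frequency below are hypothetical:

```python
# Convert a commanded duty cycle into timer compare counts, the way a
# DSP PWM peripheral is typically programmed. Values are illustrative.
TIMER_CLK_HZ = 100_000_000  # 100 MHz timer clock (hypothetical)
F_SW_HZ = 100_000           # 100 kHz switching frequency (hypothetical)

period_counts = TIMER_CLK_HZ // F_SW_HZ  # counts per PWM period

def compare_value(duty: float) -> int:
    """Clamp duty to [0, 1] and convert to a timer compare count."""
    duty = min(max(duty, 0.0), 1.0)
    return round(duty * period_counts)

print(period_counts, compare_value(0.42))  # 1000 420
```

The clamp is the software side of the safety story: whatever the control loop computes, the switch never sees an out-of-range command.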

Grid and Vehicle Communication

The intelligence extends to multiple layers of communication, ensuring seamless integration with the vehicle and the backend network:

  • OCPP (Open Charge Point Protocol): Used to communicate with the central management system (CMS) for remote monitoring, status updates, user authentication, and billing.
  • ISO 15118: A crucial standard for secure Plug-and-Charge functionality, allowing the vehicle and charger to negotiate power delivery and payment automatically.
  • PLC/CAN: The communication protocols used for the real-time Battery Management System (BMS) handshake, which dictates the exact power level the battery can safely accept at any given moment.
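
For a flavor of what the OCPP layer exchanges: an OCPP 1.6-J request is a JSON array of the form [MessageTypeId, UniqueId, Action, Payload], where 2 marks a CALL. A minimal sketch (the vendor and model strings are placeholders):

```python
import json
import uuid

# OCPP 1.6-J CALL frame: [2, uniqueId, action, payload].
def ocpp_call(action: str, payload: dict) -> str:
    return json.dumps([2, str(uuid.uuid4()), action, payload])

msg = ocpp_call("BootNotification", {
    "chargePointVendor": "ExampleVendor",  # placeholder value
    "chargePointModel": "DCFC-150",        # placeholder value
})
print(msg)
```

A real charge point sends this over a WebSocket to the CMS on startup and waits for the CALLRESULT before reporting status; the sketch only shows the message framing.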

This digital brain is paving the way for the next critical frontier: bidirectional charging, or Vehicle-to-Grid (V2G).

The Policy Framework and India’s Drive for Localization

India’s aggressive push for electric mobility is backed by a robust, multi-layered policy structure designed to address both the demand and the infrastructure challenge, propelling the local power electronics ecosystem.

The recently notified PM E-DRIVE (Electric Drive Revolution in Innovative Vehicle Enhancement) Scheme, succeeding FAME-II, underscores the government’s commitment, with an outlay of ₹10,900 crore. Crucially, a significant portion of this fund is earmarked for EV Public Charging Stations (EVPCS).

Key Infrastructure Incentives:

  • PM E-DRIVE Incentives: This scheme offers substantial financial support for deploying charging infrastructure, with a specific focus on setting up a widespread network. The Ministry of Heavy Industries (MHI) has offered incentives for states to secure land, build upstream infrastructure (transformers, cables), and manage the rollout, bearing up to 80% of the upstream infrastructure cost in some cases.
  • Mandated Density: The EV Charging Infrastructure Policy 2025 sets clear mandates for density—aiming for a charging station in every 3 km × 3 km grid in cities and every 25 km on both sides of highways.
  • Tariff Rationalization: The Ministry of Power has moved to ensure that the tariff for the supply of electricity to public EV charging stations is a single-part tariff and remains affordable, aiding the business case for operators.
  • Building Bylaw Amendments: Model Building Bye-Laws have been amended to mandate the inclusion of charging stations in private and commercial buildings, pushing for destination charging and easing urban range anxiety.
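
The density mandate translates directly into a minimum station count. A rough sketch for a hypothetical city area and highway corridor (the areas are invented; the 3 km grid and 25 km spacing come from the policy figures above):

```python
import math

# Minimum station counts implied by the mandated density.
def city_stations(area_km2: float, grid_km: float = 3.0) -> int:
    """One station per grid_km x grid_km cell."""
    return math.ceil(area_km2 / (grid_km * grid_km))

def highway_stations(length_km: float, spacing_km: float = 25.0) -> int:
    """A station every spacing_km, on both sides of the highway."""
    return 2 * math.ceil(length_km / spacing_km)

print(city_stations(450.0))     # hypothetical 450 km^2 city -> 50
print(highway_stations(300.0))  # hypothetical 300 km corridor -> 24
```

Even for one mid-size city, the mandate implies dozens of public stations, which is what makes the upstream-infrastructure incentives consequential.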

Focus on Localization and Self-Reliance:

A critical mandate across all policies, including PM E-DRIVE and the broader Production Linked Incentive (PLI) Scheme for Advanced Chemistry Cell (ACC) Battery Storage and Automotive Components, is localization. The MHI insists that all incentivized chargers comply with the Phased Manufacturing Programme (PMP), demanding an increasing percentage of domestic value addition in components like charging guns, software, controllers, and power electronic modules.

This push is creating a substantial market for Indian engineers to develop:

  • Custom SiC-based fast-charger modules.
  • Thermal management and enclosure designs optimized for Indian operating conditions (dust, heat, humidity).
  • Indigenous control algorithms and communication protocols.

The convergence of supportive policy and cutting-edge power electronics technology is making India a central stage for the global evolution of charging infrastructure. The country’s engineers are not just deploying technology; they are actively shaping it to meet a unique and demanding environment.

The success of electric mobility is a story often told about batteries and cars, but it is fundamentally a story about energy conversion. EV charging is not merely an electrical transition; it is a power electronics revolution. The engineers building these advanced, intelligent power systems are the true architects defining the future of transport.


Emerging Technology Trends in EV Motors

ELE Times - Wed, 12/03/2025 - 08:06

Electric Vehicles (EVs), central to the global climate transition, are also becoming crucial drivers of engineering innovation. At the heart of this transformation lies the electric motor, an area now attracting intense R&D focus as automakers chase higher efficiency, lower material dependency, and superior driving performance.

Conventional Motor Choices: IM, PMSM, and BLDC

Most EVs today rely on three key motor architectures:

– Induction Motors (IM): Rugged but less efficient.

– Permanent Magnet Synchronous Motors (PMSM): Highly efficient and used in high-performance vehicles.

– Brushless DC Motors (BLDC): Lightweight and ideal for scooters and bikes, using electronic commutation instead of brushes.

While these motors have served well, next-generation EV demands—compact packaging, higher power density, optimized cooling, and smarter control—are pushing the industry toward more advanced technologies.

What’s Driving the Next Wave of Motor Innovation

Manufacturers today are actively pursuing:

– Reduction in installation space

– Higher power-to-weight ratios

– Improved thermal management

– Lower reliance on rare-earth materials

– Greater efficiency through refined control electronics

These needs are shaping emerging motor technologies that promise major shifts in EV design.

Axial Flux Motors: Compact Powerhouses of the Future

Axial flux motors—often called “pancake motors”—use disc-shaped stators and rotors. Unlike traditional radial flux machines, their magnetic field flows parallel to the shaft.

Key strengths:

– Extremely compact

– Exceptionally high power density

– Ideal for performance-focused EVs

A standout example is YASA, a Mercedes-owned company whose prototype axial flux motor delivers 550 kW (737 hp) at just 13.1 kg, achieving a record 42 kW/kg specific power.
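
The quoted specific-power figure checks out arithmetically from the numbers above:

```python
# Specific power of the YASA prototype from the figures quoted above.
power_kw = 550.0
mass_kg = 13.1
specific_power = power_kw / mass_kg
print(f"{specific_power:.1f} kW/kg")  # 42.0 kW/kg, matching the record claim
```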

Key challenge: Maintaining a uniform air gap is difficult due to strong magnetic attraction, making heat dissipation more demanding.

Switched Reluctance Motors (SRM): Simple, Strong, Magnet-Free

Switched reluctance motors operate using reluctance torque, relying solely on magnetic attraction rather than electromagnetic induction or permanent magnets. The rotor contains no windings or magnets, significantly reducing rare-earth dependence.

Advantages:

– Robust and simple construction

– Low material cost

– High torque potential

Companies like Enedym, Turnitude Technologies, and Advanced Electric Machines (AEM) are actively advancing SRM technology.

Challenges:

– High torque ripple

– Noise and vibration

– More complex control electronics due to trapezoidal DC waveforms

Increasing the number of stator/rotor teeth can reduce ripple but adds manufacturing complexity.

Synchronous Reluctance Motors (SynRM): Tackling Torque Ripple

SynRMs were developed to overcome the noise and vibration issues of SRMs. Their rotor design uses multiple layered air gaps, creating a shaped flux path that enhances torque production.

Key benefits:

– Operates on sinusoidal waveforms

– Much lower torque ripple

– No magnets required

– Improved noise characteristics

A well-known adaptation is seen in the Tesla Model 3, which uses a SynRM with internal segmented permanent magnets to reduce eddy-current losses and thermal buildup.

In-Wheel Motors: Reinventing Torque Delivery at the Wheels

In-wheel motor technology places a dedicated motor inside each wheel, eliminating conventional drivetrains.

Advantages:

– Increased interior space

– Reduced transmission losses

– Precise torque vectoring for improved handling

– Lower mechanical maintenance

GEM Motors, a Slovenian startup, has developed compact, modular in-wheel motor systems that claim up to 20% increased driving range without additional battery capacity.

Challenge:

In-wheel placement increases unsprung mass, affecting ride quality and requiring highly compact yet high-torque designs.

The Road Ahead: Designing Motors for a Resource-Constrained Future

With rising pressure on rare-earth supply chains and the push for higher efficiency, the next generation of EV motors must strike a balance between performance, sustainability, manufacturability, and cost. Technologies minimizing rare-earth usage, improving thermal robustness, and reducing weight will define the industry’s trajectory. As innovation accelerates, electric motors will not just power vehicles—they will shape the future of clean, intelligent, and resource-efficient mobility.


Terahertz Electronics for 6G & Imaging: A Technical Chronicle

ELE Times - Wed, 12/03/2025 - 07:40

As demand for spectrum increases with the extensive use of mobile data, XR/VR, sensing, and autonomous systems, the sub-THz region (100–300 GHz and beyond) emerges as a compelling frontier. In effect, we are approaching the limits of what mmWave alone can deliver at scale. The THz band promises immense contiguous spectrum, enabling links well above 100 Gbps, and the possibility of co-designing communication and high-resolution sensing (imaging/radar) on a unified platform.

Yet this promise confronts severe physical obstacles: high path loss, molecular absorption, component limitations, packaging losses, and system complexity. This article traces how the industry is navigating those obstacles, what is working now, what remains open, and where the first real systems might land.

The Early Milestones: Lab Prototypes That Matter

A landmark announcement came in October 2024 from NTT: a compact InP-HEMT front-end (FE) that achieved 160 Gbps in the 300 GHz band by integrating mixers, PAs, LNAs, and LO PAs in a single IC.

Key technical innovations in that work include:

  • A fully differential configuration to cancel local-oscillator (LO) leakage, critical at THz frequencies.
  • Reduction of module interconnections (thus insertion loss) by integrating discrete functions into a monolithic chip.
  • Shrinking module size from ~15 cm to ~2.8 cm, improving form factor while widening operational bandwidth.

More recently, in mid-2025, NTT (with Keysight and its subsidiary NTT Innovative Devices) demonstrated a power amplifier module capable of 280 Gbps (35 GBaud, 256-QAM) in the J-band (≈220–325 GHz), albeit at 0 dBm output power. This points toward simultaneous scaling of both bandwidth and linear output power, a crucial step forward.
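
The throughput figures are internally consistent: 256-QAM carries log2(256) = 8 bits per symbol, so 35 GBaud yields 280 Gbps.

```python
import math

# Throughput = symbol rate x bits per symbol, using the NTT J-band
# demo figures quoted above.
symbol_rate_gbaud = 35
bits_per_symbol = int(math.log2(256))  # 256-QAM -> 8 bits/symbol
throughput_gbps = symbol_rate_gbaud * bits_per_symbol
print(throughput_gbps)  # 280
```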

On the standardization/architectural front, partnership experiments like the Keysight + Ericsson "pre-6G" prototype show how new waveforms and stacks might evolve. In 2024, they demonstrated a base station + UE link (modified 5G stack) over new frequency bands, signaling industry interest in evolving existing layers to support extreme throughput. Ericsson itself emphasizes that 6G will mix evolved and new concepts: spectrum aggregation, ISAC, spatial awareness, and energy-efficient designs.

These milestones are not "toy results"; they validate that the critical component blocks can already support high-throughput, multi-GHz signals, albeit in controlled lab settings.

Technical Foundations: Devices, Architectures, and Packaging

To move from prototypes to systems, several technical foundations must be matured in parallel:

Device and Front-End Technologies
  • InP / III–V HEMTs and HBTs remain leading candidates for mixers, LNAs, and PAs at high frequencies, thanks to superior electron mobility and gain.
  • SiGe BiCMOS bridges the gap, often handling LO generation, control logic, and lower-frequency blocks, while III–V handles the toughest RF segments.
  • Schottky diodes, resonant tunneling diodes (RTDs), and nonlinear mixers play roles for frequency translation and LO generation.
  • Photonic sources such as UTC photodiodes or photomixing supplement generation in narrowband, coherent applications. For example, a modified uni-traveling-carrier photodiode (MUTC-PD) has been proposed for 160 Gbps over D-band in a fiber-THz hybrid link.

The challenge is achieving sufficient output power, flat gain over multi-GHz bandwidth, linearity, and noise performance, all within thermal and size constraints.

Architectures and Signal Processing
  • Multiplication chains (cascaded frequency multipliers) remain the standard path for elevating microwave frequencies into THz.
  • Harmonic or sub-harmonic mixing eases LO generation, but managing phase noise is critical.
  • Beamforming / phased arrays are essential. Directive beams offer path-loss mitigation and interference control. True-time delay or phase shifting (with very fine resolution) is a design hurdle at THz.
  • Waveforms must tolerate impairments (phase noise, CFO). Hybrid schemes combining single-carrier plus OFDM and FMCW / chirp waveforms are under study.
  • Joint sensing-communication (ISAC): Using the same waveform for data and radar-like imaging is central to future designs.
  • Channel modeling, beam training, blockage prediction, and adaptive modulation are crucial companion software domains.

Packaging, Antennas, and Interconnects

At THz, packaging and interconnect losses can kill performance faster than device limitations.

  • Antenna-in-package (AiP) and antenna-on-substrate approaches (e.g. silicon lenses, metasurfaces, dielectric lenses) help reduce the distance from active devices to the radiating aperture.
  • Substrate-integrated waveguides (SIW), micromachined waveguides, quasi-optical coupling replace lossy microstrip lines and CPWs.
  • Thermal spreaders, heat conduction, and material selection (low-loss dielectrics) are critical for sustaining device stability.
  • Calibration and measurement: On-wafer TRL/LRM up to sub-THz, over-the-air (OTA) test setups, and real-time calibration loops are required for production test.

Propagation, Channel, and Deployment Constraints

Propagation in THz is unforgiving:

  • Free-space path loss (FSPL) scales with frequency. Every additional decade in frequency adds ~20 dB loss.
  • Molecular absorption, especially from water vapor, introduces frequency-specific attenuation notches; engineers must choose spectral windows (D-band, G-band, J-band, etc.).
  • Blockage: Humans, objects, and materials often act as near-total blockers at THz.
  • Multipath is limited — channels tend toward sparse tap-delay profiles.
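
The path-loss scaling in the first bullet follows from the free-space formula FSPL = 20·log10(4πdf/c). A quick check that each tenfold increase in frequency adds exactly 20 dB (the 100 m distance is an arbitrary example):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / C)

loss_30ghz = fspl_db(100, 30e9)    # mmWave reference point
loss_300ghz = fspl_db(100, 300e9)  # one decade higher, into sub-THz
print(f"{loss_300ghz - loss_30ghz:.1f} dB extra")  # 20.0 dB extra
```

This is the baseline penalty before molecular absorption and blockage are even counted, which is why directive beams are non-negotiable at these frequencies.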

Thus, THz is suited for controlled, short-range, high-throughput links or co-located sensing + communication. Outdoor macro coverage is generally impractical unless beams are extremely narrow and paths are well managed. Backhaul and hotspot links are more feasible use cases than full wide-area coverage.

Imaging and Sensing Use Cases

Unlike pure communication, imaging demands high dynamic range, spatial resolution, and sometimes passive operation. THz enables:

  • Active coherent imaging (FMCW, pulsed radar) for 3D reconstruction, industrial NDT, and package inspection.
  • Passive imaging / thermography for detecting emissivity contrasts.
  • Computational imaging via coded apertures, compressed sensing, and metasurface masks to reduce sensor complexity.

In system designs, the same front-end and beam infrastructure may handle both data and imaging tasks, subject to power and SNR trade-offs.
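
One reason THz imaging is attractive: the range resolution of an FMCW radar is ΔR = c / (2B), so the multi-GHz sweep bandwidths available at THz translate into centimeter- to millimeter-scale resolution. A quick check with assumed sweep bandwidths (the 4 GHz and 40 GHz figures are illustrative, not from any cited system):

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range_resolution_m(bandwidth_hz: float) -> float:
    """FMCW range resolution: delta_R = c / (2 * B)."""
    return C / (2 * bandwidth_hz)

# Hypothetical sweep bandwidths: modest vs aggressive for a THz front-end.
print(f"{fmcw_range_resolution_m(4e9) * 100:.1f} cm")    # 4 GHz  -> 3.7 cm
print(f"{fmcw_range_resolution_m(40e9) * 1000:.1f} mm")  # 40 GHz -> 3.7 mm
```

Bandwidth, not carrier frequency, sets the resolution, but only the THz bands offer tens of GHz of contiguous sweep room.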

Roadmap & Open Problems

While lab successes validate feasibility, many gaps remain before field-ready systems:

  1. Watt-class, efficient THz sources at room temperature (particularly beyond 200 GHz).
  2. Low-loss, scalable passives and interconnects (waveguide, delay lines) at THz frequencies.
  3. Robust channel models across environments (indoor, outdoor, humidity, mobility) with validation data.
  4. Low-cost calibration / test methodologies for mass production.
  5. Integrated ISAC signal processing and software stacks that abstract complexity from system integrators.
  6. Security and coexistence in pencil-beam, high-frequency environments.

Conclusion: What's Realistic, What's Ambitious

The next decade will see THz systems not replacing, but supplementing existing networks. They will begin in enterprise, industrial, and hotspot contexts (e.g. 100+ Gbps indoor links, wireless backhaul, imaging tools in factories). Over time, integrated sensing + communication systems (robotics, AR, digital twins) will leverage THz’s ability to see and talk in the same hardware.

The core enablers: heterogeneous integration (III-V + CMOS/BiCMOS), advanced packaging and optics, robust beamforming, and tightly coupled signal processing. Lab records such as NTT's 160 Gbps 300 GHz front-end and the 280 Gbps J-band PA module show that neither bandwidth nor throughput is purely theoretical; the next steps are scaling power, cost, and reliability.


When Tiny Devices Get Big Brains: The Era of Edge and Neuromorphic AI

ELE Times - Wed, 12/03/2025 - 07:30

From data-center dreams to intelligence at the metal

Five years ago, "AI" largely meant giant models running in faraway data centers. Today the story is different: intelligence is migrating to the device itself, into phones, drones, health wearables, and factory sensors. This shift is not merely cosmetic; it forces hardware designers to ask: how do you give a tiny, thermally constrained device meaningful perception and decision-making power? As Qualcomm's leadership puts it, the industry is "in a catbird seat for the edge AI shift," and the battle is now about bringing capable, power-efficient AI onto the device.

Why edge matters: practical constraints, human consequences

There are three blunt facts that drive this migration: latency (milliseconds matter for robots and vehicles), bandwidth (you can’t stream everything from billions of sensors), and privacy (health or industrial data often can’t be shipped to the cloud). The combination changes priorities: instead of raw throughput for training, the trophy is energy per inference and predictable real-time behavior.

How the hardware world is responding

Hardware paths diverge into pragmatic, proven accelerators and more speculative, brain-inspired designs.

  1. Pragmatic accelerators: TPUs, NPUs, heterogeneous SoCs.
    Google's Edge TPU family and Coral modules demonstrate the pragmatic approach: small, task-tuned silicon that runs quantized CNNs and vision models with tiny power budgets. At the cloud level, Google's new TPU generations (and an emerging Ironwood lineup) show the company's ongoing bet on custom AI silicon spanning cloud to edge.
  2. Mobile/SoC players double down: Qualcomm and others are reworking mobile chips for on-device AI, shifting CPU microarchitectures and embedding NPUs to deliver generative and perception workloads in phones and embedded devices. Qualcomm's public positioning and product roadmaps are explicit: the company expects edge AI to reshape how devices are designed and monetized.
  3. In-memory and analog compute: designed to beat the von Neumann cost of moving data. Emerging modules and research prototypes put compute inside memory arrays (ReRAM/PCM) to slash energy per operation, an attractive direction for always-on sensing.

The wild card: neuromorphic computing

If conventional accelerators are an evolutionary path, neuromorphic chips are a more radical reimagination. Instead of dense matrix math and clocked pipelines, neuromorphic hardware uses event-driven spikes, co-located memory and compute, and parallel sparse operations — the same tricks biology uses to run a brain on ~20 W.

Intel, one of the earliest movers, says the approach scales: Loihi research chips and larger systems (e.g., the Hala Point neuromorphic system) show how neuromorphic designs can reach hundreds of millions or billions of neurons while keeping power orders of magnitude lower than conventional accelerators for certain tasks. Those investments signal serious industrial interest, not just academic curiosity.
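The event-driven style these chips exploit can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron, the basic unit of spiking models. This is a toy sketch with arbitrary parameters, not code for Loihi or any other specific chip:

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a sketch of the event-driven,
# spike-based processing style neuromorphic hardware exploits. The parameters
# (tau, threshold) are illustrative values only.

def lif_run(input_current, tau=10.0, threshold=1.0, dt=1.0):
    """Simulate one LIF neuron; return the time steps at which it spikes."""
    v = 0.0          # membrane potential
    spikes = []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the potential decays toward 0 and charges with input
        v += dt * (-v / tau + i_in)
        if v >= threshold:   # fire-and-reset on crossing the threshold
            spikes.append(t)
            v = 0.0
    return spikes

# A constant drive produces periodic spikes; zero input produces none,
# which is why idle sensors cost almost nothing in this paradigm.
print(lif_run([0.2] * 50))
print(lif_run([0.0] * 50))
```

The key energy property is visible even in this toy: with no input there is no activity at all, unlike a clocked accelerator that burns power every cycle.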

Voices from the field: what leaders are actually saying

  • "We're positioning for on-device intelligence not just as a marketing line, but as an architecture shift" (a paraphrase of Qualcomm leadership describing the company's edge AI strategy and roadmap).
  • "Neuromorphic systems let us explore ultra-low power, event-driven processing that's ideal for sensors and adaptive control" (Intel's Loihi programme commentary on the promise of on-chip learning and energy efficiency).
  • A recent industry angle: big platform moves (e.g., companies making development boards and tighter dev ecosystems available) reflect a desire to lower barriers. The Qualcomm–Arduino alignment and new low-cost boards aim to democratize edge AI prototyping for millions of developers.

Where hybrid architecture wins: pragmatic use cases

Rather than “neuromorphic replaces everything,” the likely near-term scenario is hybrid systems:

  • Dense pretrained CNNs (object detection, segmentation) run on NPUs/TPUs.
  • Spiking neuromorphic co-processors handle always-on tasks: anomaly detection, low-latency sensor fusion, prosthetic feedback loops.
  • Emerging in-memory modules reduce the energy cost of massive matrix multiplies where appropriate.

Practical example: an autonomous drone might use a CNN accelerator for scene understanding while a neuromorphic path handles collision avoidance from event cameras with microsecond reaction time.
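That split can be sketched in a few lines. The two backends below are hypothetical stubs standing in for an NPU runtime and a neuromorphic path; they are not real vendor APIs:

```python
# Hypothetical sketch of the hybrid split described above: dense camera frames
# are routed to a CNN-accelerator path, while sparse event-camera data goes to
# a low-latency, event-driven path. Both backends are illustrative stubs.

def npu_infer(frame):
    """Stub for a dense CNN running on an NPU/TPU (scene understanding)."""
    return {"backend": "npu", "rows_processed": len(frame)}

def neuromorphic_react(events):
    """Stub for an event-driven path (collision avoidance, microsecond scale)."""
    return {"backend": "neuromorphic", "urgent": any(e["polarity"] for e in events)}

def dispatch(sample):
    # Route by payload type: dense tensors vs. sparse event streams.
    if sample["kind"] == "frame":
        return npu_infer(sample["data"])
    return neuromorphic_react(sample["data"])

print(dispatch({"kind": "frame", "data": [[0] * 8] * 8}))
print(dispatch({"kind": "events", "data": [{"polarity": True}]}))
```

The design point is the router itself: the data pipeline, not the model, decides which silicon sees which sample.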

Barriers: the messy middle between lab and product

  • Algorithmic mismatch: mainstream ML is dominated by backpropagation and dense tensors; mapping these workloads efficiently to spikes or in-memory analog is still an active research problem.
  • Tooling and developer experience: frameworks like PyTorch/TensorFlow are not native to SNNs; toolchains such as Intel’s Lava and domain projects exist but must mature for broad adoption.
  • Manufacturing & integration: moving prototypes into volume production and integrating neuromorphic blocks into SoCs poses yield and ecosystem challenges.

Market dynamics & the investment climate

There’s heavy capital flowing into edge AI and neuromorphic startups, and forecasts project notable growth in neuromorphic market value over the coming decade. That influx is tempered by a broader market caution — public leaders have noted hype cycles in AI investing but history shows that even bubble phases can accelerate technological foundations that persist.

Practical advice for engineering and product teams

  1. Experiment now: prototype with Edge TPUs/NPUs and cheap dev boards (Arduino + Snapdragon/Dragonwing examples are democratizing access) to validate latency and privacy requirements.
  2. Start hybrid design thinking: split workloads into dense inference (accelerator) vs. event-driven (neuromorphic) buckets and architect the data pipeline accordingly.
  3. Invest in tooling and skill transfer: train teams on spiking networks, event cameras, and in-memory accelerators, and contribute to open frameworks to lower porting costs.
  4. Follow system co-design: unify hardware, firmware, and model teams early; the edge is unforgiving of mismatches between model assumptions and hardware constraints.

Conclusion: what will actually happen

Expect incremental but practical wins first: more powerful, efficient NPUs and smarter SoCs bringing generative and perception models to phones and industrial gateways. Parallel to that, neuromorphic systems will move from research novelties into niche, high-value roles (always-on sensing, adaptive prosthetics, extreme low-power autonomy).

The real competitive winners will be organizations that build the whole stack: silicon, software toolchains, developer ecosystems, and use-case partnerships. In short: intelligence will increasingly live at the edge, and the fastest adopters will design for hybrid, energy-aware systems where neuromorphic and conventional accelerators complement, rather than replace, each other.

The post When Tiny Devices Get Big Brains: The Era of Edge and Neuromorphic AI appeared first on ELE Times.

Inside the Hardware Lab: How Modern Electronic Devices Are Engineered

ELE Times - Wed, 12/03/2025 - 07:00

The engineering of contemporary electronic devices reflects a convergence of system thinking, material maturity, multidisciplinary collaboration, and accelerated development cycles. In laboratories across the world, each new product emerges from a structured, iterative workflow that integrates architecture, hardware, firmware, testing, and manufacturing considerations into a cohesive design process. As electronic systems become more compact, intelligent, and operationally demanding, the pathway from concept to certified production device requires a high level of methodological discipline.

This article outlines how modern electronics are engineered, focusing on workflows, design considerations, and the interdependencies that define professional hardware development today.

Requirements Engineering: Establishing the Foundation

The design of any electronic device begins with a comprehensive articulation of requirements. These requirements typically combine functional objectives, performance targets, environmental constraints, safety expectations, and compliance obligations.

Functional objectives determine what the system must achieve, whether sensing, processing, communication, actuation, or power conversion. Performance parameters such as accuracy, latency, bandwidth, power consumption, and operating lifetime define the measurable boundaries of the design. Environmental expectations—temperature range, ingress protection, shock and vibration tolerance, electromagnetic exposure, and mechanical stresses—shape the system’s robustness profile.

Regulatory frameworks, including standards such as IEC, UL, BIS, FCC, CE, and sector-specific certifications (automotive, medical, aerospace), contribute additional constraints. The initial requirement set forms the reference against which all subsequent design decisions are evaluated, creating traceability between intent and implementation.

System Architecture: Translating Requirements into Structure

System architecture bridges conceptual requirements and concrete engineering design. The process involves defining functional blocks and selecting computational, sensing, power, and communication strategies capable of fulfilling the previously established criteria.

The architecture phase typically identifies the processing platform—ranging from microcontrollers to SoCs, MPUs, or FPGAs—based on computational load, determinism, power availability, and peripheral integration. Communication subsystems are established at this stage, covering interfaces such as I²C, SPI, UART, USB, CAN, Ethernet, or wireless protocols.

The power architecture also takes shape here, mapping energy sources, conversion stages, regulation mechanisms, and protection pathways. Considerations such as thermal distribution, signal isolation, noise-sensitive regions, and preliminary enclosure constraints influence the structural arrangement. The architectural framework becomes the guiding reference for schematic and PCB development.

Component Selection: Balancing Performance, Reliability, and Lifecycle

Modern device design is deeply influenced by semiconductor availability, lifecycle predictability, and performance consistency. Component selection involves more than identifying electrically suitable parts; it requires an understanding of long-term supply chain stability, tolerance behaviour, temperature performance, reliability data, and compatibility with manufacturing processes.

Processors, sensors, regulators, discretes, passives, communication modules, and protection components are evaluated not only for electrical characteristics but also for de-rating behaviour, thermal performance, and package-level constraints. Temperature coefficients, impedance profiles, safe-operating-area characteristics, clock stability, and signal integrity parameters become central evaluation factors.

The resulting bill of materials represents an intersection of engineering decisions and procurement realities, ensuring the device can be produced reliably throughout its intended lifespan.
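The derating discipline described above lends itself to simple automated checks over the bill of materials. The sketch below uses hypothetical BOM fields and illustrative derating factors, not any particular company's rules:

```python
# Illustrative derating check for component selection: verify that each BOM
# entry's rated stress, after a derating factor, still covers the worst-case
# operating condition. Field names and factors are invented for illustration.

DERATING = {"capacitor_v": 0.5, "resistor_p": 0.6}  # use <=50% rated V, <=60% rated W

def derating_violations(bom):
    issues = []
    for part in bom:
        allowed = part["rated"] * DERATING[part["class"]]
        if part["worst_case"] > allowed:
            issues.append((part["ref"], part["worst_case"], allowed))
    return issues

bom = [
    {"ref": "C12", "class": "capacitor_v", "rated": 16.0,  "worst_case": 12.0},
    {"ref": "R7",  "class": "resistor_p",  "rated": 0.125, "worst_case": 0.05},
]
print(derating_violations(bom))  # C12 flagged: 12 V exceeds 50% of its 16 V rating
```

Running such a check at every BOM revision keeps procurement substitutions from silently eroding design margins.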

Schematic Design: The Logical Core of the Device

Schematic design formalizes the architectural plan into detailed electrical connectivity. This stage defines logical relationships, reference paths, power distribution, signal conditioning, timing sequences, and safety structures.

Circuit blocks—analog conditioning, digital logic, power conversion, RF front-ends, sensor interfaces, and display or communication elements—are designed with full consideration of parasitic behaviour, noise propagation, and functional dependencies. Power distribution requires careful sequencing, decoupling strategies, transient response consideration, and ripple management. Signal interfaces require appropriate level shifting, impedance alignment, and termination strategies.

Test points, programming headers, measurement references, and diagnostic interfaces are defined at this stage to ensure observability during validation. The schematic ultimately serves as the authoritative source for layout and firmware integration.

PCB Layout: Integrating Electrical, Mechanical, and Thermal Realities

PCB layout transforms the schematic into a physical system where electrical performance, manufacturability, and thermal behaviour converge. The arrangement of components, routing topology, layer stack-up, ground referencing, and shielding determines the system’s electromagnetic and thermal characteristics.

High-speed interfaces require controlled impedance routing, differential pair tuning, length matching, and clear return paths. Power networks demand minimized loop areas, appropriate copper thickness, and distribution paths that maintain voltage stability under load. Sensitive analog signals are routed away from high-noise digital or switching-power regions. Thermal dissipation—achieved through copper pours, thermal vias, and heat-spreading strategies—ensures the system can sustain continuous operation.

Mechanical constraints, such as enclosure geometry, connector placement, mounting-hole patterns, and assembly tolerances, influence layout decisions. The PCB thus becomes a synthesized embodiment of electrical intent and mechanical feasibility.

Prototyping and Hardware Bring-Up: Validating the Physical Implementation

Once fabricated, the prototype enters hardware bring-up, a methodical verification process in which the design is examined against its expected behavior. Validation typically begins with continuity and power integrity checks, ensuring that supply rails meet voltage, ripple, and transient requirements.

System initialization follows, involving processor boot-up, peripheral activation, clock stability verification, and interface-level communication checks. Subsystems are evaluated individually—power domains, sensor blocks, RF modules, analog interfaces, digital buses, and storage components.

Observations from oscilloscopes, logic analyzers, current probes, and thermal imagers contribute to a detailed understanding of the device’s operational profile. Any deviations from expected behavior guide iterative optimization in subsequent revisions.
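Rail verification during bring-up is commonly scripted so it can be repeated on every board revision. A minimal sketch, with invented rail names and tolerance limits:

```python
# Sketch of an automated rail check during bring-up: compare measured supply
# voltages against nominal values plus/minus a fractional tolerance.
# Rail names and limits here are illustrative, not from any real design.

RAILS = {"3V3": (3.3, 0.05), "1V8": (1.8, 0.03)}  # name: (nominal V, tolerance)

def check_rails(measurements):
    report = {}
    for name, (nominal, tol) in RAILS.items():
        v = measurements[name]
        ok = abs(v - nominal) <= nominal * tol
        report[name] = ("PASS" if ok else "FAIL", v)
    return report

# A sagging 1.8 V rail is flagged while the 3.3 V rail passes.
print(check_rails({"3V3": 3.28, "1V8": 1.71}))
```

In practice the measurement dictionary would be filled by instrument drivers (multimeter or oscilloscope automation), with the same pass/fail logic on top.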

Firmware Integration: Achieving Functional Cohesion

Firmware integration establishes coordination between hardware capabilities and system functionality. Board-support packages, peripheral drivers, middleware stacks, and application logic are aligned with the hardware’s timing, power, and performance characteristics.

Real-time constraints influence the choice of scheduling structures—whether bare-metal loops, cooperative architectures, or real-time operating systems. Communication stacks, sensor acquisition pipelines, memory management, and power-state transitions are implemented and tested on the physical hardware.

Interaction between firmware and hardware exposes edge cases in timing, voltage stability, electromagnetic sensitivity, or analog behavior, which often inform refinements in both domains.

Validation and Testing: Confirming Performance, Robustness, and Compliance

Comprehensive testing examines a device’s functionality under nominal and boundary conditions. Functional validation assesses sensing accuracy, communication stability, user-interface behavior, control logic execution, and subsystem interoperability. Reliability evaluation includes thermal cycling, vibration exposure, mechanical stress tests, humidity conditioning, and operational aging.

Electromagnetic compatibility testing examines emissions and immunity, including radiated and conducted profiles, ESD susceptibility, fast transients, and surge resilience. Pre-compliance evaluation during early prototypes reduces the probability of redesign during final certification stages.

Data collected during validation ensures that the system behaves predictably throughout its expected operating envelope.

Manufacturing Readiness: Transitioning from Prototype to Production

Production readiness involves synchronizing design intent with assembly processes, quality frameworks, and cost structures. Design-for-manufacturing and design-for-assembly considerations ensure that the device can be fabricated consistently across multiple production cycles.

Manufacturing documentation—including fabrication drawings, Gerber files, pick-and-place data, test specifications, and assembly notes—forms the reference package for contract manufacturers. Automated test equipment, in-circuit test fixtures, and functional test jigs are developed to verify each assembled unit.

Bill-of-materials optimization, yield analysis, and component sourcing strategies ensure long-term production stability.

Compliance and Certification: Meeting Regulatory Obligations

Final certification ensures that the device adheres to the safety, electromagnetic, and environmental requirements of the markets in which it will be deployed. Testing laboratories evaluate the system against regulatory standards, verifying electrical safety, electromagnetic behaviour, environmental resilience, and user-level protections.

The certification phase formalizes the device’s readiness for commercial deployment, requiring complete technical documentation, traceability data, and repeatable test results.

Lifecycle Management: Sustaining the Design Beyond Release

After the product reaches the market, lifecycle management ensures its sustained usability and manufacturability. Engineering change processes address component obsolescence, firmware enhancements, mechanical refinements, or field-observed anomalies.

Long-term reliability data, manufacturing feedback, and supplier updates contribute to ongoing revisions. In connected systems, firmware updates may be deployed over the air, extending functionality and addressing vulnerabilities.

Lifecycle management closes the loop between deployment and continuous improvement.

Conclusion

The design of a modern electronic device is a coordinated engineering endeavour that integrates requirements analysis, architectural planning, hardware design, firmware development, validation, manufacturing readiness, and lifecycle stewardship. Each stage influences the next, forming a continuous chain of interdependent decisions.

As technological expectations expand, the engineering methodologies supporting electronic design continue to mature. The result is a disciplined, multi-phase workflow that enables the creation of devices that are reliable, certifiable, scalable, and aligned with the complex operational demands of contemporary applications.

The post Inside the Hardware Lab: How Modern Electronic Devices Are Engineered appeared first on ELE Times.

Transitioning from Industry 4.0 to 5.0: It’s not simple

EDN Network - Tue, 12/02/2025 - 18:35

The shift from Industry 4.0 to 5.0 is not an easy task. Industry 5.0 implementation will be complex, with connected devices and systems sharing data in real time at the edge. It encompasses a host of technologies and systems, including a high-speed network infrastructure, edge computing, control systems, IoT devices, smart sensors, AI-enabled robotics, and digital twins, all designed to work together seamlessly to improve productivity, lower energy consumption, improve worker safety, and meet sustainability goals.

Industry 4.0 to Industry 5.0. (Source: Adobe Stock)

In the November/December issue, we take a look at evolving Industry 4.0 trends and the shift to the next industrial evolution: 5.0, building on existing AI, automation, and IoT technologies with a collaboration between humans and cobots.

Technology innovations are central to future industrial automation, and the next generation of industrial IoT technology will leverage AI to deliver productivity improvements through greater device intelligence and automated decision-making, according to Jack Howley, senior technology analyst at IDTechEx. He believes the global industry will be defined by the integration of AI with robotics and IoT technologies, transforming manufacturing and logistics across industries.

As factories become smarter, more connected, and increasingly autonomous, MES, digital twins, and AI-enabled robotics are redefining smart manufacturing, according to Leonor Marques, architecture and advocacy director of Critical Manufacturing. These innovations can be better-interconnected, contributing to smarter factories and delivering meaningful, contextualized, and structured information, she said.

One of those key enabling technologies for Industry 4.0 is sensors. TDK SensEI defines Industry 4.0 by convergence, the merging of physical assets with digital intelligence. AI-enabled predictive maintenance systems will be critical for achieving the speed, autonomy, and adaptability that smart factories require, the company said.

Edge AI addresses the volume of industrial data by embedding trained ML models directly into sensors and devices, said Vincent Broyles, senior director of global sales engineering at TDK SensEI. Instead of sending massive data streams to the cloud for processing, these AI models analyze sensor data locally, where it’s generated, reducing latency and bandwidth use, he said.
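A minimal sketch of that local-first pattern: a simple exponentially weighted moving average (EWMA) detector with illustrative thresholds, so only anomalies, not the raw stream, would need to leave the device. This is a generic statistical sketch, not any vendor's model:

```python
# Edge-local anomaly detection on a sensor stream: track an exponentially
# weighted moving average (EWMA) and variance, and flag samples that deviate
# by more than k standard deviations. Parameters are illustrative only.

def detect_anomalies(stream, alpha=0.2, k=4.0):
    mean, var = stream[0], 0.0
    anomalies = []
    for t, x in enumerate(stream[1:], start=1):
        d = x - mean
        if var > 0 and abs(d) > k * var ** 0.5:
            anomalies.append(t)        # flag, and skip updating stats with the outlier
            continue
        mean += alpha * d              # incremental EWMA updates
        var = (1 - alpha) * (var + alpha * d * d)
    return anomalies

# A vibration-like reading spikes at index 6; only that index is reported.
stream = [10.0, 10.02, 9.98, 10.01, 9.99, 10.0, 25.0, 10.0]
print(detect_anomalies(stream))
```

The bandwidth argument is visible directly: eight samples in, one index out.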

Robert Otręba, CEO of Grinn Global, agrees that industrial AI belongs at the edge. It delivers three key advantages: low latency and real-time decision-making, enhanced security and privacy, and reduced power and connectivity costs, he said.

Otręba thinks edge AI will power the next wave of industrial intelligence. “Instead of sending vast streams of data off-site, intelligence is brought closer to where data is created, within or around the machine, gateway, or local controller itself.”

AI is no longer an optional enhancement, and this shift is driven by the need for real-time, contextually aware intelligence with systems that can analyze sensor data instantly, he said.

Lisa Trollo, MEMS marketing manager at STMicroelectronics, calls sensors the silent leaders driving the industrial market’s transformation, serving as the “eyes and ears” of smart factories by continuously sensing pressure, temperature, position, vibration, and more. “In this industrial landscape, sensors are the catalysts that transform raw data into insights for smarter, faster, and more resilient industries,” she said.

Energy efficiency also plays a big role in industrial systems. Power management ICs (PMICs) are leading the way by enabling higher efficiency. In industrial and industrial IoT applications, PMICs address key power challenges, according to contributing writer Stefano Lovati. He said the use of AI techniques is being investigated to further improve PMIC performance, with the aim of reducing power losses, increasing energy efficiency, and reducing heat dissipation.

Don’t miss the top 10 AC/DC power supplies introduced over the past year. These power supplies focus on improving efficiency and power density for industrial and medical applications. Motor drivers are also a critical component in industrial design applications as well as automotive systems. The latest motor drivers and development tools add advanced features to improve performance and reduce design complexity.

The post Transitioning from Industry 4.0 to 5.0: It’s not simple appeared first on EDN.

Expanding power delivery in systems with USB PD 3.1

EDN Network - Tue, 12/02/2025 - 18:00

The Universal Serial Bus (USB) started out as a data interface, but it didn't take long before it progressed to powering devices. Initially, its maximum output was only 2.5 W; now it can deliver up to 240 W over USB Type-C cables and connectors, carrying power, data, and video. This capability, known as Extended Power Range (EPR), was introduced by the USB Implementers Forum in USB Power Delivery Specification 3.1 (USB PD 3.1). EPR uses higher voltage levels (28 V, 36 V, and 48 V), which at 5 A deliver 140 W, 180 W, and 240 W, respectively.

USB PD 3.1 adds an adjustable voltage supply (AVS) mode, allowing intermediate voltages between 9 V and the highest fixed voltage of the charger. This provides greater flexibility by matching the power needs of individual devices. USB PD 3.1 is backward-compatible with previous USB versions, including legacy charging at 15 W (5 V/3 A) and the standard power range mode of up to 100 W (20 V/5 A).
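The EPR arithmetic and the AVS voltage window can be sanity-checked in a few lines. This sketch covers only the numbers quoted above, not the PD protocol itself:

```python
# Back-of-the-envelope check of the EPR figures: fixed-voltage tiers at 5 A,
# plus a helper that validates an Adjustable Voltage Supply (AVS) request
# between 9 V and the charger's highest fixed voltage. Arithmetic only;
# real PD negotiation happens over the CC wire with a message protocol.

EPR_FIXED_V = [28, 36, 48]   # EPR fixed voltage levels (V)
MAX_CURRENT = 5.0            # A, with a 5 A-rated EPR cable

def epr_power_tiers():
    return {v: v * MAX_CURRENT for v in EPR_FIXED_V}

def avs_request_ok(v_request, charger_max_v):
    # AVS allows intermediate voltages from 9 V up to the highest fixed voltage
    return 9.0 <= v_request <= charger_max_v

print(epr_power_tiers())            # {28: 140.0, 36: 180.0, 48: 240.0}
print(avs_request_ok(20.0, 48.0))   # True: 20 V sits inside the AVS window
print(avs_request_ok(5.0, 48.0))    # False: below the 9 V AVS floor
```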

The ability to negotiate power for each device is an important strength of this specification. For example, a device consumes only the power it needs, which varies depending on the application. This applies to peripherals, where a power management process allows each device to take only the power it requires.

The USB PD 3.1 specification has found a place in a wide range of applications, including laptops, gaming stations, monitors, industrial machinery and tools, small robots and drones, e-bikes, and more.

Microchip USB PD demo board

Microchip provides a USB PD dual-charging-port (DCP) demonstration application, supporting the USB PD 3.1 specification. The MCP19061 USB PD DCP reference board (Figure 1) is pre-built to show the use of this technology in real-life applications. The board is fully assembled, programmed, and tested to evaluate and demonstrate digitally controlled smart charging applications for different USB PD loads, and it allows each connected device to request the best power level for its own operation.

Figure 1: MCP19061 USB DCP board (Source: Microchip Technology Inc.)

The board shows an example charging circuit with robust protections. It highlights charge allocation between the two ports as well as dynamically reconfigurable charge profile availability (voltage and current) for a given load. This power-balancing feature between ports provides better control over the charging process, in addition to delivering the right amount of power to each device.

The board provides output voltages from 3 V to 21 V and output currents from 0.5 A to 3 A. Its input voltage range is 6 V to 18 V, with 12 V being the recommended value.

The board comes with firmware designed to operate with a graphical user interface (GUI) and contains headers for in-circuit serial programming and I2C communication. An included USB-to-serial bridging board (such as the BB62Z76A, an MCP2221A breakout board) works with the GUI to let different configurations be quickly tested with real-world load devices charging on the two ports. The DCP board GUI requires a PC running Microsoft Windows 7 through 11 with a USB 2.0 port. The GUI displays parameters, board status, and faults, and enables user configuration.

DCP board components

With two ports, the board contains two independent USB PD channels (Figure 2), each with its own dedicated analog front end (AFE). The AFE in the Microchip MCP19061 device is a mixed-signal, digitally controlled four-switch buck-boost power controller with integrated synchronous drivers and an I2C interface (Figure 3).

Figure 2: Two independently managed USB PD channels on the MCP19061-powered DCP board (Source: Microchip Technology Inc.)
Figure 3: Block diagram of the MCP19061 four-switch buck-boost device (Source: Microchip Technology Inc.)

Moreover, one of the channels features the Microchip MCP22350 device, a highly integrated, small-format USB Type-C PD 2.0 controller, whereas the other channel contains a Microchip MCP22301 device, which is a standalone USB Type-C PD port controller, supporting the USB PD 3.0 specification.

The MCP22350 acts as a companion PD controller to an external microcontroller, system-on-chip or USB hub. The MCP22301 is an integrated PD device with the functionality of the SAMD20 microcontroller, a low-power, 32-bit Arm Cortex-M0+ with an added MCP22350 PD media access control and physical layer.

Each channel also has its own UCS4002 USB Type-C port protector, guarding from faults but also protecting the integrity of the charging process and the data transfer (Figure 4).

A USB Type-C connector carries the D+/D– data lines (USB 2.0), Rx/Tx pairs for USB 3.x or USB4, configuration channel (CC) lines for charge mode control, sideband-use (SBU) lines for optional functions, and ground (GND). The UCS4002 protects the CC and D+/D– lines against short-to-battery faults. It also offers battery short-to-GND (SG_SENS) protection for charging ports.

Integrated switching VCONN FETs (VCONN is a dedicated power supply pin in the USB Type-C connector) provide overvoltage, undervoltage, back-voltage, and overcurrent protection through the VCONN voltage. The board’s input rail includes a PMOS switch for reverse polarity protection and a CLC EMI filter. There are also features such as a VDD fuse and thermal shutdown, enabled by a dedicated temperature sensor, the MCP9700, which monitors the board’s temperature.

Figure 4: Block diagram of the UCS4002 USB port protector device (Source: Microchip Technology Inc.)

The UCS4002 also provides fault-reporting configurability via the FCONFIG pin, allowing users to configure the FAULT# pin behavior. The CC, D+/D–, and SG_SENS pins are electrostatic-discharge-protected to meet the IEC 61000-4-2 and ISO 10605 standards.

The DCP board includes an auxiliary supply based on the MCP16331 integrated step-down switch-mode regulator providing a 5-V voltage and an MCP1825 LDO linear regulator providing a 3.3-V auxiliary voltage.

Board operation

The MCP19061 DCP board shows how the MCP19061 device operates in a four-switch buck-boost topology for the purpose of supplying USB loads and charging them with their required voltage within a permitted range, regardless of the input voltage value. It is configured to independently regulate the amount of output voltage and current for each USB channel (their individual charging profile) while simultaneously communicating with the USB-C-connected loads using the USB PD stack protocols.

All operational parameters are programmable using the two integrated Microchip USB PD controllers, through a dynamic reconfiguration and customization of charging operations, power conversion, and other system parameters. The demo shows how to enable the USB PD programmable power supply fast-charging capability for advanced charging technology that can modify the voltage and current in real time for maximum power outputs based on the device’s charging status.

The MCP19061 device works in conjunction with both current- and voltage-sense control loops to monitor and regulate the load voltage and current. Moreover, the board automatically detects the presence or removal of a USB PD–compliant load.

When a USB PD–compliant load is connected to USB-C Port 1 (the upper connector on the right side of the PCB), USB communication starts and the MCP19061 DCP board displays the charging profiles under the Port 1 window.

If another USB PD load is connected to USB-C Port 2, the Port 2 window is populated in the same way.

The MCP19061 PWM controller

The MCP19061 is a highly integrated, mixed-signal four-switch buck-boost controller that operates from 4.5 V to 36 V and can withstand up to 42 V non-operating. Various enhancements were added to the MCP19061 to provide USB PD compatibility with minimum external components for improved calibration, accuracy, and flexibility. It features a digital PWM controller with a serial communication bus for external programmability and reporting. The modulator regulates the power flow by controlling the length of the on and off periods of the signal, or pulse widths.

The operation of the MCP19061 enables efficient power conversion with the capability to operate in buck (step-down), boost (step-up), and buck-boost topologies for various voltage levels that are lower, higher, or the same as the input voltage. It provides excellent precision and efficiency in power conversions for embedded systems while minimizing power losses. Its features include adjustable switching frequencies, integrated MOSFET drivers, and advanced fault protection. The operating parameters, protection levels, and fault-handling procedures are supervised by a proprietary state machine stored in its nonvolatile memory, which also stores the running parameters.
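The three operating regions can be illustrated with the ideal (lossless) duty-cycle relations for a four-switch buck-boost stage. The mode-selection logic below is a textbook sketch, not the MCP19061's proprietary state machine:

```python
# Illustrative mode selection for a four-switch buck-boost stage like the one
# the MCP19061 controls: step down, step up, or blend near Vin ~ Vout.
# Ideal duty-cycle formulas only; the transition band of 5% is arbitrary.

def buck_boost_mode(v_in, v_out, band=0.05):
    """Return (mode, ideal duty cycle) for an ideal four-switch buck-boost."""
    if v_out < v_in * (1 - band):
        return ("buck", v_out / v_in)        # D = Vout / Vin
    if v_out > v_in * (1 + band):
        return ("boost", 1 - v_in / v_out)   # D = 1 - Vin / Vout
    return ("buck-boost", 0.5)               # transition region near Vin ~ Vout

print(buck_boost_mode(12.0, 5.0))    # steps a 12 V input down to 5 V
print(buck_boost_mode(12.0, 21.0))   # steps the same input up to 21 V
print(buck_boost_mode(12.0, 12.2))   # transition region
```

With the board's 12 V recommended input, both ends of the 3 V to 21 V output range are reachable, which is exactly why the four-switch topology is used here.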

Internal digital registers handle the customization of the operating parameters, the startup and shutdown profiles, the protection levels, and the fault-handling procedures. To set the output current and voltage, an integrated high-accuracy reference voltage is used. Internal input and output dividers facilitate the design while maintaining high accuracy. A high-accuracy current-sense amplifier enables precise current regulation and measurement.

The MCP19061 contains three internal LDOs: a 5-V LDO (VDD) powers internal analog circuits and gate drivers and provides 5 V externally; a 4-V LDO (AVDD) powers the internal analog circuitry; and a 1.8-V LDO supplies the internal logic circuitry.

The MCP19061 is packaged in a 32-lead, 5 × 5-mm VQFN, allowing system designers to customize application-specific features without costly board real estate and additional component costs. A 1-MHz I2C serial bus enables the communication between the MCP19061 and the system controller.

The MCP19061 can be programmed externally. For further evaluation and testing, Microchip provides an MCP19061 dedicated evaluation board, the EV82S16A.

The post Expanding power delivery in systems with USB PD 3.1 appeared first on EDN.

New pockmarks on old walls

News - Tue, 12/02/2025 - 15:10

New pockmarks appeared during one of the missile-and-drone attacks on Kyiv. The building that sustained the damage is Building No. 4 of Igor Sikorsky Kyiv Polytechnic Institute, which houses the Faculty of Chemical Technology and the Faculty of Biotechnology and Biotechnics.

Simple state variable active filter

EDN Network - Tue, 12/02/2025 - 15:00

The state variable active filter (SVAF) is an active filter you don’t see mentioned much today; however, it’s been a valuable asset for us old analog types in the past. This became especially true when cheap dual and quad op-amps became commonplace, as one can “roll their own” SVAF with just one IC package and still have an op-amp left over for other tasks!

Wow the engineering world with your unique design: Design Ideas Submission Guide

This filter’s unique features are simultaneously available low-pass (LP), high-pass (HP), and band-pass (BP) outputs, low component sensitivity, and an independently settable filter “Q”, all while implementing a quadratic 2nd-order filter function with 40-dB/decade slopes. The main drawback is that it requires three op-amps and a few more resistors than other active filter types.
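
Up to the signs and gain factors set by the resistor ratios, the three outputs follow the textbook second-order responses with natural frequency $\omega_0$ and quality factor $Q$ (these are the standard forms, not derived from this article's specific schematic):

$$H_{LP}(s)=\frac{\omega_0^2}{D(s)},\qquad H_{BP}(s)=\frac{(\omega_0/Q)\,s}{D(s)},\qquad H_{HP}(s)=\frac{s^2}{D(s)},\qquad D(s)=s^2+\frac{\omega_0}{Q}\,s+\omega_0^2$$

Far below and far above $\omega_0$, the $\omega_0^2$ and $s^2$ terms dominate the numerator and denominator respectively, giving the 40-dB/decade slopes; at $s=j\omega_0$ the denominator collapses to $j\,\omega_0^2/Q$, which is what makes the BP peak sharpen as $Q$ increases.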

The SVAF employs dual series-connected and scaled op-amp integrators with dual independent feedback paths, which creates a highly flexible filter architecture with the mentioned “extra” components as the downside.

With the three available LP, HP, and BP outputs, this filter seemed like a nice candidate for investigating with the Bode function available in modern DSOs. This is especially so for the newer Siglent DSO implementations that can plot three independent channels, which allows a single Bode plot with three independent plot variables: LP, HP, and BP.

Creating an SVAF with a couple of LM358 duals (I didn’t have any DIP-type quad op-amps like the LM324 directly available, which reminds me, I need to order some soon!!), a couple of 0.01-µF Mylar caps, and a few 10-kΩ and 1-kΩ resistors seemed like a fun project.

The SVAF natural corner frequency is simply ω0 = 1/RC, i.e., f0 = 1/(2πRC) ≈ 1.59 kHz with the mentioned component values, as shown in the notebook image in Figure 1. The filter’s “Q” was set by changing R4 and R5.
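
A quick check of that corner frequency with the stated values (R = 10 kΩ, C = 0.01 µF):

```python
import math

# Corner frequency of the SVAF integrators: omega0 = 1/(R*C),
# f0 = 1/(2*pi*R*C). Values from the article: R = 10 kOhm, C = 0.01 uF.
R = 10e3       # ohms
C = 0.01e-6    # farads

omega0 = 1.0 / (R * C)           # rad/s
f0 = omega0 / (2.0 * math.pi)    # Hz

print(f"omega0 = {omega0:.0f} rad/s, f0 = {f0:.1f} Hz")  # ~1591.5 Hz
```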

Figure 1 The author’s hand-drawn schematic with R1=R2, R3=R6, and C1=C2, resistor values are 1 kΩ and 10 kΩ, and capacitors are 0.01 µF.

This produced the plots for Q of 1, 2, and 4 shown in Figure 2, Figure 3, and Figure 4, respectively, along with supporting LTspice simulations.

The DSO Bode function was set up with DSO CH1 as the input, CH2 (red) as the HP, CH3 (cyan) as the LP, and CH4 (green) as the BP. The phase responses can also be seen as the dashed color lines that correspond to the colors of the HP, LP, and BP amplitude responses.

While it is possible to include all the DSO channel phase responses, this clutters up the display too much, so on the right-hand side of each image, the only phase response I show is the BP phase (magenta) in the DSO plots.

Figure 2 The left side shows the Q =1 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =1 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

Figure 3 The left side shows the Q =2 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =2 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

Figure 4 The left side shows the Q =4 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =4 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

The Bode frequency was swept with 33 pts/dec from 10 Hz to 100 kHz using a 1-Vpp input stimulus from a LAN-enabled arbitrary waveform generator (AWG). Note how the three responses all cross at ~1.59 kHz, and the BP phase, or the magenta line for the images on the right side, crosses zero degrees here.

Figure 5 extends the frequency of the Bode sweep out to 1 MHz, well beyond where you would consider using an LM358. The simulation and DSO Bode measurements agree well even at this range. Note how the simulation depicts the LP LM358 op-amp output resonance at ~100 kHz (cyan) and the BP phase (magenta) response.

Figure 5 The left side shows the Q =7 LTspice plot of the SVAF with the amplitude and phase of the HP (magenta + dashed magenta), the amplitude and phase of the LP (cyan + dashed cyan), and the amplitude and phase of the BP (green + dashed green). The right side shows the Q =7 DSO plot of the SVAF with HP (red), LP (cyan), BP (green), and phase of the BP (magenta).

I’m honestly surprised the simulation agrees this well, considering the filter was crudely assembled on a plug-in protoboard and using the LM358 op-amps. This is likely due to the inverting configuration of the SVAF structure, as our experience has shown that inverting structures tend to behave better with regard to components, breadboard, and prototyping, with all the unknown parasitics at play!

Anyway, the SVAF is an interesting active filter capable of producing simultaneous LP, HP, and BP results. It can even produce an active notch filter with an additional op-amp and a couple of resistors (four op-amps total, but still a single package with the LM324), which the interested reader can discover.
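
The notch trick works because the ideal LP and HP responses are in antiphase at ω0 and cancel when summed. A quick numerical check of the ideal 2nd-order responses (a sketch only, ignoring op-amp limits and the signs of the SVAF's actual outputs):

```python
# Summing the ideal 2nd-order LP and HP responses nulls at omega0,
# producing a notch. Sketch only -- a real implementation sums the
# outputs with a fourth op-amp, minding the SVAF's output signs.

def lp_hp_sum(omega: float, omega0: float = 1.0, q: float = 2.0) -> complex:
    s = 1j * omega
    den = s**2 + (omega0 / q) * s + omega0**2
    return (omega0**2 / den) + (s**2 / den)  # LP + HP

print(abs(lp_hp_sum(1.0)))   # at omega0: deep null
print(abs(lp_hp_sum(10.0)))  # a decade above omega0: near unity gain
```

At ω = ω0 the numerator of the sum is ω0² + (jω0)² = 0 exactly, so the null depth in practice is set by component matching rather than topology.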

Michael A Wyatt is a Life Member of IEEE and has enjoyed electronics ever since childhood. Mike has a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, and ViaSat, before (semi-)retiring with Wyatt Labs. During his career he accumulated 32 US patents and has published a few EDN articles, including the Best Idea of the Year in 1989.


The post Simple state variable active filter appeared first on EDN.

A visit from schoolchildren

News - Tue, 12/02/2025 - 13:54

Igor Sikorsky Kyiv Polytechnic Institute was visited by pupils of grades 10–11 from secondary schools of the Tetiiv city territorial community. During the tour, the senior pupils saw the KPI Library, the university campus, and academic Building No. 1.

💎 Grant stipends for lecturers of Igor Sikorsky Kyiv Polytechnic Institute from the MacPaw AI Lab at KPI

News - Tue, 12/02/2025 - 13:11

MacPaw AI Lab offers grant stipends for lecturers of Igor Sikorsky Kyiv Polytechnic Institute:

$8,000, paid evenly over 6 months
+ $2,000 for materials accepted at a top-tier conference

To participate, you need to:

EDOM Seminar Explores the Next Generation of Physical AI Robots Powered by NVIDIA Jetson Thor

ELE Times - Tue, 12/02/2025 - 12:53

The wave of innovation driven by generative AI is sweeping the globe, and AI’s capabilities are gradually extending from language understanding and visual recognition to action intelligence closer to real-world applications. This change makes physical AI, which integrates “perception, reasoning, and action,” the next important threshold for robotics and smart manufacturing. To help Taiwanese industries grasp this multimodal trend, EDOM Technology will hold the “AI × Multimodal Robotics: New Era of Industrial Intelligence Seminar” on December 3, showcasing NVIDIA Jetson Thor, the ultimate platform for physical AI and robotics, and featuring insights from ecosystem partners who will share innovative applications spanning smart manufacturing, autonomous machines, and education.

As AI technology rapidly advances, robots are shifting from the traditional perception-and-response model to a new stage where they can autonomously understand and participate in complex tasks. The rise of multimodal AI enables machines to integrate image, voice, semantic, and spatial information simultaneously and to make more precise judgments and actions in the real world, making it possible both to “know what to do” and to “know how to do it.” As AI capabilities extend from the purely digital realm to the real world, physical AI has become a core driving force for industrial upgrading.

Multimodal × Physical AI: The Next Key Turning Point in Robotics
The seminar focuses on the theme of “Physical AI Driving the Intelligent Revolution of Robotics” and explores how AI, through multimodal perception and autonomous action capabilities, is reshaping the technical architecture and application scenarios of human-machine collaboration. Through technical sharing and case analysis, the seminar will help companies grasp the next turning points of smart manufacturing.

This event will focus on NVIDIA Jetson Thor and its software ecosystem, providing a panoramic view of future-oriented multimodal robotics technology. The NVIDIA Jetson Thor platform combines high-performance GPUs, edge computing, and multimodal understanding to complete perception, inference, decision-making, and action planning all at the device level, significantly improving robot autonomy and real-time responsiveness. At the same time, the platform is deeply integrated with NVIDIA Isaac, NVIDIA Metropolis, and NVIDIA Holoscan, creating an integrated development environment from simulation, verification, and testing to deployment, thus accelerating the implementation of intelligent robots and edge AI solutions. NVIDIA Jetson Thor also supports large language models (LLMs), visual language models (VLMs), and various generative AI models, enabling machines to interpret their surroundings, interact, and take action more naturally, becoming a core foundation for advancing physical AI.

In addition to the core platform analysis, the event features multiple demonstrations and exchange sessions. These include a showcase of generative AI-integrated robotic applications, highlighting the latest capabilities of the model in visual understanding and action collaboration; an introduction to the ecosystem built by EDOM, sharing cross-field cooperation experiences from education and manufacturing to hardware and software integration; and a hands-on technology experience zone, where attendees can see the practical applications of NVIDIA Jetson Thor in edge AI and multimodal technology.

From technical analysis to industry exchange, cross-field collaboration reveals new directions for smart machines:

  • Analyses of the core architecture of NVIDIA Jetson Thor and the latest developments in multimodal AI by NVIDIA experts.
  • Case studies on how Nexcobot introduces AI automation in smart manufacturing.
  • Ankang High School, which achieved excellent results at the 2025 FIRST Robotics Competition (FRC) World Championship, showcases how AI and robotics courses can cultivate students’ interdisciplinary abilities in education.
  • Insights from Avalanche Computing into LLM and VLM applications in various robotic tasks.

Furthermore, EDOM will introduce its system integration approaches and deployment cases powered by NVIDIA IGX Orin and NVIDIA Jetson Thor, presenting the complete journey of edge AI technology from simulation to application implementation.

The event will conclude with an expert panel. Featuring leading specialists, the discussion covers collaboration, challenges, and international trends brought by multimodal robotics, helping industries navigate and anticipate the next phase of smart machine innovation.

Driven by physical AI and multimodal technologies, smart machines are entering a new phase of growth. The “AI × Multimodal Robotics: New Era of Industrial Intelligence Seminar” will not only showcase the latest technologies but also aim to connect the supply chain in Taiwan, enabling the manufacturing and robotics industries to seize opportunities in multimodal AI. The event will take place on Wednesday, December 3, 2025, at the Taipei Fubon International Convention Center, with registration and demonstration beginning at 12:30 PM. Enterprises and developers focused on AI, robotics, and smart manufacturing are welcome to join and stay at the forefront of multimodal technology. For more information, please visit https://www.edomtech.com/zh-tw/events-detail/jetson-thor-tech-day/

The post EDOM Seminar Explores the Next Generation of Physical AI Robots Powered by NVIDIA Jetson Thor appeared first on ELE Times.

KPI community meets Ahmet Toran Ozdemir, CEO of the technology company ASPILSAN Enerji

News - Tue, 12/02/2025 - 12:05

During the second visit of a delegation from the Turkish company ASPILSAN Enerji to Igor Sikorsky Kyiv Polytechnic Institute, students of the Faculty of Electric Power Engineering and Automation (FEA) and the Faculty of Chemical Technology (KhTF) attended the lecture “Battery technologies and energy solutions shaping the future.”
