Feed aggregator

Adoption of Electric Motorcycles: A Challenging Ride Ahead

ELE Times - 2 hours 24 min ago

India’s tryst with electric two-wheelers began as early as the 1990s, but meaningful momentum only arrived decades later. Early policy interventions—such as subsidies offered by the Ministry of New and Renewable Energy (MNRE) between 2010 and 2012—laid the groundwork. The launch of the FAME (Faster Adoption and Manufacturing of Electric Vehicles) scheme further accelerated adoption, drawing both startups and legacy manufacturers into the electric mobility race.

Yet, despite this progress, one segment remains conspicuously underdeveloped: electric motorcycles. A joint NITI Aayog–TIFAC study published in June 2022 estimates that India’s two-wheeler manufacturing capacity could exceed 20 million units annually by 2030. However, electric motorcycles account for only a negligible fraction of this future potential.

Royal Enfield’s Electric Bet Rekindles Debate

As India’s foremost motorcycle manufacturer, Royal Enfield’s move into electric mobility represents more than product diversification—it marks a fundamental technological transition for a brand long associated with large-displacement engines, mechanical character, and long-distance touring capability. Entering the electric motorcycle space forces a redefinition of performance, shifting the emphasis from displacement and acoustics to torque delivery, thermal control, software, and system integration.

The company has signalled its electric intent through the Flying Flea platform, with two derivatives planned for launch after 2026. Beyond the product itself, the real story lies in the underlying technology stack Royal Enfield is building for the electric era.

According to Matt Cardenas, Head of EV Product Strategy at Royal Enfield, much of the vehicle’s technology has been developed in-house to optimise user experience. Over 200 engineers are engaged in powertrain development, with 42 patents already filed. The company has also partnered directly with NXP and Snapdragon to co-develop motorcycle-specific chips—an uncommon move in the two-wheeler industry.

For a company built on the appeal of powerful combustion motorcycles, Royal Enfield’s electric strategy suggests a broader reimagining of power itself. In the absence of exhaust notes and mechanical mass, performance is being reconstructed through torque curves, thermal discipline, and digital intelligence. How successfully this engineering-first approach translates into rider acceptance may ultimately determine whether electric motorcycles can move beyond the margins of India’s two-wheeler market.

Where Does India’s Electric Motorcycle Market Stand?

In August 2025, NITI Aayog convened a high-level meeting with major two-wheeler OEMs—Hero MotoCorp, Bajaj Auto, TVS Motor, Ola Electric, Ather Energy, and Revolt Motors—to explore strategies for improving electric motorcycle adoption.

The current market structure highlights the challenge. Internal combustion engine (ICE) motorcycles still account for nearly two-thirds of India’s two-wheeler market. Electric scooters, by contrast, have achieved around 15% penetration within the scooter segment, which represents almost one-third of all two-wheelers sold. Electric motorcycles, however, contribute just 0.1% of the total two-wheeler market, according to NITI Aayog.

This is partly because electric motorcycle manufacturing carries some weighty challenges, ranging from battery mass to efficiency and rider experience.

The Weight of the Battery Problem

Battery weight remains one of the most fundamental constraints. Unlike four-wheelers, motorcycles are highly sensitive to mass distribution. For long-distance and off-road riding in particular, maintaining balance, agility, and stability becomes increasingly complex as battery size grows.

Engineers must carefully position the battery to preserve handling characteristics while reinforcing the frame without adding excessive weight. Ergonomics also demands rethinking, as the traditional engine layout is replaced by a bulkier energy storage system. Suspension and braking systems require complete recalibration to match the altered dynamics of an electric powertrain. In effect, designing an electric motorcycle is far more complex than simply swapping an engine for a motor.

Reflecting the same idea, Ather’s leadership has noted that practical and cost considerations currently favour electric scooters over bikes in India — an observation directly linked to the challenges electric motorcycles face in matching affordability and everyday utility.

Efficiency Gains That Don’t Fully Translate

On paper, electric motors are vastly more efficient than ICEs, converting 80–90% of electrical energy into motion, compared to just 20–30% for combustion engines. Regenerative braking and fewer moving parts further enhance theoretical efficiency.

However, these advantages shrink when evaluated on a well-to-wheel basis. Petrol has a far higher energy density by weight than today’s lithium-ion batteries. A small fuel tank can deliver long range with minimal mass, while an equivalent amount of stored electrical energy adds substantial weight. For long-distance riding at sustained speeds, the heavier battery pack can negate much of the motor’s inherent efficiency advantage, resulting in shorter real-world range than ICE motorcycles.
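
To put rough numbers on this argument, the sketch below compares the onboard energy mass an ICE and an electric motorcycle would need for a given range. All figures are ballpark assumptions for illustration (petrol ~12.9 kWh/kg gross, ~25% tank-to-wheel efficiency; lithium-ion pack ~0.20 kWh/kg, ~85% battery-to-wheel efficiency; ~4 kWh per 100 km needed at the wheel), not measured data for any particular model.

```python
# Rough onboard-energy-mass comparison for a mid-size motorcycle.
# All constants are ballpark assumptions, not measurements.

USEFUL_KWH_PER_100KM = 4.0    # energy needed at the wheel per 100 km
PETROL_KWH_PER_KG = 12.9      # gross chemical energy density of petrol
ICE_EFFICIENCY = 0.25         # tank-to-wheel
PACK_KWH_PER_KG = 0.20        # pack-level Li-ion energy density
EV_EFFICIENCY = 0.85          # battery-to-wheel

def fuel_mass_kg(range_km: float) -> float:
    """Petrol mass needed to cover range_km."""
    gross_kwh = USEFUL_KWH_PER_100KM * range_km / 100 / ICE_EFFICIENCY
    return gross_kwh / PETROL_KWH_PER_KG

def pack_mass_kg(range_km: float) -> float:
    """Battery pack mass needed to cover range_km."""
    gross_kwh = USEFUL_KWH_PER_100KM * range_km / 100 / EV_EFFICIENCY
    return gross_kwh / PACK_KWH_PER_KG

for km in (100, 300):
    print(f"{km} km: petrol ~{fuel_mass_kg(km):.1f} kg, "
          f"battery ~{pack_mass_kg(km):.1f} kg")
```

Even with generous pack assumptions, the battery comes out more than an order of magnitude heavier than the fuel it replaces (roughly 24 kg versus 1.2 kg for 100 km under these numbers), which is exactly the handling and range penalty described above.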

Transmission, Maintenance, and the Rider’s Bond

Electric scooters have benefited from simple, gearless drivetrains that translate easily into electric architectures. Motorcycles, especially those with gears, face greater design and manufacturing complexity. This also affects maintenance and repair—areas of particular importance to India’s motorcycle culture, where self-repair and aftermarket modifications are deeply ingrained. Limited service familiarity and proprietary components can dampen enthusiasm among seasoned riders.

The Sound of Silence

For many long-distance and enthusiast riders, motorcycles are as much about emotion as engineering. The sound, vibration, and mechanical feedback of a combustion engine form a core part of the riding experience. Electric motorcycles, with their near-silent operation, can feel less visceral—even if they deliver superior acceleration.

That said, this argument is not universal. Electric bikes offer tangible benefits: drastically reduced noise pollution, smoother power delivery, and instant torque that can feel both controlled and exhilarating. For a new generation of riders, these attributes may redefine what performance and pleasure mean on two wheels.

A Weak Global Reference Ecosystem

Globally, electric motorcycles lack the ecosystem support that helped electric scooters scale. The NITI Aayog study notes the absence of learning spillovers from international markets. While India benefited from mature Chinese supply chains and proven designs in e-scooters, no comparable global blueprint exists for electric motorcycles. As a result, manufacturers must build R&D capabilities largely from scratch.

Only a handful of players—Revolt Motors, Ultraviolette Automotive, Oben Electric, and Matter Motors—are currently active in India’s electric motorcycle space. This contrasts sharply with the success of Ola Electric and Ather Energy in the scooter segment. Internationally, the picture has also dimmed, highlighted by the bankruptcy of Italian high-performance EV motorcycle maker Energica in October 2024. While brands like Kawasaki and Zero remain prominent globally, their volumes stay limited.

Also, executives from Bajaj Auto’s electric two-wheeler business have acknowledged that the industry initially struggled with supply chain readiness and cost structures, and that building a sustainable EV business requires a measured approach rather than aggressive cash burn — indirectly underscoring how complexity and cost are barriers for electrifying performance-oriented two-wheelers.

The Road Ahead

For now, innovation appears to be the only force capable of sustaining momentum in electric motorcycles. Breakthroughs in battery chemistry, thermal management, lightweight materials, and modular platforms will be critical. Until governments worldwide offer stronger policy support—through targeted subsidies, charging infrastructure, and R&D incentives—electric motorcycles are likely to remain a passion project rather than a mass-market reality.

The ride ahead, much like the machines themselves, will demand balance, patience, and relentless engineering ingenuity.


Edge AI Chipsets: How On-Device Intelligence Is Reshaping Embedded System Design

ELE Times - 2 hours 49 min ago

The cloud-first model for embedded systems is becoming a legacy architecture. We’re moving away from simply piping data to remote servers and instead shifting the entire decision engine onto the bare metal. Driven by specialised Edge AI silicon such as NPUs and accelerated RISC-V cores, this evolution allows us to bake autonomous logic directly into sensors and controllers. In a production environment, on-device AI is a functional requirement, not a luxury. As NVIDIA CEO Jensen Huang noted in his 2025 GTC keynote, “The next wave is already happening… Robotics, which has been enabled by physical AI, AI that understands the physical world, is the new era,” marking a definitive shift toward intelligence that lives where the action occurs.

Several factors make on-device AI critical today:

  • Solving Latency: In robotics or power-grid monitoring, a cloud round-trip is a system failure. You need deterministic, sub-millisecond responses that only local inference provides.
  • Cutting the Bandwidth Tax: Constant streaming drains batteries and budgets. Local processing means we only transmit the “meaning,” not the raw noise, making massive IoT fleets cost-effective.
  • Hardened Privacy: For medical or industrial IP, data in transit is a liability. Keeping telemetry on the silicon is the most effective way to ensure confidentiality. Cristiano Amon, CEO of Qualcomm, reinforces this, stating: “When you do the processing on the device, it’s immediate. You don’t have to wait. It’s private. It’s your data. It’s your personal graph that stays with you.”
  • True Autonomy: Your hardware shouldn’t brick when the Wi-Fi drops. Edge AI ensures the machine stays smart in remote or “noisy” environments.

These factors collectively make Edge AI an essential enabler of modern embedded intelligence.
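
As a concrete illustration of the latency argument, the sketch below times repeated local inference with ONNX Runtime, one of the common edge runtimes. The model file, input shape, and provider choice are placeholders to swap for your own exported network; this is a measurement harness, not a claim about any specific model.

```python
# Minimal sketch: measuring steady-state local-inference latency with
# ONNX Runtime. "model.onnx" and the input shape are placeholders.
# Assumes: pip install onnxruntime numpy
import time
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
inp = sess.get_inputs()[0]                              # first model input
x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # placeholder tensor

sess.run(None, {inp.name: x})        # warm-up run (allocations, JIT, caches)
N = 100
t0 = time.perf_counter()
for _ in range(N):
    sess.run(None, {inp.name: x})
ms = (time.perf_counter() - t0) / N * 1e3
print(f"mean local inference latency: {ms:.2f} ms")     # no network round-trip
```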

Architectural Distinctions of Edge AI Chipsets

Edge AI chipsets differ from conventional microcontrollers (MCUs) and CPUs in architectural intent and operational efficiency. Core characteristics include:

  • AI Accelerators (NPUs/VPUs): Dedicated engines built for neural-network inference (convolutions, matrix multiplications) that significantly exceed CPUs in speed and power efficiency.
  • Heterogeneous SoC Architectures: A combination of CPU (control tasks), NPU (AI inference), and sometimes GPU (parallel processing) ensures optimised resource allocation across workloads.
  • Model Optimisation: Techniques like quantisation, pruning, and compression let deep learning models run on resource-limited devices without significant accuracy loss (a minimal sketch follows this list).
  • Power & Thermal Management: Dynamic voltage and frequency scaling, low-power modes, and thermal improvements let Edge AI function within stringent power and temperature constraints.
  • Security & Reliability Features: Protection of sensitive operations, particularly in industrial deployments and critical infrastructure, achieved through measures such as memory isolation, secure boot processes, and hardware-level tamper-resistant design.
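
To make the quantisation bullet concrete, here is a minimal sketch of post-training symmetric int8 quantisation with NumPy. Production toolchains add calibration data, per-channel scales, and activation handling; this only shows the core idea of mapping float weights onto an integer datapath.

```python
# Minimal post-training symmetric int8 quantisation of a weight tensor.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float weights onto int8 with a single symmetric scale."""
    scale = np.abs(w).max() / 127.0                     # full-range scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)        # toy weight matrix
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).mean()
print(f"4x smaller ({w.nbytes} -> {q.nbytes} bytes), mean abs error {err:.5f}")
```

Even this naive scheme shrinks the weights fourfold with a small reconstruction error, which is why integer datapaths dominate NPU designs.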

By combining these features, edge-AI chipsets make intelligent behaviour feasible on devices previously incapable of complex decision-making.

Transforming Embedded System Design

The introduction of Edge AI fundamentally alters embedded system design:

  1. From Reactive to Cognitive Systems: Traditional embedded devices follow fixed, deterministic logic. Edge AI enables them to perceive, classify, and act autonomously, detecting defects, predicting equipment failures, and monitoring electronic equipment.
  2. Real-Time Autonomy: With local inference, devices operate independently of cloud connectivity, critical for industrial, safety-critical, or remote applications.
  3. Distributed Intelligence & Scalability: Large IoT deployments can now distribute AI across nodes, reducing network load and ensuring real-time responsiveness.
  4. Energy and Bandwidth Efficiency: Local processing cuts down on data transmission, which saves energy and money and makes the system less reliant on centralized infrastructure.
  5. Cross-Layer Co-Design: Hardware-software co-design is now essential. Teams must optimise model architecture, memory allocation, runtime scheduling, and power management from the outset.

Edge AI thus transforms embedded systems from simple controllers into autonomous, intelligent agents capable of learning and decision-making.

Real-World Applications

Edge AI chipsets are already revolutionising multiple sectors:

  • Industrial Automation & Smart Manufacturing: Vision-based defect detection, predictive maintenance, anomaly detection, and real-time monitoring of inverters, EV chargers, and SMPS.
  • Embedded Vision & IoT: Smart cameras, object detection, robotics, drones, and smart sensors with on-device analytics.
  • Consumer Electronics & Wearables: Offline voice recognition, gesture detection, and biometric authentication while preserving privacy.
  • Energy & Power Electronics: Autonomous monitoring of power converters, predictive fault detection, and safety-critical decisions in EV and renewable energy systems.
  • Agriculture & Remote Infrastructure: Edge AI sensors classify crop health, monitor environmental conditions, and operate autonomously in rural or low-connectivity areas.

These applications illustrate that Edge AI is no longer experimental – it’s a practical enabler for real-world intelligence in embedded systems.

Challenges and Considerations

While Edge AI presents opportunities, several challenges require careful engineering:

  • Resource Constraints: Limited compute, memory, and power require model optimisation, which may impact accuracy or capability.
  • Hardware Heterogeneity: Diverse SoCs and NPUs make deployment across platforms complex.
  • Thermal and Power Management: Continuous inference can generate heat and consume power, impacting device lifespan.
  • Security & Trust: Edge devices handling sensitive data must ensure secure boot, encryption, and tamper resistance.
  • Model Lifecycle Management: Updating and maintaining models across fleets of devices, especially in remote locations, is a significant operational challenge.
  • Design Complexity: Effective deployment demands collaboration between ML engineers, hardware designers, and embedded software developers.

Addressing these challenges is essential for scalable, robust, and efficient Edge AI implementations.

Emerging Trends & the Road Ahead

Edge AI chipsets are evolving rapidly:

  • TinyML and Micro-Edge Devices: Ultra-low-power NPUs enable AI on minimal sensors and microcontrollers.
  • Chiplet-Based Modular SoCs: Modular architectures combining CPUs, AI accelerators, and memory provide scalable, upgradeable solutions.
  • Cross-Layer Automation: Improved toolchains for quantisation, pruning, scheduling, and deployment reduce manual tuning and accelerate development.
  • Hybrid Edge-Cloud Models: On-device inference combined with federated learning or cloud aggregation balances autonomy with long-term model improvement.
  • Enhanced Security: Trusted execution environments and secure hardware primitives protect distributed edge deployments.

These trends point toward embedded systems that are intelligent, autonomous, energy-efficient, and scalable across industries.

India’s Emerging Edge AI Ecosystem

India is rapidly contributing to the global Edge AI landscape. Startups and MNCs like Netrasemi, Mindgrove Technologies, InCore Semiconductors, and MosChip Technologies are developing edge-AI SoCs, NPUs, and embedded solutions tailored for industrial, automotive, and IoT applications. With government initiatives like Digital India and Make in India, combined with academic research, the country is fostering innovation in Edge AI for both domestic and global markets.

Conclusion

Edge AI chipsets are changing what we expect from embedded devices. Work that once had to be pushed to a central system can now be handled directly where the data is produced. This allows equipment to react immediately, even in environments where connectivity is unreliable or power is limited. Designing such systems is no longer just a matter of selecting a processor and writing code; it involves careful trade-offs between performance, power use, reliability, and long-term maintenance, with security built in from the start. As AI visionary Andrew Ng recently summarised, “The future AI wealth doesn’t belong to those who own the largest GPU clusters but to those who know how to use the smallest models to solve the most specific problems… Edge computing and small models are the wealth keys.”

For engineers in the embedded domain, this is a practical turning point rather than a theoretical one. Devices are moving beyond fixed, single-purpose roles and taking on more responsibility within distributed setups. Edge AI enables the development of autonomous and efficient systems. These solutions deliver the consistent reliability required by various industries.


Quantum-Safe Hardware: Why the Cybersecurity Deadline Is Closer Than the Quantum Breakthrough

ELE Times - 3 hours 11 min ago

The world’s digital infrastructure is quietly approaching a cryptographic inflexion point. For decades, global cybersecurity has relied on a small set of mathematical assumptions, most notably RSA and elliptic-curve cryptography. These assumptions underpin everything from cloud authentication and mobile networks to payment systems, defence communications, and digital identity. Quantum computing threatens not to weaken these foundations, but to invalidate them entirely.

What makes this transition uniquely dangerous is not the arrival of quantum computers themselves, but the mismatch between how fast cryptography can be broken and how slowly hardware trust systems can be replaced. This is not a software problem that can be patched overnight. It is a hardware lifecycle problem measured in decades. As quantum research accelerates and post-quantum cryptography standards solidify, a hard truth is emerging across the industry: quantum-safe hardware must be deployed well before quantum computers can break encryption. Waiting for certainty is, paradoxically, the fastest path to systemic insecurity.

From Mathematical Risk to Engineering Reality

Quantum computing represents a rare technological disruption where the threat is mathematically inevitable, not probabilistic. Shor’s algorithm does not weaken RSA or elliptic-curve cryptography; it nullifies them. Once fault-tolerant quantum computers emerge, today’s cryptographic foundations collapse outright. More critically, attacks need not be real-time: encrypted data captured today can be decrypted years later under the “harvest now, decrypt later” model.

As Ali El Kaafarani, CEO of PQShield, has consistently argued, the industry’s mistake is treating post-quantum cryptography as a future software upgrade rather than a present hardware engineering challenge. Once quantum decryption is possible, compromise is retroactive; you don’t just lose today’s secrets, you lose yesterday’s. This reality has shifted focus from cryptographic theory to what can be implemented in silicon. While many PQC schemes exist on paper, only a narrow set survives real-world constraints of memory, power, latency, and side-channel resistance. Lattice-based algorithms such as NIST-standardised Kyber and Dilithium are currently the most hardware-viable, though they still impose significant demands. Other approaches, like code-based cryptography, struggle with impractically large key sizes that overwhelm secure hardware resources.

Why Hardware, Not Software, Is the Real Battleground

Cryptography is enforced by hardware roots of trust: HSMs, secure elements, TPMs, SIMs, and embedded controllers underpin secure boot, identity, and key protection. Post-quantum cryptography breaks the assumptions these systems were built on: larger keys and heavier computation quickly expose the limits of hardware optimised for RSA and ECC. While software can absorb this overhead, hardware-bound systems cannot.

This is where “crypto-agility” fractures. Many platforms claim algorithm agility, but in practice, it exists only at the firmware or API layer, not in silicon. As Todd Moore, Vice President of Data Security Products at Thales, has noted, true crypto-agility is about whether hardware can evolve without becoming a performance bottleneck. In high-throughput environments, first-generation “PQC-ready” hardware often works functionally but not economically, forcing a choice between degraded performance and early replacement. Crypto-agility is no longer a software feature; it is a physical property of hardware architecture.

The Myth of Universal Crypto-Agility

In response to quantum risk, many vendors describe their products as “crypto agile.” In practice, crypto-agility has often meant that software can be updated, while the underlying hardware remains fixed. That model is breaking down. True crypto-agility in the post-quantum era requires hardware that was designed with uncertainty in mind: spare compute capacity, flexible accelerators, sufficient secure memory, and firmware update paths that are themselves resistant to quantum attacks. Much of the hardware deployed today, particularly older HSMs and embedded roots of trust, does not meet this bar.

Newer platforms from vendors such as Thales and Entrust are moving toward genuinely post-quantum-ready architectures, but even here, the industry is candid that early PQC support is only a first step. In many environments, especially high-throughput systems like certificate authorities and telecom authentication servers, performance collapses when PQC is layered onto hardware never designed for it. The uncomfortable implication is that a significant portion of deployed cryptographic hardware will ultimately need physical replacement, not because it is insecure today, but because it cannot scale securely tomorrow.

The Long-Life Device Problem: Secure Boot and Identity

Quantum risk becomes existential when hardware is expected to remain trusted for 15–20 years. Industrial equipment, telecom infrastructure, vehicles, medical devices, and defence systems often outlive multiple cryptographic generations. If the firmware-signing keys protecting these systems become quantum-breakable mid-lifecycle, attackers gain the ability to install malicious updates that appear fully legitimate.

To address this, hardware designers are rethinking trust from the ground up. Secure boot chains are increasingly being designed around hybrid verification models, where classical and post-quantum signatures coexist during a long transition period. Device identity is shifting toward roots of trust that can support PQC natively, rather than bolting it on later.

Equally important is the ability to migrate trust anchors in the field. Hardware that cannot rotate its cryptographic identity without physical access becomes a liability the moment quantum attacks become practical. In this sense, quantum-safe hardware is no longer just a security feature; it is a prerequisite for product longevity.

Scale Changes Everything

Post-quantum cryptography is feasible at a small scale. The real challenge emerges at volume. Larger keys and signatures mean more data moving through networks, more bytes stored in certificates, and more cycles spent on verification. In isolation, these costs are manageable. At the scale of global telecom networks, payment systems, and cloud platforms, they become systemic.

Cloud providers have already found that naïve PQC deployment can inflate handshake sizes enough to affect latency and throughput. Telecom operators face similar issues in authentication signalling, where milliseconds matter. This is why hybrid cryptography combining classical and post-quantum algorithms has become the dominant near-term strategy. Hardware acceleration is the critical enabler here. When PQC is supported at the silicon level, much of the overhead becomes manageable. When it is not, performance penalties cascade across the system.
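
To put rough numbers on that overhead, the sketch below compares the cryptographic payload a TLS-style handshake carries with classical primitives versus NIST's standardised post-quantum ones. The byte counts are the published FIPS 203/204 parameter-set sizes; the one-key-share-plus-one-signature handshake model is a deliberate simplification, not a full protocol accounting.

```python
# Back-of-envelope growth in handshake crypto bytes when moving from
# classical primitives to NIST PQC. Sizes (bytes) are published
# parameter-set figures; the handshake model itself is deliberately crude.
classical = {
    "key share (X25519 public key)": 32,
    "signature (ECDSA P-256, raw)": 64,
}
post_quantum = {
    "key share (ML-KEM-768 public key)": 1184,
    "KEM ciphertext (ML-KEM-768)": 1088,
    "signature (ML-DSA-65)": 3309,
}
print(f"classical crypto bytes:    {sum(classical.values()):>5}")
print(f"post-quantum crypto bytes: {sum(post_quantum.values()):>5}")
```

Even this crude model shows a roughly 50x growth in cryptographic bytes per handshake, which is why hybrid schemes and silicon-level acceleration matter at carrier and cloud scale.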

The Real Deadline No One Wants to Announce

Public timelines often place cryptographically relevant quantum computers in the early to mid-2030s. Internally, many security leaders plan as if the deadline is earlier. The reason is simple: hardware migration takes time. From design and validation to certification and deployment, replacing cryptographic hardware across critical infrastructure can take a decade. Waiting for definitive proof that quantum computers can break RSA guarantees that the response will come too late.

A pragmatic consensus is forming across the industry. By the second half of this decade, all new cryptographic hardware procurement must be quantum-safe by design. By the end of the decade, high-value infrastructure must complete the transition. Anything later risks turning today’s encrypted data into tomorrow’s breach archive.

As Marco Pereira, Global Head of Cybersecurity at Capgemini, has emphasised about quantum risk: “Quantum readiness isn’t about predicting a date, it’s about managing irreversible risk. Every encrypted asset today could become tomorrow’s breach if organisations delay adopting post-quantum protections.”

Quantum-safe hardware is not driven by fear of the unknown. It is driven by certainty that cryptography is bounded by physics, certainty that hardware lifecycles are slow, and certainty that data longevity outlasts algorithm lifetimes. The organisations that navigate this transition successfully will not be those that deploy post-quantum cryptography fastest in software, but those that redesign their hardware roots of trust early, with crypto-agility, scale, and long-term integrity at the core. In the quantum era, cybersecurity failure will not arrive with alarms or outages. It will arrive quietly, years after the data was first captured. The only winning strategy is to make that data permanently useless before the quantum key ever turns.


SQL Server IoT 2025: Why It Matters for Embedded and OEM Systems

ELE Times - 3 hours 28 min ago

Courtesy: Arrow Electronics

The new SQL Server IoT 2025 is now available! If you build devices, appliances, or embedded systems that ship with a database inside, SQL Server IoT 2025 is worth a serious look. It brings the SQL Server 2025 engine into long-life, fixed-function products. You get the full engine, the same AI features, the same JSON and vector capabilities, and the same security improvements. The only difference is that it is packaged and licensed for OEM and embedded scenarios.

In my experience supporting embedded customers, the pattern is consistent. More data at the edge, tight security requirements, long product lifecycles, and pressure to support AI without adding cloud dependencies. SQL Server IoT 2025 helps you handle those problems without changing how you design your systems. You can use the same T-SQL, drivers, tools, containers, and development workflow.

AI where your device runs

The biggest change in SQL Server IoT 2025 is the built-in AI stack. The database now supports a native vector type, semantic search, hybrid search, and local or remote model execution. You can generate embeddings inside the engine, and you can run AI agents through a secure REST endpoint that SQL Server manages.

Nothing in this requires a cloud connection unless you choose to use one. You can keep models local by using Ollama or ONNX Runtime. You can also call cloud models through Azure OpenAI or OpenAI.

For embedded systems, this means you can build features that previously required a cloud round-trip. Examples include local anomaly detection, troubleshooting assistance, natural language search of manuals or logs, and smarter automation. If you already store your device data in SQL Server, the new vector features let you use that data immediately.
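
As a sketch of what that looks like from application code, the snippet below runs a nearest-neighbour query over stored embeddings using the engine's vector support. The VECTOR type and VECTOR_DISTANCE function follow SQL Server 2025 preview documentation, but the table, connection string, and dimensions here are hypothetical; verify the exact T-SQL surface against your build before relying on it.

```python
# Hypothetical sketch: local vector search against SQL Server IoT 2025.
# VECTOR / VECTOR_DISTANCE follow SQL Server 2025 preview docs; the schema,
# connection string, and 768-dim embeddings are illustrative assumptions.
import json
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "DATABASE=DeviceDb;Trusted_Connection=yes;Encrypt=yes"
)
cur = conn.cursor()

# Hypothetical table of device-log embeddings (e.g., from a local ONNX model).
cur.execute("""
IF OBJECT_ID('dbo.LogChunks') IS NULL
  CREATE TABLE dbo.LogChunks (
      Id INT IDENTITY PRIMARY KEY,
      Chunk NVARCHAR(MAX),
      Embedding VECTOR(768)
  );
""")

query_vec = json.dumps([0.01] * 768)   # placeholder query embedding
rows = cur.execute("""
SELECT TOP (5) Chunk,
       VECTOR_DISTANCE('cosine', Embedding, CAST(? AS VECTOR(768))) AS Dist
FROM dbo.LogChunks
ORDER BY Dist;
""", query_vec).fetchall()
for chunk, dist in rows:
    print(f"{dist:.4f}  {chunk[:60]}")
```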

Security that matches modern requirements

The platform is secure out of the box. SQL Server IoT 2025 carries forward the security updates from SQL Server 2025. That includes TLS 1.3, TDS 8.0, PBKDF hashing, managed identities, and stricter defaults. This helps you ship hardware that is ready for audit and compliance checks. For teams in healthcare, manufacturing, or other controlled industries, this reduces significant design risk.

Performance improvements that help small systems

Most devices in the field run on constrained compute, so predictable behaviour under load becomes more important than raw horsepower. SQL Server IoT 2025 benefits from improvements like optimised locking, Lock After Qualification, tempdb governance, faster failover, and reduced contention during heavy workloads.

Your device can run more predictable workloads with fewer stalls. It starts faster, handles concurrency better, and gives you cleaner behaviour when something in the system misbehaves.

Better ways to move data out of the device

You also get Change Event Streaming, which pushes changes directly to Azure Event Hubs. The engine streams committed transactions without extra system tables. This helps when your design needs low-latency reporting or coordination with services outside the device.

If you use Microsoft Fabric, SQL Server IoT 2025 supports database mirroring directly into OneLake. That gives you a simple path to analytics or long-term storage without writing ETL code.

Developer workflow stays simple

Stability in the toolchain is just as important as stability in the engine. SQL Server IoT 2025 uses the same drivers, SSMS, VS Code extension, containers, and deployment workflow. You also get the new JSON type, JSON indexing, RegEx functions, Base64 utilities, and improved T-SQL functions that SQL Server 2025 introduces.

When an upgrade is worth it

If you are trying to decide whether this upgrade is worth it, these are the points that usually guide the decision:

  • If your device is running SQL Server 2014 or 2016, you are past or near the end of mainstream support, and the extended support runway is shrinking fast. SQL Server IoT 2025 offers a long-life option with a modern engine, stronger security, and a cleaner feature set for long-term maintenance. You also get improvements like accelerated recovery, better indexing behaviour, and up-to-date drivers.
  • If your product roadmap includes AI features or if customers are asking for analytics without sending data off the device, SQL Server IoT 2025 gives you a built-in way to handle that.
  • If your company is standardising on Fabric or Azure Arc, IoT 2025 fits neatly into that architecture.

  • If your design team is trying to reduce custom code around queues, logs, or sync processes, IoT 2025 reduces that work.


Microchip Releases Plastic Transient Voltage Suppressors for A&D Applications

ELE Times - 3 hours 55 min ago

Microchip Technology announced the release of its JANPTX family of non-hermetic plastic Transient Voltage Suppressor (TVS) devices that meet the MIL-PRF-19500 qualification, offering high-reliability protection for aerospace and defence applications. These TVS devices are the first in the industry to achieve MIL-PRF-19500 qualification in a plastic package, offering engineers a lightweight, cost-effective solution without sacrificing stringent military performance requirements. The JANPTX product line is available in voltage ranges from 5V to 175V and includes six variants: JANPTX1N5555UJ, JANPTX1N5558UG, JANPTX1N5629AUJ, JANPTX1N5665AUG, JANPTX1N5907UG and JANPTX1N5907UJ.

With a high peak pulse power rating of 1.5 kW and clamping response times measured at less than 100 picoseconds in internal tests, the JANPTX family is designed to help ensure the safety and reliability of sensitive electronic components in demanding environments. Designed for surface mounting, these unidirectional TVS devices deliver protection against voltage transients such as lightning strikes, electrostatic discharge (ESD), and electrical surges.

“Microchip’s JANPTX family sets a new standard for military-grade transient protection in aerospace and defence,” said Ronan Dillon, associate director of Microchip’s high-reliability and RF business unit. “By delivering the first MIL-PRF-19500 qualified plastic TVS devices, we enable engineers to achieve high reliability and performance in a lightweight, cost-effective package.”

The JANPTX TVS devices help safeguard airborne avionics, electrical systems and other mission-critical applications where low voltage and high reliability are essential. Their advanced design provides protection against switching transients, induced RF effects, Electromagnetic Pulse (EMP) events, and secondary lightning strikes, meeting the IEC61000-4-2, IEC61000-4-4, and IEC61000-4-5 standards.

Key Features of the TVS Devices

  • Surface-mount unidirectional TVS design
  • Can suppress transients up to 1,500W at 10/1000 µs
  • Capable of clamping transients in less than 100 ps
  • Working voltage range: 5V to 175V
  • Military qualification: MIL-PRF-19500/716
  • Equivalent hermetic packages available for surface mount and thru-hole
  • Weight: ~0.25 grams

Microchip’s scalable, customizable and high-reliability solutions for aerospace and defence applications are designed to meet rigorous industry standards. Supported by a dedicated aerospace and defence team, Microchip offers controlled manufacturing, full traceability, advanced testing, extended temperature operation and radiation-hardened options, helping to ensure long-term supply and dependable support for mission-critical designs. To learn more about Microchip’s aerospace and defence solutions, visit its website.

Development Tools

The JANPTX devices are supported by various SPICE models, which enable the virtual prediction and simulation of a circuit’s behaviour, eliminating the need for time-consuming physical prototyping and redesigns.


Týr-the-Pruner: Search-based Global Structural Pruning for LLMs

ELE Times - 4 hours 22 min ago

Courtesy: AMD

Key Takeaways:

  1. End-to-end global structural pruning: Týr-the-Pruner jointly optimises pruning and layer-wise sparsity allocation, avoiding two-stage global ranking pipelines.
  2. Multi-sparsity supernet with expectation-aware error modelling: Layers are pruned at multiple sparsity levels and evaluated collectively to capture cross-layer dependencies.
  3. Coarse-to-fine evolutionary search under a fixed sparsity budget: Sparsity-shift mutations preserve global constraints while progressively refining resolution (12.5% → 1.56%).
  4. Taylor-informed, backprop-free local pruning: First- and second-order saliency guides structured pruning with minimal functional drift.
  5. Near-dense accuracy with real hardware gains: Up to 50% parameter reduction retains ~97% accuracy on Llama-3.1-70B, accelerating inference on AMD Instinct GPUs.

As large language models (LLMs) scale into the tens and hundreds of billions of parameters, pruning has re-emerged as a critical lever for improving inference efficiency without sacrificing accuracy. AMD’s Týr-the-Pruner advances this frontier with a search-based, end-to-end framework for global structural pruning, delivering up to 50% parameter reduction while retaining ~97% of dense accuracy on Llama-3.1-70B—a new state of the art among structured pruning methods.

Accepted to NeurIPS 2025, the work also demonstrates tangible inference speedups on AMD Instinct GPUs, reinforcing pruning’s relevance not just as a compression technique, but as a practical path to deployment-scale efficiency.

Why global sparsity matters

Local structural pruning is appealing for its simplicity and efficiency: layers are pruned independently, often allowing even hundred-billion-parameter models to fit on a single device. However, this approach enforces uniform per-layer sparsity, overlooking how errors and redundancies propagate across layers.

Existing “global” pruning methods attempt to address this by first ranking substructures across layers and then pruning accordingly. While intuitive, this two-stage pipeline breaks end-to-end optimisation and struggles to capture inter-layer interactions.

Týr-the-Pruner flips the paradigm. Instead of ranking structures before pruning, it first constructs a multi-sparsity supernet and then searches directly for the optimal layer-wise sparsity distribution under a fixed global budget—yielding a truly end-to-end global pruning strategy.

Inside Týr-the-Pruner: How It Works

Figure 1. An overview of Týr-the-Pruner. Large language models (a) will be effectively locally pruned across multiple sparsity ratios and constructed into a supernet (b). An iterative prune-and-search strategy will be used to select the optimal sparse structure for each layer while maintaining a target overall sparsity ratio: pruning and sparsity-shift-driven evolutionary search are implemented iteratively with a coarse-to-fine sparsity interval granularity (c). Ultimately, the post-pruned LLM with the optimal sparsity distribution (d) is obtained.

Building a Reliable Supernet

The process begins by locally pruning every layer across multiple sparsity levels. Týr employs Taylor-informed saliency (first- and second-order) alongside backprop-free weight adjustment, applied progressively to minimise performance perturbations.

To ensure that different pruned variants remain mutually consistent, the framework introduces expectation-aware error accumulation, addressing the otherwise ambiguous error propagation that arises when multiple pruned copies coexist within a supernet.
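
The paper's exact first/second-order saliency and expectation-aware error formulas are not reproduced here. As a generic illustration of the Taylor-criterion family it builds on, the sketch below scores the output channels of one linear layer by the first-order term |w · dL/dw|; note that the paper's "backprop-free" trick applies to the weight adjustment step, while this toy version simply uses plain autograd.

```python
# Generic first-order Taylor saliency for structured pruning: the score of a
# channel approximates the loss change if that channel's weights are zeroed.
import torch
import torch.nn as nn

def channel_saliency(layer: nn.Linear, loss: torch.Tensor) -> torch.Tensor:
    """Per-output-channel first-order Taylor importance for one linear layer."""
    grad = torch.autograd.grad(loss, layer.weight, retain_graph=True)[0]
    return (grad * layer.weight).abs().sum(dim=1)  # one score per output row

# Toy usage: score channels of a small layer on a dummy regression loss.
torch.manual_seed(0)
layer = nn.Linear(64, 32)
x, y = torch.randn(8, 64), torch.randn(8, 32)
loss = ((layer(x) - y) ** 2).mean()
scores = channel_saliency(layer, loss)
keep = scores.argsort(descending=True)[:16]        # keep top 50% of channels
print("kept channels:", keep.tolist())
```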

Coarse-to-Fine Global Search

Once the supernet is established, Týr performs an evolutionary sparsity-shift search. Each mutation preserves the global sparsity budget—for example, making one layer slightly denser while another becomes equivalently sparser. Candidate models are evaluated using distillation-based similarity metrics over hidden activations and logits.

A naïve fine-grained search would be intractable: for an 80-sublayer model, even modest sparsity resolution would imply an astronomically large configuration space. Týr sidesteps this with an iterative coarse-to-fine strategy:

  • The search begins with a coarse sparsity interval (12.5%) and just nine candidates per layer.
  • After identifying a strong sparsity pattern, the search recentres and halves the interval (12.5% → 6.25% → 3.13% → 1.56%).
  • After four iterations, Týr reaches fine-grained sparsity resolution while keeping each iteration’s effective search space manageable.

This design steadily narrows the search, accelerates convergence, and efficiently uncovers the optimal global sparsity distribution.
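
A toy version of the budget-preserving mutation, with the evaluation and recentring machinery stripped away, might look like the following; all names are illustrative, not the paper's code.

```python
# Sketch of a "sparsity shift" mutation: one layer becomes one step sparser
# while another becomes equally denser, so mean (global) sparsity is fixed.
import random

def sparsity_shift(config: list[float], step: float = 0.125) -> list[float]:
    """Mutate a per-layer sparsity vector without changing its mean."""
    new = config[:]
    while True:
        i, j = random.sample(range(len(new)), 2)   # two distinct layers
        if new[i] + step <= 1.0 and new[j] - step >= 0.0:
            new[i] += step                         # layer i: one step sparser
            new[j] -= step                         # layer j: one step denser
            return new

random.seed(7)
cfg = [0.5] * 8                  # uniform 50% starting point, 8 toy layers
for _ in range(4):
    cfg = sparsity_shift(cfg)    # coarse 12.5% interval, as in round one
print(cfg, "mean sparsity:", sum(cfg) / len(cfg))  # mean stays at 0.5
```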

Results: Accuracy and efficiency on AMD hardware

Across models and benchmarks, Týr-the-Pruner consistently preserves near-dense accuracy while delivering meaningful efficiency gains on AMD Instinct MI250 accelerators.

At 50% sparsity, the method retains 96–97% average accuracy on 70B-scale models—outperforming structured pruning approaches such as SliceGPT, LLM-Pruner, and FLAP. On smaller models, the runtime benefits are equally compelling: for Llama-3.1-8B and Mistral-Nemo, pruning cuts time-to-first-token by up to 1.75× and boosts decode throughput by up to 1.38×.

These results position pruning as a first-class optimisation technique for large-scale LLM inference, particularly on modern accelerator architectures.

Practical Considerations: Memory and Search Efficiency

While supernets can be large, Týr keeps memory usage close to that of a single dense model by storing pruned substructures on disk and loading only the active subnet into high-bandwidth memory. Disk footprints remain manageable—around 40 GB for 7–8B models and ~415 GB for 70B models—with older artefacts cleaned up between iterations.

The evolutionary search itself is computationally efficient. Evaluations proceed under progressively increasing token budgets (2K → 16K → 128K), converging rapidly thanks to the coarse-to-fine schedule. For 8B-scale models, a single search iteration completes in a few hours, keeping overall runtime well within practical limits.

Summary

Týr-the-Pruner represents a shift in how global structural pruning is approached. By unifying pruning and sparsity allocation into a single, end-to-end search process—and combining it with expectation-aware error modelling and coarse-to-fine optimisation—the framework achieves both high accuracy retention and real-world inference acceleration.

With up to 50% parameter reduction and ~97% accuracy preserved on Llama-3.1-70B, Týr-the-Pruner demonstrates that global pruning can be both principled and practical—setting a new benchmark for structured pruning in the era of large-scale LLM deployment.


LEDs Manufactured Backwards

Reddit:Electronics - 5 hours 34 min ago

My college Electronics class final was to simply solder parts onto a pre-made circuit, and in my case it was an LED Christmas Tree. After soldering 36 TINY AS HELL LEDS, I tested it and there were no lights turning on…. Decided to test an extra LED and turns out the legs were manufactured with the long leg as the negative side and the short leg as the positive side. I’m so cooked

submitted by /u/Prior-Scheme-572

Wolfspeed produces single-crystal 300mm silicon carbide wafer

Semiconductor today - Tue, 01/13/2026 - 23:00
Wolfspeed Inc of Durham, NC, USA — which makes silicon carbide (SiC) materials and power semiconductor devices — has announced what it says is a significant industry milestone with the production of a single-crystal 300mm (12-inch) silicon carbide wafer. Backed by a silicon carbide IP portfolio comprising more than 2300 issued and pending patents worldwide, Wolfspeed is pioneering the transition to 300mm technology, establishing a path to future volume commercialization...

Altum RF renews ISO 9001:2015 certification

Semiconductor today - Tue, 01/13/2026 - 17:59
Altum RF (which designs RF, microwave and millimeter-wave semiconductors) has successfully renewed its ISO 9001:2015 certification. Valid through 2029, the firm says the renewal highlights its ongoing commitment to quality, reliability and excellence across its global operations, including its headquarters and design center in Eindhoven, Netherlands, and its design center in Sydney, Australia...

📱 Conference of the labour collective of Igor Sikorsky Kyiv Polytechnic Institute

News - Tue, 01/13/2026 - 16:15

On 26 January 2026, the conference of the labour collective of Igor Sikorsky Kyiv Polytechnic Institute will take place in the Academic Council meeting hall.

5 octave linear(ish)-in-pitch power VCO

EDN Network - Tue, 01/13/2026 - 15:00

A few months back, frequent DI contributor Nick Cornford showed us some clever circuits using the TDA7052A audio amplifier as a power oscillator. His designs also demonstrate the utility of the 7052’s nifty DC antilog gain control input:

Eventually, the temptation to have a go at using this tricky chip in a (sort of) similar venue became irresistible. So here it is. See Figure 1.

Figure 1 A2 feedback and TDA7052A’s antilog Vc gain control create a ~300-mW, 5-octave linear-in-pitch VCO. More or less…

The 5-V square wave from comparator A2 is AC-coupled by C1 and integrated by R1C2 to produce an (approximate) triangular waveshape on A1 pin 2. This is boosted by A1 by a gain factor of 0 dB to 30 dB (1x to 32x) according to the Vc gain control input to become complementary speaker drive signals on pins 5 and 8.

A2 compares the speaker signals to its own 5-V square wave to complete the oscillation-driven feedback loop thusly. Its 5-V square wave is summed with the inverted -1.7-Vpp A1 pin 8 signal, divided by 2 by the R2R3 divider, then compared to the noninverted +1.7-Vpp A1 pin 5 signal. The result is to force A2 to toggle at the peaks of the tri-wave when the tri-wave’s amplitude just touches 1.7 Vpp. This causes the triangle to promptly reverse direction. The action is sketched in Figure 2.

Figure 2 The signal at the A2+ (red) and A2- (green) inputs.

This results in (fairly) accurate regulation of the tri-wave’s amplitude at a constant 1.7 Vpp. But how does that allow Vcon to control oscillation frequency?

Here’s how.

The slope of the tri-wave on A1’s input pin 2 is fixed at 2.5 V/(R1C2), or 340 V/s. Therefore, the slopes of the tri-waves on A1 output pins 5 and 8 equal ±A1gain*340 V/s. This means the time required for those tri-waves to ramp through each 1.7-V half-cycle = 1.7 V/(A1gain*340 V/s) = 5 ms/A1gain.

Thus, the full cycle time = 2*(5 ms/A1gain) = 10 ms/A1gain, making Fosc = 100 Hz*A1gain.
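
Collecting the arithmetic above into one derivation (and anticipating Figure 4's antilog control law, G ≈ 5.66^Vc, which reproduces the 1x-to-32x gain span over the 0-to-2-V control range):

```latex
\begin{align*}
\text{input slope} &= \frac{2.5\,\mathrm{V}}{R_1 C_2} = 340\ \mathrm{V/s},\qquad
t_{\text{half}} = \frac{1.7\,\mathrm{V}}{G \cdot 340\ \mathrm{V/s}} = \frac{5\ \mathrm{ms}}{G},\\[4pt]
T &= 2\,t_{\text{half}} = \frac{10\ \mathrm{ms}}{G}
\quad\Longrightarrow\quad
F_{\text{osc}} = \frac{1}{T} = 100\ \mathrm{Hz}\cdot G,\\[4pt]
G &\approx 5.66^{\,V_c}\ \ (0 \le V_c \le 2\ \mathrm{V})
\quad\Longrightarrow\quad
F_{\text{osc}} \approx 100\ \mathrm{Hz}\ \text{to}\ 3.2\ \mathrm{kHz}.
\end{align*}
```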

A1 gain is controlled by the 0- to 2-V Vc input. The Vc input is internally biased to 1 V with a 14-kΩ equivalent impedance as illustrated in Figure 3.

Figure 3 R4 works with the 14 kΩ internal Vc bias to make a 5:1 voltage divider, converting 0 to 2 V into 1±0.2 V.

R4 works into this, making a 5:1 voltage division that converts the 0 to 2 V suggested Vc excursion to the 0.8 to 1.2 V range at pin 4. Figure 4 shows the 0dB to 30dB gain range this translates into.

Figure 4 Vc’s 0- to 2-V antilog gain control span programs A1 pin 4 from 0.8 V to 1.2 V for 1x to 32x gain and Fosc = 100 Hz*A1gain = 100 Hz*5.66^Vc = 100 Hz to 3200 Hz

The resulting balanced tri-wave output can make a satisfyingly loud ~300 mW warble into 8 Ω without sounding too obnoxiously raucous. A basic ~50-Ω rheostat in series with a speaker lead can, of course, make it more compatible with noise-sensitive environments. If you use this dodge, be sure to place the rheostat on the speaker side of the connections to A2. 

Meanwhile, note (no pun) that the 7052 data sheet makes no promises about tempco compensation nor any other provision for precision gain programming. So neither do I. Figure 1’s utility in precision applications (e.g., music synthesis) is therefore definitely dubious.

Just in case anyone’s wondering, R5 was an afterthought intended to establish an inverting DC feedback loop from output to input to promote initial oscillation startup. This being much preferable to a deafening (and embarrassing!) silence.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.



Swansea’s CISM to lead new UK Centre for Doctoral Training in semiconductor skills

Semiconductor today - Tue, 01/13/2026 - 14:33
Swansea University is to lead a major national initiative to address the UK’s semiconductor skills gap with the launch of a new Centre for Doctoral Training (CDT) in semiconductor skills...

Global Semiconductor Revenue Grew 21% in 2025, reports Gartner

ELE Times - Tue, 01/13/2026 - 14:08

Worldwide semiconductor revenue totalled $793 billion in 2025, an increase of 21% year-over-year (YoY), according to preliminary results by Gartner, Inc., a business and technology insights company.

“AI semiconductors, including processors, high-bandwidth memory (HBM), and networking components, continued to drive unprecedented growth in the semiconductor market, accounting for nearly one-third of total sales in 2025,” said Rajeev Rajput, Sr. Principal Analyst at Gartner. “This domination is set to rise as AI infrastructure spending is forecast to surpass $1.3 trillion in 2026.”

NVIDIA Strengthened its Lead While Intel Continued to Lose Share

Among the top 10 semiconductor vendors, the positions of five changed from 2024 (see Table 1).

  • NVIDIA extended its lead over Samsung to $53 billion in 2025. NVIDIA became the first vendor to cross $100 billion in semiconductor sales, contributing over 35% of industry growth in 2025.
  • Samsung Electronics retained the No. 2 spot. Samsung’s $73 billion semiconductor revenue was driven by memory (up 13%), while non-memory revenue dropped 8% YoY.
  • SK Hynix moved into the No. 3 position and totalled $61 billion in revenue in 2025. This is an increase of 37% YoY, fueled by strong demand for HBM in AI servers.
  • Intel continued to lose ground, ending the year at a 6% market share, half of what it was in 2021.

Table 1. Top 10 Semiconductor Vendors by Revenue, Worldwide, 2025 (Millions of U.S. Dollars)

2025 Rank | 2024 Rank | Vendor | 2025 Revenue | 2025 Market Share (%) | 2024 Revenue | 2025-2024 Growth (%)
1 | 1 | NVIDIA | 125,703 | 15.8 | 76,692 | 63.9
2 | 2 | Samsung Electronics | 72,544 | 9.1 | 65,697 | 10.4
3 | 4 | SK Hynix | 60,640 | 7.6 | 44,186 | 37.2
4 | 3 | Intel | 47,883 | 6.0 | 49,804 | -3.9
5 | 7 | Micron Technology | 41,487 | 5.2 | 27,619 | 50.2
6 | 5 | Qualcomm | 37,046 | 4.7 | 32,976 | 12.3
7 | 6 | Broadcom | 34,279 | 4.3 | 27,801 | 23.3
8 | 8 | AMD | 32,484 | 4.1 | 24,127 | 34.6
9 | 9 | Apple | 24,596 | 3.1 | 20,510 | 19.9
10 | 10 | MediaTek | 18,472 | 2.3 | 15,934 | 15.9
- | - | Others (outside top 10) | 298,315 | 37.6 | 270,536 | 10.3
- | - | Total Market | 793,449 | 100.0 | 655,882 | 21.0

Source: Gartner (January 2026)

The buildout of AI infrastructure is generating high demand for AI processors, HBM and networking chips. In 2025, HBM represented 23% of the DRAM market, surpassing $30 billion in sales while AI processors exceeded $200 billion in sales. AI semiconductors are set to represent over 50% of total semiconductor sales by 2029.


India aims to be among the major semiconductor hubs by 2032, says Union Minister Ashwini Vaishnaw

ELE Times - Tue, 01/13/2026 - 13:34

India has joined the global race to manufacture semiconductor chips domestically to grow into a major global supplier. Amidst this progress, Union Minister for Electronics and Information Technology Ashwini Vaishnaw outlined how the government is positioning India as a key global technology player.

The Minister informed that the semiconductor sector is expanding rapidly, driven by demand from artificial intelligence, electric vehicles, and consumer electronics. India has made an early start with approvals for 10 semiconductor-related units. Four plants – CG Semi, Kaynes Technology, Micron Technology, and Tata Electronics’ Assam facility – are expected to commence commercial production in 2026.

He also highlighted the visible progress on the design and talent fronts. Currently, design initiatives involve 23 startups, while skill development programmes have been scaled across 313 universities. The domestic landscape is being strengthened by equipment manufacturers who are simultaneously setting up plants in India.

According to Vaishnaw, by 2028 these efforts are bound to make India a force to be reckoned with in the global chip-making market. He said the period after 2028 would mark a decisive phase as industry growth reaches a tipping point. With manufacturing, design, and talent ecosystems in place, India aims to be among the major semiconductor hubs by 2032, including the capability to produce 3-nanometre chips, he added.

While addressing criticism that India’s AI growth is driven largely by global technology firms, Vaishnaw reiterated that sovereign AI remains a national goal. Indian engineers are working across all five layers of the AI stack – applications, models, chipsets, infrastructure, and energy. Twelve teams under the IndiaAI Mission are developing foundational models, several design teams are working on chipsets, and around $70 billion is being invested in infrastructure, supported by clean energy initiatives.

Subsequently, while responding to concerns on the utilisation of domestic OSAT and fabrication capacity, the minister said new industries inevitably face market-acceptance challenges. Success, he stated, will depend on the ability of Indian plants to deliver high-quality products at competitive prices.


Public–private partnership investing $450m in ATALCO’s alumina refinery and USA’s first large-scale primary gallium production

Semiconductor today - Tue, 01/13/2026 - 11:50
Atlantic Alumina Company LLC (ATALCO), the only operating alumina refinery in the USA, has announced a partnership with the United States Department of War (DOW) and Concord Resources Holdings Ltd, in conjunction with Concord’s majority shareholder (a fund managed by commodities investment firm Pinnacle Asset Management L.P.), to sustain and increase domestic alumina production and establish the USA’s first large-scale primary gallium production circuit in Gramercy, Louisiana...

Kyiv Polytechnic staff received awards from the Verkhovna Rada of Ukraine!

News - Tue, 01/13/2026 - 10:59

Oleksandr Korniienko, First Deputy Chairman of the Verkhovna Rada of Ukraine and Head of the Supervisory Board of Igor Sikorsky Kyiv Polytechnic Institute, presented certificates and commendations of the Verkhovna Rada of Ukraine to employees of our university, recognising their professional work, dedication to their cause, and service to Ukrainian society.

Enphase Energy starts shipping IQ9 Commercial Microinverters in USA

Semiconductor today - Mon, 01/12/2026 - 20:50
Enphase Energy Inc of Fremont, CA, USA (which supplies microinverter-based solar and battery systems) has begun production shipments of its IQ9N-3P Commercial Microinverter across the USA in late December. The product is manufactured in the USA to support domestic content requirements and Foreign Entity of Concern (FEOC) compliance for eligible commercial solar projects. This is Enphase’s first microinverter powered by gallium nitride (GaN) technology and designed for three-phase 480Y/277 V (wye) grid configurations, without using external transformers...

❄️ How not to freeze: 6 practical rules

News - Mon, 01/12/2026 - 19:58

If long air-raid alerts force us to stay in cold basements, parking garages, and building entrances, hypothermia sets in faster than we manage to feel it, especially as the temperature falls. The advice is also relevant when staying in rooms without heating.

Fundamentals in motion: Accelerometers demystified

EDN Network - Mon, 01/12/2026 - 15:40

Accelerometers turn motion into measurable signals. From tilt and vibration to g-forces, they underpin countless designs. In this “Fun with Fundamentals” entry, we demystify their operation and take a quick look at the practical side of moving from datasheet to design.

From free fall to felt force: Accelerometer basics

An accelerometer is a device that measures the acceleration of an object relative to an observer in free fall. What it records is proper acceleration—the acceleration actually experienced—rather than coordinate acceleration, which is defined with respect to a chosen coordinate system that may itself be accelerating. Put simply, an accelerometer captures the acceleration felt by people and objects, the deviation from free fall that makes gravity and motion perceptible.

An accelerometer—also referred to as accelerometer sensor or acceleration sensor—operates by sensing changes in motion through the displacement of an internal proof mass. At its core, it’s an electromechanical device that measures acceleration forces. These forces can be static, like the constant pull of gravity, or dynamic, caused by movement or vibrations.

When the device experiences acceleration, this mass shifts relative to its housing, and the movement is converted into electrical signals. These signals are measured along one, two, or three axes, enabling detection of direction, vibration, and orientation. Gravity also acts on the proof mass, allowing the sensor to register tilt and position.

The electrical output is then amplified, filtered, and processed by internal circuitry before reaching a control system or processor. Once conditioned, the signal provides electronic systems with accurate data to monitor motion, detect vibration, and respond to variations in speed or direction across real-world applications.

In a nutshell, a typical accelerometer uses an electromechanical sensor to detect acceleration by tracking the displacement of an internal proof mass. When the device experiences either static acceleration—such as the constant pull of gravity—or dynamic acceleration—such as vibration, shock, or sudden impact—the proof mass shifts relative to its housing.

This movement alters the sensor’s electrical characteristics, producing a signal that is then amplified, filtered, and processed. The conditioned output allows electronic systems to quantify motion, distinguish between steady forces and abrupt changes, and respond accurately to variations in speed, orientation, or vibration.
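One common way software makes that static-versus-dynamic distinction is a simple low-pass split. The sketch below is illustrative only (the 0.05 smoothing factor is an arbitrary assumption): an exponential moving average tracks the slow gravity/tilt component, and the residual captures vibration and shock.

def split_static_dynamic(samples, alpha=0.05):
    """Split accelerometer samples (in g) into a slow 'gravity/tilt'
    component and a fast 'vibration/shock' residual, using an
    exponential moving average as a crude low-pass filter."""
    gravity = samples[0]
    result = []
    for s in samples:
        gravity += alpha * (s - gravity)       # low-pass: steady forces
        result.append((gravity, s - gravity))  # residual: abrupt changes
    return result

# A steady 1 g reading with a brief shock at the fourth sample:
print(split_static_dynamic([1.0, 1.0, 1.0, 1.8, 1.0, 1.0]))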

Figure 1 Pencil rendering illustrates the suspended proof mass—the core sensing element—inside an accelerometer. Source: Author

The provided illustration hopefully serves as a useful conceptual model for an inertial accelerometer. It demonstrates the fundamental principle of inertial sensing, specifically showing how a suspended proof mass shifts in response to gravitational vectors and external acceleration. This mechanical displacement is the foundation for the capacitive or piezoresistive sensing used in modern MEMS devices to calculate precise changes in motion and orientation.

Accelerometer families and sensing principles

Moving to the common types of accelerometers, designs range from piezoelectric units that generate charge under mechanical stress—ideal for vibration and shock sensing but unable to register static acceleration—to piezoresistive devices that vary resistance with strain, enabling both static and low-frequency measurements.

Capacitive sensors detect proof-mass displacement through changing capacitance, a method that balances sensitivity with low power consumption and supports tilt and orientation detection. Triaxial versions extend these principles across three orthogonal axes, delivering full spatial motion data for navigation and vibration analysis.
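A first-order parallel-plate model shows why this method works so cleanly. In the sketch below (my simplification, with made-up nominal values), the proof mass sits between two fixed plates; displacing it by x makes one gap shrink and the other grow, and the normalized capacitance difference comes out exactly proportional to x.

def differential_capacitance(x, d=2e-6, c0=1e-12):
    """Differential capacitive pickoff for a proof mass displaced by x (m).
    d  -- nominal gap to each fixed plate (m)
    c0 -- nominal capacitance of each half at rest (F)
    Returns (C1, C2, ratio), where ratio = (C1 - C2)/(C1 + C2) = x/d."""
    c1 = c0 * d / (d - x)   # gap shrinks on one side...
    c2 = c0 * d / (d + x)   # ...and grows on the other
    return c1, c2, (c1 - c2) / (c1 + c2)

# A 0.2 um displacement in a 2 um gap yields a ratio of exactly 0.1:
print(differential_capacitance(0.2e-6))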

MEMS accelerometers, meanwhile, miniaturize these mechanisms into silicon-based structures, integrating low-power circuitry with high precision, and now dominate both consumer electronics and industrial monitoring.

It’s worth noting that some advanced accelerometers depart from the classic proof-mass model, adopting optical or thermal sensing techniques instead. In thermal designs, a heated bubble of gas shifts within the sensor cavity under acceleration, and its displacement is tracked to infer orientation.

A representative example is the Memsic 2125 dual-axis accelerometer, which applies this thermal principle to deliver compact, low-power motion data. According to its datasheet, the Memsic 2125 is a low-cost device capable of measuring tilt, collision, static and dynamic acceleration, rotation, and vibration, with a ±3 g range across two axes.

In practice, the core device—formally designated MXD2125 in Memsic datasheets and often referred to as Memsic 2125 in educational kits—employs a sealed gas chamber with a central heating element and four temperature sensors arranged around its perimeter. When the device is level, the heated gas pocket stabilizes at the chamber’s center, producing equal readings across all sensors.

Tilting or accelerating the device shifts the gas bubble toward specific sensors, creating measurable temperature differences. By comparing these values, the sensor resolves both static acceleration (gravity and tilt) and dynamic acceleration (motion such as vehicle travel). MXD2125 then translates the differential temperature data into pulse-duration signals, a format readily handled by microcontrollers for orientation and motion analysis.
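To show how little work that decoding takes, here is a sketch of the conversion (mine, not Memsic's code). It assumes the figures commonly quoted in the Parallax documentation—50% duty cycle at 0 g, roughly a 12.5% duty-cycle shift per g, and a nominal 10 ms period—so verify them against the actual datasheet before reuse.

def mx2125_g(t_high_us, t_period_us=10_000):
    """Convert one MXD2125-style PWM pulse into acceleration (g).
    t_high_us   -- measured high time of the pulse (microseconds)
    t_period_us -- full pulse period, nominally 10 ms (100 Hz)
    Duty cycle is 50% at 0 g and shifts about 12.5% per g."""
    duty = t_high_us / t_period_us
    return (duty - 0.5) / 0.125

# A 5,125 us high time in a 10 ms period decodes as +0.1 g:
print(mx2125_g(5_125))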

Figure 2 Memsic 2125 module hosts the 2125 chip on a breakout PCB, exposing all I/O pins. Source: Parallax Inc.

A side note: the Memsic 2125 dual-axis thermal accelerometer is now obsolete, yet it remains a valuable reference point. Its distinctive thermal bubble principle—tracking the displacement of heated gas rather than a suspended proof mass—illustrates an alternative sensing approach that broadened the taxonomy of accelerometer designs.

The device’s simple pulse-duration output made it accessible in educational kits and embedded projects, ensuring its continued presence in documentation and hobbyist literature. I include it here because it underscores the historical branching of accelerometer technology prior to MEMS capacitive adoption.

Turning to the true mechanical force-balance accelerometer, recall that the classic mechanical accelerometer—often called a G-meter—embodies the elegance of direct inertial transduction. These instruments convert acceleration into deflection through mass-spring dynamics, a principle that long predates MEMS yet remains instructive.

The force-balance variant advances this idea by applying active servo feedback to restore the proof mass to equilibrium, delivering improved linearity, bandwidth, and stability across wide operating ranges. From cockpit gauges to rugged industrial monitors, such designs underscore that precision can be achieved through mechanical transduction refined by servo electronics—rather than relying solely on silicon MEMS.
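A toy simulation makes the servo idea tangible. The sketch below is a deliberately simplified model with invented parameters, not the control loop of any real instrument: a PI controller forces the proof mass back to null, and at equilibrium the restoring force divided by the mass equals the applied acceleration.

def force_balance_readout(a_ext=9.81, steps=200_000, dt=1e-6):
    """Toy 1-D force-balance accelerometer with a PI restoring servo."""
    m, k, c = 1e-6, 1.0, 1e-3   # proof mass (kg), spring (N/m), damping
    kp, ki = 50.0, 2e5          # PI gains, chosen only for quick settling
    x = v = integ = f_fb = 0.0
    for _ in range(steps):
        integ += x * dt
        f_fb = kp * x + ki * integ              # servo force toward null
        a = a_ext - (k * x + c * v + f_fb) / m  # net proof-mass acceleration
        v += a * dt
        x += v * dt
    return f_fb / m   # with the mass held at null, this equals a_ext

print(force_balance_readout())  # converges to ~9.81

Because the integral term drives the residual displacement to zero, linearity no longer depends on the spring staying in its linear range, which is precisely the advantage described above.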

Figure 3 The LTFB-160 true mechanical force-balance accelerometer achieves high dynamic range and stability by restoring its proof mass with servo feedback. Source: Lunitek

From sensitivity to power: Key specs in accelerometer selection

When selecting an accelerometer, makers and engineers must weigh a spectrum of performance parameters. Sensitivity and measurement range balance fine motion detection against tolerance for shock or dynamic loads. Output type (analog vs. digital) shapes interface and signal conditioning requirements, while resolution defines the smallest detectable change in acceleration.

Frequency response governs usable bandwidth, ensuring capture of low-frequency tilt or high-frequency vibration. Equally important are power demands, which dictate suitability for battery-operated devices versus mains-powered systems; low-power sensors extend portable lifetimes, while higher-draw devices may be justified in precision or high-speed contexts.

Supporting specifications—such as noise density, linearity, cross-axis sensitivity, and temperature stability—further determine fidelity in real-world environments. Taken together, these criteria guide selection, ensuring the chosen accelerometer aligns with both design intent and operational constraints.
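One of these supporting specs is easy to turn into a number: noise density and bandwidth together set the usable resolution floor, as in this worked example (the 25 µg/√Hz figure is hypothetical, picked only for illustration).

import math

def rms_noise_g(noise_density_ug_rthz, bandwidth_hz):
    """RMS noise (in g) from a noise density (ug/sqrt(Hz)) and a
    first-order -3 dB bandwidth; the 1.57 factor converts that
    bandwidth into an equivalent noise bandwidth."""
    return noise_density_ug_rthz * 1e-6 * math.sqrt(1.57 * bandwidth_hz)

# 25 ug/sqrt(Hz) over 100 Hz gives ~313 ug RMS, so signals much
# below ~0.3 mg would be buried in the sensor's own noise:
print(rms_noise_g(25, 100))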

Accelerometers in action: Translating fundamentals into real-world designs

Although they hide significant internal complexity, accelerometers are well within reach of hobbyists and makers. Prewired, readily available modules such as the ADXL345, MPU6050, or LIS3DH simplify breadboard experiments and enable quick through-hole prototypes, while high-precision analog sensors like the ADXL1002 open the door to advanced industrial vibration analysis.
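As a concrete starting point, here is a minimal read-out sketch for one of those modules (my example, not vendor code). It assumes an ADXL345 breakout wired to a Raspberry Pi on I2C bus 1 at the default 0x53 address, with the smbus2 package installed; adjust for your own wiring.

import time
from smbus2 import SMBus

ADDR = 0x53                     # ADXL345 default address (ALT pin grounded)
POWER_CTL, DATAX0 = 0x2D, 0x32  # register addresses from the datasheet

with SMBus(1) as bus:
    bus.write_byte_data(ADDR, POWER_CTL, 0x08)  # set the Measure bit
    time.sleep(0.1)
    for _ in range(10):
        raw = bus.read_i2c_block_data(ADDR, DATAX0, 6)
        # Six bytes: little-endian, two's-complement X/Y/Z pairs,
        # about 3.9 mg/LSB at the +/-2 g default range.
        x, y, z = (
            int.from_bytes(raw[i:i + 2], "little", signed=True) * 0.0039
            for i in range(0, 6, 2)
        )
        print(f"x={x:+.2f} g  y={y:+.2f} g  z={z:+.2f} g")
        time.sleep(0.5)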

Now it’s your turn—take your next step from fundamentals to practical applications, from handhelds and wearables to vehicles and machines, and onward into robotics, drones, and predictive maintenance systems. Beyond engineering labs, accelerometers are already shaping households, medical devices, agricultural practices, security systems, and even structural monitoring, quietly embedding motion awareness into the fabric of everyday life.

So, pick up a module, wire it to your breadboard, and let motion sensing spark your next prototype—because accelerometers are waiting to translate your ideas into action.

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.


The post Fundamentals in motion: Accelerometers demystified appeared first on EDN.

A failed switch in a wall plate = A garbage disposal that no longer masticates

EDN Network - Mon, 01/12/2026 - 15:00

How do single-pole wall switches work, and how can they fail? Read on for all the details.

Speaking of misbehaving power toggles, a few weeks back (as I’m writing this in mid-December), the kitchen wall switch that controls power going to our garbage disposal started flaking out. Flipping it to the “on” position sometimes still worked, as had reliably been the case previously, but other times didn’t.

Over only a few days’ time, the percentage of garbage disposal power-on failures increased to near-100%, although I found I could still coax it to fire up if I then pressed down firmly on the center of the switch. Clearly, it was time to visit the local Home Depot and buy-then-install a replacement. And then, because I’d never taken a wall switch apart before, it was teardown education time for me, using the original failed unit as my dissection candidate!

Diagnosing in the dark

As background, our home was originally built in the mid-1980s. We’re the third owners; we’ve never tried to track down the folks who originally built it, and who may or may not still be alive, but the second owner is definitely deceased. So, there’s really nobody we can turn to for answers to any residential electrical, plumbing, or other questions we have; we’re on our own.

Some of the wall switches scattered throughout the house are the traditional “toggle” style:

But many of them are the more modern decorator “rocker” design:

For example, here’s a Leviton Decora (which the company started selling way back in 1973, I learned while researching this piece) dual single-pole switch cluster in one of the bathrooms:

It looks just like the two-switch cluster originally in the kitchen, although you’ll have to take my word on this as I didn’t think to snap a photo until after replacing the misbehaving switch there.

In the cabinet underneath the sink is a dual AC outlet set. The bottom outlet is always “hot” and powers the dishwasher to the left of the sink. The top outlet (the one we particularly care about today) connects to the garbage disposal’s power cord and is controlled by the aforementioned wall switch. I also learned when visiting the circuit breaker box prior to doing the switch swap that the garbage disposal has its own dedicated breaker and electricity feed (which, it turns out, is a recommended and common approach).

A beefier successor

Even prior to removing the wall plate and extracting the failed switch, I had a sneaking suspicion it was a standard ~15A model like the one next to it, which controls the light above the sink. I theorized that this power handling spec shortcoming might explain its eventual failure, so I selected a heavier-duty 20A successor. Here’s the new switch’s packaging, beginning with the front panel (as usual, and as with successive photos, accompanied by a 0.75″/19.1 mm diameter U.S. penny for size comparison purposes). Note the claimed “Light Almond” color, which would seemingly match the two-switch cluster color you saw earlier. Hold that thought:

And here are the remainder of the box sides:

Installation instructions were printed on the inside of the box.

The only slight (and surprising) complication was that (as with the original) while the line and load connections were both still on one side, with ground on the other, the connection sides were swapped versus the original switch. After a bit of colorful language, I managed. Voila:

The remaining original switch on the left, again controlling the above-sink light, is “Light Almond” (or at least something close to that tint). The new one on the right, however, is not “Light Almond” as claimed (and no, I didn’t think to take a full set of photos before installing it, either; this is all I’ve got). And yes, I twitch inside every time I notice the disparity. Eventually, I’ll yank it back out of the wall and return it for a correct-color replacement. But for now, it works, and I’d like to take a break from further colorful language (or worse), so I just grin and bear it.

Analyzing an antique

As for the original, now-malfunctioning right-side switch, on the other hand…plenty of photos of that. Let’s start with some overview shots:

As I’d suspected, this was a conventional 15A-spec’d switch (at first, I’d thought it said 5A but the leading “1” is there, just faintly stamped):

Backside next:

Those two screws originally mounted the switch to the box that surrounded it. The replacement switch came with a brand-new set that I used for re-installation purposes instead:

Another set of marking closeups:

And now for the right side:

I have no clue what the brown goo deposited at the top is, nor do I want to know, let alone take any responsibility for it. Did I mention that we’re the third owners, and that this switch dated from the original construction 40+ years and two owners ago?

I’m guessing maybe this is what happens when you turn on the garbage disposal with hands still wet and sudsy from hand-washing dishes (or maybe those are food remnants)? Regardless, the goop seemingly didn’t seep down to the switch contacts, so although I originally suspected otherwise, I eventually concluded that it likely wasn’t the failure’s root cause.

The bottom’s thankfully more pristine:

Those upper and lower metal tabs, it turns out, are our pathway inside. Bend ‘em out:

And the rear black plastic piece pulls away straightaway:

Here’s a basic wall switch functional primer, as I’ve gathered from research on conceptually similar (albeit differing-implementation) Leviton Decora units dissected by others, along with my own potentially flawed hypothesizing; reader feedback is, as always, welcomed in the comments!

The front spring-augmented assembly, with the spring there to hold it in place in one of two possible positions, fits into the grooves of the larger of the two metal pieces in the rear assembly. Line current routes from the screw attached to the larger lower rear-assembly piece and to the front assembly through that same spring-assisted metal-to-metal press-together. And when the switch is in the “on” position, the current then further passes on to the smaller rear-assembly piece, and from there onward to the load via the other attached screw.

Snap, crackle, and pop

However, you’ve undoubtedly already noticed the significant degradation of the contact at the end of the front assembly, which you’ll see more clearly shortly. And if you peer inside the rear assembly, there’s similar degradation at the smaller “load” metal piece’s contact, too:

Let’s take a closer look; the two metal pieces pull right out of the black plastic surroundings:

Now for a couple of closeups of the smaller, degraded-contact piece (yes, that’s a piece of single-sided transparent adhesive tape holding the penny upright and in place!):

Zap

Next, let’s look at what it originally mated with when the toggle was in the “on” position:

Jeepers:

Another black plastic plate also thankfully detached absent any drama:

And where did all the scorched metal that got burned off both contacts end up? Coating the remainder of the assembly, that’s where, most of it toward the bottom (gravity, don’cha know):

Including all over the back of the switch plate itself, along with the surrounding frame:

Our garbage disposal is a 3/4 HP InSinkErator Badger 5XP, with a specified current draw of 9.5A. Note, however, that this is documented as an “average load” rating; the surge current at motor turn-on, for example, is likely much higher, and it isn’t tamed by any start capacitors inside the appliance, which would themselves be charging up for the first time in such a scenario. (By contrast, the dishwasher next to it, a Kenmore 66513409N410, specs 8.1A of “total current”, again presumably average, of which 1.2A is pulled by the motor.) So, given that this was only a 15A switch, I’m surprised it lasted as long as it did. Agree or disagree, readers? Share your thoughts on this and anything else that caught your attention in the comments!
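For what it’s worth, a back-of-envelope check supports that surprise (my arithmetic, not anything in the appliance docs; the 6x multiplier is a generic induction-motor rule of thumb, not an InSinkErator spec):

def inrush_estimate(run_current_a=9.5, multiplier=6.0):
    """Rough motor-start inrush: run current times a typical multiplier."""
    return run_current_a * multiplier

# ~57 A of brief inrush across 15 A-rated contacts, several times a
# day, is a plausible recipe for the contact arcing seen here:
print(inrush_estimate())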

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.


The post A failed switch in a wall plate = A garbage disposal that no longer masticates appeared first on EDN.
