Feed aggregator

Surface Mount Reflow Ovens

Reddit:Electronics - 2 hours 7 min ago
Surface Mount Reflow Ovens

Food Ninja turned reflow oven! My first board in 15 years went great, other than my bad designs! The attempt at building a 6-channel sonar didn't go so great… it worked in air but not in water.

submitted by /u/TunaRado

Automotive silicon in the era of AI, functional safety, and cybersecurity

EDN Network - 5 hours 48 min ago

Automotive silicon design is entering a phase where functional safety, cybersecurity and artificial intelligence (AI) can no longer be treated as separate concerns. In connected, software-defined vehicles, safety outcomes depend not only on protection against random hardware faults, but also on resilience to malicious interference and software vulnerabilities. As a result, many of the decisions that determine system safety are now made at the silicon architecture level.

When ISO 26262 was first published in 2011, it marked a major step forward in structuring functional safety for automotive electronics. But the vehicles being designed today are fundamentally different. Autonomous driving, electrification, AI-based perception, vehicle-to-everything (V2X) connectivity, and centralized compute architectures were not primary considerations at the time.

The core objective remains unchanged: to avoid hazards to people. However, the way this objective is achieved is now deeply tied to how safety is architected into semiconductor devices.

Functional safety is no longer just a system-level concern; it’s a design-time challenge for ASIC and SoC engineers. For many safety-critical functions, whether ISO 26262 targets can be met depends on decisions made in the earliest stages of silicon architecture.

A growing and converging standards landscape

The industry has responded to new challenges by expanding the safety and security framework. ISO 26262:2018 addresses functional safety in road vehicles, while ISO 21448 (SOTIF) considers hazards arising from insufficient or incorrect system behavior. ISO/PAS 8800:2024 begins to address the safety implications of AI-based systems.

Alongside these, ISO/SAE 21434 introduces requirements for automotive cybersecurity, and platform-level schemes such as PSA Certified, while not automotive-specific, are shaping expectations for secure-by-design silicon, roots of trust, and independently evaluated security assurance.

In practice, these frameworks cannot be applied in isolation. Safety and cybersecurity requirements must be interpreted together and traced into silicon architecture, verification strategies, and ultimately the safety case. This convergence increases complexity, but it also reflects the reality of modern automotive systems: safety now depends on both fault tolerance and system integrity.

Figure 1 Functional safety is now a silicon architecture problem that must be addressed alongside cybersecurity and AI from the earliest stages of design. Source: EnSilica

Safety is implemented in silicon

In today’s vehicles, many critical safety mechanisms are implemented directly in hardware. Fault detection, redundancy schemes, error correction, watchdogs, and safe-state control are embedded within ASICs and SoCs. Typical techniques include lockstep CPU architectures for execution monitoring, ECC-protected memories to detect and correct bit errors, and dedicated safety islands that supervise system health and enforce safe-state transitions.
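To make the ECC idea concrete, the single-error-correcting Hamming code that ECC-protected memories build on can be sketched in a few lines. This is a textbook Hamming(7,4) code, purely illustrative and not any particular vendor's implementation:

```python
# Hamming(7,4): 4 data bits protected by 3 parity bits.
# A non-zero syndrome identifies the flipped bit position, which
# is then corrected in place -- the essence of ECC memory.

def encode(d):                      # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):                      # c: list of 7 code bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # checks positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # checks positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # checks positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3 # non-zero => bit position of the error
    c = list(c)
    if syndrome:
        c[syndrome - 1] ^= 1        # correct the flipped bit
    return [c[2], c[4], c[5], c[6]], syndrome
```

A single bit flip anywhere in the 7-bit codeword is detected and corrected; hardware ECC applies the same principle to wider words with SECDED (single-error-correct, double-error-detect) codes.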

These mechanisms are responsible for ensuring that faults are either corrected or managed in a way that prevents hazardous behavior. Increasingly, they must also be robust against unintended interactions and deliberate manipulation, not just random faults.

This creates a fundamental shift. Functional safety is no longer something that can be added at the system level; it must be designed into silicon architecture from the outset. Decisions around redundancy affect area and cost. Diagnostic features influence power consumption and performance. Detection latency must be balanced against system constraints. These trade-offs are often made before the full system context is completely defined.

At the same time, safety mechanisms are only effective if the system enforcing them remains trustworthy. Ensuring that trust is now a core architectural concern.

Cybersecurity as a determinant of safety

Cybersecurity is no longer adjacent to functional safety—it’s a determinant of it. A system that meets ASIL targets for random faults may still be unsafe if it can be compromised through software, interfaces, or update mechanisms. In connected vehicles, a maliciously induced fault can have the same or greater impact than a hardware failure.

At the silicon level, this translates into requirements for hardware roots of trust, secure boot, run-time integrity checking, and domain isolation. These mechanisms ensure that only authenticated software can control safety-critical functions and that faults or compromises in non-critical domains cannot propagate into safety paths.
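The control flow of such a boot-time authenticity check can be sketched as follows. Real secure boot verifies an RSA or ECDSA signature against a key anchored in the hardware root of trust; this simplified hash-based version (all names illustrative) only shows the enforcement logic:

```python
# Simplified secure-boot gate: a digest of the authorized firmware is
# assumed to be fused into immutable storage (the root of trust).
# Production implementations check an asymmetric signature instead of
# a bare hash, but the pass/fail decision flow is the same.
import hashlib

TRUSTED_DIGEST = hashlib.sha256(b"firmware-v1.2").hexdigest()  # fused at manufacture

def boot(firmware_image: bytes) -> str:
    digest = hashlib.sha256(firmware_image).hexdigest()
    if digest != TRUSTED_DIGEST:
        return "safe-state"   # refuse to hand control to unauthenticated code
    return "run"              # only authenticated software controls safety paths
```

Any tampering with the image changes the digest, so the device falls back to its safe state rather than executing compromised code.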

From a design perspective, this expands the traditional fault model. In addition to random hardware failures, engineers must now consider adversarial conditions such as fault injection attacks, privilege escalation, and corrupted firmware. Safety architectures must be capable of detecting, containing, and responding to both types of failure.

The limits of the V-model in silicon development

ISO 26262 promotes the V-model as a structured development approach, moving from requirements to implementation and back through verification. While this provides a useful framework, it does not always reflect how safety-critical ASICs are developed in practice.

Silicon design requires early decisions that cut across the V-model structure. Process technology selection, architectural partitioning, testability, and diagnostic coverage must all be considered at a very early stage. These decisions directly influence safety mechanisms and compliance with ASIL requirements.

In reality, ASIC development is highly iterative, moving between architecture, implementation constraints, and verification. The goal is not strict adherence to a linear process, but maintaining traceability, safety intent, and configuration control throughout the design cycle.

Traditional safety analysis is under pressure

Safety analysis methods such as failure modes and effects analysis (FMEA) and fault tree analysis (FTA) remain foundational. However, their application at the ASIC level is becoming increasingly challenging.

Modern automotive SoCs integrate CPUs, AI accelerators, high-speed interfaces, and complex interconnect structures on a single device. Applying traditional analysis techniques at this scale is difficult, often requiring abstraction that introduces uncertainty.

As complexity increases, the question is no longer whether analysis has been performed, but whether it’s sufficient to capture all relevant failure modes, particularly when both accidental faults and adversarial conditions must be considered.

Toward simulation-driven safety verification

To address these challenges, the industry is moving toward more dynamic, simulation-driven approaches. Fault simulation, long used in semiconductor tests, is increasingly applied in a functional safety context.

Instead of simply identifying faults, the focus shifts to system response. When a fault is injected, engineers must determine whether it is detected, whether it is corrected, and whether the system transitions to a safe state within the required time.

This approach integrates safety analysis with design verification and provides more concrete evidence that safety mechanisms operate correctly under realistic conditions. Increasingly, safety metrics such as the single-point fault metric (SPFM) and latent fault metric (LFM) can be supported by fault-injection and simulation-based evidence, alongside analytical safety analysis.
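A toy fault-injection campaign illustrates the principle: flip one bit at a time in a protected word, record whether the safety mechanism reacts, and derive a diagnostic-coverage figure in the spirit of SPFM. Real campaigns run against gate-level netlists, so this Python sketch is purely illustrative:

```python
# Inject single-bit faults into a parity-protected register and
# measure how many the safety mechanism (the parity check) detects.
import random

def parity_protected_write(value):     # store value plus its parity bit
    return value, bin(value).count("1") % 2

def fault_detected(value, parity):     # parity check = safety mechanism
    return bin(value).count("1") % 2 != parity

random.seed(0)
detected = total = 0
for _ in range(1000):
    value, parity = parity_protected_write(random.getrandbits(8))
    value ^= 1 << random.randrange(8)  # inject a single-bit fault
    total += 1
    detected += fault_detected(value, parity)

coverage = detected / total            # parity catches every single-bit flip
```

Parity achieves 100% coverage of single-bit faults but zero coverage of double-bit faults, which is exactly the kind of gap a real campaign exposes and stronger mechanisms such as ECC close.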

Figure 2 The fault injection verification flow demonstrates how the design contains, detects, and corrects faults. Source: EnSilica

AI moves the challenge further into silicon

AI introduces both new risks and new opportunities for functional safety. On the hardware side, AI workloads are implemented in dedicated accelerators within automotive SoCs, further shifting safety responsibility into silicon.

Designers must consider how these accelerators behave under fault conditions and how their outputs are monitored and validated. On the system side, AI raises fundamental challenges around verification. Unlike deterministic logic, AI systems exhibit probabilistic behavior influenced by data and operating conditions.

AI also reinforces the convergence between safety and security. Ensuring the integrity of inputs, models and execution becomes critical, as corrupted data or manipulated models can lead directly to hazardous behavior.

Memory safety and system integrity

One emerging approach to improving robustness is the use of hardware-enforced memory safety. Capability-based architectures, such as CHERI, provide fine-grained control over memory access, reducing the likelihood that software defects or exploitable vulnerabilities propagate into safety-critical behavior.
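The capability concept can be modeled in a few lines: a pointer that carries its own bounds and permissions, checked on every access. Class and field names here are illustrative, not the CHERI ISA encoding:

```python
# Toy model of a CHERI-style capability: every load/store is checked
# against the capability's bounds and permissions, so a stray pointer
# cannot reach memory outside its authorized region.

class Capability:
    def __init__(self, base, length, writable):
        self.base, self.length, self.writable = base, length, writable

class CapabilityFault(Exception):
    pass

def load(mem, cap, offset):
    if not (0 <= offset < cap.length):          # bounds enforced in hardware
        raise CapabilityFault("out-of-bounds load")
    return mem[cap.base + offset]

def store(mem, cap, offset, value):
    if not cap.writable:
        raise CapabilityFault("write via read-only capability")
    if not (0 <= offset < cap.length):
        raise CapabilityFault("out-of-bounds store")
    mem[cap.base + offset] = value
```

In CHERI hardware these checks are unforgeable and run on every access at full speed; the sketch only conveys why a buffer overrun becomes a contained fault rather than silent corruption.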

By mitigating broad classes of memory-corruption vulnerabilities at the hardware level, these techniques contribute to both system integrity and functional safety, particularly in complex software-defined environments.

Designing for long-term security

Automotive systems are expected to operate reliably over long lifetimes, often exceeding a decade. This introduces additional challenges for cybersecurity.

Cryptographic mechanisms that are secure today may not remain so over the lifetime of the vehicle. As a result, there is growing interest in cryptographic agility and support for post-quantum cryptography (PQC), particularly for secure boot, firmware updates, and vehicle communications.

These considerations further reinforce the need to treat security as a foundational aspect of silicon design, rather than a feature added later in the development process.

However, the automotive industry does not need to abandon existing safety standards; instead, it must adapt how they are applied in the context of semiconductor design. Take, for instance, functional safety, which is no longer just a system integration challenge. It’s a silicon architecture problem that must be addressed alongside cybersecurity and AI from the earliest stages of design.

At the silicon level, the distinction between safety and security is becoming increasingly artificial. Safety mechanisms must operate correctly in the presence of both accidental faults and malicious interference. This requires a unified architectural approach, where safety, security and system integrity are designed, verified, and validated together.

As vehicles become more intelligent, connected and autonomous, the role of custom silicon in delivering safe operation will only grow. The standards still matter, but increasingly, it’s silicon that determines whether those standards can be met in practice.

Enrique Martinez-Asensio is functional safety manager at EnSilica. He has more than 35 years of experience in the semiconductor industry, having worked on mixed-signal IC design and technical support and management in several semiconductor companies.

Related Content

The post Automotive silicon in the era of AI, functional safety, and cybersecurity appeared first on EDN.

STMicroelectronics Launches Next-Generation Ultralow-Power Image Sensors

ELE Times - 5 hours 59 min ago

STMicroelectronics, a global semiconductor leader serving customers across the spectrum of electronics applications, introduces a new generation of ultralow-power global-shutter image sensors that delivers high-quality, always-on vision to compact devices operating on batteries or harvested energy. The VD55G4 (monochrome) and VD65G4 (RGB colour) sensors, part of the ST BrightSense portfolio, are now available to early adopters, enabling customers to start designing their next generation of smart, ultralow-power vision devices today.

The new sensors serve applications including wearables, AR/VR and XR headsets, smart home appliances and medical devices. They deliver rich visual context and AI-ready data under tight constraints on power, size, and cost. The sensors combine an ultralow-power detect-and-wake architecture with a very small global-shutter optical format and interfaces optimised for low-power microcontrollers and cost-effective systems-on-chips (SoCs).

“Always‑on vision is becoming essential for the next generation of personal electronics, from smart glasses and AR/VR headsets to intelligent home appliances and medical devices. With VD55G4 and VD65G4, we are bringing this capability to smaller, lighter products that must run for a longer period on a tiny battery. These new sensors help our customers create more intuitive and responsive experiences, extend battery life, and bring embedded vision and edge AI into everyday devices,” said Alexandre Balmefrezol, Executive Vice President and General Manager of the Imaging Sub-Group at STMicroelectronics.

From wearables and AR/VR to smart appliances

VD55G4 and VD65G4 bring always‑on vision to products that must stay small, light, and extremely power-efficient. Building on the ST BrightSense family, they add a colour option, faster response for interactive use cases, and simple connectivity to low‑power microcontrollers, making it easier to add vision to space‑ and cost‑constrained designs.

In wearables, the sensors enable all‑day, always‑aware features such as glance detection, presence sensing, and contextual alerts, while fitting into very compact designs and working directly with microcontroller‑based platforms. For AR/VR and XR headsets, they combine low power and high‑quality capture to support accurate tracking and spatial awareness, helping extend battery life without compromising comfort.

In smart home appliances, IoT devices, and medical products, the sensors allow more intelligence to run locally on the device itself, reducing cloud dependence and standby power. Their tiny size and energy efficiency also make them well-suited to solar‑ or energy‑harvesting‑powered vision nodes.

Ultralow‑power design consumes up to 10x less power

VD55G4 and VD65G4 consume up to 10 times less power than conventional global‑shutter sensors. They watch for changes in a scene and wake the main processor only when needed, shifting from continuous streaming to event‑driven operation. This enables all‑day, always‑on experiences, longer battery life, and practical vision systems powered by small batteries or energy harvesting. The small footprint with integrated image processing simplifies design and reduces system cost, while supporting responsive AI‑ready vision features in a wide range of edge devices.
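A back-of-the-envelope duty-cycle model shows why event-driven operation saves so much energy. The power figures below are illustrative assumptions, not ST datasheet values:

```python
# Average power of a detect-and-wake sensor: it idles at a low watch
# power and streams at full power only during brief wake events.

def average_power_mw(p_watch_mw, p_stream_mw, events_per_hour, event_s):
    duty = events_per_hour * event_s / 3600.0   # fraction of time streaming
    return p_watch_mw * (1 - duty) + p_stream_mw * duty

always_on   = average_power_mw(20.0, 20.0, 0, 0)    # continuous streaming
event_drive = average_power_mw(0.5, 20.0, 12, 5)    # wake 12x/hour for 5 s each
```

With these assumed numbers, event-driven operation averages well under 1 mW against 20 mW for continuous streaming, which is the mechanism behind the "up to 10x less power" class of claims.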

Growing design ecosystem

The VD55G4 (monochrome) and VD65G4 (RGB colour) image sensors are manufactured on 300 mm wafers using a 3D‑stacked 65 nm / 40 nm architecture, with an in-house process and manufacturing at ST's Crolles plant.

ST is also offering the full companion ecosystem with multiple tools and resources, including:

  • Development boards for platforms such as STM32 and Raspberry Pi
  • Turnkey camera modules
  • Evaluation software, platform drivers
  • A software development kit to accelerate embedded vision projects

The post STMicroelectronics Launches Next-Generation Ultralow-Power Image Sensors appeared first on ELE Times.

I built a fully self-powered computer in actual credit-card size (~1mm thick)

Reddit:Electronics - Thu, 05/14/2026 - 22:06
I built a fully self-powered computer in actual credit-card size (~1mm thick)

For years, devices like the Raspberry Pi have been described as “credit-card sized”.

And of course the message is rather the footprint, but at some point I became obsessed with taking that idea one step further:

What would it take to build something that is literally sized like a credit card?

I get the feeling that questions aren't very popular here, but I hope this rhetorical one is okay :P

That question slowly escalated into months of experiments to find solutions where the default methods won't work: I couldn't use large, rigid components or connectors, and I had to find a way to make my own custom flex PCB.

And after months of tinkering, I made the first prototype. It's fragile, but it works and stays within the goal of not exceeding 1 millimeter. Somehow, news pages have picked this up and described it as "revolutionary", which is a bit far-fetched, but I feel flattered 🤭

To be fair, 'computer' might be a bit of an overstatement, but it's technically well within the definition of one. If you have a suitable word for it that sounds cool, feel free to suggest it ^^

The prototype includes:

  • ESP32-C3FH4 w/ WiFi & BLE
  • NFC read/write
  • 1.54" 200*200 E-Paper display
  • ultra-thin LiPo battery including charging circuit and power path management
  • accelerometer

Finding small/thin enough components wasn't really the main challenge; mechanical stability was. Solder and general material fatigue, pressure distribution (particularly focused pressure), and other strain-related issues were the real problem.

This doesn't even include battery protection and some other things to solve.

At this scale, the project turned into a weird mix of electrical, mechanical and chemical engineering.

A few things that became clear over time:

  • preventing strain is much easier than surviving strain
  • tiny real-world tolerances start dominating the entire design near the physical limit
  • many “thin enough” components stop being thin enough once assembly is considered
  • FPC connectors were basically a non-starter at this thickness, forcing me to get creative and solder each individual wire to each 0.5 mm-pitch pad one by one.

The prototype is fully self-powered and running from its internal battery.

I documented a large part of the engineering process, including the process of etching my own flexPCB, on my GitHub repo.

And yes, it's not like this thickness is a necessity, going just 0.5mm thicker would probably have saved me months of engineering. This entire project was probably motivated way too much by the 'disbelief' factor 😄

I'm curious about your thoughts on this! :)

submitted by /u/krauseler

A meeting with university alumni who are now members of parliament

Новини - Thu, 05/14/2026 - 17:36
A meeting with university alumni who are now members of parliament

The economy of the future and AI technologies as tools of a new era: the KPI community met with university alumni and members of the Ukrainian parliament Dmytro Kysylevskyi and Oleksandr Marikovskyi.

PrivatBank, digital banking, and strategic leadership: the KPI community meets Mikael Björknert

Новини - Thu, 05/14/2026 - 17:32
PrivatBank, digital banking, and strategic leadership: the KPI community meets Mikael Björknert

🎓 Igor Sikorsky Kyiv Polytechnic Institute hosted an open lecture by Mikael Björknert, chairman of the board of JSC CB PrivatBank. Today PrivatBank is one of the leaders of Ukraine's banking sector, setting trends in digital transformation and the development of innovative services, and it is one of the university's reliable financial partners.

Guerrilla RF’s high-power GaN HEMT models now available in Modelithics COMPLETE Library

Semiconductor today - Thu, 05/14/2026 - 17:09
Modelithics Inc of Tampa, FL, USA, which provides RF and microwave simulation models for electronic design automation (EDA), says that new high-power gallium nitride (GaN) models from Guerrilla RF Inc (GRF) of Greensboro, NC, USA — which provides radio-frequency integrated circuits (RFICs) and monolithic microwave integrated circuits (MMICs) for wireless applications — have been added to the Modelithics COMPLETE Library. These new nonlinear models were developed through Guerrilla RF’s collaboration with Modelithics via the Sponsoring MVP (Modelithics Vendor Partner) Program...

CSA Catapult a core technology partner providing SiC power module for project SONATA

Semiconductor today - Thu, 05/14/2026 - 17:00
The UK’s Compound Semiconductor Applications (CSA) Catapult says that it is a core technology partner in project SONATA, which is funded by the ATI Programme, to develop an on-aircraft electric taxi system with regenerative braking and energy recovery...

How emerging robotics standards will shape next-gen automation

EDN Network - Thu, 05/14/2026 - 16:00

Walk into any modern fulfillment center or high‑precision inspection site and the pattern is unmistakable: robots are becoming smarter, more autonomous, and more deeply embedded in daily operations. They navigate cluttered aisles, collaborate with people, and execute tasks that once required years of human experience.

Yet behind the impressive demos and AI‑powered autonomy lies a quieter, more stubborn truth. The frameworks governing how these robots behave, communicate, and integrate with the rest of the factory are still playing catch‑up.

For years, robotics innovation has moved faster than the standards meant to ensure safety, reliability, and interoperability. That was manageable when robots lived in structured, predictable environments. But now that they’re entering aircraft wing boxes, nuclear vessels, medical labs, and public spaces, the gap is no longer sustainable.

The industry is reaching a point where the convergence of ISO/TC 299 and ASME Model‑Based Enterprise (MBE) frameworks is becoming essential. Together, they are laying the foundation for the next decade of automation.

Through my work in robotics and engineering standards, I’ve seen how the absence of a unified digital thread slows down certification, complicates integration, and turns validation into a guessing game. The industry is ready for a shift, and these standards are the mechanism for that shift.

Synergy: Behavior meets mechanical truth

In robotics, reliability is a marriage of autonomous behavior and physical reality. You cannot have one without the other. The relationship is best understood through a simple metaphor: a driver and a map.

ISO/TC 299 is the driver’s manual. It defines how a robot should behave when a human enters its workspace, how collaborative systems maintain predictable safety envelopes, and how mobile fleets negotiate shared space. These behavioral expectations create consistency across vendors and applications, which is critical as multi‑robot systems become the norm.

ASME MBE, particularly ASME Y14.41, is the map. It provides machine-readable geometry, tolerances, and load paths that tell the robot what the world looks like and how its own structure behaves under stress. It is the robot’s mechanical truth, which is the foundation for accurate motion planning, stiffness modeling, and digital twin fidelity.

When these two systems operate independently, problems emerge. A robot may follow every safety rule perfectly, but if it doesn’t understand its own deflection under load, it can still “safely” drill a hole in the wrong place. I’ve seen this disconnect repeatedly in real deployments: behavior and mechanical truth treated as separate concerns, even though they collide on every project.

The future of robotics depends on eliminating this separation.

Standards in action: Solving the validation gap

Consider a high‑precision assembly task inside a brownfield environment. A long‑reach robot is working in an aircraft hangar where the temperature rises throughout the day. The robot plans its path using a static CAD model, unaware that its arm has expanded by a millimeter due to thermal drift. In a traditional setup, the robot executes the plan anyway, and the error shows up only at inspection, often too late to avoid rework.

In a standard-integrated environment, the workflow looks very different. The robot pulls its geometry and stiffness information from an ASME Y14.41 model, uses ISO/TC 299 to manage safe behavior when a human enters the cell, and continuously adjusts its trajectory by comparing sensor feedback with its digital thread. The result is a sub‑millimeter accurate operation that remains safe and reliable even as conditions change.

This is not hypothetical. In aerospace and energy applications, thermal drift, compliance, and load‑path uncertainty are among the most common sources of failure. Standards give robots the context they need to correct these issues in real time.
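The thermal-drift correction described above reduces to the linear expansion relation ΔL = L·α·ΔT applied before motion planning. The sketch below uses a generic aluminium coefficient as an assumption, not a value pulled from any ASME Y14.41 model:

```python
# Compensate a long-reach arm's effective length for thermal expansion
# before planning motion: dL = L * alpha * dT.

ALPHA_ALUMINIUM = 23e-6          # 1/degC, typical linear expansion coefficient

def compensated_length_m(nominal_m, temp_c, calib_temp_c=20.0):
    return nominal_m * (1 + ALPHA_ALUMINIUM * (temp_c - calib_temp_c))

# A 3 m arm after a 15 degC rise has grown by roughly a millimeter:
drift_mm = (compensated_length_m(3.0, 35.0) - 3.0) * 1000
```

This matches the scenario in the text: about 1 mm of growth on a long-reach arm over a warm afternoon, enough to miss a precision tolerance unless the planner uses the compensated geometry.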

A similar story plays out in dynamic warehouses. Mobile robots constantly encounter shifting pallets, narrowing aisles, and unpredictable human movement. ISO/TC 299 governs how they yield, reroute, and negotiate shared space. ASME MBE ensures that the robot’s internal map reflects real geometry rather than outdated floor plans. When a pallet is slightly misaligned, the robot doesn’t just detect it, it understands how that misalignment affects its own kinematics and load stability. This combination prevents collisions, downtime, and cascading errors that can shut down an entire facility.

The economic advantage: Eliminating the hidden tax

Beyond the engineering benefits, there is a major economic argument for this convergence. Today, companies pay a hidden tax in the form of custom integration. Every robot vendor uses a different data model, forcing end‑users to build expensive bridges between incompatible systems. These one‑off integrations accumulate over time, creating brittle automation ecosystems that are difficult to scale and nearly impossible to maintain.

When ISO governs behavior and ASME governs data, robots become vendor‑agnostic. A new robot can be dropped into an existing digital thread, and it will immediately understand the factory’s geometry, safety rules, and tolerances. Deployment times shrink from months to days. Total cost of ownership drops because automation no longer forms isolated islands that require constant reinvention.

In my experience, the companies that adopt standards early see the benefits almost immediately: fewer integration failures, faster certification cycles, and a more predictable automation roadmap. Standards don’t slow innovation; they accelerate it by removing friction.

The era of deterministic robotics

The last decade of robotics was defined by intelligence in AI, perception, and autonomy. The next decade will be defined by determinism. Robots will need to be predictable, traceable, and grounded in mechanical truth. The convergence of ISO/TC 299 and ASME MBE is pushing the industry toward systems that are not just automated, but self‑aware and self‑correcting.

From what I’ve seen in industry, the organizations that embrace this convergence early will be the ones shaping the next era of automation. As robots expand into more complex and safety‑critical environments, this integrated framework will influence the future of robotics as much as any breakthrough in neural networks.

The companies that act now will define the next generation of automation and the standards that make it possible.

Santosh Yadav is a hardware development engineer at Amazon Robotics and an IEEE Senior Member. His work focuses on the intersection of mechanical reliability and standardized automation frameworks.

Special Section: Smart Factory

The post How emerging robotics standards will shape next-gen automation appeared first on EDN.

Backup batteries and supercaps: Geriatric hardware traps

EDN Network - Thu, 05/14/2026 - 15:00

Batteries eventually die, whether due to excessive recharge cycles, deep-discharge, or other factors. Capacitors, in contrast, often don’t hold charge long enough. What’s an engineer to do?

When the Pentax Q compact digital camera that I told you about last month showed up at my front door, I excitedly opened the packaging, tossed the battery on the charger to rejuvenate it, then slotted the battery inside the camera, bayonet-mounted a lens to the body, pressed the power button to turn the Q on, and…was immediately prompted to enter the current time and date, along with the desired format for the latter.

Not a huge surprise, at least at first. The Pentax Q is nearly a decade-and-a-half old at this point, after all. I figured that the camera had been sitting around without a primary battery in it—or maybe that primary battery had just drained—with draining of the backup battery (commonly referred to as the CMOS battery in functionally-equivalent PC settings storage terminology) following in short order.

So, after thoroughly testing the camera to make sure it was otherwise operating properly, I popped the primary battery back out to top it off again, then slotted it back in the camera body and let everything sit overnight to recharge the backup battery.

Drained brains

The next day, I turned the Pentax Q back on and…once again was immediately prompted to enter the current time and date, along with the desired format for the latter. What the heck? I hit Google and learned that mine wasn’t remotely a unique issue. Unsurprisingly, in retrospect, the embedded battery had exceeded its maximum recharge-cycle count and/or had experienced deep-discharge degradation from which it was unable to recover. CR2032 cells on PCBs are admittedly prone to suffer similar fates, with one key difference: they’re much easier to access and replace.

I trust you’ll sympathize with my reluctance to disassemble my photographic antique and attempt similar surgery on it. Yes?

At the end of the day, of course, this is a First World problem, the latest in an admitted long list that I’ve shared with all of you over the years. Time, date, location and directly related settings are the only ones that don’t survive primary-cell separations and drains; all the others (including the all-important user interface language setting, critical for someone who’s fond of Asian-sourced electronics but can’t visually-or-otherwise understand any Asian dialects) are generally stored in nonvolatile memory instead of battery-backed SRAM (because, speaking of cycles, these other settings change comparatively infrequently and are therefore unlikely to hit the max-rewrite cycle count of the EEPROM, flash memory or other technology housing them).

Imperfect workarounds and alternatives

If I could just remember to plug my primary battery-housing camera in to recharge every once in a while, I might also dodge the flaky-backup-battery issue that way…except that the Pentax Q can’t be recharged over its proprietary USB-derived connector. And anyway, I avoid recharging primary batteries in situ whenever possible, in case the cells were to swell and permanently embed themselves in the battery compartment. And speaking of ejecting batteries, I’ve found at least two other hacks:

  • Have a spare fully-charged battery sitting nearby, and pop out and replace the drained battery with it really quickly (the backup battery’s charge storage capability apparently isn’t completely neutered, only severely compromised)
  • Or just hit “cancel” after seeing the initial-settings screen to skip past it…with the obvious downside that subsequently logged date and time info will be (quite) incorrect!

What about so-called “supercapacitors” (aka, ultracapacitors) as an implementation alternative to conventional backup batteries?

The obvious key advantage here is that they support near-infinite recharge cycle counts. They also have comparatively high output power density (translation: high output current, although this attribute isn’t necessary in the application we’re discussing today) and there’s also no worry about the overheating-induced thermal runaway (translation: heat, smoke, flame) to which batteries are prone to varying degrees depending on implementation-technology specifics.

Nothing’s perfect in real life

Alas, there are also downsides. Although, per my earlier high-output-power-density comment, you can drain ‘em really fast, they also self-discharge comparatively fast (in weeks, versus months or even years for batteries), even in the absence of a load…which kind of defeats the purpose of using them for long-shelf-life settings-backup purposes, yes? Plus, as the above image suggests, they tend to have poor storage density. Translation: they’re huge in both linear dimensions and volume compared to a battery of equivalent capacity.
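The retention trade-off can be put in numbers with the basic capacitor discharge relation t = C·ΔV/I. All component values below are illustrative assumptions, not figures from any particular design:

```python
# How long can a supercapacitor hold settings alive? Time for the cap
# to sag from its charged voltage to the minimum the RTC/SRAM tolerates,
# given the combined load and self-discharge current.

def retention_days(cap_f, v0, v_min, drain_ua):
    seconds = cap_f * (v0 - v_min) / (drain_ua * 1e-6)
    return seconds / 86400.0

# Assumed: a 1 F cap charged to 3.3 V, usable down to 2.0 V, with a
# 2 uA combined RTC + leakage drain:
days = retention_days(1.0, 3.3, 2.0, 2.0)
```

Under these assumptions the supercap lasts on the order of a week, which squares with the "weeks, not months or years" self-discharge behavior noted above; a coin cell with tens of mAh at a similar drain lasts years.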

Comparative size isn’t so much of a problem in a volume-rich application such as a desktop computer or server. In a camera, a laptop computer, or any other diminutive device, it’s more likely to make a supercapacitor a non-starter. Alas, as I alluded to earlier, those same compact form factors are also more likely to be difficult-to-impossible to disassemble in order to do a backup battery swap, so…🤷‍♂️

In closing, I freely admit that I’m not a power electronics expert. That’d be my predecessor. So, I’ll stop pontificating at this point and pass the keyboard over to you.

What characteristics make a backup battery the obvious choice for a design, and conversely lead you to definitively select a supercapacitor instead? How do you decide between the two when the differentiation is more muted? How have any recent implementation innovations in either/both product categories evolved your thinking in this regard? And are there other technologies besides these two that your readers and I should also consider?

Let me know your thoughts in the comments, please!

Brian Dipert is the associate editor, as well as a contributing editor, at EDN.

Related Content

The post Backup batteries and supercaps: Geriatric hardware traps appeared first on EDN.

‼️ NMT-2027 is closer than it seems. Start your preparation now!

News - Thu, 05/14/2026 - 15:00

We look forward to welcoming attendees to our preparatory courses. This is an opportunity to develop your learning abilities, catch up on missed material, close gaps in your knowledge, or prepare for the exam.

ams OSRAM sells CMOS Image Sensor business to indie for €40m

Semiconductor today - Thu, 05/14/2026 - 11:04
ams OSRAM AG of Premstaetten, Austria, and Munich, Germany, has sold its CMOS Image Sensor business to automotive semiconductor and software platform provider indie Semiconductor Inc of Aliso Viejo, CA, USA for €40m, comprising €35m in cash and a €5m seller’s note payable after two years...

Tower signs customer contracts for $1.3bn silicon photonics revenue for 2027

Semiconductor today - Thu, 05/14/2026 - 10:49
Specialty analog foundry Tower Semiconductor Ltd of Migdal Haemek, Israel, has signed silicon photonics (SiPho) contracts with its largest customers for $1.3bn in 2027 revenue, and has received $290m in customer prepayments for capacity reservation. This initial commitment is further reinforced by an even larger contractual wafer commitment for 2028, for which additional associated prepayments are due by January 2027...

Measuring G: The ultimate metrology challenge?

EDN Network - Thu, 05/14/2026 - 10:05

Four fundamental forces of nature—gravity, electromagnetism, strong nuclear force, and weak nuclear force—govern all known physical interactions in the universe. Of these four, gravity is the one with which we are all personally familiar, as we deal with it in our daily routine. These forces, along with the other defining constants of the International System of Units (SI), form the foundation of much of modern science and engineering (Figure 1).

Figure 1 This wallet card displays the fundamental constants and other physical values that will define a revised international system of units. Source: NIST

A good semi-technical, highly readable overview of the development of metrology, the people who made it happen, and its role in civilization and the industrial and technology revolution is the book “Beyond Measure: The Hidden History of Measurement from Cubits to Quantum Constants” by James Vincent (Figure 2).

Figure 2 This enjoyable book provides great insight into the hard-fought efforts of metrologists over the centuries, even if they were not called that. Source: W. W. Norton

The gravitational constant, informally dubbed “big G”, determines the strength of the attraction between two masses anywhere in the universe. It’s approximately 6.67 × 10⁻¹¹ meters³/kilogram-second². It is, of course, associated with Isaac Newton’s brilliant insight and law of universal gravitation, published in 1687, which states that every particle in the universe attracts every other particle with a force directly proportional to the product of their masses and inversely proportional to the square of the distance between them (Figure 3).

Figure 3 We can thank Isaac Newton for this simple equation that quantifies all-pervasive yet mysterious gravity. Source: Wikipedia

This “big G” is distinct from “little g”, which describes the acceleration that an object experiences due to the gravitational pull of a large mass, such as Earth, and it varies from location to location. For instance, the value of little g is approximately 9.8 meters/second² at Earth’s surface but only 1.62 meters/second² on the Moon.
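The relationship between the two is simple enough to check numerically: little g follows from big G via g = G·M/R². A quick sketch using published values for the mass and radius of Earth and the Moon:

```python
# "Little g" follows from "big G" and a body's mass and radius: g = G*M/R^2.
# The constants below are standard published values.

G = 6.674e-11        # m^3 / (kg * s^2), the gravitational constant
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m, mean radius
M_MOON = 7.342e22    # kg
R_MOON = 1.7374e6    # m, mean radius

def little_g(mass_kg, radius_m):
    """Surface gravitational acceleration of a spherical body."""
    return G * mass_kg / radius_m**2

print(f"Earth: {little_g(M_EARTH, R_EARTH):.2f} m/s^2")  # ~9.8
print(f"Moon:  {little_g(M_MOON, R_MOON):.2f} m/s^2")    # ~1.62
```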

The first implicit measurement is attributed to Henry Cavendish in a 1798 experiment with an accuracy of about 1%, which is impressive considering the year and available tools and technology. Yet, while other fundamental physical constants are known to 6 or more digits of confidence, measurement of this oldest-known force to comparable precision has eluded physicists, and it’s known with confidence only to about 4 digits.

While a better value for G wouldn’t affect the lives of most people or projects, there are some cases for which it would be needed, and it’s also a part of the broader science “quest”.

Why is G so hard to measure? There are three main reasons:

  • Gravity is the weakest of the four fundamental forces of physics; for comparison, it’s approximately 10³⁸ times weaker than the strongest force.
  • The masses used in the experiment must fit inside a relatively small, constrained space of the experimental lab, and small masses generate small gravitational forces.
  • Since every object exerts a gravitational force, it’s extremely challenging to make sure the force you measure in the laboratory really comes from the intended mass.
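The second point is easy to illustrate with Newton’s law itself. For two arbitrarily chosen 1-kg masses 10 cm apart (illustrative values, not the masses used in any actual experiment), the attraction is nanonewton-scale:

```python
# How weak is lab-scale gravity? Newton's law, F = G*m1*m2/r^2,
# evaluated for two 1 kg masses 10 cm apart (illustrative values).

G = 6.674e-11  # m^3 / (kg * s^2)

def grav_force(m1_kg, m2_kg, r_m):
    """Gravitational attraction between two point masses."""
    return G * m1_kg * m2_kg / r_m**2

f = grav_force(1.0, 1.0, 0.10)
print(f"{f:.2e} N")  # ~6.7e-9 N, a few nanonewtons
```

A force this small is easily swamped by the pull of nearby furniture, people, or groundwater, which is exactly why isolating the intended mass’s contribution is so hard.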

What was the next step?

Trying to determine G to higher accuracy has been an ongoing project for many institutions and researchers. I recently came across a news report on the National Institute of Standards and Technology (NIST) website summarizing the 10-year quest led by physicist Stephan Schlamminger to improve that measurement (“NIST Weighs In on the Mystery of the Gravitational Constant”).

His team’s strategy was to painstakingly replicate a precision experiment conducted by the International Bureau of Weights and Measures (BIPM) in Sèvres, France, in 2007, which provides the value of G in use now. To do this, the team not only improved the precision of the physical parts of the experimental setup but also dived deeper to more fully identify sources of error, and then either reduce them or work out ways to have them self-cancel.

The basic arrangement is very simple and begins with a torsion balance similar to the one used by Cavendish (Figure 4). He placed two lead balls on opposite ends of a wooden beam horizontally suspended at its center by a thin wire. Nearby, he positioned two much heavier masses, suspended separately.

Figure 4 The traditional Cavendish experiment for measuring the strength of gravity was a torsion balance with an “optical” readout. Source: NASA

The gravitational attraction between the smaller and heavier masses caused the wooden beam to rotate, twisting the wire until the torque it exerted counterbalanced the gravitational force. The motion of the wooden beam, measured with a mirror and light pointer, indicated the value of big G.

The Schlamminger team upgraded to eight cylindrical metal masses. Four of the cylinders sat on a rotating carousel, resembling four candlesticks in an old-fashioned chandelier. The other four smaller masses were placed inside the carousel, on a disk suspended by a ribbon of copper-beryllium about the thickness of a human hair.

They then added a modern-day “twist” not available to Cavendish: applying a voltage to electrodes placed alongside each of the inner masses (Figure 5). These voltages created an electrostatic torque that twisted the wire in a direction opposite to the gravitationally induced torque. By carefully setting a voltage that exactly counterbalanced and nulled the gravitational torque, the researchers prevented the torsion balance from rotating. The magnitude of the voltage provided another estimate of big G.

Figure 5 The latest version of the torsion balance is loosely based on the Cavendish design, but adds advanced features, including electrostatic torque nullification. Source: NASA

Of course, the actual unit is larger and much more sophisticated (Figure 6).

Figure 6 The NIST version of the Cavendish torsion balance bears little resemblance in actual implementation. Source: NASA

What’s the result?

The Committee on Data of the International Science Council, or CODATA, issues recommended values of fundamental physical constants. Its recommended numerical value for big G is a four-digit number with a measurement uncertainty of 22 parts per million (ppm). To put this in perspective, a watch that runs 22 ppm slow would measure the year as 12 minutes too long.
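The watch analogy is easy to verify with a one-line calculation:

```python
# Sanity check of the 22 ppm analogy: how far off is a clock
# that runs 22 parts per million slow after one year?

SECONDS_PER_YEAR = 365.25 * 24 * 3600
error_minutes = 22e-6 * SECONDS_PER_YEAR / 60
print(f"{error_minutes:.1f} minutes")  # ~11.6, i.e., about 12 minutes
```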

So, to cut to the chase: How did the team do? Bluntly, not as well as they would have liked. In fact, their answer differed significantly from the BIPM number (which would be OK if it were “more correct”), but it had greater uncertainty as well. It was 0.0235% lower than the result that the researchers had attempted to replicate and is at odds with the CODATA figure.

I won’t try to summarize the project, as it has so many nuances and details. Fortunately, the published Metrologia paper titled “Redetermination of the gravitational constant with the BIPM torsion balance at NIST” is not a dry, academic-style presentation of the project. Instead, it’s a fascinating, highly readable 30-page recounting of a story that begins with an overview of the history of G measurement, then goes on to review the project step by step, covering the rationales for each step; the improbable, possible, and likely sources of error; the dilemmas they addressed; the qualitative issues as well as the quantitative analysis; and much more.

You could almost say it could be the basis for a scripted TV show or even a movie, perhaps not as dramatic as the 2023 Oppenheimer but still a “grabber.” As a very nice consideration for the readers, the paper begins with a full list of acronyms and abbreviations, which I wish all papers would do. It even includes a group photo of the 40+ team participants.

There’s one other interesting aspect of the project that I have very rarely seen. Schlamminger worried that he might unconsciously skew his measurement so that it agreed with the value of G that researchers found in the French experiment. To satisfy his own meticulous standards, he asked a colleague to scramble the data.

To accomplish this, a colleague in NIST’s Mass and Force Group multiplied each source-mass value by an unknown factor (1 + r), with r ∈ [−1 × 10⁻³, +1 × 10⁻³], and stored the value of r in a sealed envelope, hidden from Schlamminger until the work was complete. This random offset applied to the masses served to “blind” Schlamminger to the actual measurement he was taking.
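For illustration only, here’s a minimal sketch of that blinding scheme in code. The mass values and the seeded random generator are hypothetical stand-ins; the actual procedure used a sealed hardcopy envelope, not software:

```python
# Minimal sketch of data blinding: scale every source-mass value by a
# secret factor (1 + r), r drawn uniformly from [-1e-3, +1e-3], and
# withhold r until all analysis is complete. Hypothetical values only.
import random

def blind(values, seed):
    """Return blinded copies of values plus the secret offset r."""
    rng = random.Random(seed)
    r = rng.uniform(-1e-3, 1e-3)   # the secret blinding offset
    return [v * (1 + r) for v in values], r

masses_kg = [11.9, 11.9, 11.9, 11.9]       # hypothetical source masses
blinded, secret_r = blind(masses_kg, seed=42)

# Only after the analysis is frozen is the "envelope" opened and
# the blinding factor divided back out:
unblinded = [v / (1 + secret_r) for v in blinded]
print(max(abs(a - b) for a, b in zip(unblinded, masses_kg)))  # rounding noise
```

Because the analyst never sees r, any unconscious steering of intermediate results toward a “known” answer operates on numbers that are deliberately, secretly wrong, which is the whole point of the technique.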

By employing that strategy, Schlamminger would not know the actual value of big G that his team had measured. The envelope with the secret number was unsealed on a conference stage at the July 2024 Conference on Precision Electromagnetic Measurements (CPEM), and Schlamminger and his team finally found out the somewhat disappointing real results of their work.

Related Content

The post Measuring G: The ultimate metrology challenge? appeared first on EDN.

First dash prototype is done

Reddit:Electronics - Thu, 05/14/2026 - 02:10
First dash prototype is done

Finally got a working prototype for my car's instrument panel project. Just running a test script for now to make sure everything works at the same time.

We've got the gauges, warning lights, and LCDs to display the mileage.

More updates will come as hardware is added and the actual code is written. GitHub link for anyone interested

submitted by /u/redravin12
[link] [comments]

Kyiv Polytechnic joins the GIZ ReWarm project to seek solutions for modernizing heat-supply systems

News - Wed, 05/13/2026 - 23:37

🔥 Igor Sikorsky Kyiv Polytechnic Institute and the project “Reforming the District Heating Sector in Ukraine” / ReWarm have signed a Memorandum of Understanding.

KPI vice-rector at the “Mil Tech Innovations” forum

News - Wed, 05/13/2026 - 23:33

📌 Igor Sikorsky Kyiv Polytechnic Institute took part in the “Mil Tech Innovations” forum, organized by the Ukrainian Council of Armorers together with the 7th Rapid Response Corps of the Air Assault Forces.

Workshop “The Human Factor of Barrier-Free Accessibility: The Psychology of Perceiving Diversity”

News - Wed, 05/13/2026 - 23:27

The Faculty of Sociology and Law (FSP) of Igor Sikorsky Kyiv Polytechnic Institute hosted the workshop “The Human Factor of Barrier-Free Accessibility: The Psychology of Perceiving Diversity”. ⚖️ The event was dedicated to protecting the rights of persons with disabilities, the right to dignity, and the development of barrier-free accessibility in public administration.

Specialists of the World Data Center for Geoinformatics and Sustainable Development took part in the second practical workshop of the “She Is a Deminer” project

News - Wed, 05/13/2026 - 23:05

The Information-Analytical Situation Center of Igor Sikorsky Kyiv Polytechnic Institute has successfully completed another stage of training female specialists within the large-scale “She Is a Deminer” («Вона демінерка») program. This practical workshop was held for the second time, attesting to the effectiveness of the chosen training methodology and the critical importance of preparing qualified personnel for the mine-action sector.

Mux switches deliver wide-bandwidth signal paths

EDN Network - Wed, 05/13/2026 - 19:56

A pair of 2:1 multiplexer/1:2 demultiplexer switches from Toshiba support PCIe 6.0 and USB4 Version 2.0 interfaces with bandwidths up to 34 GHz. The TDS5C212MX and TDS5B212MX are designed for reliable switching of high-speed differential signals in servers, industrial testers, robots, and PCs.

Manufactured using Toshiba’s TarfSOI (Toshiba advanced RF SOI) process, the TDS5C212MX and TDS5B212MX achieve typical differential 3-dB bandwidths of 34 GHz and 29 GHz, respectively. These wide bandwidths help suppress signal waveform distortion and improve reliability in high-speed data transmission.

The switches differ in their pin layouts. The TDS5C212MX minimizes signal path length to reduce reflections and losses, improving high-speed signal integrity. The TDS5B212MX retains the same pin layout as conventional products. Both devices operate over a temperature range of -40°C to +125°C and are now shipping.

TDS5C212MX product page 

TDS5B212MX product page

Toshiba Electronic Devices & Storage 

The post Mux switches deliver wide-bandwidth signal paths appeared first on EDN.


Subscribe to the Кафедра Електронної Інженерії (Department of Electronic Engineering) aggregator