Feed Collector
AlixLabs closes €15m Series A with strategic investment from Stephen Industries
AlixLabs and VDL ETG Projects announce MoU for industrialization of APS patterning
The Research and Testing Center "Nadiinist" ("Reliability") Turns 30
For more than 30 years now, the Research and Testing Center "Nadiinist" has been working fruitfully at KPI as an important part of the infrastructure of the National Technical University of Ukraine "Igor Sikorsky Kyiv Polytechnic Institute".
Silanna UV adds TO-39 flat-window package to SF1 and SN3 series of UV-C LEDs
EEVblog 1744 - NEW Micsig DP700 High Voltage Differential Probe
TIFU by connecting a car battery to my computer USB lines due to my bad PCB design
Pictured is the offender, my custom 84V 480A brushed DC motor driver. While testing, I had to make some adjustments to the rev1 routing, since apparently I forgot to run DRC before sending it to the fab. Tried to change the logic power supply to the FET drivers from 12V to 5V, forgot to cut one trace, and ended up bridging 5V to 12V. I used a lead-acid battery instead of a current-limited power supply for testing, connected it to my laptop without a USB isolator, and... well, I no longer have a laptop. I wonder how I'll explain to my professors why I won't be able to submit my paper draft that is due tonight.
New Opportunities for Future Thermal Power Engineers
The national energy sector is going through its hardest times in the last eighty years. The very concept of supplying the country with electric and thermal energy is changing. New modern technologies are coming to the fore: energy generation (including distributed generation) and transmission, energy efficiency, storage, and more.
8 Wi-Fi security guidelines issued by Wireless Broadband Alliance

The Wireless Broadband Alliance (WBA) has released guidelines to strengthen security, privacy, and trust across Wi-Fi networks. These guidelines help organizations reduce exposure to common Wi-Fi threats, improve user trust, and simplify interoperability across networks and partners.
The guidelines also address the growing need for carrier-grade security that aligns with user expectations.
- Prevent connections to rogue and fake networks
Wi-Fi devices must validate network certificates before sharing credentials by using 802.1X and Extensible Authentication Protocol (EAP). That ensures users connect only to legitimate networks, significantly reducing the risk of evil-twin and rogue access point (AP) attacks.
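As a concrete sketch of what this validation looks like on a client (not a configuration prescribed by the WBA document), a wpa_supplicant network block can pin the CA used to verify the network's RADIUS server before any credential is sent. The SSID, identities, hostname, and file paths below are placeholders:

```
network={
    ssid="corp-wifi"                          # placeholder SSID
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="user@example.com"               # placeholder identity
    anonymous_identity="anonymous@example.com"
    ca_cert="/etc/ssl/certs/radius-ca.pem"    # pin the server CA
    domain_suffix_match="radius.example.com"  # reject other servers' certs
    phase2="auth=MSCHAPV2"
}
```

Without `ca_cert` and `domain_suffix_match`, a client will happily complete EAP against an evil-twin AP presenting any certificate.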
- Protect data over the air
Data traffic confidentiality and integrity can be ensured by enforcing WPA2/WPA3-Enterprise with Advanced Encryption Standard (AES) and Protected Management Frames (PMF). That prevents passive sniffing, de-authentication attacks, and many man-in-the-middle techniques, bringing Wi-Fi security closer to cellular-grade protection.
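On the AP side, a hostapd fragment illustrates how these requirements map onto common access-point software; the exact option set is an assumption about the deployment, not text from the guidelines:

```
# Enforce AES-only enterprise ciphering with mandatory PMF.
wpa=2
wpa_key_mgmt=WPA-EAP-SHA256
rsn_pairwise=CCMP    # AES-CCMP only; no TKIP fallback
ieee80211w=2         # Protected Management Frames required, not optional
```

Setting `ieee80211w=2` (rather than 1, "optional") is what actually blocks forged de-authentication frames.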
- Preserve user identity privacy without breaking compliance
Balance privacy and traceability by using anonymous identities, encrypted inner identities, pseudonyms, and chargeable-user-identity (CUI). That protects personally identifiable information during authentication while still enabling lawful intercept, billing, and incident handling when required.
- Secure credentials end-to-end
Credentials should be protected throughout their lifecycle, from device to network to backend systems: secure OS key stores on devices, hardened credential storage in identity provider systems, and tamper-resistant SIMs and USIMs for mobile credentials. Together, these reduce the risk of large-scale credential theft.
- Harden the entire access network
Security extends beyond the radio link. Physical security of access points and controllers, encrypted AP-to-controller links, secure backhaul design, and local breakout architectures ensure that data traffic remains protected across the full network path.
- Secure AAA and roaming signaling
This guideline recognizes that the control plane is often overlooked, so it strongly recommends RADIUS over TLS or DTLS for all AAA and roaming exchanges. That protects authentication and accounting traffic from interception or manipulation, aligning with OpenRoaming and WRIX requirements.
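One common way to front an existing AAA deployment with RADIUS-over-TLS (RadSec, RFC 6614) is radsecproxy; the sketch below is illustrative, with placeholder hostnames and certificate paths:

```
# Forward RADIUS exchanges to the roaming hub over TLS instead of
# cleartext UDP.
tls default {
    CACertificateFile   /etc/radsecproxy/ca.pem
    CertificateFile     /etc/radsecproxy/client.pem
    CertificateKeyFile  /etc/radsecproxy/client.key
}

server roaming-hub.example.net {
    type tls
    secret radsec    # fixed shared secret defined by RFC 6614
}
```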
- Add layer-2 protections against lateral attacks
Layer-2 traffic inspection, client isolation, proxy ARP, and multicast and broadcast controls are employed to limit damage even if a malicious device connects and thus reduce client-to-client attacks such as ARP spoofing and broadcast abuse.
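On hostapd-based APs, two options cover the isolation and ARP parts of this guideline (an illustrative mapping, not wording from the WBA document):

```
# Keep associated clients from reaching each other at layer 2, and
# answer ARP on clients' behalf instead of flooding it.
ap_isolate=1    # drop direct client-to-client frames
proxy_arp=1     # AP answers ARP, limiting spoofing and broadcast abuse
```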
- Enforce security through federation and governance
Security is reinforced not only technically but operationally through OpenRoaming and the WRIX legal framework. As a result, security requirements, responsibilities, and privacy obligations can be consistently enforced across operators, identity providers, and hubs.
Related Content
- Securing a wireless network–The basics
- How to achieve better IoT security in Wi-Fi modules
- How to make 802.11 systems combine security with affordability
- 10 things to consider when securing an embedded 802.11 Wi-Fi device
The post 8 Wi-Fi security guidelines issued by Wireless Broadband Alliance appeared first on EDN.
UK Semiconductor Centre launches London HQ to support rapid sector growth
EPC releases 5kW GaN 3-phase inverters for robotics and light EVs
Double-duty current loop transmitter

Tracking down cable cuts, whether caused by rodents or otherwise, and differentiating them from normal open circuits, is critical. Evolving the circuit design for expanded functionality makes it even more valuable.
It’s just part of the job. Every design engineer learns early (if not so happily) about the inevitable necessity of detecting, confronting, and swatting “bugs” in circuitry.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In a recent Design Idea, frequent contributor Jayapal Ramalingam extends this art of circuit defect detection and deletion from dealing with mere insects to coping with something much more formidable: rats!
With so many rodents and creatures around the plant, a cable cut can happen at any time
The cables being subjected to those toothy threats transport signals from field contacts monitoring pressure, temperature, valve position, limit switches, manual operator inputs, etc., to process control systems. The possible consequences of mistaking an undetected cable break for an open contact range from the merely inconvenient to the catastrophic. An example of the latter might be a critical valve that’s actually open but erroneously read as closed—viz., Three Mile Island?
Mr. Ramalingam’s clever solution to the problem of undetected cable cuts is a current transmitter design that adds a third current level to the two that are inherent to an ON/OFF contact:
- 20 mA = contact closed, cable intact
- 4 mA = contact open, cable intact
- 0 mA = cable cut, contact state unknown
It therefore explicitly verifies cable continuity, preventing the mistaking of an open circuit for an open contact. See his article for details.
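As a sketch of how a receiving PLC or monitoring script might interpret the three levels, the decoder below maps a measured loop current to a state. The threshold bands are illustrative assumptions, not values from the article:

```python
# Decode a 4/20 mA current-loop reading into contact/cable state,
# mirroring the three levels of the double-duty transmitter.
# Threshold bands are illustrative, not from the original design.

def decode_loop_current(i_ma: float) -> str:
    """Map a measured loop current (mA) to a transmitter state."""
    if i_ma < 2.0:    # far below the 4 mA live zero: loop is broken
        return "cable cut, contact state unknown"
    if i_ma < 12.0:   # nominal 4 mA band: loop intact, contact open
        return "contact open, cable intact"
    return "contact closed, cable intact"  # nominal 20 mA band

for reading in (0.1, 4.0, 20.0):
    print(f"{reading:5.1f} mA -> {decode_loop_current(reading)}")
```

The "live zero" at 4 mA is what makes the scheme work: a genuinely dead loop reads near 0 mA and can never be confused with an open contact.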
Mr. Ramalingam’s circuit works, is proven, and has nothing significantly wrong with it. Its utility, however, is limited to that single function. It might be significantly more convenient and thrifty if its role could be combined with another in a multipurpose design, provided, of course, that said design would be of no greater cost or complexity than the single-purpose transmitter. Figure 1 and Figure 2 show such a circuit adapted from an earlier article.

Figure 1 0/20mA to 4/20mA current loop converter.

Figure 2 Field contact OFF/ON to 4/20mA current loop converter.
Note that the circuits are identical, so that only one design needs to be fabricated, documented, and stocked.
Calibration in this new role is quick and simple and completed in a single pass:
- Open contact.
- Tweak 4mA adj for 4mA output.
- Close contact.
- Tweak 20mA adj for 20mA output.
Stephen Woodward's relationship with EDN's DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974. They have included best Design Idea of the year in 1974 and 2001.
Related Content
- Is your PLC/DCS reading the field contacts reliably?
- Silly simple precision 0/20mA to 4/20mA converter
The post Double-duty current loop transmitter appeared first on EDN.
Photon Bridge and PHIX partner on DWDM external laser sources for hyperscale AI data centers
Navitas appoints Gregory M. Fischer as independent director
Maryna Hromadska. KPI Biotechnologists Pursuing the Sustainable Development Goals
Biotechnology is an interdisciplinary field that emerged at the intersection of the biological, chemical, and engineering sciences, and whose research results can directly affect industry, agriculture, energy, ecology, pharmaceuticals, and medicine. One of biotechnology's tasks connected with implementing the Sustainable Development Goals is providing the population with clean water and adequate sanitation.
Volodymyr Volodymyrovych Pilinskyi Turns 85!
On March 31, 2026, Volodymyr Volodymyrovych Pilinskyi, professor at the Department of Acoustic and Multimedia Electronic Systems of the Faculty of Electronics (FEL), celebrated a milestone anniversary.
How system-level validation compresses schedule risk in device design

Flagship consumer electronic device launches are among the most operationally complex events in modern engineering. They require years of coordination across hardware, silicon, RF, software, operations, supply chain, and manufacturing. Yet, despite mature processes and experienced teams, flagship programs remain vulnerable to schedule volatility.
The root cause is rarely inadequate engineering talent. More often, it’s structural. Manufacturing realities are integrated too late into architectural decision-making. System-level validation, when deployed early and continuously, functions not as a downstream quality checkpoint, but as an organizational mechanism for compressing schedule risk before capital and timeline commitments are locked.
Financial exposure at flagship scale
At flagship scale, schedule slip is not simply an engineering inconvenience. It’s a material financial event.
Apple’s fiscal year 2025 results reported approximately $416 billion in annual revenue, with iPhone revenue representing roughly half of total sales. Samsung’s Mobile Experience division reported approximately $26 billion in quarterly revenue. For programs operating at this scale, a one-month delay during a peak launch cycle can defer revenue comparable to the annual revenue of many mid-sized technology firms.
Even outside tier-one OEMs, launch timing directly impacts channel readiness, carrier alignment, ecosystem momentum, and competitive positioning. In high-volume hardware, schedule is strategy.
The challenge is that many launch delays are not caused by unforeseen global disruptions, but by late-stage design changes triggered during production ramp. Industry analyses consistently show that a significant portion of late engineering change orders originate from integration and manufacturability issues that were technically detectable earlier in the development cycle.
When these issues surface during ramp, optionality has already collapsed. Tooling is frozen, suppliers are capacity-allocated, and marketing calendars are committed. At that stage, validation confirms risk rather than preventing it.
Why component-level validation fails at scale
Traditional validation strategies are optimized for component correctness. Subsystems are tested against modular specifications, and readiness decisions are based on aggregated subsystem pass rates. This approach ensures that parts function independently; however, it does not guarantee that the system functions reliably under real-world, high-volume conditions.
Many failure modes emerge only during full-system interaction. Digital signal interference, RF coexistence conflicts, thermal coupling between tightly integrated subsystems, and parasitic effects often cannot be fully replicated in isolated bench testing.
For example, a high-speed display flex cable may pass standalone signal integrity validation. During system-level engineering verification testing (EVT) under real RF load, that same cable can radiate broadband noise that desensitizes the primary cellular receiver. The result is a coexistence failure that frequently forces late-stage shielding changes or mechanical redesign.
Similarly, assembly processes introduce stress, tolerance stack-up, and handling variability that are absent in early prototypes. Component-level validation ensures parts are defect-free. It does not predict how those parts behave when integrated and manufactured at scale. The consequence is predictable: issues emerge when yield sensitivity tightens during ramp.
A defect observed in 1 out of 100 early validation units translates into 10,000 defective devices at a one-million-unit scale. At millions of units, small deltas compound rapidly.
The design–manufacturing impedance mismatch
A recurring root cause of late-stage validation failures is misalignment between design optimization and manufacturing constraints. Design teams optimize for performance, power efficiency, compact form factor, and cost targets. Manufacturing teams optimize for yield stability, throughput, repeatability, and process capability. Both are correct within their domains.
Failure occurs when manufacturing sensitivity is not structurally integrated into architectural trade-off decisions. In cross-functional reviews, performance metrics are often presented without quantified yield sensitivity analysis. Design freeze decisions may proceed based on functional validation, while manufacturing risk remains probabilistic rather than modeled. Schedule pressure can incentivize accepting integration risk with the assumption that ramp will resolve residual issues.
System-level validation acts as the translation layer between these domains. When embedded early, it exposes divergence between design intent and production feasibility while design changes remain affordable. The cost-of-change curve, widely cited in engineering economics literature, demonstrates that defects discovered during mass production can cost orders of magnitude more to correct than those identified during early design phases. Whether the multiplier is 10x or 100x depends on context, but the direction is consistent: late discovery amplifies cost and schedule exposure.
System-level validation as risk compression
Reframing system-level validation as a schedule-risk compression mechanism changes how engineering organizations deploy it. Risk compression means reducing the variance between projected and actual ramp performance before high-volume commitments are made. It means narrowing the gap between modeled yield and early ramp yield while architectural flexibility still exists.
Consider a ten-million-unit program targeting 97% yield but only achieving 94% during early ramp. A 3% delta produces 300,000 additional defective units. At a $500 bill-of-materials cost, that equates to $150 million in direct exposure: before accounting for logistics, containment actions, rework, warranty impact, and brand degradation.
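The exposure arithmetic above is worth making explicit, since it is the quantity a program office would track; this reproduces the figures from the example:

```python
# Reproduce the ramp-yield exposure arithmetic from the text:
# a 10M-unit program targeting 97% yield but achieving 94% in early ramp.

units = 10_000_000
target_yield = 0.97
actual_yield = 0.94
bom_cost = 500  # USD per unit, as in the example

extra_defects = round((target_yield - actual_yield) * units)
direct_exposure = extra_defects * bom_cost

print(f"Additional defective units: {extra_defects:,}")      # 300,000
print(f"Direct BOM exposure: ${direct_exposure / 1e6:.0f}M")  # $150M
```

Note this is BOM exposure only; logistics, containment, rework, warranty, and brand impact sit on top of it, as the text notes.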
When system-level validation is embedded earlier in the development cycle, integration uncertainty is resolved before tooling freeze and capacity allocation. Manufacturing sensitivity becomes an architectural input, not a downstream constraint. Validation shifts from reactive confirmation to proactive risk reduction.
Governance implications for senior managers
For senior engineering and manufacturing managers, the implication is structural. System-level validation must be positioned upstream of design freeze, not solely before ramp. In practice, this requires:
- Upstream integration: Embedding manufacturing engineering into early architecture discussions.
- Quantified sensitivity: Requiring quantified yield sensitivity data before design freeze.
- Strategic alignment: Aligning validation milestones with major financial commitments.
- Holistic ownership: Elevating system-level risk ownership to program leadership rather than distributing it across siloed subsystem teams.
Organizations that treat system-level validation as a downstream quality function implicitly accept schedule volatility as a cost of doing business. Organizations that embed it as a bridge between design architecture and manufacturing execution create structural advantage. They stabilize flagship launch timelines, reduce ramp inefficiency, and preserve optionality when trade-offs are still affordable.
Ayokunle Oni is a system engineering program manager at Apple, where he helps coordinate the iPhone hardware design and engineering process across cross-functional teams. He specializes in system integration and validation and has led complex engineering programs from concept through production, working closely with global manufacturing and vendor partners.
Related Content
- Basics of Bench Silicon Validation – PCB Passives
- Early verification and validation using model-based design
- Design Constraint Verification and Validation: A New Paradigm
- Design-Stage Analysis, Verification, and Optimization for Every Designer
- Hardware Verification: What AI Gets Right When It Generates Your Testbench — and What It Misses
The post How system-level validation compresses schedule risk in device design appeared first on EDN.
CEA-Leti, CEA-List and PSMC collaborate to integrate RISC-V and micro-LED silicon photonics into 3D stacking and interposer
Oldie but goodie: yet another Chua's circuit implementation
About Chua's circuit and my implementation. Video: https://imgur.com/a/R0H5TSl
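For readers who want to reproduce the dynamics numerically, the dimensionless Chua equations can be integrated in a few lines. The parameters below are the textbook double-scroll values, not values measured from the build in the post:

```python
# Minimal forward-Euler integration of the dimensionless Chua equations.
# Parameters are classic double-scroll values (alpha=15.6, beta=28,
# piecewise-linear slopes -8/7 and -5/7), not from the poster's circuit.

def chua_step(x, y, z, dt, alpha=15.6, beta=28.0, m0=-8/7, m1=-5/7):
    # Piecewise-linear Chua diode characteristic f(x).
    f = m1 * x + 0.5 * (m0 - m1) * (abs(x + 1) - abs(x - 1))
    return (x + dt * alpha * (y - x - f),
            y + dt * (x - y + z),
            z + dt * (-beta * y))

def simulate(steps=50_000, dt=1e-3, state=(0.7, 0.0, 0.0)):
    traj = [state]
    for _ in range(steps):
        state = chua_step(*state, dt)
        traj.append(state)
    return traj

traj = simulate()
# The double-scroll attractor is bounded: |x| stays within a few units.
print("peak |x| on the attractor:", max(abs(p[0]) for p in traj))
```

Plotting x against z from `traj` should show the familiar double-scroll shape; a smaller `dt` or an RK4 integrator gives a cleaner attractor.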