Feed aggregator

Smartphone shipments to fall 7% in 2026 amid memory constraints and geopolitical pressures

Semiconductor today - 3 hours 10 min ago
Based on assumptions on first-quarter memory prices (which indicate that pricing pressure and constrained supply will begin to ease in second-half 2026), Omdia’s latest outlook forecasts that global smartphone shipments will fall by about 7% year-on-year in 2026...

Circuits Integrated Hellas and Reach Power sign multi-year strategic MOU

Semiconductor today - 3 hours 1 min ago
Satellite communication (Satcom) technology provider Circuits Integrated Hellas (CIH) of Athens, Greece and wireless power-at-a-distance technologies provider Reach Power of Redwood City, CA, USA, have announced a memorandum of understanding (MOU) establishing a multi-year strategic alliance. Focused on joint development of integrated radio frequency (RF)/millimeter-wave (mmWave) and wireless power and data transfer (WPDT) solutions, the alliance will target Satcom, defense, energy transfer, and other phased-array applications...

EV system design from components to modules to software

EDN Network - 4 hours 3 min ago

Electric vehicle (EV) design at the system level is a rapidly evolving landscape encompassing components, hardware modules, and software platforms. So, on the first day of Automotive Tech Forum 2026, which was dedicated to EV designs, a panel titled “Powering the Electric Vehicle: From Semiconductors to Systems” took a deep dive into the system-level intricacies of EV designs.

Carsten Himmele, marketing manager for automotive at Allegro MicroSystems, highlighted the growing presence of silicon carbide (SiC) in traction inverters due to its ability to deliver higher bandwidth and efficiency. However, while talking about motor control for EV traction, he also mentioned challenges in operating in harsher electrical environments.

“SiC brings in higher bandwidth for motor control, but it also makes the electrical environment somewhat harsher,” he said. Himmele added that advanced phase-current sensing and inductive rotor-position sensing are essential for overcoming these challenges. “Moreover, system-grade building blocks reduce the number of external components and improve design efficiency,” he concluded.

That’s where gallium nitride (GaN) offers key advantages, said Alex Lidow, CEO and co-founder of Efficient Power Conversion (EPC). “GaN is smaller, more efficient, and more rugged compared to silicon and SiC,” he said. “It’s particularly effective in 48-V systems, which complement the emerging 800-V architectures.”

Lidow added that, with 48-V systems now leading the way in EVs, GaN devices are 5 to 7 times more efficient than their MOSFET predecessors. “GaN is powering onboard chargers, DC/DC converters, battery cooling pumps, steering systems, and infotainment.”

Rohan Samsi, VP of GaN Business Division at Renesas, also talked about the paradigm shift GaN brings to power converters, enabling simplified single-stage designs. “The bidirectional switch allows you to take out something that was a multi-stage converter and replace it with a single stage.” On integration synergy, Samsi emphasized that GaN’s strengths in current sensing, temperature sensing, and gate drive enable holistic EV solutions.

Finally, Kerry Grand, marketing manager for Simulink Automotive at MathWorks, turned the discussion toward the software aspects of design. He was asked to brief the panel on the latest developments in EV traction from a system-integration standpoint, and on what hardware testing reveals about the present and future of EV drivetrains.

Grand began with an insight into EV system-level design through simulation and model-based design. Then he identified enduring challenges in EV system design, including high-voltage isolation, battery life optimization, and thermal management. “Simulating detailed thermal systems offers automotive OEMs the ability to trade off temperature limits without compromising system performance.”

At a time when EV design building blocks like traction inverters and battery management systems (BMS) are continually adding functionality, system-level challenges are a critical area to watch. The panel discussion at Automotive Tech Forum 2026 provides a glimpse of design challenges and viable solutions in this realm.

You can watch this session along with all sessions from the Automotive Tech Forum 2026 virtual event on demand at www.automotiveforum.eetimes.com.

Related Content

The post EV system design from components to modules to software appeared first on EDN.

Cardiac monitors: Inconspicuous, robust data collectors

EDN Network - 4 hours 4 min ago

As a follow-up to last month’s narrative of a cardiac abnormality thankfully detected by wearable devices, this engineer details the monitoring system he subsequently donned for a month.

Two-plus years ago, my contributor-colleague John Dunn described his most recent experience with a wearable cardiac monitor. And, as any of you who read one of my blog posts last month already know, I more recently followed in his footsteps. I don’t yet know the outcome of my heart health study; my follow-up appointment with the cardiologist is a week away as I type these words. Regardless, I thought you might still find it interesting to learn about the gear I toted around, stuck to my chest (and in my pocket) for 30 days, and my experiences using it.

The system I used was Philips’ MCOT (Mobile Cardiac Telemetry), specifically its “patch” variant:

Here’s an overview video; others, plus documentation, are at the product support page:

I took several “selfies” of the sensor in place on my chest but ultimately decided to save you all the abject horror of seeing any of them. Instead, I’ll stick with these stock images:

My initial scheduled meeting with the cardiologist took place on December 12, 3+ weeks after our “introduction” at the emergency room. I’d been on both beta blockers (to regulate my heartbeat) and blood thinners (in case my prior irregular rhythm had resulted in the formation of a clot) since my initial visit to the hospital in mid-November. The cardiologist ordered the monitor, which arrived a bit more than a week later; I began wearing it the day after Christmas.

Here’s the box that the system comes in:

Open sesame:

The first thing I saw was the initial sensor patch, along with the return shipping packaging bag. Below it was the template I used for proper placement each time I stuck a patch on my chest:

The bulk of the contents were contained in two inner boxes, the first labeled “Getting Started” and the second referred to as “Monitoring”. Inside the first:

were several primary items:

along with installation and operation overview instructions:

The monitoring device, both here and in subsequent photos accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

whose dimensions and Android operating system foundation, along with the legacy presence of an analog headphone jack alongside the USB-C port:

and a multi-camera rear array in a specific arrangement:

suggest it to be a custom-software derivative of Samsung’s Galaxy A52 smartphone, introduced in March 2021:

It came with the translucent green case pre-installed, by the way. Here are some other overview images of the smartphone…err…monitoring device (its left side was unmemorable so I didn’t bother):

Next up was a small scrub pad used to further prepare my chest for patch application, after initial hair shaving. And, of course, there was the sensor itself:

Its edge arrived already abraded; I’m guessing that it had already been popped open, with its rechargeable battery subsequently replaced, at least once prior to its arrival at my residence:

Now for box #2:

More instructions, of course:

along with more patches, a more detailed instruction booklet, and the dual-charging unit:

The AC/DC adapter has two USB-A outputs:

which can be used in parallel:

One, connected to a red USB-A to USB-C cable, is used for daily recharge of the “monitoring device” (smartphone). The other (black, this time) cable terminates in a charging dock for the sensor, which I used every five days in conjunction with (and in-between) the patch removal and replacement steps:

Here’s how the initial “monitoring device” bootup went (since this was a custom Android-plus-app build, I wasn’t able to grab screenshots directly from the smartphone, perhaps obviously):

After initial charging of both the monitoring device and sensor, I continued the setup process:

Here’s what a patch looks like when you first take it out of the package; top:

and bottom:

Pressing down on the sensor while aligned with the patch base snaps it into place:

A briefly illuminated LED subsequently indicates that the sensor is correctly installed, at which point the monitoring device is able to “see” it (broadcasting over Bluetooth, presumably Low Energy):

At this point, you can peel away the protective clear plastic cover over the back side adhesive:

All that’s left is to press it into place on your chest…and then peel off the existing patch, pop out and recharge the sensor and redo the installation process five days later:

Lather, rinse, and repeat until the total 30-day cycle is over, which the system thoughtfully tracks on your behalf. Then ship it all back to the manufacturer.

The monitoring device, which regularly receives data transmissions from the sensor, then periodically uploads the data to the “cloud” server over an LTE or EV-DO cellular data connection.

If you forget to keep the monitoring device close by, data won’t be lost, at least for a while. There’s an unknown amount of memory onboard the sensor (yes, I searched for a teardown, alas unsuccessfully), albeit presumably not the full 2 GBytes allocated to this alternative device designed solely for local data logging. But the monitoring device will still alert you (both visually and audibly) to the lost wireless (again, presumably Bluetooth’s LE variant) connection:

You’ll also be alerted if the sensor’s integrated battery drops to a low level and recharge is necessary (I proactively did this every five days, as previously noted, since I’d received six total patches):

If you feel like something’s amiss with your “ticker” (heart pounding, fatigue, etc.), you can tap on the icon at the center of the display and the monitoring device will send an alert “flag” for subsequent correlation with the potential cardiac arrhythmia data collected at that same time:

And in closing, here are some shots of other monitoring device display screens that I captured:

By the time you see this, assuming I don’t need to reschedule for some reason, I will have met with my cardiologist and gotten the (hopefully positive) results. I’ll follow up in the comments. And please also share your thoughts there! Thanks as always for reading.

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content


The post Cardiac monitors: Inconspicuous, robust data collectors appeared first on EDN.

Volta initiates bioleaching gallium recovery study with Laurentian University

Semiconductor today - 6 hours 15 min ago
Mineral exploration company Volta Metals Ltd of Toronto, Canada (which owns, has optioned and is currently exploring a critical minerals portfolio of rare-earths, gallium, lithium, cesium and tantalum projects in Ontario) has begun laboratory-scale bioleaching recovery test work primarily targeting gallium and secondarily rare-earth elements (REEs) at Dr Vasu Appanna’s laboratory of Biomine Research and Development at Laurentian University in Sudbury, Ontario. Laurentian University is recognized for its applied research expertise in mining, mineral processing, and earth sciences...

Semtech expands data-center portfolio by acquiring HieFo for $34m

Semiconductor today - 7 hours 37 min ago
High-performance semiconductor, Internet of Things (IoT) systems and cloud connectivity service provider Semtech Corp of Camarillo, CA, USA has acquired HieFo Corp of Alhambra, CA – which manufactures indium phosphide (InP) optoelectronic devices for optical transceivers used across data-center interconnects (DCI) and intra-data-center interconnects – for about $34m in cash...

Navitas and EPFL demo 250kW solid-state transformer

Semiconductor today - 7 hours 45 min ago
In booth #2027 at the IEEE Applied Power Electronics Conference (APEC 2026) in San Antonio, Texas (22–26 March), Navitas Semiconductor Corp of Torrance, CA, USA is exhibiting a 250kW solid-state transformer (SST) platform developed by the Power Electronics Laboratory of Switzerland’s École Polytechnique Fédérale de Lausanne (EPFL) that enables the grid architecture required by next-generation data centers, eliminating bulky low-frequency transformers while improving end-to-end efficiency...

Kyiv Polytechnic receives additional grant support from Amazon Web Services

News - 9 hours 10 min ago

Amazon Web Services (AWS) has provided KPI with a second grant since the start of the full-scale war. In 2022 the university received its first emergency grant, which made it possible to quickly migrate its infrastructure to the cloud. At the time, the digital services ran in a partner environment provided by EPAM; later, the university moved fully to its own AWS account.

Arrow Electronics and Infineon introduce 240W USB-C PD 3.2 reference design for battery-powered motor control applications

ELE Times - 10 hours 14 min ago

Arrow Electronics and Infineon Technologies AG have announced REF_ARIF240GaN, a 240W USB Power Delivery (PD) 3.2 reference design for battery-powered motor control applications that require high performance and power efficiency in a compact form factor. This design complements the existing portfolio of joint reference design solutions from Arrow and Infineon, supporting the ongoing migration of customer designs to USB-C technology.

REF_ARIF240GaN is specifically designed to support the launch of EZ-PD™ PMG1-B2, Infineon’s newest USB PD 3.2 controller, featuring up to 240W USB sink capability and integrated buck-boost functionality in a compact single package. It provides developers with a ready-to-use platform for implementing high-power USB-C charging alongside efficient motor drive control features. It brings fast-charging capabilities for 2- to 12-cell Li-ion battery packs, simplifying the overall design and reducing component count.

Motor control functionality is delivered using Infineon’s PSOC C3, a 180MHz Arm Cortex-M33 microcontroller, and highly efficient 100V CoolGaN G5 transistors. By combining a fully interoperable USB-C PD stack with high-performance sensored and sensorless GaN motor control on a single platform, the reference design enables compact, high-efficiency battery-powered systems while shortening development time and reducing bill-of-materials cost and board space.

Target applications include light electric vehicles (e-bikes, e-scooters and personal mobility devices), along with power tools, vacuum cleaners, kitchen appliances, garden equipment and robotics.

The reference design can be obtained upon request. Advanced technical support and customisation services are available from Arrow’s engineering solutions centre (ESC).

Visitors to embedded world 2026 can see the joint Arrow and Infineon solutions for motor control and battery-powered applications at Arrow’s stand 4A-342.

About Arrow Electronics
Arrow Electronics (NYSE:ARW) sources and engineers technology solutions for thousands of leading manufacturers and service providers. With 2025 sales of $31 billion, Arrow’s portfolio enables technology across major industries and markets. Learn more at arrow.com.

The post Arrow Electronics and Infineon introduce 240W USB-C PD 3.2 reference design for battery-powered motor control applications appeared first on ELE Times.

Robotics Engineering: The Architectural Evolution Behind IT–OT Convergence

ELE Times - 10 hours 42 min ago

Factories today operate as dense mechanical ecosystems, whether in automotive assembly lines or semiconductor fabrication units. Traditionally, each robotic and mechanical element performed predefined, deterministic functions within isolated automation cells. However, as shop floors become increasingly machine-intensive and interconnected, operational complexity rises proportionally. Managing these environments now requires more than mechanical precision—it demands architectural coordination across layers of control and intelligence.

In this context, the convergence of Information Technology (IT) and Operational Technology (OT) is fundamentally reshaping robotics engineering. Data processing layers—analytics engines, business logic systems, and enterprise platforms—are no longer separated from operational control systems. At the same time, the physical layer, comprising sensors, actuators, servo drives, and Programmable Logic Controllers (PLCs), is becoming increasingly tightly integrated with edge compute and network infrastructure. Robotics systems are no longer designed as standalone motion units; they are engineered as nodes within a larger, connected control ecosystem.

“Traditional automation tools were built for a high-volume, low-variability environment. But today’s market demands agility,” says Ujjwal Kumar, Former Group President of Teradyne Robotics.

This architectural integration is shifting robotics engineering from a purely mechanical discipline toward system-level design—where communication protocols, deterministic networking, cybersecurity, and software orchestration are as critical as torque curves, kinematics, and payload specifications.

Adaptive Systems

At the core of this transformation lies the emergence of adaptive robotic systems. In practical terms, adaptability on the shop floor means the ability to reconfigure, scale, and modify operational behavior through software-defined control and network orchestration, rather than through mechanical redesign. Modern robots are no longer confined to fixed, pre-programmed routines. Equipped with AI models, IIoT connectivity, and high-resolution sensor feedback, they can interpret environmental inputs, process real-time data streams, and dynamically adjust execution parameters.

“The big difference is that traditional automation was a custom-made, perfect solution for one application. The new age of AI-integrated robotics has standard products serving multiple applications. You go into multiple applications through software and some end-of-arm tooling differences,” says Ujjwal Kumar, Former Group President of Teradyne Robotics.

As manufacturers pursue higher efficiency alongside greater product diversity, such adaptability becomes essential. Integrated control and data layers allow robots to transition between production tasks or product variants with minimal downtime, supporting high-mix manufacturing environments. Simultaneously, context-aware operations enable robotic systems to respond to signals from enterprise platforms such as ERP and MES, aligning execution with demand fluctuations, material availability, and downstream constraints.

The Build Architecture: Sensors, Control, and Communication Layers

To understand the engineering behind IT–OT convergence, it is useful to examine the architectural layers that define modern shop-floor robotics. Traditionally, industrial systems followed hierarchical models such as ISA-95, where field devices, control systems, and enterprise platforms operated in structured tiers with limited cross-layer interaction. Today’s robotic systems, however, are increasingly designed around a more unified Industrial Internet of Things (IIoT) architecture—where sensing, control, computation, and enterprise integration operate within a tightly interconnected framework.

“The groundbreaking automation innovations of the future won’t come from one single company but from close cross-technology ecosystem collaborations,” says Ujjwal Kumar, Former Group President of Teradyne Robotics.

At the foundation lies the physical and sensing layer. Modern robots are embedded with dense networks of encoders, force–torque sensors, high-resolution vision systems, vibration monitors, and environmental sensors—particularly critical in semiconductor manufacturing. Unlike earlier generations, where sensors primarily supported local closed-loop motion control, today’s sensing infrastructure generates continuous, time-synchronised data streams. These data flows serve a dual purpose: ensuring precision motion control while simultaneously feeding analytics and optimisation engines upstream.

Above this sits the control and communication layer, where deterministic execution remains paramount. PLCs, motion controllers, industrial PCs, and real-time operating systems govern microsecond-level synchronisation of servo drives and actuators. However, this layer has evolved from rigid, ladder-logic-driven hierarchies to hybrid architectures that combine deterministic control with networked intelligence. Industrial Ethernet, fieldbus systems, and increasingly Time-Sensitive Networking (TSN) ensure that motion commands and data packets coexist without compromising latency or jitter requirements. Control systems are no longer isolated—they are communicative nodes within a broader industrial network.

The next shift occurs at the edge. Edge computing nodes now preprocess high-frequency sensor data, execute AI inference models, and filter operational information before it propagates upward. Event-driven architectures and publish–subscribe communication patterns allow machines to update a shared operational state across the plant continuously. Rather than relying solely on hierarchical polling mechanisms, modern factories operate through near real-time data dissemination, enabling contextual awareness across production assets.
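
To make that pattern concrete, here is a minimal, self-contained Python sketch of an edge node publishing filtered events into a shared operational state. It is purely illustrative: the topic name, threshold, and in-process bus are assumptions standing in for a real plant broker (e.g., MQTT), not any vendor’s actual stack.

```python
# Illustrative only: an in-process stand-in for a plant message broker.
# Edge nodes preprocess raw sensor streams and publish events; subscribers
# keep a shared operational state current without hierarchical polling.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
plant_state = {}  # shared operational state, updated event-by-event

# A supervisory subscriber mirrors incoming events into the shared state.
bus.subscribe("cell1/vibration",
              lambda event: plant_state.update({"cell1_vibration": event}))

def edge_filter(raw_samples, threshold=5.0):
    """Edge-node preprocessing: publish upward only when the stream crosses a limit."""
    peak = max(raw_samples)
    if peak > threshold:
        bus.publish("cell1/vibration", {"peak": peak})

edge_filter([0.2, 6.1, 0.4])   # only the out-of-limit burst becomes an event
print(plant_state)             # {'cell1_vibration': {'peak': 6.1}}
```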

James Davidson, Chief Artificial Intelligence Officer, Teradyne Robotics, says, “AI is transforming robots from tools into intelligent collaborators that can perceive, learn, and adapt.”

At the enterprise integration level, robotics systems increasingly interact with MES and ERP platforms, digital twin environments, and predictive maintenance engines. Data flow is no longer unidirectional. Demand signals, material constraints, and quality metrics can influence robotic execution parameters in near real time. This bidirectional exchange is the practical manifestation of IT–OT convergence—where business logic and machine logic intersect.

Underpinning all these layers is a security and infrastructure framework that ensures resilience. As robots become connected assets, cybersecurity, network segmentation, device authentication, and secure firmware management become integral engineering considerations rather than afterthoughts. Connectivity without security would undermine determinism and operational continuity.

Redefining the Core of Robotics Engineering 

For decades, robotics engineering on shop floors was largely centred on mechanical excellence. Engineers focused on motion accuracy, payload capacity, repeatability, structural rigidity, and cycle-time optimisation. The primary goal was to design a robot that could execute a defined task with precision and reliability within a controlled cell.

That foundation still matters—but it is no longer enough. As IT–OT convergence reshapes shop floors, robotics engineering now extends far beyond mechanical design. Engineers must integrate advanced sensors, real-time communication networks, edge computing systems, AI-driven analytics, and enterprise software interfaces into the robot’s architecture. A robot is no longer just a mechanical arm with a controller; it is a connected, data-producing, and data-consuming system embedded within a larger digital ecosystem.

This means engineering decisions are no longer confined to gears, motors, and control loops. Network latency can influence motion stability. Data accuracy affects predictive maintenance outcomes. Software updates can modify operational behaviour. Cybersecurity vulnerabilities can interrupt production. Mechanical performance is now intertwined with software reliability and network integrity.

“Physical AI equips robots with the capacity to perceive and respond to the real world, providing the versatility and problem-solving capabilities that are often required by complex use cases that have been out of scope until now,” says James Davidson, Chief AI Officer, Teradyne Robotics.

In practical terms, robotics engineers are moving from designing machines to designing intelligent systems. They must think about interoperability, data structures, communication protocols, and secure integration—alongside torque curves and kinematics. The robot is no longer an isolated automation asset; it is part of a coordinated production architecture that responds to real-time information from across the enterprise.

The shift is clear: robotics engineering is evolving from a purely mechanical discipline into a multidisciplinary field where mechanics, electronics, networking, and software operate as a unified whole.

Conclusion 

As factories continue to evolve into connected, data-driven environments, robotics can no longer be engineered as standalone mechanical systems. The convergence of IT and OT is embedding intelligence, connectivity, and responsiveness directly into the core of robotic architecture. What was once a discipline defined by mechanical precision is now defined by system integration. 

“Taking a modern Industry 5.0 approach requires prioritisation of adaptability, empowering line workers with robots that can be reprogrammed and redeployed as demand shifts, which is the biggest benefit of having these very flexible systems coming online quickly,” says Ujjwal Kumar, Former Group President of Teradyne Robotics.

The competitive edge will not belong merely to the fastest or strongest robots, but to those designed as intelligent, interoperable components of a unified production ecosystem. In this new industrial reality, robotics engineering is no longer just about motion—it is about orchestration.

The post Robotics Engineering: The Architectural Evolution Behind IT–OT Convergence appeared first on ELE Times.

How AI Is Transforming Network Protocol Testing in Software-Defined Networks?

ELE Times - 10 hours 50 min ago

As enterprises accelerate toward cloud-native infrastructure, edge computing, and virtualised network functions, data volumes and traffic patterns have become increasingly dynamic and unpredictable. This shift has significantly complicated network management, making traditional monitoring and testing approaches insufficient for modern workloads.

Software-Defined Networking (SDN) emerged as a response to this complexity. By decoupling the control plane from the data plane and centralising network intelligence in software-based controllers, SDN introduced programmability, agility, and fine-grained policy enforcement into network architecture. Networks were no longer static hardware constructs — they became programmable systems capable of real-time configuration and orchestration.

However, this programmability has introduced a new challenge: protocol behaviour is no longer deterministic. Dynamic flow rules, frequent controller updates, real-time policy changes, and multi-controller orchestration have made protocol validation exponentially more complex. Traditional pre-defined test scripts and static regression libraries struggle to keep pace with continuously evolving network states.

“AI applications are driving an entirely new set of requirements in our customers’ network equipment and in their network architectures,” says Joel Conover, senior director at Keysight Technologies.

In programmable environments, protocols must be validated not just for correctness, but for adaptive behaviour across changing topologies and traffic conditions. This is precisely where Artificial Intelligence is beginning to redefine network protocol testing — shifting it from rule-based verification to intelligent, adaptive validation.

Why Traditional Protocol Testing Fails with SDNs

In legacy networks, protocol behaviour remained largely uniform and predictable. Routing tables were static, firmware updates were infrequent, and network state changes followed predictable patterns. Testing technologies evolved accordingly — with pre-defined test cases, fixed traffic simulations, and rule-based regression suites. With Software-Defined Networking, that is no longer the case.

SDN disrupts this uniformity and predictability. The control plane is abstracted into centralised controllers, and network behaviour remains flexible rather than hardcoded into individual devices. Flow rules are dynamically installed, modified, or withdrawn based on application demands, policy engines, and real-time telemetry. As a result, network state becomes fluid rather than fixed. This also presents tremendous testing challenges, including:

  • Dynamic Flow Table Updates: In SDN environments, flow entries can change in milliseconds. Traditional test scripts, designed for static configurations, cannot continuously validate transient states or short-lived rule conflicts.
  • Controller-Driven Logic Complexity: Unlike legacy networks, where protocols like Open Shortest Path First (OSPF) or Border Gateway Protocol (BGP) operate autonomously within devices, SDN controllers introduce centralized decision-making logic. Testing must now validate not only protocol compliance, but also controller algorithms, northbound applications, and southbound API interactions.
  • Multi-Controller and Multi-Domain Orchestration: Large deployments often rely on distributed controller clusters for scalability and redundancy. Synchronisation delays, inconsistent state propagation, or split-brain scenarios introduce validation complexity beyond conventional test frameworks.
  • CI/CD-Driven Network Updates: Modern SDN deployments increasingly follow DevOps models, where network policies and configurations are updated frequently. Regression cycles that once ran quarterly may now need to be executed daily or continuously.
  • Emergent Behaviour in Programmable Networks: When multiple applications interact through a controller — security policies, load balancers, traffic optimisers — unintended rule interactions can produce emergent protocol behaviour. Static test matrices cannot anticipate such combinations.

In this evolving environment, traditional test automation tools operate reactively. They verify what has been explicitly defined, but struggle to discover what has not been anticipated. As SDN architectures scale in complexity, protocol testing must evolve beyond deterministic validation toward approaches capable of learning network behaviour rather than merely executing predefined scenarios.

The Limits of Automation in Modern SDN Testing

As SDN environments grew in complexity, testing frameworks adopted automation. Continuous integration pipelines began validating controller updates, traffic replay tools simulated workloads, and orchestration layers executed regression suites at scale. But traditional automated testing systems operate on predefined logic: they execute scripted scenarios, compare outputs against expected results, and flag deviations. While this approach accelerates validation cycles, it remains fundamentally reactive — it can only test what engineers anticipate. In programmable networks, however, not all behaviours are foreseeable.

With SDNs, flow rules interact dynamically, policies overlap, and controllers adapt in real time to telemetry inputs. Under such conditions, failure modes are often emergent rather than explicit. They arise from complex interactions between components rather than from isolated configuration errors.

This is where the limitations of deterministic automation become evident:

  • Static rule engines cannot adapt to evolving topology states.
  • Regression libraries cannot scale combinatorially with policy variations.
  • Manual definition of edge cases becomes impractical in large-scale SDN fabrics.

As networks increasingly resemble distributed software systems, testing must adopt characteristics of software intelligence — the ability to learn patterns, detect deviations autonomously, and anticipate risk scenarios. It is within this context that Artificial Intelligence begins to move from experimental concept to architectural necessity.

How Is AI Moving Testing Beyond Automation?

As Software-Defined Networks evolve into highly dynamic, programmable infrastructures, testing frameworks must move beyond deterministic execution models. AI-driven protocol testing — enhanced with contextual learning, predictive analysis, and adaptive decision-making — becomes the obvious and most promising strategy. An effective AI-enabled SDN testing architecture operates across multiple functional layers.

“AI is being infused into many aspects of communications technology – it shows particular promise in predicting channel conditions, essentially creating new forms of ‘smart radios’ that can achieve higher throughput and/or longer distances by incorporating machine learning in the radio itself,” says Mr Conover.

At the foundation lies a telemetry intelligence layer. SDN environments generate vast volumes of real-time data — including flow table updates, controller logs, latency metrics, packet drops, topology transitions, and API interactions across northbound and southbound interfaces. Rather than relying solely on post-event log analysis, AI models ingest and process this telemetry continuously. By establishing behavioural baselines, the system distinguishes between acceptable adaptive changes and genuine protocol anomalies.
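
A minimal sketch of that idea follows, assuming a single scalar telemetry signal (say, flow-setup latency). The warm-up length and z-score threshold are illustrative choices, and `latency_samples` stands in for a real telemetry feed; none of this is a specific vendor’s implementation.

```python
# Illustrative sketch: learn a behavioural baseline online (Welford's algorithm)
# and flag samples that deviate sharply from it, rather than comparing against
# a fixed, pre-defined threshold.
class TelemetryBaseline:
    def __init__(self, z_threshold=4.0, warmup=100):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0                 # running sum of squared deviations
        self.z_threshold = z_threshold
        self.warmup = warmup

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_anomalous(self, x):
        if self.n < self.warmup:      # still learning what "normal" looks like
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) / std > self.z_threshold

baseline = TelemetryBaseline()
latency_samples = [1.1, 1.3, 1.2] * 40 + [9.8]   # hypothetical feed; final sample is a spike
for latency_ms in latency_samples:
    if baseline.is_anomalous(latency_ms):
        print(f"protocol anomaly candidate: flow-setup latency {latency_ms} ms")
    baseline.update(latency_ms)
```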

Built upon this is the Behavioral Modeling Layer. In programmable networks, protocol validation must account for interactions between controllers, applications, and dynamic policies. Machine learning models analyse how control-plane decisions influence data-plane outcomes under varying traffic loads, topology shifts, and failover scenarios. Through supervised and unsupervised learning techniques, the system identifies normal operational patterns and detects deviations that static scripts might overlook — such as cascading latency effects, unstable rule propagation, or intermittent synchronization gaps.

The next layer introduces Intelligent Test Case Generation and Prioritisation. Traditional regression testing treats all scenarios uniformly, often leading to inefficiencies. AI-enhanced systems instead evaluate historical defect data, configuration change patterns, and policy dependency graphs to calculate risk scores. Testing resources are then dynamically allocated to high-risk areas. Reinforcement learning techniques can further simulate targeted disruptions, enabling adversarial-style validation that exposes weaknesses before deployment.
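
The scoring itself can be simple. Here is a hedged sketch in which the fields, weights, and test names are assumptions for illustration, not any vendor’s actual algorithm: each test is weighted by its historical defect yield and by its overlap with recently changed components, and a limited test budget is spent on the highest-risk scenarios first.

```python
# Illustrative sketch: rank regression tests by a risk score that blends
# historical defect yield with overlap against recently changed components.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    name: str
    past_defects: int              # defects this test has historically caught
    runs: int                      # total executions to date
    components: set = field(default_factory=set)   # controller/app areas covered

def risk_score(tc, changed, w_defect=0.7, w_change=0.3):
    defect_rate = tc.past_defects / max(tc.runs, 1)
    overlap = len(tc.components & changed) / max(len(tc.components), 1)
    return w_defect * defect_rate + w_change * overlap

def prioritise(tests, changed, budget):
    """Spend a limited test budget on the highest-risk scenarios first."""
    return sorted(tests, key=lambda t: risk_score(t, changed), reverse=True)[:budget]

tests = [
    TestCase("bgp_route_withdraw", 4, 200, {"southbound_api"}),
    TestCase("flow_rule_conflict", 9, 150, {"policy_engine", "controller_core"}),
    TestCase("topology_failover", 1, 300, {"controller_core"}),
]
for tc in prioritise(tests, changed={"policy_engine"}, budget=2):
    print(tc.name)   # flow_rule_conflict ranks first after a policy-engine change
```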

Finally, Predictive Validation capabilities elevate protocol testing from reactive detection to proactive assurance. By analysing patterns across multiple test cycles, AI systems can forecast potential congestion points, controller overload risks, and policy conflicts at scale. This predictive insight is particularly valuable in CI/CD-driven SDN environments, where frequent updates demand continuous and reliable validation.

Together, these layers transform protocol testing from a script-driven verification exercise into an adaptive, intelligence-led framework. As networks become software-defined, testing infrastructures are becoming learning-defined — capable not only of validating correctness, but of anticipating instability before it manifests in production environments.

Conclusion

Software-Defined Networking transformed networks into programmable, software-driven systems — but in doing so, it also made protocol validation far more complex. Static test scripts and deterministic regression cycles are no longer sufficient for environments defined by dynamic flows, controller logic, and continuous updates.

“The use case for network testing is emulating the unique properties of that environment, and delivering it at a scale we’ve never seen before,” says Mr Conover.

Artificial Intelligence is emerging as the natural evolution of network testing. By learning behavioural patterns, detecting anomalies in real time, and prioritising risk intelligently, AI shifts protocol validation from reactive verification to predictive assurance.

The future of SDN will not depend solely on how programmable networks become, but on how intelligently they are tested. As infrastructure grows more dynamic, validation must become equally adaptive — combining automation, intelligence, and human oversight to ensure resilient, scalable network operations.

The post How AI Is Transforming Network Protocol Testing in Software-Defined Networks? appeared first on ELE Times.

CoolSem establishes Advisory Board to advance wafer-level thermal management

Semiconductor today - Wed, 03/04/2026 - 22:36
CoolSem Technologies of Eindhoven, the Netherlands (which was founded in 2025 and develops wafer-level thermal management technology to reduce thermal resistance and mechanical stress in advanced semiconductor and photonic devices) has established an Advisory Board to accelerate global growth and technology leadership...

I finally finished my z80 project.

Reddit:Electronics - Wed, 03/04/2026 - 22:24

After about 3 months and a lot of dedication, I successfully completed my project.

It's almost exactly Grant's project; the only modification is that the SRAM is 8KB. A 32KB chip will arrive soon and, since the wiring was already done with it in mind, the change will be easy.

submitted by /u/Actual-Ad-6935

MWC 2026: Apple, Google, Samsung and Other Contending Contestants

EDN Network - Wed, 03/04/2026 - 21:46

Ever imagine that memory supply concerns (translating to system capacity and price) would dominate multiple companies’ announcements? “So it goes”, to quote Kurt Vonnegut.

The Mobile World Congress (MWC) show, held each year in Barcelona, Spain (one of my favorite cities in the world) and in progress as I write these words, doesn’t have quite the same cachet as it once did. Two primary reasons explain the decline: the cellphone market has notably consolidated, and it’s increasingly common for the market participants that remain to announce new products at their own events.

That said, these go-it-alone suppliers still often chronologically cluster their announcements at or near the MWC timeframe. Plus, the conference organizers have broadened the scope of the show beyond just cellphones (nowadays: smartphones) to also encompass other mobile devices such as tablets and laptop computers…although classifying a static desktop-based, AC-powered robot as “mobile” is a stretch, no matter how dynamic its joints and display may be:

Apple, Google, and Samsung were among the companies who made notable(-ish) news over the past week. I’ll cover them chronologically in the following sections.

Mountain View gets the jump on Cupertino (once again)

Last spring, Google unveiled its then-latest cost-focused phone, the Pixel 9a, a few weeks after Apple had rolled out its initial (albeit iPhone 16-numbered) “e” rebrand of its prior multi-generation, economy-tuned “SE” offerings. I subsequently bought a Pixel 9a for myself, replacing (and leveraging a then-lucrative trade-in value promotion for) my prior backup handset, a Pixel 6a.

That said, Google had already broken with longstanding fast-follower precedent via the late-summer 2024 launch of the mainstream Pixel 9 and high-end Pixel 9 Pro, which predated their iPhone 16 competitors by a month (versus the historical cadence of arriving a month late). The same thing happened last year. And now, Google has extended its “eager beaver” behavior to the entry-level end of its smartphone product suite with the Pixel 10a, which the company sneak-peeked in early February, with a full unveil two weeks later complete with a pre-order opportunity, and shipments starting later this week.

Good news: skyrocketing DRAM and NAND flash memory prices haven’t led to handset price increases (or, alternatively, either integrated memory capacity decreases or the culling of lower-capacity product variants); the Pixel 10a price ($499) is unchanged from its Pixel 9a predecessor. Bad news (albeit good news for me, no longer FOMO-fraught): unless you’re insistent on a completely flat backside absent any camera “bumps”, the design is largely unchanged as well. Same chipset. Same memory generations and speed bins. The display is modestly enhanced—peak brightness, bezel thickness, and cover glass shock resistance—as are the wired and wireless charging power levels, and therefore speeds, but that’s basically it. Oh…and still no Qi magnet inclusion. Hold that thought.

A higher-end attack

A week later, and a week ago, Samsung rolled out its Galaxy S26 product line, which competes against Apple’s iPhone 17 series launched last September, along with new-generation earbuds (but no new smart ring; was Oura’s legal-pressure campaign effective?):

Here again, not much has changed from the year-prior Galaxy S25 predecessors. The “adder” that seemingly got all the media attention, Privacy Display, derives from an OLED display tweak and is only available on the high-end Ultra variant. Unlike Google, Samsung is generationally raising prices, predominantly blaming memory cost increases as the root cause, and is also not offering the comparable low-end storage capacity options that it did with S25-series predecessors. The memory blame assignment is particularly ironic in this case because parent company Samsung also has a semiconductor (memory, specifically) division under its corporate umbrella.

That said, as my colleague Majeed recently wrote about at length and I’d also noted in my earlier 2026-forecast coverage, AI-fueled HBM memory is capturing the lion’s share of customer demand (and therefore supplier attention) right now, versus the DDR4- and DDR5-generation DRAM technologies found in computers, smartphones, tablets, and the like. Speaking of AI, Samsung Mobile (like Google, and in partnership with Google, along with Perplexity) is betting on it as a trend-setting differentiator from Apple’s underperforming alternative, no matter that it ended up not being a broadly effective sales pitch motivator last year. That Apple has now partnered with Google, too, must have been a hard pill for Cupertino to swallow. Oh, and by the way, once again, no Qi magnets, although the argument is pretty persuasive, at least to me. Paraphrasing: “Why bother doing so, bumping up the bill-of-materials cost in the process, since most everybody also uses phone cases anyway, and they already come with magnets?”

Not a one-trick pony

All of which leads us to Apple itself, which yesterday (as I’m writing these words on Tuesday afternoon, March 3) released its latest entry-level smartphone, the iPhone 17e:

Minutiae first: a year ago, I gave the company grief for busting through the $500 price barrier while, as the original MagSafe innovator, bafflingly leaving magnets off its wireless charging implementation. First World problem solved: unlike with Google and Samsung, as earlier mentioned, they’re there in the iPhone 17e. We can all now once again sleep soundly.

Now, for memory, specifically (in this case) flash memory. Like Samsung but unlike Google, Apple lopped the prior-generation 128 GByte storage capacity option off the low end of the product suite. But unlike both Samsung and Google, the capacity increase comes with no associated price increase; Apple has stuck with $599 for the now-256 GByte variant this time. The SoC is also upgraded, from the A18 to A19 (the same generation as in the iPhone 17), albeit with only 4 GPU cores (versus 5 with the iPhone 17), as is the cellular modem (the newer C1X). And a few other tweaks: a third color option (pink) and updated Ceramic Shield 2 front glass protection.

Since, as I mentioned at the beginning, MWC has expanded beyond phones into tablets (among other things), I’ll also lump into today’s coverage the latest M4 SoC-based generation of the iPad Air, which Apple also announced yesterday.

As before, it comes in both 11” and 13” variants; the N1 networking and C1X cellular chips are also on board for the ride this time. Echoing back to my earlier highlight of the iPhone 17-vs-17e A19 SoC core-count discrepancy, the version of the M4 SoC in the new iPad Air is also downbinned from the ones in the various versions of the M4 iPad Pro, albeit this time from both CPU (both performance and efficiency, in fact) and GPU core-count standpoints, with requisite benchmarking-results impacts. And once again, memory is the most notable news (IMHO, at least) with these devices. But this time, DRAM is in the spotlight. Likely with locally stored AI model sizes in mind, the low-end M4 iPad Air variants deliver a 50% capacity increase (from 8 GBytes to 12 GBytes), still with no corresponding price increase…

…which circles us back to my memory-related comments that kicked off this piece. If volatile (DRAM) and nonvolatile (flash memory) supplies are constrained, and prices are therefore skyrocketing, why is Google able to hold steady on its device pricing, and Apple to go even further, holding prices while simultaneously boosting on-device capacities? Right now, I suspect, both companies’ sizes have enabled them to negotiate favorable pricing and volume contracts with memory suppliers. And further to the “sizes” point, even after those contracts time out, I suspect that both companies will be willing (albeit not necessarily delighted) to endure short-term profit margin pain in order to squeeze smaller, less profitable competitors out of the long-term market.

More to come

When I saw yesterday that Apple had released new public beta versions of its next operating system updates for phones and tablets, but not for computers, I suspected that this delay was only temporary and related to new computers planned for announcement today. And right on schedule, they (therefore it) came this morning: updated versions of the 14” and 16” MacBook Pro, based on the new Pro and Max variants of last fall’s M5 SoC (now also inside the MacBook Air), along with a duet of new displays.

I doubt we’re done; a new low-end MacBook (likely named the Neo) based on the iPhone 16 Pro’s A18 Pro SoC is rumored to still be in the queue for Apple’s “big week ahead”, for example, and I can’t help but wonder if we’ll also get an M5-based Mac mini (last updated in November 2024). Stay tuned for more coverage to come from yours truly, hopefully later this week. And until then, let me know your so-far thoughts in the comments!

P.S. Two more MWC-related tidbits: Qualcomm has a promising next-generation SoC for smart watches and other wearables on the way. And speaking of Qualcomm, ready or not, 6G is coming…

Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.

Related Content

The post MWC 2026: Apple, Google, Samsung and Other Contending Contestants appeared first on EDN.

23MHz oscillator without schematic. Random design.

Reddit:Electronics - Wed, 03/04/2026 - 17:15

As you can see, I have gone completely my own way to make this oscillator. It uses a 25kHz xtal and a 2N3904 transistor, a 1M ohm pot, and one 5k pot; the power supply comes from 15V scaled down to 9V using a 100k pot + 2N3904 + 1k resistor. I know the picture shows 10k, but that didn't give me the full voltage range, so I used 100k instead. I have no idea how I got this working, and I am somewhat surprised that a 2N3904 can oscillate at 20MHz+.

submitted by /u/Whyjustwhydothat

Blue Moon to acquire Apex germanium and gallium mine from Teck

Semiconductor today - Wed, 03/04/2026 - 15:19
Teck American Inc of Spokane, WA, USA (a subsidiary of Canada-based Teck Resources Ltd) has agreed to sell the past-producing Apex germanium (Ge), gallium (Ga) and copper (Cu) mine in Utah to Blue Moon Metals Inc of Toronto, ON, Canada, which says it will become a key stakeholder to support an integrated pipeline of US critical mineral projects to secure North American supply. The transaction adds to a working relationship with key shareholder Hartree Partners LP, which is described as a key partner with the US government on its recently announced US$12bn critical metals stockpile...

Stretching a bit

EDN Network - Wed, 03/04/2026 - 15:00

I love Design Ideas (DIs) with a backstory.  Recently, frequent DI contributor Jayapal Ramalingam published an engaging tale of engineering ingenuity coping with a design feature requirement added unexpectedly and very (very!) late in product development: “Using a single MCU port pin to drive a multi-digit display.”

Jayapal writes, “Imagine a situation where you have only one port line left out, and you are suddenly required to add a four-digit display.”

Yikes!  Add a looming delivery deadline to build suspense, and this becomes a classic nightmare scenario. It could easily develop, from an engineering standpoint, into a horror story straight out of the pages of Stephen King. Well, okay. Almost.

Wow the engineering world with your unique design: Design Ideas Submission Guide

But in a clever plot twist, engineer Jayapal shows how a bit (no pun!) of ingenuity turns this tale of terror into an opportunity for some cool circuit design. In his DI, different durations of software-generated pulses on that lonely port line become the control signals necessary for running the newly needed decimal display.

Crisis and calamity averted.

So I wondered how the same basic plot could form the basis of a more generalized storyline. In this version, not just four digits of numerical binary-coded decimal (BCD), but N bits of arbitrary parallel binary outputs would be driven in similarly solitary serial fashion. And all this would be achieved by the same singleton GPIO port bit. Figure 1 shows how the story takes shape.

Figure 1 A lonely GPIO bit loads a lengthy serial string of parallel registers. 

Incoming pulses of variable length on GPIO are buffered by noninverting gate U1a and drive three sets of inputs:

  1. the U1b timing circuit (400 µs R1C3; SER input zero/one discriminator),
  2. the U1cd AC-coupled Schmitt trigger (2.4 ms R4C2; parallel RCLK clock), and
  3. the SRCLK serial clock of the shift registers.

As illustrated in Figure 2, the interpulse (idle) state of the GPIO is high = 1. 

Figure 2 GPIO pulse timing.

A serial bit transfer pulse starts when the GPIO goes low = 0, releasing the timing RCs. Whether the pulse shifts in a 0 or a 1 depends on its duration. If < 100 μs (T0), the R1C3 time constant will still hold SER low when the rising edge of SRCLK clocks the serial registers, causing a 0 bit to be shifted in. If > 400 μs (T1), the opposite will occur, and the shift register gets a 1.

In this way, a data rate between 2 kbps and 10 kbps (depending on the relative frequencies of ones and zeros) can be maintained as long as the idle period between pulses remains less than 600 μs. Completion of data transfer is signaled by allowing GPIO to remain idle for > TR = 3.5 ms.  This allows R4C2 to time out and a transfer pulse to occur on RCLK, commanding a broadside parallel data transfer from the shift registers to the parallel output bits.
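
To make the pulse timing concrete, here is a minimal MicroPython-flavored sketch of the sender side. The pin number, the exact widths chosen within the T0/T1 windows, and the bit ordering are my assumptions for illustration, not part of the original design; which physical output each bit lands on depends on how the register chain is wired.

```python
# Hypothetical MicroPython sender for the Figure 1 circuit, following the
# Figure 2 timing: a short low pulse shifts in a 0, a long one shifts in a 1,
# and a >3.5 ms idle lets R4C2 time out and fire the RCLK parallel transfer.
import time
from machine import Pin

gpio = Pin(2, Pin.OUT, value=1)   # idle state is high; pin number assumed

T0_US = 50     # < 100 us  -> shifts in a 0
T1_US = 500    # > 400 us  -> shifts in a 1
GAP_US = 200   # inter-pulse idle, kept well under 600 us
TR_MS = 5      # final idle > 3.5 ms latches outputs via RCLK

def send_word(bits):
    """Shift 'bits' into the register chain, then latch to the parallel outputs."""
    for bit in bits:
        gpio.value(0)                            # falling edge releases the timing RCs
        time.sleep_us(T1_US if bit else T0_US)   # pulse width encodes the bit
        gpio.value(1)                            # rising edge clocks SRCLK
        time.sleep_us(GAP_US)
    time.sleep_ms(TR_MS)                         # quiet period -> RCLK broadside transfer

# Example: alternate 1s and 0s across a 16-bit (two-register) chain.
send_word([1, 0] * 8)
```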

Note that, going back to the original horror story, four BCD digits = 16 bits, two 8-bit shift registers, and 12 ms would be enough logic and time. I think that makes for a pretty good ending for a yarn about a far stretch of a single bit.

Stephen Woodward‘s relationship with EDN’s DI column goes back quite a long way. Over 200 submissions have been accepted since his first contribution back in 1974.  They have included best Design Idea of the year in 1974 and 2001.

Related Content

The post Stretching a bit appeared first on EDN.

EV design: The truth about 400-V to 800-V battery transition

EDN Network - Wed, 03/04/2026 - 14:45

In electric vehicle (EV) designs, the shift from 400-V to 800-V battery systems is now a pressing issue. So, the panel discussion on the first day of Automotive Tech Forum 2026 was a good venue for a reality check on the future of 800-V EV architectures.

The panel titled “Powering the Electric Vehicle: From Semiconductors to Systems” explored the latest in battery management system (BMS) designs, what battery modeling tells us about the design challenges as we move toward 800-V systems, and how design building blocks like motor control in EV traction are coping with this transition.

The panelists discussed how 800-V EV architectures could reshape vehicle power distribution. Jerry Shi, sector general manager for EV, HEV, and Powertrain at Texas Instruments, spoke about the emerging 800-V EV design landscape, specifically from a drivetrain standpoint. He also outlined critical design challenges and viable solutions in this design arena.

Carsten Himmele, marketing manager for Automotive at Allegro MicroSystems, cautioned about the industry-wide adoption of 800-V battery systems. “The 400-V battery systems will still dominate mainstream markets due to cost and complexity trade-offs.”

Rohan Samsi, VP of GaN Business Division at Renesas, echoed similar sentiments while envisioning a deeper adoption of 800-V architectures to address range anxiety and efficiency concerns. He acknowledged the challenges such as cost, complexity, and consumer preferences. “The trade-offs between 400-V and 800-V architectures relate to component complexity and service warranty costs.”

So, on the 400-V to 800-V transition, there was a consensus that 800-V systems offer advantages in fast charging and reduced weight. For now, however, panelists expect 400-V systems to remain dominant in mainstream markets due to their affordability.

Related Content

The post EV design: The truth about 400-V to 800-V battery transition appeared first on EDN.

Pages

Subscribe to the Department of Electronic Engineering feed aggregator