Feed aggregator
Keysight & Samsung: Industry-First NR-NTN S-Band & Satellite Mobility Success
Keysight Technologies announced a groundbreaking end-to-end live New Radio non-terrestrial network (NR-NTN) connection in band n252, as defined by the Third Generation Partnership Project (3GPP) under Release 19, achieved using Samsung Electronics’ next-generation modem chipset. The demonstration, taking place at CES 2026, includes live satellite-to-satellite (SAT-to-SAT) mobility using commercial-grade modem silicon and cross-vendor interoperability, marking an important milestone for the emerging direct-to-cell satellite market.
The achievement also marks the public validation in an NTN system of n252, a new band expected to be adopted by next-generation low Earth orbit (LEO) constellations.
Reliable global connectivity is a growing requirement for consumers, vehicles, IoT devices, and critical communications. As operators, device manufacturers, and satellite providers accelerate investment in NTN technologies, this achievement shows decisive progress toward direct-to-cell satellite coverage.
With the addition of n252 alongside earlier NTN demonstrations in n255 and n256, all major NR-NTN FR1 bands have now been validated end-to-end. This consolidation of band coverage is critical for enabling modem vendors, satellite operators, and device manufacturers to evaluate cross-band performance and mobility holistically as they prepare for commercial NTN services.
Keysight’s NTN Network Emulator Solutions recreate realistic multi-orbit LEO conditions, SAT-to-SAT mobility, and end-to-end routing while running live user applications over the NTN link. Together with Samsung’s chipset, the system validates user performance, interoperability, and standards conformance, providing a high-fidelity test environment that reduces risk, accelerates trials, and shortens time-to-market for NR-NTN solutions expected to scale in 2026.
The demonstration integrates Samsung’s next-generation modem chipset with Keysight’s NTN emulation portfolio to deliver real, standards-based NTN connectivity across a complete system. The setup validates end-to-end link performance, mobility between satellites, and multi-vendor interoperability, essential requirements for large-scale NTN deployments.
Peng Cao, Vice President and General Manager of Keysight’s Wireless Test Group, said: “Together with Samsung’s System LSI Business, we are demonstrating the live NTN connection in 3GPP band n252 using commercial-grade modem silicon with true SAT-to-SAT mobility. With n252, n255, and n256 now validated across NTN, the ecosystem is clearly accelerating toward bringing direct-to-cell satellite connectivity to mass-market devices. Keysight’s NTN emulation environment gives chipset and device makers a controlled way to prove multi-satellite mobility, interoperability, and user-level performance, helping the industry move from concept to commercialisation.”
The post Keysight & Samsung: Industry-First NR-NTN S-Band & Satellite Mobility Success appeared first on ELE Times.
Quantum Technology 2.0: Road to Transformation
Courtesy: Rohde & Schwarz
After more than 100 years of research, quantum technology is increasingly finding its way into everyday life. Examples include its use in cell phones, computers, medical imaging methods and automotive navigation systems. But that’s just the beginning. Over the next few years, investment will increase significantly, and lots of other applications will take the world by storm. While test & measurement equipment from Rohde & Schwarz and Zurich Instruments is helping develop these applications, the technology group’s encryption solutions are ensuring more secure communications based on quantum principles.
Expectations for quantum technology are greater than in almost any other field. That’s no surprise, given the financial implications associated with the technology. For example, consulting firm McKinsey & Company estimates the global quantum technology market could be worth 97 billion dollars by 2035. According to McKinsey, quantum computing alone could be worth 72 billion dollars, and quantum communications up to 15 billion.
Previous developments clearly show that the projected values are entirely realistic. Many quantum effects have become part of our everyday lives. Modern smartphones, for example, contain several billion transistors, predominantly in flash memory chips. Their function – controlling currents and voltages – is based on the quantum mechanical properties of semiconductors. Even the GPS signals used in navigation systems and the LEDs used in smartphone flashlights are based on findings from quantum research.
To celebrate these achievements, UNESCO declared 2025 the “International Year of Quantum Science and Technology” – exactly 100 years after German physicist Werner Heisenberg developed his quantum mechanics theory based on the research findings of the time. Quantum technology was also in the spotlight with the 2025 Nobel Prize in Physics, which was awarded to quantum researchers John Clarke, Michel Devoret, and John Martinis.
Quantum technology 2.0: what can we expect?
Quantum physics in secure communications: Whether personal or professional, beach holiday snapshots or development proposals for new products, our data and data transmission need to be protected. Companies today consistently name cyberattacks and the resulting consequences as the top risk to their business. Developments in quantum computing are revealing the limits of conventional encryption technologies. Innovations in quantum communications are the key to the future, as they enable reliable detection of unauthorised access. This means you can create a genuine high-security channel for sensitive data.
Upgrading supply chains: Global flows of goods reach every corner of the Earth, and everything is now just a click away: a new tablet for home use or giveaways for a company party. But behind the scenes lies a complex logistics network of manufacturers, service providers, suppliers, merchants, shipping companies, courier services, and much more. The slightest backlog at a container port or change in the price of purchased items means alternatives must be found – preferably in real time. But the complexity of this task is also beyond what conventional computers can handle.
Personalised medicine: Everyone is different, and so are our illnesses. Cancer cells, for example, differ from one person to the next and often change over time. These differences and changes are already well documented in analytical terms, which has created huge amounts of data. Big Data is the buzzword. But evaluating this data quickly and effectively, to develop personalised forms of treatment, is impossible for conventional computers.
Fast. Faster. Quantum computing.
Our world is controlled by binary code. Conventional computers process data as sequences of ones and zeros, true or false, off or on. This applies to everything, from simple text processing to virtual reality in the metaverse. But the world we live and work in is becoming increasingly complex. The amount of data we need to process is growing rapidly. In 2024, global digital data traffic had more than quadrupled over the space of just five years to 173.4 zettabytes. By 2029, experts believe this number will reach 527.5 zettabytes, equivalent to 527.5 trillion gigabytes.
Conventional computers face two insurmountable obstacles as a result: time and complexity. The larger the volume of data, the more time you need to process that data sequentially. The more complex the problem, the lower the probability that a binary code, with only two states, will be able to efficiently calculate a solution. Quantum computers have the potential to overcome both obstacles using insights from modern physics.
Hand in hand instead of either-or
Like conventional bits, quantum bits (qubits) form quantum mechanical memory units. In addition to just zeros and ones, they can also assume overlapping, mixed states. This simultaneity represents a fundamental technological paradigm shift. We can now run conventional sequential calculation methods simultaneously, which is why a quantum computer can save so much time.
But above all, the new quantum mechanical approach allows us to process new and much more complex questions. However, it’s not an either-or decision, either conventional processing power or quantum computing. Instead, what matters is integrating existing and quantum systems depending on the task.
Physics versus logic
In the quantum world, a particle can be in two places at the same time. Only when it is observed, for example by measuring it, can you narrow down its location. This unusual property also makes quantum states extremely unstable. Instead of using individual physical qubits, which can be very error-prone, multiple qubits are grouped into a logical qubit. The challenge, however, is that you need quantum systems with as many as one million logical qubits in order to answer practical questions, like protein folding. A logical qubit can contain up to 100 physical qubits, yet the largest systems today offer only 1,225 physical qubits.
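The gap between those numbers is worth making explicit. A trivial calculation, using only the figures quoted above, shows its size:

```c
#include <stdio.h>

/* Scale of the error-correction challenge, using the figures quoted
 * in the text: ~1,000,000 logical qubits wanted for problems like
 * protein folding, up to ~100 physical qubits per logical qubit,
 * versus ~1,225 physical qubits in today's largest processors. */
int main(void) {
    const double logical_needed       = 1e6;
    const double physical_per_logical = 100.0;
    const double physical_today       = 1225.0;

    double physical_needed = logical_needed * physical_per_logical;
    printf("Physical qubits needed: %.0f\n", physical_needed);
    printf("Gap vs. today's hardware: ~%.0fx\n",
           physical_needed / physical_today);   /* ~81,633x */
    return 0;
}
```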
Zurich Instruments has been part of the Rohde & Schwarz family since 2021. The T&M market for quantum computing holds enormous potential for both companies. Operating and maintaining quantum computers requires a wide range of specific T&M solutions because RF signals need to be generated and measured with extremely high precision to effectively create and record quantum states. Control systems for quantum computers are part of the company’s portfolio.
Secure. More secure. Quantum communications.
Quantum computers have the potential to push the limits of processing efficiency. But this brings challenges, including secure communications – increasingly a priority in view of “Q-Day”, the point at which quantum computers will be able to crack classic encryption.
That is why alternative encryption methods are becoming increasingly important. There are essentially two main approaches. The first is post-quantum cryptography, which involves conventional encryption methods with one key difference: they can survive attacks from quantum computers unscathed. The algorithms used in this approach are based on theoretical assumptions for which no effective attacks are currently known using either quantum or conventional computers.
The other approach relates to quantum key distribution (QKD). The German Federal Office for Information Security (BSI) and the National Institute of Standards and Technology (NIST) are two of the main drivers of innovation in this area. In an increasingly digitalised world, private-sector customers, and government customers in particular, are dependent on trustworthy IT security solutions. Secure communications networks have become a critical infrastructure in advanced information societies.
These innovative solutions are shifting the focus of cryptology. Conventional methods, as well as more recent post-quantum methods, are based on mathematical assumptions, i.e. the idea that certain tasks cannot be calculated with sufficient efficiency. Quantum key distribution, by contrast, is based on physical principles. Rohde & Schwarz Cybersecurity is providing and leveraging its extensive expertise in security solutions, as well as its experience in building and implementing secure devices and systems, in a variety of research projects.
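To make the physics-versus-mathematics distinction concrete, here is a toy, purely classical simulation of BB84, the textbook QKD protocol. This is a pedagogical sketch only; real QKD uses single photons and authenticated channels, and this is not a Rohde & Schwarz implementation.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Toy BB84 sketch: illustrates why security rests on physics (basis
 * choice and measurement) rather than computational hardness.
 * No real QKD stack works like rand(). */
#define N_PHOTONS 32

int main(void) {
    srand((unsigned)time(NULL));
    int key[N_PHOTONS], key_len = 0;

    for (int i = 0; i < N_PHOTONS; i++) {
        int alice_bit   = rand() & 1;   /* Alice's raw bit               */
        int alice_basis = rand() & 1;   /* 0 = rectilinear, 1 = diagonal */
        int bob_basis   = rand() & 1;   /* Bob guesses a basis           */

        /* If bases match, Bob reads Alice's bit; otherwise his result
         * is random and the round is discarded during "sifting".
         * An eavesdropper measuring in the wrong basis disturbs the
         * state, which shows up as errors in the sifted key. */
        if (alice_basis == bob_basis)
            key[key_len++] = alice_bit;
    }

    printf("Sifted key (%d of %d rounds kept): ", key_len, N_PHOTONS);
    for (int i = 0; i < key_len; i++) printf("%d", key[i]);
    printf("\n");
    return 0;
}
```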
The post Quantum Technology 2.0: Road to Transformation appeared first on ELE Times.
Develop Highly Efficient X-in-1 Integrated Systems for EVs
Courtesy: Renesas
The recent tightening of CO2 emission regulations has accelerated the electrification of automobiles at an unprecedented pace. With the global shift from hybrid vehicles to electric vehicles (EVs), automakers are demanding more efficient, safe, and reliable systems. System integration, known as “X-in-1”, has become the focus of attention. This innovative concept integrates functions traditionally controlled by separate MCUs, such as inverters, onboard chargers (OBC), DC/DC converters, and battery management systems (BMS), into a single microcontroller (MCU), achieving simultaneous miniaturisation, cost reduction, and efficiency improvement. As electric vehicles evolve, demand grows for X-in-1 configurations that consolidate multiple applications onto a single MCU.
At the core of this X-in-1 approach is Renesas’ RH850/U2B MCUs. This next generation of MCUs delivers the advanced control, safety, and security required by EVs on a single chip. It features a high-performance CPU with up to six cores, operating at up to 400MHz, enabling both real-time control and parallel processing. It also offers comprehensive analogue and timer functions for inverter and power converter applications, enabling efficient control of the entire electrification system on a single chip. Furthermore, the RH850/U2B MCUs offer a wide memory lineup, allowing flexible implementation of the optimal X-in-1 system tailored to specific requirements.
Figure 1. Comparison of MCU Configuration Before and After X-in-1 Integration
The RH850/U2B MCU demonstrates overwhelming superiority in inverter control, maximising the driving performance of EVs. With dedicated hardware optimised for inverter control, including a resolver-to-digital converter (RDC), an analogue-to-digital converter (ADC), and timers for three-phase motors, the RH850/U2B MCU enables high-speed, high-precision control at the hardware level that software alone cannot achieve. The integrated RDC eliminates the need for external angle-detection ICs, contributing to reduced component count and simplified board design. Furthermore, the embedded Renesas-proprietary Enhanced Motor Control Unit (EMU) executes complex control calculations in hardware, significantly reducing CPU load while achieving high-speed, high-precision motor control (the EMU is only included in the RH850/U2B6).
Figure 2. Comparison of External RDC and Internal RDC
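For context on the “complex control calculations” that the EMU offloads, the core of field-oriented motor control is the Clarke and Park transforms, which must run every PWM cycle. The sketch below is generic, freestanding C for illustration; it is not Renesas EMU firmware or an RH850 API.

```c
#include <math.h>
#include <stdio.h>

/* Generic Clarke and Park transforms from field-oriented motor control:
 * the class of per-PWM-cycle math that a dedicated block such as an EMU
 * offloads from the CPU. */

typedef struct { float alpha, beta; } ab_t;   /* stationary frame */
typedef struct { float d, q; } dq_t;          /* rotor frame      */

/* Clarke: three-phase currents (ia + ib + ic = 0) -> alpha/beta frame */
static ab_t clarke(float ia, float ib) {
    ab_t s = { ia, (ia + 2.0f * ib) * 0.57735027f };  /* 1/sqrt(3) */
    return s;
}

/* Park: alpha/beta -> d/q frame, using the rotor angle (from the RDC) */
static dq_t park(ab_t s, float theta) {
    float c = cosf(theta), sn = sinf(theta);
    dq_t r = { s.alpha * c + s.beta * sn, -s.alpha * sn + s.beta * c };
    return r;
}

int main(void) {
    /* Example sample: phase currents in amperes, rotor angle in radians */
    ab_t s = clarke(1.0f, -0.5f);
    dq_t r = park(s, 0.3f);
    printf("alpha=%.3f beta=%.3f d=%.3f q=%.3f\n", s.alpha, s.beta, r.d, r.q);
    return 0;
}
```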
Next-generation power devices using silicon carbide (SiC) and gallium nitride (GaN) are increasingly being adopted in OBCs and DC/DC converters. These devices enable high efficiency and fast switching, directly contributing to shorter charging times and improved energy efficiency. To exploit them, the RH850/U2B MCU incorporates a multifunctional timer (a generic timer module (GTM) and high-resolution PWM) that is capable of generating high-speed, high-resolution waveforms (minimum resolution of 156.25 ps). This facilitates control that leverages the high-speed switching characteristics of SiC and GaN. It also incorporates a 12-bit fast comparator for high-frequency switching control and protection operations.
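A quick back-of-the-envelope calculation shows what 156.25 ps of timer resolution means in practice. The 500 kHz switching frequency below is an assumed example for a SiC/GaN converter, not a Renesas specification.

```c
#include <stdio.h>

/* How many 156.25 ps timer ticks make up one switching period, and
 * what duty-cycle granularity that gives. The 500 kHz figure is an
 * illustrative assumption for a fast SiC/GaN stage. */
int main(void) {
    const double tick_s  = 156.25e-12;   /* minimum PWM resolution      */
    const double f_sw_hz = 500e3;        /* assumed switching frequency */

    double period_s = 1.0 / f_sw_hz;
    double ticks    = period_s / tick_s;   /* 12800 ticks per period */
    double duty_res = 100.0 / ticks;       /* percent per tick       */

    printf("Ticks per period: %.0f\n", ticks);
    printf("Duty resolution : %.4f %% per tick\n", duty_res);  /* ~0.0078 % */
    return 0;
}
```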
In addition to speed and energy efficiency, the RH850/U2B MCU also delivers outstanding performance in battery management systems, the heart of EVs. Monitoring and controlling the voltage and temperature of hundreds of cells demands high processing power. The RH850/U2B MCU features a multi-core CPU, allowing surplus resources to be allocated to BMS processing. This enables system miniaturisation and cost reduction without requiring additional MCUs.
As EVs proliferate, the importance of safety and security becomes critical. Compliant with ISO 26262 ASIL D, the RH850/U2B MCU ensures functional safety at the hardware level. It also incorporates security features compliant with EVITA Full, enabling the construction of highly secure systems even in X-in-1 configurations.
The evolution of EVs is moving towards faster, safer, and more efficient use of automobiles. Achieving this requires meeting new demands that conventional MCUs cannot fully address. The RH850/U2B MCU enables users to meet the needs of EVs with high-speed, high-precision inverter control via dedicated hardware; highly efficient switching control in OBCs and DC/DC converters using high-resolution, high-speed timers; multi-core utilisation in battery management systems; and comprehensive safety and security support.
The post Develop Highly Efficient X-in-1 Integrated Systems for EVs appeared first on ELE Times.
Meanwhile, my CPU is on fire.
Submitted by /u/TallIntroduction8053
Igor Sikorsky Kyiv Polytechnic Institute is the leader among Ukrainian universities in intellectual property
Igor Sikorsky Kyiv Polytechnic Institute is the leader among Ukrainian universities in intellectual property, according to the Ukrainian National Office for Intellectual Property and Innovations (UKRNOIVI).
Making a FOSS racing datalogger
I'm making a FOSS racing datalogger after I got into kart racing a few years ago and saw how expensive dataloggers were. I had to make the GPS lap-timing library, the datalogger itself, a case I designed and printed, and recently I started on a data viewer.

All of that took a year to perfect. The lap timing is within 0.002 s of the official lap timing, and I can do track/course selection, laps, pace, and even split timing on-device.

Now sure, it logs data, but it's not a datalogger without more data. Most other sensors are easy to implement... engine RPM, though: what a nightmare. I'm a software guy, never made hardware before, and barely have any idea what I'm doing, but I'm making progress. Right now I'm dealing with SD cards being corrupted, so I finally gave in and bought a scope to learn more, managed to build a drastically cleaner circuit than I had before, and got some hope. (Yes, vibration kills, but this is a new problem that came with adding the tachometer, and I haven't even gotten to testing that on track yet.) (I must do the weird capacitive dance the "real" ones do, but I don't have one to take apart, so we're gonna just keep winging it, baby.)

No, I don't want to talk about how much money I've spent at this point; I'm making an open-source, cheap datalogger. I probably should have gone to school for this, but hey, I've gotten this far on nothing but hopes and dreams. 20-year SWE brute-forcing myself into hardware.
Start of my FPGA programming journey: a 1-bit ALU on a T41 ThinkPad
Submitted by /u/Green-Pie4963
EEVblog 1729 - AC Basics Tutorial Part 7: AC Ohm's Law
Electronic circuit simulation engine for education
Hi Reddit. While reading Charles Petzold's great popular-science book CODE: The Hidden Language of Computer Hardware and Software, I thought it would be a cool educational project to animate the book's schematics to show how computers work down to the transistor level. So I created an electronic circuit engine to help people discover how electronics and computers work. You can check the demo here. This is a young open-source project, and all comments and feedback are very welcome!
Weekly discussion, complaint, and rant thread
Open to anything, including discussions, complaints, and rants.
Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.
Reddit-wide rules do apply.
To see the newest posts, sort the comments by "new" (instead of "best" or "top").
Found this AI generated 20V to 12V converter on the internet. Still laughing my ass off.
How the fuck would this even work lmao 🤣
Cogeneration as part of engineering education: KPI, together with RSE, launches a course within the Energy Resilience Lab
Starting next semester, Igor Sikorsky Kyiv Polytechnic Institute is introducing a course on cogeneration, integrated into its degree program for power engineers. The program was developed in partnership with the engineering company RSE, with the support of GIZ, and will run in the Energy Resilience Lab, the university's laboratory for autonomous and resilient energy supply.
Perceptra secures €1.2m funding from PhotonDelta
Using a single MCU port pin to drive a multi-digit display
When we design a microcontroller (MCU) project, we normally leave a few port lines unused, so that last-minute requirements can be met. Invariably, even those lines also get utilized as the project progresses.
Imagine a situation where you have only one port line left, and you are suddenly required to add a four-digit display. (Normally, you need 16 output port lines to drive a four-digit display, or 8 port lines to drive a multiplexed four-digit display.) In such a critical situation, the Figure 1 circuit will come in handy.
Figure 1 A single MCU port pin outputs a reset pulse first and then a number of count pulses equal to the number to be displayed.
Figure 1’s top left portion is a long pulse detector circuit, a Design Idea (DI) of mine published in October 2023. For the components selected, this circuit outputs a pulse only when its input pulse width is more than 1 millisecond (ms). For shorter pulses, its output stays LOW.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1’s circuit can be made as an add-on module to your MCU project. When a display update is needed, the MCU should first send a single reset pulse of 2 ms ON and 2 ms OFF. This long pulse resets the counter/decoders.
Then, it sends 0.1-ms ON and 0.1-ms OFF count pulses, whose number equals the four-digit number to be displayed. For example, if the number 4950 is to be displayed, the MCU will send one reset pulse followed by 4950 count pulses. Then, the MCU can continue with its other functions.
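For concreteness, here is a minimal MCU-side sketch of this pulse protocol in C. The pin-write and delay helpers are hypothetical placeholders for whatever GPIO HAL your MCU provides; the timings simply follow the values given above.

```c
#include <stdint.h>

/* Hypothetical HAL hooks -- substitute your MCU's GPIO write and
 * busy-wait/timer delay routines. */
extern void pin_write(int level);     /* drive the single port pin */
extern void delay_us(uint32_t us);    /* microsecond delay         */

static void pulse(uint32_t on_us, uint32_t off_us) {
    pin_write(1);
    delay_us(on_us);
    pin_write(0);
    delay_us(off_us);
}

/* Update the 4-digit display: one long reset pulse (2 ms ON / 2 ms OFF),
 * then 'value' count pulses of 0.1 ms ON / 0.1 ms OFF. Worst case
 * (9999) takes 9999 * 0.2 ms ~= 2 s, as noted in the text. */
void display_update(uint16_t value) {
    pulse(2000, 2000);          /* reset: > 1 ms trips the long pulse detector */
    for (uint16_t i = 0; i < value; i++)
        pulse(100, 100);        /* 0.1 ms each, counted by U2-U5 */
}
```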
The long pulse detector circuit built around Q1, Q2, and U1A outputs a pulse for every input pulse whose ON width is more than 1 ms. At the start, the MCU outputs a LOW. This turns Q1 OFF and allows Q2 to saturate, discharging C1.
When a 2-ms pulse arrives, Q1 saturates and Q2 turns OFF. During this period, C1 charges through R3, and its voltage rises to around 1.8 V. This voltage is applied to the positive input of comparator U1A, whose negative input is held at 1 V by the R4/R5 divider. Hence, U1A outputs HIGH.
For shorter pulses, this output remains LOW. So, when the MCU sends one reset pulse, U1A outputs a HIGH, which resets the U2, U3, U4, and U5 counter/decoders.
These counters then count the number of count pulses sent next and display the result. U2–U5 are counter/7-segment decoders that drive common-cathode seven-segment LED displays.
For a maximum count of 9999, the display update may take around 2 seconds. This time can be reduced by reducing the count pulse duration, depending upon the MCU and clock frequency selected.
I have used one resistor per display for brightness control (R7, R8, R9, and R10). This will not give equal brightness across all segments of a digit; instead, you may use seven resistors per display, or a resistor network per display, for uniform brightness.
This idea can be extended to any number of displays driven by a single MCU port line. For more information, watch my video explaining this design:
Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.
Related Content
- A long pulse detector circuit
- How to design LED signage and LED matrix displays, Part 1
- DIY LED display provides extra functions and PWM
- Implementing Adaptive Brightness Control to Seven Segment LED Displays
- An LED display adapted for DIY projects
The post Using a single MCU port pin to drive a multi-digit display appeared first on EDN.
The high-speed data link to Mars faces a unique timing challenge

Experienced network designers know that the achievable performance of a data link depends on many factors, including the quality and consistency of the inherently analog medium between the two endpoints. Whether it’s air, copper, fiber, or even vacuum, that medium sets a basic operating constraint on the speed and bit error rate (BER) of the link.
Any short- or longer-term perturbation in the link, including external and internal noise, distortion, phase shifts, media shifts, and other imperfections, will result in a lower effective data rate and a need for more data encoding, error detection and correction, and re-transmissions.
A critical element in high-speed, low-BER data recovery is the advanced clock recovery and re-clocking for synchronization, accomplished using phase-locked loops (analog or digital) and other arrangements. The unspoken assumption is that the fundamental measurement of “time” is the same at both ends of the link. This can be ensured by using atomic or laser-optical clocks of outstanding precision and performance when crystal- or resonator-based clocks won’t suffice.
But that endpoint equivalence is not necessarily the case. If we want to establish a long-term robotic or even human presence on our neighbor Mars, and set up a robust high-speed data link, we need to know the answer to a basic question: What time is it on Mars?
It turns out that it’s not a trivial question to answer. As Einstein showed in his classic 1905 paper on special relativity “On the Electrodynamics of Moving Bodies,” and subsequent work on general relativity, clocks don’t tick at the same rate across the universe. They will run slightly faster or slower depending on the strength of gravity in their environment, as well as their relative velocity with respect to other clocks.
This time dilation is not a fanciful theory, as it has been measured and verified in many experiments. It even points to a correction factor that must be applied to satellites orbiting the Earth. Without those adjustments, GPS signal timing would be “off” and its accuracy seriously degraded. It’s a phenomenon that is often, and quite correctly, summarized simply as “moving clocks run slow.”
The general problem of time dilation, objects in motion, and gravity’s effects has been known for many years, and it can be a problem for non-orbiting space vehicles as well. To manage the problem, Barycentric Coordinate Time—known as TCB, from the French name—is a coordinate time standard defined in 1991 by the International Astronomical Union.
TCB is intended to be used as the independent variable of time for all calculations related to orbits of planets, asteroids, comets, and interplanetary spacecraft in the solar system, and defines time as experienced by a clock at rest in a coordinate frame co-moving with the barycenter (center of mass) of the solar system.
What does this have to do with Mars and data links? As shown in Figure 1, the magnitude of the dilation-induced time “slippage” between Earth and Mars is one factor that affects maintaining a high-speed link between these two planets.

Figure 1 In addition to supplying “hard” data from landed rovers and orbiting science packages, Mars—also known as “the red planet”—presents a complicated time-dilation scenario. Source: NIST
Now, a team of physicists at the National Institute of Standards and Technology (NIST) has calculated a fairly precise answer for the first time. The problem is complicated as there are four primary “players” to consider: Mars, Earth, Sun, and even our Moon (and the two small moons of Mars also have an effect, though much smaller).
Why the complication? It’s been known since the 1800s that the three-body problem has no general closed-form solution, and the four-body problem is worse. That means there is no explicit formula that can resolve the positions of the bodies in the dilation analysis. Consequently, number-crunching numerical calculations must be used, and it’s even more challenging with four and more bodies.
The researchers’ work is based not only on theory but also on measurements from the various “rovers” that have landed on Mars as well as Mars orbiters. The team chose a point on the Martian surface as a reference, somewhat like sea level at the equator on Earth, and used years of data collected from Mars missions to estimate the gravitational potential at the surface of the planet, which is about five times weaker than Earth’s. (It is gravitational potential, rather than the familiar 0.38-g surface gravity, that sets a clock’s rate.)
I won’t even try to explain the mathematics of the analysis; all I will say is that it’s the most “intense” set of equations I have ever seen, even compared to solid-state physics.
They determined that on average, clocks on Mars will tick 477 microseconds faster than those on Earth per day (Figure 2). However, Mars’ eccentric orbit and the gravity from its celestial neighbors can increase or decrease this amount by as much as 226 microseconds a day over the course of the Martian year.

Figure 2 Plots of the clock-rate offsets between a clock on Mars compared to clocks on the Earth and the Moon for ∼40 years starting from modified Julian date (MJD) 52275 (January 1, 2003), using DE440 data. DE440 is a highly accurate planetary and lunar ephemeris (a table of positions) from NASA’s Jet Propulsion Laboratory, representing precise orbital data for the Sun, Moon, planets, and Pluto. Source: NIST
The Mars clock is not only “squeezed” with respect to Earth; the amount of squeeze varies in a non-periodic way. In contrast, the researchers note that the Earth and Moon orbits are relatively constant: time on the Moon is consistently 56 microseconds per day faster than time on Earth.
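To put the numbers in perspective, a short back-of-the-envelope C program can integrate the daily rate over one Martian year. Modeling the ±226 µs/day variation as a sinusoid is purely an illustrative assumption (as noted above, the real variation is non-periodic); the point is the scale of the accumulated offset.

```c
#include <stdio.h>
#include <math.h>

/* Rough scale model of the NIST result: Mars clocks gain about
 * 477 us/day on average, with the daily rate varying by up to
 * +/-226 us over the Martian year. The sinusoidal modulation is an
 * illustrative assumption; the real variation is non-periodic. */
int main(void) {
    const double PI             = 3.14159265358979323846;
    const double mean_rate_us   = 477.0;  /* mean gain, microseconds/day  */
    const double swing_us       = 226.0;  /* peak rate variation, us/day  */
    const int    mars_year_days = 687;    /* one Martian year, Earth days */

    double offset_us = 0.0;
    for (int day = 0; day < mars_year_days; day++)
        offset_us += mean_rate_us
                   + swing_us * sin(2.0 * PI * day / mars_year_days);

    /* The sinusoid integrates to ~0 over a full year, leaving the mean:
     * 477 us/day * 687 days ~= 0.33 s of accumulated offset. */
    printf("Mars-Earth clock offset after one Martian year: %.3f s\n",
           offset_us / 1e6);
    return 0;
}
```

Over a 687-day Martian year, the mean rate alone accumulates roughly a third of a second, an enormous error by the standards of high-speed link synchronization.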
If you want the details, check out their open-access paper “A Comparative Study of Time on Mars with Lunar and Terrestrial Clocks” published in The Astronomical Journal of the American Astronomical Society. Don’t worry: a readable summary and overview is also posted at the NIST site, “What Time Is It on Mars? NIST Physicists Have the Answer.”
How engineers will deal with these results is another story, but timing is an important piece of the data link signal chain. Perhaps they will have to build an equivalent of the tide-predicting machine designed by William Thomson (later known as Lord Kelvin) shown in Figure 3.

Figure 3 This all-mechanical analog computer by William Thomson was designed to predict tides, which are determined by the cyclic motions of the Earth, Moon, and many other factors. Source: Science Museum London via IEEE Spectrum
This analog mechanical computer on display at the Science Museum in London was designed for one purpose only: combining 10 cyclic oscillations linked to the periodic motions of the Earth, Sun, and Moon and other bodies to trace the tidal curve for a given location.
Have you ever had to synchronize a data link with a nominally accurate clock on each end, but with clocks that actually had significant differences as well as cyclic and unknown shifting of their frequencies?
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- JPL & NASA’s latest clock: almost perfect
- “Digital” Sundial: Ancient Clock Gets Clever Upgrade
- Precision metrology redefines analog calibration strategy
The post The high-speed data link to Mars faces a unique timing challenge appeared first on EDN.
AXT updates Q4/2025 revenue guidance to $22.5–23.5m
Sumitomo Chemical exhibiting compound semiconductor products at Photonics West
Nuvoton and ITRI Join Forces to Accelerate Edge AI Adoption Across Industries with the Entry-Level M55M1 AI MCU
Nuvoton Technology, centred on its NuMicro M55M1 AI MCU, is partnering with the Industrial Technology Research Institute (ITRI) to promote integrated “hardware–software” edge AI solutions. These solutions support diverse application scenarios, including manufacturing, smart buildings, and healthcare, enabling industries across the board to adopt AI quickly in a “usable, manageable, and affordable” way, and bringing AI directly into frontline equipment and business processes.
Aligned with the National Science and Technology Council (NSTC) and the Ministry of Economic Affairs (MOEA) initiative to build the Taiwan Smart System Integration and Manufacturing Platform, Nuvoton follows ITRI’s three key pillars for AI development—data, computing power, and algorithms—together with a six-dimension AI readiness framework covering AI strategy, organizational culture, talent and skills, infrastructure, data governance, and risk management. Based on this framework, Nuvoton modularises its toolchains, AI models, and development board offerings, and works with ITRI’s Chip and System Integration Service Platform Program to establish a TinyML micro-computing platform. This platform enables small and medium-sized enterprises (SMEs) to complete proof-of-concept (PoC) projects with minimal entry barriers, progress toward pilot production, and scale through replication. At the same time, it promotes “dual-brain collaboration” between AI experts and domain specialists, increasing project success rates and supporting the government’s vision of building Taiwan into an “AI Island.”
As one of the few entry-level AI solutions on the market, the M55M1 integrates an Arm Cortex-M55 core (up to 220 MHz) with an Arm Ethos-U55 micro-NPU in a single chip, delivering around 110 GOP/s of acceleration for mainstream CNN/DNN inference. The chip features up to 1.5 MB of on-chip SRAM and 2 MB of Flash. It can be expanded via HyperBus to support HyperRAM/HyperFlash, enabling real-time, offline, low-power AI inference and control directly at the edge. Together with Nuvoton’s in-house NuML Toolkit and a variety of readily available AI models (such as face recognition, object detection, speech/command recognition, and anomaly detection), developers can quickly get started using a standard MCU development flow, effectively lowering the barrier to AI adoption.
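As a sketch of how such an edge-inference loop is typically structured, consider the C outline below. Every function name in it is a hypothetical placeholder, not an actual NuML Toolkit or M55M1 driver API; it only illustrates the capture, pre-process, NPU-inference, event-upload pattern described in this article.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical HAL/NN calls for illustration only -- these are NOT
 * actual NuML Toolkit or M55M1 driver APIs. */
extern bool camera_capture(uint8_t *frame, uint32_t len);                 /* image capture (hypothetical) */
extern void preprocess_to_tensor(const uint8_t *frame, int8_t *tensor);   /* resize/quantize (hypothetical) */
extern int  npu_invoke(const int8_t *input, int8_t *scores, int n);       /* micro-NPU inference (hypothetical) */
extern void report_event(int class_id, int8_t score);                     /* upload event only (hypothetical) */

#define FRAME_BYTES  (96 * 96)  /* assumed low-resolution grayscale frame */
#define N_CLASSES    4          /* e.g., OK plus several defect types     */
#define ALERT_THRESH 90         /* assumed quantized confidence threshold */

int main(void) {
    static uint8_t frame[FRAME_BYTES];
    static int8_t  tensor[FRAME_BYTES];
    static int8_t  scores[N_CLASSES];

    for (;;) {
        if (!camera_capture(frame, sizeof frame))
            continue;
        preprocess_to_tensor(frame, tensor);             /* pre-processing on the CPU      */
        int top = npu_invoke(tensor, scores, N_CLASSES); /* CNN runs on the micro-NPU      */
        /* Edge-alert pattern: send only events and key indicators, not
         * raw frames, balancing privacy with system availability. */
        if (top > 0 && scores[top] >= ALERT_THRESH)
            report_event(top, scores[top]);
    }
}
```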
Nuvoton and ITRI will first focus on three key real-world application scenarios:
- Edge inspection on manufacturing lines: Using CCAP for image pre-processing and U55 for inference to perform object detection or defect identification at the edge, supporting quality inspection as well as predictive analysis of equipment health.
- People flow detection and energy-saving control in smart buildings: Leveraging lightweight sensing such as PIR, ToF, or low-resolution imaging, combined with time-based and zoned control strategies, to drive lighting/HVAC on/off and dimming/airflow adjustments, thereby improving energy efficiency.
- Edge alerts for medical and long-term care: Performing posture and fall detection directly on end devices, uploading only events and key indicators to balance personal data protection with overall system availability.
Nuvoton and ITRI will continue to leverage Taiwan’s local supply chain and its strengths in hardware–software integration, using a systematic approach of “data × computing power × algorithms” to bring AI directly into real-world environments. With its single-chip capability to handle combined requirements in vision, audio, and control, the M55M1 enables small and medium-sized enterprises to embrace AI in an affordable and well-governed way.
Nuvoton is now collaborating with system integrators and field partners across scenarios such as manufacturing, buildings, healthcare, and public services, providing development boards, toolchains, and best-practice templates to help enterprises complete PoC and mass deployment in the shortest possible time. We welcome inquiries and partnership opportunities to jointly advance “AI in industries and industrialisation of AI,” accelerating AI transformation and value innovation across Taiwan’s many sectors.
The post Nuvoton and ITRI Join Forces to Accelerate Edge AI Adoption Across Industries with the Entry-Level M55M1 AI MCU appeared first on ELE Times.
Cadence to deliver pre-validated chiplet solutions to Accelerate Chiplet Time to Market
Root of Trust within the Cadence Chiplet Framework. As a leading provider of non-volatile memory technologies, the combination of eMemory technology and Cadence’s security subsystem results in a Physical AI Chiplet platform delivering secure storage and long-lifecycle key management, reinforcing the strong hardware foundation provided by Cadence for die-to-die security and safety in advanced chiplet designs.”
The post Cadence to deliver pre-validated chiplet solutions to Accelerate Chiplet Time to Market appeared first on ELE Times.