Feed aggregator
Nuvoton Launches Industrial-Grade, High-Security MA35 Microprocessor with Smallest Package and Largest Stacked DRAM Capacity
Nuvoton Technology announced the MA35D16AJ87C, a new member of the MA35 series microprocessors. Featuring an industry-leading 15 x 15 mm BGA312 package with 512 MB of DDR SDRAM stacked inside, the MA35D16AJ87C streamlines PCB design, reduces product footprint, and lowers EMI, making it an excellent fit for space-constrained industrial applications.
Key Highlights:
- Dual 64-bit Arm Cortex-A35 cores plus a Cortex-M4 real-time core
 - Integrated, independent TSI (Trusted Secure Island) security hardware
 - 512 MB DDR SDRAM stacked inside a 15 x 15 mm BGA312 package
 - Supports Linux and RTOS, along with Qt, emWin, and LVGL graphics libraries
 - Industrial temperature range: -40°C to +105°C
 - Ideal for factory automation, industrial IoT, new energy, smart buildings, and smart cities
 
The MA35D16AJ87C is built on dual Arm Cortex-A35 cores (Armv8-A architecture, up to 800 MHz) paired with a Cortex-M4 real-time core. It supports 1080p display output with graphics acceleration and integrates a comprehensive set of peripherals, including 17 UARTs, 4 CAN-FD interfaces, 2 Gigabit Ethernet ports, 2 SDIO 3.0 interfaces, and 2 USB 2.0 ports, among others, to meet diverse industrial application needs.
To address escalating IoT security challenges, the MA35D16AJ87C incorporates Nuvoton’s independently designed TSI (Trusted Secure Island) hardware security module. It supports Arm TrustZone technology, Secure Boot, and Tamper Detection, and integrates a complete hardware cryptographic engine suite (AES, SHA, ECC, RSA, SM2/3/4), a true random number generator (TRNG), and a key store. These capabilities help customers meet international cybersecurity requirements such as the Cyber Resilience Act (CRA) and IEC 62443.
The MA35D16AJ87C is supported by Nuvoton’s Linux and RTOS platforms and is compatible with leading graphics libraries including Qt, emWin, and LVGL, helping customers shorten development cycles and reduce overall development costs. The Nuvoton MA35 Series is designed for industrial-grade applications and is backed by a 10-year product supply commitment.
Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver to cut costs, accelerate time to market
Nokia and Rohde & Schwarz have created and successfully tested a 6G radio receiver that uses AI technologies to overcome one of the biggest anticipated challenges of 6G network rollouts: the coverage limitations inherent in 6G’s higher-frequency spectrum.
The machine learning capabilities in the receiver greatly boost uplink distance, enhancing coverage for future 6G networks. This will help operators roll out 6G over their existing 5G footprints, reducing deployment costs and accelerating time to market.
Nokia Bell Labs developed the receiver and validated it using 6G test equipment and methodologies from Rohde & Schwarz. The two companies will unveil a proof-of-concept receiver at the Brooklyn 6G Summit on November 6, 2025.
Peter Vetter, President of Bell Labs Core Research at Nokia, said: “One of the key issues facing future 6G deployments is the coverage limitations inherent in 6G’s higher-frequency spectrum. Typically, we would need to build denser networks with more cell sites to overcome this problem. By boosting the coverage of 6G receivers, however, AI technology will help us build 6G infrastructure over current 5G footprints.”
Nokia Bell Labs and Rohde & Schwarz have tested this new AI receiver under real-world conditions, achieving uplink distance improvements over today’s receiver technologies ranging from 10% to 25%. The testbed comprises an R&S SMW200A vector signal generator, used for uplink signal generation and channel emulation. On the receive side, the newly launched FSWX signal and spectrum analyzer from Rohde & Schwarz is employed to perform the AI inference for Nokia’s AI receiver. In addition to enhancing coverage, the AI technology also demonstrates improved throughput and power efficiency, multiplying the benefits it will provide in the 6G era.
Michael Fischlein, VP Spectrum & Network Analyzers, EMC and Antenna Test at Rohde & Schwarz, said: “Rohde & Schwarz is excited to collaborate with Nokia in pioneering AI-driven 6G receiver technology. Leveraging more than 90 years of experience in test and measurement, we’re uniquely positioned to support the development of next-generation wireless, allowing us to evaluate and refine AI algorithms at this crucial pre-standardization stage. This partnership builds on our long history of innovation and demonstrates our commitment to shaping the future of 6G.”
ESA, MediaTek, Eutelsat, Airbus, Sharp, ITRI, and R&S announce world’s first Rel-19 5G-Advanced NR-NTN connection over OneWeb LEO satellites
European Space Agency (ESA), MediaTek Inc., Eutelsat, Airbus Defence and Space, Sharp, the Industrial Technology Research Institute (ITRI), and Rohde & Schwarz (R&S) have conducted the world’s first successful trial of 5G-Advanced Non-Terrestrial Network (NTN) technology over Eutelsat’s OneWeb low Earth orbit (LEO) satellites, compliant with 3GPP Rel-19 NR-NTN configurations. The tests pave the way for deployment of the 5G-Advanced NR-NTN standard, which will lead to future satellite and terrestrial interoperability within a large ecosystem, lowering the cost of access and enabling the use of satellite broadband for NTN devices around the world.
The trial used OneWeb satellites, communicating with the MediaTek NR-NTN chipset and ITRI’s NR-NTN gNB, implementing 3GPP Release 19 specifications including Ku-band, 50 MHz channel bandwidth, and conditional handover (CHO). The OneWeb satellites, built by Airbus, carry transparent transponders with a Ku-band service link and a Ka-band feeder link, and adopt the “Earth-moving beams” concept. During the trial, the NTN user terminal, equipped with a flat panel antenna developed by Sharp, successfully connected over satellite to the on-ground 5G core using the gateway antenna located at ESA’s European Space Research and Technology Centre (ESTEC) in the Netherlands.
David Phillips, Head of the Systems, Strategic Programme Lines and Technology Department within ESA’s Connectivity and Secure Communications directorate, said: “By partnering with Airbus Defence and Space, Eutelsat and partners, this innovative step in the integration of terrestrial and non-terrestrial networks proves why collaboration is an essential ingredient in boosting competitiveness and growth of Europe’s satellite communications sector.”
Mingxi Fan, Head of Wireless System and ASIC Engineering at MediaTek, said: “As a global leader in terrestrial and non-terrestrial connectivity, we continue in our mission to improve lives by enabling technology that connects the world around us, including areas with little to no cellular coverage. By making real-world connections with Eutelsat LEO satellites in orbit, together with our ecosystem partners, we are now another step closer to bring the next generation of 3GPP-based NR-NTN satellite wideband connectivity for commercial uses.”
Daniele Finocchiaro, Head of Telecom R&D and Projects at Eutelsat, said: “We are proud to be among the leading companies working on NTN specifications, and to be the first satellite operator to test NTN broadband over Ku-band LEO satellites. Collaboration with important partners is a key element when working on a new technology, and we especially appreciate the support of the European Space Agency.”
Elodie Viau, Head of Telecom and Navigation Systems at Airbus, said: “This connectivity demonstration performed with Airbus-built LEO Eutelsat satellites confirms our product adaptability. The successful showcase of Advanced New Radio NTN handover capability marks a major step towards enabling seamless, global broadband connectivity for 5G devices. These results reflect the strong collaboration between all partners involved, whose combined expertise and commitment have been key to achieving this milestone.”
Masahiro Okitsu, President & CEO, Sharp Corporation, said: “We are proud to announce that we have successfully demonstrated Conditional Handover over 5G-Advanced NR-NTN connection using OneWeb constellation and our newly developed user terminals. This achievement marks a significant step toward the practical implementation of non-terrestrial networks. Leveraging the expertise we have cultivated over many years in terrestrial communications, we are honored to bring innovation to the field of satellite communications as well. Moving forward, we will continue to contribute to the evolution of global communication infrastructure and strive to realize a society where everyone is seamlessly connected.”
Dr. Pang-An Ting, Vice President and General Director of Information and Communications Research Laboratories at ITRI, said: “In this trial, ITRI showcased its advanced NR-NTN gNB technology as an integral part of the NR-NTN communication system, enabling conditional handover on the Rel-19 system. We see great potential in 3GPP NTN communication to deliver ubiquitous coverage and seamless connectivity in full integration with terrestrial networks.”
Goce Talaganov, Vice President of Mobile Radio Testers at Rohde & Schwarz, said: “We at Rohde & Schwarz are excited to have contributed to this industry milestone with our test and measurement expertise. For real-time NR-NTN channel characterization, we used our high-end signal generation and analysis instruments R&S SMW200A and FSW. Our CMX500-based NTN test suite replicated the Ku-band conditional handover scenarios in the lab. This rigorous testing, which addresses the challenges of satellite-based communications, paved the way for further performance optimization of MediaTek’s and Sharp’s 5G-Advanced NTN devices.”
Achieving analog precision via components and design, or just trim and go

I finally managed to get to a book that has been on my “to read” list for quite a while: “Beyond Measure: The Hidden History of Measurement from Cubits to Quantum Constants” (2022) by James Vincent (Figure 1). It traces the evolution of measurement, its uses, and its motivations, and how measurement has shaped our world, from ancient civilizations to the modern day. On a personal note, I found the first two-thirds of the book a great read, but in my opinion, the last third wandered and meandered off topic. Regardless, it was a worthwhile book overall.

Figure 1 This book provides a fascinating journey through the history of measurement and how different advances combined over the centuries to get us to our present levels. Source: W. W. Norton & Company Inc.
Several chapters deal with the way measurement and the nascent science of metrology were used in two leading manufacturing entities of the early 20th century: Rolls-Royce and Ford Motor Company, and the manufacturing differences between them.
Before looking at the differences, you have to “reset” your frame of reference and recognize that even low-to-moderate volume production in those days involved a lot of manual fabrication of component parts, as the “mass production” machinery we now take as a given often didn’t exist or could only do rough work.
Rolls-Royce made (and still makes) fine motor cars, of course. Much of their quality was not just in the finish and accessories; the car was entirely mechanical except for the ignition system. They featured a finely crafted and tuned powertrain. It’s not a myth that you could balance a filled wine glass on the hood (bonnet) while the engine was running and not see any ripples in the liquid’s surface. Furthermore, you could barely hear the engine at all at a time when cars were fairly noisy.
They achieved this level of performance using careful and laborious manual adjustments, trimming, filing, and balancing of the appropriate components to achieve that near-perfect balance and operation. Clearly, this was a time-consuming process requiring skilled and experienced craftspeople. It was “mass production” only in terms of volume, but not in terms of production flow as we understand it today.
In contrast, Henry Ford focused on mass production with interchangeable parts that would work to their design objectives immediately when assembled. Doing so required advances in measurement of the components at Ford’s factory to weed out incoming substandard parts, plus statistical analysis of quality, conformance, and deviations. Ford also sent specialists to suppliers’ factories to improve both their production processes and their metrology.
Those were the days
Of course, those were different times in terms of industrial production. When the Wright brothers needed a gasoline engine for the 1903 Flyer, few “standard” engine choices were available, and none came close to their size, weight, and output-power needs.
So, their in-house mechanic Charlie Taylor machined an aluminum engine block, fabricated most parts, and assembled an engine (1903 Wright Engine) in just six weeks using a drill press and a lathe; it produced 12 horsepower, 50% above the 8 horsepower that their calculations indicated they needed (Figure 2).

Figure 2 Perhaps the ultimate do-it-yourself project: Charlie Taylor, mechanic for the Wright brothers, machined the aluminum engine block, fabricated most of the parts, and then assembled the complete engine in six weeks (reportedly working only from rough sketches). Source: Wright Brothers Aeroplane Company
Which approach is better—fine adjusting and trims, or use of a better design and superior components? There’s little doubt that the “quality by components” approach is the better tactic in today’s world where even customized cars make use of many off-the-shelf parts.
Moreover, the required volume for a successful car-production line mandates avoiding hand-tuning of individual vehicles to make their components plug-and-play properly. Even Rolls-Royce now uses the Ford approach, of course; the alternative is impractical for modern vehicles except for special trim and accessories.
Single unit “perfection” uses both approaches
In some cases, both calibration and the use of a better topology and superior components combine for a unique design. Not surprisingly, a classic example is one of the first EDN articles by the late analog-design genius Jim Williams, “This 30-ppm scale proves that analog designs aren’t dead yet”. Yes, I have cited it in previous blogs, and that’s no accident (Figure 3).

Figure 3 This 1976 EDN article by Jim Williams set a standard for analog signal-chain technical expertise and insight that has rarely been equaled. Source: EDN
In the article, he describes his step-by-step design concept and fabrication process for a portable weigh scale that would offer roughly 30-ppm (0.003%) absolute accuracy (0.01 lb over a 300-pound range), yet would never need adjustment to be put into use. Even though this article is nearly 50 years old, it still has relevant lessons for our very different world.
I believe that Jim did a follow-up article about 20 years later, where he revisited and upgraded that design using newer components, but I can’t find it online.
Today’s requirements were unimaginable—until recently
In-process calibration is advancing thanks to techniques such as laser-based interferometry. In semiconductor wafer-processing systems, for example, the positional accuracy of the carriage that moves over the wafer needs to be in the sub-micrometer range.
While this level of performance can be achieved with friction-free air bearings, they cannot be used in extreme-ultraviolet (EUV) systems since those operate in an ultravacuum environment. Instead, high-performance mechanical bearings must be used, even though they are inferior to air bearings.
There are micrometer-level errors in the x-axis and y-axis, and the two axes are also not perfectly orthogonal, resulting in a system-level error typically greater than several micrometers across the 300 × 300-mm plane. To compensate, manufacturers add interferometry-based calibration of the mechanical positioning systems to determine the error topography of a mechanical platform.
For example, with a 300-mm wafer, the grid is scanned in 10-mm steps, and the interferometer determines the actual position. This value is compared against the motion-encoder value to determine a corrective offset. After this mapping, the system accuracy is improved by a factor of 10 and can achieve an absolute accuracy of better than 0.5 µm in the x-y plane.
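To make the correction-map arithmetic concrete, here is a minimal Python sketch, assuming a 300-mm travel scanned on a 10-mm grid and a made-up error shape of a few micrometers; the function names and data are illustrative only, not taken from any actual metrology package.

```python
# Illustrative only: build and apply an interferometer-based correction map
# for a stage scanned on a 10-mm grid over 300 mm of travel.
import numpy as np

GRID_MM = np.arange(0, 301, 10, dtype=float)   # 0, 10, ..., 300 mm

def build_error_map(encoder_mm, interferometer_mm):
    """Corrective offset = true (interferometer) position - encoder reading."""
    return interferometer_mm - encoder_mm

def corrected_position(encoder_mm, error_map_mm, grid_mm=GRID_MM):
    """Interpolate the stored offset at an arbitrary encoder reading."""
    return encoder_mm + np.interp(encoder_mm, grid_mm, error_map_mm)

# Assumed example data: up to ~3 µm (0.003 mm) of error across the travel.
encoder = GRID_MM.copy()
interferometer = encoder + 0.003 * np.sin(np.pi * encoder / 300.0)
error_map = build_error_map(encoder, interferometer)

print(f"corrected 137 mm reading -> {corrected_position(137.0, error_map):.6f} mm")
```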
Maybe too smart?
Of course, there are times when you can be a little too clever in the selection of components when working to improve system performance. Many years ago, I worked for a company making controllers for large machines, and there was one circuit function that needed coarse adjustment for changes in ambient temperature. The obvious way would have been to use a thermistor or a similar component in the circuit.
But our lead designer—a circuit genius by any measure—had a “better” idea. Since the design used a lot of cheap, 100-kΩ pullup resistors with poor temperature coefficient, he decided to use one of those instead of the thermistor in the stabilization loop, as they were already on the bill of materials. The bench prototype and pilot-run units worked as expected, but the regular production units had poor performance.
Long story short: our brilliant designer had deliberately based circuit stabilization on the poor tempco of those re-purposed pull-up resistors and the associated loop dynamic range. However, our in-house purchasing agent got a good deal on some resistors of the same value and size, but with a much tighter tempco. To purchasing, getting a better component that was functionally and physically identical for less money seemed like a win-win.
That was fine for the pull-up role, but it meant that the transfer function of temperature to resistance was severely compressed. Identifying that problem took a lot of aggravation and time.
What’s your preferred approach to achieving a high-precision, accurate, stable analog-signal chain and front-end? Have you used both methods, or are you inherently partial to one over the other? Why?
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- JPL & NASA’s latest clock: almost perfect
 - The Wright Brothers: Test Engineers as Well as Inventors
 - Precision metrology redefines analog calibration strategy
 - Inter-satellite link demonstrates metrology’s reach, capabilities
 
EEVblog 1718 - Cheap 1GHz Oscilloscopes are Useless? ($5 DIY 1GHz Resistive Probe)
III-V Epi appoints Saurabh Kumar to strengthen epi engineering team and increase manufacturing capacity
Micro-LED firm VueReal expanding presence in China market
Second cycle of the Professional Development Program for Academic Managers
A team from Igor Sikorsky Kyiv Polytechnic Institute has joined the second cycle of the Professional Development Program for Academic Managers.
The Ericsson 5G University Program has officially launched!
The Ericsson 5G University Program is an initiative aimed at giving students and faculty of Igor Sikorsky Kyiv Polytechnic Institute the knowledge and tools from Ericsson experts to prepare a new generation of 5G specialists.
LED illumination addresses ventilation (at the bulb, at least)

The bulk of the technologies and products based on them that I encounter in my everyday interaction with the consumer electronics industry are evolutionary (and barely so in some cases) versus revolutionary in nature. A laptop computer, a tablet, or a smartphone might get a periodic CPU-upgrade transplant, for example, enabling it to complete tasks a bit faster and/or a bit more energy-efficiently than before. But the task list is essentially the same as was the case with the prior product generation…and the generation before that…and…not to mention that the generational-cadence physical appearance also usually remains essentially the same.
Such cadence commonality is also the case with many LED light bulbs I’ve taken apart in recent years, in no small part because they’re intended to visually mimic incandescent precursors. But SANSI has taken a more revolutionary tack, in the process tackling an issue (heat) with which I’ve repeatedly struggled. Say what you (rightly) will about incandescent bulbs’ inherent energy inefficiency, along with the corresponding high-temperature output that they radiate; there’s a fundamental reason why they were the core heat source for the Easy-Bake Oven, after all:

But consider, too, that they didn’t integrate any electronics; the sole failure points were the glass globe and filament inside it. Conversely, my installation of both CFL and LED light bulbs within airflow-deficient sconces in my wife’s office likely hastened both their failure and preparatory flickering, due to degradation of the capacitors, voltage converters and regulators, control ICs and other circuitry within the bulbs as well as their core illumination sources.
That’s why SANSI’s comparatively fresh approach to LED light bulb design, which I alluded to in the comments of my prior teardown, has intrigued me ever since I first saw and immediately bought both 2700K “warm white” and 5000K “daylight” color-temperature multiple-bulb sets on sale at Amazon two years ago:

They’re smaller A15, not standard A19, in overall dimensions, although the E26 base is common between the two formats, so they can generally still be used in place of incandescent bulbs (although, unlike incandescents, these particular LED light bulbs are not dimmable):
 Note, too, their claimed 20% brighter illumination (900 vs 750 lumens) and 5x estimated longer usable lifetime (25,000 hours vs 5,000 hours). Key to that latter estimation, however, is not only the bulb’s inherent improved ventilation:

Versus metal-swathed and otherwise enclosed-circuitry conventional LED bulb alternatives:

But it is also the ventilation potential (or not) of wherever the bulb is installed, as the “no closed luminaires” warning included on the sticker on the left side of the SANSI packaging makes clear:

That said, even if your installation situation involves plenty of airflow around the bulb, don’t forget that the orientation of the bulb is important, too. Specifically, since heat rises, if the bulb is upside-down with the LEDs underneath the circuitry, the latter will still tend to get “cooked”.
Perusing our patient
Enough of the promo pictures. Let’s now look at the actual device I’ll be tearing down today, starting with the remainder of the box-side shots, in each case, and as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:






Open ‘er up:
 lift off the retaining cardboard layer, and here’s our 2700K four-pack, which (believe it or not) had set me back only $4.99 ($1.25/bulb) two years back:

The 5000K ones I also bought at that same time came as a two-pack, also promo-priced, this time at $4.29 ($2.15/bulb). Since they ended up being more expensive per bulb, and because I have only two of them, I’m not currently planning on also taking one of them apart. But I did temporarily remove one of them and replace it in the two-pack box with today’s victim, so you could see the LED phosphor-tint difference between them. 5000K on left, 2700K on right; I doubt there’s any other design difference between the two bulbs, but you never know…

Aside from the aforementioned cardboard flap for position retention above the bulbs and a chunk of Styrofoam below them (complete with holes for holding the bases’ end caps in place):

There’s no other padding inside, which might have proven tragic if we were dealing with glass-globe bulbs or flimsy filaments. In this case, conversely, it likely suffices. Also note the cleverly designed sliver of literature at the back of the box’s insides:
 
 
Now, for our patient, with initial overview perspectives of the top:

Bottom:

And side:

Check out all those ventilation slots! Also note the clips that keep the globe in place:

Before tackling those clips, here are six sequential clockwise-rotation shots of the side markings. I’ll leave it to you to mentally “glue” the verbiage snippets together into phrases and sentences:
 
 
 
 
 
Diving in for illuminated understanding
Now for those clips. Downside: they’re (understandably, given the high voltage running around inside) stubborn. Upside: no even-more-stubborn glue!
 
 
 
 
Voila:




Note the glimpses of additional “stuff” within the base, thanks to the revealing vents. Full disclosure and identification of the contents is our next (and last) aspiration:


As usual, twist the end cap off with tongue-and-groove slip-joint (“Channellock”) pliers:


 
 
and the ceramic substrate (along with its still-connected wires and circuitry, of course) dutifully detaches from the plastic base straightaway:



Not much to see on the ceramic “plate” backside this time, aside from the 22µF 200V electrolytic capacitor poking through:

The frontside is where most of the “action” is:

At the bottom is a mini-PCB that mates the capacitor and wires’ soldered leads to the ceramic substrate-embedded traces. Around the perimeter, of course, is the series-connected chain of 17 (if I’ve counted correctly) LEDs with their orange-tinted phosphor coatings, spectrum-tuned to generate the 2700K “warm white” light. And the three SMD resistors scattered around the substrate, two next to an IC in the upper right quadrant (33 Ω, marked “33R0”, and 20 Ω, marked “20R0”) and another (330 kΩ, marked “334”) alongside a device at left, are also obvious.
Those two chips ended up generating the bulk of the design intrigue, in the latter case still an unresolved mystery (at least to me). The one at upper right is marked, alongside a company logo that I’d not encountered before, as follows:
JWB1981
1PC031A
The package also looks odd; the leads on both sides are asymmetrically spaced, and there’s an additional (fourth) lead on one side. But thanks to one of the results from my Google search on the first-line term, a Hackaday post that then pointed to an informative video, this particular mystery has (at least I believe) been solved. Quoting from the Hackaday summary (with hyperlinks and other augmentations added by yours truly):
The chip in question is a Joulewatt JWB1981, for which no datasheet is available on the internet [BD note: actually, here it is!]. However, there is a datasheet for the JW1981, which is a linear LED driver. After reverse-engineering the PCB, bigclivedotcom concluded that the JWB1981 must [BD note: also] include an onboard bridge rectifier. The only other components on the board are three resistors, a capacitor, and LEDs.
The first resistor limits the inrush current to the large smoothing capacitor. The second resistor is to discharge the capacitor, while the final resistor sets the current output of the regulator. It is possible to eliminate the smoothing capacitor and discharge resistor, as other LED circuits have done, which also allow the light to be dimmable. However, this results in a very annoying flicker of the LEDs at the AC frequency, especially at low brightness settings.
Compare the resultant schematic shown in the video with one created by EDN’s Martin Rowe, done while reverse-engineering an A19 LED light bulb at the beginning of 2018, and you’ll see just how cost-effective a modern design approach like this can be.
That only leaves the chip at left, with two visible soldered contacts (one on each end), and bare on top save for a cryptic rectangular mark (which leaves Google Lens thinking it’s the guts of a light switch, believe it or not). It’s not referenced in “Big Clive’s” deciphered design, and I can’t find an image of anything like it anywhere else. Diode? Varistor to protect against voltage surges? Resettable fuse to handle current surges? Multiple of these? Something(s) else? Post your [educated, preferably] guesses, along with any other thoughts, in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Disassembling a LED-based light that’s not acting quite right…right?
 - Teardown: Bluetooth-enhanced LED bulb
 - Teardown: Zigbee-controlled LED light bulb
 - Freeing a three-way LED light bulb’s insides from their captivity
 - Teardown: What killed this LED bulb?
 - Slideshow: LED Lighting Teardowns
 
I’ve done it
My god.
Next-Generation of Optical Ethernet PHY Transceivers Deliver Precision Time Protocol and MACsec Encryption for Long-Reach Networking
Anritsu Launches Virtual Network Measurement Solution to Evaluate Communication Quality in Cloud and Virtual Environments
Anritsu Corporation announced the launch of its Virtual Network Master for AWS MX109030PC, a virtual network measurement solution operating in Amazon Web Services (AWS) Cloud environments. This software-based solution enables accurate, repeatable evaluation of communication quality across networks, including Cloud and virtual environments. It measures key network quality indicators, such as latency, jitter, throughput, and packet (frame) loss rate, in both one-way and round-trip directions. This software can accurately evaluate end-to-end (E2E) communication quality even in virtual environments where hardware test instruments cannot be installed.
Moreover, adding the Network Master Pro MT1000A/MT1040A test hardware on the cellular network side supports consistent quality evaluation from the core and Cloud to field-deployed devices.
Anritsu has developed this solution operating on Amazon Web Services (AWS) to accurately and reproducibly evaluate end-to-end (E2E) quality under realistic operating conditions even in virtual environments.
The Virtual Network Master for AWS (MX109030PC) is a software-based solution to accurately evaluate network communication quality in Cloud and virtual environments. Deploying software probes running on AWS across Cloud, data center, and virtual networks enables precise communication quality assessment, even in environments where hardware test instruments cannot be located.
The KPI hiking club "Globus": traditions, romance, and the present day
Only mountains can be better than mountains. For devotees in love with the stern allure of the peaks and with journeys above the clouds, this truth is self-evident. At the KPI hiking club "Globus", experienced instructors teach how to travel through the mountains safely and with sound technique, along with dozens of other useful skills.
Rohde & Schwarz enables MediaTek’s 6G waveform verification with CMP180 radio communication tester
Rohde & Schwarz announced that MediaTek is utilizing the CMP180 radio communication tester to test and verify TC-DFT-s-OFDM, a proposed waveform technology for 6G networks. This collaboration demonstrates the critical role of advanced test equipment in developing foundational technologies for next-generation wireless communications.
TC-DFT-s-OFDM (Trellis Coded Discrete Fourier Transform spread Orthogonal Frequency Division Multiplexing) is being proposed to 3GPP as a potential candidate technology for 6G standardization. MediaTek’s research shows that TC-DFT-s-OFDM delivers superior Maximum Coupling Loss (MCL) performance across various modulation orders, including advanced configurations like 16QAM.
Key benefits of this 6G waveform proposed by MediaTek include enhanced cell coverage through reduced power back-off requirements and improved power efficiency through optimized power amplifier operation techniques such as Average Power Tracking (APT). TC-DFT-s-OFDM enables up to 4dB higher transmission power compared to traditional modulation schemes while maintaining lower interference levels, implying up to 50% gain in coverage area.
“MediaTek’s selection of our CMP180 for their 6G waveform verification work demonstrates the instrument’s capability to support cutting-edge research and development,” said Fernando Schmitt, Product Manager, Rohde & Schwarz. “As the industry advances toward 6G, we’re committed to providing test solutions that enable our customers to push the boundaries of wireless technology.”
The collaboration will be showcased at this year’s Brooklyn 6G Summit, November 5-7, highlighting industry progress toward defining technical specifications for future wireless communications. As TC-DFT-s-OFDM advances through the 3GPP standardization process, rigorous testing using advanced equipment becomes increasingly critical.
The CMP180 radio communication tester is part of the comprehensive test and measurement portfolio from Rohde & Schwarz designed to support wireless technology development from research through commercial deployment.
STMicroelectronics powers 48V mild-hybrid efficiency with flexible automotive 8-channel gate driver
The L98GD8 driver from STMicroelectronics has eight fully configurable channels for driving MOSFETs in flexible high-side and low-side configurations. It can operate from a 58V supply and provides rich diagnostics and protection for safety and reliability.
The 48V power net lets car makers increase the capabilities of mild-hybrid systems including integrated starter-generators, extending electric-drive modes, and enhancing energy recovery to meet stringent new, globally harmonized vehicle-emission tests. Powering additional large loads at 48V, such as the e-compressor, pumps, fans, and valves further raises the overall electrical efficiency and lowers the vehicle weight.
ST’s L98GD8 assists the transition, as an integrated solution optimized for driving the gates of NMOS or PMOS FETs in 48V-powered systems. With eight independent, configurable outputs, a single driver IC controls MOSFETs connected as individual power switches or as high-side and low-side switches in up to two H-bridges for DC-motor driving. It can also provide peak-and-hold control for electrically operated valves. The gate current is programmable, helping engineers minimize MOSFET switching noise to meet electromagnetic compatibility (EMC) regulations.
Automotive-qualified and featured to meet the industry’s high safety and reliability demands, the L98GD8 has per-channel diagnostics for short-circuit to battery, open-load, and short-to-ground faults. Further diagnostic features include logic built in self-test (BIST), over-/under-voltage monitoring with hardware self-check (HWSC), and a configurable communication check (CC) watchdog timer.
In addition, overcurrent sensing allows many flexible configurations while the ability to monitor the drain-source voltage of external MOSFETs and the voltage across an external shunt resistor help further enhance system reliability. There is also an ultrafast overcurrent shutdown with dual-redundant failsafe pins, battery-under voltage monitoring, an ADC for battery and die temperature monitoring, and H-bridge current limiting.
The L98GD8 is in production now, in a 10mm x 10mm TQFP64 package, with budgetary pricing starting at $3.94 for orders of 1000 pieces.
Keysight Advances Quantum Engineering with New System-Level Simulation Solution
Keysight Technologies announced the release of Quantum System Analysis, a breakthrough Electronic Design Automation (EDA) solution that enables quantum engineers to simulate and optimize quantum systems at the system level. This new capability marks a significant expansion of Keysight’s Quantum EDA portfolio, which includes Quantum Layout, QuantumPro EM, and Quantum Circuit Simulation. This announcement comes at a pivotal moment for the field, especially following the 2025 Nobel Prize in Physics, which recognized advances in superconducting quantum circuits, a core area of focus for Keysight’s new solution.
Quantum System Analysis empowers researchers to simulate the quantum workflow, from initial design stages to system-level experiments, reducing reliance on costly cryogenic testing and accelerating time-to-validation. This integrated approach supports simulations of quantum experiments and includes tools to optimize dilution fridge input lines for thermal noise and qubit temperature estimation.
Quantum System Analysis introduces two transformative features:
- Time Dynamics Simulator: Models the time evolution of quantum systems using Hamiltonians derived from electromagnetic or circuit simulations. This enables accurate simulation of quantum experiments such as Rabi and Ramsey pulsing, helping researchers understand qubit behavior over time.
 - Dilution Fridge Input Line Designer: Allows precise modeling of cryostat input lines to qubits, enabling thermal noise analysis and effective qubit temperature estimation. By simulating the fridge’s input architecture, engineers can minimize thermal photon leakage and improve system fidelity.
 
Chris Mueth, Senior Director for New Markets at Keysight, said: “Quantum System Analysis marks the completion of a truly unified quantum design workflow, seamlessly connecting electromagnetic and circuit-level modeling with comprehensive system-level simulation. By bridging these domains, it eliminates the need for fragmented toolchains and repeated cryogenic testing, enabling faster innovation and greater confidence in quantum system development.”
Mohamed Hassan, Quantum Solutions Planning Lead at Keysight, said: “Quantum System Analysis is a leap forward in accelerating quantum innovation. By shifting left with simulation, we reduce the need for repeated cryogenic experiments and empower researchers to validate system-level designs earlier in the development cycle.”
Quantum System Analysis is available as part of Keysight’s Advanced Design System (ADS) 2026 platform and complements existing quantum EDA solutions. It supports superconducting qubit platforms and is extensible to other modalities such as spin qubits, making it a versatile choice for quantum R&D teams.
Makefile vs. YAML: Modernizing verification simulation flows

Automation has become the backbone of modern SystemVerilog/UVM verification environments. As designs scale from block-level modules to full system-on-chips (SoCs), engineers rely heavily on scripts to orchestrate compilation, simulation, and regression. The effectiveness of these automation flows directly impacts verification quality, turnaround time, and team productivity.
For many years, the Makefile has been the tool of choice for managing these tasks. With its rule-based structure and wide availability, Makefile offered a straightforward way to compile RTL, run simulations, and execute regressions. This approach served well when testbenches were relatively small and configurations were simple.
However, as verification complexity exploded, the limitations of Makefile have become increasingly apparent. Mixing execution rules with hardcoded test configurations leads to fragile scripts that are difficult to scale or reuse across projects. Debugging syntax-heavy Makefiles often takes more effort than writing new tests, diverting attention from coverage and functional goals.
These challenges point toward the need for a more modular and human-readable alternative. YAML, a structured configuration language, addresses many of these shortcomings when paired with Python for execution. Before diving into this solution, it’s important to first examine how today’s flows operate and where they struggle.
Current scenario and challenges
In most verification environments today, Makefile remains the default choice for controlling compilation, simulation, and regression. A single Makefile often governs the entire flow—compiling RTL and testbench sources, invoking the simulator with tool-specific options, and managing regressions across multiple testcases. While this approach has been serviceable for smaller projects, it shows clear limitations as complexity increases.
Below is an outline of key challenges.
- Configuration management: Test lists are commonly hardcoded in text or CSV files, with seeds, defines, and tool flags scattered across multiple scripts. Updating or reusing these settings across projects is cumbersome.
 - Readability and debugging: Makefile syntax is compact but cryptic, which makes debugging errors non-trivial. Even small changes can cascade into build failures, demanding significant engineer time.
 - Scalability: As testbenches grow, adding new testcases or regression suites quickly bloats the Makefile. Managing hundreds of tests or regression campaigns becomes unwieldy.
 - Tool dependence: Each Makefile is typically tied to a specific simulator, for instance, VCS, Questa, and Xcelium. Porting the flow to a different tool requires major rewrites.
 - Limited reusability: Teams often reinvent similar flows for different projects, with little opportunity to share or reuse scripts.
 
These challenges shift the engineer’s focus away from verification quality and coverage goals toward the mechanics of scripting and tool debugging. Therefore, the industry needs a cleaner, modular, and more portable way to manage verification flows.
Makefile-based flow
A traditional Makefile-based verification flow centers around a single file containing multiple targets that handle compilation, simulation, and regression tasks. See the representative structure below.
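The article’s original code figure is not reproduced here, so what follows is only a representative sketch of such a Makefile; the target names, file lists, and VCS flags are assumptions meant to show the shape of the flow, not the authors’ actual script.

```make
# Illustrative VCS-oriented flow; files, flags, and test names are assumptions.
VCS     = vcs -full64 -sverilog -ntb_opts uvm-1.2 -timescale=1ns/1ps
SOURCES = -f rtl.f -f tb.f
TEST   ?= base_test
SEED   ?= 1

compile:
	$(VCS) $(SOURCES) -l compile.log -o simv

sim: compile
	./simv +UVM_TESTNAME=$(TEST) +ntb_random_seed=$(SEED) -l $(TEST)_$(SEED).log

regress: compile
	for t in smoke_test base_test err_inject_test; do \
		./simv +UVM_TESTNAME=$$t +ntb_random_seed=1 -l $$t.log; \
	done

clean:
	rm -rf simv* csrc *.log
```

Even at this size, the coupling the article goes on to describe is visible: every flag and executable name is VCS-specific, and the escaped shell loop in regress is exactly the sort of scripting that grows hard to maintain.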

This approach offers clear strengths: immediate familiarity to software engineers, no additional tool requirements, and straightforward dependency management. For small teams with stable tool chains, this simplicity remains compelling.
However, significant challenges emerge with scale. Cryptic syntax becomes problematic; escaped backslashes, shell expansions, and dependencies create arcane scripting rather than readable configuration. Debug cycles lengthen with cryptic error messages, and modifications require deep Make expertise.
Tool coupling is evident in the above structure—compilation flags, executable names, and runtime arguments are VCS-specific. Supporting Questa requires duplicating rules with different syntax, creating synchronization challenges.
So, maintenance overhead grows exponentially. Adding tests requires multiple modifications, parameter changes demand careful shell escaping, and regression management quickly outgrows Make’s capabilities, forcing hybrid scripting solutions.
These drawbacks motivate the search for a more human-readable, reusable configuration approach, which is where YAML’s structured, declarative format offers compelling advantages for modern verification flows.
YAML-based flow
YAML (YAML Ain’t Markup Language) provides a human-readable data serialization format that transforms verification flow management through structured configuration files. Unlike Makefile’s imperative commands, YAML uses declarative key-value pairs with intuitive indentation-based hierarchy.
The YAML configuration structure below replaces complex Makefile logic:
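The configuration figure itself is not reproduced here; the following sketch shows what such a file could look like, with key names and values assumed for illustration rather than taken from the authors’ schema.

```yaml
# Illustrative build/simulation configuration; keys and values are assumptions.
project:
  name: ip1_verif
  description: Block-level UVM environment for IP1

tool:
  simulator: vcs            # vcs | questa | xcelium
  version: "2023.12"

compile:
  files:
    - rtl/ip1_top.sv
    - tb/ip1_tb_pkg.sv
    - tb/ip1_tb_top.sv
  incdirs: [tb/include]
  defines: [IP1_ASSERT_ON]
  timescale: 1ns/1ps
  flags: [-sverilog, -full64]

simulate:
  flags: [+UVM_VERBOSITY=UVM_MEDIUM]
  logdir: logs
```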


The modular structure becomes immediately apparent through organized directory hierarchies. As shown in Figure 1, a well-structured YAML-based verification environment separates configurations by function and scope, enabling different team members to modify their respective domains without conflicts.

Figure 1 The block diagram highlights the YAML-based verification directory structure. Source: ASICraft Technologies
Block-level engineers manage component-specific test configurations (IP1 and IP2), while integration teams focus on pipeline and regression management. Instead of monolithic Makefiles, teams can organize configurations across focused files: build.yml for compilation settings, sim.yml for simulation parameters, and various test-specific YAML files grouped by functionality.
Advanced YAML features like anchors and aliases eliminate configuration duplication using the DRY (Don’t Repeat Yourself) principle.
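As a brief illustration of that principle (again with assumed key names), an anchor can hold shared defaults that individual tests reuse through aliases and the merge key, overriding only what differs:

```yaml
defaults: &sim_defaults      # anchor: define shared settings once
  simulator: vcs
  verbosity: UVM_MEDIUM
  coverage: true

smoke_test:
  <<: *sim_defaults          # alias + merge key: inherit the defaults
  iterations: 5

stress_test:
  <<: *sim_defaults
  iterations: 100
  verbosity: UVM_LOW         # override only what differs
```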

Tool independence emerges naturally since YAML contains only configuration data, not tool-specific commands. The same YAML files can drive VCS, Questa, or XSIM simulations through appropriate Python parsing scripts, eliminating the need for multiple Makefiles per tool.
Of course, YAML alone doesn’t execute simulations; it needs a bridge to EDA tools. This is achieved by pairing YAML with lightweight Python scripts that parse configurations and generate appropriate tool commands.
Implementation of YAML-based flow
The transition from YAML configuration to actual EDA tool execution follows a systematic four-stage process, as illustrated in Figure 2. This implementation addresses the traditional verification challenge where engineers spend excessive time writing complex Makefiles and managing tool commands instead of focusing on verification quality.

Figure 2 The YAML-to-EDA flow bridges the YAML configuration and the EDA tool execution. Source: ASICraft Technologies
YAML files serve as comprehensive configuration containers supporting diverse verification needs; a sketch of the test and regression sections follows the list below.
- Project metadata: Project name, descriptions, and version control
 - Tool configuration: EDA tool selection, licenses, and version specifications
 - Compilation settings: Source files, include directories, definitions, timescale, and tool-specific flags
 - Simulation parameters: Tool flags, snapshot paths, and log directory structures
 - Test specifications: Test names, seeds, plusargs, and coverage options
 - Regression management: Test lists, reporting formats, and parallel execution settings
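A compact sketch of the last two categories, test specifications and regression management, might look like this; the test names, seeds, and report formats are assumed for illustration.

```yaml
tests:
  - name: ip1_smoke_test
    seed: 1
    plusargs: [+CFG_MODE=default]
    coverage: false
  - name: ip1_rand_cfg_test
    seed: random
    plusargs: [+CFG_MODE=rand]
    coverage: true

regression:
  nightly:
    tests: [ip1_smoke_test, ip1_rand_cfg_test]
    iterations: 100
    parallel_jobs: 4
    reports: [html, junit]
```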
 

Figure 3 The four phases of the Python YAML parsing workflow. Source: ASICraft Technologies
The Python implementation demonstrates the complete flow pipeline. Starting with a simple YAML configuration:
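The original snippet is not reproduced here; as a stand-in, assume a configuration this small (file name and keys are illustrative):

```yaml
# sim_cfg.yml (assumed example)
tool: vcs
compile:
  files: [rtl/top.sv, tb/tb_top.sv]
  flags: [-sverilog, -full64]
simulate:
  test: base_test
  seed: 1
```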

The Python script below loads and processes that configuration:
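A minimal version of such a script, assuming the sim_cfg.yml stand-in above and the PyYAML package, could look like the following; the generated command strings are simplified for illustration.

```python
import subprocess
import yaml

# 1) Load/parse: convert the YAML file into native Python dicts and lists.
with open("sim_cfg.yml") as f:
    cfg = yaml.safe_load(f)

# 2) Extract: pull out the tool name, file list, flags, and test parameters.
tool  = cfg["tool"]
files = " ".join(cfg["compile"]["files"])
flags = " ".join(cfg["compile"]["flags"])
test  = cfg["simulate"]["test"]
seed  = cfg["simulate"]["seed"]

# 3) Build commands: assemble tool-specific shell commands.
if tool == "vcs":
    compile_cmd = f"vcs {flags} {files} -o simv -l compile.log"
    run_cmd     = f"./simv +UVM_TESTNAME={test} +ntb_random_seed={seed}"
else:
    raise NotImplementedError(f"No command template defined for '{tool}'")

# 4) Display/execute: print the commands, or hand them to subprocess.run().
print(compile_cmd)
print(run_cmd)
# subprocess.run(compile_cmd, shell=True, check=True)
# subprocess.run(run_cmd, shell=True, check=True)
```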

When executed, the Python script produces clear output, showing the command translation, as illustrated below:
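With the stand-in configuration and script above, the printed translation would look roughly like this:

```
vcs -sverilog -full64 rtl/top.sv tb/tb_top.sv -o simv -l compile.log
./simv +UVM_TESTNAME=base_test +ntb_random_seed=1
```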

The complete processing workflow operates in four systematic phases, as detailed in Figure 3.
- Load/parse: The PyYAML library converts YAML file content into native Python dictionaries and lists, making configuration data accessible through standard Python operations.
 - Extract: The script accesses configuration values using dictionary keys, retrieving tool names, file lists, compilation flags, and simulation parameters from the structured data.
 - Build commands: The parser intelligently constructs tool-specific shell commands by combining extracted values with appropriate syntax for the target simulator (VCS or Xcelium).
 - Display/execute: Generated commands are shown for verification or directly executed through subprocess calls, launching the actual EDA tool operations.
 
This implementation creates true tool-agnostic operation. The same YAML configuration generates VCS, Questa, or XSIM commands by simply updating the tool specification. The Python translation layer handles all syntax differences, making flows portable across EDA environments without configuration changes.
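One way to realize that tool independence, sketched under the same assumptions as the earlier script, is to keep a per-tool command-template table in the Python layer so the YAML never carries tool syntax; the Questa and Xcelium entries below are simplified placeholders, and real flows would also map flags per tool.

```python
# Per-tool compile-command templates; simplified, for illustration only.
COMPILE_TEMPLATES = {
    "vcs":     "vcs {flags} {files} -o simv",
    "questa":  "vlog {flags} {files}",
    "xcelium": "xrun {flags} {files}",
}

def build_compile_cmd(cfg: dict) -> str:
    """Pick the template for cfg['tool'] and fill in the files and flags."""
    template = COMPILE_TEMPLATES[cfg["tool"]]
    return template.format(
        flags=" ".join(cfg["compile"]["flags"]),
        files=" ".join(cfg["compile"]["files"]),
    )
```

Switching the tool key in the YAML then changes the generated command without touching any test configuration.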
The complete pipeline—from human-readable YAML to executable simulation commands—demonstrates how modern verification flows can prioritize engineering productivity over infrastructure complexity, enabling teams to focus on test quality rather than tool mechanics.
Comparison: Makefile vs. YAML
Both approaches have clear strengths and weaknesses that teams should evaluate based on their specific needs and constraints. Table 1 provides a systematic comparison across key evaluation criteria.

Table 1 A comparison of Makefile-based and YAML-based flows. Source: ASICraft Technologies
Where Makefiles work better
- Simple projects with stable, unchanging requirements
 - Small teams already familiar with Make syntax
 - Legacy environments where changing infrastructure is risky
 - Direct execution needs required for quick debugging without intermediate layers
 - Incremental builds where dependency tracking is crucial
 
Where YAML excels
- Growing complexity with multiple test configurations
 - Multi-tool environments supporting different simulators
 - Team collaboration where readability matters
 - Frequent modifications to test parameters and configurations
 - Long-term maintenance across multiple projects
 
The reality is that most teams start with Makefiles for simplicity but eventually hit scalability walls. YAML approaches require a more extensive initial setup but pay dividends as projects grow. The decision often comes down to whether you’re optimizing for immediate simplicity or long-term scalability.
For established teams managing complex verification environments, YAML-based flows typically provide better return on investment (ROI). However, teams should consider practical factors like migration effort and existing tool integration before making the transition.
Choosing between Makefile and YAML
The challenges with traditional Makefile flows are clear: cryptic syntax that’s hard to read and modify, tool-specific configurations that don’t port between projects, and maintenance overhead that grows with complexity. As verification environments become more sophisticated, these limitations consume valuable engineering time that should focus on actual test development and coverage goals.
The YAML-based flows address these fundamental issues through human-readable configurations, tool-independent designs, and modular structures that scale naturally. Teams can simply describe verification intent—run 100 iterations with coverage—while the flow engine handles all tool complexity automatically. The same approach works from block-level testing to full-chip regression suites.
Key benefits realized with YAML
- Faster onboarding: New team members understand YAML configurations immediately.
 - Reduced maintenance: Configuration changes require simple text edits, not scripting.
 - Better collaboration: Clear syntax eliminates the “Makefile expert” bottleneck.
 - Tool flexibility: Switch between VCS, Questa, or XSIM without rewriting flows.
 - Project portability: YAML configurations move cleanly between different projects.
 
The choice between Makefile and YAML approaches ultimately depends on project complexity and team goals. Simple, stable projects may continue benefiting from Makefile simplicity. However, teams managing growing test suites, multiple tools, or frequent configuration changes will find YAML-based flows providing better long-term returns on their infrastructure investment.
Meet Sangani is an ASIC verification engineer at ASICraft Technologies.
Hitesh Manani is a senior ASIC verification engineer at ASICraft Technologies.
Shailesh Kavar is an ASIC verification technical manager at ASICraft Technologies.
Related Content
- Addressing the Verification Bottleneck
 - Making Verification Methodology and Tool Decisions
 - Gate level simulations: verification flow and challenges
 - Specifications: The hidden bargain for formal verification
 - Shift-Left Verification: Why Early Reliability Checks Matter
 

 

