Feed aggregator
🏆 MES announces a competition of scientific projects by young scientists!
📢 By its Order No. 1253 of 16.09.2025, the Ministry of Education and Science of Ukraine announced, on 17 September 2025, a competitive selection of projects for fundamental scientific research, applied scientific research, and scientific-technical (experimental) development by young scientists who work (study) at…
🚀 Registration for the world's largest space hackathon, the NASA Space Apps Challenge
🚀 KPI community, register for the NASA Space Apps Challenge, the world's largest space hackathon, held annually under the aegis of NASA in countries around the world.
Over 48 hours, teams tackle practical challenges. This year there are 19!
Optical Chip Beats Counterparts in AI Power Efficiency 100-Fold
Meta Connect 2025: VR still underwhelms; will smart glasses alternatively thrive?

For at least as long as Meta’s been selling conventional “smart” glasses (with partner EssilorLuxottica, whose eyewear brands include the well-known Oakley and Ray-Ban), rumors have suggested that the two companies would sooner or later augment them with lens-integrated displays. The idea wasn’t far-fetched; after all, Google Glass had one (standalone, in this case) way back in early 2013:

Meta founder and CEO Mark Zuckerberg poured fuel on the rumor fire when, last September, he demoed the company’s chunky but impressive Orion prototype:

And when Meta briefly, “accidentally” (call me skeptical, but I always wonder how much of a corporate mess-up versus an intentional leak these situations often really are) published a promo clip for (among other things) a display-inclusive variant of its Meta Ray-Ban AI glasses last week, we pretty much already had our confirmation ahead of the last-Wednesday evening keynote, in the middle of the 2025 edition of the company’s yearly Connect conference:
Yes, dear readers, as of this year, I’ve added yet another (at least) periodic tech-company event to my ongoing coverage suite, as various companies’ technology and product announcements align ever more closely with my editorial “beat” and associated readers’ interests.
But before I dive fully into those revolutionary display-inclusive smart glasses details, and in the spirit of crawling-before-walking-before-running (and hopefully not stumbling at any point), I’ll begin with the more modest evolutionary news that also broke at (and ahead of) Connect 2025.
Smart glasses get sporty
In the midst of my pseudo-teardown of a transparent set of Meta Ray-Ban AI Glasses published earlier this summer:

I summarized the company’s smart glasses product-announcement cadence up to that point. The first-generation Stories introduced in September 2020:

was, I wrote, “fundamentally a content capture and playback device (plus a fancy Bluetooth headset to a wirelessly tethered smartphone), containing an integrated still and video camera, stereo speakers, and a three-microphone (for ambient noise suppression purposes) array.”
The second-generation AI Glasses were unveiled three-plus years later, in October 2023. I own two sets, in fact, both Transitions-lens equipped:

They make advancements on these fundamental fronts…They’re also now moisture (albeit not dust) resistant, with an IPX4 rating, for example. But the key advancement, at least to this “tech-head”, is their revolutionary AI-powered “smarts” (therefore the product name), enabled by the combo of Qualcomm’s Snapdragon AR1 Gen 1, Meta’s deep learning models running both resident and in the “cloud”, and speedy bidirectional glasses/cloud connectivity. AI features include real-time Live Translation between languages, plus AI View, which visually identifies and audibly provides additional information about objects around the wearer.
And back in June (when it was published; I’d written it in early May), I was already teasing what was to come:
Next-gen glasses due later this year will supposedly also integrate diminutive displays.
More recently, on June 20 (just three days before my earlier coverage appeared in EDN, in fact), Meta and EssilorLuxottica released the sports-styled, Oakley-branded HTSN, the newest member of the AI Glasses product line:
The battery life was nearly 2x longer: up to eight hours under typical use, and 19 hours in standby. They charged up to 50% in only 20 minutes. The battery case now delivered up to 48 operating hours’ worth of charging capacity, versus 36 previously. The camera, still located in the left endpiece, now captured up to 3K resolution video (albeit the same 12 Mpixel still images as previously). And the price tag was also boosted: $499 for the initial limited-edition version, followed by more mainstream $399 variants.
A precursor retrofit and sports-tailored expansion
Fast forward to last week, and the most modest news coming from the partnership is that the Oakley HTSN enhancements have been retrofitted to the Ray-Ban styles, with one further improvement: 1080p video can now be captured at up to 60 fps in the Gen 2 versions. Cosmetically, they look unchanged from the Gen 1 precursors. And speaking of looks, trust me when I tell you that I don’t look nearly as cool as any of these folks do when donning them:
Meta and EssilorLuxottica have also expanded the Oakley-branded AI Glasses series beyond the initial HTSN style to the Vanguard line, in the process moving the camera above the nosepiece, otherwise sticking with the same bill-of-materials list, therefore specs, as the Ray-Ban Gen 2s:
And all of these, including a welcome retrofit to the Gen 1 Ray-Ban AI Glasses I own, will support a coming-soon new feature called conversation focus, which “uses the glasses’ open-ear speakers to amplify the voice of the person you’re talking to, helping distinguish it from ambient background noise in cafes and restaurants, parks, and other busy places.”
AI on display
And finally, what you’ve all been waiting for: the newest, priciest (starting at $799) Meta Ray-Ban Display model:
Unlike last year’s Orion prototype, they’re not full AR; the display area is restricted to a 600×600 resolution, 30 Hz refresh rate, 20-degree lower-right portion of the right eyepiece. But with 42 pixels per degree (PPD) of density, it’s still capable of rendering crisp, albeit terse information; keep in mind how close to the user’s right eyeball it is. And thanks to its coupling to Transitions lenses, early reviewer feedback suggests that it’s discernible even in bright sunlight.
Equally interesting is its interface scheme. While I assume that you can still control them using your voice, this time Meta and EssilorLuxottica have transitioned away from the right-arm touchpad and instead to a gesture-discerning wristband (which comes in two color options):

based on very cool (IMHO) surface EMG (electromyography) technology:
Again, the initial reviewer feedback that I’ve seen has been overwhelmingly positive. I’m guessing that at least in this case (Meta’s press release makes it clear that Orion-style full AR glasses with two-hand gesture interface support are still under active development), the company went with the wristband approach both because it’s more discreet in use and to optimize battery life. An always-active front camera, after all, would clobber battery life well beyond what the display already seemingly does; Meta claims six hours of “mixed-use” between-charges operating life for the glasses themselves, and 18 hours for the band.
Longstanding silicon-supplier partner Qualcomm was notably quieter than usual from an announcement standpoint last week. Back in June, it had unveiled the Snapdragon AR1+ Gen 1 Platform, which may very well be the chipset foundation of the display-less devices launched last week. Then again, given that the aforementioned operating life and video-capture quality advancements versus their precursor (running the Snapdragon AR1) are comparatively modest, they may result mostly-to-solely from beefier integrated batteries and software optimizations.
The Meta Ray-Ban Display, on the other hand, is more likely to be powered by a next-generation chipset, whether from Qualcomm—the Snapdragon AR1+ Gen 1 or perhaps even one of the company’s higher-end Snapdragon XR platforms—or another supplier. We’ll need to wait for the inevitable teardown-to-come (at $799, not from yours truly!) to know for sure. Hardware advancements aside, I’m actually equally excited (as will undoubtedly also be the software developers out there among my readership) to hear what Meta unveiled on day 2: a “Wearables Device Access Toolkit” now available as a limited developer preview, with a broader rollout planned for next year.
The prospect of more robust third-party app support neatly leads into my closing topic: what’s in all of this for Meta? The company has clearly grown beyond its Facebook origin and foundation, although it’s still fundamentally motivated to cultivate a community that interacts and otherwise “lives” on its social media platform. AI-augmented smart glasses are just another camera-plus-microphones-and-speakers (and now, display) onramp to that platform. It’ll be interesting to see both how Meta’s existing onramps continue to evolve and what else might come next from a more revolutionary standpoint. Share your guesses in the comments!
p.s…I’m not at all motivated to give Meta any grief whatsoever for the two live-demo glitches that happened during the keynote, given that the alternative is a far less palatable fully-pre-recorded “sanitary” video approach. What I did find interesting, however, were the root causes of the glitches: an obscure, sequence-of-events-driven software bug not encountered previously, as well as a local server overload fueled by the large number of AI Glasses in the audience (a phenomenon not encountered during the comparatively empty-venue preparatory dress rehearsals). Who would have thought that a bunch of smart glasses would result in a DDoS?
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Smart glasses skepticism: A look at their past, present, and future(?)
- Ray-Ban Meta’s AI glasses: A transparency-enabled pseudo-teardown analysis
- Apple’s Spring 2024: In-person announcements no more?
The post Meta Connect 2025: VR still underwhelms; will smart glasses alternatively thrive? appeared first on EDN.
Debugging a “buggy” networked CableCARD receiver

Welcome to the last in a planned series of teardowns resulting from the mid-2024 edition of “the close-proximity lightning strike that zapped Brian’s electronics devices”, following in the footsteps of a hot tub circuit board, a three-drive NAS, two eight-port GbE switches and one five-port one, and a MoCA networking adapter…not to mention all the gear that had expired in the preceding 2014 and 2015 lightning-exposure iterations…
This is—I’m sad to say, in no small part because they’re not sold any longer (even in factory-refurbished condition) and my accumulated “spares” inventory will eventually be depleted—the third straight time that a SiliconDust HDHomeRun Prime has bit the dust:


The functional failure symptoms—an inability to subsequently access the device from elsewhere over the LAN, coupled with an offline-status front panel LED—were identical in both the first and second cases, although the first time around, I couldn’t find any associated physical damage evidence. The second time around, on the other hand…


This third time, though, the failure symptoms were somewhat different, although the “dead” end (dead-end…get it? Ahem…) result was the same: a never-ending system loop of seemingly starting up, getting “stuck”, and rebooting:
Plus, my analysis of the systems’ insides in the first two cases had been more cursory than the comparative verbosity to which subsequent teardowns have evolved, so I decided a thorough revisit was apropos. I’ll start with some overview photos of our patient, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

See those left-side ventilation slots? Hold that thought:

Onward:


Two screws on top:

And two more on the bottom:


will serve as our pathway inside:



Before diving in, here’s visual confirmation:

that the “wall wart” still works (that said, I still temporarily swapped in the replacement HDHomeRun Prime’s PSU to confirm that any current-output deficit with this one wasn’t the root cause of the system’s bootup woes…it’s happened to me with other devices, after all…)



Onward:




Now for that inner plastic sleeve still surrounding three sides of the PCB, which slips right off:




This seems to be the same rev. 1.7D version of the design that I saw in the initial November 2014 teardown, versus the rev. 1.7F iteration analyzed a year (and a few months) later:

Once again, a heatsink dominates the PCB topside-center landscape, surrounded by, to the left, a Macronix MX25L1655D 16 Mbit serial interface flash memory (hold that thought) and a Hynix (now SK Hynix) H5PS5162FFR 64 Mbit DDR2 SDRAM, and above, a Realtek RTL8211CL single-port Ethernet controller. Back in late 2014, I relied on WikiDevi (or, if you prefer, DeviWiki) to ID what was underneath the heatsink:
The chip is Ubicom’s IP7150U communications and media processor; the company was acquired in early 2012 and I can’t find any mention of the SoC on new owner Qualcomm’s website. Here’s an archive of the relevant product page.
I confess that I had subsequently completely forgotten about my earlier online sleuthing success; regardless, I was determined to pop the heatsink off this time around:


Next, some rubbing alcohol and a fingernail to scrape off the marking-obscuring glue:

Yep, it’s the Ubicom IP7150U
Here’s an interesting overview of what happens when you interact with the CPU (and broader system) software via the SiliconDust-supplied Linux-based open source development toolset and a command line interface, by the way.
I was also determined this time to pry off the coax tuner subsystem’s Faraday cage and see what was underneath, although in retrospect I could have saved myself the effort by just searching for the press release first (but then again, what’s the fun in that?):

Those are MaxLinear MxL241SF single-die integrated tuner and QAM demodulator ICs, although why there are four of them in a three-tuner system design is unclear to me…(readers?)
Grace Hopper would approve
Now let’s flip the PCB over and see what’s underneath:

What’s that blob in the lower right corner, under the CableCard slot? Are those…dead bugs?
Indeed!

I’d recently bought a macro lens and ring light adapter set for my smartphone:

Which I thought would be perfect to try out for the first time in this situation:


That optical combo works pretty well, eh? Apparently, the plants in the greenhouse room next door to the furnace room, which does double-duty as my network nexus, attract occasional gnats. But how and why did they end up here? For one thing, the LED at this location on the other side of the PCB is the one closest to the aforementioned ventilation slots (aka, gnat access portals). And for another, this particular LED is a) perpetually illuminated whenever the device is powered up and b) multicolor, whereas the others are either green-or-off. As I wrote in 2014:
At the bottom [editor note: of the PCB topside] are the five front-panel LEDs. The one on the left [editor note: the “buggy” one] is normally green; it’s red when the HDHomeRun Prime can’t go online. The one to its right is also normally green; it flashes when the CableCARD is present but not ready, and is dark when the CableCARD is not present or not detected. And the remaining three on the right, when green-lit, signify a respective tuner in use.
Hey, wait…I wonder what might happen if I were to scrape off the bugs?

Nope, the device is still DOA:
I’ll wrap up with one more close-up photo, this one of the passives-dominated backside area underneath the topside Ubicom processor and its memory and networking companion chips:

And in closing, a query: why did the system die this time? As was the case the first time, albeit definitely not the case the second time, there’s no obvious physical evidence for the cause of this demise. Generally, and similar to the MoCA adapter I tore down last month, these devices have dual potential EMP exposure sources, Ethernet and coax. Quoting from last month’s writeup:
Part of the reason why MoCA devices keep dying, I think, is due to their inherent nature. Since they convert between Ethernet and coax, there are two different potential “Achilles Heels” for incoming electromagnetic spikes. Plus, the fact that coax routes from room to room via cable runs attached to the exterior of the residence doesn’t help.
In this case, to clarify, the “weak link” coax run is the one coming into the house from the Comcast feed at the street, not a separate coax span that would subsequently run from room to room within the home. Same intermediary exterior-exposure conceptual vulnerability, however.
The way the device is acting this time, though, I wonder if the firmware in the Macronix flash memory might have gotten corrupted, resulting in a perpetual-reboot situation. Or maybe the processor just “loses its mind” the first time it tries to access the no-longer-functional Ethernet interface (since this seemed to be the root cause of the demise the first two times) and restarts. Reader theories, along with broader thoughts, are as-always welcomed in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The whole-house LAN: Achilles-heel alternatives, tradeoffs, and plans
- Lightning strikes…thrice???!!!
- Computer and network-attached storage: Capacity optimization and backup expansion
- A teardown tale of two not-so-different switches
- Dissecting (and sibling-comparing) a scorched five-port Gigabit Ethernet switch
- Broke MoCA II: This time, the wall wart got zapped, too
The post Debugging a “buggy” networked CableCARD receiver appeared first on EDN.
Space Forge and United Semiconductors partner to develop supply chain for space-grown materials
Breaking Boundaries: Advanced Patterning Paves the Way for Next-Gen Chips
The cutting-edge semiconductor industry is now shrinking chip features to dimensions measured in mere atoms. Such a leap requires advanced patterning, a vital process in which high-precision lithography, deposition, and etching techniques work together to scale devices beyond the reach of conventional methods.
These advanced patterning processes will be used in future logic, DRAM, and NAND devices to cram more transistors into smaller dies, leading to faster speeds, lower power consumption, and richer functionality. Advanced patterning also increases yields, minimizes defects, and cuts costs at advanced nodes.
Why Does Advanced Patterning Matter?
Unlike conventional photolithography, advanced patterning was developed to push past its resolution limits. It provides finer control and more flexible layouts, letting semiconductor manufacturers continue to extend Moore’s Law.
Benefits include:
- Higher performance and density: More functionality in smaller chip areas.
- Improved yields: Larger process windows reduce defects.
- Sustainability: Advanced processes deliver better energy and cost efficiency.
Patterning Techniques in Action
Single Patterning versus Multipatterning
Single patterning is the simplest and most cost-effective method, but it applies only when the scanner can resolve the smallest features directly.
Multi-patterning (double, triple, or quadruple) pushes past resolution limits by applying multiple exposures and photomasks. Examples of such techniques are Litho-Etch-Litho-Etch (LELE) and Litho-Freeze-Litho-Etch (LFLE), used to create the feature sizes required by very dense chip designs.
Self-Aligned Patterning
Self-aligned processes, including SADP, SAQP, and SALELE, use sidewall spacers or etched references to define features smaller than those that can be lithographically defined while improving placement accuracy and pattern fidelity.
EUV Lithography
Next is extreme ultraviolet (EUV) lithography, with a far shorter wavelength of 13.5 nm. EUV can produce the sub-10-nm features required for nodes like 7 nm and 5 nm, although challenges remain in areas like resist sensitivity, defect control, and edge placement error (EPE).
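To put those wavelengths in perspective, single-exposure resolution roughly follows the Rayleigh scaling half-pitch ≈ k1 · λ / NA, which is why EUV relaxes the pressure on multi-patterning. Below is a minimal Python sketch of that arithmetic; the k1 and NA values are illustrative assumptions, not any specific scanner’s specification.

```python
# Rayleigh resolution estimate: half-pitch ~ k1 * wavelength / NA.
# k1 and NA below are illustrative assumptions, not vendor specs.

def half_pitch_nm(k1: float, wavelength_nm: float, na: float) -> float:
    """Approximate minimum printable half-pitch for a single exposure."""
    return k1 * wavelength_nm / na

arf = half_pitch_nm(0.30, 193.0, 1.35)  # ArF immersion: ~42.9 nm
euv = half_pitch_nm(0.30, 13.5, 0.33)   # EUV: ~12.3 nm

# LELE double patterning splits one dense layer across two exposures,
# roughly halving the effective pitch of the combined pattern.
print(f"ArF single exposure: {arf:.1f} nm; with LELE: {arf / 2:.1f} nm")
print(f"EUV single exposure: {euv:.1f} nm")
```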
Stepping Over the Patterning Challenges
As chips scale toward 3 nm and smaller, tolerances go down to just a few atoms. Controlling EPE caused by stochastic photoresist defects, photon limitations, and scanner imperfections is one of the biggest hurdles. Even a single misplaced edge can lead to yield loss in wafers containing billions of transistors.
Lam Research enables advanced logic and memory scaling through a suite of precision patterning technologies, including Akara for ultra-accurate etching, VECTOR DT for wafer flatness enhancement, Corvus for vertical ion edge control, Kyber for cost-effective line edge roughness reduction, and Aether for efficient dry EUV photoresist processing.
The Road Ahead
As the semiconductor roadmap pushes toward the angstrom era, advanced patterning is no longer optional; it is the foundation of innovation. With companies like Lam Research leading the charge, the industry is unlocking the ability to build smaller, faster, and more sustainable chips that will power AI, advanced computing, and next-generation devices.
(This article has been adapted and modified from content on Lam Research.)
The post Breaking Boundaries: Advanced Patterning Paves the Way for Next-Gen Chips appeared first on ELE Times.
Well Degausser is dead. Repaired the first time and it melted one of the brass screws on the thyristor.
First repair seemed to work but melted a screw. Repaired the damage and put it all back together. Then blew all 4 thyristors again. Apart from a bit of ringing in the ears, we're alright.
Latest issue of Semiconductor Today now available
New GST Rates Bring Relief to Electronics Industry from September 22
The Government of India has taken a landmark step toward simplifying the tax structure and easing the financial burden on the common man, with Prime Minister Shri Narendra Modi unveiling the next generation of GST reforms during the festive season, a pivotal moment in India’s economic transformation. The new GST rates came into force on Monday, September 22, and the news has come as a relief for the electronics industry.
As per the reports, the government has announced major tax concessions on many household electronics and technology products:
Electric Accumulators: The GST rate has been reduced from 28% to 18%, substantially lowering the cost of backup power solutions for digital devices and small appliances. This change is expected to boost the adoption of energy storage systems in homes and offices, especially in areas with unreliable power supply.
Composting Machines: With a reduction from 12% to 5%, the rates now encourage a wider acceptance of organic waste management and waste-to-energy solutions.
Two-Way Radios: Taxes have shrunk from the erstwhile 12% to a meagre 5%. This change lowers procurement costs for security forces, including the police, paramilitary units, and defense establishments.
Industry experts expect these reforms to have far-reaching benefits: the reductions will foster domestic demand, increase sales of electronic goods, and further enlarge the market for local producers.
Rationalised GST under the new reforms will therefore make the electronics sector more accessible, affordable, and competitive, while also sustaining digitisation and empowerment at the government level.
The post New GST Rates Bring Relief to Electronics Industry from September 22 appeared first on ELE Times.
A short tutorial on hybrid relay design

What’s a hybrid relay? How does it work? What are its key building blocks? Whether you are designing a power control system or tinkering with a do-it-yourself automation project, it’s important to demystify the basics and know why this hybrid approach is taking off. Here is a brief tutorial on hybrid relays, which also explains why they are becoming the go-to choice for engineers and makers alike.
Read the full article at EDN’s sister publication, Planet Analog.
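As a taste of the concept before you click through: a hybrid relay typically pairs a solid-state switch, which absorbs the arc-prone make/break transients, with electromechanical contacts that carry the steady-state current at low loss. Here is a minimal MicroPython sketch of that sequencing; the pin assignments and delays are illustrative assumptions, not values from the Planet Analog article.

```python
# Minimal MicroPython sketch of hybrid relay sequencing: the solid-state
# path switches the load (no contact arcing), then the electromechanical
# relay (EMR) carries steady-state current with low contact resistance.
# Pin numbers and delays below are assumptions for illustration only.
from machine import Pin
import time

ssr = Pin(4, Pin.OUT)  # drives the solid-state switch (assumed wiring)
emr = Pin(5, Pin.OUT)  # drives the electromechanical relay coil

def hybrid_on():
    ssr.on()            # solid-state path takes the inrush transient
    time.sleep_ms(20)   # let load current stabilize (assumed delay)
    emr.on()            # mechanical contacts close with no arc
    time.sleep_ms(10)   # allow contact bounce to settle
    ssr.off()           # hand off to contacts: no SSR conduction loss

def hybrid_off():
    ssr.on()            # shunt current back onto the solid-state path
    time.sleep_ms(10)
    emr.off()           # contacts open at (nearly) zero current
    time.sleep_ms(20)
    ssr.off()           # solid-state switch interrupts the load cleanly
```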
Related Content
- Common Types of Relay
- IC drives up to four single-coil latching relays
- Designing a simple electronic impulse relay module
- Mastering latching relays: A hands-on design guide
- Electromechanical relays: an old-fashioned component solves modern problems
The post A short tutorial on hybrid relay design appeared first on EDN.
Vishay Intertechnology to Showcase Solutions for AI and EV Applications at PCIM Asia 2025
Company to Highlight Broad Portfolio of Semiconductor and Passive Technologies in a Series of Reference Designs and Product Demos Focused on AI Servers, Smart Cockpits, Vehicle Computing Platforms, and More
Vishay Intertechnology, Inc. announced that the company will be showcasing its latest semiconductor and passive technologies at PCIM Asia 2025. In Booth N5, C48, visitors are invited to explore Vishay’s differentiated products and reference designs tailored to the rapidly evolving demands of AI infrastructure and electric vehicles (EV).
At PCIM Asia, Vishay’s exhibits will highlight the company’s solutions for server power supplies, DC/DC converters, power delivery units, BBUs, mainboards, and optical modules in AI infrastructure and applications, as well as smart cockpit and vehicle computing and ADAS platforms for next-generation EVs. To meet the needs of these high growth sectors, the company is focused on expanding its capacity and optimizing its global manufacturing footprint to broaden its portfolio.
Vishay AI solution components on display at PCIM Asia will include power MOSFETs with extremely low on-resistance in PowerPAK 8×8, 10×12, SO-8DC double-sided cooling—for high efficiency thermal management—1212-F, and SO-8S packages; microBUCK buck regulators with 4.5 V to 60 V input; 50 A VRPower integrated power stages in the thermally enhanced PowerPAK MLP55-31L package; SiC diodes in TO-220, TO-247, D2PAK, SMA, and SlimSMA packages; TVS in DFN and SlimSMA packages; surface-mount TMBS rectifiers with ultra-low forward voltage drop of 0.38 V; IHLE series inductors with integrated e-field shields for maximum EMI reduction that handle high transient current spikes without saturation, and low DCR and high voltage power inductors; the T55 vPolyTan polymer tantalum chip capacitor with ultra-low ESR; thin film chip resistors with operating frequencies up to 70 GHz; Power Metal Strip resistors with high power density and low ohmic values, TCR, inductance, and thermal EMF; and PTC thermistors with high energy absorption levels up to 340 J.
Highlighted Vishay automotive solutions will consist of reference designs, demos and components solutions. Reference designs for automotive applications will include active discharge circuits for 400 V and 800 V; a 22 kW bidirectional 800 V to 800 V power converter for OBCs; an intelligent battery shunt built on WSBE Power Metal Strip resistors, with low TCR, inductance, and thermal EMF, and a CAN FD interface for 400 V / 800 V systems; a 4 kW bidirectional 800 V to 48 V power converter for auxiliary power; a compact 800 V power distribution solution; and a 48 V eFuse.
Highlighted Vishay Automotive Grade components for smart cockpit, vehicle computing and ADAS, and other automotive applications include fully integrated proximity, ambient light, force, gesture, and transmissive optical sensors; Ethernet ESD protection diodes; surface-mount diodes in the eSMP package; MOSFETs with extremely low on-resistance in PowerPAK 8x8LR, SO-10LR, 1212, and SO-8L packages; IHLP series low profile high current power inductors that handle high transient current spikes without saturation; the T51 vPolyTan polymer tantalum chip capacitor with ultra-low ESR; metallized polypropylene DC-Link film capacitors with high temperature operation up to +125 °C; and Automotive Grade EMI suppression safety capacitors with the ability to withstand temperature humidity bias (THB) testing of 85 °C / 85 % for 1000 h.
The post Vishay Intertechnology to Showcase Solutions for AI and EV Applications at PCIM Asia 2025 appeared first on ELE Times.
Towards Greener Connectivity: Energy-Efficient Design for 6G Networks
The need for sustainable mobile networks is stronger today than ever before. Rising operational costs, tightening environmental rules, and international commitments to sustainable development are all compelling telecom operators and infrastructure vendors to rethink how networks are built and powered. Since wireless infrastructure is a major consumer of energy, the transition from 5G to 6G is an opportunity to make sustainability a prime consideration alongside speed and capacity.
According to ITU-R Recommendation M.2160 on the IMT-2030/6G Framework, sustainability remains a key aspiration: mobile systems are expected to be designed to use minimal power, emit minimal greenhouse gases, and utilize resources efficiently. In contrast to previous generations, where energy efficiency was an afterthought, 6G can incorporate green-by-design concepts from the start, delivering both excellent performance and low environmental impact.
Energy-Saving Features in 5G: Achievements and Limitations
Innovations such as RRC_INACTIVE mode, Idle Mode Signaling Reduction, Discontinuous Reception (DRX), Discontinuous Transmission (DTX), and Carrier Aggregation control helped reduce unnecessary signaling and lower energy use.
Later 5G releases added enhancements such as:
- Dynamic SSB transmission control based on cell load.
- On-Demand SIB1 broadcasting.
- Cell switch-off and micro-sleep for base stations.
- Improved RRC_INACTIVE mobility.
- Partial activation of antenna ports.
- BWP operation for UEs.
- Dynamic PDCCH monitoring control.
- SCell dormancy in carrier aggregation.
- Low-power receivers for UEs.
However, structural shortcomings remain: frequent SSB bursts (every 20 ms) allow only shallow sleep, and persistent antenna activation wastes energy even when traffic is low. Many legacy UEs cannot support these newer efficiency modes, and high-traffic scenarios still lack robust network-level energy-saving mechanisms. These gaps necessitate a fundamental rethink of energy efficiency in 6G.
“Less ON, More OFF” Is the Principle on Which 6G Is Built
In 6G, energy efficiency becomes a paramount design concern rather than a secondary feature. “Less ON, More OFF” is the banner under which unnecessary transmissions are eliminated and base stations and UEs are put to sleep whenever possible.
Samsung Research identifies three main enablers:
- Carrier-Dependent Capabilities
6G introduces Energy-Saving Network Access (ENA), which dynamically controls SSB transmission.
Multiple SSB types: Normal (NM-SSB), Energy-Saving (ES-SSB), and On-Demand (OD-SSB), provide far more flexible signaling than 5G’s always-on fixed SSBs.
ES-SSB typically extends the transmission periodicity (e.g., to 160 ms), while OD-SSBs are transmitted only on demand, reducing base-station standby energy (see the toy duty-cycle sketch after this list).
- Dynamic Time/Frequency/Spatial/Power Adaptation
Here, DSA actively adapts the number of active antennas and beam directions based on real-time demand.
It avoids over-provisioning and idle-power waste, and is particularly applicable to high-frequency bands, where power scales with antenna density.
- Energy-Aware Network Management and Exposure (EANF)
This interfaces with the central orchestration layer for real-time monitoring of energy consumption, initiating power-aware policies for scheduling, load balancing, and carrier activation.
Further, in the realm of AI-RAN, better traffic predictions will enable optimized beam configurations and event-driven measurements, reducing signaling and hence power consumption.
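To see why stretching or skipping SSB bursts matters so much, here is a toy duty-cycle model in Python; the wake-window per burst is an assumed figure for illustration, not a Samsung or 3GPP number.

```python
# Toy duty-cycle model of "Less ON, More OFF": a base station that must
# wake for every SSB burst can sleep longer as the burst period grows.
# The per-burst wake window below is an assumption, not a spec value.

def awake_fraction(period_ms: float, wake_ms: float) -> float:
    """Fraction of time the radio stays awake just to serve SSB bursts."""
    return wake_ms / period_ms

WAKE_MS = 5.0                            # assumed wake window per burst
legacy = awake_fraction(20.0, WAKE_MS)   # 5G-style 20 ms SSB period
es_ssb = awake_fraction(160.0, WAKE_MS)  # ES-SSB 160 ms period

print(f"20 ms period:  {legacy:.1%} minimum awake time")   # 25.0%
print(f"160 ms period: {es_ssb:.1%} minimum awake time")   # 3.1%
# OD-SSB goes further still: with no users to serve, bursts (and their
# wake-ups) occur only on demand, so the idle awake floor approaches zero.
```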
Energy Conservation for UEs in 6G
User devices remain at the core of the 6G energy-saving scheme. Network-UE joint power saving opens the way for more proactive strategies, whereby the network predicts UE activity, traffic patterns, and battery status and coordinates wake-up intervals accordingly.
Some of these key innovations include:
- Ultra-low-power wake-up receivers that keep energy use at a minimum.
- Context-aware wake-up signals powered by ML techniques evaluating and adapting timing and frequency.
- Collaborative scheduling between the network and the UE to reduce idle consumption without degradation of user experience.
Performance and Energy Gains
Internal studies with 24-hour traffic profiles demonstrated:
- ENA cuts energy consumption by 43.37% at low traffic and reaches 20.3% average savings.
- DSA further reduces power consumption by another 14.4%, scaling the antenna ports with demand.
- Together, ENA + DSA can reach an energy saving of ~21.2% while also enhancing the user-perceived throughput (UPT) by up to 8.4%.
These results show that 6G energy saving is not just about switching off to save power; it also brings efficiency improvements and enhanced network responsiveness.
Conclusion:
Far from being a small improvement, the 6G energy-saving vision represents a paradigm shift. Networks can enter low-power modes more frequently when ENA, DSA, and EANF cooperate, which minimises waste while maintaining service quality. By fusing AI-native intelligence, signalling innovation, and hardware flexibility, 6G offers faster and more dependable connectivity as well as a sustainable foundation for the upcoming ten years of wireless evolution.
(This article has been adapted and modified from content on Samsung.)
The post Towards Greener Connectivity: Energy-Efficient Design for 6G Networks appeared first on ELE Times.
Automating FOWLP design: A comprehensive framework for next-generation integration

Fan-out wafer-level packaging (FOWLP) is becoming a critical technology in advanced semiconductor packaging, marking a significant shift in system integration strategies. Industry analyses show 3D IC and advanced packaging make up more than 45% of the IC packaging market value, underscoring the move to more sophisticated solutions.
The challenges are significant—from thermal management and testing to the need for greater automation and cross-domain expertise—but the potential benefits in terms of performance, power efficiency, and integration density make these challenges worth addressing.

Figure 1 3D IC and advanced packaging make up more than 45% of the IC packaging market value. Source: Siemens EDA
This article explores the automation frameworks needed for successful FOWLP design and focuses on core design processes and effective cross-functional collaboration.
Understanding FOWLP technology
FOWLP is an advanced packaging method that integrates multiple dies from different process nodes into a compact system. By eliminating substrates and using wafer-level batch processing, FOWLP can reduce cost and improve yield. Because it shortens interconnect lengths, FOWLP packages offer lower signal delays and power consumption compared to conventional methods. They are also thinner, making them ideal for space-constrained devices such as smartphones.
Another key benefit is support for advanced stacking, such as placing DRAM above a processor. As designs become more complex, this enables higher performance while maintaining manageable form factors. FOWLP also supports heterogeneous integration, accommodating a wide array of die combinations to suit application needs.
The need for automation in FOWLP design
Designing with FOWLP exceeds the capabilities of traditional PCB design methods. Two main challenges drive the need for automation: the inherent complexity of FOWLP and the scale of modern layouts, which rack up millions of pins and tens of thousands of nets. Manual techniques cannot reliably manage this complexity and scale, increasing the risk of errors and inefficiency.
Adopting automation is not simply about speeding up manual tasks. It requires a complete change in how design teams approach complex packaging design and collaborate across disciplines. Let’s look at a few of the salient ways to make this transformation successful.
- Technology setup
All FOWLP designs start with a thorough technology setup. Process design kits (PDKs) from foundries specify layer constraints, via spans, and spacing rules. Integrating these foundry-specific rules into the design environment ensures every downstream step follows industry requirements.
Automation frameworks must interpret and apply these rules consistently throughout the design. Success here depends on close attention to detail and a deep understanding of both the foundry’s expectations and the capabilities of the design tools.
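As a concrete illustration, foundry rules can be captured once in machine-checkable form and then applied uniformly by every downstream step. The Python sketch below uses entirely hypothetical layer names and values, not any real PDK.

```python
# Sketch of a machine-checkable rule deck distilled from a foundry PDK.
# Layer names and numeric limits are hypothetical, not from a real PDK.

RULES = {
    "RDL1": {"min_width_um": 2.0, "min_space_um": 2.0},
    "RDL2": {"min_width_um": 2.0, "min_space_um": 2.0},
}
VIA_SPANS = {("RDL1", "RDL2")}  # layer pairs a via is allowed to connect

def check_trace(layer: str, width_um: float, space_um: float) -> list[str]:
    """Return rule violations for one routed trace."""
    rule = RULES[layer]
    errors = []
    if width_um < rule["min_width_um"]:
        errors.append(f"{layer}: width {width_um} < {rule['min_width_um']} um")
    if space_um < rule["min_space_um"]:
        errors.append(f"{layer}: space {space_um} < {rule['min_space_um']} um")
    return errors

print(check_trace("RDL1", 1.5, 2.5))  # -> one width violation
```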
- Assembly and floor planning
During assembly and floor planning, designers establish the physical relationships between dies and other components. This phase must account for thermal and mechanical stress from the start. Automation makes it practical to incorporate early thermal analysis and flag potential issues before fabrication.
Effective design partitioning is also critical when working with automated layouts. Automated classification and grouping of nets allow custom routing strategies. This is especially important for high-speed die-to-die interfaces, compared to less critical utility signals. The framework should distinguish between these and apply suitable methodologies.
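A minimal sketch of what such automated net classification might look like, assuming purely hypothetical net-naming conventions:

```python
# Sketch of automated net classification driving routing strategy choice.
# The name patterns and class labels are hypothetical conventions.
import re

ROUTING_CLASSES = [
    (re.compile(r"^D2D_"), "die_to_die"),   # high-speed die-to-die interface
    (re.compile(r"^(VDD|VSS)"), "power"),   # power distribution nets
    (re.compile(r"_CLK$"), "clock"),        # skew/length-matched routing
]

def classify(net: str) -> str:
    for pattern, cls in ROUTING_CLASSES:
        if pattern.search(net):
            return cls
    return "utility"  # default: relaxed constraints, routed last

for net in ["D2D_UCIE_TX0", "VDD_CORE", "MEM_CLK", "SPI_MISO"]:
    print(net, "->", classify(net))
```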
- Fan-out and routing
Fan-out and routing are among the most technically challenging parts of FOWLP design. The automation system must support advanced power distribution networks such as regional power islands, floodplains, or striping. For signal routing, the system needs to manage many constraints at once, including routing lengths, routing targets, and handling differential pairs.
Automated sequence management is essential, enabling designers to iterate and refine routing as requirements evolve. Being able to adjust routing priorities dynamically helps meet electrical and physical design constraints.
- Final verification and finishing
The last design phase is verification and finishing. Here, automation systems handle degassing hole patterns, verifying stress and density requirements, and integrating dummy metal fills. Preparing data for GDS or OASIS output is streamlined, ensuring the final package meets manufacturing and reliability standards.
Building successful automated workflows
For FOWLP automation flows to succeed, frameworks must balance technical power with ease of use. Specialists should be able to focus on their discipline without needing deep programming skills. Automated commands should have clear, self-explanatory names, and straightforward options.
Effective frameworks promote collaboration among package designers, layout specialists, signal and power integrity analysts, and thermal and mechanical engineers. Sharing a common design environment helps teams work together and apply their skills where they are most valuable.
A crucial role in FOWLP design automation is the replay coordinator. This person orchestrates the entire workflow, managing contributions from all team members as well as the sequence and dependencies of automated tasks, ensuring that all the various design steps are properly sequenced and executed.
To be effective, replay coordinators need a high-level understanding of the overall process and strong communication with the team. They are responsible for interpreting analysis results, coordinating adjustments, and driving the group toward optimal design outcomes.
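Conceptually, the replay coordinator’s job resembles executing a dependency graph of design steps in a valid order after every change. A minimal Python sketch, with illustrative step names standing in for real tool commands:

```python
# Sketch of replay coordination: steps declare their dependencies, and a
# topological sort guarantees a valid execution order on every rerun.
# Step names are illustrative stand-ins for actual tool invocations.
from graphlib import TopologicalSorter

STEPS = {
    "tech_setup":    set(),
    "floorplan":     {"tech_setup"},
    "net_classes":   {"floorplan"},
    "route_d2d":     {"net_classes"},   # critical interfaces routed first
    "route_utility": {"route_d2d"},
    "verify":        {"route_utility"},
}

def replay(run_step):
    """Re-execute the whole flow in dependency order."""
    for step in TopologicalSorter(STEPS).static_order():
        run_step(step)

replay(lambda step: print("running:", step))
```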
The tools of the new trade
This successful shift in how we approach microarchitectural design requires new tools and technologies that support the transition from 2D to 3D ICs. Siemens EDA’s Innovator3D IC is a unified cockpit for design planning, prototyping, and predictive analysis of 2.5/3D heterogeneous integrated devices.
Innovator3D IC constructs a digital twin, unified data model of the complete semiconductor package assembly. By using system technology co-optimization, Innovator3D IC enables designers to meet their power, performance, area, and cost objectives.

Figure 2 Innovator3D IC features a unified cockpit. Source: Siemens EDA
FOWLP marks a fundamental evolution in semiconductor packaging. The future of semiconductor packaging lies in the ability to balance technological sophistication with practical implementation. Success with this technology relies on automation frameworks that make complex designs practical while enabling effective teamwork.
As the industry continues to progress, organizations with robust FOWLP automation strategies will have a competitive advantage in delivering advanced products and driving the next wave of semiconductor innovation.
Todd Burkholder is a Senior Editor at Siemens DISW. For over 25 years, he has worked as editor, author, and ghost writer with internal and external customers to create print and digital content across a broad range of EDA technologies. Todd began his career in marketing for high-technology and other industries in 1992 after earning a Bachelor of Science at Portland State University and a Master of Science degree from the University of Arizona.
Chris Cone is an IC packaging product marketing manager at Siemens EDA with a diverse technical background spanning both design engineering and EDA tools. His unique combination of hands-on design experience and deep knowledge of EDA tools provides him with valuable insights into the challenges and opportunities of modern semiconductor packaging, particularly in automated workflows for FOWLP.
Editor’s Notes
This is the third and final part of the article series on 3D IC. The first part provided essential context and practical depth for design engineers working on 3D IC systems. The second part highlighted 3D IC design toolkits and workflows to demonstrate how the integration technology works.
Related Content
- 3D IC Design
- Thermal analysis tool aims to reinvigorate 3D-IC design
- Heterogeneous Integration and the Evolution of IC Packaging
- Tighter Integration Between Process Technologies and Packaging
- Advanced IC Packaging: The Roadmap to 3D IC Semiconductor Scaling
The post Automating FOWLP design: A comprehensive framework for next-generation integration appeared first on EDN.
Building Trustworthy Software with AI: The Generate-and-Check Paradigm
Whether it is designing products and creative content or engineering software, artificial intelligence is steadily changing how we build and interact with technology. But although AI can speed up the development process, the real challenge lies in trusting its output, particularly in safety-critical applications. How can we ensure that AI-generated software is correct, secure, and efficient under real-world conditions?
Bosch Research recognizes the immense promise of the generate-and-check approach in driving innovation and practical impact. The approach combines generative AI, which suggests solutions, with systematic checks that enforce correctness, safety, and performance. AI creativity is balanced with a dose of rigor, a combination well suited to software engineering.
How Generate-and-Check Works
Think of solving a crossword puzzle: you may try out different words, but each suggestion is validated against the length of the clue and the letters already in place. Similarly, in software engineering, AI can generate new code or refactor existing code, while automated checks verify compliance with rules and desired outcomes.
Those rules can be very simple, like coding-style enforcement, or highly advanced, like formal verification of software properties. From this perspective, safety, correctness, and adherence to requirements are ensured by verifying the AI’s proposals, rather than by verifying every possible system state.
The result: less error-prone AI assistance, and far less reliance on constant human supervision.
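In pseudocode terms, the paradigm reduces to a propose/validate loop. The sketch below is a minimal Python rendering of the idea, not Bosch’s implementation; `propose_fix` stands in for a call to a generative model.

```python
# Minimal generate-and-check loop: only candidates that pass every check
# are accepted. `propose_fix` is a hypothetical stand-in for an AI model.

def generate_and_check(propose_fix, checks, max_attempts=5):
    """Request candidates until one passes all checks, feeding back failures."""
    feedback = None
    for _ in range(max_attempts):
        candidate = propose_fix(feedback)           # generative step
        failures = [c.__name__ for c in checks if not c(candidate)]
        if not failures:
            return candidate                        # accepted
        feedback = f"failed checks: {failures}"     # steer the next proposal
    raise RuntimeError("no candidate passed all checks")

# Example with one trivial check: the candidate must at least compile.
def compiles(code: str) -> bool:
    try:
        compile(code, "<candidate>", "exec")
        return True
    except SyntaxError:
        return False

print(generate_and_check(lambda feedback: "x = 1 + 1", [compiles]))
```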
Use Case 1: Smarter Code Refactoring
Refactoring is a perfect application for generate-and-check. The AI proposes improvements, e.g., migrating to more efficient frameworks, while automated checks verify that the new version is equivalent to the old code.
This approach differs from traditional ones based mostly on unit tests in that it guarantees behavioral invariance, i.e., that the refactored code behaves exactly the same while improving in maintainability or efficiency. Tools developed at Bosch Research also allow you to profile the result, to make sure that performance has stayed the same or improved after the changes have been made.
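One lightweight way to approximate a behavioral-invariance check is randomized differential testing: run the old and new implementations on many generated inputs and reject the proposal on any mismatch. Below is a minimal sketch with stand-in functions; the equivalence verification described above is stronger than this sampling approach.

```python
# Sketch of a randomized behavioral-equivalence check for a refactoring.
# Both implementations here are illustrative stand-ins.
import random

def old_sum_of_squares(xs):           # legacy version
    total = 0
    for x in xs:
        total += x * x
    return total

def new_sum_of_squares(xs):           # AI-proposed refactoring
    return sum(x * x for x in xs)

def behaviorally_equivalent(f, g, trials=1000):
    """Differential test: identical outputs on randomly generated inputs."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        if f(xs) != g(xs):
            return False              # counterexample: reject the proposal
    return True

assert behaviorally_equivalent(old_sum_of_squares, new_sum_of_squares)
```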
Use Case 2: Reliable Software Translation
On the other hand, software translation remains an area where AI excels but demands human oversight. Translating legacy code into a safer or more modern language sounds appealing, but traditional transpilers often fail to preserve the idiomatic essence of the target environment.
With generate-and-check, AI can translate idiomatically while automated tools check for functional correctness, safety, security, and performance. This finally offers a chance to modernize codebases in bulk without stealthily introducing vulnerabilities.
Embedding into the Developer Workflow
AI becomes valuable to developers when its tools integrate with existing toolchains. Generate-and-check can appear in various forms:
- IDE plugins for quick, low-latency assistance during coding.
- Background workflows for longer tasks, such as legacy migration, where AI proposals can be rolled out as pull requests. Each PR can carry evidence, such as performance metrics or validation checks, preserving developers’ agency while adding automated rigor.
This ensures that AI remains an aid rather than a substitute, offering reliable recommendations while developers make the final decisions.
Looking Forward:
The generate-and-check paradigm is a mindset shift for trustworthy AI in software engineering, not merely a technical approach. By combining AI’s generative capacity with reliable verification, it enables safer, better, and more efficient software development.
(This article has been adapted and modified from content on Bosch.)
The post Building Trustworthy Software with AI: The Generate-and-Check Paradigm appeared first on ELE Times.
Unusual quartz crystals
Here’s a pair of 99.9985 kHz crystals from an HP3571A spectrum analyzer. They were used in a 5-stage filter that set the IF bandwidth, and are simply gold-plated flat quartz plates with centered contacts on both sides, packaged like vacuum tubes. Manufactured by Northern Engineering Laboratories, Burlington, WI.
In Memory of Sergei Igorevich Sikorsky
Sergei Igorevich Sikorsky, vice president for special projects at Sikorsky Aircraft and son of the world-renowned aircraft designer Igor Sikorsky, after whom our university is named, recently passed away in the USA.
Armstrong’s Method of FM Generation
My Homemade Electromagnetic Accelerator Project
Hi everyone! After 10 months of working on and improving my accelerator, it's finally complete! This device accelerates a magnet in circles using 4 electromagnets and Hall-effect sensors (I tried IR sensors but failed 😔). The sensors detect the magnet, and then an N-MOSFET switches the coil on and off at the right moment, which accelerates the magnet. I've also used a 12 V to 5 V voltage regulator, and for one reason or another I've put a quick-ignition-and-fire-hazard warning (or whatever you call it) on the voltage regulator. If you want to know more, or just want to see the accelerator in action, you can find the video on the KIWIvolt YouTube channel. I'm thinking of making a part 2 in which the magnet is a sphere, and of replacing the breadboard with a PCB. If you have any other ideas or wishes, please let me know so I can adjust it and perfect my accelerator even further.
Keyboard upgrade from USB to BLE with an ESP32
submitted by /u/avionic_Railcar



