Feed aggregator

Recapped an old NOS Heathkit PS-4 today; here is the result

Reddit:Electronics - Mon, 10/27/2025 - 17:46

I recapped an old but brand-new-looking '50s-'60s Heathkit tube power supply.

These were made back in the day for the hobbyist workbench, as a power supply specialized for building tube amps or tube-based radio equipment.

They are like a regular linear PSU, but with filament outputs (typically low voltages, 1.2-24 V, commonly 6.3 V) and a 0-400 V high-voltage output for the anode/grid/cathode supplies.

It went up in smoke the last time I fired it up, and I found the old paper caps to be dry, so I've rewired the whole thing. I haven't fired it up yet, but I thought I'd show it to you guys before I blow it up. /s

submitted by /u/MarinatedTechnician

My first HDMI swap.

Reddit:Electronics - Mon, 10/27/2025 - 17:39

I’ve been watching YouTube videos lately of people repairing PS4, PS5, and other consoles, and I thought I’d give it a try. I bought all of the necessary stuff to get me started, and this is my first swap, on a PS4.

Everything worked fine, and I sold it the same day.

submitted by /u/Lanky-Classroom868

A fresh gander at a mesh router

EDN Network - Mon, 10/27/2025 - 16:12

In one of my recent teardowns, commenting on the variety of piece parts included with the manufacturer’s various products in its streaming media box line, I noted:

I would not want to be the person in charge of managing onn. product contents inventory…

Seeming diversity, but under-the-hood commonality

Multiply that sentiment by 100x or so and you’ve got a sense of my feelings about the poor folks who manage the inventories of (and forecast the future sales of) router manufacturers’ product lines. Today’s teardown victim is from Linksys, but the situation’s very much the same at ASUS, (Amazon) eero, Netgear, TP-Link or any of the other hardware providers.

There are now only a few foundation silicon suppliers, and (unlike in the relatively recent past) the pace of technology evolution has notably slowed of late, particularly in the wireless realm. The most significant innovation of the past decade has been mesh networking, which only indirectly deals with the Wi-Fi signals being broadcast to and from any particular network node, mostly focusing instead on the node-to-node handoffs as LAN clients move through the network.

The results? Supplier-to-supplier and product-to-product enclosure and other cosmetics differences, but based on essentially the same underlying hardware, differentiated by software (along with, for example, antenna type and quantity and DRAM capacity variations), as each company strives to differentiate in any (preferably low-cost) way possible to squeeze whatever profit is left from an increasingly mature market. Sometimes, product line diversification (as we’ll see today) involves little more than new stickers on the outside of the device and packaging and an altered product name embedded in the firmware. And all this tweaking ends up causing ongoing stress headaches for each company’s pitiable product line managers.

Prepping for a sooner-or-later home office LAN transition

Today’s analysis is a prescient example of what I’m conceptually talking about…two examples, although, at least for the foreseeable future, you’ll only be seeing the insides of one of them. At the tail end of one of my writeups from late last year, wherein I unsuccessfully (to date, at least) strove to figure out how to eliminate my LAN’s ongoing dependence on the lightning-sensitive spans of wired Ethernet running around the outside of my house, I mentioned that:

I also plan to eventually try out newer Wi-Fi technology, to further test the hypothesis that “wires beat wireless every time”. Nearing 3,000 words, I’ll save more details on that for another post to come.

That “newer Wi-Fi technology” isn’t the primary focus of this post, either, but for now I’ll at least provide an entrée. Right now, I’m running a multi-node LAN mesh based on Google Nest Wifi routers, which implement Wi-Fi 5 (802.11ac) technology, specifically AC2200 4×4:4 albeit absent MU-MIMO. One other important “twist” here is that the backhaul connection between the network nodes is wired Ethernet, not Wi-Fi. The setup’s been operational for three years now, thankfully running quite stably, actually.

But, as with its OnHub predecessors (one of which, from TP-Link, I tore down back in mid-2020) I’d run in a mesh configuration for the prior five years, Google will eventually end support for Google Nest Wifi in favor of the newer Nest Wifi Pro and its potential successors. Indicative of my forecast, Google already pulled both the Nest Wifi and prior-gen Google Wifi (one of which I dissected back in early 2022) from its online store effective the beginning of 2024 (I plan to dissect both a Nest Wifi router and access point post-support cessation).

At that point, I’ll need to upgrade my LAN once again. Fortunately, I’ve already got the successors in hand…a bunch of them, actually, counting spares. Last September (as well as several times prior, which I hadn’t noticed at the time), Amazon subsidiary Woot sold factory-refurbished Linksys LN1301 routers for $14.99 each (plus $5 off one via a coupon code):

Also known as the MX4300, it’s a beefy Wi-Fi 6 AX4200 unit with one WAN and three LAN wired Ethernet ports, along with a USB 3.0 port, based on a 1.4 GHz quad-core CPU (identity to be revealed shortly) and with 2 GBytes of RAM and 1 GByte of flash memory. It supports both MU-MIMO and OFDMA and claims to deliver up to 4.2 Gbps of aggregate wireless bandwidth.

Linksys also refers to it as a “Tri-band” router, although given that it’s not a Wi-Fi 6E device, this doesn’t mean that it supports the newest 6 GHz Wi-Fi band. Instead, it concurrently supports two different 5 GHz band ranges, one predominantly intended for optional node-to-node wireless mesh backhaul interconnect (with wired Ethernet being the other backhaul option).

Speaking of mesh, here’s the kicker…well, one of the two. Although not advertised as being mesh-compatible, it turns out that if, after you set up the primary router, you then direct-connect other secondary “child” units to it, an undocumented setup menu screen enables activating mesh connectivity between them. And (here’s the other kicker), the LN1301/MX4300 is also supported by both the DD-WRT and OpenWRT open-source communities, providing ongoing-maintained options to Linksys’ closed-source and (likely) end-of-life’d firmware.

To that “end-of-life” note, the fundamental reason why Linksys was selling the LN1301/MX4300 so inexpensively, it turns out, was as an inventory purge; the company then dropped the device (originally intended for use by small businesses, not consumers) from its product line. Upfront suspecting that this was the case, I went ahead and purchased the maximum quantity of ten units per Woot account, and then also asked my wife to pick up another one (using the same $5-off quantity-one coupon) from her Woot account. That’ll give me plenty of units for both my current four-node mesh topology and as-needed spares…and eventually I may decide to throw caution to the wind and redirect one of the spares to a (presumed destructive) teardown, too.

Disassembling a more modest sibling

For now, I’ll focus my teardown attention on an alternative, more humbly equipped Linksys router I subsequently acquired. A month after my LN1301/MX4300 binge, Woot sold a two-pack of factory-refurbished Velop (Linksys’ brand name for its mesh-compatible devices) VLP01 AC1200 routers for $19.99, minus another $5-off coupon, therefore $14.99 plus tax. VLP0102, by the way, is Linksys’ naming scheme for the two-pack…VLP0101 is the single-unit kit, while VLP0103 refers to the three-device mesh bundled variant. Stock images to start:

Walmart’s website indicates that the VLP01 was (it’s now out of stock and presumably EOL’d as well) a Walmart-exclusive product, which explains why you can’t find a dedicated product page for it on Linksys’ own website. Instead, there’s the WHW01 series, spec’d as AC1300 devices. Anyhoo, three main motivations prompted my acquisition:

  • They were inexpensive, and I already had plenty of LN1301/MX4300s, so I could rationalize devoting one of them to a teardown
  • Since I planned on doing wired backhaul anyway, I didn’t need super-robust wireless capabilities, particularly at the mesh node in my wife’s office, and
  • This (grammatically-tweaked-by-me) thread at the Woot Forum page caught my eye:
    • Can these be meshed with the previous $15 Linksys router deal (Linksys LN1301 WiFi 6 Router)?
    • Couldn’t find a direct answer on the Linksys site, but someone asked this same question on Reddit, and Linksys answered: “All of our intelligent mesh systems are compatible with each other. Just ensure that you designate the one with superior specifications as the parent or main node.”
    • Yes, you can. I did this. You will need [to set up] the LN1301 as the parent and then set these up as the [child] nodes.

This support page on the Linksys website documents and supports the Woot forum claim.

Packaging and contents preliminaries

Now for some images of our patient, beginning with an outer box shot of what I got…which, I’ve just noticed, claims that it’s an AC2400 configuration 🤷‍♂️ (I’m guessing this is because Linksys is mesh-adding the two devices’ theoretical peak bandwidths together? Lame, Linksys, lame…):

Speaking of which, here are those two devices:

Along with what’s underneath ‘em:

Wall wart first, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:

Now for the router itself:

“Only” one LAN port this time, along with the WAN port and power input connector:

Onward:

Status LED up top, along with an abundance of (passive; no fan in this design) ventilation holes:

And at the bottom, power and reset switches along with verbiage including the all-important FCC ID, Q87-03331, which interestingly (and unsurprisingly) documents this product as being the WHW01, not the Walmart-relabeled and (slightly) de-spec’d VLP01:

Diving inside

Ordinarily, I would have begun my search for a pathway to the interior by focusing on that bottom panel, but an iFixit teardown of the WHW01 that I’d stumbled across during my research (which, truth be told, I didn’t realize was of the same hardware until my teardown was complete and I’d begun this writeup, due to the product name variance and “AC2400” silliness) advised me to start at the top instead:

Top off and to the side, complete with flips and focus shifts:

Now standalone:

Next, let’s ditch those two screws:

And now we can (re)turn our attention to the bottom. As usual, the rubber feet are first to go, revealing screw heads underneath ‘em:

Buh-bye:

And we have liftoff:

Another set of flips and focus shifts:

Followed by more standalone shots:

And now, free of its upper and lower encumbrances, the inner assembly lifts right out:

Gotta love those focus shifts! The enclosure’s just so tall, don’cha know:

Clever cooling and wireless connectivity

The inner assembly exhibits some pretty nifty engineering. There’s a metal plate on top of one side of the PCB, a finned heat sink on the other side surrounded by a plastic shroud (to which the Bluetooth antenna is attached), and a plastic grill (that you sorta already saw from those previous inside-from-top still-assembled shots) on the top end with the 2.4 and 5 GHz antennae stuck to it and the LED mini-PCB inserted within it. Side shots first:

Top end:

And bottom end:

Let’s ditch the plastic piece around the Ethernet ports and power connector first. It unclipped and pulled right off with absolutely no fuss:

Removing three screws enables the extrication of the metal plate on one side of the PCB:

Don’t worry; I’ll be getting to those two Faraday cages shortly:

But first, I want to get the topside plastic grill and the other-side plastic shroud off:

The two Wi-Fi antennas’ connections are begging for unclipping:

There’s the LED mini-PCB, still in place:

And there we are:

Some standalone shots of the top-end grill piece, topside first:

Then the underside:

Now the four…err…side sides:

I’m guessing that “P2” references the 2.4 GHz antenna structure, while “P5” is for…err, again…5 GHz. Agree or disagree, readers?

Next up, the side shroud. Outer portion first, revealing (among other things) the aforementioned Bluetooth antenna:

And now the inside:

Next, the LED mini-PCB.

The largest chip on this side is labeled as follows:

9633
11 02
D819

My guess is that it’s an LED driver, like this PCA9633 from NXP Semiconductors. And on the other side is, of course, the multicolor LED itself:
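If it really is a PCA9633, driving that multicolor LED comes down to a few I2C register writes. Below is a minimal Python sketch (using the smbus2 library) based on the PCA9633 datasheet register map; note that the I2C address shown is a guess, since it depends on how the chip's address pins are strapped:

    # Hypothetical sketch: driving an NXP PCA9633 4-channel I2C LED driver,
    # assuming the "9633"-marked chip really is one. Register map is from the
    # PCA9633 datasheet; the I2C address (0x62) depends on pin strapping and
    # is an assumption.
    from smbus2 import SMBus

    PCA9633_ADDR = 0x62          # assumption: set by ADDR pin strapping
    MODE1, LEDOUT = 0x00, 0x08
    PWM0, PWM1, PWM2 = 0x02, 0x03, 0x04

    def set_rgb(bus: SMBus, r: int, g: int, b: int) -> None:
        """Set the three PWM channels (0-255) feeding the multicolor LED."""
        bus.write_byte_data(PCA9633_ADDR, MODE1, 0x00)    # clear SLEEP bit: enable oscillator
        bus.write_byte_data(PCA9633_ADDR, LEDOUT, 0x2A)   # LED0-2 in individual PWM mode
        for reg, val in ((PWM0, r), (PWM1, g), (PWM2, b)):
            bus.write_byte_data(PCA9633_ADDR, reg, val)

    with SMBus(1) as bus:
        set_rgb(bus, 0, 0, 255)   # solid blue: "node is working properly"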

From the online documentation for the WHW01 (which, I’m guessing, works the same as the VLP01):

  • Blue (blinking): Node is starting up
  • Blue (solid): Node is working properly
  • Purple (blinking): Node is paired with phone for setup
  • Purple (solid): Node is ready for setup
  • Red (blinking): Node lost connection to the primary node
    • If this is your primary node, ensure it’s securely connected to your modem
  • Red (solid): Node lost internet connection
  • Yellow (solid): Node is too far from another Velop node

And speaking of which, here’s a link to the PDF of the WHW01 user guide, which also references the VLP01 on the cover page!

Next up, let’s get that big finned heatsink off:

Fortunately, with all the retaining screws now removed, it lifted right off straightaway:

Oh, goodie, two more Faraday cages underneath!

Let’s deal with these first, before returning to the two on the other side that we saw before:

Remove the thermal tape from the inside of one, bend back the other…

And surprisingly, at least to me, the system SoC is not on this (formerly finned heatsink-augmented) side of the PCB. On the left is a Winbond W632GU6MB-12 2 Gbit DDR3 SDRAM. And on the right is a CSR (now Qualcomm) 8811 Bluetooth 4.2 controller, unsurprising given the antenna connector’s proximity to it.

There’s one more chip I want to point out on this side of the PCB, at the bottom:

It’s a Macronix MX25L1606E 16 Mbit serial NOR flash memory. (Briefly) hold that thought.

Multiple nonvolatile memories

Wrapping up, let’s revisit the PCB’s other side, this time post-removal of the black plastic pieces:

At the top is another Winbond device, this time a serial NAND flash memory chip, the 2 Gbit 25M02GV. It’s based on high-reliability SLC (single-level cell) technology, and given comparative capacity, I’m guessing it contains the bulk of system software, with the Macronix chip on the other side relegated to boot and recovery code (or something like that…mebbe it holds updatable configuration data instead, although EEPROM would seem to be a superior choice?).

Cage tops off…

Along the left:

are (top-to-bottom) two Skyworks SKY85330-11 2.4GHz 256QAM RF front-end modules (FEMs), followed by two chips labeled:

SKY
748
2K01D

WikiDevi (or if you prefer, DeviWiki) says that they’re Skyworks SKY7482I001 5 GHz FEMs, although I can’t find such a chip on Skyworks’ website, so once again…🤷‍♂️ I’m pretty sure they’re right about the 5 GHz FEM part, but I’m questioning the specific part number…then again, I can’t find an online reference to the SKY7482K01D, either. My working theory is that we’re actually looking at the SKY85748-11, and Skyworks just didn’t have room to print the “85” portion of the part number on the package.

To their right, and formerly under two pads of thermal tape, one connecting the cage to the metal plate and the other between the cage and IC, is the dominant heat generator of the design, Qualcomm’s IPQ4018 dual-band 802.11ac controller, which also handles wired Ethernet MAC duties. To its right is the companion Qualcomm Atheros QCA8072 dual-port Ethernet PHY. So basically what we’ve got here is a Linksys-branded and software-customized Qualcomm reference design. And above the QCA8072 (and below the two wired Ethernet ports) is the Link-PP HN36201CG dual-port transformer module. There’s nothing notable under the sheet metal square in between the IPQ4018 and QCA8072, by the way, in case you were wondering.

More than 2,500 words in, that’s “all” I’ve got for you today. 😂 There’s another surprise waiting in the wings, but I’ll save that for another teardown another (near-future, I promise) day. Until then, please share your thoughts with me (and your fellow readers) in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


The post A fresh gander at a mesh router appeared first on EDN.

Infineon adds SPICE-based model generation to IPOSIM platform for more accurate system-level simulation

ELE Times - Mon, 10/27/2025 - 14:17

The Infineon Power Simulation Platform (IPOSIM) from Infineon Technologies AG is widely used to calculate losses and thermal behavior of power modules, discrete devices, and disc devices. The platform now integrates a SPICE-based model generation tool that incorporates external circuitry and gate driver selection into system-level simulations. The tool delivers more accurate results for static, dynamic, and thermal performance, taking into consideration non-linear semiconductor physics of the devices. This enables advanced device comparison under a wide range of operating conditions and faster design decisions. Developers can also customize their application environment to reflect real-world operating conditions directly within the workflow. As a result, they can optimize the application performance, shorten time-to-market, and reduce costly design iterations. IPOSIM integrates SPICE to support a wide range of applications where switching power and thermal performance are critical, including electric vehicle (EV) charging, solar, motor drives, energy storage systems (ESS), and industrial power supplies.

In the global transition to a decarbonized future, power electronics are essential for enabling cleaner energy systems, sustainable transportation, and more efficient industrial processes. This transformation increases the demand for advanced simulation and validation tools that allow designers to innovate early in the development cycle. At the same time, they must deliver highly efficient, high-power-density designs such as EV chargers, solar inverters, motor drives, and industrial power supplies, while minimizing design iterations and reducing development costs. Switching losses and thermal performance are decisive factors in this process, yet traditional hardware testing remains time-consuming, costly, and limited in capturing real-world conditions.

With the integration of SPICE, IPOSIM brings the simulation of real switching behavior fully online and helps users optimize their designs at an early stage of the development process. By extending system simulation to real-world conditions, the models make it possible to factor in critical parameters such as stray inductance, gate voltage and dead time. The device characterization reflects the switching behavior under more realistic operating scenarios, taking the selected gate driver into account. The capability is fully integrated into IPOSIM’s multi-device comparison workflow, enabling users to select devices marked with the SPICE icon, configure application environments, and follow a guided simulation process. With its system-level accuracy and intuitive workflow, IPOSIM’s new SPICE-based models enable faster device selection and more reliable design decisions.

The post Infineon adds SPICE-based model generation to IPOSIM platform for more accurate system-level simulation appeared first on ELE Times.

GigaDevice and Navitas unveil Digital Power Joint Lab to accelerate high-efficiency power management deployment

Semiconductor today - Mon, 10/27/2025 - 14:02
GigaDevice Semiconductor Inc — a fabless supplier of Flash memory, 32-bit microcontrollers (MCUs), sensors and analog products that relocated its headquarters this year from Beijing to Singapore — has officially launched the Digital Power Joint Lab in collaboration with gallium nitride (GaN) power IC and silicon carbide (SiC) technology firm Navitas Semiconductor Corp of Torrance, CA, USA. By combining GigaDevice’s GD32 MCU expertise with Navitas’ advantages in high-frequency, high-speed and highly integrated GaN technologies and its GeneSiC technology leveraging ‘trench-assisted planar’ technology, the collaboration aims to deliver intelligent and high-efficiency digital power solutions for emerging markets such as AI data centers, photovoltaic inverters, energy storage systems, charging infrastructure, and electric vehicles...

Top 10 Agentic AI Threats and How to Defend Against Them

ELE Times - Mon, 10/27/2025 - 11:48

Author: Saugat Sindhu, Global Head – Advisory Services, Cybersecurity & Risk Services, Wipro Limited

October is Cybersecurity Awareness Month, and this year, one emerging frontier demands urgent attention: Agentic AI.

India’s digital economy is booming — from UPI payments to Aadhaar-enabled services, from smart manufacturing to AI-powered governance. But as artificial intelligence evolves from passive large language models (LLMs) into autonomous, decision-making agents, the cyber threat landscape is shifting dramatically.

These agentic AI systems can plan, reason, and act independently — interacting with other agents, adapting to changing environments, and making decisions without direct human intervention. While this autonomy can supercharge productivity, it also opens the door to new, high-impact risks that traditional security frameworks aren’t built to handle.

Here are the 10 most critical cyber risks of agentic AI — and the governance strategies to keep them in check.

1. Memory poisoning

Threat: Malicious or false data is injected into an AI’s short- or long-term memory, corrupting its context and altering decisions.

Example: An AI agent used by a bank falsely remembers that a loan is approved due to a tampered record, resulting in unauthorized fund disbursement.

Defense: Validate memory content regularly; isolate memory sessions for sensitive tasks; require strong authentication for memory access; deploy anomaly detection and memory sanitization routines.
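To make that defense concrete, here is a minimal Python sketch of a memory-write guard: entries are committed only from an allowlisted source, are screened against simple injection patterns, and get an integrity hash for later audit. The source names and patterns are illustrative, not any specific product’s API.

    # Illustrative memory-write guard for an AI agent: allowlist the source,
    # screen for obvious injection payloads, and hash each record for audit.
    import hashlib
    import re
    from dataclasses import dataclass

    @dataclass
    class MemoryRecord:
        source: str    # who produced this fact
        content: str
        digest: str    # integrity hash computed at write time

    TRUSTED_SOURCES = {"core_banking_api", "human_reviewer"}     # example allowlist
    INJECTION_PATTERNS = [re.compile(p, re.I) for p in
                          (r"ignore (all|previous) instructions",
                           r"loan .* (approved|disbursed)")]

    def sanitize_and_store(store: list, source: str, content: str) -> bool:
        if source not in TRUSTED_SOURCES:
            return False           # reject writes from unauthenticated sources
        if any(p.search(content) for p in INJECTION_PATTERNS):
            return False           # drop likely prompt-injection payloads
        digest = hashlib.sha256(f"{source}:{content}".encode()).hexdigest()
        store.append(MemoryRecord(source, content, digest))
        return True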

2. Tool misuse

Threat: Attackers trick AI agents into abusing integrated tools (APIs, payment gateways, document processors) via deceptive prompts, effectively hijacking the agent.

Example: An AI-powered HR chatbot is manipulated to send confidential salary data to an external email using a forged request.

Defense: Enforce strict tool access verification; monitor tool usage patterns in real time; set operational boundaries for high-risk tools; validate all agent instructions before execution.
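A hedged sketch of what “strict tool access verification” can look like in practice: a default-deny policy table consulted before any agent-issued tool call executes. The tool names, rules, and human-approval hook here are hypothetical.

    # Default-deny tool authorization: a policy table is consulted before any
    # agent-issued tool call runs. Names and rules are hypothetical.
    TOOL_POLICY = {
        "send_email":   {"allowed_domains": {"company.example"}, "needs_approval": True},
        "read_payroll": {"allowed_roles": {"hr_agent"}, "needs_approval": True},
    }

    def request_human_approval(tool: str, args: dict) -> bool:
        """Stub escalation hook; a real system would page a human reviewer."""
        return False               # fail closed until a human signs off

    def authorize_tool_call(agent_role: str, tool: str, args: dict) -> bool:
        policy = TOOL_POLICY.get(tool)
        if policy is None:
            return False           # unknown tools never run
        if "allowed_roles" in policy and agent_role not in policy["allowed_roles"]:
            return False
        if "allowed_domains" in policy:
            domain = args.get("to", "").rsplit("@", 1)[-1]
            if domain not in policy["allowed_domains"]:
                return False       # blocks the forged external-email case
        if policy.get("needs_approval"):
            return request_human_approval(tool, args)
        return True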

3. Privilege compromise

Threat: Exploiting permission misconfigurations or dynamic role inheritance to perform unauthorized actions.

Example: An employee escalates privileges with an AI agent in a government portal to access Aadhaar-linked information without proper authorization.

Defense: Apply granular permission controls; validate access dynamically; monitor role changes continuously; audit privilege operations thoroughly.

4. Resource overload

Threat: Overwhelming an AI’s compute, memory, or service capacity to degrade performance or cause failures — especially dangerous in mission-critical systems like healthcare or transport.

Example: During festival season, an e-commerce AI agent gets flooded with thousands of simultaneous payment requests, causing transaction failures.

Defense: Implement resource management controls; use adaptive scaling and quotas; monitor system load in real time; apply AI rate-limiting policies.
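As a concrete illustration of AI rate-limiting, here is a classic token-bucket limiter in Python; the capacity and refill rate are illustrative values, not a recommendation.

    # Token-bucket rate limiter: each request spends a token; tokens refill at
    # a fixed rate, capping sustained load while absorbing short bursts.
    import time

    class TokenBucket:
        def __init__(self, rate_per_sec: float, capacity: int):
            self.rate, self.capacity = rate_per_sec, capacity
            self.tokens, self.last = float(capacity), time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False           # shed load instead of saturating the agent

    payments = TokenBucket(rate_per_sec=50, capacity=100)
    if not payments.allow():
        pass                       # queue or reject, and alert if rejects spike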

5. Cascading hallucination attacks

Threat: AI-generated false but plausible information spreads through systems, disrupting decisions — from financial risk models to legal document generation.

Example: An AI agent in a stock trading platform generates a misleading market report, which is then used by other financial systems, amplifying the error.

Defense: Validate outputs with multiple trusted sources; apply behavioural constraints; use feedback loops for corrections; require secondary validation before critical decisions.
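A minimal sketch of the “validate outputs with multiple trusted sources” idea: an AI-generated claim propagates downstream only if a quorum of independent sources confirms it. The checker functions below are stand-ins for real data feeds.

    # Quorum validation: a generated claim only propagates if enough
    # independent sources confirm it. The checkers below are stand-ins.
    from collections import Counter

    def validate_claim(claim: str, checkers: list, quorum: int = 2) -> bool:
        votes = Counter(check(claim) for check in checkers)   # True/False votes
        return votes[True] >= quorum

    checkers = [lambda c: True, lambda c: True, lambda c: False]  # stand-ins
    if not validate_claim("market report X", checkers):
        pass   # hold for human review instead of feeding downstream systems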

6. Intent breaking and goal manipulation

Threat: Attackers alter an AI’s objectives or reasoning to redirect its actions.

Example: A procurement AI in a company is manipulated to always select a particular vendor, bypassing competitive bidding.

Defense: Validate planning processes; set boundaries for reflection and reasoning; protect goal alignment dynamically; audit AI behaviour for deviations.

7. Overwhelming human overseers

Threat: Flooding human reviewers with excessive AI output to exploit cognitive overload — a serious challenge in high-volume sectors like banking, insurance, and e-governance.

Example: An insurance company’s AI agent sends hundreds of claim alerts to staff, making it hard to spot genuine fraud cases.

Defense: Build advanced human-AI interaction frameworks; adjust oversight levels based on risk and confidence; use adaptive trust mechanisms.

8. Agent communication poisoning

Threat: Tampering with communication between AI agents to spread false data or disrupt workflows — especially risky in multi-agent systems used in logistics or defense.

Example: In a logistics company, two AI agents coordinating deliveries are fed false location data, sending shipments to the wrong city.

Defense: Use cryptographic message authentication; enforce communication validation policies; monitor inter-agent interactions; require multi-agent consensus for critical decisions.
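Here is a minimal Python sketch of cryptographic message authentication between agents using HMAC-SHA256. Key provisioning is simplified to a constant; a real deployment would distribute and rotate keys via a KMS.

    # HMAC-SHA256 message authentication between agents. The shared key is a
    # placeholder; a real deployment provisions and rotates keys via a KMS.
    import hashlib
    import hmac
    import json

    SHARED_KEY = b"example-key-provisioned-out-of-band"

    def sign(payload: dict) -> dict:
        body = json.dumps(payload, sort_keys=True).encode()
        return {"body": payload,
                "mac": hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()}

    def verify(message: dict) -> bool:
        body = json.dumps(message["body"], sort_keys=True).encode()
        expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, message["mac"])  # constant time

    msg = sign({"shipment": "A-17", "destination": "depot 9"})
    assert verify(msg)             # tampered location data would fail here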

9. Rogue agents in multi-agent systems

Threat: Malicious or compromised AI agents operate outside monitoring boundaries, executing unauthorized actions or stealing data.

Example: In a smart factory, a compromised AI agent starts shutting down machines unexpectedly, disrupting production.

Defense: Restrict autonomy with policy constraints; continuously monitor agent behaviour; host agents in controlled environments; conduct regular AI red teaming exercises.

10. Privacy breaches

Threat: Excessive access to sensitive user data (emails, Aadhaar-linked services, financial accounts) increases exposure risk if compromised.

Example: An AI agent in a fintech app accesses users’ PAN, Aadhaar, and bank details, risking exposure if compromised.

Defense: Define clear data usage policies; implement robust consent mechanisms; maintain transparency in AI decision-making; allow user intervention to correct errors.

This list is not exhaustive — but it’s a strong starting point for securing the next generation of AI. For India, where digital public infrastructure and AI-driven innovation are becoming central to economic growth, agentic AI is both a massive opportunity and a potential liability.

Security, privacy, and ethical oversight must evolve as fast as the AI itself. The future of AI in India will be defined by the intelligence of our systems — and by the strength and responsibility with which we secure and deploy them.

The post Top 10 Agentic AI Threats and How to Defend Against Them appeared first on ELE Times.

AI is defining reality as we progress further

ELE Times - Mon, 10/27/2025 - 08:38

AI has become well integrated into almost every sector of the economy. It has not only driven efficiency but also stimulated innovation. As AI assimilates into the electronics industry, new trends are seeding innovation for the coming year. The electronics industry will experience a new wave of faster decision-making, improved efficiency, and sustainability as AI develops in 2026.

As research and development in the field of artificial intelligence grows, the trends for next year can be understood as follows:

  • Agentic AI: Artificial intelligence is already used extensively in R&D, but it can also solve a key challenge in electronics manufacturing. With the development of agentic AI, supply chain disruptions can be studied and planned for in advance: agentic systems can identify alternate suppliers and dynamically reconfigure logistics in response to changing conditions. Reduced human intervention cuts production delays and frees up focus for R&D. Agentic AI can also act as an always-on sales assistant, tracking customer requests, generating quotes, and even placing orders. This brings a sustainable pace to the industry’s business, reduces the role of middlemen, and delivers a competitive edge. From predictive maintenance to autonomous marketing, this growing trend can unleash the full potential of the ESDM industry. Businesses that integrate agentic AI early will have an edge in shaping the future of the B2B electronics industry.

Some existing providers of this technology are:

  1. IBM: IBM’s prebuilt watsonx AI agents are pre-designed systems that offer standard API and SDK support for open-source frameworks, allowing developers to use their preferred tools.
  2. Wizr AI: Wizr allows companies to build and deploy LLM-powered AI agents trained on company-specific data such as CRM logs, internal documents, and past customer interactions, providing a customized experience. It also provides enterprise-grade security and certifications such as SOC 2 Type 2 and ISO 27001 for highly regulated industries.
  3. TrueFoundry: This provider typically caters to data scientists, ML engineers, and IT professionals, with over 1,000 LLMs integrated along with connectors for other enterprise tools like Slack, GitHub, and Datadog.
  • Generative AI: Gen AI is expected to become the new normal in the coming years, not just for content creators but for the electronics manufacturing industry too. The industry’s lack of advanced design capabilities can be comprehensively addressed as generative AI advances. From automating the creation of innovative designs, optimizing complex systems, and speeding up prototyping and iteration, to reducing development costs and democratizing design tools, Gen AI will be the new mastermind behind innovation in the manufacturing industry. This technology will allow engineers to explore new design spaces with quick validation and create more efficient and novel electronic components and systems faster than any traditional methodology. It will eventually also ease the skill shortage in miniaturized production, increasing the efficiency of the industry.

With prominent names like Synopsys.ai and Cadence Design Systems already providing comprehensive portfolios across the chip design workflow, other emerging providers are:

  1. Flux AI: Its AI-powered ECAD (electronic computer-aided design) tool supports designing and building PCBs, saving time while delivering good results.
  2. Circuit Mind: This software takes high-level requirements and automatically produces optimized schematics and BOMs, creating reliable, error-free circuits.
  3. DeepPCB: This cloud-based tool uses AI to automate PCB routing.
  4. Cirkit Designer: Cirkit is an online platform for circuit design, simulation, and collaboration.
  5. Zuken: Zuken is a major provider of Electronic Design Automation (EDA) tools, such as CR-8000 and E3.series, for precise results.
  • Physical AI: The shortage of skilled labor in the miniature electronics industry is set to get a new solution through AI-powered robotics and automated inspection handling repetitive and complex tasks, along with augmented reality (AR) for training and real-time guidance of personnel. This will let the industry use lower-skilled personnel for higher-level functions, improving efficiency, quality, and speed. Physical AI can retain the knowledge of a retiring skilled professional and keep working in sync with production requirements, then use that same knowledge base to train new recruits at a reduced human-resource development cost. Additionally, skilled personnel are freed to focus on strategic, value-added activities that require creativity and decision-making.

Some of the key players providing these technologies are:

  1. Grey Matter Robotics: They specialize in developing AI-powered robotics systems specifically for automating manufacturing and industrial operations.
  2. Veco Robotics: Veco integrates 3D sensing and computer vision with AI to make robots work faster alongside humans. They are particularly effective at handling delicate electronics assembly without the need for traditional caging.
  • Sovereign AI: As the race to build newer AI systems gathers pace, it draws attention to data privacy in the AI landscape. Tomorrow is not about just any AI, but about safe, indigenous AI systems that keep sensitive data within national and regional boundaries. This has given rise to a budding trend of sovereign AI. It allows businesses to build their own AI models that comply with local data protection laws and industry-specific regulations. Such customized AI models can be tailored to the specific needs of the business and reduce foreign dependence. A self-controlled AI system reduces the risk of cyber fraud and helps protect sensitive intellectual property (IP). Sovereign AI can also be used to study the impact of a predicted geopolitical event on supply chains, especially for import-dependent components.

Some of the service providers of Sovereign AI in India include:

  1. EDB Postgres: They offer a platform for secure, on-premises or private-cloud Gen AI interfacing. It also ensures that data remains within the company’s control, essential for designers and manufacturers.
  2. Sarvam AI: It is considered India’s leading sovereign AI provider, selected by the Indian government to develop the country’s first homegrown large language model (LLM).
  • Digital Twin + AI: A dynamic collaboration between digital twins and AI will unleash new energy in the electronics manufacturing industry. As the need for miniaturization grows, modelling a digital twin and using AI to subject it to real-time usage tests can improve the quality and efficiency of microscopic components like PCBs, silicon chips, and ICs. The twin can be fed sensor data from real user experience, helping engineers design a more efficient and longer-lasting product. It minimizes physical damage, making R&D and testing more cost-effective.

Of the several digital twin providers, some of the best suited for the electronics industry are:

  1. Ansys: They specialize in simulation-based digital twins that use physics-based modelling along with AI integration to create highly accurate virtual prototypes of systems.
  2. PTC: Their ‘ThingWorx’ platform integrates Industrial IoT, AR, and digital twin technologies, allowing manufacturers to monitor, analyze, and optimize operations in real time, benefiting product quality and predictive maintenance.

While the future of artificial intelligence in the electronics industry is bright, integrating it into existing systems can prove challenging. The initial costs may be daunting for a business; however, the productivity achieved in the long run will be worth it.

The post AI is defining reality as we progress further appeared first on ELE Times.

From Monoliths to Modules: A story of heterogeneous integration, chiplets, and the industry reshaping itself

ELE Times - Mon, 10/27/2025 - 08:15

For nearly four decades, the semiconductor narrative has revolved around Moore’s Law: shrinking transistors and packing more logic onto a single die. Now, however, the limitations of that approach are evident in reticle sizes, yields, rising costs, and the reality that not every function benefits from bleeding-edge lithography. The industry’s answer is to stop treating the system as “one big die” and instead treat it as a system of optimized pieces: chiplets and heterogeneous integration. What started as an engineering workaround is now a full-blown industrial shift. This article is a curated, human narrative of how the industry got here, what the leading players are doing, which key technologies are emerging, and how things are likely to play out in the coming years.

The pivot: when economics beat scaling

The earliest chiplet experiments were pragmatic. Designers realized that a single large die amplifies risk: one defect ruins the whole chip, and reticle-scale chips are expensive to manufacture. Chiplet thinking flips that risk model: many smaller dies (chiplets) are cheaper to yield and can be produced on the process node best suited to their function. AMD’s decision to “bet the company’s roadmap on chiplets” is perhaps the clearest strategic statement of this pivot; CEO Dr. Lisa Su has repeatedly framed chiplets as a transformational, multi-year bet that paid off by enabling modular, high-performance designs.

That economic logic attracted big players. When companies like AMD, Intel, NVIDIA, TSMC and major cloud providers all start designing around modular architectures, the idea moves from clever trick to industry standard. But to make chiplets practical at scale required new packaging, new interconnect standards, and new supply-chain thinking.

The technical enabling stack: what changed?

Three packaging techniques and a set of interconnect innovations allowed chiplets to become real:

  1. 2.5D (silicon interposer / CoWoS family): A silicon interposer routes huge numbers of fine wires between side-by-side dies and HBM stacks. TSMC’s CoWoS family (Chip on Wafer on Substrate) is a productionized example used in AI accelerators and high-bandwidth systems; it provides the highest on-package bandwidth today.
  2. 3D stacking (Foveros, TSVs, hybrid bonding): Stacking dies face-to-face shortens interconnects, saves board area, and opens power/latency advantages. Intel’s Foveros showed how a system could be built vertically from optimized tiles. The real leap is hybrid (Cu–Cu) bonding, which enables ultra-dense, low-parasitic vertical interconnects and is rapidly becoming the preferred route for the highest-performance 3D stacks.
  3. EMIB (embedded bridge): A cost-effective middle ground. Small high-density bridges route signals between adjacent dies on a package without needing a full interposer, balancing cost and performance.

On top of physical packaging, industry collaboration produced UCIe (Universal Chiplet Interconnect Express), a standard that defines die-to-die electrical and protocol layers so designers can mix chiplets from different vendors. UCIe’s goal is simple but radical: make chiplets plug-and-play the way IP blocks (or board components) are today, lowering integration friction and encouraging a multi-vendor marketplace. The consortium’s growth and the tone of its public messaging reflect broad industry support.

What the industry leaders are saying (high-level truth from the field)

Words matter because they reveal strategy. Lisa Su framed AMD’s move as an existential bet that enabled modular scaling and faster product cycles: not a tweak, but a new company playbook. Jensen Huang (NVIDIA) has discussed shifting packaging needs as designs evolve, stressing that advanced packaging remains a bottleneck even as capacity improves, a reminder that packaging is now a strategic choke point full of commercial leverage. And foundries and integrators (TSMC, Intel Foundry, Samsung) openly invest in CoWoS, Foveros, and hybrid-bonding capacity because advanced packaging is the next frontier after lithography.

The practical outcomes we’re seeing now

  • Modular server CPUs and accelerators: AMD’s chiplet EPYC architecture split cores and I/O dies for yield and flexibility; major GPU vendors assemble compute tiles and HBM via CoWoS to reach enormous memory bandwidth.
  • New supply-chain pressure: Advanced packaging capacity became a bottleneck in some cycles, forcing companies to book OSAT / CoWoS capacity years ahead. That’s why foundries and governments are investing in packaging fabs.
  • Standardization momentum: UCIe and related initiatives reduce engineering friction and unlock third-party chiplet IP as a realistic business model.

The tensions and technical gaps

Heterogeneous integration isn’t a panacea. It introduces new engineering complexity: thermal hotspots in 3D stacks, multi-die power delivery, system-level verification across vendor boundaries, and supply-chain trust issues (who vouches for a third-party chiplet?). EDA flows are catching up but still need better automation for partitioning, packaging-aware floor planning, and co-validation. Packaging capacity, while expanding, remains a strategic scarce resource that shapes product roadmaps.

New technologies to watch

  • Hybrid bonding at scale:  enabling face-to-face stacks with very high I/O density; companies (TSMC, Samsung, Intel) are racing on patents and process maturity.
  • UCIe ecosystem growth:  as more vendors ship UCIe-compatible die interfaces, an open marketplace for physical chiplet IP becomes more viable.
  • CoWoS-L / CoWoS-S differentiation and packaging variants: vendors are tailoring interposer variants to balance area, cost and performance for AI workloads.

How this story likely ends (judgement, not prophecy)

The industry is not replacing monolithic chips entirely; monoliths will remain where tight coupling, the lowest latency, or the cheapest bill of materials matters (e.g., mass-market SoCs). But for high-value, high-performance markets (AI, HPC, networking, high-end CPUs), heterogeneous integration becomes standard. Expect three converging trends:

  1. An ecosystem of chiplet vendors: IP providers sell actual physical chiplets (compute tiles, accelerators, analog front ends) that can be combined like components.
  2. Packaging as strategic infrastructure: fabs and OSATs that excel at hybrid bonding, interposers, and 3D stacking will hold new leverage; national strategies will include packaging capacity.
  3. Toolchains and standards that normalize integration: with UCIe-style standards and improved EDA flows, system architects will shift focus from transistor-level tricks to system partitioning and orchestration.

If executed well, the result is faster innovation, cheaper scaling for complex systems, and diversified supply chains. If poorly coordinated, the industry risks fragmentation, security and provenance problems, and bottlenecks centered on a few packaging suppliers.

Final thought

We have moved from a single-die worldview to a modular systems worldview.

That change is technical (new bonds, interposers, interfaces), economic (yield and cost models), and strategic (packaging capacity equals competitive advantage). The transition is messy and political in places, but it’s already rewriting roadmaps: chiplets and heterogeneous integration are not an academic curiosity; they are the architecture by which the next decade of compute will be built.

The post From Monoliths to Modules: A story of heterogeneous integration, chiplets, and the industry reshaping itself appeared first on ELE Times.

Chip Code, written by AI: Driving Lead Time Optimization and Supply Chain Resilience in Semiconductor Manufacturing

ELE Times - Mon, 10/27/2025 - 08:09

The semiconductor world is grappling with complex challenges: designing a modern chip involves billions of transistors, massive verification workloads, and global supply chains prone to disruption. One of the critical factors hindering innovation and market responsiveness is the extensive lead time, often exceeding 20 weeks. While procurement and supply chain managers are constantly coordinating wafer fabs, managing inventory, and dealing with rapidly changing markets, the industry’s core bottleneck is the design phase’s sheer complexity and iterative nature.

AI technologies, including large language models (LLMs) and newer multi-agent generative systems, are fundamentally transforming Electronic Design Automation (EDA). These systems automate Register Transfer Level (RTL) generation, detect verification errors earlier, and help predict wafer fab schedules. Integrating AI with procurement teams and supply chain planners helps in dealing with industry volatility and resource-allocation uncertainty. It is quietly reshaping the entire ecosystem, moving design from an art form reliant on small teams of gurus to a computationally optimized process.

AI’s Role in Chip Design Automation

RTL design, which defines a chip’s logic, was traditionally hand-crafted, taking engineers months to write and debug. Now, AI trained on large HDL datasets suggests RTL fragments, accelerates design exploration, and flags inconsistencies. Reinforcement learning ensures the code becomes progressively more accurate, often identifying optimal solutions humans miss.

This capability moves beyond mere efficiency; it reduces manufacturing risk. Fewer RTL mistakes mean fewer costly fab re-spins, making wafer scheduling predictable. Predictive analytics spot fab queue bottlenecks, allowing teams to optimize lithography usage before issues escalate. This foresight maintains consistent throughput.

Generative AI advances this using multiple specialized agents: one for synthesis tuning, one for logic checking, and a third for modelling power or timing. This distributed intelligence improves efficiency and provides procurement teams early risk warnings. By simulating designs, they can anticipate mask shortages, material spikes, or foundry capacity issues, effectively optimizing the physical supply chain.

“The ability to automate RTL generation and verification simultaneously is a game-changer. It shifts our engineering focus from tedious bug-hunting to true architectural innovation, accelerating our time-to-market by months.” — Dr. Lisa Su, CEO, AMD

Multi-Agent Generative AI for Verification: Operational Impact

Verification often consumes up to 70 percent of chip design time, scaling non-linearly with transistor count, making traditional methods unsustainable. The Multi-Agent Verification Framework (MAVF) uses multiple AI agents to collaborate: reading specifications, writing testbenches, and continuously refining the design. This division of labour operates at machine speed and scale.

Results are notable: human effort drops by 50 to 80 percent, with accuracy exceeding manual methods. While currently module-level, this hints at faster full verification loops, compressing the ‘time-to-known-good-design’ window. This means fewer wasted weeks on debugging and substantial savings on re-spins, protecting billions in costs.

“We are seeing a 15% reduction in verification cycles across key IP blocks within a year. The key is the verifiable audit trail these new systems create, which builds trust for sign-off.” — Anirudh Devgan, CEO, Cadence Design Systems

Predictable verification helps procurement reduce lead-time buffers. Instead of hoarding stock or overbooking fab slots, teams plan using reliable design milestones. The ROI is twofold: engineers save effort, and procurement negotiates smarter contracts, boosting resilience and freeing up working capital.

Industry Insights and Strategic Implications

Research at Intel’s AI Lab shows that machine learning is powerful, but it works best when integrated with classical optimization techniques. For example, in floor planning or system-level scheduling, AI alone often struggles with hard constraints. However, hybrid approaches offer substantial improvements, combining the exploratory power of AI with the deterministic precision of conventional algorithms. The release of datasets like FloorSet demonstrates a strong commitment to benchmarking realistic chip design problems under real-world industrial constraints.

From a strategic perspective, AI-driven design efficiency provides procurement and supply chain teams with several key advantages:

  • Agility: Design-to-tapeout cycles become faster, enabling companies to respond quickly when demand surges or falls, capturing market share faster than competitors.
  • Resilience: More predictable verification milestones stabilize wafer fab scheduling and reduce exposure to market volatility.
  • Negotiation Power: Procurement teams can better align contracts with foundries and suppliers to actual needs, helping reduce buffer costs. This shift moves contracts from being based on generalized risk to specific, design-validated schedules.

“For foundry operations, predictability is everything. AI-driven design provides a stable pipeline of GDSII files, allowing us to lock in capacity planning with much greater confidence, directly improving overall facility utilization.” — C. C. Wei, CEO, TSMC

This alignment reflects a careful integration of technical advances with operational priorities, ensuring that AI improvements translate into tangible, real-world impact across the entire value chain, from concept to silicon.

Future Outlook: AI, Market Dynamics, and Strategic Planning

The next big step is full-chip synthesis and automated debugging. LLM-powered assistants generate block-level RTL, while reinforcement learning agents iterate to resolve timing or power conflicts. This could significantly speed up tapeout cycles and give supply chain planners a clearer picture of what is coming, though challenges remain regarding the size and systemic integrity of full-chip designs.

Real challenges persist. AI models require large data, raising concerns about proprietary Intellectual Property (IP) and training biases. Even if output passes syntax checks, deeper semantic or safety issues may arise. Integrating these tools into existing EDA workflows requires careful validation, certification, and substantial computing resources. The explainability of AI-generated code is paramount for regulatory approval and risk mitigation.

Ways to manage risks include hybrid human-in-the-loop approaches, deploying modules first, and maintaining strict audit trails for correctness. For supply chain leaders, AI is a tool to reduce volatility buffers, not a magic solution eliminating all risks. Geopolitical and natural disaster risks remain, but AI minimizes internal, process-driven risks.

Conclusion

AI is gradually driving operational change in semiconductor design. Full-chip automation remains a long-term goal, but today’s advances in RTL generation, module-level verification, and predictive analytics already shorten design cycles and make wafer fab scheduling more predictable. For procurement leaders, supply chain managers, and strategists, this translates to greater agility, reduced risk, and stronger resilience in a rapidly changing market.

The takeaway is simple. Companies that thoughtfully integrate AI into design and supply chain operations will gain a clear competitive advantage. Tomorrow’s chips won’t just be faster or more efficient. Their code will be shaped by AI intelligence, providing engineers with insights previously almost impossible to achieve.

The post Chip Code, written by AI: Driving Lead Time Optimization and Supply Chain Resilience in Semiconductor Manufacturing appeared first on ELE Times.

I may have undersized my transistor…

Reddit:Electronics - Mon, 10/27/2025 - 07:01

So I’m making an Arduino-controlled PWM fan controller with a temp sensor. I thought my fans drew 0.6 W combined, but obviously not (see attached image).

submitted by /u/SwanRepresentative39

Basic Principles and Implementation of the Quadrature FM Demodulator

AAC - Sun, 10/26/2025 - 19:00
Learn how an analog multiplier or AND gate can function as a quadrature detector for FM demodulation.
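As a rough numerical companion to the article’s premise, the sketch below simulates the quadrature-detector idea in Python/NumPy: multiply the FM signal by a copy delayed roughly 90 degrees at the carrier, then low-pass filter the product, which then tracks the modulating tone (up to sign and scale). All values are illustrative.

    # Quadrature FM detection, numerically: multiply the FM signal by a copy
    # delayed ~90 degrees at the carrier, then low-pass filter the product.
    import numpy as np

    fs, fc, fdev = 1_000_000, 125_000, 5_000   # sample rate, carrier, deviation (Hz)
    t = np.arange(0, 0.01, 1 / fs)
    msg = np.sin(2 * np.pi * 1_000 * t)        # 1 kHz modulating tone
    phase = 2 * np.pi * fc * t + 2 * np.pi * fdev * np.cumsum(msg) / fs
    fm = np.cos(phase)

    delay = int(fs / (4 * fc))                 # 2 samples = 90 deg at 125 kHz
    product = fm[delay:] * fm[:-delay]         # the "analog multiplier" stage

    kernel = np.ones(200) / 200                # crude low-pass: removes the 2*fc term
    audio = np.convolve(product, kernel, mode="same")
    audio -= audio.mean()                      # remaining waveform tracks msg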

💌 Students invited to the online training “Corruption and Countering Corruption”

News - Sat, 10/25/2025 - 22:10

Pro NGO (Germany), with the support of the German Federal Foreign Office, invites students of vocational education institutions in the Vinnytsia, Dnipropetrovsk, Zhytomyr, Cherkasy, and other oblasts of Ukraine to the online training “Corruption and Countering Corruption”.

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 10/25/2025 - 18:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator

Apple Looks to the Future of AI With New M5 Processor

AAC - Fri, 10/24/2025 - 23:00
Apple’s M5 chip brings per-core neural acceleration, a 30% boost in memory bandwidth, and GPU-driven AI to the MacBook Pro and iPad Pro.

ST launches four 5-MP image sensors

EDN Network - Fri, 10/24/2025 - 21:09

STMicroelectronics introduces a new family of 5-megapixel (MP) CMOS image sensors: the VD1943, VB1943, VD5943, and VB5943. These advanced BrightSense sensors accelerate the development of vision applications across a variety of industries, including industrial automation for machine and robotic vision, advanced security including biometric identification and traffic management, and smart retail applications such as inventory management and automated checkout.

Traffic management and object detection. (Source: STMicroelectronics)

Suited for high-speed automated manufacturing processes and object tracking, the new sensors provide hybrid global and rolling shutter modes, enabling developers to optimize image capture for their specific applications. This delivers motion-artifact-free video capture (global shutter) and low-noise, high-detail imaging (rolling shutter).

Featuring a compact 2.25-µm pixel and advanced 3D stacking, the sensors deliver high image quality in a small footprint. The sensors feature a die size of 5.76 × 4.46 mm and a package size of 10.3 × 8.9 mm with an industry-leading 73% pixel array to die surface ratio. This enables integration into space-constrained embedded vision systems without compromising performance, ST said.

Delivering high-quality imaging in challenging environments, these sensors leverage backside illumination and capacitive deep trench isolation pixel technologies to enhance sensitivity and sharpness, particularly in low lighting conditions. Single-frame on-chip high dynamic range improves detail visibility in both bright and dark areas.

The RGB-IR variants feature on-chip RGB-IR separation, eliminating additional components and simplifying system design. This capability supports multiple output patterns, including 5-MP RGB-NIR 4×4, 5-MP RGB Bayer, 1.27-MP NIR subsampling, and 5-MP NIR smart upscale, with independent exposure times and instant output pattern switching. This reduces costs while maintaining full 5-MP resolution for both color and infrared imaging, ST said.

The four sensors are currently available for evaluation and sampling, with mass production scheduled for February 2026. Documentation, evaluation kits, and product samples are available.

The post ST launches four 5-MP image sensors appeared first on EDN.

DC/DC converters add digital monitoring and control

EDN Network - Fri, 10/24/2025 - 20:57

XP Power announces a digital version of its HRF15 series of 15-W DC/DC converters, with output voltage and current programming through PMBus over I2C. These new capabilities address the growing need for automation and remote control in high-precision equipment, including mass spectrometry, scanning electron microscopy, and transmission electron microscopy for semiconductor inspection and analytical research.

XP Power's HRF15 series of 15-W DC/DC converters. (Source: XP Power)

Compared with the company’s precision analog version launched earlier in 2025, the digital interface of the HRF15 DC/DC converters makes integration simpler, reduces setup time through a graphical user interface, and accelerates product development. Reliability also improves with advanced monitoring and programming.

Other key features include power supply status flags that deliver visibility into system health and performance, enhancing uptime and protecting sensitive instruments; and data logging and real-time diagnostics that convert complex internal data into actionable insights, enabling users to make quick, informed decisions that result in lower operating costs and enhanced application safety. In addition, multi-unit synchronization enables scalable power architectures.
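For a sense of what PMBus programming and monitoring looks like from a host, here is a hedged Python sketch using standard PMBus command codes (VOUT_COMMAND 0x21, STATUS_WORD 0x79). The device address and the 16-bit setpoint scaling are assumptions, not HRF15 facts; consult XP Power’s documentation for the actual values.

    # Hedged PMBus-over-I2C sketch using standard command codes. The device
    # address and the setpoint scaling are assumptions, not HRF15 facts.
    from smbus2 import SMBus

    DEV_ADDR = 0x40                        # assumption: unit-specific PMBus address
    VOUT_COMMAND, STATUS_WORD = 0x21, 0x79

    def set_output(bus: SMBus, code: int) -> None:
        """Program the output setpoint (device-specific scaling applies)."""
        bus.write_word_data(DEV_ADDR, VOUT_COMMAND, code)

    def read_status(bus: SMBus) -> int:
        """Read STATUS_WORD fault/warning flags for health monitoring."""
        return bus.read_word_data(DEV_ADDR, STATUS_WORD)

    with SMBus(1) as bus:
        set_output(bus, 0x2000)            # illustrative code, not a real voltage
        if read_status(bus) != 0:
            print("fault or warning flagged; check the unit")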

Suitable for noise-sensitive applications, the HRF15 series features extremely low ripple down to 0.001% (10 ppm), critical for high performance. The units exhibit high stability over time at 10 ppm/hr, delivering consistency and repeatability in sensitive processes. Load and line regulation, down to 0.001%, delivers high performance even in load-dependent applications or where input voltage fluctuates. They also have a low temperature coefficient of 25 ppm/°C, minimizing environmental performance influences.

Single-output voltages can be specified at 10 kV, 12 kV, and 15 kV and each unit can deliver 15 W of power from a 24-VDC input. The output rail is fully adjustable for constant current and constant voltage from 0 to 100%, which addresses a wide range of loads.

The HRF15 series carries UL61010 and UL62368 safety approvals. Housed in a case measuring 33.0 × 72.4 × 161.0 mm and weighing approximately 465 g, the compact units ease integration into space-constrained applications. They are currently available from Avnet Abacus or direct from XP Power with a three-year warranty.

The post DC/DC converters add digital monitoring and control appeared first on EDN.
