A precision digital rheostat

Rheostats are simple and ubiquitous circuit elements, usually comprising a potentiometer connected as an adjustable two-terminal resistor. The availability of manual pots with resistances spanning ohms to megohms makes the optimum choice of nominal resistance easy. But when an application calls for a digital potentiometer (Dpot), the problem can be challenging.
Dpots are only available in a resistance range that’s narrow compared to manual pots. They also typically suffer from problematically high wiper resistance and resistance tolerance. These limitations conspire to make Dpots a difficult medium for implementing precision rheostats. Recent EDN design idea (DI) articles have addressed these issues with a variety of strategies and topologies:
- Op-amp wipes out DPOT wiper resistance
- Synthesize precision Dpot resistances that aren’t in the catalog
- Synthesize precision bipolar Dpot rheostats
- A class of programmable rheostats
While each of these designs corrects one or more complaints on the lengthy list of digital rheostat shortcomings, none fixes them all and some introduce complications of their own. Examples include crossover distortion, unreduced sensitivity to resistance tolerances, resolution-reducing nonlinearity of the programmed resistance, and just plain old complexity.
The design
Figure 1's circuit isn't a perfect solution either. But it does synthesize an accurate programmed resistance equal to reference resistor R1 linearly multiplied by U1's Rbw/Rab digital setting (the ratio between the terminal B to wiper resistance and total element resistance).
Figure 1 A precision digital rheostat that synthesizes an accurate programmed resistance equal to reference resistor R1 linearly multiplied by U1’s Rbw/Rab.
Here’s how it works.
R = (Va – Vb)/Ia
R = R1/(Raw/Rbw + 1) = R1 Rbw/Rab
Rab = Raw + Rbw = typically 5k to 10k
Where R is the programmed synthetic resistance, R1 is the reference resistor, Raw is the resistance between terminal A and wiper terminal, Rbw is the resistance between B and wiper terminal, and Rab is the total element resistance.
U1 works in “voltage divider” (pot) mode to set the gain of inverting amplifier A2. Pot mode makes gain insensitive to both U1’s wiper resistance (Rw) and Rab. They really don’t matter much—see Figure 4-4 in the Microchip MCP41XXX/42XXX datasheet.
Turning the crank on Figure 1’s design equation math, we get:
Ga2 = Raw/Rbw
Where Ga2 is A2’s gain. Further,
Voltage across R1 = (Va – Vb) + Ga2(Va – Vb) = (Raw/Rbw + 1)(Va – Vb) = Rab/Rbw(Va – Vb)
Current through R1 = Ia = Rab/Rbw(Va – Vb)/R1
Then, since R = (Va – Vb)/Ia:
R = R1*Rbw/Rab
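To make the linear programming relationship concrete, here's a minimal Python sketch of the math above. It assumes an ideal ratiometric 8-bit (256-position) Dpot and a 10 kΩ R1; both values are illustrative choices, not taken from the article's bill of materials.

```python
# Figure 1's synthesized rheostat: R = R1 * Rbw/Rab.
# Assumes an ideal ratiometric 8-bit Dpot (256 positions) and R1 = 10 kohm;
# both are illustrative assumptions.

R1 = 10_000.0   # reference resistor, ohms (assumed)
STEPS = 256     # positions of an 8-bit Dpot

def synthesized_r(code: int, r1: float = R1, steps: int = STEPS) -> float:
    """Programmed resistance: with Rbw/Rab = code/steps, R = r1 * code/steps."""
    return r1 * code / steps

# R scales linearly with the wiper code, independent of Rab and Rw:
for code in (64, 128, 192, 256):
    print(code, synthesized_r(code))  # -> 2500.0, 5000.0, 7500.0, 10000.0
```

Note that R depends only on R1 and the programmed ratio, which is why the pot's loose Rab tolerance and wiper resistance drop out of the result.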
Va is lightly loaded by A1's ~10 picoamp (pA) input bias current, so R1 can range from hundreds of ohms up to multiple megohms as the application may dictate. R1 should be a precision part, certainly 1% tolerance or better; from there, programming and the math above take over.
Figure 2 plots the linear relationship between R and Rbw.
Figure 2 Linear relationship between R and Rbw showing the circuit synthesizes an accurate programmed resistance equal to reference resistor R1 linearly multiplied by U1’s Rbw/Rab.
A compensation capacitor (C1) probably isn’t necessary for the parts selection shown in Figure 1 for A2 and U1. But if a faster amplifier or a higher resistance Dpot is chosen, then 10 pF to 20 pF would probably be prudent.
Meanwhile, I think it would be fair to say this design looks competitive with its peers. But earlier I described it as imperfect. Besides being a single-terminal topology (like two others on the list), where else does it fall short of being a complete solution to the ideal digital rheostat (Digistat) problem?
Shortcomings
Here's where: As Figure 3 shows, when the programmed value for R goes down, A2's gain (Ga2) must go up. Reading the graph from right to left, we see gain rising moderately as R declines by 75% from R1 to R1/4, where Rbw/Rab = 64/256 and gain = 3, but then it takes off. This tends to exaggerate errors like input offset, finite GBW, and other op-amp nonidealities while creating the possibility of early A2 saturation at relatively low signal levels.
Figure 3 Graphs for Ga2 (red) and R/R1 (black) versus Rbw/Rab on the x-axis. When the programmed value for R goes down, Ga2 must go up.
The severity of these effects on the utility of the design, whether mild, serious, or fatal, will depend on how low you need to go in R/R1 and on other specifics of the application. So it's certainly not perfect, but maybe it's still useful somewhere.
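The gain blow-up is easy to see numerically. This short sketch, assuming an ideal ratiometric 8-bit (256-position) Dpot, evaluates Ga2 = Raw/Rbw versus the wiper code:

```python
# A2's required gain Ga2 = Raw/Rbw = (steps - code)/code for an assumed ideal
# ratiometric 8-bit Dpot, as plotted in Figure 3.

def a2_gain(code: int, steps: int = 256) -> float:
    """Gain needed for programmed R = R1 * code/steps; diverges as code -> 0."""
    return (steps - code) / code

for code in (192, 128, 64, 16, 4):
    print(code, a2_gain(code))
# Gain is only 3 at R = R1/4 (code 64), but reaches 63 by R = R1/64 (code 4).
```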
Two-terminal design
And about that single-terminal problem. If you have an application that absolutely requires a two-terminal programmable resistance, you might consider Figure 4. Depending on the external circuitry, it might not oscillate.
Figure 4 Duplicate and cross-connect Figure 1’s circuitry to get a two-terminal programmable resistance.
In closing…
Thanks go to frequent contributor Christopher R. Paul for his clever innovations and stimulating discussions on this topic; I would likely never have come up with this design without his help. More thanks go to editor Aalyia Shaukat for her clever creation of this DI section, which makes fun teamwork like this possible in the first place. This article would definitely never have happened without her help.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Op-amp wipes out DPOT wiper resistance
- Synthesize precision Dpot resistances that aren’t in the catalog
- Synthesize precision bipolar Dpot rheostats
- A class of programmable rheostats
The post A precision digital rheostat appeared first on EDN.
SoC interconnect automates processes, reduces wire length

A new network-on-chip (NoC) IP aims to dramatically accelerate chip development by introducing artificial intelligence (AI)-driven automation and reducing wire length to lower power use in system-on-chip (SoC) interconnect design. Arteris, which calls its newly introduced FlexGen interconnect IP a smart NoC, claims to deliver a 10x productivity boost, shortening design iterations from weeks to days.
Modern chips, connected by billions of wires, continue to grow in size and complexity. Modern SoCs contain 5 to 20+ unique NoC instances, and each instance can require 5 to 10 design iterations. As a result, SoC design complexity has surpassed what manual methods can manage, which calls for smarter NoC automation.
“In SoC interconnect, while technology has advanced to new levels, a lot of work is still done in manual mode,” said Michal Siwinski, CMO of Arteris. FlexGen accelerates chip design by shortening and reducing iterations from weeks to days for greater efficiency.
“While FlexGen is still using the tried-and-tested NoC IP technology as basic building blocks, it automates the existing infrastructure by employing AI technology,” said Andy Nightingale, VP of product management and marketing at Arteris. “With FlexGen, we automate the NoC IP generation to reduce the manual work while opening high-quality configurations that rival or surpass the manual designs.”
Figure 1 A FlexNoC manual interconnect (above) is shown for an ADAS chip, while an automated FlexGen interconnect (below) accelerates this chip design by up to 10x. Source: Arteris
According to Nightingale, it enhances engineering efficiency by 3x while delivering expert-quality results with optimized routing and reduced congestion. Dream Chip Technologies, a supplier of advanced driver assistance systems (ADAS) silicon solutions, acknowledges reducing design iterations from weeks to days while using FlexGen in its Zukimo 1.1 automotive ADAS chip design.
“FlexGen’s automated NoC IP generation allows us to create floorplan adaptive topologies with complex automotive traffic requirements within minutes,” said Jens Benndorf, GM at Dream Chip Technologies. “That enabled rapid experimentation to find design sweet spots and to respond quickly to floorplan changes with almost push-button timing closure.”
Shorter wire length
AI has brought a compute performance explosion, and with it the complexity of interconnects in SoC designs is growing exponentially, driving a huge increase in the number of wires. FlexGen claims to reduce wire length by up to 30% to improve chip or chiplet power efficiency.
“We are also tackling the big problem of wire length in modern SoC designs,” said Nightingale. “As the gate count size reduces, it inevitably leads to dynamic power issues due to massive data traffic across wires.” By reducing wire length, FlexGen interconnect IP can reduce overall system power and thus help heating problems caused by the energy density of moving massive amounts of data across SoC interconnects.
Figure 2 FlexNoC manual interconnect (above) is shown with the best performance, while automated FlexGen (below) significantly reduces the interconnect wire length. Source: Arteris
Siwinski added that the number of gates doesn’t matter at smaller nodes. “Power from wire length kills you, so we reduce wire length to reduce overall power, performance, and area (PPA) in SoC designs.” That’s crucial as SoCs scale and become more powerful to meet the demands of applications like AI, autonomous driving, and cloud computing.
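A first-order, back-of-the-envelope model illustrates why wire length matters so much for dynamic power: the switched wire capacitance scales roughly with total wire length, so dynamic power falls in proportion. All numbers below are illustrative assumptions, not Arteris data.

```python
# First-order model: interconnect dynamic power P = activity * C * V^2 * f,
# with wire capacitance taken as proportional to total wire length.
# Every constant here is an illustrative assumption.

def interconnect_power(wire_len_m: float,
                       cap_per_m: float = 2e-10,  # ~0.2 pF/mm, assumed
                       vdd: float = 0.75,         # supply voltage, volts
                       freq_hz: float = 1e9,      # clock frequency
                       activity: float = 0.15) -> float:
    c_total = cap_per_m * wire_len_m            # total switched capacitance, F
    return activity * c_total * vdd**2 * freq_hz

baseline = interconnect_power(100.0)            # assumed 100 m of NoC wiring
shortened = interconnect_power(100.0 * 0.70)    # 30% less wire
print(f"dynamic power saving: {1 - shortened / baseline:.0%}")  # -> 30%
```

Because the model is linear in wire length, a 30% wire reduction maps directly to a 30% cut in interconnect dynamic power; real savings depend on how much of the total capacitance the wires actually contribute.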
FlexGen is processor agnostic and supports Arm, RISC-V, and x86 processors. Moreover, its IP generation is highly repeatable to facilitate incremental design.
Related Content
- SoC Interconnect: Don’t DIY!
- What is the future for Network-on-Chip?
- Why verification matters in network-on-chip (NoC) design
- SoC design: When is a network-on-chip (NoC) not enough
- Network-on-chip (NoC) interconnect topologies explained
The post SoC interconnect automates processes, reduces wire length appeared first on EDN.
electronica China 2025 is Coming: Embarking on a journey of in-depth exploration of the electronic industry chain!
electronica China 2025 will take place from April 15 to 17, 2025 at the Shanghai New International Expo Centre (SNIEC), in halls W3-W5 and N1-N5. It is expected to attract a total of 1,700 high-quality exhibitors from Chinese and international markets, covering 100,000 square meters. Visitor registration is now in full swing; register now and check out the highlights of this important trade fair for the electronics industry in Asia!
Tech Exhibition Areas: Highlighting the Allure of Electronic Technology
The venue will feature sections for semiconductors, sensors, power supplies, testing and measurement, passive components, displays, connectors, switches, wiring harnesses and cables, distributors, printed circuit boards, electronic manufacturing services, semiconductor intelligent manufacturing, and more. 1,700 premium enterprises from both Chinese and international markets will participate, showcasing their cutting-edge scientific research achievements and industry solutions.
Theme Forums: Exploring the Future Development of Technological Innovation
This year's exhibition will continue to host multiple themed forums, focusing on popular application markets and rapidly evolving industries such as electric vehicles, automotive electronics, humanoid robots, third-generation semiconductors, embedded systems, AI, IoT, energy storage, smart manufacturing, connectors, and motor drives. Industry leaders, technical experts, and academic researchers from the electronic sector, application domains, and research institutes will be invited to address audience queries, share case studies, and provide cutting-edge technological solutions.
Click to register now: https://ec.global-eservice.com/?lang=en&channel=ele
For more information: https://www.electronicachina.com.cn/en
The post electronica China 2025 is Coming: Embarking on a journey of in-depth exploration of the electronic industry chain! appeared first on ELE Times.
Rohde & Schwarz first to achieve GCF approval for 5G FR2 RRM standalone mode conformance test cases
The latest radio resource management (RRM) conformance work item from the Global Certification Forum (GCF) has reached “Active” status following the approval of the Rohde & Schwarz R&S TS8980FTA-M1 5G conformance test system as the first to meet test platform approval criteria (TPAC). This includes the validation of RRM FR2 test cases in standalone mode (SA) for both one and two angles of arrival (1x AoA and 2xAoA) within 5G mmWave band combinations.
Rohde & Schwarz is the first company to obtain Global Certification Forum (GCF) test approval for validating the radio resource management (RRM) performance of advanced 5G New Radio (NR) devices. This approval specifically applies to devices operating in 5G NR standalone (SA) mode within FR2 and capable of managing multiple angles of arrival. It enables device manufacturers to certify their products for global compatibility with mobile networks that utilize mmWave frequencies and operate in 5G SA with a real 5G core network (5GC).
Verifying compliance with critical RRM test cases is increasingly important as wireless 5G routers and customer premises equipment (CPE) gain prominence for fixed wireless access (FWA) applications. These test cases are essential for ensuring a reliable end user experience, especially in challenging network conditions, such as blockages, or at the cell edge.
After the Conformance Agreement Group (CAG) #81 meeting held by GCF in Bangkok January 21 to 23, 2025, it was confirmed that Rohde & Schwarz is currently the only provider offering certification solutions that meet all mandatory test requirements for 5G FR2 RRM, while also supporting the largest number of validated test cases for both RF and RRM in this frequency range. The Rohde & Schwarz 5G mmWave test platform offers a wide range of advanced 5G FR2 non-standalone (NSA) and SA test cases. The platform is built on the CMX500 5G network emulator and includes the R&S ATS1800M anechoic chamber series as well as a powerful device under test (DUT) positioner.
Thomas Eyring, Senior Director of Device Certification at Rohde & Schwarz, commented on the milestone, saying: “We are proud to take a significant step toward enhanced conformance testing as a pioneer in mobile device testing. Our technology will help ensure the quality and reliability of 5G devices while addressing the challenges in the FR2 frequency range. We are also working on test cases for the upcoming FR3 frequency range, which is expected to play a key role in 6G networks.”
The post Rohde & Schwarz first to achieve GCF approval for 5G FR2 RRM standalone mode conformance test cases appeared first on ELE Times.
Altium and AWS Collaborate to Equip India’s Next Generation of Engineers with Industry-Ready Skills in Electronics Design and Cloud Technology
Altium joins AWS Skills to Jobs Tech Alliance in India; Aims to Bridge Skills Gaps and Enhance Employability for Indian Students
As part of the collaboration, students will gain access to AWS training content, including:
- AWS Cloud Practitioner Essentials – Covering cloud technology basics
- AWS Technical Essentials – Introducing core cloud concepts like networking, databases, and storage
- Amazon AppStream 2.0 Primer – AWS’s application streaming solution
- IoT Fundamentals – Training in IoT and smart technology
- On Campus Applied Learning: Co-hosted events, workshops and challenges will give students hands-on experience in solving industry challenges.
- Pathways to Employment: Students who complete the joint curriculum will connect with industry partners for career opportunities.
The post Altium and AWS Collaborate to Equip India’s Next Generation of Engineers with Industry-Ready Skills in Electronics Design and Cloud Technology appeared first on ELE Times.
Global Semiconductor Manufacturing Industry Reports Solid Q4 2024 Results, SEMI Reports
The global semiconductor manufacturing industry closed 2024 with strong fourth quarter results and solid year-on-year (YoY) growth across most of the key industry segments, SEMI announced today in its Q4 2024 publication of the Semiconductor Manufacturing Monitor (SMM) Report, prepared in partnership with TechInsights. The industry outlook is cautiously optimistic at the start of 2025 as seasonality and macroeconomic uncertainty may impede near-term growth despite momentum from strong investments related to AI applications.
After declining in the first half of 2024, electronics sales bounced back later in the year, resulting in a 2% annual increase. Electronics sales grew 4% YoY in Q4 2024 and are expected to see a 1% YoY increase in Q1 2025, impacted by seasonality. Integrated circuit (IC) sales rose by 29% YoY in Q4 2024, and continued growth is expected in Q1 2025 with a 23% YoY increase as AI-fueled demand continues boosting shipments of high-performance computing (HPC) and datacenter memory chips.
Similar to electronics sales, semiconductor capital expenditures (CapEx) decreased in the first half of 2024 but saw a strong rebound, particularly in the fourth quarter, resulting in 3% annual growth by the end of 2024. Memory-related CapEx continued to lead the growth surging 53% quarter-on-quarter (QoQ) and 56% YoY in Q4 2024. Non-memory CapEx also edged up in Q4 2024 showing 19% QoQ and 17% YoY improvement. Total CapEx is expected to remain strong in Q1 2025, growing 16% relative to the same period of the previous year on the strength of investments to support high bandwidth memory (HBM) capacity additions for AI deployment.
The semiconductor capital equipment segment remained resilient primarily due to increased investments into expanding leading-edge logic, advanced packaging, and HBM capacity. Wafer fab equipment (WFE) spending increased 14% YoY and 8% QoQ in Q4 2024. Quarterly WFE billings are expected to be around $26 billion in Q1 2025. China’s investment continues to play a significant role in the WFE market but started to subside by the end of the year. Additionally, back-end equipment showed strong increases in Q4 2024, with the Test segment logging 5% QoQ growth and an impressive 55% YoY increase for the quarter, while the Assembly and Packaging segment experienced a 15% YoY increase. Both segments are expected to show similar QoQ growth of 6-8% in Q1 2025.
In Q4 2024, installed wafer fab capacity surpassed a record 42 million wafers per quarter worldwide (in 300mm wafer equivalent), and capacity is projected to reach nearly 42.7 million in Q1 2025. Foundry and Logic-related capacity continues to show stronger increases, growing 2.3% QoQ in Q4 2024, and the segment is projected to rise 2.1% in Q1 2025 driven by capacity expansion for advanced nodes. Memory capacity increased 1.1% in Q4 2024 and is forecasted to remain at the same level in Q1 2025 driven by strong demand for HBM.
“Despite seasonality and the challenges of macroeconomic uncertainty, momentum in AI-driven investments continues to fuel expansion across key segments, including memory, capital expenditures, and wafer fab equipment,” said Clark Tseng, Senior Director of Market Intelligence at SEMI. “Looking forward for 2025, the industry remains cautiously optimistic, with robust growth prospects driven by ongoing demand for high-performance computing and data center buildout.”
“As we begin the year, our expectation is for stronger performance in the second half, with semiconductor sales anticipated to remain flat sequentially in the first half, followed by a notable double-digit increase in the latter half,” said Boris Metodiev, Director of Market Analysis at TechInsights. “Inventory challenges persist for discrete, analog, and optoelectronic manufacturers, which will need to be addressed before we can expect widespread growth to resume.”
The Semiconductor Manufacturing Monitor (SMM) report provides end-to-end data on the worldwide semiconductor manufacturing industry. The report highlights key trends based on industry indicators including capital equipment, fab capacity, and semiconductor and electronics sales, along with a capital equipment market forecast. The SMM report also contains two years of quarterly data and a one-quarter outlook for the semiconductor manufacturing supply chain including leading IDM, fabless, foundry, and OSAT companies. An SMM subscription includes quarterly reports.
The post Global Semiconductor Manufacturing Industry Reports Solid Q4 2024 Results, SEMI Reports appeared first on ELE Times.
The Google Chromecast Gen 3: Gluey and screwy

In my recent 2nd generation Google Chromecast teardown, “The Google Chromecast Gen 2 (2015): A Form Factor Redesign with Beefier Wi-Fi, Too,” I noted that I subsequently planned on tearing down the Chromecast Ultra, followed by the 3rd generation Chromecast, chronologically ordered per their respective introduction dates (2016 and 2018).
I’ve subsequently decided to flip-flop that ordering, tackling the 3rd generation Chromecast first, in the interest of grouping together devices with output resolution commonality. All three Chromecast generations, including 2013’s original version, top out at 1080p video output, although the 3rd generation model also bumped the frame rate from 30 fps to 60 fps; the Ultra variant you’ll see in the near future conversely did 4K. If you’re wondering why I’m referring to them all in the past tense, by the way, it’s because none of them are in production any longer, although everything but the first-generation Chromecast still gets software updates.
Google also claimed at intro that the Chromecast 3 not only came in new color options:
but was also 15% faster than its predecessor (along with adding support for Dolby Digital Plus and fully integrated Chromecast with Nest smart speakers), although the company was never specific about what “15% faster” meant. Was it only in reference to the already mentioned 1080p60 smoother video-playback option? One might deduce that it also referred to more general UI responsiveness, but if true was this due to faster innate processing—which all users would conceivably experience—or higher wireless network performance, only for those with advanced LANs? Or both? I hope this teardown will help answer these and other questions (like why does Wikipedia list no CPU or memory details for this generation?) to at least a degree.
Generally speaking, I try whenever possible to avoid teardowns of perfectly good hardware that end up being destructive, i.e., leaving the hardware in degraded condition that precludes me from putting it back together afterwards and donating it to someone else. In such cases, I instead strive to focus my attention on already nonfunctional “for parts only” devices sourced from eBay and elsewhere. This time, however, all I could find were still-working eBay options:
although the one I picked was not only inexpensive ($10 plus shipping and sales tax, $16.63 total) but also missing its original packaging:
Here’s a closeup of the micro-USB connector—a legacy approach that’s rapidly being obsoleted by the USB-C successor—at the other end of the USB-A power cable:
And here’s a top view of our patient, as usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:
Followed by a closeup of the top of the main body:
Same goes for the underside:
That printing on the bottom is quite scuffed after what appears to be long-term use, although I suspect it was already faint from the get-go. In the center are “UL Listed” and HDMI logos, with the phrase “ITE E258392” in between them. And here’s what it says around the circumference:
Google Model NC2-685
1600 Amphitheater Parkway
Mountain View, CA 94043
CAN ICES-3
(B)NMB-3(B)
IC 10395A-NC26A5
FCC ID A4RNC2-6A5B
Made in Thailand
06041HFADVM445
…whatever some of that means (I already get the rest). And phew!
Here’s the HDMI connector on one end of the cable:
And jumping ahead in time a bit, the other end, entering the partially disassembled body:
Back to where we were before in time, the opposite side of the body showcases, left to right, a hardware reset button (you can also reset via software, if the Chromecast is already activated and mobile device-connected), the aforementioned micro-USB power input and a status LED:
Speaking of sides, you probably already noticed the seam around the circumference of the main body. Also speaking of sides, let’s see if it gets us inside. First off, I used the warmed-up iOpener introduced previously in the Chromecast 2 teardown to heat up the presumed glue holding the two halves together at the seam:
Then I set to work with its iFixit-companion Jimmy:
which got me partly, albeit not completely, to my desired end result, complete with collateral damage in the process:
I suspected that the diminutive dollop of under-case thermal paste I’d encountered with the 2nd-generation Chromecast was more substantially applied this time, to counteract the higher heat output associated with the 3rd-generation unit’s “15% faster” boast. So, I reheated the iOpener, reoriented it on the device, waited some more:
and tried, tried again. Yep, there’s paste inside:
Veni, vidi, vici (albeit, in this case, not particularly swiftly):
My, that’s a lot of (sloppily applied, to boot) glue:
The corresponding paste repository on the inside of the upper-case half is an odd spongy donut-shaped thing. I’ve also scraped away some of the obscuring black paint to reveal the metallic layer beneath it, which presumably acts as a secondary heat sink:
Some rubbing alcohol and a tissue cleaned the blue-green goop off the Faraday cage:
Although subsequently removing the retaining screws on either side of the HDMI cable did nothing to get the cage itself off:
Leaving me to just rip it off (behavior that, as you’ll soon see, wasn’t a one-time event):
Followed by (most of) the black tape underneath it:
I never actually got the HDMI cable detached from the lower-case half, but with the screws removed, I was at least able to disconnect it from the PCB:
enabling me to remove the PCB from the remainder of the case…at least theoretically:
This Chromecast-generation time around, there’s an abundance of thermal paste on both sides:
Even after jamming the Jimmy in the gap in attempting to cut the offending paste in half, I still wasn’t able to separate the PCB from the case, specifically down at the bottom where the micro-USB connector was. The ambient light in the room was starting to get dim and I needed to leave for Mass soon, so I—umm—just gave the PCB a yank, ripping it out of the case:
and quickly snapped the remainder of the photos you’ll see, including the first glimpse of the bottom of the PCB:
When I got back home and reviewed the shots, I was first flummoxed, then horrified, and finally (to this very day, in fact) mortified and embarrassed. And I bet that at least a few of you eagle-eyed readers already know where I’m going with this. What’s that in the bottom left-ish edge of the inside of the back half of the case (with the reset button rubber “spring” to its left and the light guide for the activity LED to its right)? Is that…a still-attached screw? Surrounded by a chunk of PCB?
Yes…yes it is. This dissected device is destined solely for the landfill, “thanks” to my rushed ham-handedness. Sigh:
Guess I might as well get this Faraday cage off too:
And clean off the additional inside paste:
The IC in the upper left is Marvell’s Avastar 88W8887 quad wireless transceiver, supporting 1×1 802.11ac, Bluetooth 4.1, NFC, and FM, only some of which are actually implemented in this design. It’s the same chip used in the 2nd generation Chromecast, so the basis for the “15% faster” claim seemingly doesn’t source here. Next to it on the right is an SK Hynix H5TC4G63EFR-RDA 4 Gbit LPDDR3-1866 SDRAM. Note too the LED in the lower left corner, and the PCB-embedded antennae on both sides. And, since this PCB is “toast” anyway (yes, note the chunk out of it in the lower right, too), I went ahead and lifted the upper right corner of the cage frame to assure myself (and you) that nothing meaningful was hiding underneath:
Back to the previously seen front side of the PCB:
At far left (with the hardware reset switch below it in the lower left corner…and did I mention yet the missing chunk of PCB to the right of it?), peeking out from the cage frame, is a small, obviously Marvell-sourced, IC labeled as follows (ID, anyone?):
MRVL
G868
952GAX
which I suspect has the same (unknown) function(s) as a similarly (but not identically) labeled chip I’d found in the 2nd-generation Chromecast:
MRVL
G868
524GBD
To its right is the much larger Synaptics MM3006, formerly known as the Marvell 88DE3006 (Synaptics having acquired Marvell’s Multimedia Solutions Business in mid-2017). Again, it’s the same IC as in the 2nd generation Chromecast. And finally, at far right is a Toshiba TC58NVG2S0 4 Gbit NAND flash memory. Same flash memory supplier as before. Same flash memory technology as before. But hey, twice the capacity as before (presumably to provide headroom for the added firmware “support for Dolby Digital Plus and fully integrated Chromecast with Nest smart speakers” mentioned earlier)! So, there’s that…
Aside from a bigger flash memory chip (and, ok, getting rid of the magnet integrated into the Chromecast 2’s HDMI connector), what’s different between the 2nd and 3rd generation Chromecasts, and where does Google’s “15% faster” claim come from? The difference, I suspect, originates with the DRAM. I hadn’t specifically mentioned this in the previous teardown, but the Samsung DRAM found there, while also LPDDR3 in technology and 4 Gbit in capacity, was a “K0” variant reflective of a 1600 MHz speed bin. This time, conversely and as already noted, the DRAM runs at 1866 MHz. My guess is that this uptick also corresponds to a slightly faster speed bin for the Marvell-now-Synaptics application processor. And therein lies, between the two, the modest overall system performance boost.
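Some quick arithmetic lends support to the speed-bin hypothesis: moving from an LPDDR3-1600 bin to an LPDDR3-1866 bin raises peak memory transfer rate by roughly the amount Google claimed.

```python
# The Chromecast 2's Samsung part was binned at LPDDR3-1600; the Chromecast 3's
# SK Hynix part runs at LPDDR3-1866. Peak transfer-rate uplift from the speed
# bins alone:
gen2_mt_per_s = 1600
gen3_mt_per_s = 1866
uplift = gen3_mt_per_s / gen2_mt_per_s - 1
print(f"peak memory bandwidth uplift: {uplift:.1%}")  # roughly 16.6%
```

That ~16.6% is in the right neighborhood of the "15% faster" marketing figure, consistent with the guess that memory (and perhaps processor) speed bins account for the difference.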
Agree or disagree, readers? Any other thoughts? Let me know in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- The Google Chromecast Gen 2 (2015): A Form Factor Redesign with Beefier Wi-Fi, Too
- Google’s Chromecast with Google TV: Car accessory similarity, and a post-teardown resurrection opportunity?
- The Google Chromecast with Google TV: Realizing a resurrection opportunity
- Google’s Chromecast with Google TV: Dissecting the HD edition
- Teardown: Chromecast streams must have gotten crossed
The post The Google Chromecast Gen 3: Gluey and screwy appeared first on EDN.
The Rise of DRDO’s Trinetra UAV in India’s Defense Landscape
The Defence Research and Development Organisation (DRDO) of India has been instrumental in advancing indigenous unmanned aerial vehicle (UAV) technologies to meet the diverse operational requirements of the Indian Armed Forces. Among its notable developments is the Trinetra UAV, a cutting-edge drone system designed to enhance surveillance, reconnaissance, and tactical operations.
Design and Development
The Trinetra UAV, aptly named after the Sanskrit term for “Three Eyes,” signifies a comprehensive surveillance capability. This UAV is engineered to be lightweight and highly portable, primarily constructed from advanced composite materials that ensure structural integrity while minimizing weight. Its compact design facilitates rapid deployment across various terrains, making it an invaluable asset for field operations.
One of the standout features of the Trinetra is its Vertical Take-Off and Landing (VTOL) capability. Utilizing a quadcopter configuration, the UAV can ascend and descend vertically, negating the need for traditional runways or launch systems. This attribute is particularly advantageous in confined or rugged environments where conventional take-off and landing are impractical.
Technical Specifications
While specific technical details of the Trinetra UAV remain classified, insights can be drawn from DRDO’s previous UAV projects, such as the Netra series. The Netra V4+, for instance, boasts an operational range of up to 10 kilometers and a flight endurance exceeding 60 minutes at mean sea level. It is equipped with a high-resolution imaging payload featuring a 10x optical zoom, enabling detailed surveillance from significant distances. Additionally, the Netra V4+ can operate at altitudes up to 400 meters Above Ground Level (AGL) and is designed for ease of transport and quick assembly, weighing approximately 6.5 kilograms.
Given the evolutionary nature of DRDO’s UAV development, it is plausible that the Trinetra UAV incorporates similar or enhanced specifications, tailored to meet the specific demands of modern military operations.
Operational Capabilities
The Trinetra UAV is designed to execute a wide array of missions, encompassing intelligence gathering, border surveillance, and tactical support. Its VTOL capability ensures that it can be deployed in diverse environments without the constraints associated with traditional UAV launch and recovery methods.
Equipped with advanced electro-optical and infrared sensors, the Trinetra provides real-time video streaming and high-resolution imagery, facilitating both day and night operations. This dual-sensor setup ensures continuous situational awareness, enabling ground commanders to make informed decisions based on live intelligence.
Autonomous navigation is a cornerstone of the Trinetra’s operational design. The UAV can follow pre-programmed flight paths using waypoint navigation, allowing it to conduct missions with minimal human intervention. In scenarios where communication is disrupted or battery levels are critically low, the Trinetra is programmed to autonomously return to its launch point, ensuring mission continuity and asset recovery.
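The failsafe behavior described above can be sketched as simple decision logic. This is purely illustrative and not DRDO code; the function name and the battery threshold are assumptions for the sake of the example.

```python
# Illustrative sketch only -- not the Trinetra's actual flight software.
# Models the described policy: fly pre-programmed waypoints autonomously,
# but return to the launch point if the data link drops or the battery
# runs critically low. The 20% threshold is an assumed value.

def next_action(link_ok: bool, battery_pct: float, waypoints_left: int,
                low_batt_pct: float = 20.0) -> str:
    """Decide the UAV's next action from link, battery, and mission state."""
    if not link_ok or battery_pct <= low_batt_pct:
        return "RETURN_TO_LAUNCH"   # failsafe: recover the asset
    if waypoints_left > 0:
        return "FLY_NEXT_WAYPOINT"  # continue the pre-programmed path
    return "LOITER"                 # mission complete, hold position
```

The key property is that the failsafe check runs before the mission logic, so a lost link or low battery always overrides the remaining waypoints.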
Recent Developments
In a strategic move to enhance its unmanned capabilities, the Indian Army has placed a significant order for nearly 700 Trinetra drones. This procurement aims to bolster surveillance and reconnaissance operations, particularly in challenging terrains such as the Himalayas. The deployment of these drones is expected to provide real-time intelligence, thereby improving operational efficiency and response times. This acquisition aligns with the broader strategy of integrating advanced indigenous unmanned systems into the armed forces to address contemporary security challenges.
Strategic Implications
The induction of the Trinetra UAV into the Indian Armed Forces signifies a pivotal shift towards embracing indigenous technologies for defense applications. This move not only reduces dependency on foreign systems but also fosters self-reliance in critical defense technologies. The Trinetra’s capabilities are poised to enhance border surveillance, counter-insurgency operations, and disaster management efforts. Its real-time intelligence-gathering prowess is expected to be a force multiplier, providing commanders with actionable insights and facilitating informed decision-making in complex operational environments.
Conclusion
The DRDO’s Trinetra UAV represents a significant advancement in India’s unmanned aerial capabilities. Its blend of cutting-edge technology, autonomous features, and adaptability to diverse mission requirements positions it as a pivotal asset in modern warfare and surveillance. As the Indian Armed Forces continue to integrate such indigenous systems, the Trinetra UAV stands as a testament to India’s commitment to technological innovation and self-reliance in defense.
The post The Rise of DRDO’s Trinetra UAV in India’s Defense Landscape appeared first on ELE Times.
Infineon launches CoolGaN G3 Transistor in new silicon-footprint packages
Infineon introduces CoolGaN G3 Transistor in new silicon-footprint packages to drive industry-wide standardization
Gallium Nitride (GaN) technology plays a crucial role in enabling power electronics to reach the highest levels of performance. However, GaN suppliers have thus far taken different approaches to package types and sizes, leading to fragmentation and lack of multiple footprint-compatible sources for customers. Infineon Technologies AG addresses this challenge by announcing the high-performance gallium nitride CoolGaN G3 Transistor 100 V in RQFN 5×6 package (IGD015S10S1) and 80 V in RQFN 3.3×3.3 package (IGE033S08S1).
“The new devices are compatible with industry-standard silicon MOSFET packages, meeting customer demands for a standardized footprint, easier handling, and faster time-to-market,” said Dr. Antoine Jalabert, Product Line Head for mid-voltage GaN at Infineon.
The CoolGaN G3 100 V transistor will be available in a 5×6 RQFN package with a typical on-resistance of 1.1 mΩ, while the 80 V transistor in a 3.3×3.3 RQFN package has a typical on-resistance of 2.3 mΩ. These transistors offer a footprint that, for the first time, allows for easy multi-sourcing strategies and layouts complementary to silicon-based designs. The new packages, in combination with GaN, offer a low-resistance connection and low parasitics, enabling high-performance transistor output in a familiar footprint.
Moreover, this chip-and-package combination allows for a high level of robustness in terms of thermal cycling, in addition to improved thermal conductivity, as heat is better distributed and dissipated due to the larger exposed surface area and higher copper density.
The post Infineon introduces CoolGaN G3 Transistor in new silicon-footprint packages to drive industry-wide standardization appeared first on ELE Times.
Top 10 UAV Manufacturers in India
India’s unmanned aerial vehicle (UAV) industry is soaring to new heights, fueled by rapid technological advancements, policy support, and increasing demand across diverse sectors. From bolstering national security to revolutionizing precision agriculture and infrastructure monitoring, drones are becoming indispensable assets. As the Indian government pushes for self-reliance under the “Make in India” initiative, domestic UAV manufacturers are stepping up with cutting-edge innovations and indigenous designs. In this evolving landscape, several companies are leading the charge, redefining aerial intelligence and automation. Here’s a look at the top 10 UAV manufacturers shaping India’s drone ecosystem in 2025.
- ideaForge Technology Limited
Established in 2007, ideaForge is a Mumbai-based UAV manufacturer renowned for designing and developing drones tailored for mapping, security, and surveillance applications. Serving defense forces and various government departments, ideaForge holds a significant position in the Indian drone industry. In July 2023, the company marked a milestone by launching its initial public offering (IPO), underscoring its growth trajectory. At Aero India 2025, ideaForge unveiled four new UAVs: NETRA 5, SWITCH V2, a Tactical UAV concept, and a Logistics UAV concept, each designed to address operational challenges in high-stakes environments.
- Asteria Aerospace
Asteria Aerospace is a prominent player in the Indian drone industry, focusing on the development of UAVs for diverse applications across defense and industrial sectors. The company offers a range of drone solutions tailored to meet specific operational requirements, emphasizing innovation and reliability in its products.
- Zen Technologies
Specializing in defense training solutions, Zen Technologies has expanded its portfolio to include UAVs and related systems for military applications. Leveraging its expertise in simulation and training, the company provides comprehensive drone solutions that enhance defense capabilities and operational readiness.
- Paras Defence and Space Technologies
Paras Defence is engaged in the design, development, and manufacturing of a wide array of defense and space engineering products, including UAVs. Catering to various segments of the Indian defense industry, the company contributes to the nation’s strategic capabilities by delivering advanced drone technologies and solutions.
- Garuda Aerospace
Based in Chennai, Garuda Aerospace offers drone-based solutions across multiple sectors, including agriculture, infrastructure, and surveillance. The company’s focus on delivering cost-effective and efficient UAV services has positioned it as a key player in the Indian drone ecosystem, addressing diverse industry needs with innovative approaches.
- Aarav Unmanned Systems
Aarav Unmanned Systems specializes in providing drone solutions for industrial applications such as mining, urban planning, and agriculture. Their UAVs are engineered to deliver high-quality data, facilitating informed decision-making processes and enhancing operational efficiency across various sectors.
- NewSpace Research & Technologies
Headquartered in Bengaluru, NewSpace Research & Technologies focuses on developing persistent drones for earth observation and communications. Collaborating with Hindustan Aeronautics Limited (HAL), the company is instrumental in projects like the Combat Air Teaming System (CATS) Infinity, a high-altitude pseudo-satellite UAV designed for extended endurance and strategic operations.
- Throttle Aerospace Systems Pvt Ltd
Throttle Aerospace Systems (TAS), based in Bengaluru, is a leading entity in the Indian drone manufacturing sector. As the first Directorate General of Civil Aviation (DGCA)-approved maker of civil drones and licensed to produce military drones, TAS offers a range of innovative products and solutions aimed at transforming mobility across various industries.
- Drones Origin Private Limited
Located in Hyderabad, Drones Origin Private Limited specializes in the indigenous design and manufacturing of drone and UAV components. Positioning itself as a one-stop solution for various drone-related needs, including propulsion and custom designs, the company emphasizes self-reliance and aligns with the ‘Make in India’ initiative, contributing to the nation’s growing drone manufacturing industry.
- IG Drones
IG Drones is recognized as one of the top ‘Made-in-India’ drone manufacturers, contributing significantly to the country’s UAV landscape. The company focuses on delivering innovative drone solutions that cater to various industry requirements, enhancing operational efficiency and productivity.
The rapid expansion of India’s UAV industry is a testament to the technological advancements and entrepreneurial spirit driving the sector. These top 10 manufacturers exemplify the country’s commitment to innovation, self-reliance, and the strategic integration of drone technology across multiple domains.
The post Top 10 UAV Manufacturers in India appeared first on ELE Times.
Braun 6550/5704 PCB dismantled
NORD DRIVESYSTEMS SUSTAINABILITY STRATEGY FOR 2025
At NORD DRIVESYSTEMS, our sustainability strategy for 2025 focuses on acting in an environmentally conscious, responsible manner and with integrity. A cross-divisional team, as well as the management and owners, are part of the implementation. Besides NORD’s products, it includes four further fields of action.
“Our sustainability strategy for 2025 is a promise to our customers, to the public, and to ourselves to consistently act in an ecologically, economically, and socially responsible manner,” emphasizes Carolin von Rönne from the area of Process and Organisational Development & Corporate Sustainability Management at NORD DRIVESYSTEMS. The strategy comprises five key aspects:
Products
When it comes to sustainability, our products at NORD are also our top priority. This is because the design, life cycle, and application areas have an impact on the environment. The concept of sustainability is therefore already rooted in the product development process. “Drives can be found in many areas of industry, where they consume a large proportion of the energy used,” explains Carolin von Rönne. “With efficient drive solutions such as the IE5+ synchronous motor, we want to make a significant contribution to reducing CO₂ emissions.” The NORD ECO service furthermore supports companies in finding the most efficient drive solutions for them.
Governance & processes
Sustainability management was introduced at NORD in 2022. Since then, the company has achieved important milestones such as an annual sustainability report according to GRI, environmental certifications, and the integration of international structures. The central objective in this field of action is the establishment of an international governance structure and CSRD-compliant reporting for the entire NORD DRIVESYSTEMS Group, with 48 subsidiaries in 36 countries.
Environment
In order to coordinate structured measures and document them in a legally secure manner, international environmental management is essential for NORD. This is implemented in accordance with ISO 14001 for the largest subsidiaries. In addition, the climate balance for Scope 1–3 is determined group-wide. The NORD DRIVESYSTEMS Group further aims to reduce its energy consumption and amount of waste, as well as to increase the share of self-produced electric power and the use of renewable energies. Existing biodiversity areas are to be further expanded.
People
In times of skills shortage, NORD continues to increase its attractiveness as an employer. The company is currently rolling out a global digital learning management system to offer all employees the opportunity for further individual development. Further targeted campaigns and measures are intended to promote diversity among the workforce. “Inclusion, respect for human rights, strengthening our work culture, safety, and continuous transfer of knowledge are only some of the topics we would like to promote,” says Carolin von Rönne.
Supply chain
NORD wants to assure its customers and employees that sustainable production is given high priority, both at its manufacturing facilities and in the upstream supply chain. Risk analyses and other processes are carried out within the framework of the German Act on Corporate Due Diligence Obligations in Supply Chains (LkSG).
The post NORD DRIVESYSTEMS SUSTAINABILITY STRATEGY FOR 2025 appeared first on ELE Times.
Vintage and not so vintage electronics, the photos are just the tip of the iceberg. My father passed away last year ,he was a lab technician customs officer and a communications/electronics enthusiast that knew more about electronics than I could ever...
DRAM basics and its quest for thermal stability by optimizing peripheral transistors
For decades, compute architectures have relied on dynamic random-access memory (DRAM) as their main memory, providing temporary storage from which processing units retrieve data and program code. The high-speed operation, large integration density, cost-effectiveness, and excellent reliability have contributed to the widespread adoption of DRAM technology in many electronic devices.
The DRAM bit cell—the element that stores one bit of information—has a very simple structure. It consists of one capacitor (1C) and one transistor (1T) integrated close to the capacitor. While the capacitor’s role is to store a charge, the transistor is used to access the capacitor, either to read how much charge is stored or to write a new charge.
The 1T-1C bit cells are arranged in arrays containing word and bit lines, and the word line is connected to the transistors’ gate, which controls access to the capacitor. The memory state can be read by sensing the stored charge on the capacitor via the bit line.
Over the years, the memory community introduced subsequent generations of DRAM technology, enabled by continuous bit-cell density scaling. Current DRAM chips belong to the ’10-nm class’ (denoted as D1x, D1y, D1z, D1a…), where the half pitches of the active area in the memory cell array range from 19 nm down to 10 nm. However, the AI-driven demand for better performing and larger capacity DRAM is propelling R&D into beyond 10-nm generations.
This requires innovations in capacitors, access transistors, and bit cell architectures. Examples of such innovations are high-aspect ratio pillar capacitors, the move from saddle-shaped (FinFET-based) access transistors to vertical-gate architectures, and the transition from 6F2 to 4F2 cell designs—F being the minimum feature size for a given technology node.
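The 6F² to 4F² transition above can be quantified with a quick calculation. The sketch below is illustrative only; the feature size F used is an assumed example within the ’10-nm class’ range, not a figure from the article.

```python
# Illustrative calculation: DRAM bit-cell area as a function of the
# minimum feature size F and the cell-size factor (6F^2 vs. 4F^2).
# F = 14 nm is an assumed example within the '10-nm class' range.

def cell_area_nm2(f_nm: float, factor: int) -> float:
    """Area of one bit cell in nm^2 for feature size F and cell factor."""
    return factor * f_nm ** 2

f = 14.0  # assumed half pitch in nm
area_6f2 = cell_area_nm2(f, 6)
area_4f2 = cell_area_nm2(f, 4)

# Moving from 6F^2 to 4F^2 shrinks the cell by one third at the same F,
# i.e., 1.5x more bits fit in the same array area without scaling F.
reduction = 1 - area_4f2 / area_6f2
print(f"6F2: {area_6f2:.0f} nm^2, 4F2: {area_4f2:.0f} nm^2, "
      f"saved: {reduction:.0%}")
```

This is why the 4F² transition is attractive: it buys a density step comparable to a lithography shrink without tightening the design rules.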
A closer look inside a planar 1T-1C DRAM chip: The peripheral circuit
To enable full functionality of the DRAM chip, several other transistors are needed besides the access transistors. These additional transistors play a role in, for example, the address decoder, sense amplifier, or output buffer function. They are called DRAM peripheral transistors and are traditionally fabricated next to the DRAM memory array area.
Figure 1 The 1T-1C-based DRAM memory array and DRAM peripheral area are shown inside a DRAM chip. Source: imec
DRAM peripheral transistors can be grouped into three main categories. The first category is regular logic transistors: digital switches that are repeatedly turned on and off. The second category is sense amplifiers—analog types of transistors that sense the difference in charge between two bit cells. A small positive change is amplified into a high voltage (representing a logic 1) and a small negative change into zero voltage (representing a logic 0).
These logical values are then stored in a structure of latches called the row buffer. The sense amplifiers typically reside close to the memory array, consuming a significant area of the DRAM chip. The third category is row decoders: transistors that pass a relatively high bias (typically around 3 V) to the memory element to support the write operation.
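The read mechanism the sense amplifier supports can be sketched with a simple charge-sharing model: the cell capacitor is connected to a precharged bit line, and the amplifier resolves the sign of the resulting small swing. This is a back-of-the-envelope illustration, not a model from the article; the voltage and capacitance values are assumed examples.

```python
# Back-of-the-envelope DRAM read sketch (assumed example values, not
# from the article). The cell capacitor charge-shares with a bit line
# precharged to VDD/2; the sense amplifier latches from the swing's sign.

def bitline_swing(v_cell: float, v_pre: float,
                  c_cell_fF: float, c_bl_fF: float) -> float:
    """Bit-line voltage change after charge sharing (transfer-ratio model)."""
    return (v_cell - v_pre) * c_cell_fF / (c_cell_fF + c_bl_fF)

V_DD = 1.1                 # assumed array voltage
v_pre = V_DD / 2           # bit line precharged to VDD/2
c_cell, c_bl = 10.0, 40.0  # assumed capacitances in fF

swing_1 = bitline_swing(V_DD, v_pre, c_cell, c_bl)  # stored '1': positive swing
swing_0 = bitline_swing(0.0, v_pre, c_cell, c_bl)   # stored '0': negative swing

# The sense amplifier restores a full logic level from the swing's sign:
read_bit = 1 if swing_1 > 0 else 0
```

Because the bit-line capacitance dwarfs the cell capacitance, the swing is only on the order of 100 mV in this example, which is why amplifier mismatch matters so much: an offset comparable to the swing would flip the read result.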
To keep pace with the node-to-node improvement of the memory array, the DRAM periphery evolves accordingly in terms of area reduction and performance enhancement. In the longer term, more disruptive solutions may be envisioned that break the traditional ‘2D’ DRAM chip architecture. One option is to fabricate the DRAM periphery on a separate wafer, and bond it to the wafer that contains the memory array, following an approach introduced in 3D NAND.
Toward a single and thermally stable platform optimized for peripheral transistors
The three groups of peripheral transistors come with their own requirements. The regular logic transistors must have good short channel control, high on current (Ion), and low off current (Ioff). With these characteristics, they closely resemble the logic transistors that are part of typical systems-on-chips (SoCs). They also need to enable multiple threshold voltages (Vth) to satisfy different design requirements.
The other two categories have more dissimilar characteristics and do not exist in typical logic SoCs. The analog sense amplifier requires good amplification, benefitting from a low threshold voltage (Vth). In addition, since signals are amplified, the mismatch between two neighboring sense amplifiers must be as low as possible. The ideal sense amplifier, therefore, is a very repeatable transistor with good analog functionality.
Finally, the row decoder is a digital transistor that needs an exceptionally thick gate oxide—compared to an advanced logic node—to sustain the higher bias. This makes the transistor inherently more reliable at the expense of being slower in operation.
Figure 2 Here are the main steps needed to fabricate a transistor for DRAM peripheral applications; the critical modules requiring specific developments are underlined. Source: PSS
In addition to these specific requirements, there are a number of constraints that apply to all peripheral transistors. One critical issue is the thermal stability. In current DRAM process flows with DRAM memory arrays sitting next to the periphery, peripheral transistors are fabricated before DRAM memory elements. The periphery is thus subjected to several thermal treatments imposed by the fabrication of the storage capacitor, access transistor, and memory back-end-of-line.
Peripheral transistors must, therefore, be able to withstand ‘DRAM memory anneal’ temperatures as high as 550-600°C for several hours. Next, the cost-effectiveness of DRAM chips must be preserved, driving the integration choices toward simpler process solutions than what logic flows are generally using.
To keep costs down, the memory industry also favors a single technology platform for various peripheral transistors—despite their individual needs. Additionally, there is a more aggressive requirement for low leakage and low power consumption, which benefits multiple DRAM use cases, especially mobile ones.
The combination of all these specifications makes a direct copy of the standard logic process flow impossible. It requires optimization of specific modules, including the transistors’ gate stack, source/drain junctions, and source/drain metal contacts.
Editor’s Note: This is the first part of an article series about the latest advancements in DRAM designs. This part focuses on DRAM basics, peripheral circuits, and the journey toward a single, cost-effective, and thermally stable technology platform optimized for peripheral transistors. The second part will provide a detailed account of DRAM periphery advancements.
Alessio Spessot, technical account director, has been involved in developing advanced CMOS, DRAM, NAND, emerging memory array, and periphery during his stints at Micron, Numonyx, and STMicroelectronics.
Naoto Horiguchi, director of CMOS device technology at imec, has worked at Fujitsu and the University of California Santa Barbara while being involved in advanced CMOS device R&D.
Related Content
- DRAM gets more exotic
- FCRAM 101: Understanding the Basics
- DRAM: the field for material and process innovation
- Winbond’s innovative DRAM design and the legacy of Qimonda
- DRAM technology for SOC designers and—maybe—their customers
The post DRAM basics and its quest for thermal stability by optimizing peripheral transistors appeared first on EDN.
DIY custom Tektronix 576 & 577 curve tracer adapters
Older folks may recall the famous Tektronix 576 and 577 curve tracers from half a century ago. A few of these have survived the decades and ended up in some lucky engineer’s home/work lab.
Wow the engineering world with your unique design: Design Ideas Submission Guide
We were able to acquire a Tek 577 curve tracer with a Tek 177 standard test fixture from a local surplus house; it had been used at Sandia Labs but was not functional. Even non-functional, these old relics still command a high price, and this one set us back $500! With the help of an online service manual and some detailed troubleshooting, the old 577 was revived by replacing all of the power supply’s electrolytic capacitors and a few op-amps, plus removing the modifications Sandia had installed.
Once operational, we went looking for the various Tek device-under-test (DUT) adapters for the Tek 177 standard test fixture. These adapters are indeed rare and likewise expensive, which sent us on the DIY journey!
The DIY journey
Observing a similar path to Jay_Diddy_B1 [1], we set out to develop a DIY adapter to replace the rare and expensive Tek versions. Like the popular T7 multi-function component tester, which employs a small ZIF socket for leaded-component attachment that works very well, we decided to do the same for the custom Tek adapter, using the right and left sides of the ZIF socket to facilitate DUT comparisons with the Tek 177 standard test fixture. This can be seen in Figure 1.
Figure 1 Custom ZIF-based Tek 577 adapter where the right and left sides of the ZIF socket facilitate DUT comparisons with the Tek 177 standard test fixture.
As shown in Figure 2, a low-cost PCB was developed with SMD ferrites added to the nominal “Collector” and “Base” 577 terminals to help suppress any parasitic DUT oscillations. Connectors were also added to allow for external cables if desired (something we have never used). The general idea was to use the 6 banana jacks as support for holding the PCB in place with the ZIF on top of the PCB where one can directly attach various DUTs.
This approach has worked well and allows easy attachment of various leaded components, including TO-126 and TO-220 power devices.
Figure 2 The custom adapter installed on the Tek 177 standard test fixture.
Applying the curve tracer to an SMD DUT
However, this still leaves SMD types in need of a simple means of attachment to the Tek 577 curve tracer with the 177 fixture, so we set out to investigate this.
After studying the methods Tektronix utilized, we discovered some low-cost toggle clamps (found on AliExpress and elsewhere) that are used for clamping planar objects to a surface for machining. Figure 3 shows these toggle clamps used on a custom SMD DUT, along with the custom adapter installed on the Tek 177 standard test fixture.
Figure 3 Custom toggle-type SMD adapter for the Tek 577 where using the pair of toggle arms allows both the right and left sides of the Tek 177 fixture to be utilized for direct SMD component comparisons.
These clamps could be repurposed to hold an SMD DUT in place, which resulted in a custom PCB being developed to mount directly on top of the ZIF-based PCB previously discussed (Figure 4).
Figure 4 The custom SMD PCB that can be used with toggle clamps. This can be installed on the Tek 177 fixture for the Tek 577.
The toggle arms allow slight pressure to be applied to the SMD DUT where the leads make contact with the PCB’s exposed surfaces. Using a pair of toggle arms allows both the right and left sides of the Tek 177 fixture to be utilized for direct SMD component comparisons.
A connector on the rear of the PCB is mounted on the bottom side and mates with another connector on the ZIF type PCB, which allows connection to the 6 banana jacks that plug into the Tek 177 Fixture. Four nylon standoffs provide mechanical support and hold the two PCBs together. This setup allows easy SMD component installation and removal with little effort. Figure 5 shows both adapters for the Tek 577 with 177 standard test fixture.
Figure 5 Both adapters for the Tek 577 with 177 standard test fixture.
Both the ZIF and the SMD adapters have served well and allow most components to be easily evaluated with the old Tektronix 576 and 577 curve tracers. Figure 6 shows the custom toggle-type SMD adapter in action with a pair of DUTs.
Figure 6 Custom toggle-type SMD adapter in operation with a pair of DUTs.
A word of caution
Just a word of caution when using these and any adapters, fixtures, or leads with the Tek 576 and 577 curve tracers: these instruments can produce lethal voltages across the exposed terminals. The Tek 177 standard test fixtures were originally supplied with plastic protective covers and sensor switches that removed the DUT stimulus when the plastic cover was open. The old Tek service manuals described a modification to defeat the Tek 177 sensor switch, which many engineers employed; many also removed the plastic protective covers.
Anyway, I hope some folks lucky enough to have an old Tek 576 or 577 curve tracer with Tek 177 standard test fixture find these custom DIY adapters useful if they don’t already have the old Tek OEM elusive adapters!
Michael A. Wyatt is a life member of IEEE and has enjoyed electronics ever since his childhood. Mike has had a long career spanning Honeywell, Northrop Grumman, Insyte/ITT/Exelis/Harris, and ViaSat, and is (semi-)retired with Wyatt Labs. During his career he accumulated 32 US patents and has published a few EDN articles, including the Best Idea of the Year in 1989.
References
- “EEVblog Electronics Community Forum.” SMD Test Fixture for the Tektronix 576 Curve Tracer – Page 1, www.eevblog.com/forum/projects/smd-test-fixture-for-the-tektronix-576-curve-tracer/.
Related Content
- Tracing a transistor’s curves
- Throw it a curve (tracer)
- Trace voltage-current curves on your PC
- Using oscilloscope X-Y displays
The post DIY custom Tektronix 576 & 577 curve tracer adapters appeared first on EDN.