Feed aggregator
Board-to-board connectors reduce EMI

Molex LLC develops a quad-row shield connector, claiming the industry’s first space-saving, four-row signal pin layout with a metal electromagnetic interference (EMI) shield. The quad-row board-to-board connector achieves up to a 25-dB reduction in EMI compared to a non-shielded quad-row solution. These connectors are suited for space-constrained applications such as smart watches and other wearables, mobile devices, AR/VR applications, laptops, and gaming devices.
Shielding protects connectors from external electromagnetic noise such as nearby components and far-field devices that can cause signal degradation, data errors, and system faults. The quad-row connectors eliminate the need for external shielding parts and grounding by incorporating the EMI shields, which saves space and simplifies assembly. It also improves reliability and signal integrity.
(Source: Molex LLC)
The quad-row board-to-board connectors, first released in 2020, offer a 30% size reduction over dual-row connector designs, Molex said. The space-saving staggered-circuit layout positions pins across four rows at a signal contact pitch of 0.175 mm.
Targeting EMI challenges at 2.4-6 GHz and higher, the quad-row layout with the addition of an EMI shield mitigates both electromagnetic and radio frequency (RF) interference, as well as signal integrity issues that create noise.
The quad-row shield meets stringent EMI/EMC standards to lower regulatory testing burdens and speed product approvals, Molex said.
The new design also addresses the most significant challenges related to signal interference and incremental power requirements. These include how best to achieve 80 times the signal connections and four times the power delivery compared to a single-pin connector, Molex said.
The quad-row connectors offer a 3-A current rating to meet customer requirements for high power in a compact design. Other specifications include a voltage rating of 50 V, a dielectric withstanding voltage of 250 V, and a rated insulation resistance of 100 megohms.
Samples of the quad-row shield connectors are available now to support custom inquiries. Commercial availability will follow in the second quarter of 2026.
The post Board-to-board connectors reduce EMI appeared first on EDN.
I just got the connector book! Wow!
Holy cow, it's huge, it covers everything! These are just a few random pages. I already learned a lot.
5-V ovens (some assembly required)—part 2

In the first part of this Design Idea (DI), we looked at simple ways of keeping critical components at a constant temperature using a linear approach. In this second part, we’ll investigate something PWM-based, which should be more controllable and hence give better results.
Adding PWM to the oven
As before, this starts with a module based on a TO-220 package, the tab of which makes a decent hotplate on which our target component(s) can be mounted. Figure 1 shows this new circuit, which compares the voltage from a thermistor/resistor pair with a tri-wave and uses the result to vary the duty cycle of the heating current. Varying the amplitude and level of that tri-wave lets us tune the circuit’s performance.
This looks too simple and perhaps obvious to be completely original, but a quick search found nothing very similar. At least this was designed from scratch.
Figure 1 A tri-wave oscillator, a thermistor, and a comparator work together to pulse-width modulate the current through R7, the main heating element. Q1 switches that current and also helps with the heating.
U1a forms a conventional oscillator running at around 1 kHz. Neither the frequency nor the exact wave-shape on C1 is critical. R1 and R2+R3 determine the tri-wave’s offset, and R4 its amplitude. U1b compares the voltage across the thermistor with the tri-wave, as shown in Figure 2. When the temperature is low so that voltage is higher than any part of the tri-wave, U1b’s output will be solidly low, turning on Q1 to heat up R7 as fast as possible.
As the temperature rises, the voltages start to overlap and proportional control kicks in, progressively reducing the on-time so that the heat input is proportional to the difference between the actual and target temperatures. By the time the set-point has been reached, the on-time is down to ~18%. This scheme minimizes or even eliminates overshoot. (Thermal time-constants—ignored for the moment—can upset this a little.)
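The slicing action described above is easy to model. The sketch below (Python) computes the heater duty cycle as the fraction of each tri-wave period during which the thermistor voltage sits above the instantaneous tri-wave voltage; the tri-wave limits are hypothetical values chosen so that the half-rail set-point lands at the ~18% figure quoted in the text, since the DI specifies the amplitude and offset only indirectly through R1–R4.

```python
def heater_duty(v_th, tri_lo=2.32, tri_hi=3.32):
    """Fraction of each tri-wave period during which U1b drives Q1 on.

    For a symmetric triangle wave sweeping tri_lo..tri_hi volts, the time
    spent below v_th is linear in v_th, clamped to [0, 1]. The limits
    used here are illustrative, not taken from the DI.
    """
    duty = (v_th - tri_lo) / (tri_hi - tri_lo)
    return min(1.0, max(0.0, duty))

# Cold: thermistor voltage is above the whole tri-wave -> heater fully on
print(heater_duty(4.0))   # 1.0
# At the half-rail set-point (2.5 V) the on-time is down to ~18%
print(heater_duty(2.5))   # ~0.18
```

Between those two extremes the duty cycle falls linearly with the thermistor voltage, which is the proportional-control band.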

Figure 2 Oscilloscope captures showing the operation of Figure 1’s circuit.
Once the circuit is stable, Th1 will have the same resistance as R6, or 3.36 kΩ at our nominal target of 50°C (or 50.03007…°C, assuming perfect components), so Figure 1’s point B will be at half-rail. To keep that balance, the tri-wave must be offset upwards so that slicing gives our 18% figure at the set-point. Setting R3 to 1k0 achieved that. The performance after starting can be seen in Figure 3. (The first 40 seconds or so is omitted because it’s boring.)
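As a sanity check on that balance point, the standard Beta-model approximation for an NTC thermistor can be inverted to recover temperature from resistance. The R25 and Beta values below are hypothetical (the DI does not identify the thermistor part); with them, the 3.36-kΩ balance resistance lands within a degree of the 50°C target.

```python
import math

def ntc_temperature_c(r_ohms, r25=10_000.0, beta=4100.0):
    """Invert the Beta model R(T) = R25 * exp(B * (1/T - 1/T25)).

    r25 (ohms at 25 degC) and beta (kelvin) are assumed datasheet-style
    values for illustration only.
    """
    t25 = 298.15  # 25 degC in kelvin
    inv_t = 1.0 / t25 + math.log(r_ohms / r25) / beta
    return 1.0 / inv_t - 273.15

print(round(ntc_temperature_c(3360.0), 1))  # ~50.7 degC with these assumed values
```

A real build would pull R25 and Beta from the thermistor's datasheet and pick R6 to match the computed resistance at the desired set-point.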

Figure 3 From cold, Figure 1’s circuit stabilizes in two to three minutes. The upper trace is U1b’s output, heavily filtered. Also shown are Th1’s temperature (magenta) and that of the hotplate as measured by an external thermistor probe (cyan).
The use of Q1 as an over-driven emitter follower needs some explanation. First thoughts were to use an NPN Darlington or an n-MOSFET as a switch (with U1b’s inputs swapped), but that meant that the collector or drain—which we want to use as a hotplate—would be flapping up and down at the switching frequency.
While the edges are slowish, they could still couple capacitively to a target device: potentially bad news. With a PNP Darlington, the collector can be at ground, give or take a handful of millivolts. (The fine copper wire used to connect the module to the outside world has a resistance of about 1 Ω per meter.) Q1 drops ~1.3 V and so provides about a third of the heating, rather like the corresponding device in Part 1. This is a good reason to stay with the idea of using a TO-220’s tab as that hotplate—at least for the moment. Q1 could be a p-MOSFET, but R7 would then need to be adjusted to suit its (highly variable) VGS(on): fiddly and unrealistic.
LED1 starts to turn on once the set-point is near and becomes brighter as the duty cycle falls. This worked as well in practice as the long-tailed pair approach used in Part 1’s Figure 4.
The duty cycle is given as 18%, but where does that figure come from? It’s the proportion of the input heat that leaks out once the circuit has stabilized, and that depends on how well the module is thermally insulated and how thin the lead-out wires are. With a maximum heating current of 120 mA (600 mW in), practical tests gave that 18% figure, implying that ~108 mW is being lost. With a temperature differential of ~30°C, that corresponds to an overall thermal resistance of ~280°C/W. (Many DIL ICs are quoted as around 100°C/W.)
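That back-of-the-envelope chain is quick to verify. The check below assumes a 5-V rail, which is implied by the 120-mA / 600-mW figures in the text:

```python
# Steady-state heat-loss estimate for the insulated TO-220 oven module.
v_rail = 5.0     # V, implied by 120 mA -> 600 mW
i_max = 0.120    # A, maximum heating current
duty = 0.18      # measured steady-state duty cycle
delta_t = 30.0   # degC above ambient at the 50 degC set-point

p_max = v_rail * i_max        # full-on heating power, W
p_loss = duty * p_max         # power leaking out at equilibrium, W
r_thermal = delta_t / p_loss  # overall thermal resistance, degC/W

print(f"{p_loss*1000:.0f} mW lost, {r_thermal:.0f} degC/W")  # 108 mW lost, 278 degC/W
```

The ~278°C/W result matches the ~280°C/W quoted above, and the comparison with a DIL IC's ~100°C/W shows how much the insulation helps.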
Some more assembly required
The final build is mechanically quite different and uses a custom-built hotplate instead of a TO-220’s tab. It’s shown in Figure 4.

Figure 4 Our new hotplate is a scrap of copper sheet with the heater resistors glued to it symmetrically, with Th1 on one side and room for the target component(s) on the other. The third picture shows it fixed to the lower block of insulating foam, with fine wires meandered and ready for terminating. Not shown: an extra wire to ground the copper. Please excuse the blobby epoxy. I’d never get a job on a production line.
R7 now comprises four 33-Ω resistors in series/parallel, which are epoxied towards the ends of a piece of copper, two on each side, with Th1 centered on one side. The other side becomes our hotplate area, with a sweet spot directly above the thermistor. Thermally, it is symmetrical, so that—all other things being equal, which they rarely are—our target component will be heated exactly like Th1.
The drive circuit is a variant on Figure 1, the main difference being Q1, which can now be a small but low-RON n-MOSFET as it’s no longer intended to dissipate any power. R3 and R4 are changed to give a tri-wave amplitude of ~500 mV pk–pk at a frequency of ~500 Hz to optimize the proportional control. Figure 5 and Figure 6 show the schematic and its performance. It now stabilizes within a degree after one minute and perhaps a tenth after two, with decent tracking between the internal (Th1) and hotplate temperatures. The duty cycle is higher, largely owing to the different construction; more (and bulkier) insulation would have reduced it, improving efficiency.

Figure 5 The driving circuit for the new hotplate.

Figure 6 How Figure 5’s circuit performs.
The intro to Part 1 touched on my original oven, which needed to stabilize the operation of a logarithmically tuned oscillator. It used a circuit similar to Part 1’s Figure 5 but had a separate power transistor, whose dissipation was wasted. The logging diode was surrounded by a thermally-insulated cradle of heating resistors and the control thermistor.
It worked well and still does, but these circuits improve on it. Time for a rebuild? If so, I’ll probably go for the simplest, Part 1/Figure 1 approach. For higher-power use, Figure 5 (above) could probably be scaled to use different heating resistors fed from a separate and larger voltage. Time for some more experimental fun, anyway.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- 5-V ovens (some assembly required)—part 1
- Fixing a fundamental flaw of self-sensing transistor thermostats
- Self-heated ∆Vbe transistor thermostat needs no calibration
- Take-back-half thermostat uses ∆Vbe transistor sensor
- Dropping a PRTD into a thermistor slot—impossible?
The post 5-V ovens (some assembly required)—part 2 appeared first on EDN.
SuperLight launches SLP-2000 full-spectrum SWIR supercontinuum laser
Voyant appoints former Valeo exec Clément Nouvel as CEO
Nuvoton Launches Industrial-Grade, High-Security MA35 Microprocessor with Smallest Package and Largest Stacked DRAM Capacity
Nuvoton Technology announced the MA35D16AJ87C, a new member of the MA35 series microprocessors. Featuring an industry-leading 15 x 15 mm BGA312 package with 512 MB of DDR SDRAM stacked inside, the MA35D16AJ87C streamlines PCB design, reduces product footprint, and lowers EMI, making it an excellent fit for space-constrained industrial applications.
Key Highlights-
- Dual 64-bit Arm Cortex-A35 cores plus a Cortex-M4 real-time core
- Integrated, independent TSI (Trusted Secure Island) security hardware
- 512 MB DDR SDRAM stacked inside a 15 x 15 mm BGA312 package
- Supports Linux and RTOS, along with Qt, emWin, and LVGL graphics libraries
- Industrial temperature range: -40°C to +105°C
- Ideal for factory automation, industrial IoT, new energy, smart buildings, and smart cities
The MA35D16AJ87C is built on dual Arm Cortex-A35 cores (Armv8-A architecture, up to 800 MHz) paired with a Cortex-M4 real-time core. It supports 1080p display output with graphics acceleration and integrates a comprehensive set of peripherals, including 17 UARTs, four CAN-FD interfaces, two Gigabit Ethernet ports, two SDIO 3.0 interfaces, and two USB 2.0 ports, among others, to meet diverse industrial application needs.
To address escalating IoT security challenges, the MA35D16AJ87C incorporates Nuvoton’s independently designed TSI (Trusted Secure Island) hardware security module. It supports Arm TrustZone technology, Secure Boot, and Tamper Detection, and integrates a complete hardware cryptographic engine suite (AES, SHA, ECC, RSA, SM2/3/4), a true random number generator (TRNG), and a key store. These capabilities help customers meet international cybersecurity requirements such as the Cyber Resilience Act (CRA) and IEC 62443.
The MA35D16AJ87C is supported by Nuvoton’s Linux and RTOS platforms and is compatible with leading graphics libraries including Qt, emWin, and LVGL, helping customers shorten development cycles and reduce overall development costs. The Nuvoton MA35 Series is designed for industrial-grade applications and is backed by a 10-year product supply commitment.
The post Nuvoton Launches Industrial-Grade, High-Security MA35 Microprocessor with Smallest Package and Largest Stacked DRAM Capacity appeared first on ELE Times.
Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver to cut costs, accelerate time to market
Nokia and Rohde & Schwarz have created and successfully tested a 6G radio receiver that uses AI technologies to overcome one of the biggest anticipated challenges of 6G network rollouts: the coverage limitations inherent in 6G’s higher-frequency spectrum.
The machine learning capabilities in the receiver greatly boost uplink distance, enhancing coverage for future 6G networks. This will help operators roll out 6G over their existing 5G footprints, reducing deployment costs and accelerating time to market.
Nokia Bell Labs developed the receiver and validated it using 6G test equipment and methodologies from Rohde & Schwarz. The two companies will unveil a proof-of-concept receiver at the Brooklyn 6G Summit on November 6, 2025.
Peter Vetter, President of Bell Labs Core Research at Nokia, said: “One of the key issues facing future 6G deployments is the coverage limitations inherent in 6G’s higher-frequency spectrum. Typically, we would need to build denser networks with more cell sites to overcome this problem. By boosting the coverage of 6G receivers, however, AI technology will help us build 6G infrastructure over current 5G footprints.”
Nokia Bell Labs and Rohde & Schwarz have tested this new AI receiver under real-world conditions, achieving uplink distance improvements of 10% to 25% over today’s receiver technologies. The testbed comprises an R&S SMW200A vector signal generator, used for uplink signal generation and channel emulation. On the receive side, the newly launched FSWX signal and spectrum analyzer from Rohde & Schwarz performs the AI inference for Nokia’s AI receiver. In addition to enhancing coverage, the AI technology also demonstrates improved throughput and power efficiency, multiplying the benefits it will provide in the 6G era.
Michael Fischlein, VP Spectrum & Network Analyzers, EMC and Antenna Test at Rohde & Schwarz, said: “Rohde & Schwarz is excited to collaborate with Nokia in pioneering AI-driven 6G receiver technology. Leveraging more than 90 years of experience in test and measurement, we’re uniquely positioned to support the development of next-generation wireless, allowing us to evaluate and refine AI algorithms at this crucial pre-standardization stage. This partnership builds on our long history of innovation and demonstrates our commitment to shaping the future of 6G.”
The post Nokia and Rohde & Schwarz collaborate on AI-powered 6G receiver to cut costs, accelerate time to market appeared first on ELE Times.
ESA, MediaTek, Eutelsat, Airbus, Sharp, ITRI, and R&S announce world’s first Rel-19 5G-Advanced NR-NTN connection over OneWeb LEO satellites
European Space Agency (ESA), MediaTek Inc., Eutelsat, Airbus Defence and Space, Sharp, the Industrial Technology Research Institute (ITRI), and Rohde & Schwarz (R&S) have conducted the world’s first successful trial of 5G-Advanced Non-Terrestrial Network (NTN) technology over Eutelsat’s OneWeb low Earth orbit (LEO) satellites, compliant with 3GPP Rel-19 NR-NTN configurations. The tests pave the way for deployment of the 5G-Advanced NR-NTN standard, which will lead to future satellite and terrestrial interoperability within a large ecosystem, lowering the cost of access and enabling the use of satellite broadband for NTN devices around the world.
The trial used OneWeb satellites, communicating with the MediaTek NR-NTN chipset and ITRI’s NR-NTN gNB, implementing 3GPP Release 19 specifications including Ku-band operation, 50-MHz channel bandwidth, and conditional handover (CHO). The OneWeb satellites, built by Airbus, carry transparent transponders with a Ku-band service link and Ka-band feeder link, and adopt the “Earth-moving beams” concept. During the trial, the NTN user terminal, with a flat-panel antenna developed by Sharp, successfully connected over satellite to the on-ground 5G core using the gateway antenna located at ESA’s European Space Research and Technology Centre (ESTEC) in The Netherlands.
David Phillips, Head of the Systems, Strategic Programme Lines and Technology Department within ESA’s Connectivity and Secure Communications directorate, said: “By partnering with Airbus Defence and Space, Eutelsat and partners, this innovative step in the integration of terrestrial and non-terrestrial networks proves why collaboration is an essential ingredient in boosting competitiveness and growth of Europe’s satellite communications sector.”
Mingxi Fan, Head of Wireless System and ASIC Engineering at MediaTek, said: “As a global leader in terrestrial and non-terrestrial connectivity, we continue in our mission to improve lives by enabling technology that connects the world around us, including areas with little to no cellular coverage. By making real-world connections with Eutelsat LEO satellites in orbit, together with our ecosystem partners, we are now another step closer to bring the next generation of 3GPP-based NR-NTN satellite wideband connectivity for commercial uses.”
Daniele Finocchiaro, Head of Telecom R&D and Projects at Eutelsat, said: “We are proud to be among the leading companies working on NTN specifications, and to be the first satellite operator to test NTN broadband over Ku-band LEO satellites. Collaboration with important partners is a key element when working on a new technology, and we especially appreciate the support of the European Space Agency.”
Elodie Viau, Head of Telecom and Navigation Systems at Airbus, said: “This connectivity demonstration performed with Airbus-built LEO Eutelsat satellites confirms our product adaptability. The successful showcase of Advanced New Radio NTN handover capability marks a major step towards enabling seamless, global broadband connectivity for 5G devices. These results reflect the strong collaboration between all partners involved, whose combined expertise and commitment have been key to achieving this milestone.”
Masahiro Okitsu, President & CEO, Sharp Corporation, said: “We are proud to announce that we have successfully demonstrated Conditional Handover over 5G-Advanced NR-NTN connection using OneWeb constellation and our newly developed user terminals. This achievement marks a significant step toward the practical implementation of non-terrestrial networks. Leveraging the expertise we have cultivated over many years in terrestrial communications, we are honored to bring innovation to the field of satellite communications as well. Moving forward, we will continue to contribute to the evolution of global communication infrastructure and strive to realize a society where everyone is seamlessly connected.”
Dr. Pang-An Ting, Vice President and General Director of Information and Communications Research Laboratories at ITRI, said: “In this trial, ITRI showcased its advanced NR-NTN gNB technology as an integral part of the NR-NTN communication system, enabling conditional handover on the Rel-19 system. We see great potential in 3GPP NTN communication to deliver ubiquitous coverage and seamless connectivity in full integration with terrestrial networks.”
Goce Talaganov, Vice President of Mobile Radio Testers at Rohde & Schwarz, said: “We at Rohde & Schwarz are excited to have contributed to this industry milestone with our test and measurement expertise. For real-time NR-NTN channel characterization, we used our high-end signal generation and analysis instruments R&S SMW200A and FSW. Our CMX500-based NTN test suite replicated the Ku-band conditional handover scenarios in the lab. This rigorous testing, which addresses the challenges of satellite-based communications, paved the way for further performance optimization of MediaTek’s and Sharp’s 5G-Advanced NTN devices.”
The post ESA, MediaTek, Eutelsat, Airbus, Sharp, ITRI, and R&S announce world’s first Rel-19 5G-Advanced NR-NTN connection over OneWeb LEO satellites appeared first on ELE Times.
Decoding the AI Age for Engineers: What all Engineers need to thrive in this?
As AI tools increasingly take on real-world tasks, the roles of professionals, from copywriters to engineers, are undergoing a rapid and profound redefinition. This swift transformation, characteristic of the AI era, is shifting core fundamentals and operational practices. Sometimes AI complements people, other times it replaces them, but most often, it fundamentally redefines their role in the workplace.
In this story, we look into the emerging roles and responsibilities of an engineer as AI tools gain greater traction, while also tracking the industry’s shifting expectations through the eyes of prominent names from the electronics and semiconductor industry. The resounding message? Engineers must anchor themselves in foundational principles and embrace systems-level thinking to thrive.
The Siren Song of AI/ML
There’s no doubt that AI and Machine Learning (ML) are the current darlings of the tech world, attracting a huge talent pool. Raghu Panicker, CEO of Kaynes Semicon, notes this trend: “Engineers today at large are seeing that there are more and more people going after AI, ML, data science.” While this pursuit is beneficial, he issues a crucial caution. He urges engineers to “start to re-look at the hardcore electronics,” pointing out the massive advancements happening across the semiconductor and systems space that are being overlooked.
The engineering landscape is broadening beyond just circuit design. Panicker highlights that a semiconductor package today involves less pure semiconductor engineering and more physics, chemistry, materials science, and mechanical engineering. This points to a diverse, multi-faceted engineering future.
The Bright Future in Foundations and Manufacturing
The industry’s optimism about the future of electronics, especially in manufacturing, is palpable. With multiple large-scale projects, including silicon and display fabs, being approved, Panicker sees a “very, very bright” future for Electronics and Manufacturing in India.
He stresses that manufacturing is a career path engineers should take “very seriously,” noting that while design attracts the larger paychecks, manufacturing is catching up and has significant, long-term promise. He also brings up the practical aspect of efficiency, stating that minimizing test time is critical for cost-effective customer solutions, requiring a deep understanding of the trade, often gained through specialized programs.
Innovate, Systematize, Tinker: The Engineer’s New Mandate
Building on this theme, Shitendra Bhattacharya, Country Head of Emerson’s Test and Measurement group, emphasizes the need for a community of innovators. He challenges the new generation of engineers to “think innovation, think systems,” which requires them to “get down to dirtying their hands.”
Bhattacharya is vocal about the danger of focusing solely on the “cooler or sexier looking fields like AI and ML.” He asserts that the future growth of the industry, particularly in India, hinges on local innovation and the creation of homegrown products and OEMs. To achieve this, he calls for a shift toward integrated coursework at the university level.
“System design requires you to understand engineering fundamentals. Today, that is missing at many levels… knowing only one domain is not good enough for it. It will not cut it.” – Shitendra Bhattacharya, Emerson
This call for system design thinking—the ability to bring different fields of engineering together—is a key takeaway for thriving in the AI age.
The Return of the ‘Tinkerer’
This focus on fundamental, hands-on knowledge is echoed strongly by Raja Manickam, CEO of iVP Semicon. He reflects on how the education system’s pivot toward coding and computer science led to the loss of skills like tinkering and a foundational understanding of “basics of physics, basics of electricity.”
Manickam argues that AI’s initial impact will be felt most acutely by IT engineers, and the core electronics sector needs engineers who are “more fundamentally strong.” The emphasis is on the joy and necessity of building things from scratch. To future-proof their careers, engineers must actively cultivate this foundational, tangible skill set.
The AI Enabler: Transforming the Value Chain
While the focus must return to engineering basics, it’s vital to recognize that AI is not a threat to be avoided but a tool to be mastered. Amit Agnihotri, Chief Operating Officer at RS Components & Control, provides a clear picture of how AI is already transforming the semiconductor value chain end-to-end.
AI is embedded in:
- Design: Driving simulation and optimization to improve power/performance trade-offs.
- Manufacturing: Assisting testing, yield analytics, and smarter process control.
- Supply Chain: Enhancing forecasting, allocation, and inventory strategies with predictive analytics.
- Customer Engagement: Providing personalized guidance and virtual technical support to accelerate time-to-market.
Agnihotri explains that companies like RS Components leverage AI to improve component discovery, localize inventory, and provide data-backed design-in support, accelerating prototyping and scaling with confidence.
Conclusion: Engineering for Longevity
The AI age presents an exciting paradox for engineers. To successfully leverage the most advanced tools, they must first become profoundly proficient in the most fundamental aspects of their discipline. The future belongs not to those who chase the shiniest new technology in isolation, but to those who view AI as an incredible enabler layered upon an unshakeable foundation of physics, materials science, system-level design, and hands-on tinkering.
Engineers who embrace this philosophy—being both an advanced AI user and a foundational master—will be the true architects of the next wave of innovation in the core electronics and semiconductor industry. The message from the industry is clear: Get back to the basics, think in systems, and start innovating locally. That is the wholesome recipe for a thriving engineering career in the AI era.
The post Decoding the AI Age for Engineers: What all Engineers need to thrive in this? appeared first on ELE Times.
Achieving analog precision via components and design, or just trim and go

I finally managed to get to a book that has been on my “to read” list for quite a while: “Beyond Measure: The Hidden History of Measurement from Cubits to Quantum Constants” (2022) by James Vincent (Figure 1). It traces the evolution of measurement, its uses, and its motivations, and how measurement has shaped our world, from ancient civilizations to the modern day. On a personal note, I found the first two-thirds of the book a great read, but in my opinion, the last third wandered and meandered off topic. Regardless, it was a worthwhile book overall.

Figure 1 This book provides a fascinating journey through the history of measurement and how different advances combined over the centuries to get us to our present levels. Source: W. W. Norton & Company Inc.
Several chapters deal with the way measurement and the nascent science of metrology were used in two leading manufacturing entities of the early 20th century: Rolls-Royce and Ford Motor Company, and the manufacturing differences between them.
Before looking at the differences, you have to “reset” your frame of reference and recognize that even low-to-moderate volume production in those days involved a lot of manual fabrication of component parts, as the “mass production” machinery we now take as a given often didn’t exist or could only do rough work.
Rolls-Royce made (and still makes) fine motor cars, of course. Much of their quality was not just in the finish and accessories; the car was entirely mechanical except for the ignition system. They featured a finely crafted and tuned powertrain. It’s not a myth that you could balance a filled wine glass on the hood (bonnet) while the engine was running and not see any ripples in the liquid’s surface. Furthermore, you could barely hear the engine at all at a time when cars were fairly noisy.
They achieved this level of performance using careful and laborious manual adjustments, trimming, filing, and balancing of the appropriate components to achieve that near-perfect balance and operation. Clearly, this was a time-consuming process requiring skilled and experienced craftspeople. It was “mass production” only in terms of volume, but not in terms of production flow as we understand it today.
In contrast, Henry Ford focused on mass production with interchangeable parts that would meet design objectives immediately when assembled. Doing so required advances in measurement of the components at Ford’s factory to weed out substandard incoming parts, along with statistical analysis of quality, conformance, and deviations. Ford also sent specialists to suppliers’ factories to improve both the suppliers’ production processes and their metrology.
Those were the days
Of course, those were different times in terms of industrial production. When the Wright brothers needed a gasoline engine for the 1903 Flyer, few “standard” engine choices were available, and none came close to their size, weight, and output-power needs.
So, their in-house mechanic Charlie Taylor machined an aluminum engine block, fabricated most parts, and assembled an engine (1903 Wright Engine) in just six weeks using a drill press and a lathe; it produced 12 horsepower, 50% above the 8 horsepower that their calculations indicated they needed (Figure 2).

Figure 2 Perhaps in an ultimate do-it-yourself project, Charlie Taylor, mechanic for the Wright brothers, machined the aluminum engine block, fabricated most of the parts, and then assembled the complete engine in six weeks (reportedly working only from rough sketches). Source: Wright Brothers Aeroplane Company
Which approach is better—fine adjusting and trims, or use of a better design and superior components? There’s little doubt that the “quality by components” approach is the better tactic in today’s world where even customized cars make use of many off-the-shelf parts.
Moreover, the volumes required of a successful car-production line mandate components that plug and play properly, with no hand-tuning of individual vehicles. Even Rolls-Royce now uses the Ford approach, of course; the alternative is impractical for modern vehicles except for special trim and accessories.
Single unit “perfection” uses both approaches
In some cases, both calibration and use of better topology and superior components combine for a unique design. Not surprisingly, a classic example is one of the first EDN articles by the late analog-design genius Jim Williams, “This 30-ppm scale proves that analog designs aren’t dead yet”. Yes, I have cited it in previous blogs, and that’s no accident (Figure 3).

Figure 3 This 1976 EDN article by Jim Williams set a standard for analog signal-chain technical expertise and insight that has rarely been equaled. Source: EDN
In the article, he describes his step-by-step design concept and fabrication process for a portable weigh scale that would offer 0.02% absolute accuracy (0.01 lb over a 300-pound range). Yet, it would never need adjustment to be put into use. Even though this article is nearly 50 years old, it still has relevant lessons for our very different world.
I believe that Jim did a follow-up article about 20 years later, where he revisited and upgraded that design using newer components, but I can’t find it online.
Today’s requirements were unimaginable—until recently
Use of in-process calibration is advancing thanks to techniques such as laser-based interferometry. In semiconductor lithography, for example, the positional accuracy of the carriage that moves over the wafer needs to be in the sub-micrometer range.
While this level of performance can be achieved with friction-free air bearings, those cannot be used in extreme-ultraviolet (EUV) systems, which operate in an ultrahigh-vacuum environment. Instead, high-performance mechanical bearings must be used, even though they are inferior to air bearings.
There are micrometer-level errors in the x-axis and y-axis, and the two axes are also not perfectly orthogonal, resulting in a system-level error typically greater than several micrometers across the 300 × 300-mm plane. To compensate, manufacturers add interferometry-based calibration of the mechanical positioning systems to determine the error topography of a mechanical platform.
For example, with a 300-mm wafer, the grid is scanned in 10-mm steps, and the interferometer determines the actual position at each point. That value is compared against the motion-encoder value to determine a corrective offset. After this mapping, system accuracy improves by a factor of 10, achieving an absolute accuracy of better than 0.5 µm in the x-y plane.
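The mapping step can be sketched in code. This is an illustrative sketch with invented numbers, not any vendor's actual algorithm: record the offset between the encoder's reported position and the interferometer's "true" position at each grid point, then correct arbitrary readings by interpolating between grid points.

```python
# Sketch of interferometry-based error-map calibration (hypothetical values).
from bisect import bisect_right

def build_error_map(encoder_mm, interferometer_mm):
    """Offset (true - reported) at each calibration grid point."""
    return [(e, i - e) for e, i in zip(encoder_mm, interferometer_mm)]

def corrected_position(error_map, reading_mm):
    """Apply the mapped offset, linearly interpolated between grid points."""
    xs = [e for e, _ in error_map]
    k = bisect_right(xs, reading_mm)
    if k == 0:                       # below the calibrated range
        return reading_mm + error_map[0][1]
    if k == len(xs):                 # above the calibrated range
        return reading_mm + error_map[-1][1]
    (x0, d0), (x1, d1) = error_map[k - 1], error_map[k]
    frac = (reading_mm - x0) / (x1 - x0)
    return reading_mm + d0 + frac * (d1 - d0)

# Hypothetical 0-30 mm calibration run in 10-mm steps: encoder reads long.
enc = [0.0, 10.0, 20.0, 30.0]
ifm = [0.0, 9.998, 19.995, 29.994]   # interferometer "ground truth"
emap = build_error_map(enc, ifm)
print(f"{corrected_position(emap, 15.0):.4f}")  # → 14.9965 (raw reading: 15.0)
```

A real system does this in two dimensions (including the non-orthogonality term), but the principle is the same: a lookup-and-interpolate correction layered on top of the mechanical encoder.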
Maybe too smart?
Of course, there are times when you can be a little too clever in the selection of components when working to improve system performance. Many years ago, I worked for a company making controllers for large machines, and there was one circuit function that needed coarse adjustment for changes in ambient temperature. The obvious way would have been to use a thermistor or a similar component in the circuit.
But our lead designer—a circuit genius by any measure—had a “better” idea. Since the design used a lot of cheap, 100-kΩ pullup resistors with poor temperature coefficient, he decided to use one of those instead of the thermistor in the stabilization loop, as they were already on the bill of materials. The bench prototype and pilot-run units worked as expected, but the regular production units had poor performance.
Long story short: our brilliant designer had based the circuit’s stabilization on the deliberately poor tempco of those re-purposed pull-up resistors and the loop dynamic range that went with it. Then our in-house purchasing agent got a good deal on resistors of the same value and size, but with a much tighter tempco. To purchasing, a better component that was functionally and physically identical, at a lower price, seemed like a win-win.
That was fine for the pull-up role, but it meant that the transfer function of temperature to resistance was severely compressed. Identifying that problem took a lot of aggravation and time.
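The numbers below are invented for illustration (the story predates my notes), but they show why the substitution broke the loop: a resistor's temperature coefficient turns temperature into a resistance change, and a tight-tempco part delivers a far smaller change for the same nominal value.

```python
# Illustrative sketch: a 100-kΩ resistor pressed into service as a crude
# temperature sensor, R(T) = R25 * (1 + tc * (T - 25)).
# Tempco values below are assumptions, not the actual parts in the story.

def resistance(r25_ohm, tempco_ppm_per_c, temp_c):
    """Resistance at temp_c given value at 25 °C and tempco in ppm/°C."""
    return r25_ohm * (1 + tempco_ppm_per_c * 1e-6 * (temp_c - 25.0))

R25 = 100e3
for label, tc in [("cheap pull-up, ~3000 ppm/°C", 3000),
                  ("precision part, ~50 ppm/°C", 50)]:
    swing = resistance(R25, tc, 75) - resistance(R25, tc, 25)  # 25 → 75 °C
    print(f"{label}: ΔR over 50 °C = {swing:.0f} Ω")
# The loose-tempco part moves ~15 kΩ over 50 °C; the tight-tempco part only
# ~250 Ω: a ~60x smaller signal for the stabilization loop to work with.
```

Same value, same footprint, same pull-up behavior, yet the temperature-to-resistance transfer function the loop depended on all but vanished.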
What’s your preferred approach to achieving a high-precision, accurate, stable analog-signal chain and front-end? Have you used both methods, or are you inherently partial to one over the other? Why?
Bill Schweber is a degreed senior EE who has written three textbooks, hundreds of technical articles, opinion columns, and product features. Prior to becoming an author and editor, he spent his entire hands-on career on the analog side by working on power supplies, sensors, signal conditioning, and wired and wireless communication links. His work experience includes many years at Analog Devices in applications and marketing.
Related Content
- JPL & NASA’s latest clock: almost perfect
- The Wright Brothers: Test Engineers as Well as Inventors
- Precision metrology redefines analog calibration strategy
- Inter-satellite link demonstrates metrology’s reach, capabilities
The post Achieving analog precision via components and design, or just trim and go appeared first on EDN.
KPI students among the winners of Sikorsky Challenge 2025!
The 14th Sikorsky Challenge international festival of innovative projects gathered 123 projects from 12 countries 🇺🇦🇺🇸🇮🇹🇫🇷🇰🇷🇪🇸🇮🇪🇱🇺🇮🇸🇮🇱🇦🇿🇵🇱
EEVblog 1718 - Cheap 1GHz Oscilloscopes are Useless? ($5 DIY 1GHz Resistive Probe)
III-V Epi appoints Saurabh Kumar to strengthen epi engineering team and increase manufacturing capacity
Micro-LED firm VueReal expanding presence in China market
Second cycle of the Professional Development Program for Academic Managers
A team from Igor Sikorsky Kyiv Polytechnic Institute has joined the second cycle of the Professional Development Program for Academic Managers.
Ericsson's 5G University Program has officially launched!
The 5G University Program from Ericsson is an initiative aimed at giving students and faculty of Igor Sikorsky Kyiv Polytechnic Institute the knowledge and tools, delivered by Ericsson experts, to prepare a new generation of 5G specialists.
LED illumination addresses ventilation (at the bulb, at least)

The bulk of the technologies and products based on them that I encounter in my everyday interaction with the consumer electronics industry are evolutionary (and barely so in some cases) versus revolutionary in nature. A laptop computer, a tablet, or a smartphone might get a periodic CPU-upgrade transplant, for example, enabling it to complete tasks a bit faster and/or a bit more energy-efficiently than before. But the task list is essentially the same as was the case with the prior product generation…and the generation before that…and…not to mention that the generational-cadence physical appearance also usually remains essentially the same.
Such cadence commonality is also the case with many LED light bulbs I’ve taken apart in recent years, in no small part because they’re intended to visually mimic incandescent precursors. But SANSI has taken a more revolutionary tack, in the process tackling an issue (heat) with which I’ve repeatedly struggled. Say what you (rightly) will about incandescent bulbs’ inherent energy inefficiency, along with the correspondingly high temperature output that they radiate; there’s a fundamental reason why they were the core heat source for the Easy-Bake Oven, after all:

But consider, too, that they didn’t integrate any electronics; the sole failure points were the glass globe and filament inside it. Conversely, my installation of both CFL and LED light bulbs within airflow-deficient sconces in my wife’s office likely hastened both their failure and preparatory flickering, due to degradation of the capacitors, voltage converters and regulators, control ICs and other circuitry within the bulbs as well as their core illumination sources.
That’s why SANSI’s comparatively fresh approach to LED light bulb design, which I alluded to in the comments of my prior teardown, has intrigued me ever since I first saw and immediately bought both 2700K “warm white” and 5000K “daylight” color-temperature multiple-bulb sets on sale at Amazon two years ago:

They’re smaller A15, not standard A19, in overall dimensions, although the E26 base is common between the two formats, so they can generally still be used in place of incandescent bulbs (although, unlike incandescents, these particular LED light bulbs are not dimmable):
Note, too, their claimed 20% brighter illumination (900 vs. 750 lumens) and 5x-longer estimated usable lifetime (25,000 hours vs. 5,000 hours). Key to the latter estimate, however, is not only the bulb’s inherently improved ventilation:

Versus metal-swathed and otherwise enclosed-circuitry conventional LED bulb alternatives:

But it is also the ventilation potential (or not) of wherever the bulb is installed, as the “no closed luminaires” warning included on the sticker on the left side of the SANSI packaging makes clear:

That said, even if your installation situation involves plenty of airflow around the bulb, don’t forget that the orientation of the bulb is important, too. Specifically, since heat rises, if the bulb is upside-down with the LEDs underneath the circuitry, the latter will still tend to get “cooked”.
Perusing our patient
Enough of the promo pictures. Let’s now look at the actual device I’ll be tearing down today, starting with the remainder of the box-side shots, in each case, and as usual, accompanied by a 0.75″ (19.1-mm)-diameter U.S. penny for size comparison purposes:






Open ‘er up, lift off the retaining cardboard layer, and here’s our 2700K four-pack, which (believe it or not) had set me back only $4.99 ($1.25/bulb) two years back:

The 5000K ones I also bought at that same time came as a two-pack, also promo-priced, this time at $4.29 ($2.15/bulb). Since they ended up being more expensive per bulb, and because I have only two of them, I’m not currently planning on also taking one of them apart. But I did temporarily remove one of them and replace it in the two-pack box with today’s victim, so you could see the LED phosphor-tint difference between them. 5000K on left, 2700K on right; I doubt there’s any other design difference between the two bulbs, but you never know…

Aside from the aforementioned cardboard flap for position retention above the bulbs and a chunk of Styrofoam below them (complete with holes for holding the bases’ end caps in place):

There’s no other padding inside, which might have proven tragic if we were dealing with glass-globe bulbs or flimsy filaments. In this case, conversely, it likely suffices. Also note the cleverly designed sliver of literature at the back of the box’s insides:

Now, for our patient, with initial overview perspectives of the top:

Bottom:

And side:

Check out all those ventilation slots! Also note the clips that keep the globe in place:

Before tackling those clips, here are six sequential clockwise-rotation shots of the side markings. I’ll leave it to you to mentally “glue” the verbiage snippets together into phrases and sentences:
Diving in for illuminated understanding
Now for those clips. Downside: they’re (understandably, given the high voltage running around inside) stubborn. Upside: no even-more-stubborn glue!
Voila:




Note the glimpses of additional “stuff” within the base, thanks to the revealing vents. Full disclosure and identification of the contents is our next (and last) aspiration:


As usual, twist the end cap off with tongue-and-groove slip-joint (“Channellock”) pliers:


and the ceramic substrate (along with its still-connected wires and circuitry, of course) dutifully detaches from the plastic base straightaway:



Not much to see on the ceramic “plate” backside this time, aside from the 22µF 200V electrolytic capacitor poking through:

The frontside is where most of the “action” is:

At the bottom is a mini-PCB that mates the capacitor and wires’ soldered leads to the ceramic substrate-embedded traces. Around the perimeter, of course, is the series-connected chain of 17 (if I’ve counted correctly) LEDs with their orange-tinted phosphor coatings, spectrum-tuned to generate the 2700K “warm white” light. And the three SMD resistors scattered around the substrate are also obvious: two next to an IC in the upper-right quadrant (a 33-Ω part marked “33R0”, plus a 20-Ω part) and another alongside a device at left, marked “334” (330 kΩ, per standard SMD coding).
Those two chips ended up generating the bulk of the design intrigue, in the latter case still an unresolved mystery (at least to me). The one at upper right is marked, alongside a company logo that I’d not encountered before, as follows:
JWB1981
1PC031A
The package also looks odd; the leads on both sides are asymmetrically spaced, and there’s an additional (fourth) lead on one side. But thanks to one of the results from my Google search on the first-line term, in the form of a Hackaday post that then pointed at an informative video:
This particular mystery has, at least I believe, been solved. Quoting from the Hackaday summary (with hyperlinks and other augmentations added by yours truly):
The chip in question is a Joulewatt JWB1981, for which no datasheet is available on the internet [BD note: actually, here it is!]. However, there is a datasheet for the JW1981, which is a linear LED driver. After reverse-engineering the PCB, bigclivedotcom concluded that the JWB1981 must [BD note: also] include an onboard bridge rectifier. The only other components on the board are three resistors, a capacitor, and LEDs.
The first resistor limits the inrush current to the large smoothing capacitor. The second resistor is to discharge the capacitor, while the final resistor sets the current output of the regulator. It is possible to eliminate the smoothing capacitor and discharge resistor, as other LED circuits have done, which also allow the light to be dimmable. However, this results in a very annoying flicker of the LEDs at the AC frequency, especially at low brightness settings.
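A quick sanity check on those roles, using the 22-µF smoothing capacitor seen earlier in this teardown and assuming (my guess, not Big Clive's) that the "334"-marked 330-kΩ SMD resistor is the discharge resistor:

```python
# Back-of-the-envelope check on the bleed (discharge) resistor's role.
# C value is from the electrolytic's marking in this teardown; R is an
# assumption based on decoding the "334" SMD code (33 * 10^4 = 330 kΩ).

C_smooth = 22e-6      # F, smoothing capacitor
R_bleed  = 330e3      # ohm, assumed discharge resistor

tau = R_bleed * C_smooth    # one RC time constant
t_safe = 5 * tau            # ~5 tau: capacitor essentially fully discharged
print(f"tau = {tau:.2f} s, ~discharged after {t_safe:.1f} s")
# tau ≈ 7.3 s, so the cap bleeds down in well under a minute after power-off,
# while dissipating only about (170 V)^2 / 330 kΩ ≈ 88 mW at rectified-mains
# peak: a plausible trade-off for a safety-bleed function.
```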
Compare the resultant schematic shown in the video with one created by EDN’s Martin Rowe, done while reverse-engineering an A19 LED light bulb at the beginning of 2018, and you’ll see just how cost-effective a modern design approach like this can be.
That only leaves the chip at left, with two visible soldered contacts (one on each end), and bare on top save for a cryptic rectangular mark (which leaves Google Lens thinking it’s the guts of a light switch, believe it or not). It’s not referenced in “Big Clive’s” deciphered design, and I can’t find an image of anything like it anywhere else. Diode? Varistor to protect against voltage surges? Resettable fuse to handle current surges? Multiple of these? Something(s) else? Post your [educated, preferably] guesses, along with any other thoughts, in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Disassembling a LED-based light that’s not acting quite right…right?
- Teardown: Bluetooth-enhanced LED bulb
- Teardown: Zigbee-controlled LED light bulb
- Freeing a three-way LED light bulb’s insides from their captivity
- Teardown: What killed this LED bulb?
- Slideshow: LED Lighting Teardowns
The post LED illumination addresses ventilation (at the bulb, at least) appeared first on EDN.
Next-Generation of Optical Ethernet PHY Transceivers Deliver Precision Time Protocol and MACsec Encryption for Long-Reach Networking
The post Next-Generation of Optical Ethernet PHY Transceivers Deliver Precision Time Protocol and MACsec Encryption for Long-Reach Networking appeared first on ELE Times.
Anritsu Launches Virtual Network Measurement Solution to Evaluate Communication Quality in Cloud and Virtual Environments
Anritsu Corporation announced the launch of its Virtual Network Master for AWS MX109030PC, a virtual network measurement solution operating in Amazon Web Services (AWS) Cloud environments. This software-based solution enables accurate, repeatable evaluation of communication quality across networks, including Cloud and virtual environments. It measures key network quality indicators, such as latency, jitter, throughput, and packet (frame) loss rate, in both one-way and round-trip directions. This software can accurately evaluate end-to-end (E2E) communication quality even in virtual environments where hardware test instruments cannot be installed.
Moreover, adding Network Master Pro MT1000A/MT1040A test hardware on the network side supports consistent quality evaluation from the core and Cloud out to field-deployed devices.
Anritsu has developed this solution operating on Amazon Web Services (AWS) to accurately and reproducibly evaluate end-to-end (E2E) quality under realistic operating conditions even in virtual environments.
The Virtual Network Master for AWS (MX109030PC) is a software-based solution to accurately evaluate network communication quality in Cloud and virtual environments. Deploying software probes running on AWS across Cloud, data center, and virtual networks enables precise communication quality assessment, even in environments where hardware test instruments cannot be located.
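Independent of Anritsu's implementation, the KPIs the release lists (latency, jitter, packet loss) reduce to simple statistics over timestamped probe packets. A minimal sketch with hypothetical one-way timestamps:

```python
# Hypothetical probe timestamps in seconds; None marks a lost packet.
sent = [0.00, 0.02, 0.04, 0.06, 0.08]
recv = [0.013, 0.031, None, 0.074, 0.092]

pairs = [(s, r) for s, r in zip(sent, recv) if r is not None]
latencies = [r - s for s, r in pairs]

one_way_latency = sum(latencies) / len(latencies)
# Jitter: mean absolute difference of consecutive latencies (a simplification
# of RFC 3550's smoothed interarrival-jitter estimator).
jitter = sum(abs(b - a) for a, b in zip(latencies, latencies[1:])) / (len(latencies) - 1)
loss_rate = 1 - len(pairs) / len(sent)

print(f"latency={one_way_latency*1e3:.1f} ms, "
      f"jitter={jitter*1e3:.1f} ms, loss={loss_rate:.0%}")
# → latency=12.5 ms, jitter=2.3 ms, loss=20%
```

One-way measurements like this require synchronized clocks at both probes, which is presumably part of what a managed solution provides; round-trip figures sidestep that requirement.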
The post Anritsu Launches Virtual Network Measurement Solution to Evaluate Communication Quality in Cloud and Virtual Environments appeared first on ELE Times.



