Feed Aggregator
Photonics-electronics Convergence Technology Becomes Essential to Next-generation DCs: Precise Measurements Required for DCI Evaluation
Courtesy: Anritsu Corporation
Capacity constraints in metropolitan areas are driving a growing shift towards decentralised regional data centres. Key to achieving this, along with the adoption of optical coherent transmission such as 400G-ZR and OpenZR+, is precise visualisation of signal quality. Anritsu, a long-established manufacturer of measurement instruments, supports this advance in data centre networks with high-precision measurement technology and a support system based entirely in Japan.
The rapid growth in demand for AI has accelerated data centre development worldwide, producing explosive growth in computational processing. In Japan, however, capacity limits are becoming apparent due to limited physical space and strained electricity grids in metropolitan areas such as Tokyo, Chiba, and Osaka. This situation has prompted a move towards building decentralised data centres in rural areas.
Essential to supporting this decentralisation are high-speed, large-capacity, low-latency data centre interconnects (DCIs). 400G is becoming the mainstream transmission speed, while development of 800G-compatible products progresses. At the same time, however, the increase in power consumption that accompanies higher transmission speeds is becoming an issue.
Co-Packaged Optics (CPO), an optical device technology that utilises photonics electronics convergence, is expected to be key to solving this problem.
Daiki Mochizuki, director of the Solutions Marketing Department at Anritsu’s Service Infrastructure Solutions Division, said, “Hyperscalers are also paying attention to CPO, with momentum building for its practical application.” CPO is an architecture that can significantly reduce transmission loss and power consumption by implementing optical transceivers in the same package as the switch ASIC, while shortening the length of the electrical wiring as much as possible. This also contributes to the IOWN initiative’s goal of “reducing electricity consumption to 1/100,” and is therefore attracting attention as a core technology for supporting next-generation infrastructure.
Director Daiki Mochizuki (right) and Manager Mitsuhiro Usuba, Solution Marketing Department, Service Infrastructure Solutions Division, Test & Measurement Company
On the other hand, unlike pluggable optical transceivers, which are easy to replace, a CPO failure may require replacing the entire device. More precise measurement and evaluation than has been undertaken in the past is therefore required to ensure reliability in the development and manufacturing stages.

Comprehensive Measurement Solutions for CPO Quality Enhancement
In CPO, the optical elements and ASICs sit extremely close to each other, making it very difficult to guarantee performance after implementation and to identify the demarcation of responsibility among vendors. Anritsu offers measurement solutions to overcome this issue.
Mr Mochizuki first introduced the Bit Error Rate Tester (BERT), MP1900A. This is an instrument that visualises transmission errors by passing a test signal through a device, and which can accurately detect even minute bit errors.
The MP2110A is an optical sampling oscilloscope that analyses the waveforms and jitter of high-speed optical signals. As such, it is widely used on production lines for pluggable optical transceivers such as QSFP-DD. Due to its high repeatability and measurement accuracy, it will be increasingly applied to signal quality evaluation in new architectures such as CPO. These devices enable the quantitative understanding of signal quality and modulation integrity through “eye diagram measurement”, which visualises multiple signal waveforms by overlaying them.
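To illustrate what such an overlay does (a toy simulation of the concept, not the MP2110A's internals; every parameter below is an illustrative assumption), the following Python sketch slices a noisy NRZ waveform into two-unit-interval segments and plots them on a common time axis:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
SPS = 32      # samples per unit interval (UI)
N_BITS = 400  # number of NRZ symbols to simulate

# Random NRZ levels (+/-1), upsampled to SPS samples per UI
bits = rng.integers(0, 2, N_BITS)
wave = np.repeat(2.0 * bits - 1.0, SPS)

# Crude band-limiting (moving average) plus additive noise, so the
# eye shows finite rise times and amplitude noise like a real signal
kernel = np.ones(SPS // 2) / (SPS // 2)
wave = np.convolve(wave, kernel, mode="same") + rng.normal(0.0, 0.08, wave.size)

# The overlay step: cut the waveform into 2-UI slices and draw them
# all on one time axis -- that superposition is the eye diagram
seg = 2 * SPS
n_seg = wave.size // seg
t = np.arange(seg) / SPS  # time axis in UIs
for trace in wave[: n_seg * seg].reshape(n_seg, seg):
    plt.plot(t, trace, color="tab:blue", alpha=0.05)
plt.xlabel("time (UI)")
plt.ylabel("amplitude")
plt.title("Simulated NRZ eye diagram")
plt.show()
```

The openings between the overlaid traces (the “eyes”) give an at-a-glance view of noise, jitter, and rise-time margins, which is why the technique is a production-line staple.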
In addition, the MS9740B is an optical spectrum analyser that analyses the wavelength characteristics of optical devices while measuring the Optical Signal-to-Noise Ratio (OSNR) and Side-Mode Suppression Ratio (SMSR). “There is a need to support measurement from a variety of perspectives to ensure the quality of optical devices,” said Mochizuki, further mentioning that these instruments are widely used not only by NTT’s research and development department but also by major device manufacturers.
MT1040A: Essential for Distributed DCs – Focus on Virtual Tester Development
The practical operation of a distributed data centre requires that the network handle multiple geographically distant locations as if they were a single data centre. To this end, it is essential to be able to precisely measure and manage the latency and quality of communications. The Network Master Pro MT1040A addresses this need.
The MT1040A supports multiple communication standards, including 400G Ethernet. It is also equipped with a forward error correction (FEC) analysis function, enabling the comprehensive verification of the communication quality from the physical layer to the network layer.
Notably, it supports digital coherent transmission technologies such as 400G-ZR and OpenZR+, with measurement possible at both the IP and optical layers. Until recently, transponder manufacturers were the main users of the device, but with the spread of 400G-ZR/OpenZR+ transceivers, which do not require transponders and can be mounted directly on routers, its use is expanding to equipment vendors that deal with coherent signals and to users building ROADM networks.

While the use of 400G-ZR/OpenZR+ transceivers reduces both the number of devices and the power consumption, it also requires those users dealing with carrier networks to evaluate the network quality themselves, a task that was previously handled by telecommunications carriers.
The MT1040A, which supports QSFP-DD, plays an important role here because it can directly connect to 400G-ZR/OpenZR+-compatible transceivers and measure end-to-end communication quality.
Mitsuhiro Usuba, manager of the department, said: “More and more companies are considering introducing 400G-ZR, which is becoming more multi-vendor compatible, but some are worried about its operation. To address this, we bring the MT1040A to the customer’s site to measure latency and throughput and support their operational launch.”
Figure 2 shows an example of measuring 400G-ZR network quality using the MT1040A. Two MT1040As are connected to the ends of a ROADM network over dark fibre. This setup revealed, in detail, link downs caused by temporary drops in received power, the time required to recover the link, and detection behaviour in the absence of received light. In addition, the MT1040A captures quality variations that normal BER measurements cannot detect, such as the State-of-Polarization Rate of Change (SOP ROC).
Anritsu is further developing virtual testers for 5G MEC and cloud-native environments. The goal is to enable end-to-end latency and throughput measurements by deploying virtualised software testers on the server side, even in environments where it is physically difficult to install testers, such as data centres or vehicles. “To take advantage of MEC’s low latency, it is important to have the technology to measure and guarantee its performance,” said Usuba.
Anritsu’s strength lies in its ability to complete all processes from planning to development, through production, to support in Japan. As such, Anritsu is an unparalleled partner in the construction and operation of increasingly sophisticated and complex next-generation networks.
Signal Quality Analyzer-R MP1900A
Network Master Pro MT1040A
Driving the Future of Vehicle E/E Architecture: Arrow Electronics to Support Next-Generation Mobility
By Murdoch Fitzgerald, chief growth officer of global services for Arrow’s global components business, and Dr. Raphael Salmi, president of Arrow Electronics’ South Asia, Korea & Japan components business
The automotive industry is rapidly advancing toward architectures built for high‑bandwidth data movement, centralized compute, and lifecycle‑ready software operations. Traditional distributed ECU topologies—characterized by increasing wiring mass, point‑to‑point signaling, and proliferation of function-specific modules—are no longer adequate to meet the computational and functional demands of modern vehicles. E/E architecture is vital to this transformation because it provides the foundational electrical, networking, and computing framework required to support higher data throughput, real‑time decision‑making, and the integration of increasingly complex vehicle functions.
The global Vehicle E/E Architecture market was valued at $46.2 Bn in 2024 and is projected to reach $115.6 Bn by 2033, growing at a CAGR of 10.7% (source: Global Market Insights).
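A quick back-of-envelope check (mine, not the source's) confirms those figures are mutually consistent over the nine-year span:

$$
\$46.2\ \text{Bn} \times (1.107)^{2033-2024} \approx \$46.2\ \text{Bn} \times 2.50 \approx \$115.5\ \text{Bn}
$$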
Technical Challenges and Complexities Involved in the Adoption of E/E Architecture
- Complex Interdependencies: ADAS, infotainment, and V2X must interoperate across protocols, bridging legacy and new systems.
- Cybersecurity: Increased connectivity expands the attack surface and increases security design complexity.
- Power & Thermal Management: Diverse power demands require real‑time energy and thermal control to prevent failures.
- Validation & Testing: Complex system interactions demand extensive simulation and HIL testing.
- Regulatory Compliance: E/E architectures must meet safety, emissions, and data‑privacy regulations end‑to‑end.
- Environmental Considerations: Sustainable design prioritizes recyclability and lower environmental impact.
Architectural Transformation: From Distributed ECUs to Centralized, Zonal Topologies
Next‑generation E/E architectures shift to a centralized, hierarchical model:
- High‑Performance Compute (HPC) Nodes: Centralized compute consolidates functions from multiple ECUs, reducing module count and enabling ADAS, autonomy, connectivity, and advanced diagnostics.
- Zonal Controllers: Controllers aggregate sensors and actuators by physical zone, cutting wiring length by 30–50% and harness weight by 15–30%.
- Smart Endpoints (SEPs): Ethernet‑centric networks simplify edge connectivity, replacing multiple legacy buses with scalable, deterministic communications.
- High‑Speed Interconnect & Power Distribution: Advanced connectors, harnesses, Ethernet, timing, and power components ensure signal integrity, EMC stability, and high‑speed performance.


E/E Architecture: Engineering the New Vehicle Nervous System
To support this transformation, Arrow Electronics has launched a strategic initiative and dedicated research hub focused on enabling robust next‑generation Electrical and Electronic (E/E) architectures. The initiative addresses critical design, integration, and supply‑chain requirements for OEM and tier‑1 engineering teams building the next wave of mobility platforms.
Arrow Electronics: Technical Enablement Across the Full E/E Stack
Cross-Disciplinary Engineering Support: Arrow’s initiative provides engineering teams with access to expertise spanning semiconductors, networking, IP&E, system architecture, safety, and cybersecurity. This includes:
- Architecture level guidance on HPC, zonal, and endpoint implementation
- Safety and cybersecurity engineering aligned to ISO 26262, ISO 21434, and UN R155 expectations
- Power distribution and 48V readiness design considerations
- EMC-driven component selection for high-speed Ethernet and mixed signal environments
This interdisciplinary support helps design teams reduce risk early in platform development.
“E/E architecture is the cornerstone of the modern automotive revolution, enabling the transition from hardware-centric machines to intelligent, software-defined mobility,” said Murdoch Fitzgerald, chief growth officer of global services for Arrow’s global components business. “By combining our global engineering reach with a broad range of components and specialized software expertise, we are well positioned to help our customers navigate this complexity, reducing their time-to-market and helping ensure their platforms are built to adapt as the industry evolves.”
Comprehensive Technology Ecosystem
Arrow’s portfolio includes components and subsystems essential to modern architectures, such as:
- Vehicle networking processors and real-time controllers
- PCIe switching and high-speed interconnect devices
- Automotive Ethernet PHYs, switches, MACsec enabled devices
- High-speed connectors and automotive-grade cabling ecosystems
- Automotive memory, storage, timing, and power components
Access to these technologies simplifies system integration and allows rapid architecture prototyping.
Strengthened Software & Safety Capabilities: Through expanded software engineering centers and the addition of established automotive software firms, Arrow now supports:
- AUTOSAR Classic and Adaptive development
- System-level modelling, HIL/SIL workflows, and model-based development
- OTA and diagnostic pipeline development
- Functional safety engineering and cybersecurity analysis
These capabilities enable engineering teams to build systems that are robust, certifiable, and scalable across vehicle lines.
Automotive-Grade Supply Chain Reliability: Modern vehicle platforms require stable, long-lifecycle, traceable electronic components. Arrow supports engineers with:
- Multi-sourced, risk-balanced component strategies
- Lifecycle and obsolescence planning
- Global inventory breadth across semiconductor and IP&E categories
This mitigates supply chain risk during development, validation, and production scaling.
Arrow’s E/E Architecture Research Hub
To accelerate architecture development, Arrow has launched an external research hub providing:
- Technical whitepapers
- High-level and subsystem-specific design guidance
- Deep dive analyses of HPC, zonal, and endpoint architectures
- Reference material on safety, cybersecurity, and diagnostics
- Component selection insights and technology mappings
The hub is designed as a resource for engineers, architects, and procurement specialists engaged in next-generation platform design.
E/E architecture represents a complete overhaul of the “nervous system” within modern vehicles. Photo copyright 2026 Artlist Ltd.
Arrow Electronics is a central solution aggregator for E/E architecture, bridging the gap between individual components and complete, integrated systems. Photo copyright 2026 Artlist Ltd.
Local customer success case:
Arrow Electronics Fuels SAVART Motors’ EV Manufacturing Expansion in Indonesia, Boosting engineering and supply chain capabilities to drive sustainable e-mobility
Arrow has supported Indonesia’s homegrown EV maker SAVART Motors in designing and manufacturing high-quality, safe, and affordable electric scooters.
Founded in 2018, SAVART Motors stands out as one of the few local brands with in-house R&D capabilities, advanced prototyping hardware and software, and a dedicated testing and manufacturing facility in Mojokerto, East Java, Indonesia.
Indonesia’s motorcycle market is the third largest in the world. With nearly 130 million motorcycles on the road, the emissions from these vehicles significantly impact air quality and contribute to climate change. To address this, Indonesia aims to have 13 million electric two-wheelers on the roads by 2030, reducing greenhouse gas emissions and air pollution while promoting eco-friendly commuting.
Empowering homegrown EV entrepreneurs to drive electrification and e-mobility, SAVART Motors meticulously designs its electric scooters from the ground up, seamlessly integrating design aesthetics and performance to suit road conditions, riding culture, and local market expectations. With a strong commitment to quality, safety, comfort, and R&D excellence, the majority of electrical and mechanical components are developed in-house by a team of dedicated and talented engineers who are graduates of leading universities in Indonesia.
SAVART Motors is electrifying Indonesia’s transportation landscape by designing and manufacturing its electric vehicles almost entirely in-house. The company has reached a significant milestone with a 74.27% TKDN verification, reflecting the high level of domestic content in its goods and services produced in Indonesia. From concept through production, SAVART’s engineers develop cutting-edge technology tailored to the needs of local riders. Through its collaboration with Arrow Electronics, SAVART gains access to advanced components from leading global brands such as Analog Devices, Infineon, Littelfuse, Quectel, and STMicroelectronics. Arrow’s support strengthens SAVART’s designs, accelerates production timelines, and enables efficient scaling, while helping the company maintain its commitment to quality and innovation as a homegrown Indonesian brand.
“Electrification and AI-powered technologies are fundamentally transforming transportation,” said Dr. Raphael Salmi, president of Arrow Electronics’ South Asia, Korea & Japan components business. “We are excited to provide SAVART Motors with the essential engineering capabilities and supply chain services they need to manufacture EVs that not only prioritize safety, comfort, and ease of use but also cater to the needs of Indonesian riders. By offering a comprehensive technology portfolio that includes smart IoT connectivity modules, microprocessors, sensors, and automotive-grade silicon carbide MOSFETs, we are well-positioned to be their trusted technology supplier as they continue to revolutionize sustainable e-mobility in Indonesia and beyond.”
A substantial portion of the electronic components in SAVART Motors’ latest model has been sourced and supplied by Arrow. In addition to complementing SAVART Motors’ in-house R&D efforts, Arrow has provided engineering support and guidance on system integration, including adaptive user interfaces, smart vehicle control units, AI-based user profiling, keyless and fingerprint security access, and smart battery management systems.

Designing AI-resistant technical evaluations
Courtesy: Anthropic
What we learned from three iterations of a performance engineering take-home that Claude keeps beating.
Evaluating technical candidates becomes harder as AI capabilities improve. A take-home that distinguishes well between human skill levels today may be trivially solved by models tomorrow, rendering it useless for evaluation.
Since early 2024, our performance engineering team has used a take-home test where candidates optimise code for a simulated accelerator. Over 1,000 candidates have completed it, and dozens now work here, including engineers who brought up our Trainium cluster and shipped every model since Claude 3 Opus.
But each new Claude model has forced us to redesign the test. When given the same time limit, Claude Opus 4 outperformed most human applicants. That still allowed us to distinguish the strongest candidates—but then Claude Opus 4.5 matched even those. Humans can still outperform models when given unlimited time, but under the constraints of the take-home test, we no longer have a way to distinguish between the output of our top candidates and our most capable model.
I’ve now iterated through three versions of our take-home in an attempt to ensure it still carries a signal. Each time, I’ve learned something new about what makes evaluations robust to AI assistance and what doesn’t.
This post describes the original take-home design, how each Claude model defeated it, and the increasingly unusual approaches I’ve had to take to ensure our test stays ahead of our top model’s capabilities. While the work we do has evolved alongside our models, we still need stronger engineers—just increasingly creative ways to find them.
To that end, we’re releasing the original take-home as an open challenge, since with unlimited time, the best human performance still exceeds what Claude can achieve. If you can best Opus 4.5, we’d love to hear from you—details are at the bottom of this post.
The origin of the take-home
In November 2023, we were preparing to train and launch Claude Opus 3. We’d secured new TPU and GPU clusters, our large Trainium cluster was coming, and we were spending considerably more than we had in the past on accelerators, but we didn’t have enough performance engineers for our new scale. I posted on Twitter asking people to email us, which brought in more promising candidates than we could evaluate through our standard interview pipeline, a process that consumes significant time for both staff and candidates.
We needed a way to evaluate candidates more efficiently. So, I took two weeks to design a take-home test that could adequately capture the demands of the role and identify the most capable applicants.
Design goals
Take-homes have a bad reputation. Usually, they’re filled with generic problems that engineers find boring and which make for poor filters. My goal was different: create something genuinely engaging that would make candidates excited to participate and allow us to capture their technical skills at a high level of resolution.
The format also offers advantages over live interviews for evaluating performance engineering skills:
- Longer time horizon: Engineers rarely face deadlines of less than an hour when coding. A 4-hour window (later reduced to 2 hours) better reflects the actual nature of the job. It’s still shorter than most real tasks, but we need to balance that with how onerous it is.
- Realistic environment: No one is watching or expecting narration. Candidates work in their own editor without distraction.
- Time for comprehension and tooling: Performance optimisation requires understanding existing systems and sometimes building debugging tools. Both are hard to realistically evaluate in a normal 50-minute interview.
- Compatibility with AI assistance: Anthropic’s general candidate guidance asks candidates to complete take-homes without AI unless indicated otherwise. For this take-home, we explicitly indicate otherwise.
Longer-horizon problems are harder for AI to solve completely, so candidates can use AI tools (as they would on the job) while still needing to demonstrate their own skills.
Beyond these format-specific goals, I applied the same principles I use when designing any interview to make the take-home:
- Representative of real work: The problem should give candidates a taste of what the job actually involves.
- High signal: The take-home should avoid problems that hinge on a single insight and ensure candidates have many chances to show their full abilities — leaving as little as possible to chance. It should also have a wide scoring distribution and ensure enough depth that even strong candidates don’t finish everything.
- No specific domain knowledge: People with good fundamentals can learn specifics on the job. Requiring narrow expertise unnecessarily limits the candidate pool.
- Fun: Fast development loops, interesting problems with depth, and room for creativity.
The simulated machine
I built a Python simulator for a fake accelerator with characteristics that resemble TPUs. Candidates optimise code running on this machine, using a hot-reloading Perfetto trace that shows every instruction, similar to the tooling we have on Trainium.
The machine includes features that make accelerator optimisation interesting: manually managed scratchpad memory (unlike CPUs, accelerators often require explicit memory management), VLIW (multiple execution units running in parallel each cycle, requiring efficient instruction packing), SIMD (vector operations on many elements per instruction), and multicore (distributing work across cores).
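To make those features concrete, here is a deliberately toy Python sketch of one simulator cycle. The slot names, bundle format, and register model are my own inventions for illustration, not the actual take-home's machine:

```python
import numpy as np

LANES = 8  # SIMD width: one vector op touches 8 elements per instruction

def step(bundle, regs, scratchpad):
    """Execute one VLIW bundle: every filled slot issues in the same cycle."""
    if "valu" in bundle:                      # vector-ALU slot
        op, dst, a, b = bundle["valu"]
        regs[dst] = regs[a] + regs[b] if op == "add" else regs[a] * regs[b]
    if "load" in bundle:                      # scratchpad-load slot
        dst, addr = bundle["load"]
        regs[dst] = scratchpad[addr : addr + LANES].copy()
    if "store" in bundle:                     # scratchpad-store slot
        src, addr = bundle["store"]
        scratchpad[addr : addr + LANES] = regs[src]

# One cycle: load 8 values into v0 while, in the same bundle, adding v1 + v2
regs = {"v1": np.ones(LANES), "v2": np.full(LANES, 2.0)}
spad = np.arange(64, dtype=float)             # manually managed scratchpad
step({"load": ("v0", 0), "valu": ("add", "v3", "v1", "v2")}, regs, spad)
print(regs["v0"])  # [0. 1. ... 7.]
print(regs["v3"])  # [3. 3. ... 3.]
```

Optimising for a machine like this means keeping every slot of every bundle busy, which is exactly the instruction-packing skill the take-home probes.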

The task is a parallel tree traversal, deliberately not deep-learning flavoured, since most performance engineers hadn’t worked on deep learning yet and could learn domain specifics on the job. The problem was inspired by branchless SIMD decision tree inference, a classical ML optimisation challenge, as a nod to the past; only a few candidates had encountered it before.
Candidates start with a fully serial implementation and progressively exploit the machine’s parallelism. The warmup is multicore parallelism, then candidates choose whether to tackle SIMD vectorisation or VLIW instruction packing. The original version also included a bug that candidates needed to debug first, exercising their ability to build tooling.
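For readers unfamiliar with that classical trick, the following numpy sketch (illustrative tree layout and values, not the take-home's) shows branchless descent: each input in a batch computes its next node index arithmetically, so every lane moves in lockstep with no data-dependent branches:

```python
import numpy as np

# Complete binary tree in implicit heap order: node i has children
# 2i+1 and 2i+2. Internal nodes are 0..2; leaves are 3..6.
feature = np.array([0, 1, 1])           # which input feature each node tests
threshold = np.array([0.5, 0.3, 0.7])   # test: x[feature] > threshold ?
leaf_value = np.array([10.0, 20.0, 30.0, 40.0])

X = np.array([[0.2, 0.1],   # a batch of inputs, one row per SIMD lane
              [0.2, 0.9],
              [0.9, 0.4],
              [0.9, 0.9]])

idx = np.zeros(len(X), dtype=np.int64)
for _ in range(2):  # two internal levels, no data-dependent branching
    go_right = X[np.arange(len(X)), feature[idx]] > threshold[idx]
    idx = 2 * idx + 1 + go_right  # the comparison result is 0 or 1

print(leaf_value[idx - 3])  # -> [10. 20. 30. 40.]
```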
Early results
The initial take-home worked well. One person from the Twitter batch scored substantially higher than everyone else. He started in early February, two weeks after our first hires through the standard pipeline. The test proved predictive: He immediately began optimising kernels and found a workaround for a launch-blocking compiler bug involving tensor indexing math overflowing 32 bits.
Over the next year and a half, about 1,000 candidates completed the take-home, and it helped us hire most of our current performance engineering team. It proved especially valuable for candidates with limited experience on paper: several of our highest-performing engineers came directly from undergrad but showed enough skill on the take-home for us to hire confidently.
Feedback was positive. Many candidates worked past the 4-hour limit because they were enjoying themselves. The strongest unlimited-time submissions included full optimising mini-compilers and several clever optimisations I hadn’t anticipated.
Then Claude Opus 4 defeated it
By May 2025, Claude 3.7 Sonnet had already crept up to the point where over 50% of candidates would have been better off delegating to Claude Code entirely. I then tested a pre-release version of Claude Opus 4 on the take-home. It came up with a more optimised solution than almost all humans did within the 4-hour limit.
This wasn’t my first interview defeated by a Claude model. I’d designed a live interview question in 2023 specifically because our questions at the time were based around common tasks that early Claude models had lots of knowledge of and so could solve easily. I tried to design a question that required more problem-solving skill than knowledge, still based on a real (but niche) problem I’d solved at work. Claude 3 Opus beat part 1 of that question; Claude 3.5 Sonnet beat part 2. We still use it because our other live questions aren’t AI-resistant either.
For the take-home, there was a straightforward fix. The problem had far more depth than anyone could explore in 4 hours, so I used Claude Opus 4 to identify where it started struggling. That became the new starting point for version 2. I wrote cleaner starter code, added new machine features for more depth, and removed multicore (which Claude had already solved, and which only slowed down development loops without adding signal).
I also shortened the time limit from 4 hours to 2 hours. I’d originally chosen 4 hours based on candidate feedback, preferring less risk of getting sunk if they got stuck for a bit on a bug or confusion, but the scheduling overhead was causing multi-week delays in our pipeline. Two hours is much easier to fit into a weekend.
Version 2 emphasised clever optimisation insights over debugging and code volume. It served us well for several months.
Then Claude Opus 4.5 defeated that
When I tested a pre-release Claude Opus 4.5 checkpoint, I watched Claude Code work on the problem for 2 hours, gradually improving its solution. It solved the initial bottlenecks, implemented all the common micro-optimisations, and met our passing threshold in under an hour.
Then it stopped, convinced it had hit an insurmountable memory bandwidth bottleneck. Most humans reach the same conclusion. But there are clever tricks that exploit the problem structure to work around that bottleneck. When I told Claude the cycle count it was possible to achieve, it thought for a while and found the trick. It then debugged, tuned, and implemented it alongside further optimisations. By the 2-hour mark, its score matched the best human performance within that time limit—and that human had made heavy use of Claude 4 with steering.
We tried it out in our internal test-time compute harness for more rigour and confirmed it could both beat humans in 2 hours and continue climbing with time. Post-launch, we even generically improved our harness and got a higher score.

I had a problem. We were about to release a model where the best strategy on our take-home would be delegating to Claude Code.
Considering the options
Some colleagues suggested banning AI assistance. I didn’t want to do this. Beyond the enforcement challenges, I had a sense that, given that people continue to play a vital role in our work, I should be able to figure out some way for them to distinguish themselves in a setting with AI, as they would on the job. I didn’t want to give in yet to the idea that humans only have an advantage on tasks longer than a few hours.
Others suggested raising the bar to “substantially outperform what Claude Code achieves alone.” The concern here was that Claude works fast. Humans typically spend half the 2 hours reading and understanding the problem before they start optimising. A human trying to steer Claude would likely be constantly behind, understanding what Claude did only after the fact. The dominant strategy might become sitting back and watching.
Nowadays, performance engineers at Anthropic still have lots of work to do, but it looks more like tough debugging, systems design, performance analysis, figuring out how to verify the correctness of our systems, and figuring out how to make Claude’s code simpler and more elegant. Unfortunately, these things are tough to test in an objective way without a lot of time or common context. It’s always been hard to design interviews that represent the job, but now it’s harder than ever.
But I also worried if I invested in designing a new take-home, either Claude Opus 4.5 would solve that too, or it would become so challenging that it would be impossible for humans to complete in two hours.
Attempt 1: A different optimisation problem
I realised Claude could help me implement whatever I designed quickly, which motivated me to try developing a harder take-home. I chose a problem based on one of the trickier kernel optimisations I’d done at Anthropic: an efficient data transposition on 2D TPU registers while avoiding bank conflicts. I distilled it into a simpler problem on a simulated machine and had Claude implement the changes in under a day.
Claude Opus 4.5 found a great optimisation I hadn’t even thought of. Through careful analysis, it realised it could transpose the entire computation rather than figuring out how to transpose the data, and it rewrote the whole program accordingly.
In my real case, this wouldn’t have worked, so I patched the problem to remove that approach. Claude then made progress but couldn’t find the most efficient solution. It seemed like I had my new problem; now I just had to hope human candidates could get it fast enough. But I had a nagging doubt, so I double-checked using Claude Code’s “ultrathink” feature with longer thinking budgets … and it solved it. It even knew the tricks for fixing bank conflicts.
In hindsight, this wasn’t the right problem to try. Engineers across many platforms have struggled with data transposition and bank conflicts, so Claude has substantial training data to draw on. While I’d found my solution from first principles, Claude could draw on a larger toolbox of experience.
Attempt 2: Going weirder
I needed a problem where human reasoning could win over Claude’s larger experience base: something sufficiently out of distribution. Unfortunately, this conflicted with my goal of being recognizably like the job.
I thought about the most unusual optimisation problems I’d enjoyed and landed on Zachtronics games. These programming puzzle games use unusual, highly constrained instruction sets that force you to program in unconventional ways. For example, in Shenzhen I/O, programs are split across multiple communicating chips that each hold only about 10 instructions with one or two state registers. Clever optimisation often involves encoding state into the instruction pointer or branch flags.
I designed a new take-home consisting of puzzles using a tiny, heavily constrained instruction set, optimising solutions for minimal instruction count. I implemented one medium-hard puzzle and tested it on Claude Opus 4.5. It failed. I filled out more puzzles and had colleagues verify that people less steeped in the problem than I could still outperform Claude.
Unlike Zachtronics games, I intentionally provided no visualisation or debugging tools. The starter code only checks whether solutions are valid. Building debugging tools is part of what’s being tested: you can either insert well-crafted print statements or ask a coding model to generate an interactive debugger in a few minutes. Judgment about how to invest in tooling is part of the signal.
I’m reasonably happy with the new take-home. It might have lower variance than the original because it comprises more independent sub-problems. Early results are promising: scores correlate well with the calibre of candidates’ past work, and one of my most capable colleagues scored higher than any candidate so far.
I’m still sad to have given up the realism and varied depth of the original. But realism may be a luxury we no longer have. The original worked because it resembled real work. The replacement works because it simulates novel work.
Why Connected MCUs Are Replacing Bolt-On Wireless in IoT Devices
Courtesy: Infineon
Connected MCUs are gaining popularity rapidly.
If you’ve spent any time building or supporting connected products over the last decade, you’ve seen the pattern repeat itself: a product team realises they need wireless, so they bolt on a Wi-Fi module, wire up SDIO or SPI, route antennas where there’s some available space, and duct-tape the firmware stack into place right before release.
It works…until it doesn’t. And the truth is, most of us knew it was going to be painful the moment that architecture was chosen.
Whether it’s a forklift, a vitals monitoring device, a handheld scanner, or an HVAC controller, the problems are surprisingly universal. And they all stem from the same root issue.
Wireless was treated like an accessory instead of a part of the system.
We’re finally at a turning point. Connected MCUs, especially with integrated Wi-Fi 6 and 6E, change the entire equation.
Let me walk you through why this shift is happening and what it solves.
The pain: Bolt-on wireless was never as simple as it looked
Customers usually come with the same set of problems:
- Integration complexity snowballs quickly
That “easy” SDIO Wi-Fi module seems fine at first, until you realise:
- Your host processor can’t keep up under load
- The driver needs specific kernel patches
- The layout constraints choke your antenna performance
- You’re juggling two separate firmware roadmaps
By the end, half the schedule is spent debugging issues no one originally accounted for.
- RF performance suffers because it has to fit the enclosure, not the system
Bolt-on designs force antennas into whatever space is left. That might be inside a forklift mast, behind a metal enclosure, or buried under plastic in a medical device.
You can predict the RF problems before they happen, and yet they still happen.
- Certifications slow everything down
When wireless is a separate module, you:
- Test the radio module for compliance
- Test your host MCU for EMI
- Do integration testing when you put them together
- Then redo it every time you want to change the antenna
Teams underestimate this every single time.
- The BOM cost keeps climbing
One board for the host MCU, one for wireless, external memory, custom harnesses, enclosures… By the time the full system is built, the wireless subsystems cost more than the product owner ever expected.
And in long-lifecycle industries, like material handling, medical, and commercial HVAC, that pain compounds across entire product lines.
The turning point: Wi-Fi 6 and 6E connected MCUs
The reason this shift is happening is simple:
We finally have connected MCUs that are powerful enough, low-power enough, and secure enough to replace external wireless subsystems entirely.
This means the wireless subsystem is no longer bolted on. It’s a self-contained compute & connectivity module that slides directly into your main system design. For the first time, the architecture reflects how engineers actually want to build products:
A single module that handles wireless, networking, security, protocol stacks, and memory, rather than scattering those components across multiple boards.
This simplifies the integration, RF performance, and certification challenges that discrete systems face.
FYI you can use AI to identify components. Take a picture of the component and upload it to an AI
Make sure the package markings are clear in your picture. I used Grok. It will even find parts if it doesn't have the part number on the part, just a marking code.
I'm a first year high school electrical student and I designed a 4-to-10 weighted sum decoder from scratch using discrete NPN transistors. Here's how it works.
I started this a few months ago. No university, no engineering background, just a goal: 4 input switches, 10 LEDs, light up N LEDs when the inputs sum to N. I figured out the logic, built it in simulation, got told I was wrong by experienced people, proved them wrong, and then discovered what I built has a name in a field I'd never heard of.

**The Core Idea: Non-Binary Weighting**

Most 4-bit decoders assign binary weights: 1, 2, 4, 8. I didn't do that. I assigned decimal additive weights:

- SW-A = 1
- SW-B = 2
- SW-C = 3
- SW-D = 4

Maximum sum = exactly 10. Every integer from 0 to 10 is reachable. The 16 physical switch combinations collapse into 11 unique output states. Five of those states are reachable by two different switch combinations (e.g. A+D = 5 and B+C = 5). The circuit correctly treats these as identical — it decodes *value*, not *pattern*.

**Logic: Series NPN AND Gates**

Each output channel is a chain of NPN transistors in series. All transistors in the chain must be ON for collector current to flow — logical AND. Chain depth varies per output:

- 1 NPN: single-input conditions
- 2 NPNs in series: two-input conditions
- 3 NPNs in series: three-input conditions
- 4 NPNs in series: sum = 10 only

The Vbe stacking problem is real — 4 transistors in series drops ~2.8V. I solved it by using a 9V supply and adding a booster NPN after each AND gate to restore a clean full-swing signal before hitting the LED stage.

**Output Stage**

Each booster drives an LED via a 330 ohm resistor to VCC:

R = (9V - 2V) / 20mA = 350 ohms → 330 ohm standard value, ~21mA per LED

This fully isolates logic voltage from LED forward voltage. Without this separation the LED acts as a voltage divider and corrupts the logic states — I learned that the hard way in the simulation.

**The Part That Surprised Me**

After I finished, someone pointed out that this circuit structure is identical to a single hardware neuron:

- Weighted inputs → synaptic weights
- Arithmetic sum → dendritic summation
- AND gate threshold → activation function
- Thermometer output → step activation

I had never heard of neuromorphic computing when I designed this. I just landed there by solving the problem from first principles. Apparently there's a billion dollars of research built on the same idea.

**Simulation Results (all confirmed working):**

- A → 1 LED ✓
- B → 2 LEDs ✓
- C → 3 LEDs ✓
- A+B → 3 LEDs ✓
- A+D → 5 LEDs ✓
- B+C → 5 LEDs ✓
- B+D → 6 LEDs ✓
- A+B+C+D → 10 LEDs ✓

Happy to share full schematics and simulation screenshots. Thanks for reading.
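The post's state-collapse claim is easy to cross-check in software. Here is a minimal Python sketch (my own, with the weights and thermometer behaviour taken from the post) that enumerates all 16 switch combinations and confirms 11 unique sums, five of them reachable two ways:

```python
from itertools import product

WEIGHTS = (1, 2, 3, 4)  # SW-A, SW-B, SW-C, SW-D, as in the post

# Map each reachable sum to the switch patterns that produce it
sums = {}
for switches in product((0, 1), repeat=4):
    s = sum(w * on for w, on in zip(WEIGHTS, switches))
    sums.setdefault(s, []).append(switches)

print(len(sums))  # 11 unique states (sums 0..10)
for s in sorted(sums):
    # Thermometer output: a sum of N lights the first N LEDs
    print(f"sum={s:2d}  LEDs={'#' * s:<10}  patterns={sums[s]}")
```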
EEVblog 1739 - UNUSUAL REPAIR! : Beelink Ryzen 9 Mini PC
That's why you should plan first and then do the rest.
AXT’s Q4/2025 revenue constrained by delay in China export permits
TP-Link’s Kasa EP10: If at first it doesn’t connect, buy, buy again

How visibly different (if at all) inside are two generations of smart plugs, and is the more recent device’s comparative connectivity issue due to hardware, software, or a combination of the two?
Back in early December, EDN published my initial write-up in a planned series of posts covering experiences setting up and using devices from TP-Link’s two somewhat-overlapping smart home hardware, software, and service ecosystems, Kasa and Tapo. The first two products I’ve tried out (I’ve since added several more to the stable; stand by for additional details in future blog posts and teardowns) were both Kasa-branded and were also both smart plugs: the HS103, which I subsequently dissected here:

and its more diminutive successor, the EP10:

My so-far sample set is small, so conclusions should be accordingly calibrated. That said, I’ve had no issues with any of the multiple HS103 devices I’ve so far activated here at the residence, whether in the initial setup steps or during subsequent usage. The same can’t be said, however, for the EP10. None of the devices I tried in either of the first two four-packs I purchased would successfully setup-connect to my Wi-Fi network. But both devices in the third two-pack worked fine…at least until I subsequently disassembled one of them. Meet today’s teardown candidate, as usual, accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes:




This particular two-pack was sourced from Amazon’s Resale (formerly Warehouse) sub-site, therefore rationalizing the non-TP-Link sticker stuck to the top of the box:

And since, as I’d mentioned previously, I got the idea to do a comparative teardown between the HS103 and EP10 after sending back for refund the original two four-packs of the latter, I can’t say whether their hardware versions matched this device’s v1.6 ID. v1.0 and v1.8 EP10 designs have also been shipped by the company (all three with multiple firmware releases):

Inside…

and underneath a sliver of literature, along with a bit of protective foam:



is our patient:

whose sibling, I’ve already noted, is in active use:

Some as-usual overview shots to start; the EP10 has dimensions of 2.36 x 1.50 x 1.21 in (60 x 38 x 33 mm) and weighs 0.13 lb (59 g) versus its slightly heftier HS103 predecessor at 2.62 x 1.57 x 1.5 in (66.5 x 40 x 38 mm) and 0.25 lb (113 g):

The LED-augmented on/off, pairing and reset switch is on the left side this time:

Theoretically, at least, the visible presence of a screw head implies a potentially simpler disassembly process as compared to the HS103 of the past. We shall see…


Once again, there’s a seam-inclusive topside, suggestive of the pathway inside:

And, last but not least, the bottom-side stamped specification suite, including the always-insightful FCC ID (2AXJ4EP10):
Speaking of pathways inside, let’s take the first step in the journey, shall we?


I wish I could say the two halves of the case then separated straightaway…but that’d be a lie:




Still, the mission was eventually accomplished, this time with an added bonus: no blood loss!

This YouTuber’s video (which, although it claims to be of an HS103, is actually of an EP10; note the switch location, along with glimpses of the bottom-side markings) bolsters my opinion as to the device’s lingering disassembly difficulty. Alas, I didn’t come across it until afterwards:
Comparatively boring front half first:

including a closeup of the left-side mechanical switch’s translucent insides:

Now for the (rear) half I suspect you all mostly care about:
The relay on the right side is, at least in my v1.6 hardware version of the design, a Hongfa HF32FV-16, the exact same component I found a month back in my HS103 teardown:
However, the one in the video I just showed you, complete with a convenient “v1.8” hardware version sticker atop it, is blue in color, therefore presumably from a different manufacturer. As is the one shown in the FCC certification internal photos, which is sticker-less, but I’m assuming it references the initial v1.0 hardware design. And now for the other end, containing the digital and RF (control and wireless communications) sections, of which I’m most interested, both in an absolute sense and functionally relative to the HS103 predecessor:
Once again, there’s the on/off, pairing, and reset switch, this time right next to the LED, and with both now surrounded by the previously encountered LED-only light leak-preventing foam. The embedded antenna runs along the PCB’s right edge. And the “brains” of the operation at the end of the antenna are seemingly also the same as in the HS103: Realtek’s RTL8710, which, as I noted before, supports a complete TCP/IP “stack” and integrates a 166 MHz Arm Cortex M3 processor core, 512 Kbytes of RAM, and 1 Mbyte of flash memory. The only differences, perhaps reflective of a silicon revision, are in the IC’s bottom two marking lines. The IC in the HS103 says:
08F01H3
G038A2
while the Realtek RTL8710 in the EP10 design is marked as follows for the 2nd and 3rd lines:
08EL0C1
G031A2
Alas, and as with the HS103 precursor, I was unsuccessful in my attempt to free the EP10’s PCB from the rear-half case within which it was ensconced. I’ll alternatively attempt to pacify your curiosity by first pointing out that a scattering-of-passive PCB backside image is included in the FCC certification internal photo set. And I’ll also point you toward another video, this one also showing both PCB sides but also more broadly of interest to me (and you as well, I suspect):
I found it within a Reddit post I stumbled across while doing my initial research. The OP (original poster, for those of you not yet familiar with frequently used Reddit verbiage) had an EP10 whose relay had developed perpetually clicking behavior. Turns out one of the “can” capacitors on the board had gone bad; replacing it restored normal functionality (not to mention ending the din). Note that the relay in the version of the hardware shown in this video (which I think also says v1.8, although the video-frame images aren’t clear) is also blue in color.
(Not-)working theories
This internal information is all well and good, I hope you agree, but it still doesn’t answer my fundamental question: why was I successful in using only a subset of the EP10s I tried setting up? I’ll first reiterate something I said in my initial December 2025 coverage:
I wondered if these particular smart plugs, which, like their seemingly more reliable HS103 precursors, are 2.4 GHz Wi-Fi-only, were somehow getting confused by one or more of the several relatively unique quirks of my Google Nest Wifi wireless network:
- The 2.4 GHz and 5 GHz Wi-Fi SSIDs broadcast by any node are the same name, and
- Being a mesh configuration, all nodes (both stronger-signal nearby and weaker, more distant, to which clients sometimes connect instead) also have the exact same SSID.
If I was right, the issue might have been caused by an EP10 software shortcoming, which a newer version of the firmware could conceivably resolve. But this leads to a chicken-and-egg situation. Downloading and installing the latest firmware to the device requires that I first connect the EP10 to TP-Link’s “cloud” firmware repository via my smartphone intermediary. But absent a sufficiently functional initial firmware version, I can’t get the device online in the first place. To wit, note that the TP-Link devices’ lack of Bluetooth support precludes using this alternative wireless communications interface to get them updated; it’s Wi-Fi or nothing.
A fundamental hardware limitation is also a possibility, of course. Via both documented and pictorial evidence, I’m aware (as, now, are you as well) of at least three different hardware versions of the EP10. For that matter, TP-Link’s website currently lists six different hardware versions of the HS103 “in the wild”, ranging from v1.0 to v5.8. All five of the HS103s currently active in my home are v5 units, the Kasa app conveniently tells me via the Device Info screen in each device’s advanced settings. Again, the sample sizes are small and therefore statistically suspect: did I just get lucky with the HS103s, and unlucky with the first two batches of EP10s?
With that, I’ll wrap up and refer you to the comments section below for any answers you might be willing to publicly posit for my closing questions, and/or any other thoughts you might have! Stay tuned, as I alluded to earlier both in this post and a prior one in the series, for additional teardowns to come of products from both TP-Link’s Kasa and Tapo smart plug families, along with other, potentially even more interesting, smart home ecosystem devices.
—Brian Dipert is the Principal at Sierra Media and a former technical editor at EDN Magazine, where he still regularly contributes as a freelancer.
Related Content
- Tapo or Kasa: Which TP-Link ecosystem best suits ya?
- How smart plugs can make your life easier and safer
- Teardown: Smart plug adds energy consumption monitoring
- Teardown: A Wi-Fi smart plug for home automation
- Teardown: Smart switch provides Bluetooth power control
❤️ KPI Chess Championship!
Registration is now open for the grandest university tournament in many years. The first rounds take place on 10 March. Hurry to sign up for the KPI Chess Championship!
BluGlass enters AUS$1.25m development program with US tier-1 defence prime for visible GaN DFB lasers and gain chips
R&S’s next-generation Wi-Fi 8 access point testing in collaboration with NETGEAR
NETGEAR has selected the CMP180 radio communication tester from Rohde & Schwarz for the development of future Wi-Fi 8 access points. By integrating the tester into their design validation test environment, NETGEAR will be able to speed up the development of performance-optimised Wi-Fi products.
Rohde & Schwarz, a leading supplier of test and measurement equipment for wireless applications, and NETGEAR, a manufacturer of advanced networking technologies and leading-edge Wi-Fi 7 products, are collaborating to get next-generation Wi-Fi 8 products ready for the market.
Wi-Fi 8 is the next generation of Wi-Fi, based on the upcoming IEEE 802.11bn standard. With its focus on ultra-high reliability (UHR) WLAN, this new technology will improve the wireless user experience in homes, offices and factories: high-speed connectivity under all conditions, with low latency for gaming, learning and working applications that will use augmented reality (AR) and virtual reality (VR) to provide an immersive user experience.
Design validation of Wi-Fi 8 access points requires test solutions that support the latest Wi-Fi 8 features, such as distributed resource units (DRU) and unequal modulation (UEQM), with up to 320 MHz-wide channels in all supported bands and the highest modulation schemes (4096QAM), while providing the measurement performance (EVM) and scalability (4×4 MIMO) required to optimise wireless device performance.
Rohde & Schwarz provides NETGEAR with the CMP180 radio communication tester, a future-proof non-signalling testing solution for wireless devices, which can be used in research, development, validation and production. It supports many cellular and non-cellular technologies, including the latest Wi-Fi 6E, Wi-Fi 7, Wi-Fi 8 and 5G NR FR1 in frequencies up to 8 GHz and bandwidths of up to 500 MHz.
The CMP180 comes equipped with two analysers, two generators and two times eight RF ports in a single box, plus the possibility to scale up by stacking several testers. This makes it a cost-efficient test solution with best-in-class performance, addressing current and future test demands.
While its fast multi-DUT testing capabilities make the CMP180 ideal for testing in mass-production test environments, test engineers can use the instrument throughout the entire development cycle: from engineering validation tests (EVT), design validation tests (DVT) and production validation tests (PVT) to mass production (MP).
Joseph Emmanuel, VP, Consumer Business Unit HW Engineering at NETGEAR, says: “Working with Rohde & Schwarz enables us to bring our Wi-Fi 8 products on the market with the expected high quality and extremely high performance for the best multi-gigabit Wi-Fi experience everywhere at home.”
Goce Talaganov, Vice President Mobile Radio Testers at Rohde & Schwarz, says: “We are grateful for the close collaboration with NETGEAR on the latest Wi-Fi 8 technology. Our experience in wireless device testing and early cooperation with Wi-Fi 8 chipset and device vendors helped us to improve our test solution for the upcoming broad Wi-Fi 8 market.”
Smartphone production grows 2.5% to 1.25 billion units in 2025
Redefining Precision: How CNC Robotics is Transforming Machining with SINUMERIK Machine Tool Robot
Courtesy: Siemens
Walk into any modern factory and you’ll meet robots: palletising, tending, loading, unloading. Useful? Absolutely. But suggest using them for more advanced tasks, like machining steel to tight tolerances, and you’ll hear the old refrain from textbooks and lectures: “Robots aren’t rigid enough.” That was then. Today, CNC robots close the gap between the agility of industrial robots and the precision of machine tools. And at the centre of that shift is the SINUMERIK Machine Tool Robot (MTR) – the first robot we can confidently call a machining asset, capable of milling steel and more, not just an automation helper.
In this article, you can find the following three things:
- Learn what “CNC robotics” means and what robots do on the shopfloor today.
- Understand why the SINUMERIK Machine Tool Robot is different.
- Zoom out to CNC Robotics as a whole and the practical benefits you can expect.
Along the way, we will challenge a couple of comfortable assumptions in the industry. Consider it an invitation to rethink what a robot can be used for, and where it unleashes new automation potential.
CNC Robotics: From “Good at Handling” to “Great at Machining”
For years, robots excelled at tasks with low process forces, such as handling, assembly, welding, or laser cutting. They’re flexible, they have reach, and they integrate well around machines. But whenever we crossed the line into machining, conventional robot mechanics and their controls hit a wall: insufficient stiffness and path accuracy under load, slow machining, and vibrations. That reality entrenched a mindset: “Let machines machine, let robots move things.”
SINUMERIK CNC robotics aims to break that model by putting CNC-grade motion control and digital workflows into the robot’s core. With SINUMERIK, that means:
- A control concept that treats the robot like a machine tool, not a black‑box auxiliary.
- Integration into the SINUMERIK ONE CNC environment (including a digital twin for simulation and validation before the first cut).
- A solution family that spans from simple connections for handling through to full high‑precision motion control of machines using robot kinematics, meeting you where you are on the automation journey.
“If robots still strike you as unsuitable for high‑precision tasks, the latest developments may surprise you.”
Meet the SINUMERIK Machine Tool Robot: a Robot That Machines Like a Machine
At the core of the story is this: Siemens developed the SINUMERIK Machine Tool Robot (MTR) technology, combining the agility of a 6‑axis robot with the precision of a CNC machine tool. So how did we achieve that?
- Machine‑tool‑grade control: The MTR is controlled by SINUMERIK ONE, Siemens’ digital‑native CNC. It lets a robot inherit machine‑tool behaviours for high-precision path tasks.
- Measured gains: Compared to conventional industrial robots, users can expect over 200% higher path accuracy and significantly higher dynamic stiffness. That’s the difference between “good enough for trimming” and “great even for steel.”
- Real productivity: The new control concept delivers 20–40% productivity increases, which is also compelling in non‑process-force path processes (laser, waterjet) where speed and path smoothness dominate.
Now let’s add something from the shopfloor perspective that we don’t say often enough: the user experience is as critical as the physics of the process. With SINUMERIK ONE, the digital twin lets you verify programs, validate reach and sequencing, and fine-tune before you ever stop the line – all with existing machine-tool programming know-how. Commissioning becomes a digital problem first and a hardware problem second, and that is a non-trivial cultural shift.
Recognition Matters: Inventor of the Year
Breakthroughs like this don’t exist in a vacuum. The hybrid-drive system, developed together with the Fraunhofer Institute for Manufacturing Technology and Advanced Materials and Siemens colleagues, was recognised with the Siemens “Inventor of the Year” award.
“Swiss Army Knife” Machining – Brought to Life by Hybrid Drive Innovation
A core complaint against machining with robots has been stiffness under process forces, especially in heavy-duty machining of steel or tough alloys. Here’s where an innovative hybrid drive concept changes the picture.
- By combining the strengths of direct motors (precision, speed) and geared motors (robustness, power), the hybrid approach delivers both sensitivity and muscle.
- Robots equipped this way stay stable and have low vibration at high feed rates, even under strong process‑force excitation, approaching the precision and dynamics of classic machine tools.
- The result is a robot that genuinely evolves into the “Swiss Army knife” of manufacturing: precision machining where needed, agile flexibility everywhere else, with a smaller overall footprint.
This isn’t just a technical refinement; it affects practical operations: it can reduce floor‑space requirements and lower energy use per part.
The post Redefining Precision: How CNC Robotics is Transforming Machining with SINUMERIK Machine Tool Robot appeared first on ELE Times.
Infineon and Subaru’s collaboration improves driver safety by enhancing real-time performance in advanced driver assistance systems
Infineon Technologies AG and Subaru Corporation are collaborating to enhance driver safety, confidence and comfort in future Subaru vehicles. Infineon plays a key role in Subaru’s integrated electronic control unit (ECU) for next‑generation advanced driver assistance systems (ADAS) and vehicle motion control: Infineon’s latest AURIX microcontroller (MCU) enhances the real-time capability of this ECU compared to previous generations, supporting faster, more reliable processing of vehicle and sensor information.
“As advanced driver assistance systems become more sophisticated, reliable real-time operation across the entire system is key,” said Peter Schaefer, Executive Vice President and Chief Sales Officer, Automotive at Infineon. “With our market-leading microcontroller family AURIX, we support Subaru in building the foundation needed to deliver dependable decision-making and control across the vehicle.”
“Subaru is working on the development of an integrated electronic control unit that coordinates next‑generation EyeSight and vehicle motion control for future Subaru vehicles,” said Eiji Shibata, Executive Officer and Chief Digital Car Officer, Subaru Corporation. “Infineon’s AURIX MCU is a core technology that will support robust sensor data fusion and real‑time control within this integrated ECU, and is a key element enabling the evolution of next‑generation ADAS and vehicle motion control. We have built a strong relationship of trust with Infineon over many years and have collaborated from the early stages of development to optimise the design of the AURIX MCU. We value this trusted partnership and look forward to Infineon’s next‑generation MCU.”
In the integrated ECU, Subaru leverages Infineon’s most advanced automotive MCU – the AURIX TC4x – to strengthen computing and in-vehicle networking. AURIX TC4x will serve as the main controller for the next-generation ADAS functionality handled by the ECU. It enables real-time sensor data fusion, decision-making, and control using inputs from camera, radar, and other sensors, delivering faster and more reliable driver assistance functions. The TC4x combines up to six cores running at 500 MHz in lockstep operation with automotive functional safety up to ASIL-D.
Infineon and Subaru have already collaborated for Subaru’s current generation ADAS. Both companies will deepen their collaboration around in-vehicle computing and networking in the future and will drive technology development and value creation toward safer and more secure mobility.
One year ago, Infineon climbed to the number one position in the global microcontroller market, having already reached the top spot in automotive microcontrollers in 2023. Since then, the company has further strengthened its position by developing technology ready to meet car manufacturers’ future MCU requirements, such as paving the way for RISC-V to become the open standard for automotive MCUs. Infineon has also strengthened its MCU-adjacent product portfolio through the acquisition of Marvell’s automotive Ethernet business in August 2025. With this move, Infineon has created the industry’s most comprehensive system offering for centralised computing architectures in software-defined vehicles.
The post Infineon and Subaru’s collaboration improves driver safety by enhancing real-time performance in advanced driver assistance systems appeared first on ELE Times.
E/E Architecture Redefined: Building Smarter, Safer, and Scalable Vehicles
The automotive industry is shifting toward a new generation of electrical and electronic architectures, moving from distributed ECUs to domain and zonal systems centered around centralized computing. This webinar covers the technical drivers behind that change and the engineering impacts on modern vehicle design.
Attendees will learn how wiring optimization, functional safety, cybersecurity, network speed, and subsystem integration influence architectural choices across the vehicle platform. Join Vishal Barde, Associate Director of Automotive Engineering at eInfochips, as he shares real-world examples of how OEMs are speeding up their move to scalable, software-ready architectures. Engineers and system architects will leave with useful insights they can apply to current and future projects.
Equip yourself with the architectural blueprints needed to lead the shift toward software-defined vehicles.
Access the webinar here!
The post E/E Architecture Redefined: Building Smarter, Safer, and Scalable Vehicles appeared first on ELE Times.
Designing LIDAR on a Chip: A Multiphysics Simulation Workflow for Integrated Photonics
Courtesy: Keysight
Introduction
LIDAR (Light Detection and Ranging) has become a cornerstone technology for autonomous vehicles, enabling high-resolution spatial mapping and object detection. As the industry pushes toward scalable, cost-effective solutions, LIDAR on a chip has emerged as a compelling alternative to traditional mechanical systems. Its advantages—compactness, robustness, and the absence of moving parts—make it an excellent candidate for large volume manufacturing.
However, achieving a commercially viable on-chip LIDAR requires careful optimisation. Designers must minimise insertion loss, maximise output optical power, broaden beam steering range, and narrow the emitted beam. To meet these challenges, reliable and specialised photonic simulation tools are essential for reducing development cycles and ensuring high-performance designs.
Overall Design and Simulation Strategy
To efficiently design a LIDAR-on-chip system, the device is decomposed into functional blocks, each simulated using the most appropriate tool from the RSoft Photonic Device Tools suite:
- Cascaded 1×32 splitter – BeamPROP BPM: BPM is ideal for 1×2 splitters because backward reflection is negligible and the structures vary slowly along the propagation direction.
- Thermo-optical phase shifter – BeamPROP BPM + Multiphysics Utility: BPM handles the optical propagation, while the Multiphysics Utility computes the temperature-dependent refractive-index perturbations.
- Emitter (grating antenna array) – FullWAVE FDTD: FDTD (Finite-Difference Time-Domain) is required to capture omnidirectional light propagation and to model the grating couplers accurately.
This modular approach ensures each component is optimised using the most accurate and computationally efficient method available.

Step-by-Step Design of Individual Components
Power Splitter
A splitter tree is constructed using cascaded 1×2 splitters—either MMI or Y‑branch designs.
1×2 MMI splitters
- Low insertion loss (~0.3 dB)
- Robust to asymmetric input
- More complex to design
- Wavelength sensitive, limited bandwidth, polarisation dependent
Y‑branch splitters
- Simple geometry (two S‑bends)
- Broadband and polarisation independent
- Higher insertion loss (~2 dB)
- Less tolerant to asymmetric input
Both structures are well-suited to BeamPROP BPM, which solves one-way wave equations under assumptions of slow structural variation and monochromatic excitation.

After optimising width and length using 2.5D (2D‑EIM) BPM, sensitivity analyses were performed for symmetric and asymmetric inputs. The final 1×32 splitter tree uses four levels of 1×2 MMIs, followed by a fifth level of Y‑branches where MMIs become too large to fit the remaining layout area.
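To make the loss budget concrete, here is a minimal Python sketch. It assumes (our interpretation, not stated in the article) that the quoted ~0.3 dB and ~2 dB figures are excess losses per 1×2 stage, on top of the intrinsic 3 dB split; the stage counts match the four MMI levels plus one Y-branch level described above.

```python
import math

# Assumption: the ~0.3 dB (MMI) and ~2 dB (Y-branch) figures quoted above
# are excess losses per 1x2 stage, on top of the intrinsic 3 dB split.
MMI_EXCESS_DB = 0.3
Y_EXCESS_DB = 2.0

def tree_loss_db(n_outputs: int, mmi_levels: int, y_levels: int) -> float:
    """Loss from the input to any single output of a 1xN splitter tree."""
    assert 2 ** (mmi_levels + y_levels) == n_outputs
    intrinsic = 10 * math.log10(n_outputs)  # unavoidable 1/N power split
    excess = mmi_levels * MMI_EXCESS_DB + y_levels * Y_EXCESS_DB
    return intrinsic + excess

# Four MMI levels plus a final Y-branch level, as in the design above.
print(f"1x32 loss per output: {tree_loss_db(32, 4, 1):.2f} dB")
# -> 15.05 dB intrinsic + 3.2 dB excess = 18.25 dB
```

Under these assumptions, each output sits roughly 18 dB below the input, which illustrates why minimising per-stage excess loss matters so much in a 32-channel tree.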

Thermo-Optical Phase Shifter
Silicon’s strong thermal sensitivity (dn/dT = 0.00024/K) enables phase tuning by heating waveguide arrays. Unequal heating introduces phase delays between channels, steering the output beam.

The workflow:
- Solve the thermal diffusion equation to obtain the temperature distribution.
- Convert the temperature profile into a refractive-index perturbation.
- Run BPM on the perturbed structure to compute the amplitude and phase at each output.
- Perform far-field analysis to reveal the resulting beam steering.
For a temperature change of ΔT = 50 °C, the phase difference between adjacent waveguides is 120°. BPM predicts a steering angle of 15°, matching the theoretical value from the phased-array relation sin θ = Δφ·λ/(2π·d), where Δφ is the inter-channel phase step and d is the emitter pitch.
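As a cross-check, these numbers can be reproduced in a few lines of Python. The wavelength (1.55 µm) and emitter pitch (2 µm) are assumptions on our part, chosen as typical silicon-photonics values; with them, the quoted 15° steering angle falls out of the phased-array relation.

```python
import math

DN_DT = 2.4e-4        # silicon thermo-optic coefficient from the text, 1/K
LAMBDA_UM = 1.55      # free-space wavelength, um (assumed, not in the text)
PITCH_UM = 2.0        # emitter pitch, um (assumed, not in the text)

delta_T = 50.0        # K, temperature change from the example above
delta_phi_deg = 120.0 # resulting phase step between adjacent channels

# Heated length needed for that phase step: delta_phi = (2*pi/lambda)*dn*L
dn = DN_DT * delta_T
heated_len_um = (delta_phi_deg / 360.0) * LAMBDA_UM / dn
print(f"heated length for 120 deg: {heated_len_um:.1f} um")  # ~43 um

# Phased-array steering: sin(theta) = delta_phi * lambda / (2*pi * d)
delta_phi_rad = math.radians(delta_phi_deg)
theta_deg = math.degrees(
    math.asin(delta_phi_rad * LAMBDA_UM / (2 * math.pi * PITCH_UM))
)
print(f"steering angle: {theta_deg:.1f} deg")  # ~15 deg, as quoted
```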
Emitting Gratings
To couple light out of the chip efficiently (near-orthogonally) with minimal divergence, the grating must be properly apodised. FDTD optimisation yields an optimal tapered-width grating profile, normalised to the grating length (Fig. 5), as sketched below.
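For illustration only, here is a minimal NumPy sketch of one common apodisation strategy: pick a scattering-strength profile that emits roughly uniform power along the grating (which narrows the far-field beam), then map it to tooth width through a hypothetical linear calibration. The period count, margin, and width range are all assumptions; in a real design, the mapping comes from FDTD parameter sweeps as described above.

```python
import numpy as np

N = 64                         # number of grating periods (assumed)
z = (np.arange(N) + 0.5) / N   # period positions, normalised to grating length

# Scattering strength that radiates roughly uniform power as the guided
# wave depletes: alpha(z) ~ 1 / (2 * (1 + margin - z)). The small margin
# keeps alpha finite at the far end (assumption).
margin = 0.05
alpha = 1.0 / (2.0 * (1.0 + margin - z))
alpha /= alpha.max()           # normalise to the strongest period

# Hypothetical linear calibration from scattering strength to tooth width;
# in practice this mapping is extracted from FDTD parameter sweeps.
W_MIN_UM, W_MAX_UM = 0.10, 0.30
width_um = W_MIN_UM + (W_MAX_UM - W_MIN_UM) * alpha
print(np.round(width_um[:4], 3), "...", np.round(width_um[-4:], 3))
```

The resulting tooth widths grow along the grating, compensating for the decaying guided power so that each period radiates a comparable amount, which is the qualitative behaviour of the tapered-width profile described above.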
The post Designing LIDAR on a Chip: A Multiphysics Simulation Workflow for Integrated Photonics appeared first on ELE Times.
Mastering EDA Tools: How India is Upskilling 85,000 Engineers for the Global Chip Race
The Government of India’s ‘Chips to Startups’ (C2S) programme, under the India Semiconductor Mission, has made tremendous progress, completing the training of 85,000 engineers in semiconductor design over the past decade. Students across 315 academic institutions have been trained under the current chip design programme.
Union Minister for Electronics and IT, Ashwini Vaishnaw, highlighted how the programme has given students experience with world-class EDA tools from companies such as Synopsys, Cadence, Renesas, AMD, Ansys, and Siemens. These tools have provided the hands-on learning students will need when they step into the industry. So far, the ministry claims to have recorded more than 1.85 crore (18.5 million) hours of EDA tool usage for chip design training.
The training under this programme comprises a holistic experience from design and fabrication to packaging and testing. The chips designed by students are tested at Mohali’s Semiconductor Laboratory. This allows them to understand the complete semiconductor development cycle.
Students from across the country are taking this training, and the government aims to expand this programme under the second edition of the India Semiconductor Mission by raising the number of affiliated institutions from 315 to 500.
As the electronics industry is expected to grow manifold in the coming years, the need for a skilled workforce is bound to rise exponentially too. Hence, upskilling and thorough training programmes are essential to keeping India at the forefront of this global race.
By: Shreya Bansal | Sub-Editor
The post Mastering EDA Tools: How India is Upskilling 85,000 Engineers for the Global Chip Race appeared first on ELE Times.