Feed aggregator
How AI Is Powering the Road to Level 4 Autonomous Driving
Courtesy: Nvidia
When the Society of Automotive Engineers established its framework for vehicle autonomy in 2014, it created the industry-standard roadmap for self-driving technology.
The levels of automation progress from level 1 (driver assistance) to level 2 (partial automation), level 3 (conditional automation), level 4 (high automation) and level 5 (full automation).
Predicting when each level would arrive proved more challenging than defining them. This uncertainty created industry-wide anticipation, as breakthroughs seemed perpetually just around the corner.
That dynamic has shifted dramatically in recent years, with more progress in autonomous driving in the past three to four years than in the previous decade combined. Below, learn about recent advancements that have made such rapid progress possible.
What Is Level 4 Autonomous Driving?
Level 4 autonomous driving enables vehicles to handle all driving tasks within specific operating zones, such as certain cities or routes, without the need for human intervention. This high automation level uses AI breakthroughs including foundation models, end-to-end architectures and reasoning models to navigate complex scenarios.
Today, level 4 “high automation” is bringing the vision of autonomous driving closer to a scalable, commercially viable reality.
Six AI Breakthroughs Advancing Autonomous Vehicles
Six major AI breakthroughs are converging to accelerate level 4 autonomy:
- Foundation Models
Foundation models can tap internet-scale knowledge, not just proprietary driving fleet data.
When humans learn to drive at, say, 18 years old, they’re bringing 18 years of world experience to the endeavour. Similarly, foundation models bring a breadth of knowledge — understanding unusual scenarios and predicting outcomes based on general world knowledge.
With foundation models, a vehicle encountering a mattress in the road or a ball rolling into the street can now reason its way through scenarios it has never seen before, drawing on information learned from vast training datasets.
- End-to-End Architectures
Traditional autonomous driving systems used separate modules for perception, planning and control — losing information at each handoff.
End-to-end autonomy architectures have the potential to change that. With end-to-end architectures, a single network processes sensor inputs directly into driving decisions, maintaining context throughout. While the concept of end-to-end architectures is not new, architectural advancements and improved training methodologies are finally making this paradigm viable, resulting in better autonomous decision-making with less engineering complexity.
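To make the idea concrete, here is a minimal sketch of such a network in PyTorch; the inputs, layer sizes and outputs are illustrative assumptions rather than any production driving stack.

```python
# Minimal end-to-end driving policy sketch (illustrative only): one network
# maps raw sensor input directly to control commands, with no hand-off
# between separate perception, planning and control modules.
import torch
import torch.nn as nn

class EndToEndPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        # Visual encoder: camera frames -> compact feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head fuses image features with ego state (speed, yaw rate) and
        # outputs [steering, acceleration] in a single forward pass.
        self.head = nn.Sequential(
            nn.Linear(32 + 2, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, camera, ego_state):
        features = self.encoder(camera)
        return self.head(torch.cat([features, ego_state], dim=-1))

policy = EndToEndPolicy()
frame = torch.randn(1, 3, 224, 224)   # one RGB camera frame
ego = torch.tensor([[12.0, 0.01]])    # speed (m/s), yaw rate (rad/s)
controls = policy(frame, ego)         # tensor of shape (1, 2)
```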
- Reasoning Models
Reasoning vision language action (VLA) models integrate diverse perceptual inputs, language understanding, and action generation with step-by-step reasoning. This enables them to break down complex situations, evaluate multiple possible outcomes and decide on the best course of action — much like humans do.
Systems powered by reasoning models deliver far greater reliability and performance, with explainable, step-by-step decision-making. For autonomous vehicles, this means the ability to flag unusual decision patterns for real-time safety monitoring, as well as post-incident debugging to reveal why a vehicle took a particular action. This improves the performance of autonomous vehicles while building user trust.
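As a rough illustration of how such step-by-step output could be captured for monitoring and post-incident review, the record structure below logs a scene summary, candidate actions with rationales, and a simple near-tie flag; the field names and the flagging rule are assumptions for this sketch, not the output schema of any specific VLA model.

```python
# Illustrative reasoning-trace record for an explainable driving decision.
# Field names and the anomaly rule are assumptions, not a real model's schema.
from dataclasses import dataclass, field

@dataclass
class CandidateAction:
    action: str          # e.g. "brake hard", "nudge left", "maintain speed"
    rationale: str       # model's stated reason for considering it
    score: float         # model's preference, 0.0 - 1.0

@dataclass
class ReasoningTrace:
    scene_summary: str                       # what the model says it perceives
    candidates: list[CandidateAction] = field(default_factory=list)
    chosen: str = ""

    def flag_for_review(self, margin: float = 0.1) -> bool:
        """Flag near-ties between candidates as unusual decisions worth
        real-time monitoring or later debugging."""
        scores = sorted((c.score for c in self.candidates), reverse=True)
        return len(scores) >= 2 and (scores[0] - scores[1]) < margin

trace = ReasoningTrace(
    scene_summary="ball rolling into street from the right, parked cars nearby",
    candidates=[
        CandidateAction("brake hard", "a child may follow the ball", 0.85),
        CandidateAction("maintain speed", "no pedestrian currently visible", 0.40),
    ],
    chosen="brake hard",
)
print(trace.flag_for_review())  # False: clear margin, no review needed
```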
- Simulation
With physical testing alone, it would take decades to test a driving policy in every possible driving scenario, if ever achievable at all. Enter simulation.
Technologies like neural reconstruction can be used to create interactive simulations from real-world sensor data, while world models like NVIDIA Cosmos Predict and Transfer produce unlimited novel situations for training and testing autonomous vehicles.
With these technologies, developers can use text prompts to generate new weather and road conditions, or change lighting and introduce obstacles to simulate new scenarios and test driving policies in novel conditions.
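A toy sketch of that workflow is a parameter sweep over conditions, replaying a driving policy in each generated scenario; the generation and evaluation functions below are hypothetical stand-ins for whatever simulation or world-model API a team actually uses.

```python
# Illustrative scenario sweep: enumerate weather / lighting / obstacle
# variations and replay a driving policy in each. `generate_scene` and
# `run_policy` are hypothetical stand-ins for a real simulation API.
from itertools import product

WEATHER = ["clear", "heavy rain", "dense fog"]
LIGHTING = ["noon", "dusk", "night"]
OBSTACLES = ["none", "mattress in lane", "ball rolling into street"]

def generate_scene(weather: str, lighting: str, obstacle: str) -> str:
    # In practice a world model would render sensor data from this prompt.
    return f"{weather}, {lighting}, obstacle: {obstacle}"

def run_policy(scene: str) -> bool:
    # Placeholder: evaluate the driving policy and report pass/fail.
    return "fog" not in scene or "obstacle: none" in scene

failures = []
for weather, lighting, obstacle in product(WEATHER, LIGHTING, OBSTACLES):
    scene = generate_scene(weather, lighting, obstacle)
    if not run_policy(scene):
        failures.append(scene)

total = len(WEATHER) * len(LIGHTING) * len(OBSTACLES)
print(f"{len(failures)} of {total} scenarios failed")
```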
- Compute Power
None of these advances would be possible without sufficient computational power. The NVIDIA DRIVE AGX and NVIDIA DGX platforms have evolved through multiple generations, each designed for today’s AI workloads as well as those anticipated years down the road.
Co-optimization matters. Technology must be designed anticipating the computational demands of next-generation AI systems.
- AI Safety
Safety is foundational for level 4 autonomy, where reliability is the defining characteristic distinguishing it from lower autonomy levels. Recent advances in physical AI safety enable the trustworthy deployment of AI-based autonomy stacks by introducing safety guardrails at the stages of design, deployment and validation.
For example, NVIDIA’s safety architecture guardrails the end-to-end driving model with checks supported by a diverse modular stack, and validation is greatly accelerated by the latest advancements in neural reconstruction.
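One simple way to picture a guardrail of this kind, purely as an illustration and not NVIDIA's actual safety architecture, is an independent rule-based check that can clamp or override whatever the learned end-to-end policy proposes:

```python
# Illustrative guardrail pattern: the learned policy proposes a command, and an
# independent rule-based check can veto or clamp it before it reaches the
# actuators. Limits and thresholds here are invented for the sketch.
from dataclasses import dataclass

@dataclass
class Command:
    steering: float      # rad, positive = left
    acceleration: float  # m/s^2

def end_to_end_policy(sensor_frame) -> Command:
    # Placeholder for the learned driving model.
    return Command(steering=0.05, acceleration=1.5)

def safety_guardrail(cmd: Command, time_to_collision_s: float) -> Command:
    """Independent check: clamp hard limits and force braking when an
    obstacle is dangerously close, regardless of what the policy proposed."""
    steering = max(-0.5, min(0.5, cmd.steering))
    acceleration = max(-6.0, min(3.0, cmd.acceleration))
    if time_to_collision_s < 2.0:
        acceleration = min(acceleration, -4.0)   # mandated braking
    return Command(steering, acceleration)

proposed = end_to_end_policy(sensor_frame=None)
final = safety_guardrail(proposed, time_to_collision_s=1.2)
print(final)   # Command(steering=0.05, acceleration=-4.0)
```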
Why It Matters: Saving Lives and Resources
The stakes extend far beyond technological achievement. Improving vehicle safety can help save lives and conserve significant amounts of money and resources. Level 4 autonomy systematically removes human error, the cause of the vast majority of crashes.
NVIDIA, as a full-stack autonomous vehicle company — from cloud to car — is enabling the broader automotive ecosystem to achieve level 4 autonomy, building on the foundation of its level 2+ stack already in production. In particular, NVIDIA is the only company that offers an end-to-end compute stack for autonomous driving.
The post How AI Is Powering the Road to Level 4 Autonomous Driving appeared first on ELE Times.
Revolutionizing System Design with AI-Powered Real-Time Simulation
Courtesy: Cadence
The rising demand for AI infrastructure is driving faster innovation and smarter resource utilization throughout the design lifecycle. Accelerated computing shortens design and simulation cycles, streamlines workflows, and amplifies human creativity through data-driven insights. Together, AI and accelerated computing empower engineers to explore ideas in real time and bring their visions to life. Cadence, with its GPU-accelerated Cadence Fidelity CFD Software, collaborated with NVIDIA to generate high-fidelity data for airframe simulations, running thousands of simulations within weeks on NVIDIA GB200 systems available through the Cadence Millennium M2000 AI Supercomputer. NVIDIA PhysicsNeMo, an AI physics framework, was then used to train a physics-accurate AI surrogate model for a digital twin that provides interactive what-if design changes and analyses for aircraft design.
This breakthrough in real-time simulation is a powerful example of the Cadence strategy for innovation, “The Three Layer Cake,” in action. This strategic framework unifies Cadence’s technology stack and drives our solutions. At the foundation is accelerated compute, exemplified by the Millennium M2000 AI Supercomputer, built with NVIDIA Blackwell systems. In the middle is Cadence’s Fidelity CFD Software, enabling high-fidelity, physics-based modeling of the system under design. At the top sits AI, where frameworks like NVIDIA PhysicsNeMo and Cadence’s AI-driven design intelligence transform simulation data into interactive, predictive digital twins. Combined, these layers form a cohesive platform that empowers engineers to design, simulate, and optimize complex systems faster and more intelligently than ever before. A demonstration of the technology shows real-time airframe performance simulation while varying the design configuration. Other applications, including automotive aerodynamics or aeroacoustics, 3D-IC thermal and electromagnetic analysis, and data center thermal analysis, are possible.
How AI for Physics Is Transforming Engineering Design
Computational fluid dynamics (CFD) is a cornerstone of modern engineering. It allows designers to simulate the flow of fluids—like air over a plane’s wings or fuel through an engine—to predict performance, identify issues, and optimize designs. However, traditional CFD methods are incredibly resource-intensive. Historically, running a single high-fidelity simulation on conventional computing systems could take days or even weeks, limiting the number of design iterations engineers can perform. Applying AI technology speeds the calculations and shortens turnaround time, making real-time what-if design analysis practical.
High-quality results from AI are dependent on accurate and representative training data, in sufficient quantities. The availability of such data for computational engineering purposes is relatively limited in comparison to typical data used to train foundational AI models. In this example, the Cadence Fidelity CFD Software, accelerated on the Millennium M2000 AI Supercomputer, produced the high-quality dataset for the NVIDIA PhysicsNeMo framework. Thousands of detailed, time-dependent simulations were computed in a matter of weeks, with each simulation comprising hundreds of millions of degrees of freedom. This volume of high-quality data, generated from the design itself, is critical to being able to trust the AI’s predictions.
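As a rough, self-contained illustration of the surrogate idea (not the actual Fidelity or PhysicsNeMo workflow), a small regression model can be fit to a handful of precomputed simulation samples and then queried interactively; the input and output quantities below are invented for the sketch.

```python
# Toy surrogate-model sketch: fit a small network to precomputed CFD samples,
# then query it interactively. Inputs/outputs are invented (angle of attack,
# Mach number -> lift coefficient); a real dataset would come from the solver.
import torch
import torch.nn as nn

# Pretend these rows were extracted from high-fidelity CFD runs.
X = torch.tensor([[2.0, 0.70], [4.0, 0.70], [6.0, 0.75], [8.0, 0.80],
                  [10.0, 0.80], [12.0, 0.85]])                  # [alpha_deg, Mach]
y = torch.tensor([[0.25], [0.42], [0.58], [0.71], [0.80], [0.84]])  # lift coeff.

surrogate = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-2)

for _ in range(2000):                        # quick fit on the tiny dataset
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(surrogate(X), y)
    loss.backward()
    optimizer.step()

# Interactive "what-if": millisecond-scale queries instead of a new CFD run.
query = torch.tensor([[7.0, 0.78]])
print(f"Predicted lift coefficient: {surrogate(query).item():.3f}")
```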
The collaboration between NVIDIA and Cadence addresses these challenges head-on. By leveraging GPU acceleration and AI, this integrated solution fundamentally changes the speed and scale of engineering simulation.
Cadence and NVIDIA Transform Aerospace and Automotive Design with AI Physics
NVIDIA is unveiling ground-breaking advancements in AI-powered simulation, transforming aerospace and automotive design with up to 500X faster engineering workflows. Cadence is at the forefront of this transformation, leveraging its Fidelity CFD Software with the Millennium M2000 AI Supercomputer built on NVIDIA Blackwell to empower aerospace leaders. By combining high-fidelity Multiphysics simulations with modern accelerated computing, Cadence enables rapid design iteration, enhanced efficiency, and optimized performance for next-generation systems. Together, Cadence and NVIDIA are accelerating innovation and redefining the future of computational engineering.
Shaping the Future of AI Infrastructure
NVIDIA has unveiled the NVIDIA AI Factory Research Center in Virginia, designed to leverage the NVIDIA Vera Rubin platform and NVIDIA DSX blueprint to enable gigawatt-scale AI factory design and development.
To ensure design precision and operational excellence, Cadence is developing a high-fidelity digital twin of the facility through its Reality DC Platform. This platform, integrated with NVIDIA Omniverse libraries, provides a physics-based simulation environment that allows engineers to model thermal, energy, and airflow dynamics across the entire infrastructure—from chip to chiller. By combining computational fluid dynamics (CFD) and Multiphysics analysis, the Cadence Reality DC Platform empowers teams to explore design configurations, predict failure scenarios, and optimize performance before physical implementation.
Together, these innovations pave the way for smarter, more sustainable data center designs—accelerating the journey toward the next generation of AI-powered infrastructure.
The post Revolutionizing System Design with AI-Powered Real-Time Simulation appeared first on ELE Times.
Microchip Technology Expands its India Footprint with a New Office Facility in Bengaluru
Microchip Technology has expanded its India footprint with the acquisition of 1.72 lakh square feet (16,000 square meters) of premium office space at the Export Promotion Industrial Park (EPIP) Zone in Whitefield, Bengaluru. This move highlights the company’s continued focus on strengthening its engineering and design capabilities in the region.
The facility will serve as a strategic extension of Microchip’s Bengaluru Development Center that can easily accommodate over 3,000 employees in the next 10 years. It is designed to support the company’s growing workforce and future hiring plans, encourage stronger collaboration across global and regional teams, and provide them with modern infrastructure for advanced research and development.
Talking on the new acquisition, Srikanth Settikere, vice president and managing director of Microchip’s India Development Center stated, “At Microchip, growth is about creating opportunities as much as scaling operations. With India contributing to nearly 20% of global semiconductor design talent, our new Bengaluru facility will sharpen our advanced IC design focus and strengthen our engagement in one of the country’s most dynamic technology hubs.”
Steve Sanghi, President and CEO of Microchip added, “We recently celebrated Microchip’s 25th anniversary in India and this office acquisition is a testament to our commitment in India. We believe our investments in the region will enable us to both benefit from and contribute to the country’s increasingly important role in the global semiconductor industry.”
The newly acquired facility is Microchip’s second in Bengaluru, complementing its presence in Hyderabad, Chennai, Pune and New Delhi, and reinforcing its long-term commitment to product development, business enablement and talent growth in India. With this expansion, the company further positions itself to deliver innovative semiconductor solutions across industrial, automotive, consumer, aerospace and defense, communications and computing markets.
The post Microchip Technology Expands its India Footprint with a New Office Facility in Bengaluru appeared first on ELE Times.
Finally wired the tp4056 to my controller
Ayo guys, this is a follow-up on my post from 10 days ago about changing the micro USB port on a third-party controller. I finally got the TP4056 and did lots of soldering and sanding of the controller shell, but couldn't get it to stay inside, so it's gonna be external since I only use it once in a while 😅
PCB I got out of a Roomba from 2015
submitted by /u/CIemson
Old Chips Found During Cleanup
Amazing how you can have spare parts sit in drawers for 25 years untouched. I'm a fan of AMD, so I was excited to find that two of these are from them. I wish I had a better microscope to de-cap them and view the die. I'll have to figure out how to see if Evil Monkeyz Designz is interested in any of these for a de-capping. Parts shown above:
Built a desktop PSU from junk I found in the hostel.
I mean, at least it didn't blow up... I really should've gotten a PCB...
Weekly discussion, complaint, and rant thread
Open to anything, including discussions, complaints, and rants.
Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.
Reddit-wide rules do apply.
To see the newest posts, sort the comments by "new" (instead of "best" or "top").
When there are no switches big enough but you still want to launch the project.
I need a switch that can handle some power, and I don't have the patience to wait for one to ship, so what do we do? We take the key ignition with a bonus voltmeter that's meant for an e-scooter, to be able to start it and shut it off with a key.
📰 "Київський політехнік" newspaper, issue No. 39-40 for 2025 (.pdf)
Issue No. 39-40 of the "Київський політехнік" newspaper for 2025 has been published.
EEVblog 1719 - 75kWh Home Storage Battery Expansion!
Identically rated capacitors from the 80s to now
Recapping an Apple IIe and the size difference blew me away.
Gold leg earrings I made from electronic components
submitted by /u/Independent-Gazelle6
Power pole collapse

Two or three days ago, as of this writing, there was a power pole collapse in Bellmore, NY, at the intersection of Bellmore Avenue and Sunrise Highway. The collapsed pole is seen in Figure 1, lying across two westbound lanes of Sunrise Highway. The traffic lights are dark.
Figure 1 Collapsed power pole in Bellmore, NY, temporarily knocking out power.
Going to Google Maps, I took a close look at a photograph of the collapsed pole taken three months earlier, back in July, when the pole was still standing (Figure 2).

Figure 2 The leaning power pole and its damaged wood in July 2025.
The wood at the base of the leaning power pole was clearly, obviously, and indisputably in a state of severe decrepitude.
An older picture of this same pole on Google Maps, taken in December 2022 (Figure 3), shows this pole to have been damaged even at that time. Clearly, the local power utility company had, by inexcusable neglect, allowed that pole damage to remain unaddressed, which had thus allowed the collapse to happen.

Figure 3 Google Maps image of a power pole showing damage as early as December 2022.
Sunrise Highway is an extremely busy roadway. It is only by sheer blind luck that nobody was injured or killed by this event.
A replacement pole was later installed where the old pole had fallen. The new pole’s placement is exactly vertical, but how many other power poles out there are in a similarly unsafe condition as that fallen pole in Bellmore had been?
John Dunn is an electronics consultant and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- A tale about loose cables and power lines
- Shock hazard: filtering on input power lines
- Why do you never see birds on high-tension power lines?
- Misplaced insulator proves fatal
- Ground strikes and lightning protection of buried cables
The post Power pole collapse appeared first on EDN.
Photon Design taking PCSEL simulation solution to PCSEL 2025 workshop
The Current State of the Higher Education Quality Assurance System in Ukraine: Challenges and Prospects
Igor Sikorsky Kyiv Polytechnic Institute became a partner and host venue for the regional seminar of the National Agency for Higher Education Quality Assurance (NAQA), "The Current State of the Higher Education Quality Assurance System in Ukraine: Challenges and Prospects".
How Quantum Sensors and Post-Moore Measurement Tech Are Rewriting Reality
When the chip industry stopped promising effortless doublings every two years, engineers didn’t panic; they changed the problem. Instead of forcing ever-smaller transistors to do the same old sensing and measurement jobs, the field has begun to ask a bolder question: what if measurement itself is redesigned from first physical principles? That shift from “more of the same” to “different physics, different stack” is where the current revolution lives.
This moment is not about one device or one lab; it’s about a system-level pivot. Government labs, hyperscalers, deep-tech start-ups and legacy instrument makers are converging around sensors that read quantum states, neuromorphic edge processors that pre-digest raw physical signals, and materials-level breakthroughs (2D materials, diamond colour centres, integrated photonics) that enable ultra-sensitive transduction. The result is a pipeline of measurement capabilities that looks less like incremental sensor upgrades and more like new senses for machines and humans.
The opening act: credibility and capability
Two facts anchor this moment. First, quantum measurement is leaving the lab and becoming engineering work. Companies are reporting sustained fidelity and performance gains, enabling practical devices rather than one-off demonstrations. Quantinuum’s recent announcements of new trapped-ion systems and record fidelities illustrate the industry’s transition from discovery to engineering at scale.
Second, established compute and platform players are doubling down on quantum ecosystems — not because they expect instant universal quantum computers, but because quantum sensing and hybrid quantum-classical workflows have near-term value. Nvidia’s move to open a quantum research lab in Boston is a concrete example of big tech treating quantum as part of an integrated future compute stack. As Jensen Huang put it when announcing the initiative, the work “reflects the complementary nature of quantum and classical computing.”
The technologies: what’s actually being built
Here are the concrete innovations that are moving from prototype to product:
- Portable optical atomic clocks. Optical lattice clocks have long been the domain of national labs; recent work shows designs that ditch cryogenics and complex laser trees, opening the door to compact, fieldable clocks that could replace GPS time references in telecom, finance, and navigation. (NIST and research groups published simplified optical clock designs in 2024.)
- Diamond (NV-centre) magnetometry. The nitrogen-vacancy (NV) centre in diamond has matured as a practical transducer: ensembles and Faraday-effect architectures now push magnetometry into the femto- to picotesla regime for imaging and geophysics. Recent preprints and lab advances show realistic sensitivity improvements that industry can productize for MEG, non-destructive testing, and subsurface exploration.
- Atom-interferometric gravimetry and inertial sensing. Cold-atom interferometers are being transformed into compact gravimeters and accelerometers suitable for navigation, resource mapping, and structural monitoring — systems that enable GPS-independent positioning and subsurface mapping. Market and technical reports point to rapid commercial interest and growing device deployments.
- Quantum photonics: entanglement and squeezing used in imaging and lidar. By borrowing quantum optical tricks (squeezed light, correlated photons), new imagers and lidar systems push below classical shot-noise limits and succeed in low-light, high-clutter environments, a direct win for autonomous vehicles, remote sensing, and biomedical imaging.
- Edge intelligence + hybrid stacks. The pragmatic path to adoption is hybrid: quantum-grade front-ends feeding neural or neuromorphic processors at the edge that perform immediate anomaly detection or data compression before sending distilled telemetry to cloud AI. McKinsey and industry analysts argue that this hybrid model unlocks near-term value while the pure quantum stack matures. “Quantum sensing’s untapped potential” is exactly this: integrate, don’t wait.
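A minimal sketch of that hybrid pattern, with a simulated sensor stream and a plain rolling-statistics screen standing in for a neuromorphic edge processor (the signal model and threshold are illustrative assumptions):

```python
# Minimal hybrid-stack sketch: a (simulated) quantum magnetometer stream is
# screened at the edge, and only distilled anomaly telemetry is forwarded.
# The signal model and the z-score rule are illustrative assumptions.
import random

def magnetometer_stream(n: int):
    """Simulated sensor readings in picotesla: baseline noise plus one spike."""
    for i in range(n):
        value = random.gauss(0.0, 1.0)
        if i == 700:
            value += 12.0                      # injected anomaly
        yield value

def edge_filter(stream, window: int = 200, threshold: float = 6.0):
    """Rolling z-score screen: forward only statistically unusual samples."""
    history = []
    for i, x in enumerate(stream):
        if len(history) >= window:
            mean = sum(history) / len(history)
            var = sum((h - mean) ** 2 for h in history) / len(history)
            std = var ** 0.5 or 1.0
            if abs(x - mean) / std > threshold:
                yield i, x                     # distilled telemetry for the cloud
        history.append(x)
        if len(history) > window:
            history.pop(0)

for index, reading in edge_filter(magnetometer_stream(1000)):
    print(f"anomaly at sample {index}: {reading:.1f} pT")
```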
Voices from the field
Rajeeb Hazra of Quantinuum captures the transition: the company frames recent hardware advances as a move from research to engineering, and the market reaction underscores that sensors and systems with quantum components are becoming realistic engineering deliverables.
Nvidia’s Jensen Huang framed the strategy plainly when announcing the Boston lab: quantum and classical systems are complementary and will be developed together, a pragmatic admission that integration is the near-term path.
Industry analysts from consulting and market research firms also point to rapid investment and commercialization cycles in quantum technologies, especially sensing, where near-term ROI exists.
(Each of the above citations points to public statements or industry reporting documenting these positions.)
The industrial storyline: how it’s being developed
Three engineering patterns repeat across successful projects:
- Co-design of physics and system: Sensors are designed simultaneously with readout electronics, packaging, and AI stacks. Atomic clocks aren’t just lasers in a box; they are timing engines integrated into telecom sync, GNSS augmentation, and secure-time services.
- Material and integration leaps: High-purity diamonds, integrated photonics, and 2D materials are used not as laboratory curiosities but as manufacturing inputs. The emphasis is on manufacturable material processes that support yield and repeatability.
- Hybrid deployment models: Pilots embed quantum sensors with classical edge compute in aircraft, subsea drones, and industrial plants. These pilots emphasize robustness, calibration, and lifecycle engineering rather than purely chasing sensitivity benchmarks.
The judgment: what will change, and how fast
Expect pockets of rapid, strategic impact, not immediate universal replacement. Quantum sensors will first displace classical approaches where:
(a) there is no classical alternative (gravimetry for subsurface mapping);
(b) small improvements produce outsized outcomes (timekeeping in finance, telecom sync); or
(c) the environment is hostile to classical methods (low-light imaging, non-invasive brain sensing).
Within five years we will see commercial quantum-assisted navigation units, fieldable optical clocks for telecom carriers and defense, and NV-based magnetometry entering clinical and energy-sector workflows. Over a decade, as packaging, calibration standards, and manufacturing mature, quantum-grade measurements will diffuse widely, and the winners will be those who have mastered hybrid systems engineering, not isolated device physics.
What should leaders do now?
- Invest in hybrid stacks: fund pilots that pair quantum front-ends with robust edge AI and lifecycle engineering.
- Prioritize integration, not headline sensitivity: a slightly less sensitive sensor that works reliably in the field beats a lab record every time.
- Build standards and calibration pathways: work with national labs; timekeeping and magnetometry need interoperable, certified standards.
- Secure talent at the physics-engineering interface: hires that understand both decoherence budgets and manufacturable packaging are gold.
The revolution is not a single “quantum sensor” product; it’s a new engineering posture: design sensors from the physics up, integrate them with intelligent edge processing, and industrialize the stack. That is how measurement stops being passive infrastructure and becomes a strategic asset, one that will reshape navigation, healthcare, energy, and national security in the decade to come.
The post How Quantum Sensors and Post-Moore Measurement Tech Are Rewriting Reality appeared first on ELE Times.
“Robots are undoubtedly set to make the manufacturing process more seamless and error-free,” says Anil Chaudhry, Head of Automation at Delta.
“Everywhere. Automation can be deployed from very simple things to the most complicated things,” says Delta’s Anil Chaudhry as he underlines Delta’s innovations in the field of digital twins and human-centric innovation. With industries across the globe preparing to embrace the automation revolution—from advanced assembly lines to robotic arms—ELE Times sits down with Anil Chaudhry, Business Head – Solution & Robotics at Delta Electronics India, and Dr. Sanjeev Srivastava, Head of Industrial Automation at SBP, to discuss Delta’s plans.
In the conversation, both guests talk extensively about the evolution emerging in the industrial automation space, especially with reference to India, and how these solutions are set to support India in securing a significant share of the global semiconductor and electronics market. “The availability of the collaborative robots, or the co-bot, which is called in short form, is one of the stepping stones into Industry 5.0,” says Anil Chaudhry.
A Well-integrated and All-encompassing Approach
“We are in all domains, and we see these domains are well integrated,” says Anil Chaudhry as he reflects on the automation demands of the industry, ranging from power electronics to mobility (such as EV charging), automation (industrial and building automation), and finally the infrastructure (data centers, telecom & renewable energy). He highlights that such an approach makes Delta the most sought-after name in automation tech. “We offer products ranging from an AI-enabled energy management system to edge computing software that can handle the large, all-encompassing processes in an industrial landscape,” he adds.
“It is an integrated solution we have everywhere in our capabilities, and we are integrating all this to make it more enhanced,” says Anil Chaudhry.
Delta’s Key to Efficiency and Productivity
As the conversation touched upon the aspects of efficiency and productivity, Anil Chaudhry was quick to say, “The key to efficiency and productivity lies in no breakdown.” He further says that Delta’s software-enabled programs are equipped to provide predictive failure information to the manufacturing plant, enabling the necessary actions to be taken in advance.
Delta’s Stride through Digital Twins & Industry 5.0
Delta’s digital twin models imitate the manufacturing process, making automation easy and seamless to enable. He says, “The robots are definitely going to enable the manufacturing process to be more seamless and more error-free.” Sharing a glimpse into Delta’s co-bots, he says that the robots are equipped to handle repetitive yet highly accurate and high-demand requirements.
He was quick to underline that Delta not only offers these machines but also a lot of software tools required to make the whole facility run seamlessly, making an end-to-end solution and enabling a stride towards an Industry 5.0 environment.
Delta’s Approach towards Localization
On the subject of localization of manufacturing in India, Dr. Sanjeev Srivastava, Head of Industrial Automation at SBP, highlighted the progress Delta has been making in building a strong ecosystem. “It’s good that we are developing an ecosystem wherein we also have our supply chain integrated with manufacturing, and as we see, a lot of industries are coming into India. Over the years, we will have a very robust supply chain,” he said.
He pointed out that many components are already being sourced locally, and this share is expected to grow further. Confirming Delta’s commitment to Indian manufacturing, Dr. Srivastava added, “Yes, we are manufacturing in India, and we are also exporting to other places. We have the global manufacturing unit, which is supplying to other parts of the world, as well as catering to our Indian market. So it is both domestic and international.”
As the conversation wrapped up, Anil Chaudhry went on to further underline Delta’s overall and larger goal, wherein he says, “So we work on the TC, or total cost of ownership, on how to reduce it with our technology going into the equipment, as well as the overall solution.”
The post “Robots are undoubtedly set to make the manufacturing process more seamless and error-free,” says Anil Chaudhry, Head of Automation at Delta. appeared first on ELE Times.



