Feed aggregator

Kioxia Targets Automotive Applications With New Embedded Flash Memory

AAC - 6 hours 1 min ago
Kioxia says its UFS version 4.1 embedded flash memory device can improve performance and diagnostic capabilities in automotive and data center applications.

Advancing Telecommunications With Edge AI

AAC - Wed, 08/13/2025 - 20:00
By strategically incorporating artificial intelligence throughout their networks, telecom companies can meet demand for better performance, streamlined operations, and improved customer experiences.

Improve the accuracy of programmable LM317 and LM337-based power sources

EDN Network - Wed, 08/13/2025 - 17:52

Several Design Ideas (DIs) have employed the subject ICs to implement programmable current sources in an innovative manner [Editor’s note: DIs referenced in “Related Content” below]. Figure 1 shows the idea.

Figure 1 Two independent current sources, one for loads referenced to the more negative voltage, and the other for those to the more positive one. The Isub current sources control the magnitudes of the currents delivered to the loads.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Each of the ICs works by enforcing Vref = 1.25 V (±50 mV over load current, supply voltage, and operating temperature) between the OUT and ADJ terminals. The Isubs are programmable current sources (PWM-implemented or otherwise) which produce voltage drops Vsubs across the Rsubs.

Given the ADJ terminal currents IADJ (typically 50 µA, maxing out at 100 µA), the load currents can be seen to be:

Iload = [Vref + (IADJ – Isub) · Rsub] / Rsns

When Isub is 0, the load current is at its maximum, Imax, and its uncertainty is a mere ±50 mV / 1250 mV = ±4%. But when Isub rises to yield a desired current of Imax/10, the uncertainty rises to ±40%; the intended fraction of 1.25 V is subtracted, but the unknown portion of the ±50 mV remains. If Imax/25 is desired, the actual load current could be anywhere from 0 to twice that value. Things are actually slightly worse, since the uncertainty in IADJ is a not-insignificant portion of the typically few-milliamp maximum value of Isub.
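For readers who want to see those percentages fall out of the formula, here is a minimal Python sketch of the calculation. The Rsns and Rsub values are arbitrary assumptions chosen only for illustration; the 1.25 V reference, its ±50 mV tolerance, and the typical 50 µA IADJ come from the text, and the IADJ uncertainty is ignored here.

# Worked check of the Figure 1 load-current uncertainty (assumed example values).
VREF_NOM = 1.25      # V, LM317/LM337 nominal reference
VREF_TOL = 0.050     # V, +/-50 mV worst-case reference error
R_SNS = 12.5         # ohms (assumed), giving Imax on the order of 100 mA
R_SUB = 1000.0       # ohms (assumed)
I_ADJ = 50e-6        # A, typical ADJ terminal current

def i_load(v_ref, i_sub):
    """Load current per the expression above (ADJ term included)."""
    return (v_ref + (I_ADJ - i_sub) * R_SUB) / R_SNS

# Pick Isub to hit a target nominal current, then see how far +/-50 mV moves it.
for divider in (1, 10, 25):
    i_target = i_load(VREF_NOM, 0) / divider
    i_sub = I_ADJ + (VREF_NOM - i_target * R_SNS) / R_SUB   # solve for Isub
    i_min = i_load(VREF_NOM - VREF_TOL, i_sub)
    i_max = i_load(VREF_NOM + VREF_TOL, i_sub)
    print(f"Imax/{divider}: nominal {i_target*1e3:.1f} mA, "
          f"range {i_min*1e3:.2f} to {i_max*1e3:.2f} mA "
          f"({(i_max - i_target) / i_target * 100:+.0f}%)")

With these assumed values, the printout reproduces the roughly ±4%, ±40%, and near-±100% spreads described above.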

Circumnavigating the accuracy limitations of reference voltages

Despite the modest accuracy of their reference voltages, these ICs have the advantage of built-in over-temperature limiting. So it’s desirable to find a way around their accuracy limitations. Figure 2 shows just such a method.

Figure 2 Two independent current regulators. The Isub magnitudes are programmable and are often implemented with PWMs. Diodes connected to the ADJ terminals protect the LM ICs during startup. The 0.1-µF supply decoupling capacitors for U1 and U3 are not shown.

The idea of the three-diode string was borrowed from the referenced DIs [Editor’s note: in “Related Content” below]. It ensures that even for the lowest load currents (the LM ICs’ minimum operating current is spec’d at 10 mA max.), the ADJ terminal voltages needn’t be beyond the supply rails.

The OPA186 op-amp’s input operating range extends beyond both supply rails (a maximum of 24 V between them is recommended), and its outputs swing to within 120 mV of the rails for loads of less than 1 mA.

The maximum input offset voltage, including temperature drift and supply voltage variations, is less than ±20 µV. An input current of less than ±5 nA maximum means that for Rsubs of 1 kΩ or less, the total input offset voltage is 2000 times better than the LMs’ ±50 mV.

Placing the LM ICs in this op-amp’s feedback loop improves output current accuracy by a similar factor (but see addendum).
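As a rough cross-check of that improvement factor, here is a back-of-envelope Python sketch using the worst-case numbers quoted above; the 1 kΩ Rsub is the upper bound mentioned in the text.

# Cross-check of the ~2000x claim from the quoted worst-case figures.
V_OS_MAX = 20e-6      # V, op-amp max offset incl. drift and supply variation
I_B_MAX = 5e-9        # A, op-amp max input current
R_SUB = 1000.0        # ohms, Rsub upper bound from the text
LM_REF_TOL = 0.050    # V, LM317/LM337 +/-50 mV reference tolerance

total_opamp_error = V_OS_MAX + I_B_MAX * R_SUB    # 20 uV + 5 uV = 25 uV
improvement = LM_REF_TOL / total_opamp_error      # 50 mV / 25 uV = 2000
print(f"op-amp worst-case error: {total_opamp_error * 1e6:.0f} uV, "
      f"improvement factor: {improvement:.0f}x")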

Adapting Jim Williams’ design for a current regulator

Jim Williams of analog design fame published an application note placing the LM317 in an LT1001-based feedback loop to produce a voltage regulator. Nothing prevents the adaptation of this idea to a current regulator. The LT1001’s typical gain-bandwidth (GBW) product is 800 kHz, almost exactly the 750 kHz of the OPA186, so no stability problems are expected. And there were none when the LM317 circuit was bench-tested with an LM358 op amp (GBW typically 700 kHz), which I had handy.

Just as you would with the Figure 1 designs, make sure the LM ICs are heatsinked for intended operation. Enclosing them in a feedback loop won’t help if their over-temperature circuitry kicks in. But under the temperature limit, this circuit increases not only load current accuracy, but also the IN-terminal impedances and the rejection of both the power supply and the LM’s references’ noises.

Note that some of the reduction in reference voltage error can be traded off to reduce power dissipation by making the Rsns resistors small. You can also convert the design to a precision voltage regulator by replacing the three-diode strings with a resistor and moving the load to between the OUT terminal and its Rsns resistor’s supply terminal.

Addendum

There’s a missing term in the equation given for load current. In Figure 2, the unknown and unaccounted-for amount of ADJ terminal current is added to the load current.

Considering that the LMs’ minimum specified operating current (see the LM317 3-Pin Adjustable Regulator datasheet and LMx37 3-Terminal Adjustable Regulators datasheet)—and therefore the minimum current through the load—is 10 mA at 25°C, the ADJ maximum of 100 µA is small potatoes. Still, there might be applications where it would be desirable to account for it. Figure 3 is a possible solution, although I’ve not bench-tested it.

Figure 3 Replacing the ADJ terminal-connected diodes with JFETs preserves startup protection for the LM ICs.

The ‘201 and ‘270 JFETs route the ADJ terminal current through the Rsns resistors where it can be recognized and accounted for as part of the current that passes through the load. Cheaper bipolar transistors (which would reroute almost all IADJ) could be used in place of the JFETs, but that would require an additional diode in series with the three-diode string.

Christopher Paul has worked in various engineering positions in the communications industry for over 40 years.

Related Content

The post Improve the accuracy of programmable LM317 and LM337-based power sources appeared first on EDN.

My Work Area

Reddit:Electronics - Wed, 08/13/2025 - 13:14
My Work Area

Very on-budget setup. What do you think I should add next? (I've already saved some space for a fume extractor).

submitted by /u/GamingVlogBox

k-Space’s RHEEDSim software available for labs and classrooms

Semiconductor today - Wed, 08/13/2025 - 12:08
k-Space Associates Inc of Dexter, MI, USA — which produces thin-film metrology instrumentation and software — says that its new kSA RHEEDSim reflection high-energy electron diffraction (RHEED) simulation software is now available for both labs and classrooms...

AOI chooses ClassOne’s Solstice S8 system for gold plating and metal lift-off on InP

Semiconductor today - Wed, 08/13/2025 - 11:47
ClassOne Technology of Kalispell, MT, USA (which manufactures electroplating and wet-chemical process systems for ≤200mm wafers) is providing its Solstice S8 single-wafer processing system to Applied Optoelectronics Inc (AOI) of Sugar Land, TX, USA, a designer and manufacturer of optical components, modules and equipment for fiber access networks in the Internet data-center, cable TV broadband, fiber-to-the-home (FTTH) and telecom markets. The system will further strengthen AOI’s capabilities for producing optoelectronic components that power high-speed data and communications infrastructure...

Deep Learning Definition, Types, Examples and Applications

ELE Times - Wed, 08/13/2025 - 10:41

Deep learning is a subfield of machine learning that applies multilayered neural networks to simulate brain-like decision-making. Loosely modeled on how humans learn, it allows machines to learn directly from data, and it underpins many of the AI applications we use today, including speech recognition, image analysis, and natural language processing.

Deep Learning History:

Deep learning traces its origins to the 1940s, when Walter Pitts and Warren McCulloch introduced a mathematical model of neural networks inspired by the human brain. In the 1950s and 60s, pioneers such as Alan Turing and Alexey Ivakhnenko laid the groundwork for neural computation and early network architectures. Backpropagation emerged as a concept in the 1980s but only became practical in the 2000s, once large datasets and sufficient computing power were available. The breakthrough came in 2012, when AlexNet, a deep convolutional neural network, dramatically improved image-classification accuracy. Since then, deep learning has become a driving force for innovation in computer vision, natural language processing, and autonomous systems.

Types of Deep Learning:

Deep learning can be grouped into several learning approaches, depending on how the model is trained and what data is used:

  • Supervised deep learning models are trained on labeled datasets, in which every input is paired with the corresponding output. The model learns to map inputs to outputs so that it can later generalize to unseen data. Popular examples of such tasks are image classification, sentiment analysis, and price or trend prediction.
  • Unsupervised deep learning operates on unlabeled data, with the system expected to uncover underlying structures or patterns on its own. It is used for clustering similar data points, reducing the dimensionality of data, or detecting relationships in large datasets. Examples are customer segmentation, topic detection, and anomaly detection.
  • Semi-supervised deep learning combines a small set of labeled data with a large set of unlabeled data, striking a balance between accuracy and labeling effort in fields such as medicine and fraud detection. Self-supervised deep learning lets models generate their own training labels, enabling NLP and vision tasks that need far less manual annotation.
  • Reinforcement deep learning trains a model through an agent that interacts with an environment and receives rewards or penalties based on its actions. The aim is to maximize the accumulated reward, improving performance over time. This technique is used to train game-playing AIs such as AlphaGo, as well as autonomous navigation and robotic manipulation systems.

Deep learning passes data through a stack of artificial neural network layers, where each successive layer extracts more complex features. The network learns by adjusting its internal weights via backpropagation to minimize prediction errors, eventually enabling it to discern patterns in raw input such as images, text, or speech and make recognition decisions.
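To make that concrete, here is a minimal training-loop sketch (plain NumPy, toy XOR data chosen purely for illustration) in which a two-layer network adjusts its weights by backpropagation to reduce its prediction error; it is a teaching sketch, not a production model.

# Tiny two-layer network trained by backpropagation on XOR (illustration only).
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)               # targets (XOR)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 units, one output unit.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros((1, 1))
lr = 0.5

for step in range(10000):
    # Forward pass: each layer builds a higher-level representation.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the squared-error gradient to every weight.
    d_out = (p - y) * p * (1 - p)
    d_hid = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates that shrink the prediction error.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0, keepdims=True)

print(np.round(p, 2))   # typically converges toward [[0], [1], [1], [0]]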

Deep Learning Applications:

  • Image & Video Recognition: Used in facial recognition, driverless cars, and medical imaging.
  • Natural Language Processing (NLP): Powers chatbots and virtual assistants like Siri and Alexa, and translates between languages.
  • Speech Recognition: Used for voice typing, smart assistants, and live transcription.
  • Recommendation Systems: Personalizes recommendations on Netflix, Amazon, and Spotify.
  • Healthcare: For disease detection, drug discovery, and predictive diagnosis.
  • Finance: Used for fraud detection, assessing risks, and running algorithmic trading operations.
  • Autonomous Vehicles: Enables cars to detect objects, navigate roads, and make driving decisions.
  • Entertainment & Media: Supports video editing, audio generation, and content tagging.
  • Security & Surveillance: Supports anomaly detection and crowd monitoring.
  • Education: Supports the creation of intelligent-tutoring systems and automated grading.

Key Advantages of Deep Learning:

  • Automatic Feature Extraction: Little manual feature engineering is needed; the models learn the important features from raw data on their own.
  • High Accuracy: Performs extremely well on hard perceptual tasks such as image recognition, speech, and language processing.
  • Scalability: Handles huge, heterogeneous datasets, including unstructured data such as text and images.
  • Cross-Domain Flexibility: Offers applications in all sectors, including health care, finance, and autonomous systems.
  • Continuous Improvement: Deep learning models keep improving as more data and more compute (especially GPUs) become available.
  • Transfer Learning: Pretrained models can be adapted to new domains with minimal additional training, reducing the human effort and time required for model engineering (see the sketch after this list).
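As a hedged illustration of that transfer-learning point, the sketch below adapts a pretrained image classifier to a hypothetical 5-class task. It assumes PyTorch and a recent torchvision (0.13 or later) are installed and that the pretrained weights can be downloaded on first use; the class count, batch size, and learning rate are arbitrary illustration values, not recommendations.

# Minimal transfer-learning sketch: reuse a pretrained backbone, retrain only the head.
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pretrained on ImageNet (assumes torchvision >= 0.13).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier for a hypothetical 5-class target domain.
num_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new layer's parameters are updated during training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data standing in for a real dataset.
x = torch.randn(8, 3, 224, 224)             # batch of 8 RGB images
y = torch.randint(0, num_classes, (8,))     # dummy class labels
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()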

Deep Learning Examples:

Deep learning techniques are used in face recognition, autonomous cars, and medical imaging. Chatbots and virtual assistants work through natural language processing, speech-to-text, and voice control; recommendation engines power sites like Netflix and Amazon. In the medical field, it assists in identifying diseases and speeding up the drug-discovery process.

Conclusion:

Deep learning is changing industries because it can make sense of intricate data. The future looks even brighter thanks to advances such as self-supervised learning, multimodal models, and edge computing, which will make AI more efficient, more context-aware, and capable of learning with minimal human assistance. At the same time, explainability and ethics are gaining emphasis, with explainable AI and privacy-preserving techniques receiving growing attention. From tailored healthcare to autonomous systems and intelligent communication, deep learning will continue to transform how we interact with technology and define the next age of human work.

The post Deep Learning Definition, Types, Examples and Applications appeared first on ELE Times.

Nexperia Shrinks Designs With BJTs in Compact CFP15 Packages

AAC - Wed, 08/13/2025 - 02:00
Nexperia’s MJPE-series BJTs in CFP15B format offer smaller footprints and strong thermal performance for automotive and industrial designs.

Custom designed spiderman wall climbers (3d printed suction cups)

Reddit:Electronics - Wed, 08/13/2025 - 00:15
Custom designed spiderman wall climbers (3d printed suction cups)

I am using an Arduino and custom PCBs for control, with a 12 V vacuum pump, a 6 V air-release valve, and two 6 V LiPo batteries. Almost all of this project is 3D printed, with the exception of a couple of metal brackets.

I made a video of this project if you are interested.

submitted by /u/ToBecomeImmortal

S.Korea Elecparts Mystery box

Reddit:Electronics - Tue, 08/12/2025 - 20:53
S.Korea Elecparts Mystery box

I bought it for $8 and got 2,500 pieces: capacitors, MOSFETs, LEDs, transformers... Is this a good price?

There is an unboxing video on my YouTube channel; you can watch it if you're curious.

https://youtu.be/Ld6hYG9f518

submitted by /u/Time_Double_1213

Teledyne e2v Adds 16-GB Variant to Rad-Hard DDR4 Memory Portfolio

AAC - Tue, 08/12/2025 - 20:00
The company claims the new 16-GB DDR4 model is the highest density space-grade DDR4 memory on the market.

How to Build a Variable Constant Current Source with Sink Function

Open Electronics - Tue, 08/12/2025 - 18:04

An adjustable constant current generator is an essential tool for many electronic applications, especially when a stable current is required regardless of the load. This project, designed and built with a PIC16F1765 microcontroller, combines both constant current sourcing and sinking capabilities in one device, with the ability to adjust the value from 0 to 1000 […]

The post How to Build a Variable Constant Current Source with Sink Function appeared first on Open-Electronics. The author is Boris Landoni

Wave Photonics launches PDK Management Platform to integrate foundry PDKs with EDA tools

Semiconductor today - Tue, 08/12/2025 - 17:22
Wave Photonics of Cambridge, UK (which was founded in May 2021 and develops design technology to drive the advancement and mass adoption of integrated photonics) has launched its PDK Management Platform to integrate foundry process design kits (PDKs) with leading electronic design automation (EDA) tools, provide ready-calculated S-parameters for circuit simulation, and provide easy access for designers...

Matchmaker

EDN Network - Tue, 08/12/2025 - 16:38

Precision-matched resistors, diode pairs, and bridges are generic items. But sometimes an extra critical application with extra tight tolerances (or an extra tight budget) can dictate a little (or a lot) of DIY.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1’s matchmaker circuit can help make the otherwise odious chore of searching through a batch of parts for accurately matching pairs of resistors (or diodes) quicker and a bit less taxing. Best of all, it does precision (potentially to the ppm level) matchmaking with no need for pricey precision reference components.  

Here’s how it works.

Figure 1 A1a, U1b, and U1c generate precisely symmetrical excitation of the A and B parts being matched. The asterisked resistors are ordinary 1% parts; their accuracy isn’t critical. The A>B output is positive relative to B>A if resistor/diode A is greater than B, and vice versa.

Matchmaker’s A1a and U1bc generate a precisely symmetrical square-wave excitation (timed by the 100-Hz multivibrator A1b) to measure the mismatch between test parts A and B. The resulting difference signal is boosted by preamp A1d in switchable gains of 1, 10, or 100, synchronously demodulated by U1a, then filtered to DC with a final calibrating gain of 16x by A1c.
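That excite, demodulate, and filter chain is essentially lock-in detection. The Python sketch below models just that step with assumed mismatch and noise levels (none of the values are taken from the schematic), to show how multiplying the difference signal by the excitation reference and averaging recovers a tiny DC mismatch signal buried in noise.

# Lock-in style model of Matchmaker's synchronous demodulate-and-filter stage.
import numpy as np

fs, f_exc, duration = 100_000, 100.0, 1.0           # Hz, Hz, seconds (assumed)
t = np.arange(0, duration, 1.0 / fs)
reference = np.sign(np.sin(2 * np.pi * f_exc * t))  # +/-1 square-wave excitation

mismatch_pct = 0.01                      # assumed 0.01% part-to-part mismatch
signal_amp = 6.25e-3 * mismatch_pct      # ~6.25 mV per 1% mismatch (see further below)
noise = 1e-3 * np.random.default_rng(1).normal(size=t.size)   # 1 mV rms noise

# Bridge difference signal: a tiny square wave buried in much larger noise.
v_diff = signal_amp * reference + noise

# Synchronous demodulation: multiply by the reference, then low-pass (average).
v_dc = np.mean(v_diff * reference)
print(f"recovered {v_dc * 1e6:.1f} uV, expected {signal_amp * 1e6:.1f} uV")

Averaging here plays the role of the output low-pass filter: uncorrelated noise shrinks roughly as the square root of the number of samples, while the synchronous component survives intact.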

The key to Matchmaker’s precision is the Kelvin-style connection topology of the CMOS switches U1b and U1c. U1b, because it carries no significant load current (nothing but the pA-level input bias current of A1a), introduces only nanovolts of error. Meanwhile, the resulting sensing of excitation voltage level at the parts being matched, and the cancellation of U1c’s max 200-Ω on-resistance, is therefore exact, limited only by A1a’s gain-bandwidth at 100 Hz. Since the op-amp’s gain bandwidth (GBW) is ~10 MHz, the effective residual resistance is only 200/10⁵ = 2 mΩ. Meanwhile, the 10-Ω max differential between the MAX4053 switches (the most critical parameter for excitation symmetry) is reduced to a usually insignificant 10/10⁵ = 100 µΩ. The component lead wire and PWB trace resistance will contribute (much) more unless the layout is carefully done.

Matching resistors to better than ±1 ppm = 0.0001% is therefore possible. No ppm level references (voltage or resistance) need apply.
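A quick back-of-envelope Python sketch of how the available loop gain at 100 Hz scales the switch resistances down to the milliohm and microohm figures quoted above (GBW and MAX4053 values as given in the previous paragraph):

# Residual switch resistance after the 100-Hz loop gain is applied.
GBW = 10e6          # Hz, approximate op-amp gain-bandwidth product
F_EXC = 100.0       # Hz, excitation frequency
loop_gain = GBW / F_EXC        # about 1e5 of gain available at 100 Hz

R_ON_MAX = 200.0    # ohms, MAX4053 maximum on-resistance
R_ON_DIFF = 10.0    # ohms, maximum on-resistance difference between switches

print(f"loop gain at {F_EXC:.0f} Hz: {loop_gain:.0e}")
print(f"effective residual on-resistance: {R_ON_MAX / loop_gain * 1e3:.0f} mOhm")
print(f"effective switch mismatch: {R_ON_DIFF / loop_gain * 1e6:.0f} uOhm")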

Output voltage as a function of Ra/Rb % mismatch is maximized when load resistor R1 is (at least approximately) equal to the nominal resistance of the resistances being matched. But because of the inflection maximum at R1/Rab = 1, that equality isn’t at all critical, as shown in Figure 2.

Figure 2 The output level (mV per 1% mismatch at Gain = 1) is not sensitive to the exact value of R1/Rab.

R1/Rab thus can vary from 1.0 by ±20% without disturbing mismatch gain by much more than 1%. However, R1 should not be less than ~50 Ω in order to stay within A1 and U1 current ratings.

Matchmaker also works to match diodes. In that case, R1 should be chosen to mirror the current levels expected in the application, R1 = 2 V / Iapp.

Due to what I attribute to an absolute freak of nature (for which I take no credit whatsoever), the output mV per 1% mismatch of forward diode voltage drop is (nearly) the same as for resistors, at least for silicon junction diodes.

Actually, there’s a simple explanation for the “freak of nature.” It’s just that a 1% differential between legs of the 2:1 Ra/Rb/R1 voltage divider is attenuated by 50% to become 1.25 V/100/2 = 6.25 mV, and 6.25 mV just happens to be very close to 1% of a silicon junction diode’s ~600 mV forward drop.

So, the freakiness really isn’t all that freaky, but it is serendipitous!  Matchmaker also works with Schottky diodes, but due to their smaller forward drop, it will underreport their percent mismatch by about a factor of 3. 
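A short numeric restatement of that serendipity, as a Python sketch; the 600 mV silicon and 200 mV Schottky forward drops are assumed round numbers for illustration.

# Why a 1% resistor mismatch and a 1% silicon-diode Vf mismatch read about the same.
V_EXC = 1.25          # V, excitation amplitude set by the reference
mismatch = 0.01       # 1% part-to-part mismatch

# A 1% leg imbalance in the 2:1 Ra/Rb/R1 divider is halved at the tap:
divider_shift = V_EXC * mismatch / 2          # = 6.25 mV per 1% resistor mismatch

V_F_SI = 0.600        # V, assumed typical silicon-junction forward drop
V_F_SCHOTTKY = 0.200  # V, assumed typical Schottky forward drop

print(f"divider shift per 1% resistor mismatch: {divider_shift * 1e3:.2f} mV")
print(f"1% of a silicon Vf:  {V_F_SI * mismatch * 1e3:.1f} mV (nearly the same)")
print(f"1% of a Schottky Vf: {V_F_SCHOTTKY * mismatch * 1e3:.1f} mV "
      f"(under-reported by ~{divider_shift / (V_F_SCHOTTKY * mismatch):.0f}x)")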

Due to the temperature sensitivity of diodes, it’s a good idea to handle them with thermally insulating gloves. This will save time and frustration waiting for them to equilibrate, not to mention possible, outright erroneous results. In fact, considering the possibility of misleading thermal effects (accidental dissimilar metal thermocouple junctions, etc.), it’s probably not a bad idea to wear gloves when handling resistors, too!

Happy ppm match making!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Matchmaker appeared first on EDN.

Deconstructing the Semiconductor Revolution in Automotive Design: Understanding Composition and Challenges

ELE Times - Tue, 08/12/2025 - 14:08

As the world embraces the age of technology, semiconductors stand as the architects of the digital lives we live today. Semiconductors are the engines running silently behind everything from our smartphones and PCs to the AI assistants in our homes and even the noise-canceling headphones on our ears. Now, that same quiet power is stepping out of our pockets and onto our roads, initiating a second, parallel revolution in the automotive sector.

Turning to the automotive industry, the growing acceptance of electric and autonomous vehicles has necessitated the use of around 1,000 to 3,500 individual chips in a single vehicle, transforming modern-day cars into moving computational giants. This isn’t just a trend; it’s a fundamental rewiring of the car. Asif Anwar, Executive Director of Automotive Market Analysis at TechInsights, validates this, stating that the “path to the SDV will be underpinned by the digitalization of the cockpit, vehicle connectivity, and ADAS capabilities,” with the vehicle’s core electronic architecture being the primary enabler. Features like the Advanced Driver Assistance System (ADAS) are no longer niche; they are central to the story of smart, connected vehicles on the roads. In markets like India, this is about delivering “premium, intelligent automotive experiences,” according to Savi Soin, President of Qualcomm India, who emphasizes that the country is moving beyond low-end models and embracing local innovation.

To understand this revolution—and the immense challenges engineers face—we must first dissect the new nervous system of the automobile: the array of specialized semiconductors that gives it intelligence.

The New Central Nervous System of Automotives

  • The Brains: Central Compute System on Chips (SoC)

It is a single, centralized module comprising high-performance computing units that brings together various functions of a vehicle. These enable modern-day Software-Defined Vehicles (SDVs), where features are continuously enhanced through agile software updates throughout their lifecycle. This capability is what allows automakers to offer what Hisashi Takeuchi, MD & CEO of Maruti Suzuki India Ltd, describes as “affordable telematics and advanced infotainment systems,” by leveraging the power of a centralized SoC.

Some of the prominent SoCs include the Renesas R-Car Family, the Qualcomm Snapdragon Ride Flex SoC, and the STMicroelectronics Accordo and Stellar families. These powerful chips receive pre-processed data from all zonal gateways (regional data hubs) through sensors. Further, they run complex software (including the “Car OS” and AI algorithms) and make all critical decisions for functions like ADAS and infotainment, effectively controlling the car’s advanced operations; hence, it is called the Brain. The goal, according to executives like Vivek Bhan of Renesas, is to provide “end-to-end automotive-grade system solutions” that help carmakers “accelerate SDV development.”

  • The Muscles: Power Semiconductors

Power semiconductors are specialized devices designed to handle high voltage and large currents, enabling efficient control and conversion of electrical power. These are one of the most crucial components in the emerging segment of connected, electric, and autonomous vehicles. They are crucial components in electric motor drive systems, inverters, and on-board chargers for electric and hybrid vehicles.

Some of the prominent power semiconductors include IGBTs, MOSFETs (including silicon carbide (SiC) and gallium nitride (GaN) versions), and diodes. These are basically switches enabling the flow of power in the circuit.

These form the muscles of the vehicle, as they regulate and manage power to enable efficient and productive use of energy, directly impacting vehicle efficiency, range, and overall performance.

  • The Senses: Sensors

Sensors are devices that detect and respond to changes in their environment by converting physical phenomena into measurable signals. These play a crucial role in monitoring and reporting different parameters, including engine performance, safety, and environmental conditions. These provide the critical data needed to make decisions in aspects like ADAS, ABS, and autonomous driving. 

Semiconductor placement in an automobile (representational image)

Some commonly used sensors in automobiles include the fuel temperature sensor, parking sensors, vehicle speed sensor, tire pressure monitoring system, and airbag sensors, among others.

These sensors, like lidar, radar, and cameras, sense the environment ranging from the engine to the road, enabling critical functions like ADAS and autonomous driving, hence the name Senses. They are among the most crucial elements of a modern vehicle, as the data they collect enables the SoC to make decisions.

  • The Reflexes and Nerves: MCUs and Connectivity

Microcontrollers are small, integrated circuits that function as miniature computers, designed to control specific tasks within electronic devices. While SoCs are the “brains” for complex tasks, MCUs are still embedded everywhere, managing smaller, more specific tasks (e.g., controlling a single window, managing a specific light, basic engine control units, and individual airbag deployment units). 

In addition, on-board memory enables the vehicle to store data from sensors and run applications, while the vehicle’s communication with the cloud is enabled by dedicated communication chips or RF devices (5G, Wi-Fi, Bluetooth, and Ethernet transceivers). These are distinct from SoCs and sensors.

Apart from these, automobiles comprise analog ICs/PMICs for power regulation and signal conditioning.

Design Engineer’s Story: The Core Challenges

This increasing semiconductor composition naturally gives way to a plethora of challenges. As Vivek Bhan, Senior Vice President at Renesas, notes, the company’s latest platforms are designed specifically to “tackle the complex challenges the automotive industry faces today,” which range from hardware optimization to ensuring safety compliance. This sentiment highlights the core pain points of an engineer designing these systems.

Semiconductors are highly expensive and, as they enter the automotive sector, prone to degradation and performance issues. These computational giants present a harsh environment, including high temperatures, vibration, and humidity, along with an abundance of electrical circuits. Together, these factors make the landscape extremely challenging for design engineers. Some important challenges are listed below:

  1. Rough Automotive Environment: The engine environment in an automobile is generally rough owing to temperature, vibrations, and humidity. This poses a significant threat, as high temperatures can lead to increased thermal noise, reduced carrier mobility, and even damage to the semiconductor material itself. The performance of semiconductors therefore depends heavily on their operating environment, and design engineers must manage these demands through careful material selection and packaging techniques.
  2. Electromagnetic Interference: Semiconductors, being small in size, operating at high speed, and sensitive to voltage fluctuations, are highly prone to electromagnetic interference. This vulnerability can disrupt their operations and lead to the breakdown of the automotive system. This is extremely crucial for design engineers to resolve, as it could compromise the entire concept of connected vehicles.
  3. Hardware-Software Integration: Modern vehicles are increasingly software-defined, requiring seamless integration of complex hardware and software systems. Engineers must ensure that hardware and software components work together flawlessly, particularly with over-the-air (OTA) software updates.
  4. Supply-Chain-Related Risks: The automotive industry is heavily reliant on semiconductors, making it vulnerable to supply chain disruptions. Global shortages and geopolitical dependencies in chip fabrication can lead to production delays, increased costs, and even halted assembly lines.
  5. Design Complexity: The increasing complexity of automotive chip designs, driven by features like AI, raises development costs and verification challenges. Engineers need to constantly update their skills through R&D to address these challenges. This is where concepts like “Shift-Left innovations,” mentioned by industry leaders, become critical, allowing for earlier testing and validation in the design cycle. To solve this, Electronic Design Automation (EDA) tools are used to test everything from thermal analysis to signal integrity in complex chiplet-based designs.
  6. Safety and Compliance: Automotive systems, especially those related to safety-critical functions, require strict adherence to standards like ISO 26262 and ASIL-D. Engineers must ensure their systems meet these standards through rigorous testing and validation.

Conclusion

Ultimately, the story of modern-day vehicles is the story of human growth and triumphs. Behind every advanced safety system lies a design engineer navigating a formidable battleground. The challenges of taming heat, shielding circuits, and ensuring flawless hardware-software integration are the crucibles where modern automotive innovation is forged. While the vehicle on the road is a testament to the power of semiconductors, its success is a direct result of the designers who can solve these complex puzzles. The road ahead is clear: the most valuable component is not just the chip itself, but the human expertise required to master it. This is why tech leaders emphasize collaboration. As Savi Soin of Qualcomm notes, strategic partnerships with OEMs “empower the local ecosystem to lead the mobility revolution and leapfrog into a future defined by intelligent transportation,” concluding optimistically that “the road ahead is incredibly exciting and we’re just getting started.”

The post Deconstructing the Semiconductor Revolution in Automotive Design: Understanding Composition and Challenges appeared first on ELE Times.

Top 10 Machine Learning Companies in India

ELE Times - Tue, 08/12/2025 - 13:40

The rapid growth of machine learning development companies in India is shaking up industries like healthcare, finance, and retail. With breakthrough, cutting-edge innovations in machine learning, these companies enter 2025 as leaders. From designing algorithms to meeting custom machine learning development needs, they are woven into India’s future. This article highlights the top 10 machine learning companies shaping India’s technological landscape in 2025, focusing on their innovations and transformative impact across various sectors.

  1. Tata Consultancy Services (TCS)

Tata Consultancy Services (TCS) is an important player in India’s machine learning landscape, weaving ML into enterprise solutions and internal operations. With more than 270 AI and ML engagements, TCS applies machine learning in fields like finance, retail, and compliance to support better decisions and automate processes. Areas such as deep learning, natural language processing, and predictive analytics fall within its scope, and it offers frameworks and tools for enhancing the client experience, improving decision-making, and automating processes. TCS also has its own platform, Decision Fabric, which combines ML with generative AI to deliver scalable, intelligent solutions.

  2. Infosys

Infosys is India’s pride in cutting-edge machine learning innovation, transforming enterprises with an AI-first approach. Infosys Topaz, the company’s main product, combines cloud, generative AI, and machine learning technologies to improve service personalization and intelligent ecosystems while automating business decision-making processes. Infosys Applied AI provides scaled ML solutions across industries, from financial services to manufacturing, integrating analytics, cloud, and AI models into a single framework. In terms of applying machine learning to various industries such as banking, healthcare, and retail, Infosys helps its clients automate operations and forecast market trends.

  3. Wipro

Wipro applies machine learning in its consulting, engineering, and cloud services to enable intelligent automation and predictive insights. Its implementations range from machine learning for natural language processing, intelligent search, and content moderation to computer vision for security and defect identification and predictive analytics for product failure prediction and maintenance optimization. The HOLMES AI platform by Wipro predominantly concentrates on NLP, robotic process automation (RPA), and advanced analytics.

  4. HCL Technologies

HCL Technologies provides high-end machine learning solutions through AION, which streamlines the ML lifecycle with low-code automation, and Graviton, which offers data-platform modernization for scalable model building and deployment. It uses tools like Data Genie for synthetic data generation, while HCL MLOps and NLP services enable smooth deployment along with natural-language-based interfaces. Industries including manufacturing, healthcare, and finance are all undergoing change as a result of these advancements.

  5. Accenture India

A global center of machine learning innovation in India, Accenture India works with thousands of experts applying AI solutions across industries. It sustains the AI Refinery platforms for ML scale-up across finance, healthcare, and retail. To solve healthcare, energy, and retail issues, Accenture India applies state-of-the-art machine learning technologies with profound domain knowledge of those service areas. The organization offers AI solutions that include natural language processing, computer vision, and data-driven analytics.

  6. Tech Mahindra

Tech Mahindra’s full breadth of ML services spans deep learning, data analytics, and automation. In India, it applies ML to digital transformation in the telecom, manufacturing, logistics, and BFSI sectors, providing predictive maintenance, fraud detection, and intelligent customer support that help clients improve their operations and decision-making.

  7. Fractal Analytics

Fractal Analytics is one of India’s leading companies in decision intelligence and machine learning. Its Qure.ai and Cuddle.ai platforms apply ML to diagnosis, analytics, and automation. With a strong commitment to ethical AI and innovation, Fractal delivers real-time insights and predictive intelligence for enterprises.

  8. Mu Sigma

Mu Sigma uses machine learning within its Man-Machine Ecosystem, creating a synergy between human decision scientists and its own analytical platforms. The ML stack at Mu Sigma covers all aspects of enterprise analytics, from problem definition (using natural language querying and sentiment analysis) to solution design with PreFabs and accelerators for rapid deployment of ML models. The company also offers predictive analytics, data visualization, and decision modeling using state-of-the-art ML algorithms to solve some of the most challenging problems businesses face.

  9. Zensar Technologies

Zensar Technologies integrates ML with its AI-powered platforms to support decision-making, enhance customer experience, and increase operational excellence in sectors like BFSI, healthcare, and manufacturing. Its rebranded R&D hub, Zensar AIRLabs, has identified three AI pillars (experience, research, and decision-making) where it applies ML to predictive analytics, fraud detection, and digital supply chain optimization.

  10. Mad Street Den

Mad Street Den is known for its AI-powered platform Vue.ai, which provides intelligent automation across retail, finance, and healthcare. Blox, the company’s horizontal AI stack, uses computer vision and ML to enhance customer experience, increase operational efficiency, and reduce dependence on large data science teams. With a strong focus on scalable enterprise AI, Mad Street Den is helping global businesses become AI-native through automation, predictive analytics, and real-time decision intelligence.

Conclusion:

India is witnessing a surge in its machine learning ecosystem, driven by innovation, scale, and sector-specific knowledge. From tech giants like TCS and Infosys to agile disruptors like Mad Street Den and Fractal Analytics, these companies are redefining how industries operate through automated decision-making, outcome prediction, and personalized experiences. As 2025 unfolds, their contributions will not only help shape India’s digital economy but also put the country on the world map for AI and machine-learning capability.

The post Top 10 Machine Learning Companies in India appeared first on ELE Times.

