Feed aggregator

My Work Area

Reddit:Electronics - Wed, 08/13/2025 - 13:14
My Work Area

Very on-budget setup. What do you think I should add next? (I've already saved some space for a fume extractor).

submitted by /u/GamingVlogBox
[link] [comments]

k-Space’s RHEEDSim software available for labs and classrooms

Semiconductor today - Wed, 08/13/2025 - 12:08
k-Space Associates Inc of Dexter, MI, USA — which produces thin-film metrology instrumentation and software — says that its new kSA RHEEDSim reflection high-energy electron diffraction (RHEED) simulation software is now available for both labs and classrooms...

AOI chooses ClassOne’s Solstice S8 system for gold plating and metal lift-off on InP

Semiconductor today - Wed, 08/13/2025 - 11:47
ClassOne Technology of Kalispell, MT, USA (which manufactures electroplating and wet-chemical process systems for ≤200mm wafers) is providing its Solstice S8 single-wafer processing system to Applied Optoelectronics Inc (AOI) of Sugar Land, TX, USA, a designer and manufacturer of optical components, modules and equipment for fiber access networks in the Internet data-center, cable TV broadband, fiber-to-the-home (FTTH) and telecom markets. The system will further strengthen AOI’s capabilities for producing optoelectronic components that power high-speed data and communications infrastructure...

Deep Learning Definition, Types, Examples and Applications

ELE Times - Wed, 08/13/2025 - 10:41

Deep learning is a subfield of machine learning that uses multilayered neural networks to simulate the brain's decision-making. These networks allow machines to learn directly from data, and they underpin many of the AI applications we use today, spanning speech recognition, image analysis, and natural language processing.

Deep Learning History:

The origins of deep learning can be traced to the 1940s, when Warren McCulloch and Walter Pitts introduced a mathematical model of neural networks inspired by the human brain. In the 1950s and '60s, pioneers such as Alan Turing and Alexey Ivakhnenko laid the groundwork for neural computation and early network architectures. Backpropagation emerged as a concept in the 1980s but only became widely practical in the 2000s, once large datasets and sufficient computational power were available. The modern era truly began in 2012, when AlexNet, a deep convolutional neural network, dramatically raised the accuracy bar for image classification. Since then, deep learning has been a driving force for innovation in computer vision, natural language processing, and autonomous systems.

Types of Deep Learning:

Deep learning can be grouped into several learning approaches, depending on how the model is trained and the kind of data used:

  • Supervised deep learning models are trained on labeled datasets, in which every input is paired with a corresponding output. The model learns to map inputs to outputs so that it can later generalize to unseen data. Popular examples of such tasks include image classification, sentiment analysis, and price or trend prediction.
  • Unsupervised deep learning operates on unlabeled data, and the system is expected to uncover underlying structures or patterns on its own. It is used for clustering similar data points, reducing the dimensionality of data, or detecting relationships within large datasets. Examples include customer segmentation, topic detection, and anomaly detection.
  • Semi-supervised deep learning combines a small set of labeled data with a large set of unlabeled data, striking a balance between accuracy and labeling effort; it is common in medicine and fraud detection. Self-supervised deep learning lets models generate their own training labels, opening up NLP and vision tasks that would otherwise require extensive manual annotation.
  • Reinforcement deep learning trains an agent that interacts with an environment, receiving rewards or penalties based on its actions. The aim is to maximize cumulative reward, improving performance over time. This technique has been used to train game-playing AIs such as AlphaGo, autonomous navigation systems, and robotic manipulators.
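The reward-driven loop in the last point can be illustrated with a minimal tabular Q-learning sketch (a classic reinforcement-learning algorithm; the toy corridor environment, reward values, and hyperparameters below are invented purely for illustration):

```python
import random

# Tiny deterministic "corridor" environment: states 0..4, reward for reaching state 4.
N_STATES = 5
ACTIONS = [-1, +1]  # move left or right

def step(state, action):
    """Apply an action; reward +1 only when the goal (state 4) is reached."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

# Q-table: estimated future reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-update: nudge the estimate toward reward + discounted best future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

# After training, the greedy policy should be "move right" in every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent is never told the goal explicitly; the preference for moving right emerges solely from the reward signal propagating back through the Q-table.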

Deep learning passes data through a stack of artificial neural-network layers, with each successive layer extracting progressively more complex features. The network learns by adjusting its internal weights via backpropagation so as to minimize prediction error, ultimately training the model to recognize patterns in raw inputs such as images, text, or speech.
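As a sketch of that process, here is a minimal two-layer network trained by backpropagation in plain NumPy (the XOR task, layer sizes, and hyperparameters are chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: a classic task that no single linear layer can solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Layer 1 learns intermediate features; layer 2 combines them into a decision.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass: each layer transforms the previous layer's output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (backpropagation): push the prediction error back through
    # the layers and nudge every weight in the direction that reduces it.
    d_out = out - y                      # gradient at the output (cross-entropy)
    d_h = (d_out @ W2.T) * h * (1 - h)   # error attributed to the hidden layer
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # predictions should approach [0, 1, 1, 0]
```

The hidden layer here plays the role of the "successively more complex features": it learns intermediate combinations of the inputs that make the originally inseparable XOR problem linearly separable for the output layer.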

Deep Learning Applications:

  • Image & Video Recognition: Used in facial recognition, driverless cars, and medical imaging.
  • Natural Language Processing (NLP): Powers chatbots, virtual assistants like Siri and Alexa, and language translation.
  • Speech Recognition: Used for voice typing, smart assistants, and live transcription.
  • Recommendation Systems: Drives personalized suggestions on Netflix, Amazon, and Spotify.
  • Healthcare: For disease detection, drug discovery, and predictive diagnosis.
  • Finance: Used for fraud detection, assessing risks, and running algorithmic trading operations.
  • Autonomous Vehicles: Enable cars to detect objects, navigate roads, and make decisions related to driving.
  • Entertainment & Media: Supports video editing, audio generation, and content tagging.
  • Security & Surveillance: Supports anomaly detection and crowd monitoring.
  • Education: Supports the creation of intelligent-tutoring systems and automated grading.

Key Advantages of Deep Learning:

  • Automatic Feature Extraction: No manual feature engineering is needed; the models learn important features directly from raw data.
  • High Accuracy: Performs extremely well on perceptual tasks such as image recognition, speech, and language processing.
  • Scalability: Can handle huge, heterogeneous datasets, including unstructured data like text and images.
  • Cross-Domain Flexibility: Applicable across sectors, including healthcare, finance, and autonomous systems.
  • Continuous Improvement: Models keep improving over time as more data and more compute (especially GPUs) become available.
  • Transfer Learning: Pretrained models can be adapted to new domains with little additional setup, reducing the human effort and time required for model engineering.

Deep Learning Examples:

Deep learning techniques are used in face recognition, autonomous cars, and medical imaging. Chatbots and virtual assistants rely on natural language processing, speech-to-text, and voice control, while recommendation engines power sites like Netflix and Amazon. In medicine, deep learning helps identify diseases and speeds up drug discovery.

Conclusion:

Deep learning is transforming industries through its ability to handle intricate data. The future looks even brighter thanks to advances such as self-supervised learning, multimodal models, and edge computing, which will make AI more efficient, more context-aware, and able to learn with minimal human assistance. Deep learning is also increasingly intertwined with explainability and ethical concerns, as explainable AI and privacy-preserving techniques gain emphasis. From tailor-made healthcare to autonomous systems and intelligent communication, deep learning will continue to transform how we interact with technology and define the next age of human-machine collaboration.

The post Deep Learning Definition, Types, Examples and Applications appeared first on ELE Times.

Nexperia Shrinks Designs With BJTs in Compact CFP15 Packages

AAC - Wed, 08/13/2025 - 02:00
Nexperia’s MJPE-series BJTs in CFP15B format offer smaller footprints and strong thermal performance for automotive and industrial designs.

Custom designed spiderman wall climbers (3d printed suction cups)

Reddit:Electronics - Wed, 08/13/2025 - 00:15
Custom designed spiderman wall climbers (3d printed suction cups)

I am using an Arduino and custom PCBs for control, with a 12 V vacuum pump, a 6 V air-release valve, and two 6 V LiPo batteries. Almost all of this project is 3D printed, with the exception of a couple of metal brackets.

I made a video of this project if you are interested.

submitted by /u/ToBecomeImmortal
[link] [comments]

S.Korea Elecparts Mystery box

Reddit:Electronics - Tue, 08/12/2025 - 20:53
S.Korea Elecparts Mystery box

I paid $8 and got 2,500 pieces: capacitors, MOSFETs, LEDs, transformers... is this a good price?

Unboxing video on my YouTube. You can watch it if you're curious.

https://youtu.be/Ld6hYG9f518

submitted by /u/Time_Double_1213
[link] [comments]

Teledyne e2v Adds 16-GB Variant to Rad-Hard DDR4 Memory Portfolio

AAC - Tue, 08/12/2025 - 20:00
The company claims the new 16-GB DDR4 model is the highest density space-grade DDR4 memory on the market.

How to Build a Variable Constant Current Source with Sink Function

Open Electronics - Tue, 08/12/2025 - 18:04

An adjustable constant current generator is an essential tool for many electronic applications, especially when a stable current is required regardless of the load. This project, designed and built with a PIC16F1765 microcontroller, combines both constant current sourcing and sinking capabilities in one device, with the ability to adjust the value from 0 to 1000 […]
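The project description rests on the standard constant-current principle: a control loop forces a programmable setpoint voltage across a sense resistor, so the load current is simply I = V_set / R_sense, independent of the load. A quick sketch of that arithmetic, with illustrative values not taken from the actual design:

```python
# Classic op-amp constant-current sink: the control loop drives a pass element so
# that the voltage across the sense resistor equals the setpoint, giving
# I = V_set / R_sense. The 1-ohm value below is illustrative, not from the article.
R_SENSE = 1.0  # ohms

def current_ma(v_set_volts, r_sense=R_SENSE):
    """Load current (in mA) programmed by the setpoint voltage."""
    return v_set_volts / r_sense * 1000.0

# With a 1-ohm sense resistor, sweeping the setpoint from 0 to 1 V spans the
# article's stated 0-1000 mA adjustment range.
for v in (0.0, 0.25, 0.5, 1.0):
    print(f"V_set = {v:.2f} V -> I = {current_ma(v):7.1f} mA")
```

In a microcontroller-based design like the one described, the setpoint voltage would typically come from a DAC or filtered PWM output, which is what makes the current digitally adjustable.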

The post How to Build a Variable Constant Current Source with Sink Function appeared first on Open-Electronics. The author is Boris Landoni

Wave Photonics launches PDK Management Platform to integrate foundry PDKs with EDA tools

Semiconductor today - Tue, 08/12/2025 - 17:22
Wave Photonics of Cambridge, UK (which was founded in May 2021 and develops design technology to drive the advancement and mass adoption of integrated photonics) has launched its PDK Management Platform to integrate foundry process design kits (PDKs) with leading electronic design automation (EDA) tools, provide ready-calculated SParameters for circuit simulation, and provide easy access for designers...

Matchmaker

EDN Network - Tue, 08/12/2025 - 16:38

Precision-matched resistors, diode pairs, and bridges are generic items. But sometimes an extra critical application with extra tight tolerances (or an extra tight budget) can dictate a little (or a lot) of DIY.

Wow the engineering world with your unique design: Design Ideas Submission Guide

Figure 1’s matchmaker circuit can help make the otherwise odious chore of searching through a batch of parts for accurately matching pairs of resistors (or diodes) quicker and a bit less taxing. Best of all, it does precision (potentially to the ppm level) matchmaking with no need for pricey precision reference components.  

Here’s how it works.

Figure 1 A1a, U1b, and U1c generate precisely symmetrical excitation of the A and B parts being matched. The asterisked resistors are ordinary 1% parts; their accuracy isn’t critical. The A>B output is positive relative to B>A if resistor/diode A is greater than B, and vice versa.

Matchmaker’s A1a and U1bc generate a precisely symmetrical square-wave excitation (timed by the 100-Hz multivibrator A1b) to measure the mismatch between test parts A and B. The resulting difference signal is boosted by preamp A1d in switchable gains of 1, 10, or 100, synchronously demodulated by U1a, then filtered to DC with a final calibrating gain of 16x by A1c.

The key to Matchmaker’s precision is the Kelvin-style connection topology of the CMOS switches U1b and U1c. U1b, because it carries no significant load current (nothing but the pA-level input bias current of A1a), introduces only nanovolts of error. The sensing of the excitation voltage at the parts being matched, and the cancellation of U1c’s max 200-Ω on-resistance, is therefore exact, limited only by A1a’s gain-bandwidth at 100 Hz. Since the op-amp’s gain-bandwidth product (GBW) is ~10 MHz, the effective residual resistance is only 200/10⁵ = 2 mΩ. Meanwhile, the 10-Ω max differential between the MAX4053 switches (the most critical parameter for excitation symmetry) is reduced to a usually insignificant 10/10⁵ = 100 µΩ. Component lead wire and PWB trace resistance will contribute (much) more unless the layout is done carefully.
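The error-budget arithmetic can be checked in a few lines, restating the figures quoted in the text:

```python
# Error budget from the text: the switch resistances are effectively divided by
# the op-amp's loop gain at the 100-Hz excitation frequency (GBW / f).
GBW_HZ = 10e6      # A1a gain-bandwidth product, ~10 MHz
F_EXC_HZ = 100.0   # square-wave excitation frequency
loop_gain = GBW_HZ / F_EXC_HZ  # ~1e5

r_on_max = 200.0    # U1c max on-resistance, ohms
r_delta_max = 10.0  # max on-resistance differential between switches, ohms

residual = r_on_max / loop_gain     # effective residual resistance
asymmetry = r_delta_max / loop_gain  # effective excitation asymmetry

print(f"loop gain        ~ {loop_gain:.0e}")
print(f"residual R       ~ {residual * 1e3:.0f} m-ohm")     # 2 m-ohm
print(f"switch asymmetry ~ {asymmetry * 1e6:.0f} u-ohm")    # 100 u-ohm
```

As the text notes, at these levels the lead wires and PWB traces, not the switches, become the dominant error term.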

Matching resistors to better than ±1 ppm = 0.0001% is therefore possible. No ppm level references (voltage or resistance) need apply.

Output voltage as a function of Ra/Rb % mismatch is maximized when load resistor R1 is (at least approximately) equal to the nominal resistance of the resistances being matched. But because of the inflection maximum at R1/Rab = 1, that equality isn’t at all critical, as shown in Figure 2.

Figure 2 The output level (mV per 1% mismatch at Gain = 1) is not sensitive to the exact value of R1/Rab.

R1/Rab thus can vary from 1.0 by ±20% without disturbing mismatch gain by much more than 1%. However, R1 should not be less than ~50 Ω in order to stay within A1 and U1 current ratings.

Matchmaker also works to match diodes. In that case, R1 should be chosen to mirror the current levels expected in the application: R1 = 2 V / Iapp.

Due to what I attribute to an absolute freak of nature (for which I take no credit whatsoever), the output mV per 1% mismatch of forward diode voltage drop is (nearly) the same as for resistors, at least for silicon junction diodes.

Actually, there’s a simple explanation for the “freak of nature.” A 1% differential between the legs of the 2:1 Ra/Rb/R1 voltage divider is attenuated by 50% to become 1.25 V/100/2 = 6.25 mV, and 6.25 mV just happens to be very close to 1% of a silicon junction diode’s ~600 mV forward drop.
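The arithmetic is easy to verify (the Schottky forward drop below is a typical textbook figure, not a value from the article):

```python
# The "freak of nature" arithmetic: a 1% leg mismatch in the 2:1 Ra/Rb/R1
# divider, driven by the 1.25 V excitation, is halved by the divider.
v_excitation = 1.25                       # volts
mismatch_signal = v_excitation / 100 / 2  # 1% mismatch, attenuated by 50%
print(f"{mismatch_signal * 1e3:.2f} mV")  # 6.25 mV

# ...which is indeed very close to 1% of a silicon diode's ~600 mV drop:
si_drop = 0.600
print(f"{mismatch_signal / si_drop * 100:.2f} % of Vf")  # 1.04 %

# A Schottky diode drops roughly 200 mV (typical figure, assumed here), so the
# same 6.25-mV-per-percent scale reads low by about si_drop / schottky_drop:
schottky_drop = 0.200
print(f"underreporting factor ~ {si_drop / schottky_drop:.1f}")  # ~3.0
```

This matches the factor-of-3 underreporting for Schottky diodes mentioned below.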

So, the freakiness really isn’t all that freaky, but it is serendipitous!  Matchmaker also works with Schottky diodes, but due to their smaller forward drop, it will underreport their percent mismatch by about a factor of 3. 

Due to the temperature sensitivity of diodes, it’s a good idea to handle them with thermally insulating gloves. This will save time and frustration waiting for them to equilibrate, not to mention avoid possibly outright erroneous results. In fact, considering the possibility of misleading thermal effects (accidental dissimilar-metal thermocouple junctions, etc.), it’s probably not a bad idea to wear gloves when handling resistors, too!

Happy ppm match making!

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content

The post Matchmaker appeared first on EDN.

Deconstructing the Semiconductor Revolution in Automotive Design: Understanding Composition and Challenges

ELE Times - Tue, 08/12/2025 - 14:08

As the world embraces the age of technology, semiconductors stand as the architects of the digital lives we live today. Semiconductors are the engines running silently behind everything from our smartphones and PCs to the AI assistants in our homes and even the noise-canceling headphones on our ears. Now, that same quiet power is stepping out of our pockets and onto our roads, initiating a second, parallel revolution in the automotive sector.

As we turn towards the automotive industry, we see a rise in the acceptance of electric and autonomous vehicles that has necessitated the use of around 1,000 to 3,500 individual chips or semiconductors in a single machine, transforming modern-day vehicles into moving computational giants. This isn’t just a trend; it’s a fundamental rewiring of the car. Asif Anwar, Executive Director of Automotive Market Analysis at TechInsights, validates this, stating that the “path to the SDV will be underpinned by the digitalization of the cockpit, vehicle connectivity, and ADAS capabilities,” with the vehicle’s core electronic architecture being the primary enabler. Features like the Advanced Driver Assistance System (ADAS) are no longer niche; they are central to the story of smart, connected vehicles on the roads. In markets like India, this is about delivering “premium, intelligent automotive experiences,” according to Savi Soin, President of Qualcomm India, who emphasizes that the country is moving beyond low-end models and embracing local innovation.

To understand this revolution—and the immense challenges engineers face—we must first dissect the new nervous system of the automobile: the array of specialized semiconductors that gives it intelligence.

The New Central Nervous System of Automotives

  • The Brains: Central Compute System on Chips (SoC)

It is a single, centralized module comprising high-performance computing units that brings together various functions of a vehicle. These enable modern-day Software-Defined Vehicles (SDVs), where features are continuously enhanced through agile software updates throughout their lifecycle. This capability is what allows automakers to offer what Hisashi Takeuchi, MD & CEO of Maruti Suzuki India Ltd, describes as “affordable telematics and advanced infotainment systems,” by leveraging the power of a centralized SoC.

Some of the prominent SoCs include the Renesas R-Car Family, the Qualcomm Snapdragon Ride Flex SoC, and the STMicroelectronics Accordo and Stellar families. These powerful chips receive pre-processed data from all zonal gateways (regional data hubs) through sensors. Further, they run complex software (including the “Car OS” and AI algorithms) and make all critical decisions for functions like ADAS and infotainment, effectively controlling the car’s advanced operations; hence, it is called the Brain. The goal, according to executives like Vivek Bhan of Renesas, is to provide “end-to-end automotive-grade system solutions” that help carmakers “accelerate SDV development.”

  • The Muscles: Power Semiconductors:

Power semiconductors are specialized devices designed to handle high voltage and large currents, enabling efficient control and conversion of electrical power. These are one of the most crucial components in the emerging segment of connected, electric, and autonomous vehicles. They are crucial components in electric motor drive systems, inverters, and on-board chargers for electric and hybrid vehicles.

Some of the prominent power semiconductors include IGBTs, MOSFETs (including silicon carbide (SiC) and gallium nitride (GaN) versions), and diodes. These are basically switches enabling the flow of power in the circuit.

These form the muscles of the automotives as they regulate and manage power to enable efficient and productive use of energy, hence impacting vehicle efficiency, range, and overall performance.

  • The Senses: Sensors

Sensors are devices that detect and respond to changes in their environment by converting physical phenomena into measurable signals. These play a crucial role in monitoring and reporting different parameters, including engine performance, safety, and environmental conditions. These provide the critical data needed to make decisions in aspects like ADAS, ABS, and autonomous driving. 

Semiconductor placement in an automobile (representational image)

Some commonly used sensors in automobiles include the fuel temperature sensor, parking sensors, vehicle speed sensor, tire pressure monitoring system, and airbag sensors, among others.

These sensors, such as lidar, radar, and cameras, sense the environment from the engine to the road, enabling critical functions like ADAS and autonomous driving, hence the name Senses. They are among the most crucial elements of a modern automobile, as the data they collect enables the SoC to make decisions.

  • The Reflexes and Nerves: MCUs and Connectivity

Microcontrollers are small, integrated circuits that function as miniature computers, designed to control specific tasks within electronic devices. While SoCs are the “brains” for complex tasks, MCUs are still embedded everywhere, managing smaller, more specific tasks (e.g., controlling a single window, managing a specific light, basic engine control units, and individual airbag deployment units). 

Besides, the memory inside the automobiles enables them to store data from sensors and run applications while the vehicle’s communication with the cloud is enabled by dedicated communication chips or RF devices (5G, Wi-Fi, Bluetooth, and Ethernet transceivers). These are distinct from SoCs and sensors.

Apart from these, automobiles comprise analog ICs/PMICs for power regulation and signal conditioning.

Design Engineer’s Story: The Core Challenges

This increasing semiconductor composition naturally gives way to a plethora of challenges. As Vivek Bhan, Senior Vice President at Renesas, notes, the company’s latest platforms are designed specifically to “tackle the complex challenges the automotive industry faces today,” which range from hardware optimization to ensuring safety compliance. This sentiment highlights the core pain points of an engineer designing these systems.

Semiconductors are expensive and, in the automotive setting, prone to degradation and performance issues. Vehicles that have become computational giants present a harsh operating environment, with high temperatures, vibration, humidity, and an abundance of electrical circuits. Together, these factors make the landscape extremely challenging for design engineers. Some important challenges are listed below:

  1. Rough Automotive Environment: The engine environment in an automobile is generally harsh owing to temperature, vibration, and humidity. This poses a significant threat, as high temperatures can increase thermal noise, reduce carrier mobility, and even damage the semiconductor material itself. Design engineers must manage these environmental demands through careful material selection and specific packaging techniques.
  2. Electromagnetic Interference: Semiconductors, being small in size, operating at high speed, and sensitive to voltage fluctuations, are highly prone to electromagnetic interference. This vulnerability can disrupt their operations and lead to the breakdown of the automotive system. This is extremely crucial for design engineers to resolve, as it could compromise the entire concept of connected vehicles.
  3. Hardware-Software Integration: Modern vehicles are increasingly software-defined, requiring seamless integration of complex hardware and software systems. Engineers must ensure that hardware and software components work together flawlessly, particularly with over-the-air (OTA) software updates.
  4. Supply-Chain-Related Risks: The automotive industry is heavily reliant on semiconductors, making it vulnerable to supply chain disruptions. Global shortages and geopolitical dependencies in chip fabrication can lead to production delays, increased costs, and even halted assembly lines.
  5. Design Complexity: The increasing complexity of automotive chip designs, driven by features like AI, raises development costs and verification challenges. Engineers need to constantly update their skills through R&D to address these challenges. This is where concepts like “Shift-Left innovations,” mentioned by industry leaders, become critical, allowing for earlier testing and validation in the design cycle. To solve this, Electronic Design Automation (EDA) tools are used to test everything from thermal analysis to signal integrity in complex chiplet-based designs.
  6. Safety and Compliance: Automotive systems, especially those related to safety-critical functions, require strict adherence to standards like ISO 26262 and ASIL-D. Engineers must ensure their systems meet these standards through rigorous testing and validation.

Conclusion

Ultimately, the story of modern-day vehicles is the story of human growth and triumphs. Behind every advanced safety system lies a design engineer navigating a formidable battleground. The challenges of taming heat, shielding circuits, and ensuring flawless hardware-software integration are the crucibles where modern automotive innovation is forged. While the vehicle on the road is a testament to the power of semiconductors, its success is a direct result of the designers who can solve these complex puzzles. The road ahead is clear: the most valuable component is not just the chip itself, but the human expertise required to master it. This is why tech leaders emphasize collaboration. As Savi Soin of Qualcomm notes, strategic partnerships with OEMs “empower the local ecosystem to lead the mobility revolution and leapfrog into a future defined by intelligent transportation,” concluding optimistically that “the road ahead is incredibly exciting and we’re just getting started.”

The post Deconstructing the Semiconductor Revolution in Automotive Design: Understanding Composition and Challenges appeared first on ELE Times.

Top 10 Machine Learning Companies in India

ELE Times - Tue, 08/12/2025 - 13:40

The rapid growth of machine learning development companies in India is shaking up industries like healthcare, finance, and retail. With breakthrough, cutting-edge innovations, these companies enter 2025 as leaders. From designing algorithms to addressing custom machine learning development needs, they are woven into India's future. This article highlights the top 10 machine learning companies shaping India's technological landscape in 2025, focusing on their innovations and transformative impact across sectors.

  1. Tata Consultancy Services (TCS)

Tata Consultancy Services (TCS) is a major player in India's machine learning landscape, weaving ML into enterprise solutions and internal operations. With more than 270 AI and ML engagements, TCS applies machine learning in fields like finance, retail, and compliance to support better decisions and automate processes. Areas such as deep learning, natural language processing, and predictive analytics fall within its scope, and it offers frameworks and tools for enhancing the client experience, improving decision-making, and automating processes. TCS also has its own platform, Decision Fabric, which combines ML with generative AI to deliver scalable, intelligent solutions.

  2. Infosys

Infosys is India’s pride in cutting-edge machine learning innovation, transforming enterprises with an AI-first approach. Infosys Topaz, the company’s main product, combines cloud, generative AI, and machine learning technologies to improve service personalization and intelligent ecosystems while automating business decision-making processes. Infosys Applied AI provides scaled ML solutions across industries, from financial services to manufacturing, integrating analytics, cloud, and AI models into a single framework. In terms of applying machine learning to various industries such as banking, healthcare, and retail, Infosys helps its clients automate operations and forecast market trends.

  3. Wipro

Wipro applies machine learning in its consulting, engineering, and cloud services to enable intelligent automation and predictive insights. Its implementations range from machine learning for natural language processing, intelligent search, and content moderation to computer vision for security and defect identification and predictive analytics for product failure prediction and maintenance optimization. The HOLMES AI platform by Wipro predominantly concentrates on NLP, robotic process automation (RPA), and advanced analytics.

  4. HCL Technologies

HCL Technologies provides high-end machine learning solutions through AION, which streamlines the ML lifecycle via low-code automation, and Graviton, which offers data-platform modernization for scalable model building and deployment. Tools like Data Genie provide synthetic data generation, while HCL MLOps and NLP services allow smooth deployment along with natural-language-based interfaces. Industries including manufacturing, healthcare, and finance are all undergoing change as a result of these advancements.

  5. Accenture India

A global center of machine learning innovation in India, Accenture India works with thousands of experts applying AI solutions across industries. It sustains the AI Refinery platforms for ML scale-up across finance, healthcare, and retail. To solve healthcare, energy, and retail issues, Accenture India applies state-of-the-art machine learning technologies with profound domain knowledge of those service areas. The organization offers AI solutions that include natural language processing, computer vision, and data-driven analytics.

  6. Tech Mahindra

Tech Mahindra’s complete breadth of ML services incorporates deep learning, data analytics, automation, and so on. Tech Mahindra India uses ML in digital transformation in telecom, manufacturing, and BFSI sectors. The ML services it provides are predictive maintenance, fraud detection, and intelligent customer support. It offers its services to manufacturing, logistics, and telecom sectors, helping them in their operations and decision-making.

  7. Fractal Analytics

Fractal Analytics is one of India’s leading companies in decision intelligence and machine learning. Qure.ai and Cuddle.ai are platforms where ML is applied for diagnosis, analytics, and automation. Being a company that highly respects ethical AI and innovation, Fractal seeks real-time insights and predictive intelligence for enterprises.

  8. Mu Sigma

Mu Sigma uses machine learning within its Man-Machine Ecosystem, creating a synergy between human decision scientists and its own analytical platforms. The ML stack at Mu Sigma spans enterprise analytics from problem definition (using natural-language querying and sentiment analysis) to solution design with PreFabs and accelerators for rapid deployment of ML models. The company also offers predictive analytics, data visualization, and decision modeling, using state-of-the-art ML algorithms to solve some of the most challenging business problems.

  9. Zensar Technologies

Zensar Technologies integrates ML with its AI-powered platforms to support decision-making, enhance customer experience, and increase operational excellence in sectors like BFSI, healthcare, and manufacturing. Its rebranded R&D hub, Zensar AIRLabs, has identified three AI pillars (experience, research, and decision-making) where it applies ML to predictive analytics, fraud detection, and digital supply-chain optimization.

  10. Mad Street Den

Mad Street Den is known for its AI-powered platform, Vue.ai, which provides intelligent automation across retail, finance, and healthcare. Blox, the company's horizontal AI stack, uses computer vision and ML to enhance customer experience, improve operational efficiency, and reduce dependence on large data science teams. With a strong focus on scalable enterprise AI, Mad Street Den is turning global businesses AI-native through automation, predictive analytics, and real-time decision intelligence.

Conclusion:

India is witnessing a surge in its machine learning ecosystem, driven by innovation, scale, and sector-specific knowledge. From tech giants like TCS and Infosys to fast-moving disruptors like Mad Street Den and Fractal Analytics, these companies are redefining how industries operate through automated decision-making, outcome prediction, and personalized experiences. As development continues into 2025, their contributions will not only help shape India's digital economy but also put the country on the world map for AI and machine learning talent.

The post Top 10 Machine Learning Companies in India appeared first on ELE Times.

My homemade bench lamp

Reddit:Electronics - Tue, 08/12/2025 - 11:12
My homemade bench lamp

I couldn’t find a bench lamp that was inexpensive and met all my requirements. I wanted the light to be fairly diffuse, have adjustable brightness, and be positionable in any way I wanted. I particularly wanted to avoid shadows and reflected glare.

In the end, I decided to make my own. For the angle-poise stand, I bought an AliExpress phone mount, meant for filming things on desks. It was about a tenner and has all the adjustment I could wish for. Then I bought a 12V COB panel, also from AliExpress, for about £1.50. I always intended to underrun it, but even so, it gets fairly warm — so I stuck two 25x100mm heatsinks to the back using thermal glue.

Finally, to power it, I used a boost converter set to output around 10.5 volts (I know that’s not exactly “adjustable,” but I’ll live with it for now), and soldered a USB-A plug to the input.

In the end, I’m delighted with it. Not only is it perfect for my soldering and other assorted nerd tasks, it was also incredibly cheap — the whole thing cost less than £15 — and I enjoyed every moment of making it.

submitted by /u/Open_Theme6497
[link] [comments]

RISC-V basics: The truth about custom extensions

EDN Network - Tue, 08/12/2025 - 11:00

The era of universal processor architectures is giving way to workload-specific designs optimized for performance, power, and scalability. As data-centric applications in artificial intelligence (AI), edge computing, automotive, and industrial markets continue to expand, they are driving a fundamental shift in processor design.

Arguably, chipmakers can no longer rely on generalized architectures to meet the demands of these specialized markets. Open ecosystems like RISC-V empower silicon developers to craft custom solutions that deliver both innovation and design efficiency, unlocking new opportunities across diverse applications.

RISC-V, an open-source instruction set architecture (ISA), is rapidly gaining momentum for its extensibility and royalty-free licensing. According to Rich Wawrzyniak, principal analyst at The SHD Group, “RISC-V SoC shipments are projected to grow at nearly 47% CAGR, capturing close to 35% of the global market by 2030.” This growth highlights why SoC designers are increasingly embracing architectures that offer greater flexibility and specialization.


RISC-V ISA customization trade-offs

The open nature of the RISC-V ISA has sparked widespread interest across the semiconductor industry, especially for its promise of customization. Unlike fixed-function ISAs, RISC-V enables designers to tailor processors to specific workloads. For companies building domain-specific chips for AI, automotive, or edge computing, this level of control can unlock significant competitive advantages in optimizing performance, power efficiency, and silicon area.

But customization is not a free lunch.

Adding custom extensions means taking ownership of both hardware design and the corresponding software toolchain. This includes compiler and simulation support, debug infrastructure, and potentially even operating system integration. While RISC-V’s modular structure makes customization easier than legacy ISAs, it still demands architectural consideration and robust development and verification workflows to ensure consistency and correctness.

In many cases, customization involves additional considerations. When general-purpose processing and compatibility with existing software libraries, security frameworks, and third-party ecosystems are paramount, excessive or non-standard extensions can introduce fragmentation. Design teams can mitigate this risk by aligning with RISC-V’s ratified extensions and profiles, for instance RVA23, and then applying targeted customizations where appropriate.

When applied strategically, RISC-V customization becomes a powerful lever that yields substantial ROI by rewarding thoughtful architecture, disciplined engineering, and clear product objectives. Some companies devote full design and software teams to developing strategic extensions, while others leverage automated toolchains and hardware-software co-design methodologies to mitigate risks, accelerate time to market, and capture most of the benefits.

For teams that can navigate the trade-offs well, RISC-V customization opens the door to processors truly optimized for their workloads and to massive product differentiation.

Real world use cases

Customized RISC-V cores are already deployed across the industry. For example, Nvidia’s VP of Multimedia Arch/ASIC, Frans Sijstermans, described the replacement of their internal Falcon MCU with customized RISC-V hardware and software developed in-house, now being deployed across a variety of applications.

One notable customization is support for 2KB pages in addition to the standard 4KB pages, which yielded a 50% performance improvement for legacy code. Page size changes like this are a clear example of modifications with system-level impact, from processor hardware to operating system memory management.

Figure 1 The view of Nvidia’s RISC-V cores and extensions taken from the keynote “RISC-V at Nvidia: One Architecture, Dozens of Applications, Billions of Processors.”

Another commercial example is Meta’s MTIA accelerator, which extends a RISC-V core with application-specific instructions, custom interfaces, and specialized register files. While Meta has not published the full toolchain flow, the scope of integration implies an internally managed co-design methodology with tightly coupled hardware and software development.

Given the complexity of the modifications, the design likely leveraged automated flows capable of regenerating RTL, compiler backends, simulators, and intrinsics to maintain toolchain consistency. This reflects a broader trend of engineering teams adopting user-driven, in-house customization workflows that support rapid iteration and domain-specific optimization.

Figure 2 Meta’s MTIA accelerator integrates Andes RISC-V cores for optimized AI performance. Source: MTIA: First Generation Silicon Targeting Meta’s Recommendation Systems, A. Firoozshahian, et. al.

Startup company Rain.ai illustrates that even small teams can benefit from RISC-V customization via automated flows. Their process begins with input files that define operands, vector register inputs and outputs, vector unit behavior, and a C-language semantic description. These instructions are pipelined, multi-cycle, and designed to align with the stylistic and semantic properties of standard vector extensions.

The input files are extended with a minimal hardware implementation and processed through a flow that generates updated core RTL, simulation models, compiler support, and intrinsic functions. This enables developers to quickly update kernels, compile and run them on simulation models, and gather feedback on performance, utilization, and cycle count.

By lowering the barrier to custom instruction development, this process supports a hardware-software co-design methodology, making it easier to explore and refine different usage models. This approach was used to integrate their matrix multiply, sigmoid, and SiLU acceleration in the hardware and software flows, yielding an 80% reduction in power and a 7x–10x increase in throughput compared to the standard vector processing unit.

Figure 3 Here is an example of a hardware/software co‑design flow for developing and optimizing custom instructions. Source: Andes Technology

Tools supporting RISC-V customization

To support these holistic workflows, automation tools are emerging to streamline customization and integration. For example, Andes Technology provides silicon-proven IP and a comprehensive suite of design tools to accelerate development.

Figure 4 ACE and CoPilot simplify the development and integration of custom instructions. Source: Andes Technology

The Andes Custom Extension (ACE) framework and CoPilot toolchain offer a streamlined path to RISC-V customization. ACE enables developers to define custom instructions optimized for specific workloads, supporting advanced features such as pipelining, background execution, custom registers, and memory structures.

CoPilot automates the integration process by regenerating the entire hardware and software stack, including RTL, compiler, debugger, and simulator, based on the defined extensions. This reduces manual effort, ensures alignment between hardware and software, and accelerates development cycles, making custom RISC-V design practical for a broad range of teams and applications.

RISC-V’s open ISA broke down barriers to processor innovation, enabling developers to move beyond the constraints of proprietary architectures. Today, advanced frameworks and automation tools empower even lean teams to take advantage of hardware-software co-design with RISC-V.

For design teams that approach customization with discipline, RISC-V offers a rare opportunity: to shape processors around the needs of the application, not the other way around. The companies that succeed in mastering this co-design approach won’t just keep pace, they’ll define the next era of processor innovation.

Marc Evans, director of Business Development & Marketing at Andes Technology, brings deep expertise in IP, SoC architecture, CPU/DSP design, and the RISC-V ecosystem. His career spans hands-on processor and memory system architecture to strategic leadership roles driving the adoption of new IP for emerging applications at leading semiconductor companies.

Related Content

The post RISC-V basics: The truth about custom extensions appeared first on EDN.

Semiconductor Collabs Yield Design Wins, From Chiplets to Charging Speed

AAC - Tue, 08/12/2025 - 02:00
From high-performance EVs to low-power IoT modules and next-gen AI chiplets, three recent collaborations showcase how semiconductor innovation is driving new design frontiers.

Nuvoton Rolls Out 8-bit MCU With Rich Peripherals & High Noise Immunity

AAC - Mon, 08/11/2025 - 20:00
The NuMicro MG51 series brings enhanced I/O flexibility, analog precision, and EMI protection to industrial applications.

AXT appoints former director Leonard J. Leblanc as board member

Semiconductor today - Mon, 08/11/2025 - 17:55
AXT Inc of Fremont, CA, USA — which makes gallium arsenide (GaAs), indium phosphide (InP) and germanium (Ge) substrates and raw materials — has appointed Leonard J. Leblanc as a member of its board to fill the vacancy due to the passing of Christine Russell. He will serve as a Class III director with a maximum term expiring on 29 July 2027, or until his successor is duly elected and qualified...
