Feed aggregator

Deconstructing the Semiconductor Revolution in Automotive Design: Understanding Composition and Challenges

ELE Times - 3 hours 17 min ago

As the world embraces the age of technology, semiconductors stand as the architects of the digital lives we live today. Semiconductors are the engines running silently behind everything from our smartphones and PCs to the AI assistants in our homes and even the noise-canceling headphones on our ears. Now, that same quiet power is stepping out of our pockets and onto our roads, initiating a second, parallel revolution in the automotive sector.

As we turn towards the automotive industry, the rising acceptance of electric and autonomous vehicles has driven the use of roughly 1,000 to 3,500 individual chips in a single vehicle, transforming modern-day vehicles into moving computational giants. This isn’t just a trend; it’s a fundamental rewiring of the car. Asif Anwar, Executive Director of Automotive Market Analysis at TechInsights, validates this, stating that the “path to the SDV will be underpinned by the digitalization of the cockpit, vehicle connectivity, and ADAS capabilities,” with the vehicle’s core electronic architecture being the primary enabler. Features like Advanced Driver Assistance Systems (ADAS) are no longer niche; they are central to the story of smart, connected vehicles on the roads. In markets like India, this is about delivering “premium, intelligent automotive experiences,” according to Savi Soin, President of Qualcomm India, who emphasizes that the country is moving beyond low-end models and embracing local innovation.

To understand this revolution—and the immense challenges engineers face—we must first dissect the new nervous system of the automobile: the array of specialized semiconductors that gives it intelligence.

The New Central Nervous System of the Automobile

  • The Brains: Central Compute System-on-Chip (SoC)

The central compute SoC is a single, centralized module comprising high-performance computing units that brings together various functions of a vehicle. It enables modern Software-Defined Vehicles (SDVs), where features are continuously enhanced through agile software updates throughout the vehicle’s lifecycle. This capability is what allows automakers to offer what Hisashi Takeuchi, MD & CEO of Maruti Suzuki India Ltd, describes as “affordable telematics and advanced infotainment systems,” by leveraging the power of a centralized SoC.

Some of the prominent SoCs include the Renesas R-Car family, the Qualcomm Snapdragon Ride Flex SoC, and the STMicroelectronics Accordo and Stellar families. These powerful chips receive sensor data that has been pre-processed by the zonal gateways (regional data hubs). They then run complex software (including the “Car OS” and AI algorithms) and make all critical decisions for functions like ADAS and infotainment, effectively controlling the car’s advanced operations; hence the name “the brains.” The goal, according to executives like Vivek Bhan of Renesas, is to provide “end-to-end automotive-grade system solutions” that help carmakers “accelerate SDV development.”
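The data flow just described, with zonal gateways pre-processing sensor data and the central SoC fusing it to make decisions, can be sketched in miniature. Everything below, from the function names to the thresholds, is an illustrative toy, not any vendor's actual stack:

```python
# Minimal sketch of a zonal E/E architecture data flow (illustrative only;
# names and thresholds are hypothetical, not from any vendor's software).

def zonal_gateway(raw_readings):
    """Pre-process raw sensor readings at a regional hub: drop obvious glitches."""
    return [r for r in raw_readings if 0.0 <= r <= 200.0]  # keep plausible distances (m)

def central_soc(front_zone, rear_zone, speed_kmh):
    """Central compute SoC: fuse zonal data and make an ADAS-style decision."""
    nearest = min(front_zone, default=float("inf"))
    # Crude rule: brake if the nearest obstacle is inside a speed-dependent gap.
    safe_gap_m = speed_kmh * 0.5   # ~0.5 m per km/h, an illustrative heuristic
    return "BRAKE" if nearest < safe_gap_m else "CRUISE"

front = zonal_gateway([12.4, -3.0, 25.1, 999.9])   # -3.0 and 999.9 dropped as glitches
decision = central_soc(front, zonal_gateway([80.0]), speed_kmh=60)
print(decision)  # nearest obstacle at 12.4 m, inside the 30 m gap -> "BRAKE"
```

The point of the sketch is the division of labor: filtering happens at the edge of the network, while the fused decision is made centrally.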

  • The Muscles: Power Semiconductors

Power semiconductors are specialized devices designed to handle high voltages and large currents, enabling efficient control and conversion of electrical power. They are among the most crucial components in the emerging segment of connected, electric, and autonomous vehicles, appearing in electric motor drive systems, inverters, and on-board chargers for electric and hybrid vehicles.

Some of the prominent power semiconductors include IGBTs, MOSFETs (including silicon carbide (SiC) and gallium nitride (GaN) variants), and diodes. These devices act, in essence, as switches that control the flow of power through the circuit.

They form the muscles of the vehicle: by regulating and managing power, they enable efficient use of energy and directly affect vehicle efficiency, range, and overall performance.
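The efficiency stakes can be made concrete with the standard first-order loss formulas for a power switch. The formulas are textbook approximations; the component values below are made up for illustration:

```python
# Back-of-envelope MOSFET loss estimate for a power-conversion stage.
# The first-order formulas are standard approximations; the numbers are
# illustrative, not from any real part's datasheet.

def mosfet_losses(v_bus, i_load, r_ds_on, t_rise, t_fall, f_sw, duty):
    p_cond = duty * i_load**2 * r_ds_on                     # conduction loss (W)
    p_sw = 0.5 * v_bus * i_load * (t_rise + t_fall) * f_sw  # switching loss (W)
    return p_cond, p_sw

# 400 V bus, 50 A load, 10 mOhm on-resistance, 50 ns edges, 50 kHz, 50% duty
p_cond, p_sw = mosfet_losses(400, 50, 0.010, 50e-9, 50e-9, 50e3, 0.5)
print(f"conduction {p_cond:.1f} W, switching {p_sw:.1f} W")
# conduction 12.5 W, switching 50.0 W
```

Note that the switching term scales with edge times and frequency, which is a large part of why fast-switching SiC and GaN devices improve efficiency and range.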

  • The Senses: Sensors

Sensors are devices that detect and respond to changes in their environment by converting physical phenomena into measurable signals. They play a crucial role in monitoring and reporting parameters such as engine performance, safety, and environmental conditions, providing the critical data needed for decisions in areas like ADAS, ABS, and autonomous driving.

[Representational image: semiconductor placement in an automobile]

Some commonly used sensors in automobiles include the fuel temperature sensor, parking sensors, vehicle speed sensor, tire pressure monitoring system, and airbag sensors, among others.

Sensors such as lidar, radar, and cameras perceive conditions ranging from the engine to the road, enabling critical functions like ADAS and autonomous driving; hence the name “the senses.” They are among the most crucial elements in a modern vehicle, as the data they collect is what enables the SoC to make decisions.
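A simple example of a sensor feeding a decision is tire-pressure monitoring. The sketch below is illustrative; the 25% under-inflation threshold is a common warning criterion, but the pressure values are hypothetical:

```python
# Minimal sketch of a tire-pressure monitoring (TPMS) check.
# The 25% under-inflation warning threshold is a common criterion;
# the recommended pressure and readings here are illustrative.

RECOMMENDED_KPA = 230
LOW_LIMIT = 0.75 * RECOMMENDED_KPA  # warn at 25% under-inflation (172.5 kPa)

def tpms_status(pressures_kpa):
    """Return per-tire status from raw pressure-sensor readings."""
    return {tire: ("LOW" if p < LOW_LIMIT else "OK")
            for tire, p in pressures_kpa.items()}

print(tpms_status({"FL": 228, "FR": 231, "RL": 160, "RR": 225}))
# RL reads 160 kPa, below the 172.5 kPa limit -> flagged LOW
```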

  • The Reflexes and Nerves: MCUs and Connectivity

Microcontrollers are small, integrated circuits that function as miniature computers, designed to control specific tasks within electronic devices. While SoCs are the “brains” for complex tasks, MCUs are still embedded everywhere, managing smaller, more specific tasks (e.g., controlling a single window, managing a specific light, basic engine control units, and individual airbag deployment units). 

In addition, on-board memory allows vehicles to store sensor data and run applications, while the vehicle’s communication with the cloud is enabled by dedicated communication chips or RF devices (5G, Wi-Fi, Bluetooth, and Ethernet transceivers). These are distinct from SoCs and sensors.

Apart from these, automobiles also contain analog ICs and PMICs for power regulation and signal conditioning.
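The division of labor between the central SoC and these peripheral MCUs can be illustrated with a toy control loop. The power-window state machine below is entirely hypothetical and far simpler than production firmware:

```python
# Toy model of the kind of small, dedicated task an automotive MCU runs:
# a power-window controller expressed as a tiny state machine.
# States and rules are illustrative, far simpler than production firmware.

def window_step(state, button, at_top, at_bottom):
    """Advance one control-loop tick; state is 'IDLE', 'RAISING', or 'LOWERING'."""
    if button == "up" and not at_top:
        return "RAISING"
    if button == "down" and not at_bottom:
        return "LOWERING"
    if state == "RAISING" and at_top:       # end stop reached: cut the motor
        return "IDLE"
    if state == "LOWERING" and at_bottom:
        return "IDLE"
    return state if button is None else "IDLE"

state = window_step("IDLE", "up", at_top=False, at_bottom=True)
state = window_step(state, None, at_top=True, at_bottom=False)
print(state)  # motor stopped at the end stop: "IDLE"
```

An MCU runs dozens of such loops deterministically, leaving heavyweight computation to the central SoC.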

Design Engineer’s Story: The Core Challenges

This increasing semiconductor content naturally gives rise to a plethora of challenges. As Vivek Bhan, Senior Vice President at Renesas, notes, the company’s latest platforms are designed specifically to “tackle the complex challenges the automotive industry faces today,” which range from hardware optimization to ensuring safety compliance. This sentiment highlights the core pain points of an engineer designing these systems.

Semiconductors are expensive and prone to degradation and performance issues when they enter the automotive sector. These computational giants operate in a harsh environment, with high temperatures, vibration, and humidity, alongside an abundance of electrical circuits. Together, these factors make the landscape extremely challenging for design engineers. Some important challenges are listed below:

  1. Rough Automotive Environment: The engine environment in an automobile is generally rough owing to temperature, vibration, and humidity. This poses a significant threat, as high temperatures can lead to increased thermal noise, reduced carrier mobility, and even damage to the semiconductor material itself. Design engineers must manage these environmental demands through careful material selection and specific packaging techniques.
  2. Electromagnetic Interference: Semiconductors, being small, operating at high speed, and sensitive to voltage fluctuations, are highly prone to electromagnetic interference. This vulnerability can disrupt their operation and lead to the breakdown of automotive systems. Resolving it is crucial for design engineers, as interference could compromise the entire concept of connected vehicles.
  3. Hardware-Software Integration: Modern vehicles are increasingly software-defined, requiring seamless integration of complex hardware and software systems. Engineers must ensure that hardware and software components work together flawlessly, particularly with over-the-air (OTA) software updates.
  4. Supply-Chain-Related Risks: The automotive industry is heavily reliant on semiconductors, making it vulnerable to supply chain disruptions. Global shortages and geopolitical dependencies in chip fabrication can lead to production delays, increased costs, and even halted assembly lines.
  5. Design Complexity: The increasing complexity of automotive chip designs, driven by features like AI, raises development costs and verification challenges. Engineers need to constantly update their skills through R&D to address these challenges. This is where concepts like “Shift-Left innovations,” mentioned by industry leaders, become critical, allowing for earlier testing and validation in the design cycle. To solve this, Electronic Design Automation (EDA) tools are used to test everything from thermal analysis to signal integrity in complex chiplet-based designs.
  6. Safety and Compliance: Automotive systems, especially those related to safety-critical functions, require strict adherence to standards like ISO 26262 and ASIL-D. Engineers must ensure their systems meet these standards through rigorous testing and validation.
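The thermal-stress point in challenge 1 is commonly quantified with the Arrhenius acceleration-factor model used in semiconductor reliability engineering. The model itself is standard; the activation energy and temperatures below are illustrative:

```python
import math

# Arrhenius acceleration factor: how much faster a temperature-activated
# failure mechanism progresses at a higher junction temperature.
# Ea = 0.7 eV and the temperatures are illustrative values, not from a
# specific qualification report.

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(t_use_c, t_stress_c, ea_ev=0.7):
    t_use = t_use_c + 273.15      # convert Celsius to Kelvin
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K_B) * (1.0 / t_use - 1.0 / t_stress))

# Compare benign consumer conditions (55 C) with under-hood heat (125 C):
af = acceleration_factor(55, 125)
print(f"aging runs ~{af:.0f}x faster at 125 C than at 55 C")
```

Figures like this are why automotive parts require far more aggressive qualification (e.g., AEC-Q100 temperature grades) than consumer silicon.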

Conclusion

Ultimately, the story of modern-day vehicles is the story of human growth and triumphs. Behind every advanced safety system lies a design engineer navigating a formidable battleground. The challenges of taming heat, shielding circuits, and ensuring flawless hardware-software integration are the crucibles where modern automotive innovation is forged. While the vehicle on the road is a testament to the power of semiconductors, its success is a direct result of the designers who can solve these complex puzzles. The road ahead is clear: the most valuable component is not just the chip itself, but the human expertise required to master it. This is why tech leaders emphasize collaboration. As Savi Soin of Qualcomm notes, strategic partnerships with OEMs “empower the local ecosystem to lead the mobility revolution and leapfrog into a future defined by intelligent transportation,” concluding optimistically that “the road ahead is incredibly exciting and we’re just getting started.”

The post Deconstructing the Semiconductor Revolution in Automotive Design: Understanding Composition and Challenges appeared first on ELE Times.

Top 10 Machine Learning Companies in India

ELE Times - 3 hours 45 min ago

The rapid growth of machine learning development companies in India is shaking up industries like healthcare, finance, and retail. With breakthrough research and cutting-edge machine learning innovations, these companies enter 2025 as leaders. From designing algorithms to addressing custom machine learning development needs, they are integral to India’s future. This article highlights the top 10 machine learning companies shaping India’s technological landscape in 2025, focusing on their cutting-edge innovations and transformative impact across various sectors.

  1. Tata Consultancy Services (TCS)

Tata Consultancy Services (TCS) is an important player in India’s machine learning landscape, weaving ML into enterprise solutions and internal operations. With more than 270 AI and ML engagements, TCS applies machine learning in fields like finance, retail, and compliance to support better decisions and automate processes. Areas such as deep learning, natural language processing, and predictive analytics fall within its scope. TCS offers frameworks and tools for enhancing the client experience, improving decision-making, and automating processes. It also has its own platform, Decision Fabric, which combines ML with generative AI to deliver scalable, intelligent solutions.

  2. Infosys

Infosys is India’s pride in cutting-edge machine learning innovation, transforming enterprises with an AI-first approach. Infosys Topaz, the company’s main product, combines cloud, generative AI, and machine learning technologies to improve service personalization and intelligent ecosystems while automating business decision-making processes. Infosys Applied AI provides scaled ML solutions across industries, from financial services to manufacturing, integrating analytics, cloud, and AI models into a single framework. In terms of applying machine learning to various industries such as banking, healthcare, and retail, Infosys helps its clients automate operations and forecast market trends.

  3. Wipro

Wipro applies machine learning in its consulting, engineering, and cloud services to enable intelligent automation and predictive insights. Its implementations range from machine learning for natural language processing, intelligent search, and content moderation to computer vision for security and defect identification and predictive analytics for product failure prediction and maintenance optimization. The HOLMES AI platform by Wipro predominantly concentrates on NLP, robotic process automation (RPA), and advanced analytics.

  4. HCL Technologies

HCL Technologies provides high-end machine learning solutions through AION, which helps streamline the ML lifecycle by way of low-code automation, and Graviton, which offers data-platform modernization for scalable model building and deployment. Tools like Data Genie support synthetic data generation, while HCL MLOps and NLP services allow smooth deployment along with natural-language-based interfaces. Industries including manufacturing, healthcare, and finance are all undergoing change as a result of these advancements.

  5. Accenture India

A global center of machine learning innovation in India, Accenture India works with thousands of experts applying AI solutions across industries. It maintains the AI Refinery platform for scaling ML across finance, healthcare, and retail. To solve healthcare, energy, and retail problems, Accenture India applies state-of-the-art machine learning technologies with deep domain knowledge of those service areas. The organization offers AI solutions that include natural language processing, computer vision, and data-driven analytics.

  6. Tech Mahindra

Tech Mahindra’s full breadth of ML services incorporates deep learning, data analytics, automation, and more. Tech Mahindra India applies ML to digital transformation in the telecom, manufacturing, and BFSI sectors, providing services such as predictive maintenance, fraud detection, and intelligent customer support, and helping clients in manufacturing, logistics, and telecom improve their operations and decision-making.

  7. Fractal Analytics

Fractal Analytics is one of India’s leading companies in decision intelligence and machine learning. Qure.ai and Cuddle.ai are platforms where ML is applied for diagnosis, analytics, and automation. Being a company that highly respects ethical AI and innovation, Fractal seeks real-time insights and predictive intelligence for enterprises.

  8. Mu Sigma

Mu Sigma uses machine learning within its Man-Machine Ecosystem, creating a synergy between human decision scientists and its own analytical platforms. The ML stack at Mu Sigma covers all aspects of enterprise analytics, from problem definition using natural language querying and sentiment analysis to solution design with PreFabs and accelerators for rapid deployment of ML models. The company also offers services such as predictive analytics, data visualization, and decision modeling, using state-of-the-art ML algorithms to solve some of the most challenging problems faced by businesses.

  9. Zensar Technologies

Zensar Technologies integrates ML with its AI-powered platforms to support decision-making, enhance customer experience, and increase operational excellence in sectors like BFSI, healthcare, and manufacturing. Its rebranded R&D hub, Zensar AIRLabs, has identified three AI pillars (experience, research, and decision-making) where it applies ML to predictive analytics, fraud detection, and digital supply chain optimization.

  10. Mad Street Den

Mad Street Den is known for its AI-powered platform, Vue.ai, which provides intelligent automation across retail, finance, and healthcare. Blox, the company’s horizontal AI stack, uses computer vision and ML to enhance customer experience, increase operational efficiency, and reduce dependence on large data science teams. With a strong focus on scalable enterprise AI, Mad Street Den is turning global businesses AI-native through automation, predictive analytics, and real-time decision intelligence.

Conclusion:

India is witnessing a surge in its machine learning ecosystem, driven by innovation, scale, and sector-specific knowledge. From tech giants like TCS and Infosys to nimble disruptors like Mad Street Den and Fractal Analytics, these companies have redefined how industries operate through automated decision-making, outcome prediction, and personalized experiences. As they develop further into 2025, their contributions will not only help shape India’s digital economy but also put the country on the world map for AI and machine learning.

The post Top 10 Machine Learning Companies in India appeared first on ELE Times.

RISC-V basics: The truth about custom extensions

EDN Network - 6 hours 25 min ago

The era of universal processor architectures is giving way to workload-specific designs optimized for performance, power, and scalability. As data-centric applications in artificial intelligence (AI), edge computing, automotive, and industrial markets continue to expand, they are driving a fundamental shift in processor design.

Arguably, chipmakers can no longer rely on generalized architectures to meet the demands of these specialized markets. Open ecosystems like RISC-V empower silicon developers to craft custom solutions that deliver both innovation and design efficiency, unlocking new opportunities across diverse applications.

RISC-V, an open-source instruction set architecture (ISA), is rapidly gaining momentum for its extensibility and royalty-free licensing. According to Rich Wawrzyniak, principal analyst at The SHD Group, “RISC-V SoC shipments are projected to grow at nearly 47% CAGR, capturing close to 35% of the global market by 2030.” This growth highlights why SoC designers are increasingly embracing architectures that offer greater flexibility and specialization.

 

RISC-V ISA customization trade-offs

The open nature of the RISC-V ISA has sparked widespread interest across the semiconductor industry, especially for its promise of customization. Unlike fixed-function ISAs, RISC-V enables designers to tailor processors to specific workloads. For companies building domain-specific chips for AI, automotive, or edge computing, this level of control can unlock significant competitive advantages in optimizing performance, power efficiency, and silicon area.

But customization is not a free lunch.

Adding custom extensions means taking ownership of both hardware design and the corresponding software toolchain. This includes compiler and simulation support, debug infrastructure, and potentially even operating system integration. While RISC-V’s modular structure makes customization easier than legacy ISAs, it still demands architectural consideration and robust development and verification workflows to ensure consistency and correctness.

In many cases, customization involves additional considerations. When general-purpose processing and compatibility with existing software libraries, security frameworks, and third-party ecosystems are paramount, excessive or non-standard extensions can introduce fragmentation. Design teams can mitigate this risk by aligning with RISC-V’s ratified extensions and profiles, for instance RVA23, and then applying targeted customizations where appropriate.

When applied strategically, RISC-V customization becomes a powerful lever that yields substantial ROI by rewarding thoughtful architecture, disciplined engineering, and clear product objectives. Some companies devote full design and software teams to developing strategic extensions, while others leverage automated toolchains and hardware-software co-design methodologies to mitigate risks, accelerate time to market, and capture most of the benefits.

For teams that can navigate the trade-offs well, RISC-V customization opens the door to processors truly optimized for their workloads and to massive product differentiation.

Real world use cases

Customized RISC-V cores are already deployed across the industry. For example, Nvidia’s VP of Multimedia Arch/ASIC, Frans Sijstermans, described the replacement of their internal Falcon MCU with customized RISC-V hardware and software developed in-house, now being deployed across a variety of applications.

One notable customization is support for 2 KB pages in addition to the standard 4 KB pages, which yielded a 50% performance improvement for legacy code. Page-size changes like this are a clear example of modifications with system-level impact, from processor hardware to operating-system memory management.

Figure 1 The view of Nvidia’s RISC-V cores and extensions taken from the keynote “RISC-V at Nvidia: One Architecture, Dozens of Applications, Billions of Processors.”

Another commercial example is Meta’s MTIA accelerator, which extends a RISC-V core with application-specific instructions, custom interfaces, and specialized register files. While Meta has not published the full toolchain flow, the scope of integration implies an internally managed co-design methodology with tightly coupled hardware and software development.

Given the complexity of the modifications, the design likely leveraged automated flows capable of regenerating RTL, compiler backends, simulators, and intrinsics to maintain toolchain consistency. This reflects a broader trend of engineering teams adopting user-driven, in-house customization workflows that support rapid iteration and domain-specific optimization.

Figure 2 Meta’s MTIA accelerator integrates Andes RISC-V cores for optimized AI performance. Source: MTIA: First Generation Silicon Targeting Meta’s Recommendation Systems, A. Firoozshahian, et al.

Startup company Rain.ai illustrates that even small teams can benefit from RISC-V customization via automated flows. Their process begins with input files that define operands, vector register inputs and outputs, vector unit behavior, and a C-language semantic description. These instructions are pipelined, multi-cycle, and designed to align with the stylistic and semantic properties of standard vector extensions.

The input files are extended with a minimal hardware implementation and processed through a flow that generates updated core RTL, simulation models, compiler support, and intrinsic functions. This enables developers to quickly update kernels, compile and run them on simulation models, and gather feedback on performance, utilization, and cycle count.
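The shape of such an input file and the simulation half of the loop can be modeled in a few lines of Python. The mnemonic, field names, and semantics below are hypothetical illustrations of the idea, not Rain.ai's or Andes' actual format:

```python
import math

# Illustrative model of the flow described above: an "input file" naming the
# operands of a custom vector instruction plus a semantic description, and a
# tiny simulator that applies it to vector registers. The mnemonic and field
# names are hypothetical, not any vendor's actual specification format.

custom_insn = {
    "mnemonic": "vsigmoid.v",   # hypothetical custom vector instruction
    "inputs":   ["vs1"],        # source vector register
    "outputs":  ["vd"],         # destination vector register
    # The article describes a C-language semantic description; modeled here
    # as a Python function over one vector element.
    "semantics": lambda x: 1.0 / (1.0 + math.exp(-x)),
}

def simulate(insn, vregs, vd, vs1):
    """Apply the instruction's semantics element-wise, as a simulator would."""
    vregs[vd] = [insn["semantics"](x) for x in vregs[vs1]]

vregs = {"v1": [-1.0, 0.0, 1.0], "v2": None}
simulate(custom_insn, vregs, vd="v2", vs1="v1")
print([round(x, 3) for x in vregs["v2"]])  # [0.269, 0.5, 0.731]
```

In the real flow, the same specification would also drive generation of RTL, compiler intrinsics, and debug support, keeping hardware and toolchain in lockstep.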

By lowering the barrier to custom instruction development, this process supports a hardware-software co-design methodology, making it easier to explore and refine different usage models. This approach was used to integrate their matrix multiply, sigmoid, and SiLU acceleration in the hardware and software flows, yielding an 80% reduction in power and a 7x–10x increase in throughput compared to the standard vector processing unit.

Figure 3 Here is an example of a hardware/software co‑design flow for developing and optimizing custom instructions. Source: Andes Technology

Tools supporting RISC-V customization

To support these holistic workflows, automation tools are emerging to streamline customization and integration. For example, Andes Technology provides silicon-proven IP and a comprehensive suite of design tools to accelerate development.

Figure 4 ACE and CoPilot simplify the development and integration of custom instructions. Source: Andes Technology

Andes Custom Extension (ACE) framework and CoPilot toolchain offer a streamlined path to RISC-V customization. ACE enables developers to define custom instructions optimized for specific workloads, supporting advanced features such as pipelining, background execution, custom registers, and memory structures.

CoPilot automates the integration process by regenerating the entire hardware and software stack, including RTL, compiler, debugger, and simulator, based on the defined extensions. This reduces manual effort, ensures alignment between hardware and software, and accelerates development cycles, making custom RISC-V design practical for a broad range of teams and applications.

RISC-V’s open ISA broke down barriers to processor innovation, enabling developers to move beyond the constraints of proprietary architectures. Today, advanced frameworks and automation tools empower even lean teams to take advantage of hardware-software co-design with RISC-V.

For design teams that approach customization with discipline, RISC-V offers a rare opportunity: to shape processors around the needs of the application, not the other way around. The companies that succeed in mastering this co-design approach won’t just keep pace, they’ll define the next era of processor innovation.

Marc Evans, director of Business Development & Marketing at Andes Technology, brings deep expertise in IP, SoC architecture, CPU/DSP design, and the RISC-V ecosystem. His career spans hands-on processor and memory system architecture to strategic leadership roles driving the adoption of new IP for emerging applications at leading semiconductor companies.

Related Content

The post RISC-V basics: The truth about custom extensions appeared first on EDN.

Semiconductor Collabs Yield Design Wins, From Chiplets to Charging Speed

AAC - 15 hours 25 min ago
From high-performance EVs to low-power IoT modules and next-gen AI chiplets, three recent collaborations showcase how semiconductor innovation is driving new design frontiers.

Nuvoton Rolls Out 8-bit MCU With Rich Peripherals & High Noise Immunity

AAC - Mon, 08/11/2025 - 20:00
The NuMicro MG51 series brings enhanced I/O flexibility, analog precision, and EMI protection to industrial applications.

AXT appoints former director Leonard J. Leblanc as board member

Semiconductor today - Mon, 08/11/2025 - 17:55
AXT Inc of Fremont, CA, USA — which makes gallium arsenide (GaAs), indium phosphide (InP) and germanium (Ge) substrates and raw materials — has appointed Leonard J. Leblanc as a member of its board to fill the vacancy due to the passing of Christine Russell. He will serve as a Class III director with a maximum term expiring on 29 July 2027, or until his successor is duly elected and qualified...

Assessing vinyl’s resurrection: Differentiation, optimization, and demand maximization

EDN Network - Mon, 08/11/2025 - 17:32

As long-time readers may already realize from my repeat case study coverage of the topic, one aspect of the tech industry that I find particularly interesting is how suppliers react to the inevitable maturation of a given technology. Seeing all the cool new stuff get launched each year—and forecasting whether any of it will get to the “peak of inflated expectations” region of Gartner’s hype cycle, far from the “trough of disillusionment” beyond—is all well and good:

But what happens when a technology (and products based on it) makes it through the “slope of enlightenment” and ends up at the “plateau of productivity”? A sizeable mature market inevitably attracts additional participants: often great news for consumers, not so much for suppliers. How do the new entrants differentiate themselves from existing “players” with already-established brand names, without just dropping prices and triggering a “race to the bottom” that fiscally benefits no one? And how do those existing “players” combat the new entrants, leveraging (hopefully positive) existing consumer awareness and sustaining innovation to ensure that ongoing profits counterbalance upfront R&D and market-cultivation expenses?

The vinyl example

I’ve discussed such situations in the past, for example, with Bluetooth audio adapters and LED-based illumination sources. The situation I’m covering today, however, is if anything even more complicated. It involves a technology—the phonograph record—that in the not-too-distant past was well past the “plateau of productivity” and in a “death spiral”, the victim of more modern music-delivery alternatives such as optical discs and, later, online downloads and streams. But today? Just last night I was reading the latest issue of Stereophile Magazine (in print, by the way, speaking of “left for dead” technologies with recent resurgences), which included analysis of both Goldman Sachs’ most recent 2025 “Music In the Air” market report (as noted elsewhere, the most recent report available online as I write this is from 2024) and others’ reaction to it:

Analyses of the latest Goldman Sachs “Music in the Air” report show how the same news can be interpreted in different ways. Billboard sees it in a negative light: “Goldman Sachs Lowers Global Music Industry Growth Forecast, Wiping Out $2.5 Billion.” Music Business Worldwide is more measured, writing, “Despite revising some forecasts downward following a slower-than-expected 2024, Goldman Sachs maintains a positive long-term outlook for the music industry.”

 The Billboard article is good, but the headline is clickbait. The Goldman Sachs report didn’t wipe out $2.5 billion. Rather, it reported a less optimistic forecast, projecting lower future revenues than last year’s report projected: The value wiped out was never real.

Stereophile editor Jim Austin continues:

Most of this [2024] growth was from streaming. Worldwide streaming revenue exceeded $20 billion for the first time, reaching $20.4 billion. Music Business Worldwide points out that that’s a bigger number than total worldwide music revenue, from all sources, for all years 2003–2020. Streaming subscription revenue was the main source of growth, rising by 9.5% year over year. That reflects a 10.6% increase in worldwide subscribers, to 752 million.

But here’s the key takeaway (bolded emphasis mine):

Meanwhile, following an excellent 2023 for physical media—it was up that year by 14.5%—trade revenue from physical media fell by 3.1% last year. Physical media represented just 16% of trade revenues in 2024, down 2% from the previous year. Physical-media revenue in Asia—a last stronghold of music you can touch—also fell. What about vinyl records? Trade revenue from vinyl records rose by 4.4% year over year.

Now combine this factoid with another one I recently came across, from a presentation made by market research firm Luminate Data at the 2023 SXSW conference:

The resurgence of vinyl sales among music fans has been going on for some time now, but the trend marked a major milestone in 2022. According to data recently released by the Recording Industry Association of America (RIAA), annual vinyl sales exceeded CD sales in the US last year for the first time since 1987.

 Consumers bought 41.3 million vinyl records in the States in 2022, compared to 33.4 million compact discs…Revenues from vinyl jumped 17.2% YoY, to USD $1.2 billion in 2022, while revenues from CDs fell 17.6%, to $483 million.

Now, again, the “money quote” (bolded emphasis again mine):

In the company’s [Luminate Data’s] recent “Top Entertainment Trends for 2023” report, Luminate found that “50% of consumers who have bought vinyl in the past 12 months own a record player, compared to 15% among music listeners overall.” Naturally, this also means that 50% of vinyl buyers don’t own a record player.

Note that this isn’t saying that half of the records sold went to non-turntable-owners. I suspect (and admittedly exemplify) that turntable owners represent a significant percentage of total record unit sales (and profits, for that matter). But it’s mind-boggling to me that half the people who bought at least one record don’t even own a turntable to play it on. What’s going on?

Not owning a turntable obviates at least the majority of the rationale I proffered in one of last month’s posts for the other half of us:

There’s something fundamentally tactile-titillating and otherwise sensory-pleasing (at least to a memory-filled “old timer” like me) to carefully pulling an LP out of its sleeve, running a fluid-augmented antistatic velvet brush over it, lowering the stylus onto the disc and then sitting back to audition the results while perusing the album cover’s contents.

And of course, some of the acquisition activity by non-turntable-owners ends up as gifts for the other half of us. But there’s still that “perusing the album cover’s contents” angle, more generally representative of “collector” activity. It’s one of the factors I’ve lumped into the following broad categories, curated after my reconnection with vinyl and my ensuing observations of how musicians, and the record labels that represent (most of) them, differentiate an otherwise-generic product to maximize buyer acquisition, variant selection, and (for multi-variant collectors) repeat purchases.

Media deviations

Standard LPs (long-play records) weigh between 100 and 140 grams. Pricier “audiophile grade” pressings are thicker, therefore heavier, ranging between 180 and 220 grams. Does the added heft make any difference, aside from the subtractive impact on your bank account balance? The answer’s at best debatable; that said, I admittedly “go thick” whenever I have a choice. Then again, I also use a stabilizer even with new LPs, so any skepticism on your part is understandable:

Thicker vinyl, one could reasonably (IMHO, at least) argue, is more immune to warping effects. Also, as with a beefier plinth (turntable base), there’s decreased likelihood of vibration originating elsewhere (the turntable’s own motor, for example, or your feet striking the floor as you walk by) transferring to and being picked up by the stylus (aka “needle”), although the turntable’s platter mat material and thickness are probably more of a factor in this regard.

That all said, “audiophile grade” discs generally are not only thicker and heavier but also more likely to be made from “virgin” versus “noisier” recycled vinyl, a grade-of-materials differential which presumably has an even greater effect on sonic quality-of-results. Don’t underestimate the perceived quality differential between two products with different hefts, either.

And speaking of perceptions versus reality, when I recently started shopping for records again, I kept coming across mentions of “Pitman”, “Terre Haute” and various locales in Germany, for example. It turns out that these refer to record pressing plant locations (New Jersey and Indiana, in the first two cases), which some folks claim deliver(ed) differing quality of results, whether in general or specifically in certain timeframes. True or false? I’m not touching this one with a ten-foot pole, aside from reiterating a repeated past observation that one’s ears and brain are prone to rationalizing decisions and transactions that one’s wallet has already made.

Content optimization

One of the first LPs I (re-)bought when I reconnected with the vinyl infatuation of my youth was a popular classic choice, Fleetwood Mac’s Rumours. As I shopped online, I came across both the traditional one-disc and a more expensive two-disc variant; the latter I initially assumed was a “deluxe edition” that also included studio outtakes, alternate versions, live concert recordings, and the like. But, as it turned out, both options list the same 11 tracks. So, what was the difference?

Playback speed, it turned out. Supposedly, since a 45 rpm disc devotes more groove-length “real estate” to a given playback duration than its conventional 33 1/3 rpm counterpart, it’s able to encode a “richer” presentation of the music. The tradeoff, of course, is that the 45 rpm version more quickly uses up the available space on each side of an LP. Ergo, two discs instead of one.

More generally, a conventional 33 1/3 rpm pressing generally contains between 18 and 22 minutes of music per side. It’s possible to fit up to ~30 minutes of audio, both by leveraging “dead wax” space usually devoted solely to the lead-in and lead-out groove regions and by compressing the per-revolution groove spacing. That said, audio quality can suffer as a result, particularly with wide dynamic range and bass-rich source material.
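As a back-of-the-envelope sketch, the per-side playing time follows directly from groove geometry; the groove-band width and pitch figures below are illustrative assumptions, not pressing-plant specs:

```python
# Rough LP capacity estimate (illustrative numbers, not a cutting spec).
# Playable minutes per side = usable groove band / pitch advance per revolution,
# divided by revolutions per minute.

def minutes_per_side(band_mm: float, pitch_mm: float, rpm: float) -> float:
    """band_mm: radial width of the recorded area; pitch_mm: groove spacing."""
    revolutions = band_mm / pitch_mm      # total turns the stylus traverses
    return revolutions / rpm              # turns / (turns per minute) = minutes

# Assumed 12-inch LP figures: ~90 mm usable band, ~0.14 mm standard pitch
standard = minutes_per_side(90, 0.14, 100 / 3)   # 33 1/3 rpm
tight    = minutes_per_side(90, 0.10, 100 / 3)   # compressed groove spacing
single45 = minutes_per_side(90, 0.14, 45)        # same cut at 45 rpm

print(f"33 1/3 rpm, standard pitch: {standard:.0f} min/side")
print(f"33 1/3 rpm, tight pitch:    {tight:.0f} min/side")
print(f"45 rpm,     standard pitch: {single45:.0f} min/side")
```

With these assumed numbers, the standard cut lands in the familiar 18-to-22-minute range, tightening the pitch pushes toward ~27 minutes, and the same geometry at 45 rpm drops to roughly 14 minutes per side, which is why the 45 rpm Rumours needs two discs.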

The chronological contrast between a ~40-minute max LP and a 74-80-minute max Red Book Audio CD is obvious, particularly when you also factor in the added complications of keeping the original track order intact and preventing a given track from straddling both sides (i.e., not requiring that the listener flip the record over mid-song). The original pressing of Dire Straits’ Brothers in Arms, for example, shortened two songs in comparison to their audio CD forms to enable the album to fit on one LP. Subsequent remastered and reissued versions switched to a two-LP arrangement, enabling the representation of all songs in full. Radiohead’s Hail to the Thief, another example, was single-CD but dual-LP from the start, so as to not shorten and/or drop any tracks (the band’s existing success presumably gave it added leverage in this regard).

Remastering (speaking of which) is a common approach (often in conjunction with digitization of the original studio tape content, ironic given how “analog-preferential” many audiophiles are) used to encourage consumers to both select higher-priced album variants and to upgrade their existing collections. Jimmy Page did this, for example, with the Led Zeppelin songs found on the various “greatest hits” compilations and box sets released after the band’s discontinuation, along with reissues of the original albums. Even more substantial examples of the trend are the various to-stereo remixes of original mono content from bands like the Beach Boys and Beatles.

Half-speed mastering, done for some later versions of the aforementioned Brothers in Arms, is:

A technique occasionally used when cutting the acetate lacquers from which phonograph records are produced. The cutting machine platter is run at half of the usual speed (16 2⁄3 rpm for 33 1⁄3 rpm records) while the signal to be recorded is fed to the cutting head at half of its regular playback speed. The reasons for using this technique vary, but it is generally used for improving the high-frequency response of the finished record. By halving the speed during cutting, very high frequencies that are difficult to cut become much easier to cut since they are now mid-range frequencies.

And then there’s direct metal mastering, used (for example) with my copy of Rush’s Moving Pictures. Here’s the Google AI Overview summary:

An analog audio disc mastering technique where the audio signal is directly engraved onto a metal disc, typically copper, instead of a lacquer disc used in traditional mastering. This method bypasses the need for a lacquer master and its associated plating process, allowing for the creation of stampers directly from the metal master. This results in a potentially clearer, more detailed, and brighter sound with less surface noise compared to lacquer mastering.

Packaging and other aspects of presentation

Last, but definitely not least, let’s discuss the various means by which the music content encoded on the vinyl media is presented to potential purchasers as irresistibly as possible. I’ve already mentioned the increasingly common deluxe editions and other expanded versions of albums (I’m not speaking here of multi-album box sets). Take, for example, the 25th anniversary edition of R.E.M.’s Monster, which “contains the original Monster album on the first LP, along with a second LP containing Monster, completely remixed by original producer, Scott Litt, both pressed on 180 gram vinyl. Packaging features reimagined artwork by the original cover artist, Chris Bilheimer, and new liner notes, featuring interviews from members of the band.”

The 40th anniversary remaster of Rush’s previously mentioned Moving Pictures is even more elaborate, coming in multiple “bundle” options including a 5-LP version described as follows:

The third Moving Pictures configuration will be offered as a five-LP Deluxe Edition, all of it housed in a slipcase including a single-pocket jacket for the remastered original Moving Pictures on LP 1, and two gatefold jackets for LPs 2-5 that comprise all 19 tracks from the complete, unreleased Live In YYZ 1981 concert. As noted above, all vinyl has been cut for the first time ever via half-speed Direct to Metal Mastering (DMM) on 180-gram black audiophile vinyl. Extras include a 24-page booklet with unreleased photos, [Hugh] Syme’s reimagined artwork and new illustrations, and the complete liner notes.

Both Target and Walmart also sell “exclusive vinyl” versions of albums, bundled with posters and other extras. Walmart’s “exclusive” variant of Led Zeppelin’s Physical Graffiti, for example, includes a backstage pass replica:

More generally, although records traditionally used black-color vinyl media, alternate-palette and -pattern variants are becoming increasingly popular. Take a look, for example, at Walmart’s appropriately tinted version of Amy Winehouse’s Back to Black:

You’ve gotta admit, that looks pretty cool, right?

 I’m also quite taken with Target’s take on the Grateful Dead’s American Beauty:

Countless other examples exist, some attractive and others garish (IMHO, although you know the saying: “beauty is in the eye of the beholder”): eye candy tailored for spinning on your turntable or, if you don’t have one (per my earlier factoid), displaying on your wall. That said, why Lorde and her record label extended the concept to a completely clear CD of her just-introduced album, seemingly fundamentally incompatible with the need for a reflective media substrate for laser pickup purposes, is beyond me…

Broader relevance

This write-up admittedly ended up being much longer than I’d originally intended! To some degree, it reflects the diversity of record-centric examples that my research uncovered. But as with the Bluetooth audio adapter and LED-based illumination case studies that preceded it, I think it effectively exemplifies one industry’s attempts to remain relevant (twice, in this case!) and maximize its ROI in response to market evolutions. What do you think of the record industry’s efforts to redefine itself for the modern consumer era? And what lessons can you derive for your company’s target markets? Sound off with your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content

The post Assessing vinyl’s resurrection: Differentiation, optimization, and demand maximization appeared first on EDN.

Top 10 Machine Learning Applications and Use Cases

ELE Times - Mon, 08/11/2025 - 13:30

Machine learning is a powerful branch of computer science that teaches systems to identify patterns and gradually improve their performance without requiring explicit programming for every situation. Rather than following rigidly set rules, these systems take in data, predict an outcome, and adjust their behavior based on what they have learned.

This flexibility is what makes machine learning stand out among major technological developments: a machine learns from data and improves with experience, without being explicitly programmed. The patterns that machine learning models discover in data are used for forecasting and decision-making, helping companies automate processes, make better decisions, and glean insights. From personalized content recommendations to breakthroughs in medical diagnostics, machine learning is transforming industries worldwide. Here are the top 10 machine learning applications and use cases shaping the world today.

  1. Personalized Recommendations

Online retailers and streaming sites now build recommendation engines that draw on data such as location and past activity.

Machine learning powers recommendation engines that suggest products, movies, or music according to a user’s past behavior. These systems rely on techniques such as collaborative filtering and content-based filtering to personalize the experience.

Use Case:

Netflix recommends shows and movies based on what you have watched, while Amazon suggests items that are frequently purchased together.
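The collaborative-filtering idea can be sketched in a few lines; the users, titles, and ratings below are made up for illustration, and production engines use far richer models:

```python
# Minimal user-based collaborative filtering sketch (toy data, not a real engine).
import math

ratings = {  # user -> {item: rating}; all names are invented for illustration
    "ana":  {"matrix": 5, "inception": 4, "shrek": 1},
    "ben":  {"matrix": 4, "inception": 5, "dune": 4},
    "cara": {"shrek": 5, "frozen": 4},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    return dot / (math.sqrt(sum(x * x for x in u.values())) *
                  math.sqrt(sum(x * x for x in v.values())))

def recommend(user, k=2):
    """Score unseen items by similarity-weighted ratings from other users."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], theirs)
        for item, r in theirs.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("ana"))  # ['dune', 'frozen'] — ana's tastes overlap ben's most
```

Because ana and ben agree on two titles, ben's unseen pick ranks first; that similarity-weighted vote is the core of the "people like you also liked" mechanic.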

  2. Fraud Detection

Banks use ML in real time to detect and prevent fraud. These systems analyze patterns and deviations from normal transaction behavior so that banks and credit card companies can flag suspicious activity, such as money laundering or unusual spending.

Use Case:

Mastercard, for instance, uses AI to detect possible fraud in real time and, under some circumstances, even predict it before it occurs, protecting customers from theft.
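As a toy illustration of the deviation-from-baseline idea (real fraud models are vastly more sophisticated), a simple z-score check flags transactions far outside a customer's invented purchase history:

```python
# Toy anomaly check for card transactions: flag amounts far from a customer's
# historical mean. A stand-in for the far richer models banks actually deploy.
import statistics

history = [12.5, 8.0, 23.4, 15.0, 9.9, 31.2, 18.7, 11.3]  # past amounts (made up)

def is_suspicious(amount: float, past: list, z_threshold: float = 3.0) -> bool:
    """True if the amount sits more than z_threshold std-devs from the mean."""
    mean = statistics.mean(past)
    sd = statistics.stdev(past)
    return abs(amount - mean) / sd > z_threshold

print(is_suspicious(14.0, history))    # typical purchase -> False
print(is_suspicious(950.0, history))   # large outlier    -> True
```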

  3. Predictive Maintenance

Machine learning is widely used in industry to forecast equipment failure before it happens. By analyzing sensor data, such models predict machines’ maintenance requirements, reducing downtime and saving costs.

Use Case:

Airlines keep track of engine performance to schedule repairs proactively.

  4. Healthcare & Medical Diagnosis

ML helps doctors diagnose diseases faster and more precisely. It analyzes medical imaging or patient records to detect conditions such as tumors or diabetes early, and tools that recommend personalized treatments are increasingly in use. Machine learning also anticipates interactions between substances, speeding up drug discovery and cutting research expenses.

Use Case:

AI imaging systems spot tumors in X-rays or MRIs, while predictive models identify patients at risk of diabetes.

  5. Autonomous Vehicles

Machine learning interprets sensor data, performs object recognition, and drives closed-loop decision-making in self-driving cars. Companies such as Tesla and Waymo employ computer vision and reinforcement learning to drive autonomously and to provide autonomous ride services.

Use Case:

Tesla Autopilot applies deep learning for semi-autonomous driving, including features such as lane-keeping assistance and adaptive cruise control.

  6. Natural Language Processing (NLP)

NLP enables machines to understand, interpret, and generate human language. It is employed in chatbots, voice assistants, sentiment analysis, and translation tools.

Use Case:

For instance, GPT-based models can write essays, summarize articles, or answer questions with human-like fluency. NLP bridges the gap between human communication and machine understanding.

  7. Facial Recognition

Machine learning enables facial recognition systems to identify individuals by detecting and classifying the faces found in images and video.

Use Case:

It is widely used in smartphones for unlocking, at airports for security checks, and by law enforcement agencies; it is, however, highly controversial in terms of ethics, privacy, and surveillance.

  8. Sentiment Analysis

Another important application of machine learning is sentiment analysis of social media data. Sentiment analysis determines, in real time, the feelings or opinions of a writer or speaker.

Use Case:

A sentiment analyzer can quickly surface the true meaning and tone of a published review, email, or other document. Such tools feed decision-making applications and websites that aggregate reviews.
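A minimal lexicon-based scorer illustrates the idea; the word lists below are made-up assumptions, and production systems learn sentiment weights from data rather than hard-coding them:

```python
# Tiny lexicon-based sentiment scorer; real systems learn these weights from data.
POSITIVE = {"great", "love", "excellent", "fast"}
NEGATIVE = {"bad", "slow", "broken", "hate"}

def sentiment(text: str) -> str:
    """Label text by counting positive vs negative lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("great product love the fast shipping"))   # positive
print(sentiment("broken on arrival and support was slow"))  # negative
```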

  9. Spam Filtering and Email Automation

ML is used by email services for message categorization and spam detection. These models learn from user behavior and message content to distinguish genuine emails from junk, saving time and keeping users safe from scams.

Use Case:

Email platforms like Gmail, Outlook, and Yahoo manage inboxes by automating responses and filtering out unwanted messages with high precision.
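One classic technique for the junk-versus-genuine distinction is naive Bayes; this sketch uses a tiny made-up corpus and is nowhere near the scale or sophistication of a real mail filter:

```python
# Minimal naive Bayes spam filter sketch (toy corpus, illustrative only).
import math
from collections import Counter

spam = ["win free money now", "free prize claim now", "cheap money offer"]
ham  = ["meeting agenda for monday", "lunch tomorrow soon", "project status update"]

def train(docs):
    words = Counter(w for d in docs for w in d.split())
    return words, sum(words.values())

def log_prob(text, words, total, vocab_size):
    # Laplace smoothing so unseen words don't zero out the product
    return sum(math.log((words[w] + 1) / (total + vocab_size))
               for w in text.split())

s_words, s_total = train(spam)
h_words, h_total = train(ham)
vocab_size = len(set(s_words) | set(h_words))

def classify(text: str) -> str:
    s = log_prob(text, s_words, s_total, vocab_size)
    h = log_prob(text, h_words, h_total, vocab_size)
    return "spam" if s > h else "ham"

print(classify("claim your free money"))     # spam
print(classify("status update for monday"))  # ham
```

Each class simply votes with the (smoothed) frequency of the message's words in its training set; the class with the higher log-likelihood wins.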

10. Social Media Optimization

ML is used by social media companies to target advertisements, identify harmful content, and curate content feeds. Feeds are algorithmically curated based on user engagement, and the same engines decide ad placement. This keeps users hooked, but it also fuels debate about algorithmic bias and user mental health.

Use Case:

Social media platforms like Facebook, Instagram, and Twitter employ machine learning to improve the user experience by curating personalized content, targeting advertisements, and restraining harmful posts.

Conclusion:

Machine learning is revamping industries by enabling smarter decisions, smarter experiences, and smarter predictions. From healthcare to finance to social media, it now sits at the core of how people live and work. And as adoption increases, so does the need for ethical, responsible use to ensure these powerful benefits are distributed fairly.

The post Top 10 Machine Learning Applications and Use Cases appeared first on ELE Times.

Turn pedals into power: A practical guide to human-powered energy

EDN Network - Mon, 08/11/2025 - 10:25

With a pedal generator, you can turn human effort into usable energy—ideal for off-grid setups, emergency backups, or just a fun DIY project. This guide gives you a fast-track look at how pedal generators work and how to build one on your own. Let’s turn motion into power!

Pedal generators, also known as pedal power generators, convert human kinetic energy into usable electrical power through a straightforward electromechanical process. As the user pedals, a rotating shaft drives a DC generator or alternator, producing voltage proportional to the speed and torque applied. A flywheel may be integrated to smooth out fluctuations, while a rectifier and voltage regulator ensure stable output for charging batteries or powering devices.

Figure 1 A commercial pedal generator delivers power through a standard 12-V automotive outlet. Source: K-Tor

Below is the blueprint of a basic pedal-powered generator built around a standard bicycle dynamo (bottle dynamo). It produces electricity as you pedal—using either your legs or arms—which can be used to charge small batteries or power portable electronics.

Figure 2 This blueprint illustrates how a basic pedal-powered generator works. Source: Author

It’s worth noting that a quick test was performed using the L-7113ID-5V LED as the test lamp/minimal load. Although overall efficiency varies with load and pedaling cadence, the system provides a hands-on demonstration of energy conversion ideal for educational setups.

Chances are you have already spotted that a DC motor can also function as a generator, and that DC motors specifically designed for this purpose are now readily available. Below is a slightly enhanced version of the pedal generator built around a compact three-phase brushless DC (BLDC) motor.

Figure 3 A modestly upgraded pedal generator built around a three-phase brushless DC motor supplies unfiltered DC voltage for further conditioning. Source: Author

Just a quick note: If you are using a linear regulator, the small forward voltage drop you get from a Schottky diode (usually just a few tenths of a volt) does not really move the needle on efficiency. That’s because the regulator itself is dropping a lot more voltage across its control element. Where it does matter is when you are working with a low-dropout (LDO) regulator and trying to keep the output voltage as close as possible to the raw DC input. In that case, every little bit helps.
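The headroom arithmetic behind this point can be sketched as follows; the supply voltages and the 0.3-V dropout figure are illustrative assumptions, so check your regulator's datasheet for real numbers:

```python
# Why a Schottky barely helps a linear regulator's efficiency: the chain's
# overall efficiency is just v_out / v_raw no matter where the drop happens.
# What the lower diode drop buys is *headroom* -- critical only near an LDO's
# dropout limit. (Illustrative numbers; consult real datasheets.)

def headroom(v_raw: float, diode_drop: float, v_out: float, dropout: float) -> float:
    """Voltage margin left above the regulator's minimum required input."""
    return (v_raw - diode_drop) - (v_out + dropout)

V_OUT = 5.0
print(f"Overall efficiency of any linear chain from 9 V: {V_OUT / 9.0:.1%}")

# Generous raw supply: either diode leaves plenty of margin
for drop in (0.7, 0.3):
    print(f"9.0 V raw, {drop} V diode: headroom {headroom(9.0, drop, V_OUT, 0.3):+.1f} V")

# Tight raw supply into an assumed 0.3-V-dropout LDO: only the Schottky survives
for drop in (0.7, 0.3):
    print(f"5.8 V raw, {drop} V diode: headroom {headroom(5.8, drop, V_OUT, 0.3):+.1f} V")
```

At 9 V in, both diodes leave volts of margin and the regulator burns the difference either way; at 5.8 V in, the silicon diode pushes the LDO out of regulation while the Schottky keeps it (just barely) alive.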

Also, it’s worth noting that readily available three-phase AC micro-generators can serve as viable substitutes, assuming they match your system’s specs. A typical example is the CrocSee Micro 3-phase brushless AC generator (Figure 4).

Figure 4 The micro generator’s internal view shows how elegant engineering simplifies complexity. Source: Author

To set expectations, pedal power is not sufficient to run an entire house, but it can be surprisingly useful. You can generate electricity for powering small devices and recharging batteries, all while using them. Pedal-powered generators can also work in tandem with other renewable sources, such as solar, to create a more versatile and sustainable setup.

On a related note, a pedal-powered bicycle generator (bike generator) is a practical solution that doubles as both an energy source and an exercise machine for household use. There are many ways to build a household bicycle generator, each offering its own set of advantages and trade-offs. Fortunately, even with basic tools and skills, constructing a functional bicycle generator is relatively straightforward.

Figure 5 A simple drawing shows how a household bicycle generator turns pedaling into electricity using a PMDC motor and a friction roller. Source: Author

Keep in mind that a flywheel can be a crucial component in this setup, as the dynamics of pedaling a stationary bicycle differ markedly from those of riding on the road. The flywheel helps smooth out the mechanical input, making the energy conversion process more consistent.

To convert this mechanical energy into electricity, a collector motor (Permanent Magnet DC Motor) serves well as a generator, offering reliable performance and simplicity. Alternatively, you can use a bicycle hub dynamo instead of the collector motor, but this demands some expertise.

Since the flywheel contributes to maintaining a relatively steady voltage output, it’s often feasible to run certain appliances directly from the generator, especially those that can tolerate raw, unregulated voltage. However, electronic devices and batteries are more sensitive to voltage fluctuations. Without proper regulation, they may malfunction or suffer damage, making a voltage regulator or controller a crucial addition to the system.

For a DC output pedal generator, such as the bicycle generator discussed here, a shunt regulator is the more suitable choice. Its ability to clamp excess voltage and safely dissipate surplus energy provides a critical layer of protection that a series regulator simply does not offer. Given the variable and often unpredictable nature of human-powered generation, overvoltage is a real concern, and the shunt regulator is specifically designed to handle this risk.

While a series regulator may offer slightly better efficiency under full load, its inability to manage voltage spikes or operate reliably without a constant load makes it less appropriate for this kind of setup. In contrast, the shunt regulator delivers consistent performance and robust overcharge protection, making it the safer and more practical option for a simple pedal generator system.

Additionally, in certain low-voltage, low-current systems that harvest energy from kinetic sources, pulse frequency modulation (PFM) modules can efficiently manage both power storage and delivery. These modules are particularly useful when energy input is sporadic or minimal, helping to optimize performance in compact or intermittent-generation setups.

Many folks working with motors might be surprised to learn that both brushed DC motors and brushless DC motors can actually function as generators. A brushed DC motor is a solid choice when you need a DC voltage output, while a BLDC motor is better suited for generating AC. If you are using a brushless DC motor to get DC output, you will need a rectifier circuit. On the flip side, if you are trying to get AC from a brushed DC motor, you will need DC-to-AC conversion electronics.
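To see why the rectifier is needed, and what it produces, here is a sketch of an ideal three-phase full-bridge output; diode drops are ignored and the amplitude is an arbitrary assumption:

```python
# Sketch of three-phase full-bridge rectification: the bridge passes the most
# positive phase minus the most negative one, yielding a six-pulse DC waveform
# with roughly 13% peak-to-peak ripple (ideal diodes assumed).
import math

def rectified(theta: float, amplitude: float = 10.0) -> float:
    """Ideal full-bridge output for three phases 120 degrees apart."""
    phases = [amplitude * math.sin(theta + k * 2 * math.pi / 3) for k in range(3)]
    return max(phases) - min(phases)

samples = [rectified(2 * math.pi * n / 1000) for n in range(1000)]
v_min, v_max = min(samples), max(samples)
print(f"output range: {v_min:.2f} .. {v_max:.2f} V")
print(f"peak-to-peak ripple: {(v_max - v_min) / v_max:.1%}")
```

The output never falls to zero, which is why modest filtering after the bridge is usually enough before the voltage-conditioning stage shown in Figure 3.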

Moreover, it’s often assumed that a brushed DC motor running in generator mode is far less efficient than when it is driving a load as a motor. But with the right motor selection, load matching, and operating speed, you can achieve surprisingly good efficiency. Just be sure to consider both electrical and mechanical factors when dialing in your operating conditions.

See below a simplified system diagram of a practical pedal-power generator.

Figure 6 Here is a system diagram of a pedal generator that helps you build your own version. Source: Author

The core principle is straightforward: the raw input voltage (VT) is continuously monitored and compared against a stable reference voltage (VR). When VT exceeds VR, a power MOSFET activates the dump load, which must be capable of safely dissipating the excess energy.

Conversely, when VT falls below the reference, the dump load is deactivated. To prevent rapid switching near the threshold, it’s advisable to incorporate a small amount of hysteresis into the comparator circuit.
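The threshold-plus-hysteresis logic can be sketched in software terms; the 14.4-V reference and 0.6-V band below are illustrative assumptions for a 12-V battery bank, not recommendations:

```python
# Sketch of the dump-load comparator with hysteresis described above.
V_REF = 14.4        # upper threshold: start dumping (assumed 12-V lead-acid bank)
HYSTERESIS = 0.6    # dump stays on until VT falls below V_REF - 0.6

def make_controller():
    state = {"dump_on": False}
    def update(v_t: float) -> bool:
        if v_t > V_REF:
            state["dump_on"] = True       # MOSFET gate high: burn surplus energy
        elif v_t < V_REF - HYSTERESIS:
            state["dump_on"] = False      # well below threshold: release the load
        return state["dump_on"]           # inside the band: hold the last state
    return update

ctl = make_controller()
for v in [13.9, 14.5, 14.1, 13.7, 14.0]:
    print(f"{v:.1f} V -> dump {'ON' if ctl(v) else 'off'}")
```

Note the third sample: 14.1 V sits between the two thresholds, so the dump load stays on rather than chattering, which is exactly what the hysteresis band buys you.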

Now it’s over to you; review it, experiment, and bring your own version to life. Keep pedaling forward!

T. K. Hareendran is a self-taught electronics enthusiast with a strong passion for innovative circuit design and hands-on technology. He develops both experimental and practical electronic projects, documenting and sharing his work to support fellow tinkerers and learners. Beyond the workbench, he dedicates time to technical writing and hardware evaluations to contribute meaningfully to the maker community.

Related Content

The post Turn pedals into power: A practical guide to human-powered energy appeared first on EDN.

Two-Stage Review of the Activities of Igor Sikorsky Kyiv Polytechnic Institute

News - Mon, 08/11/2025 - 10:00

A comprehensive two-stage review of the university's activities, initiated by its own administration, was recently completed at Igor Sikorsky Kyiv Polytechnic Institute (КПІ ім. Ігоря Сікорського).

First Project: Bluetooth Speaker

Reddit:Electronics - Mon, 08/11/2025 - 00:24

Hey all! This is my first project and my first post here. I know it's a simple project, but I'm still really proud of how it turned out and wanted to share.

My friend and I are making a Bluetooth speaker for calls. Unfortunately, we assumed that audio was audio, so any audio amp would work for calls, but it turns out different amps are needed for calls, so all I could play on this one was music.

First, I put it all together with the breadboard and tape, and it was working, but the signal was spotty owing to loose connections under the tape. So, I decided to solder the connections for a more continuous signal.

These are standard jumper wires from an Arduino starter kit; I presume you're not really supposed to solder them. But this was a throwaway prototype, I had plenty of wires, and I wanted to get experience soldering quickly, so I just did it and tried to desolder them afterward.

All in all, considering this was my first time soldering and I only burned myself once, I'm prepared to call this a success.

I know this setup doesn't look very safe; it was all done very impromptu. My friend probably has a better setup, but he wasn't available, so next time I'd like to do this at his place. If I keep doing this on my own, I'll go outside until I get a better setup.

Video Link: https://imgur.com/a/OUeYEi9

Song: Can You Hear the Whistle Blow by Default (缺省)
https://open.spotify.com/track/2bJjScKqL6XqhwL30X2SaZ?si=a9feec2f349c4391

缺省 Default - Can You Hear The Whistle Blow (Official MV)

Components:

XJ8002 Power Amplifier: 10PCS/LOT HXJ8002 Power Amplifier Board Mini Audio Voice Amplifier Module Replace PAM8403 - AliExpress 502
Bluetooth Audio Receiver Board VHM-314 (Type-C model): Bluetooth Audio Receiver Board VHM-314 Bluetooth 5.0 MP3 Lossless Decoder Board Wireless Stereo Music Module 3.7-5V - AliExpress 44
Speaker: 5pcs/lot New Ultra-thin Mini Speaker 4 Ohms 2 Watt 2w 4r Speaker Diameter 40mm 4cm Thickness 5mm - Acoustic Components - AliExpress

Breadboard, jumper wires, & 1k ohm resistors from REXQualis Starter Kit for R3 Project: Amazon.com: REXQualis Super Starter Kit Based on Arduino UNO R3 with Tutorial and Controller Board Compatible with Arduino IDE : Electronics

submitted by /u/Marcus_Meditates

My hand is cursed (rant)

Reddit:Electronics - Sun, 08/10/2025 - 22:01

I'm not sure if this is allowed to be posted here. I was just scrolling and deleting pics from my phone when I found old pics of my uni class work and projects that somehow went wrong so often while I did nothing wrong. I'm pretty confident with my wiring and circuit building because I used to get correct results every time, but in my third year things just went weird in my hands. I have officially broken THREE breadboards and TWO Arduino Uno boards. Context behind the second pic: I was building the circuit from the textbook, but halfway through, when I inserted a new jumper wire into the ground row, it sparked. No power source, all machines were off. I told the lab assistant about my problem and he didn't believe it until he did what I did. In the end he just told me to buy a new breadboard.

Whenever I retold this story to my seniors or friends from the same major, they kept telling me "stop making up weird stories, I've had my breadboard since high school", "did you break the Arduino board in half? that thing is impossible to break", yada yada yada.

The project with the Arduino was very important to me since it was a mandatory final project. Even the simplest command would go wrong while there was nothing wrong physically: the wrong LED lit up while it wasn't connected to the wiring I was testing, I got a 100% sensor reading while I hadn't exposed the sensor to anything yet, and most frustrating was how often it sent me a failed-upload message even after I had reset it, changed the wires, and cleaned the ports, multiple times. The morning before the presentation everything finally worked, but it had to be run separately (I used 3 sensors), so I quickly documented everything for the PPT attachments, but holy shit, that evening it wouldn't let me run it again. So I ended up showing up with only my poster and PPT (the paper was submitted via web). Honestly, I'm still very thankful that the presentation was not graded; I just needed to show up and present it to the guests. It's just a mandatory project for the semester with progress reports every week, and my professor said not to think about it too much, since he saw every weird thing from my project (he is also a very nice person). I can still remember showing up for the biweekly progress presentation just to show a video of me trying to display the sensor readings, which came out different in every attempt, and getting stared at by my project mates (one professor could take 5 groups of students; I volunteered to do the project alone since I'm an international student and tried to avoid any miscommunication). That was the last time I touched any hardware. I haven't graduated yet since I failed a lot of classes, which makes me wonder if these 4 years of uni were actually worth the struggle.

Thanks to everyone who read this to the end.

submitted by /u/Dry-Union5199

Estimating FM Bandwidth: Solved Examples

AAC - Sun, 08/10/2025 - 20:00
In this article, we'll illustrate the usefulness of Carson's rule for bandwidth estimation by working through a series of example problems.
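Carson's rule itself is compact enough to sketch here; the numbers below are the standard broadcast-FM textbook case, shown as an illustration rather than a summary of the article's worked problems:

```python
# Carson's rule sketch: B ~= 2 * (delta_f + f_m), the classic FM bandwidth
# estimate from peak deviation and highest modulating frequency.

def carson_bandwidth(delta_f_hz: float, f_m_hz: float) -> float:
    """Estimated occupied bandwidth per Carson's rule."""
    return 2 * (delta_f_hz + f_m_hz)

# Broadcast FM: 75 kHz peak deviation, 15 kHz audio bandwidth
b = carson_bandwidth(75e3, 15e3)
print(f"Estimated bandwidth: {b / 1e3:.0f} kHz")  # 180 kHz
```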

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 08/09/2025 - 18:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator

Veeco’s Q2 revenue, operating income and EPS exceed guidance, but constrained by tariffs

Semiconductor today - Fri, 08/08/2025 - 20:03
For second-quarter 2025, epitaxial deposition and process equipment maker Veeco Instruments Inc of Plainview, NY, USA has reported revenue of $166.1m, down slightly on $167.3m last quarter and 6% on $175.9m a year ago, but exceeding the $135–165m guidance...

Renesas Intros 64-bit MPU Aimed at AI-Centric High-Performance HMI Designs

AAC - Fri, 08/08/2025 - 20:00
The new RZ/G3E MPU with quad CPU and NPU powers next-generation HMI devices with advanced processing.

Pages

Subscribe to the Department of Electronic Engineering (Кафедра Електронної Інженерії) content aggregator