News from the world of micro- and nanoelectronics

Cree LED licenses LED display technology to Daktronics

Semiconductor today - Tue, 12/17/2024 - 19:07
Cree LED Inc of Durham, NC, USA (a Penguin Solutions brand) has announced a multi-year, global agreement providing (with limited exceptions) a license for its patented technology to Daktronics of Brookings, SD, USA (which provides large-screen video displays, electronic scoreboards, LED text and graphics displays, and related control systems)...

X-FAB launches next-gen silicon carbide process platform for power MOSFET designs

Semiconductor today - Tue, 12/17/2024 - 18:27
As its next-generation XbloX platform, analog/mixed-signal, MEMS and specialty foundry X-FAB Silicon Foundries SE of Tessenderlo, Belgium has launched XSICM03 (available now for early access), advancing silicon carbide (SiC) process technology for power metal-oxide-semiconductor field-effect transistors (MOSFETs). A significantly reduced cell pitch enables increased die per wafer and improved on-state resistance without compromising reliability, the firm says...

Spain gains European approval for €81m subsidy for Diamond Foundry Europe’s new fab

Semiconductor today - Tue, 12/17/2024 - 17:10
The European Commission has approved, under European Union (EU) State aid rules, a direct grant of €81m from Spain to support Diamond Foundry Europe (a subsidiary of Diamond Foundry Inc of San Francisco, CA, USA) in setting up a new factory — involving a total investment of about €675m — for the production of semiconductor-grade rough synthetic diamonds in Trujillo in Extremadura, an area eligible for regional aid under Article 107(3)(a) of the Treaty on the Functioning of the EU (TFEU)...

2025: A technology forecast for the year ahead

EDN Network - Tue, 12/17/2024 - 17:00

As has been the case the last couple of years, we’re once again flip-flopping what might otherwise seemingly be the logical ordering of this and its companion 2024 look-back piece. I’m writing this 2025 look-ahead in November for December publication, with the 2024 revisit to follow, targeting a January 2025 EDN unveil. While a lot can happen between now and the end of 2024, potentially affecting my 2025 forecasting in the process, this reordering also means that my 2024 retrospective will be more comprehensive than might otherwise be the case.

That all said, I did intentionally wait until after the November 5 United States elections to begin writing this piece. Speaking of which…

The 2024 United States election (outcome, that is)

Yes, I know I covered this same topic a year ago. But that was pre-election. Now, we know that there’s been a dominant political party transition both in the Executive Branch (the President and Vice President) and the Legislative Branch (the Senate, to be specific). And the other half of the Legislative Branch, the House of Representatives, will retain a (thin) Republican Party ongoing majority, final House results having been determined just as I type these words a bit more than a week post-election. As I wrote a year ago:

Trump aspires to fundamentally transform the U.S. government if he and his allies return to power in the executive branch, moves which would undoubtedly also have myriad impacts big and small on technology and broader economies around the world.

That said, a year ago I also wrote:

I have not (and will not) reveal personal opinions on any of this.

and I will be “staying the course” this year. So then why do I mention it at all? Another requote:

Americans are accused of inappropriately acting as if their country and its citizens are the “center of the world”. That said, the United States’ policies, economy, events, and trends inarguably do notably affect those of its allies, foes and other countries and entities, as well as the world at large, which is why I’m including this particular entry in my list.

Given that I’m clearly not going to be diving into other hot-button topics like immigration here, what are some of the potential technology impacts to come in 2025 and beyond? Glad you asked. Here goes, solely in the order in which they’ve streamed out of my noggin:

  • Network Neutrality: Support for net neutrality, which Wikipedia describes as “the principle that Internet service providers (ISPs) must treat all Internet communications equally, offering users and online content providers consistent transfer rates regardless of content, website, platform, application, type of equipment, source address, destination address, or method of communication (i.e., without price discrimination)” predictably waxes or wanes depending on which US political party—Democratic or Republican, respectively—is in power at any point in time. As such, it’s likely that any momentum that’s built up toward ISP regulation over the past four years will fade and likely even reverse course at the Federal Communications Commission (FCC) in the four-year Presidential term to come, along with course reversals of other technology issues over which the FCC holds responsibility. Note that the “ISP” acronym, traditionally applied to copper, coax and fiber wired Internet suppliers, has now expanded to include cellular and satellite service providers, too.
  • Tariffs: Wikipedia defines tariffs on imports, which is what I’m primarily focusing on here, as “designed to raise the price of imported goods and services to discourage consumption. The intention is for citizens to buy local products instead, thereby stimulating their country’s economy. Tariffs therefore provide an incentive to develop production and replace imports with domestic products. Tariffs are meant to reduce pressure from foreign competition and reduce the trade deficit.” The Trump administration, during his first term from 2017 to 2021, activated import tariffs on countries—notably China—and products determined to be running a trade surplus with the United States (tariffs which, in fairness, the subsequent Biden administration kept in place in some cases and to some degrees). And Trump has emphatically stated his intent to redouble his efforts here in the coming term, with proposed rates ranging up to 60%. The potential resultant “squeeze” problem for US domestic suppliers is multifold:
    • Tariff-penalized countries are likely to respond in kind with import tariffs of their own, hampering US companies’ abilities to compete in broader global markets
    • Those countries are likely to also tariff-tax exports (to the United States, specifically) of both product “building blocks” designed and manufactured outside the US—such as semiconductors and lithium batteries—and products built by subcontractors in other countries—like smartphones.
    • And broader supply-constraint retaliation, beyond fiscal encumbrance, is also likely to occur in areas where other countries already have global market share dominance due to supply abundance and high-volume manufacturing capacity: China once again, with solar cells, for example, along with rare earth minerals.

Perhaps this is why Wikipedia also notes that “There is near unanimous consensus among economists that tariffs are self-defeating and have a negative effect on economic growth and economic welfare, while free trade and the reduction of trade barriers has a positive effect on economic growth…Often intended to protect specific industries, tariffs can end up backfiring and harming the industries they were intended to protect through rising input costs and retaliatory tariffs.” Much will likely depend on whether the tariffs to be applied will be selective and scalpel-like versus broadly wielded as blunt instruments.

  • Elon Musk (and his various companies): Musk spent an estimated $200M financially backing Trump’s campaign, not to mention the multiple rallies he spoke at and the formidable virtual megaphone of his numerous posts on X, the social media site formerly known as Twitter, which he owns. A week post-election, the return on his investment is already starting to become evident. What forms could it take?
  • Asia-based foundries: Taiwan, the birthplace of TSMC, and South Korea, headquarters of Samsung, are among the world’s largest semiconductor suppliers. Of particular note, as foundries they manufacture ICs for fabless chip companies, large and small alike. And although both companies are aggressively expanding their fab networks elsewhere in the world, their original home-country locations remain critical to their ongoing viability. Unfortunately, those locations are also rife with ongoing political tensions and invasion threats, whether from the People’s Republic of China (Taiwan) or North Korea (South Korea). All of which will make the Trump administration’s upcoming actions critical. Last summer, during an interview with Bloomberg, then-candidate Trump indicated that Taiwan should be paying the United States to defend it, that in this regard the US was “no different than an insurance company”, and that Taiwan “doesn’t give us anything”, accusing it of taking “almost 100%” of the US’s semiconductor industry. And during his first term, Trump also cultivated a relationship with North Korean dictator Kim Jong Un.
  • Ongoing CHIPS funding: Shortly before the election, and in seeming contradiction to Republican party leader Trump’s earlier noted expressed regret about lost US semiconductor dominance, then (and likely again) House of Representatives Speaker (and fellow Republican) Mike Johnson indicated that the legislative body he led would likely repeal the $280B CHIPS and Science Act funding bill if his party again won a majority in Congress. Shortly thereafter, he backpedaled, switching his wording choice from “repeal” to “streamline”. Which will it actually be? We’ll have to wait and see.
  • DJI and TikTok: Back in September, I mentioned that the US government was considering banning ongoing sales of DJI drones, citing the company’s China headquarters and claimed links to that country’s military and other government entities, resulting in US security concerns. Going forward, given Trump’s longstanding economic-and-other animosity toward China, it wouldn’t surprise me to see the proposed ban become a reality, which US-based drone competitors like Skydio would seemingly welcome (no matter that, to my earlier comments, China is already proactively reacting to the political pressure by cutting off battery shipments to Skydio). Conversely, although Trump championed a proposed ban of social media platform TikTok (a far more obvious security concern, IMHO) at the end of his first term, he’s now seemingly doing an about-face.
  • Etc.: What have I overlooked or left on the cutting room floor in the interest of reasonable wordcount constraint, folks? Sound off in the comments.

Ongoing unpredictable geopolitical tensions

This was the first topic on my 2024 look-ahead list. And I’m mentioning it here just to reassure you that it hasn’t fallen off my radar. But as for predictions? Aside from comments I’ve already made regarding semiconductor powerhouses Taiwan and S. Korea, along with up-and-comer China, I’m going to avoid prognosticating any further on Asia, or on Europe or the Middle East, for that matter. Instead, I’ll just reiterate and slightly update two comments I made a year ago:

I’m not going to attempt to hazard a guess as to how the situations in Europe, Asia, and the Middle East (and anywhere else where conflict might flare up between now and the end of 2024, for that matter) will play out in the year to come.

and, regarding the US election:

Who has ended up in power, not only in the presidency but also controlling both branches of Congress, and not only at the federal but also states’ levels, will heavily influence other issues, such as support (or not) for Ukraine, Taiwan, and Israel, and sanctions and other policies against Russia and China.

That’s all, at least on this topic, folks! To clarify, if necessary, please don’t incorrectly interpret my reduced comparative wordcount for this section versus the previous one as indicative of perceived lower importance in my mind, or heaven forbid, of “inappropriately acting as if my country and its citizens are the center of the world,” to requote an earlier…umm…requote. It’s just that a year and a month after the October 7, 2023 attack that initiated the latest iteration of armed conflict between Israel and Iran’s Hamas and Hezbollah proxies, nearly three years into Russia’s latest and most significant occupation of Ukraine sovereign territory, and a few weeks shy of three quarters of a century (as I write these words) since the Republic of China (ROC) fled the mainland for the island of Taiwan…I’ve given up trying to figure out the end game for any of this mess. And echoing the Serenity Prayer, I realize there’s only so much that I can personally do about it. Speaking of prayer, though, one thing I can do is to pray for peace. So, I shall, as ceaselessly as possible. I welcome any of you out there who are similarly inclined to join me.

AI: Will transformation counteract diminishing ROI?

In next month’s 2024 look-back summary, I plan to dive into detail about why I feel the bloom is starting to fade from the rose of AI. Briefly, the ever-increasing resource investments:

  • Processing hardware, both for training (in particular) and subsequent inference
  • Memory and mass storage
  • Interconnect and other system infrastructure
  • Money to pay for all this stuff
  • And energy and water (with associated environmental impacts) to power and keep cool all this stuff

are translating into diminishing capability, accuracy and other improvement “returns” on these investments, most recently noted in coverage appearing as I was preparing to write this section:

OpenAI’s next flagship model might not represent as big a leap forward as its predecessors, according to a new report in The Information. Employees who tested the new model, code-named Orion, reportedly found that even though its performance exceeds OpenAI’s existing models, there was less improvement than they’d seen in the jump from GPT-3 to GPT-4. In other words, the rate of improvement seems to be slowing down. In fact, Orion might not be reliably better than previous models in some areas, such as coding.

What can be done to re-boost the improvement trajectory seen initially? Thanks for asking:

  • Synthetic data: This one is, I’ll admit upfront, tricky. Conceptually, it would seem, the more training data you feed a model with, the more robust its resulting inference performance will be. And such an approach is particularly appealing when, for example, databases of real-life images of various objects are absent perspectives from certain vantage points, of certain colors and shapes, and captured under certain lighting conditions. Similarly, a training algorithm’s ability to access the entirety of the world’s literature is practically limited by copyright constraints. But that said, keep in mind that both the quantity and quality of training data are critical. A synthetic image of an object that has notable flaws compared to its real-life counterpart, for example, would be counterproductive. Same goes for the slang and gibberish (not to mention extremist language and other garbage) that pervades social media nowadays. And while on the one hand you want your training data set to be comprehensive (to prevent bias, for example), proportionality to real life is also important in guiding the model to the most likely subsequent inference interpretation of an input. After all, there’s a fundamental reason why pruning to induce sparsity is key to optimizing both model size and accuracy.
  • Multimodal models: Large language models (LLMs), which I rightly showcased at the very top of my 2023 retrospective list, are increasingly impressive in their capabilities. But they’re also, admittedly somewhat simplistically speaking, “one-trick ponies”. As their name implies, they’re language-based from both input (typed) and output (displayed) standpoints. If you want to speak to one, you need to first run the audio through a separate speech-to-text model (or standalone algorithm); the same goes for spitting a response back at you through a set of speakers. Analogies to images and video clips, and other sensory and output data, are apt. Granted, this approach is at least somewhat analogous to human beings’ cerebral cortexes, which are roughly subdivided into areas optimized for language, vision and other processing functions. Still, given that humans are fundamentally multisensory in both input and output schema, any AI model that undershoots this reality will be inherently limited. That’s where newer multimodal models come in. Vision language models (VLMs), for example, augment language with equally innate still and video image perception and generation capabilities. And large multimodal models (LMMs) are even more input- and output-diverse. Think of them as the deep learning analogies to the legacy sensor fusion techniques applied to traditional processing algorithms, which I ironically alluded to in my 2022 retrospective.
  • Continued (albeit modified) transition from the cloud to the edge: Reiterating what I initially wrote a couple of years ago:

    One common way to reduce a device bill-of-materials cost (BOM) is to offload as much of the total required processing, memory and other required resources to other connected devices. A “cloud” server is one common approach, but it has notable downsides that also beg for consideration from the device supplier and purchaser alike, such as:

    • Sending raw data up to the “cloud” for processing, with the server subsequently sending results back to the device, can involve substantial roundtrip latency. There’s a reason why self-driving vehicles do all their processing locally, for example!
    • Sending data up to the “cloud” can also engender privacy concerns, depending on exactly what that data is (consider a “baby cam”, for example) and how well (or not) the data is encrypted and otherwise protected from unintended access by others.
    • Taking latency to the extreme, if the “cloud” connection goes down, the device can turn into a paperweight, and
    • You’re trading a one-time fixed BOM cost for ongoing variable “cloud” costs, encompassing both server usage fees (think AWS, for example) and connectivity bandwidth expenses. Both of those costs also scale with both the number of customers and the per-customer amount of use (both of each device and cumulatively for all devices owned by each customer).

    Another popular BOM-slimming approach involves leveraging a wired or (more commonly) wireless tethered local device with abundant processing, storage, imaging, and other resources, such as a smartphone or tablet. This technique has the convenient advantage of employing a device already in the consumer’s possession, which he or she has already paid for, and for which any remaining “cloud” processing bandwidth involved in implementing the complete solution he or she will also bankroll. The latency is also notably less than with the pure “cloud” approach, privacy worries are lessened if not fully alleviated, and although the smartphone’s connection to the “cloud” may periodically go down, the connection between it and the device generally remains intact.

  For these and other reasons, in recent years I’ve seen a gradually accelerating transition from cloud- to edge-based processing architectures. That said, an in-parallel transition from traditionally coded algorithms to deep learning-based implementations has also occurred. And of late, this latter shift has complicated the former cloud-to-edge move, due specifically to the aforementioned high processing, memory, and mass storage requirements required to run inference on locally housed deep learning models. New system architecture variants to address both transitions’ merits are therefore gaining prominence. In one, the hybrid exemplified by Apple Intelligence along with Google’s Pixel phones’ conceptually equivalent approach, a base level of inference occurs locally, with cloud resources tapped as-needed for beefier-function requirements. And in the other, whereas “edge” might have previously meant a network of “smart” standalone edge cameras in a store, now it’s a network of less “smart” cameras all connected to an edge server at each store (still, versus a “cloud” server at retail headquarters).
  • Deep learning architectures beyond transformers (and deep learning models beyond LLMs and their variants): The transformer, initially developed for language translation, quickly expanded into broader natural language processing and now also finds use for audio, still and video images, and various other applications. Similarly, usage of the LLM and its previously mentioned multimodal relatives is pervasive nowadays. However, when Yann LeCun, one of the “godfathers” of AI (and chief scientist at Meta), suggested earlier this year that the next generation of researchers should look beyond today’s LLM approaches and their associated limitations, accompanied by Meta’s public rollout of one such next-generation approach, and then more recently stated that today’s AI is as “dumb as a cat”, it caught a lot of industry attention. A recently published arXiv paper goes into detail on transformers’ limitations, along with the inherent strengths and shortcomings, current status and evolution potential of other “novel, alternative potentially disruptive approaches”. And I also commend to your attention a recent episode of Nova on AI. The entire near-hour is fascinating, and it specifically showcases an emerging revolutionary architecture alternative called the liquid neural network.
  • New hardware approaches: Today’s various convolutional neural network (CNN), recurrent neural network (RNN) and transformer-based deep learning network architectures are well-matched to the GPU-derived massively parallel processing hardware architectures championed for training by companies such as NVIDIA, today’s dominant market leader (and also one of the leading suppliers for inference processing, although architectural diversity is more common here). That said, any one chip supplier can only satisfy a subset of total market demand, and the resultant de facto monopoly also leads to higher prices, all of which act to constrain AI’s evolutionary cadence. And that said, the emerging revolutionary network architectures and models I’ve just discussed, should they gain traction, will also open the doors to new hardware approaches, along with new companies supplying products that implement those approaches. To be clear, I don’t envision this emergent hardware, or the new network architectures and models that it supports, to become dominant in 2025 (or, realistically, even before the end of this decade). That said, I feel strongly that such revolutionary transformation is essential to, as I said earlier, re-boosting AI’s initial trajectory.

Merry Christmas (and broader happy holidays) to all, and to all a good night

I wrote the following words a year ago and couldn’t think of anything better (or even different) to say a year later, given my apparent constancy of emotion, thought and resultant output. So, with upfront apologies for the repetition, a reflection of my ongoing sentiment, not laziness:

I’ll close with a thank-you to all of you for your encouragement, candid feedback and other manifestations of support again this year, which have enabled me to once again derive an honest income from one of the most enjoyable hobbies I could imagine: playing with and writing about various tech “toys” and the foundation technologies on which they’re based. I hope that the end of 2024 finds you and yours in good health and happiness, and I wish you even more abundance in all its myriad forms in the year to come. Let there be Peace on Earth.

 Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.



The post 2025: A technology forecast for the year ahead appeared first on EDN.

Silicon PIC market growing at 45% CAGR from $95m in 2023 to $863m in 2029

Semiconductor today - Tue, 12/17/2024 - 14:20
Silicon photonics continues to evolve rapidly, with diverse applications signaling significant opportunities ahead, notes market analyst firm Yole Group. Specifically, the market for silicon photonic integrated circuit (PIC) die is estimated to be increasing at a compound annual growth rate (CAGR) of 45% from 2023 to at least $863m by 2029, notes the firm in its annual report ‘Silicon Photonics 2024 – Focus on SOI [silicon-on-insulator], SiN [silicon nitride], and LNOI [lithium niobate-on-insulator] platforms’, which this year explores the photonics landscape, emphasizing materials for PICs, optical interconnects, and other applications...

Hall Effect Definition, Principle, Formula & Applications

ELE Times - Tue, 12/17/2024 - 13:30

The Hall Effect is a physical phenomenon discovered by Edwin Hall in 1879. It describes the generation of a voltage difference (called the Hall voltage) across an electrical conductor when a magnetic field is applied perpendicular to the flow of electric current.

Hall Effect Principle

The Hall effect principle states that when a current-carrying conductor or semiconductor is placed in a perpendicular magnetic field, a voltage can be measured at a right angle to the current path.

How it Works

1. When a current-carrying conductor or semiconductor is placed in a magnetic field, the magnetic field exerts a force on the moving charge carriers (electrons or holes).
2. This force (called the Lorentz force) causes the charge carriers to accumulate on one side of the conductor, creating a voltage difference across the conductor.
3. This voltage is known as the Hall voltage, and its presence is the essence of the Hall Effect.
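The three steps above can be turned into a short derivation of the Hall voltage. At equilibrium, the magnetic (Lorentz) force on a carrier of charge \( q \) drifting at velocity \( v_d \) is balanced by the force from the built-up Hall field \( E_H \); combining this with the current through a plate of width \( w \) and thickness \( d \) (the width \( w \) is introduced here only for the derivation) gives:

\[
qE_H = qv_d B \;\Rightarrow\; E_H = v_d B,
\qquad
I = nqv_d \, (wd) \;\Rightarrow\; v_d = \frac{I}{nqwd}
\]

\[
V_H = E_H \, w = v_d B w = \frac{IB}{nqd}
\]

Note that the plate width \( w \) cancels, which is why only the thickness \( d \) appears in the final formula.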

Key Components

– Current: Flowing through the conductor.
– Magnetic Field: Applied perpendicularly to the current.
– Hall Voltage: The measurable voltage generated across the conductor.

Applications of the Hall Effect

1. Magnetic Field Sensing:
– Hall Effect sensors detect the presence, strength, and direction of a magnetic field.
– Used in position sensing, speed detection (e.g., automotive wheel speed sensors), and current sensing.

2. Proximity Sensors:
– Hall sensors can detect the approach of magnetic objects without physical contact.

3. Current Measurement:
– Hall Effect sensors are used to measure current in conductors without interrupting the circuit.

4. Automotive Applications:
– Found in crankshaft position sensors, ABS braking systems, and electric power steering systems.

5. Brushless DC Motors:
– Hall sensors detect rotor position, enabling precise control of motor operation.

6. Semiconductor Applications:
– Helps in understanding properties of materials like charge carrier type (electrons/holes), carrier concentration, and mobility.

Hall Effect Theory and Formula

When a conductive plate is connected to a circuit powered by a battery, an electric current begins to flow through it. The charge carriers, such as electrons in a conductor, initially follow a straight path from one end of the plate to the other. This movement of charge carriers produces a magnetic field around them.

If an external magnet is placed near the conductive plate, its magnetic field interacts with the field created by the charge carriers, disturbing the straight path of their motion. The force responsible for altering the direction of the charge carriers is called the Lorentz force.

As a result of this force, the negatively charged electrons are deflected toward one side of the plate, while the positively charged holes move toward the opposite side. This separation of charges generates a potential difference between the two sides of the plate, which is known as the Hall voltage (\( V_H \)). This voltage can be measured using a meter.

The formula for Hall voltage is expressed as:

\[
V_H = \frac{IB}{nqd}
\]

Where:
– I is the current flowing through the sensor,
– B is the strength of the external magnetic field,
– n is the number of charge carriers per unit volume,
– q is the charge of each carrier, and
– d is the thickness of the conductive plate (sensor).

This principle forms the basis of the Hall Effect, widely used for measuring magnetic fields, current, and position in various applications.
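As a quick numerical sketch of the formula above (illustrative values only: a 0.1 mm copper plate carrying 1 A in a 1 T field, with a typical free-electron density for copper), the Hall voltage works out to well under a microvolt, which is why practical Hall sensors use thin semiconductors with far lower carrier densities:

```python
def hall_voltage(current_a, field_t, carrier_density_m3, charge_c, thickness_m):
    """Hall voltage V_H = I*B / (n*q*d), in volts."""
    return (current_a * field_t) / (carrier_density_m3 * charge_c * thickness_m)

ELEMENTARY_CHARGE = 1.602e-19  # C
COPPER_N = 8.5e28              # free electrons per m^3, typical for copper

v_h = hall_voltage(1.0, 1.0, COPPER_N, ELEMENTARY_CHARGE, 1e-4)
print(f"Hall voltage: {v_h:.3e} V")  # sub-microvolt for a metal plate
```

A semiconductor plate with a carrier density many orders of magnitude lower would, by the same formula, yield a proportionally larger and more easily measurable Hall voltage.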

Summary
The Hall Effect is the basis of many modern magnetic field sensors and current-measuring devices. It is crucial in industrial, automotive, and consumer electronics applications due to its accuracy, reliability, and non-contact sensing capabilities.

The post Hall Effect Definition, Principle, Formula & Applications appeared first on ELE Times.

IoT Sensors Definition, Types, Examples & Applications

ELE Times - Tue, 12/17/2024 - 13:20

An IoT sensor is a device that collects real-world data (such as temperature, motion, light, humidity, or pressure) and transmits it over the internet or a network for further processing and analysis. These sensors are a core component of the Internet of Things (IoT) ecosystem, enabling devices to communicate, monitor, and interact with their environment.

How IoT Sensors Work

IoT sensors operate as part of the Internet of Things (IoT) ecosystem, where they collect, process, and transmit real-world data to enable monitoring, analysis, and automation. Here is a step-by-step breakdown of how IoT sensors work:

1. Data Collection
IoT sensors detect and measure specific physical or environmental parameters, such as temperature, light, motion, humidity, pressure, or sound.
Sensors convert these real-world measurements into electrical signals.
Example: A temperature sensor measures the surrounding temperature and generates an electrical signal proportional to it.

2. Signal Conversion and Processing
The raw data collected by the sensor is typically analog. A microcontroller or onboard circuitry processes and converts this analog data into a digital signal that can be understood by computers or cloud systems.
Many IoT sensors include built-in signal conditioning, data filtering, and pre-processing to ensure the data is accurate and clean.

3. Communication and Transmission
The processed data is transmitted to an IoT gateway, server, or cloud platform using wireless communication protocols such as:
– Wi-Fi
– Bluetooth
– Zigbee
– LoRa (Low Power Long Range)
– Cellular Networks (4G/5G/NB-IoT)
– RFID (Radio Frequency Identification)

The choice of communication protocol depends on the application’s range, power requirements, and data transmission needs.

4. Data Storage and Cloud Integration
The transmitted data is sent to an IoT platform or cloud storage for further processing.
Cloud-based systems store and analyze the data, enabling real-time access from anywhere.

5. Data Analysis and Decision-Making
The collected sensor data is analyzed using advanced tools like data analytics, artificial intelligence (AI), or machine learning (ML) algorithms. Insights are generated to trigger actions, automate processes, or provide reports and alerts.
Example: If a motion sensor detects activity in a secure area, it sends an alert to a security system or triggers a camera to record.

6. Feedback and Action
Based on the processed data and analysis, actions can be automated. These actions may include:
– Triggering an actuator (e.g., turning on a fan if the temperature rises too high).
– Sending alerts or notifications to a user’s device.
– Adjusting settings for optimized performance.
– Example: In a smart irrigation system, a soil moisture sensor can trigger water sprinklers when the soil is too dry.
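The six-step pipeline above (collect, digitize, pre-process, decide, act) can be sketched in a few lines, using the smart-irrigation example. The sensor read and the function names here are hypothetical stand-ins, not a real driver or platform API:

```python
from statistics import mean

def read_soil_moisture_raw():
    """Stand-in for an analog sensor read; returns a value in 0.0-1.0."""
    return 0.12  # pretend the soil is dry

def digitize(raw, bits=10):
    """Step 2: quantize the analog value into an integer ADC code."""
    return round(raw * (2**bits - 1))

def smooth(samples):
    """Step 2 (cont.): simple pre-processing to filter sensor noise."""
    return mean(samples)

def decide(moisture_code, threshold=200):
    """Step 6: trigger the actuator when the soil is too dry."""
    return "SPRINKLER_ON" if moisture_code < threshold else "SPRINKLER_OFF"

# Steps 1-2: collect several readings and digitize them
samples = [digitize(read_soil_moisture_raw()) for _ in range(5)]
# Steps 5-6: analyze and act (transmission/cloud steps omitted in this sketch)
action = decide(smooth(samples))
print(action)  # dry reading -> "SPRINKLER_ON"
```

In a real deployment, steps 3–4 (transmission and cloud storage) would sit between the smoothing and the decision, with the threshold logic often running on the platform side rather than on the device.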

Types of IoT Sensors

1. Temperature Sensors: Measure temperature changes (e.g., in HVAC systems, cold chain monitoring).
2. Proximity Sensors: Detect the presence or distance of an object (e.g., in parking systems or smartphones).
3. Motion Sensors: Detect movement (e.g., in security systems or smart lighting).
4. Humidity Sensors: Measure moisture in the air (e.g., in agriculture or industrial environments).
5. Pressure Sensors: Monitor pressure in gases or liquids (e.g., for weather forecasting or automotive systems).
6. Light Sensors: Measure light intensity (e.g., in smart lighting or camera systems).
7. Gas Sensors: Detect the presence of gases (e.g., for air quality monitoring).
8. Vibration Sensors: Measure vibrations in machinery (e.g., for predictive maintenance).
9. Sound Sensors: Capture sound levels (e.g., in noise monitoring systems).

Applications of IoT Sensors

IoT sensors have a wide range of applications across industries, enabling automation, monitoring, and real-time data-driven decision-making. Below are key areas where IoT sensors play a critical role:

Smart Homes: Used in thermostats, security systems, smart lighting, and appliances.
Healthcare: Monitor vital signs like heart rate, oxygen levels, or glucose levels.
Industrial IoT (IIoT): Measure machine performance, detect faults, and improve efficiency.
Agriculture: Monitor soil moisture, humidity, and weather conditions for optimized farming.
Smart Cities: Enable traffic monitoring, waste management, and energy-efficient infrastructure.
Transportation and Logistics: Track vehicles, cargo conditions, and fuel levels.
Environmental Monitoring: Detect pollution, temperature, and weather conditions.

Key Features of IoT Sensors
– Low Power Consumption: Designed to work efficiently for extended periods.
– Wireless Connectivity: Support protocols like Wi-Fi, Zigbee, Bluetooth, and NB-IoT.
– Compact and Scalable: Small in size and easy to integrate into systems.
– Real-Time Monitoring: Provide instant data feedback for faster decision-making.

Summary
An IoT sensor acts as the “eyes and ears” of an IoT system, enabling devices to collect data from the physical world and transmit it for analysis. This data-driven approach powers smart solutions across industries, improving efficiency, automation, and decision-making.

The post IoT Sensors Definition, Types, Examples & Applications appeared first on ELE Times.

Rio Tinto progresses development of gallium extraction process in Quebec

Semiconductor today - Tue, 12/17/2024 - 12:01
As part of an R&D program, global mining group Rio Tinto is assessing the potential for extracting and valorizing the gallium that is present in bauxite processed in its alumina refinery in Saguenay–Lac-Saint-Jean (the only one in Canada)...

Smart plug went bye-bye.

Reddit:Electronics - Tue, 12/17/2024 - 05:54

Looks like the fuse burnt and spit out the board's protective epoxy or flux near the AC voltage terminals.

submitted by /u/ExBx

To press ON or hold OFF? This does both for AC voltages

EDN Network - Mon, 12/16/2024 - 16:47

On October 14, 2024, a design idea (DI) by Nick Cornford entitled “To press ON or hold OFF? This does both” was published. It is a very interesting DI for DC voltages, but what about AC voltages?

Wow the engineering world with your unique design: Design Ideas Submission Guide

After reading this DI, I decided to design a circuit with similar operation for AC voltages, since many of our gadgets are connected to 110V/230V AC mains. In Figure 1’s circuit, if the single push button SW1 is pressed momentarily once, the mains AC voltage is extended to the output where a gadget is connected. If push button SW1 is held for a long time, 4 to 5 seconds, power is disconnected. In my opinion, a shiny modern push button looks more attractive and elegant than a toggle switch.

Figure 1 If you press SW1 once, the AC output terminal J2 gets AC supply. If you hold SW1 for a long time, i.e., 4 to 5 seconds, the path from the power supply to terminal J2 gets disconnected. One single pushbutton provides both ON and OFF functions for AC voltage.

In this circuit, the mains AC is fed to the output terminal through triac U5, which should be selected according to the voltage and current requirements. When you press SW1 once momentarily, it triggers monostable U2A. Its rising-edge output pulse sets flip-flop U4A, Q2 turns ON, and current flows through the input LED of optotriac U1. U1 conducts and hence triac U5 also conducts, extending the mains voltage to the output terminal.

If you press SW1 for a long time, i.e., 4 to 5 seconds (this time can be adjusted by changing R4 and R5), capacitor C1 charges. When its voltage reaches the reference set by the R4/R5 divider, the output of comparator U3A goes HIGH, resetting flip-flop U4A. The flip-flop output goes LOW, switching Q2 OFF. At this point, no current flows through the LED of U1, so U1 and U5 turn OFF and the mains voltage is disconnected from the output.
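The hold-off delay described above follows the standard RC charging law, v(t) = VDD·(1 − e^(−t/RC)), solved for the comparator threshold. A rough estimate is sketched below; all component values are hypothetical, chosen only to show the arithmetic, not taken from Figure 1:

```python
import math

def hold_off_time(r_charge_ohms, c1_farads, v_dd, v_threshold):
    """Time for C1 to charge from 0 V to the comparator threshold.

    Standard RC step response: v(t) = VDD * (1 - exp(-t/RC)),
    solved for v(t) = v_threshold.
    """
    return r_charge_ohms * c1_farads * math.log(v_dd / (v_dd - v_threshold))

# Hypothetical values: 1 Mohm charging resistance, 10 uF for C1, 5 V supply,
# threshold at VDD/2 (equal divider resistors) gives RC*ln(2), about 6.9 s.
t_off = hold_off_time(1e6, 10e-6, 5.0, 2.5)
```

Raising the divider threshold toward VDD lengthens the required hold time rapidly, which is why the article notes the delay is tuned via R4 and R5.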

When you press SW1, C1 charges. When SW1 is open, there must be a path to discharge C1 for proper operation of the next cycle; Q1 provides it. With SW1 open, current flows from C1 through the emitter-base junction of Q1 and R1, so Q1 saturates and discharges C1. When SW1 is pressed, the voltage applied to the base of Q1 via R7 turns Q1 off, allowing C1 to charge. Being built from CMOS ICs, the entire circuit draws very little current.

VDD here is 5 VDC. The VDD and VSS pins of U2, U3, and U4 are not shown in the schematic; they must be wired to the VDD and VSS rails shown. If you want a simpler circuit, the U1/U5 combination can be replaced with a simple relay.

Jayapal Ramalingam has over three decades of experience in designing electronics systems for power & process industries and is presently a freelance automation consultant.

Related Content


The post To press ON or hold OFF? This does both for AC voltages appeared first on EDN.

AI designs and the advent of XPUs

EDN Network - Mon, 12/16/2024 - 15:48

At a time when traditional approaches such as Moore’s Law and process scaling are struggling to keep up with performance demands, XPUs emerge as a viable candidate for artificial intelligence (AI) and high-performance computing (HPC) applications.

But what’s an XPU? The broad consensus on its composition calls it the stitching of CPU, GPU, and memory dies on a single package. Here, X stands for application-specific units critical for AI infrastructure.

Figure 1 An XPU integrates CPU and GPU in a single package to better serve AI and HPC workloads. Source: Broadcom

An XPU comprises four layers: compute, memory, network I/O, and reliable packaging technology. Industry watchers call the XPU the world’s largest processor, but it must be designed with the right ratio of accelerator, memory, and I/O bandwidth, and it comes with the imperative of direct or indirect memory ownership.

Below is an XPU case study that demonstrates sophisticated integration of compute, memory, and I/O capabilities.

What’s 3.5D and F2F?

2.5D integration, which places multiple chiplets and high-bandwidth memory (HBM) modules on an interposer, has served AI workloads well so far. However, increasingly complex LLMs and their training necessitate 3D silicon stacking for more powerful silicon devices. Next, 3.5D integration, which combines 3D silicon stacking with 2.5D packaging, takes silicon devices to the next level with the advent of XPUs.

That’s what Broadcom’s XDSiP claims to achieve by integrating more than 6000 mm2 of silicon and up to 12 HBM stacks in a single package. And it does that by developing a face-to-face (F2F) device to accomplish significant improvements in interconnect density and power efficiency compared to the face-to-back (F2B) approach.

While F2B packaging is a 3D integration technique that connects the top metal of one die to the backside of another die, an F2F connection joins two dies at their top-level metal layers without a thinning step. In other words, F2F stacking directly connects the top metal layers of the top and bottom dies, providing a dense, reliable connection with minimal electrical interference and exceptional mechanical strength.

Figure 2 The F2F XPU integrates four compute dies with six HBM dies using 3D die stacking for power, clock, and signal interconnects. Source: Broadcom

Broadcom’s F2F 3.5D XPU integrates four compute dies, one I/O die, and six HBM modules while utilizing TSMC’s chip-on-wafer-on-substrate (CoWoS) advanced packaging technology. It claims to minimize latency between compute, memory, and I/O components within the 3D stack while achieving a 7x increase in signal density between stacked dies compared to F2B technology.

“Advanced packaging is critical for next-generation XPU clusters as we hit the limits of Moore’s Law,” said Frank Ostojic, senior VP and GM of the ASIC Products Division at Broadcom. “By stacking chip components vertically, Broadcom’s 3.5D platform enables chip designers to pair the right fabrication processes for each component while shrinking the interposer and package size, leading to significant improvements in performance, efficiency, and cost.”

The XPU nomenclature

Intel’s ambitious take on XPUs hasn’t gone far: its Falcon Shores platform is no longer proceeding. AMD’s CPU-GPU combo, on the other hand, has been making inroads over the past couple of years, though AMD calls it an accelerated processing unit (APU). The naming partly reflects industry nomenclature in which AI-specific XPUs are called custom AI accelerators; in other words, an XPU is the custom chip that provides the processing power to drive AI infrastructure.

Figure 3 MI300A integrates CPU and GPU cores on a single package to accelerate the training of the latest AI models. Source: AMD

AMD’s MI300A combines the company’s CDNA 3 GPU cores and x86-based Zen 4 CPU cores with 128 GB of HBM3 memory to serve HPC and AI workloads. El Capitan, a supercomputer housed at Lawrence Livermore National Laboratory, is powered by AMD’s MI300A APUs and is expected to deliver more than two exaflops of double-precision performance when fully deployed.

The AI infrastructure increasingly demands specialized compute accelerators interconnected to form massive clusters. Here, while GPUs have become the de facto hardware, XPUs seem to represent another viable approach for heavy lifting in AI applications.

XPUs are here, and now it’s time for software to catch up and effectively use this brand-new processing venue for AI workloads.

Related Content


The post AI designs and the advent of XPUs appeared first on EDN.

Wise-integration signs Astute as EMEA distributor

Semiconductor today - Mon, 12/16/2024 - 13:45
Fabless company Wise-integration of Hyeres, France — which was spun off from CEA-Leti in 2020 and designs and develops digital-control of gallium nitride (GaN) and GaN integrated circuits for power conversion — has agreed a strategic distribution partnership covering Europe, the Middle East and Africa (EMEA) with global electronics distributor and supply chain solutions provider Astute Group of Stevenage, UK...

Infineon and EVE Energy collaborate to enable the next generation of battery management systems

ELE Times - Mon, 12/16/2024 - 11:09

Infineon Technologies AG and Eve Energy Co., Ltd. (EVE Energy), a manufacturer of lithium batteries, have signed a memorandum of understanding (MoU). The two companies aim to enable comprehensive battery management system (BMS) solutions for the automotive market. As part of the MoU, Infineon will supply a complete chipset, including microcontroller units, balancing and monitoring ICs, power management ICs, drivers, MOSFETs, controller area network (CAN) devices, and sensor products. Equipped with these solutions, EVE Energy’s battery management systems can provide high safety, high reliability, and optimized cost. They also enable more accurate monitoring, protection, and optimization of electric-vehicle battery performance and improve driving experience and energy efficiency.

“The rapid growth in electrification has driven the need for advanced battery solutions. The partnership between Infineon’s advanced battery management ICs and EVE Energy`s advanced battery technologies will pave the way for the next generation of intelligent battery packs,” said Andreas Doll, Senior Vice President and General Manager Smart Power at Infineon. “Infineon offers a comprehensive and advanced system-level solution that meets the diverse needs of customers. We believe that further cooperation between the two sides will foster positive interaction and collaborative development at various levels.”

“EVE Energy has experienced rapid growth in the field of battery management systems in recent years, and we are determined to continue this development. Therefore, we highly value the partnership with Infineon,” said Liu Jianhua, co-founder and president of EVE Energy. “Our goal is to jointly introduce more advanced solutions to the market that meet customers’ needs and drive the development of reliable and efficient systems.”

BMS solutions from Infineon

Electrification and battery management systems are key focus areas for Infineon, which has a complete portfolio of wired and wireless BMS solutions. The wired BMS solution is based on AURIX microcontrollers, PMICs, and balancing and monitoring ICs, among other products. The TLE9012DQU and TLE9015DQU provide an optimized solution for battery cell monitoring and balancing, combining excellent measurement performance with the highest quality standards and application robustness to enable lean and cost-efficient designs. The ICs are suitable for a wide range of industrial, consumer, and automotive battery applications and fulfill safety requirements up to ASIL-D. The wireless BMS solution, on the other hand, utilizes Infineon’s latest low-power CWY89829 chip to create an interconnected mesh network that ensures maximum node connectivity while maintaining sensor efficiency. In addition, Infineon offers reliable LV MOSFET and EiceDRIVER solutions, including the 2ED2410 and 2ED4820, designed for future applications such as the electrification of 24V/48V BMS main switches.

The post Infineon and EVE Energy collaborate to enable the next generation of battery management systems appeared first on ELE Times.

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 12/14/2024 - 18:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator
