Feed aggregator
Part 1: A beginner’s guide to the power of IQ data and beauty of negative frequencies
Editor’s Note: This is a two-part series where DI authors Damian and Phoenix Bonicatto explore IQ signal representation and negative frequencies to ease the understanding and development of SDRs.
Part 1 explains the commonly used SDR IQ signal representation and negative frequencies without the complexity of math.
Part 2 (to be published) presents a device that allows you to play with and display live SDR signal spectrums with negative frequencies.
Introduction
Software-defined radio (SDR) firmware makes extensive use of the I/Q representation of received and transmitted signals. This representation can greatly simplify manipulation of the incoming signal. I/Q data also allows us to work with negative frequencies. My goal here is to explain the I/Q representation and negative frequencies without the complexity usually invoked by obscure terms and non-intuitive mathematics. I will also present a device that you can build to play with and display live spectrums with negative frequencies. So, let’s get started.
Wow the engineering world with your unique design: Design Ideas Submission Guide
I/Q and quadrature concepts
What is I/Q data? “I” is short for in-phase and “Q” is short for quadrature. It’s the first set of SDR terms that sounds mysterious and tends to put people off—let’s just call them I and Q. Simply, if you have a waveform, like you see on an oscilloscope, you can break it into two sinusoidal components—one based on a sine, and another based on a cosine. This is done by using the trig “angle sum identity”. The I and Q are the amplitudes of these components, so our signal is now represented as:

A*cos(ωt + φ) = I*cos(ωt) − Q*sin(ωt)

Where “A” is the original signal amplitude and:

I = A*cos(φ), Q = A*sin(φ)

We have just created the in-phase signal, I*cos(ωt), and the quadrature signal, Q*sin(ωt). Just to add to the confusion, when we deal with the in-phase and quadrature signals together it is referred to as “quadrature signaling” …sigh.
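If you want to sanity-check this numerically, here is a minimal Python sketch (my own illustration, not from the article, using the sign convention above and arbitrary values for A and φ):

```python
import numpy as np

# Verify the angle-sum decomposition A*cos(wt + phi) = I*cos(wt) - Q*sin(wt)
A, phi = 2.0, 0.7                  # arbitrary amplitude and phase
w = 2 * np.pi * 1_000.0            # 1 kHz tone
t = np.linspace(0.0, 1e-3, 1_000)  # one millisecond of "time"

I = A * np.cos(phi)                # in-phase amplitude
Q = A * np.sin(phi)                # quadrature amplitude

original = A * np.cos(w * t + phi)
rebuilt = I * np.cos(w * t) - Q * np.sin(w * t)

print(np.allclose(original, rebuilt))  # True: the two forms are identical
```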
[Note: In SDR projects, IQ data (or I/Q data) generally refers to the digital data pairs produced at each sample interval.]
Most signal processing textbooks work with exponentials to describe and manipulate signals. For example, a transmitted signal is always “real” and is typically shown as something like:

s(t) = Re{A*e^(j(ωt + φ))}
This is another formula that creates obfuscation and puts off people just starting out in signal processing and SDR. I will say that exponential notation makes for cleaner mathematical manipulation, but my preference is to use the trig representation as I can see the signal in my mind’s eye as I manipulate the equations. Also, explaining your design to people who are not up on signal processing is much easier when using things everyone learned in high school. Note that, although most SDR simulation tools like MATLAB use the exponential for signal processing work, when it comes down to writing C code in an MCU, the trig representation is normally used.
Without going into it, this exponential representation is based on Euler’s formula, which is related to the beautiful and cleverly derived Euler’s equation.
Now, you may wonder why we would go through the trouble to convert the data to this quadrature form and what this form of the signal is good for. In receivers, for example, just using the incoming signal and mixing it with another frequency and extracting the data has worked since the early days of radio. To answer this, let’s look at a couple of examples.
Example of the benefits of quadrature form
First, when doing simple mixing of an incoming signal you get, as an output, two signals—the sum of the incoming signal and the mix frequency, and the difference of these two frequencies. The following equation demonstrates this by use of the trig product identity:

cos(ω1t)*cos(ωmt) = ½*cos((ω1 − ωm)t) + ½*cos((ω1 + ωm)t)
To continue in your receiver, you typically need to filter one of these out, usually the higher frequency. (The unwanted resultant frequency is often called the image frequency, which is removed by an image filter.) In a digital receiver this filter can take some valuable resources (cycles and memory). Using the I/Q form above, a mix can be created that removes either just the sum or just the difference without filtering.
You can see how this works in Figure 1. First, define the mix signal in an I/Q format:
Mix Signal I part = cos(ωmt)
Mix Signal Q part = sin(ωmt)
Figure 1 Quadrature (complex-to-complex) mix returning the lower frequency.
(There is more to this, but this mix architecture is the basic idea of this technique.)
You can see that only the lower frequency is output from the mixer. If you want the higher frequency and to remove the lower frequency, just change where the minus sign is in the final additions as shown in Figure 2.
Figure 2 Quadrature mix returning the higher frequency.
This quadrature, or complex-to-complex, mixing is a very powerful technique in SDR designs.
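Here is a minimal Python/NumPy sketch of the same idea (my own illustration with arbitrary example frequencies, not code from the article); the single complex multiply is just shorthand for the four real multiplies and final additions drawn in Figure 1:

```python
import numpy as np

# Complex-to-complex mix: multiplying I/Q signals shifts frequency without
# creating an image. Conjugating the mix (flipping the sign of its Q part)
# selects the sum frequency instead of the difference, as in Figure 2.
fs = 10_000                                # sample rate, Hz (arbitrary)
t = np.arange(2048) / fs
f_sig, f_mix = 1200.0, 1000.0              # hypothetical input and mix tones

sig = np.exp(1j * 2 * np.pi * f_sig * t)   # input as I + jQ
mix = np.exp(-1j * 2 * np.pi * f_mix * t)  # mix signal, sign chosen for "difference"

out = sig * mix                            # the quadrature mixer
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), 1 / fs))
spectrum = np.abs(np.fft.fftshift(np.fft.fft(out)))
print(f"{freqs[np.argmax(spectrum)]:.0f} Hz")  # ~200 Hz: only the difference survives
```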
Next, let’s look at how I/Q data can allow us to play with negative frequencies.
When you perform a classical (non-quadrature) mix, any result that you get cannot go below a frequency of zero. The result will be two new frequencies: the sum of the input frequencies and the absolute value of the difference. This absolute value means the output frequencies cannot go negative. In a quadrature mixer the frequency is not constrained with an absolute value function, and you can get negative frequencies.
Let’s think about what this means if you are sweeping one of the inputs. In the classical mixer as the two input frequencies approach each other, the difference frequency will approach 0 Hz and then start to go back up in frequency. In a quadrature mixer the difference frequency will go right through 0 Hz and continue getting more and more negative.
One implication of this is that, in a sampled system, the usable bandwidth with real samples is the sample rate divided by 2 (the Nyquist limit). When using a quadrature representation, you have a working bandwidth that is twice as large, spanning −fs/2 to +fs/2, where fs is the sample rate. This is especially handy when you have a system where you want to deal with a large range of frequencies at a time. You can move any of the frequencies to baseband; the higher frequencies will stay in their relative positions in the positive frequencies; and the lower frequencies will stay in their relative positions in the negative frequencies. You can slide up and down, by mixing, without image filters or corrupting the spectrum with images. This is another very powerful technique in SDR designs.
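A short sketch (again my own, with arbitrary sample rate and tone frequencies) makes the fold-back versus pass-through behavior concrete: a real-sampled tone cannot be told apart from its negative twin, while the I/Q version keeps its sign as it slides through 0 Hz:

```python
import numpy as np

# A tone sweeping through 0 Hz: real samples fold back at DC,
# quadrature (I/Q) samples pass straight through to negative frequencies.
fs, n = 8_000, 4096
t = np.arange(n) / fs
bins = np.fft.fftshift(np.fft.fftfreq(n, 1 / fs))

for f in (300.0, 100.0, -100.0, -300.0):
    iq = np.exp(1j * 2 * np.pi * f * t)      # complex (I/Q) tone
    real_sig = np.cos(2 * np.pi * f * t)     # real-only version of the same tone
    f_iq = bins[np.argmax(np.abs(np.fft.fftshift(np.fft.fft(iq))))]
    f_re = abs(bins[np.argmax(np.abs(np.fft.fftshift(np.fft.fft(real_sig))))])
    print(f"tone {f:6.0f} Hz -> I/Q sees {f_iq:6.0f} Hz, real sees {f_re:4.0f} Hz")
```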
A tool for exploring IQ data
This positive and negative spectrum is very interesting, but unfortunately the basic FFT on your oscilloscope probably won’t display it; it typically displays only positive frequencies. Vector network analyzers (VNAs) can display negative frequencies, but not all labs have one. You can play around in tools like MATLAB, but I usually like something a little closer to actual hardware and more real-time to get a better feel for the concept. A signal generator and a scope always help me, but I already said a scope does not display negative frequencies. Well, the tool presented in Part 2 will allow us to play with I/Q data, negative frequencies, and mixing.
[Editor’s Note: An Arduino-Nano-based device will be presented in Part 2 that can generate IQ samples based upon user frequency, amplitude, and phase settings. This generated data will then display the spectrum showing both positive and negative frequencies. Stay tuned for more!]
Damian Bonicatto is a consulting engineer with decades of experience in embedded hardware, firmware, and system design. He holds over 30 patents.
Phoenix Bonicatto is a freelance writer.
Related Content
- Exploring software-defined radio (without the annoying RF) – Part 1
- Exploring software-defined radio (without the annoying RF)—Part 2
- SDR Basics Part 3: Transmitters
- The virtual reality of 5G – Part 2 (measurements)
- Ultra-wideband I/Q demodulator improves receiver performance
High-breakdown-voltage P-GaN gate HEMTs with threshold voltage of 7.1V
Voskhod 6n1p Glow
Yep, not the best measurement-wise, but for my audio application it sounds pretty decent. And it’s one of the prettiest heaters there is for ECC88/6N1P.
Sivers signs CHIPS Act contracts with Northeast Microelectronics Coalition Hub
Micro-LED display market to grow to 34.6 million units by 2031
The 2025 CES: Safety, Longevity and Interoperability Remain a Mess
Once again this year, I’m thankfully reporting on CES (formerly also known by its de-acronym’d “Consumer Electronics Show” moniker, although the longer-winded version is apparently no more) from the remote comfort of my home office. There are admittedly worse places to visit than Las Vegas, especially given its newfound coolness courtesy of the Sphere (which I sadly have yet to experience personally):
That said, given the option to remain here, I’ll take it any day, realizing as I say this that it precludes on-camera cameos…which, come to think of it, is a plus for both viewers and myself!
(great job, Aalyia!)
Anyhoo, I could spend the next few thousand words (I’m currently guesstimating, based on repeated past experience, which in some years even necessitated a multi-part writeup series), telling you about all the new and not-new-but-maturing products and technologies showcased at the show. I’ll still do some of that, in part as case study examples of bigger-picture themes. But, to the title of this writeup, this year I wanted to start by stepping back and discussing three overriding themes that tainted (at least in my mind) all the announcements.
Safety
(Who among you is, like me, old enough to recognize this image’s source without cheating by clicking through first?)
A decade-plus ago, I told you the tale of my remote residence-located Linksys router that had become malware-infected:
Ever since then, I’ve made it a point to collect news tidbits on vulnerabilities and the attack vectors that subsequently exploit them, along with manufacturers’ subpar compromise responses. It likely won’t surprise you to learn that the rate of stories I’ve accumulated has only accelerated over time, as well as broadened beyond routers to encompass other LAN and WAN-connected products. I showcased some of them in two-part coverage published five years ago, for example, and disassembled another (a “cloud”-connected NAS) just a few months back.
The insecure-software situation has become so rampant, in fact, that the U.S. Federal Communications Commission (FCC) just unveiled a new program and associated label, the U.S. Cyber Trust Mark, intended to (as TechCrunch describes it) “help consumers make more informed decisions about the cybersecurity of the internet-connected products they bring into their homes.” Here’s more, from Slashdot’s pickup of the news, specifically referencing BleepingComputer’s analysis:
It’s designed for consumer smart devices, such as home security cameras, TVs, internet-connected appliances, fitness trackers, climate control systems, and baby monitors, and it signals that the internet-connected device comes with a set of security features approved by the National Institute of Standards and Technology (NIST). Vendors will label their products with the Cyber Trust Mark logo if they meet NIST cybersecurity criteria. These criteria include using unique and strong default passwords, software updates, data protection, and incident detection capabilities. Consumers can scan the QR code included next to the Cyber Trust Mark labels for additional security information, such as instructions on changing the default password, steps for securely configuring the device, details on automatic updates (including how to access them if they are not automatic), the product’s minimum support period, and a notification if the manufacturer does not offer updates for the device.
Candidly, I’m skeptical that this program will be successful, even if it survives the upcoming Presidential administration transition (speaking of which: looming trade war fears weighed heavily on folks’ minds at the show) and in spite of my admiration for its honorable intention. As reader “Thinking_J” pointed out in response to my recent teardown of a Bluetooth receiver that has undergone at least one mid-life internal-circuits switcheroo, the FCC essentially operates on the “honor system” in this and similar regards after manufacturers gain initial certification.
One of the root causes of such vulnerabilities, IMHO, is any reliance on open-source code, no matter that doing so may ironically also improve initial software quality. Requoting myself:
Open-source software has some compelling selling points. For one thing, it’s free, and the many thousands of developer eyeballs peering over it generally result in robust code. When a vulnerability is discovered, those same developers quickly fix it. But among those thousands of eyeballs are sets with more nefarious objectives in mind, and access to source code enables them to develop exploits for unpatched, easily identified software builds.
I also suspect that at least some amount of laissez-faire tends to creep into the software-development process when you adopt someone else’s code versus developing your own, especially if you subsequently “forget” to make proper attribution and take other appropriate action regarding that adoption. The result is a tendency to overlook the need to maintain that portion of the codebase as exploits and broader bugs in it are discovered and dealt with by the developer community or, more often than not, the one-and-only developer.
Sometimes, though, code-update neglect is intentional:
Consumer electronics manufacturers as a rule make scant (if any) profit on each unit sold, especially after subtracting the “percentage” taken by retailer intermediaries. Revenue tangibly accrues only as a function of unit volume, not from per-unit profit margin. Initial-sale revenue is sometimes supplemented by after-sale firmware-unlocked feature set updates, services, and other add-ons. But more often than not, a manufacturer’s path to ongoing fiscal stability involves straightforwardly selling you a brand-new replacement/upgrade unit down the road; cue obsolescence by design for the unit currently in your possession.
Which leads to my next topic…
Longevity
One of the products “showcased” in my August 2020 writeup didn’t meet its premature demise due to intentionally unfixed software bugs (as was the case for a conceptually similar product in Belkin’s Wemo line, several examples of which I owned when the exploit was announced). Instead, its early expiration was the result of an intentional termination of the associated “cloud” service done by its retail supplier, Best Buy (Connect WiFi Smart Plug shown above).
More recently, I told you about a similar situation (subsequently resolved positively via corporate buyout and resurrection, I’m happy to note) involving SmartLabs’ various Insteon-branded powerline networking products. Then there was the Spotify Car Thing, which I tore down in early 2023. And right before this year’s CES opened its doors to the masses, ironically, came yet another case study example of the ongoing disappointing trend: the $800 (nope, no refunds) Moxie “emotional support” robot, although open source (which, yes, I know I just critiqued earlier here) may yet come to the rescue for the target 5-10 year old demographic:
Government oversight to the rescue, again (?). Here’s a summary, from Slashdot’s highlight:
Nearly 89% of smart device manufacturers fail to disclose how long they will provide software updates for their products, a Federal Trade Commission staff study found this week. The review of 184 connected devices, including hearing aids, security cameras and door locks, revealed that 161 products lacked clear information about software support duration on their websites.
Basic internet searches failed to uncover this information for two-thirds of the devices. “Consumers stand to lose a lot of money if their smart products stop delivering the features they want,” said Samuel Levine, Director of the FTC’s Bureau of Consumer Protection. The agency warned that manufacturers’ failure to provide software update information for warranted products costing over $15 may violate the Magnuson Moss Warranty Act. The FTC also cautioned that companies could violate the FTC Act if they misrepresent product usability periods. The study excluded laptops, personal computers, tablets and automobiles from its review.
Repeating what I said earlier, I’m skeptical that this effort will be successful, despite my admiration for its honorable intentions. In no small part, my pessimism stems from recent US election results, given that Republicans have (historically, at least) been disproportionally pro-business to the detriment of consumer rights. That said, were the manufacturer phase-out to instead be the result of something other than the shutdown of a proprietary “cloud” service, such as (for example) a no-longer-maintained-therefore-viable (or at-all available, for that matter) proprietary application, the hardware might still be usable if it could alternatively be configured and controlled using industry-standard command and communications protocols.
Which leads to my next topic…
Interoperability
Those of you who read to the bitter end of my recently published “2024 look-back” tome might have noticed a bullet list of topics there that I’d originally also hoped to cover but eventually decided to save for later. The first topic on the list, “Matter and Thread’s misfires and lingering aspirations,” I held back not just because I was approaching truly ridiculous wordcount territory but also because I suspected I’d have another crack at it a short time later, at CES to be precise.
I was right; that time is now. Matter, for those of you not already aware, is:
…a freely available connectivity standard for smart home and IoT (Internet of Things) devices. It aims to improve interoperability and compatibility between different manufacturers and security, always allowing local control as an option.
And Thread? I thought you’d never ask. It’s:
…an IPv6-based, low-power mesh networking technology for Internet of things (IoT) products…
Often used as a transport for Matter (the combination being known as Matter over Thread), the protocol has seen increased use for connecting low-power and battery-operated smart-home devices.
Here’s what I wrote about Matter and Thread a year ago, in my 2024 CES discourse:
The Matter smart home communication standard, built on the foundation of the Thread (based on Zigbee) wireless protocol, had no shortage of associated press releases and product demos in Las Vegas this week. But to date, its implementation has been underwhelming (leading to a scathing but spot-on recent diatribe from The Verge, among other pieces), both in comparison to its backers’ rosy projections and its true potential.
Not that any of this was a surprise to me, alas. Consider that the fundamental premise of Matter and Thread was to unite the now-fragmented smart home device ecosystem exemplified by, for example, the various Belkin Wemo devices currently residing in my abode. If you’re an up-and-coming startup in the space, you love industry standards, because they lower your market-entry barriers versus larger, more established competitors. Conversely, if you’re one of those larger, more established suppliers, you love barriers to entry for your competitors.
Therefore the lukewarm-at-best (and more frequently, nonexistent or flat-out broken) embrace of Matter and Thread by legacy smart home technology and product suppliers (for which, to be precise, and as my earlier Blink example exemplifies, conventional web browser access, vs a proprietary app, is even a bridge too far)…Suffice it to say that I’m skeptical about Matter and Thread’s long-term prospects, albeit only cautiously so. I just don’t know what it might take to break the logjam that understandably prevents competitors from working together, in spite of the reality that a rising tide often does end up lifting all boats…or if you prefer, it’s often better to get a slice of a large pie versus the entirety of a much smaller pie.
A year later, is the situation better? Not really, candidly. For a more in-depth supplier-sourced perspective, I encourage you to read Aalyia’s coverage of her time spent last week in Silicon Labs’ product suite, including an interview with Daniel Cooley, CTO of the company. Cooley is spot-on when he notes that “it is not unusual for standards adoption to progress slower than desired.” I’ve seen this same scenario play out plenty of times in the past, and Matter and Thread (assuming they eventually achieve widespread success) won’t be the last. I’m reminded, for example, of a quote attributed to Bill Gates, that “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next 10.”
Cooley is also spot-on when he notes that Matter and Thread don’t necessarily need to go together; the Matter connectivity standard can alternatively use Ethernet (either wireless, aka Wi-Fi, or wired) for transport, along with Bluetooth Low Energy for initial device setup purposes (and speaking of wireless smart home network protocols, by the way, a quick aside: check out Z-Wave’s just-announced long range enhancements). And granted, there has been at least progress with both Matter (in particular) and Thread over the past year.
Version 1.4 of the Matter specification, announced last November, promises (quoting from Ars Technica’s coverage) “more device types, improvements for working across ecosystems [editor note: a concept called “Enhanced Multi-Admin”], and tools for managing battery backups, solar panels, and heat pumps”, for example. And at CES, the Connectivity Standards Alliance (CSA), which runs Matter, announced that Apple, Google, and Samsung will accept its certification results for their various “Works With” programs, too. That said, Amazon is notably absent from the CSA’s fast-track certification list. And more generally, Ars Technica was spot-on with the title of its writeup, “Matter 1.4 has some solid ideas for the future home—now let’s see the support.” See you back here this same time next year?
The Rest of the Story
(no, I don’t know what ballet has to do with smart rings, either)
Speaking of “approaching truly ridiculous wordcount territory”, I passed through 2,000 words a couple of paragraphs back, so I’m going to strive to make the rest of this piece more concise. Looking again at the list of potential coverage technology and product topics I scribbled down a few days ago, partway through CES, and after subtracting out the “Matter and Thread” entry I just discussed, I find…16 candidates left. Let’s divide that in two, shall we? Without further ado, and in no particular order save for how they initially streamed out of my noggin:
- Smart glasses: Ray-Ban and Meta’s jointly developed second-generation smart glasses were one of the breakout consumer electronics hits of 2024, with good (initial experience, at least) reason. Their constantly evolving AI-driven capabilities are truly remarkable, on top of the first-generation’s foundational still and video image capture and audio playback support. Unsurprisingly, therefore, a diversity of smart glasses implementations in various function and price-point options, from numerous suppliers and in both nonfunctional mockup, prototype and already-in-production forms, populated 2025 CES public booths and private meeting rooms alike in abundance. I actually almost bought a pair of Ray-Ban Meta glasses during Amazon’s Black Friday…err…week-plus promotion to play around with for myself (and subsequently cover here at EDN, of course). But I decided to hold off for the inevitable barely-used (if at all) eBay-posting markdowns to come. Why? Well, the recent “publicity” stemming from the New Orleans tragedy didn’t help (and here I thought “glassholes” were bad). Even though Meta Ray-Ban offers product options with clear lenses, not just sunglasses, most folks don’t (and won’t) wear glasses all the time, not to mention that battery life limitations currently preclude doing so anyway (and don’t get me started on the embedded batteries’ inherent obsolescence by design). And when folks do wear them, they’re fashion statements. Multiple pairs for various outfits, moods, styles (invariably going in and out of fashion quickly) and the like are preferable, something that’s not fiscally feasible for the masses when the glasses cost several hundred dollars apiece.
- Smart rings: This wearable health product category is admittedly intriguing because unlike glasses (or watches, for that matter), rings are less obvious to others, therefore it’s less critical (IMHO, at least) for the wearer to perfectly match them with the rest of the ensemble…plus you have 10 options of where to wear one (that said, does anyone put a ring on their thumb?). There were quite a few smart rings at CES this year, and next year there’ll probably be more. Do me a favor; before you go further, please go read (but come back afterwards!) The Verge’s coverage of Ultrahuman’s Rare ring family (promo videos at the beginning of this section). The snark is priceless; it was the funniest piece of 2025 CES coverage I saw!
- HDMI: Version 2.2 is enroute, with higher bandwidth (96 Gbps) now supportive of 16K resolution displays (along with 4K displays at head-splitting 480 fps), among other enhancements. And there’s a new associated “Ultra96” cable, too. At first, I was a bit bummed when I heard this, due to the additional infrastructure investment that consumers will need to shoulder. But then I thought back to all the times I’d grabbed a random legacy cable out of my box o’HDMI goodies only to discover that, for example, it only supported 1080p resolution, not 4K…even though the next one I pulled out of the box, which looked just like its predecessor down to the exact same length, did 4K without breaking a sweat. And I decided that maybe making a break from HDMI’s imperfect-implementation past history wasn’t such a bad idea, after all…
- 3D spatial audio: Up to this point, Dolby’s pretty much had the 3D spatial audio (which expands—bad pun intended—beyond conventional surround sound to also encompass height) stage all to itself with Atmos, but on the eve of CES, Samsung unveiled the latest fruits of its partnership with Google to promulgate an open source alternative called IAMF, for Immersive Audio Model and Formats, now also known by its marketing moniker, “Eclipsa Audio”. In retrospect, this isn’t a terrible surprise; for high-end video, Samsung has already settled on HDR10+ versus Dolby Vision. But I have questions, specifically as to whether Google and Samsung are really going to be able to deliver something credible that doesn’t also collide with Dolby’s formidable patent portfolio. And I also gotta say that the fact that nobody at Samsung’s booth was able to answer one reporter’s questions doesn’t leave me with a great deal of early-days confidence.
- TVs: Speaking of video, I mentioned more than a decade ago that Chinese display manufacturers were beginning to “make serious hay” at South Korean competitors’ expense, much as those same South Korea-based companies had previously done to their Japanese competitors (that said, it sure was nice to see Panasonic’s displays back at CES!). To wit, TCL has become a particularly formidable presence in the TV market. While it and its competitors are increasingly using viewer-customized ads (logging and uniquely responding to the specific content you’re streaming at the time) and other smart TV “platform” revenue enhancements to counterbalance oft-unprofitable initial hardware prices, TCL takes it to the next level with remarkably bad AI-generated drivel shown on its own “free” (translation: advertising-rife) channel. No thanks, I’ll stick with reruns of The Office. That said, the on-the-fly auto-translation capabilities built into Samsung’s newest displays (along with several manufacturers’ earbuds and glasses) were way cool.
- Qi: Good news/bad news on the wireless charging front. Bad news first: the Qi Consortium recently added the “Qi Ready” category to its Qi2 specification suite. What this means, simply stated, is that device manufacturers (notably, at least at the moment, of Android smartphones) no longer need to embed orientation-optimization magnets in the devices themselves. Instead, as I’m already doing with my Pixel phones, they can alternatively rely on magnets embedded in accompanying cases. On the one hand, as Apple’s MagSafe ecosystem already shows, if you put a case on a phone it needs to have magnets anyway, because the ones in the phone aren’t strong enough to work through the added intermediary case material. And—I dunno—maybe the magnets add notable bill-of-materials cost? Or they interfere with the phone’s speakers, microphones and the like? Or…more likely (cynically, at least), the phone manufacturers see branded cases-with-magnets as a lucrative upside revenue stream? Thoughts, readers? Now for the good news: auto-movable coils to optimize device orientation! How cool is that?
- Lithium battery-based storage systems: Leading suppliers are aggressively expanding beyond portable devices into full-blown home backup systems. EcoFlow’s monitoring and management software looks quite compelling, for example, although I think I’ll skip the solar cell-inclusive hat. And Jackery’s now also selling solar cell-augmented roof tiles.
- Last but not least: (the) RadioShack (licensed brand name, to be precise) is back, baby!
And, now well past 3,000 words, I’m putting this one to bed, saving discussions on robots, Wi-Fi standards evolutions, full-body scanning mirrors with cameras (!!), the latest chips, inevitable “AI” crap and the like for another day. I’ll close with iFixit’s annual “worst of show” coverage:
And with that, I look forward to your thoughts on the things I discussed, saved for later and overlooked alike in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- CES 2025 coverage
- IoT device vulnerabilities are on the rise
- Routers infected with malware: Owners (and manufacturers) beware
- Disassembling a Cloud-compromised NAS
- 2025: A technology forecast for the year ahead
- A Bluetooth receiver, an identity deceiver
- Open Source: Keep It Current Or Suffer The Consequences
- Heartbleed: the wakeup call the open-source community needed?
- Obsolescence by design, defect, or corporate decree
CEA-Leti presenting at Photonics West, including Invited Paper on optical phased arrays for LiDAR
LED Meaning, Types, Working, Applications, Uses & Advantages
LED stands for Light Emitting Diode. It is a semiconductor component that transforms electrical energy into light through the process of electroluminescence.
Types of LED
- Standard LEDs: Basic LEDs used in indicators, displays, and signaling.
- High-Power LEDs: Brighter and used in floodlights, automotive headlights, and streetlights.
- RGB LEDs: Red, Green, and Blue LEDs that can produce a range of colours.
- COB LEDs (Chip on Board): Multiple LED chips mounted on a single circuit board for uniform light distribution.
- SMD LEDs (Surface Mounted Diodes): Compact and efficient for general-purpose lighting.
- Filament LEDs: Designed to resemble traditional incandescent bulbs with modern LED technology.
How Does LED Work?
- Semiconductor Material: LEDs use a semiconductor made of materials like gallium arsenide or gallium nitride.
- Electroluminescence: When electrical current flows through the semiconductor, it excites electrons, causing them to release energy in the form of photons (light).
- Phosphor Coating: For white light, blue LEDs are coated with phosphor materials to convert the blue light into white light.
LED Applications
- Residential Lighting: General lighting, ceiling lights, table lamps.
- Commercial Lighting: Offices, retail stores, and large venues.
- Street Lighting: Energy-efficient public illumination.
- Automotive Lighting: Headlights, brake lights, interior lights.
- Displays: TVs, computer monitors, and digital billboards.
- Signage: Outdoor and indoor advertising displays.
- Medical Equipment: Surgical lights, diagnostic tools.
LED Advantages
- Energy Efficiency: Uses up to 80% less energy than incandescent bulbs.
- Long Lifespan: Can last 25,000 to 50,000 hours or more.
- Durability: Resists shocks, vibrations, and extreme temperatures.
- Eco-Friendly: Free of toxic materials like mercury.
- Instant Lighting: Lights up immediately without warm-up time.
- Dimmable: Many LEDs can be adjusted for brightness.
- Design Flexibility: Available in various shapes, colors, and sizes.
LED Disadvantages
- Higher Initial Cost: More expensive upfront compared to traditional lighting.
- Heat Sensitivity: Requires proper heat dissipation to maintain performance.
- Color Accuracy: Lower-quality LEDs may have poor color rendering.
- Blue Light Emission: Excessive blue light exposure may cause discomfort or disrupt sleep.
- Compatibility Issues: Some older fixtures or dimmers may not work well with LEDs.
LED Lighting Definition, Types, Applications and Benefits
LED (Light Emitting Diode) lighting is a lighting technology that utilizes semiconductors to transform electrical energy into visible light. LEDs are highly efficient, durable, and versatile, making them suitable for a wide range of applications, from home lighting to industrial and automotive use.
History of LED Lighting
- 1907: H.J. Round first observed electroluminescence in silicon carbide, which became a foundational discovery for the development of LED technology.
- 1962: Nick Holonyak Jr., working at General Electric, created the first visible-spectrum LED (red).
- 1970s: LED technology expanded with additional colors like green and yellow, though applications were limited to indicators and displays.
- 1990s: Blue LEDs were developed by Shuji Nakamura, enabling the creation of white LEDs by combining blue light with phosphor coatings.
- 2000s: LEDs began to replace traditional incandescent and fluorescent lighting in many applications due to advances in efficiency, color rendering, and cost.
- Today: LEDs dominate the lighting industry with widespread applications, from smart home systems to streetlights and displays.
Types of LED Lighting
- Miniature LEDs
- Used in indicators, displays, and small electronics.
- High-Power LEDs
- Brighter and used in high-intensity applications like floodlights and automotive headlights.
- RGB LEDs
- Combine red, green, and blue LEDs to produce various colors; used in displays and decorative lighting.
- COB LEDs (Chip on Board)
- Provide high brightness and even light distribution; common in spotlights and downlights.
- SMD LEDs (Surface-Mounted Diodes)
- Compact and versatile; widely used in strip lighting and general-purpose lighting.
- Filament LEDs
- Mimic traditional filament bulbs; used for decorative lighting.
How Does LED Lighting Work?
- Semiconductor Materials: LEDs use a semiconductor (typically gallium arsenide or gallium nitride).
- Electric Current: When electricity flows through the diode, electrons combine with holes in the semiconductor material, releasing energy in the form of photons (light).
- Phosphor Coating: For white light, a blue LED is coated with a phosphor material to convert blue light into white light.
Applications of LED Lighting
- Residential: General lighting, decorative lighting, and smart home systems.
- Commercial: Office spaces, retail displays, and signage.
- Industrial: Factory lighting, warehouse illumination, and hazardous environments.
- Automotive: Headlights, interior lighting, and brake lights.
- Street Lighting: Energy-efficient public lighting systems.
- Displays: TVs, monitors, and large digital billboards.
- Medical: Surgical lighting and diagnostic devices.
How to Use LED Lighting
- Select the Right Type: Choose LEDs based on brightness (lumens), color temperature (warm, cool, or daylight), and beam angle.
- Install Proper Fixtures: Use fixtures designed for LEDs to ensure optimal performance and longevity.
- Control Options: Utilize dimmers, smart systems, or RGB controllers for customized lighting.
- Placement: Position LEDs effectively to reduce glare and enhance the desired ambiance.
Advantages of LED Lighting
- Energy Efficiency: LEDs consume up to 80% less power compared to traditional incandescent bulbs.
- Long Lifespan: Can last 25,000–50,000 hours, significantly longer than traditional lighting.
- Durability: Resistant to shocks, vibrations, and extreme temperatures.
- Eco-Friendly: Contains no toxic materials like mercury and emits less heat.
- Design Flexibility: Available in various shapes, colours, and sizes.
- Instant Illumination: LEDs turn on immediately without any warm-up period.
- Dimmable and Controllable: Many LEDs support dimming and integration into smart lighting systems.
Disadvantages of LED Lighting
- Higher Upfront Cost: LEDs are more expensive initially compared to traditional lighting.
- Heat Sensitivity: Performance can degrade if not properly cooled.
- Color Rendering: Some cheaper LEDs may have lower color rendering accuracy.
- Blue Light Concerns: Excessive blue light exposure from LEDs may cause eye strain or disrupt sleep cycles.
- Compatibility Issues: May not work well with older dimmers or fixtures without modifications.
Troubleshooting Flowchart from Practical Electronics for Inventors. What would you add? Is this a good guide?
EEVblog 1661 - AC Basics Tutorial Part 5: Time Domain vs Frequency Domain
Weekly discussion, complaint, and rant thread
Open to anything, including discussions, complaints, and rants.
Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.
Reddit-wide rules do apply.
To see the newest posts, sort the comments by "new" (instead of "best" or "top").
Automotive insights from CES 2025
OEMs are shifting from installing black-box solutions with specialized functions in the more conventional domain architecture to a zone architecture and a function-agnostic processing backbone where each node handles location-specific data. Along with this trend, there is a push toward optimizing sensor functions, fusing multimodal input data with ML for contextual awareness. Sensors no longer serve one function; instead, they can be leveraged in a series of automotive systems from driver monitoring systems (DMSs) to smart door access. As a result, camera/sensor count is minimized and power consumption reduced. A tour of several booths at CES 2025 showed some of the automotive-oriented solutions.
Automotive lighting
Microchip’s intelligent smart embedded LED (ISELED), ISELED light and sensor network (ILaS), and Macroblock lighting solutions can be seen in Figure 1. The ISELED protocol was developed to overcome the issue of requiring an external IC per LED to control the color/brightness of individual LEDs. Instead, Microchip has integrated an intelligent ASIC into each LED, and the entire system can be controlled with a simple 16-bit MCU. The solution allows for more styling control for aesthetics, with additional use cases such as broadcasting the status of a car via text that appears on display-based matrix lighting.
Figure 1: Microchip ISELED lighting solution where all of these LEDs are individually addressable, allowing designers to change the color/brightness levels of each LED.
ADI’s 10BASE-T1S ethernet to edge bus (E2B) tech has been used as a body control and automotive lighting connectivity solution. And, while this solution is not directly related to LED control, it can be used to update OEM automotive lighting systems that leverage the 10BASE-T1S automotive bus.
In-cabin sensing systems
One of the more pervasive themes was child presence detection (CPD) and occupancy monitoring system (OMS) products, with many companies showing off their ultra-wideband (UWB) detection and/or ranging tech and 60-GHz radar chips. The inspiration here comes from the incessant pressure on OEMs to meet stringent safety regulations. For instance, the Euro NCAP advanced program will only offer rewards to OEMs for direct sensing systems for CPD. For UWB sensing, the typical setup involved four UWB anchors placed outside of the vehicle and two on the inside to detect a phone equipped with UWB. The NXP booth’s automotive UWB demo can be seen in Figure 2. As shown in the image, the UWB radar will be able to identify the distance of the phone from the UWB anchor and unlock the car from the outside using the UWB ranging feature with time-of-flight (ToF) measurements. The very same principles can be applied to smart door locks and train stations, allowing passengers with pre-purchased train tickets to pass the turnstile from outside of the station to the inside of it.
Figure 2: The NXP automotive UWB radar smart car access solution.
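As a rough illustration of the ranging arithmetic described above (this is generic two-way time-of-flight math, not NXP’s actual API; the function name and numbers are hypothetical):

```python
# Generic two-way ranging: distance follows from the round-trip time of
# flight at the speed of light, minus the tag's known reply delay.
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float, reply_delay_s: float) -> float:
    """One-way distance estimated from a poll/response UWB exchange."""
    return C * (round_trip_s - reply_delay_s) / 2.0

# Hypothetical numbers: 700 ns round trip, 650 ns turnaround -> ~7.5 m
print(tof_distance_m(700e-9, 650e-9))
```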
Qorvo also showed their UWB solution; Figure 3 shows one UWB anchor on a toy car for demonstration purposes. The image also highlights another ADAS application of radar (UWB or 60 GHz): respiration and heartbeat detection.
An engineer at NXP gave a basic explanation of the process: the technology measures signal reflections from occupants to detect, for instance, how often the chest is expanding/contracting to measure breathing. This allows for direct sensing of occupants with algorithms that can discern whether or not a child is present in the vehicle, offering CPD, OMS, intrusion and proximity alerts, and a host of other functions with the established sensor infrastructure. It is apparent that there is no clear answer on the number of wireless chips, but there is a clear requirement that sensors become more intelligent to minimize part count—a single radar chip could eliminate five in-seat weight sensors.
Figure 4: Qorvo’s UWB keyless entry and vitals monitoring solutions in partnership with other companies.
TI’s CPD, OMS, and driver monitoring system (DMS) can be seen in Figure 5 with a combination of their 60-GHz radar chip and a camera. The wider bandwidth available at 60 GHz gives much finer range resolution, so this system would likely be more accurate in CPD applications, potentially producing fewer false positives. However, possibly the most obvious benefit of utilizing 60-GHz radar is the fact that a single module replaces the six UWB modules for CPD, OMS, intrusion detection, gesture detection, etc. This, however, does not entirely sidestep UWB technology; the ranging aspect of UWB allows for accurate smart door access, and that is something that may be impractical for 60-GHz technology, especially considering the atmospheric absorption at that particular frequency.
Figure 5: TI’s CPD, OMS, and driver monitoring system (DMS) CES demo.
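For context on the resolution comparison, the standard radar relation (not quoted from the demo) ties range resolution to swept bandwidth B rather than carrier wavelength:

ΔR = c / (2*B)

Assuming, say, 4 GHz of usable bandwidth near 60 GHz, ΔR ≈ 3.75 cm, versus roughly 30 cm for a 500-MHz UWB channel.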
AD and surround view systems
Automotive surround view cameras for AD and ADAS functions were also presented in a number of booths. Microchip’s can be seen in Figure 6, where their serializers are used in three cameras that can transmit up to 8 Gbps. The Microchip deserializers are configured to receive the video data and aggregate it via the Automotive SerDes Alliance Motion Link (ASA-ML) standard to the central compute, or high-performance computer (HPC), mimicking a zonal architecture.
Figure 6: Microchip’s ASA-ML standard 360° surround view solution.
ADI also used a serializer/deserializer (SerDes) solution with a gigabit multimedia serial link (GMSL) demo. GMSL’s claim to fame is its lightweight nature: it transports up to 12 Gbps over a single bidirectional cable, shaving harness weight.
Figure 7: ADI GMSL demo aggregating feeds from six cameras into a deserializer board and going into a single MIPI port on the Jetson HPC-platform.
Using VLMs for AD
Ambarella, a company that specializes in AI vision processors, showed a particularly interesting AD demo that integrated an LLM in the stack. This technology was originally developed by Vislab, an Italian startup that is now an R&D automotive center under Ambarella. The system consisted of six cameras, five radars, and Ambarella’s CV3 automotive domain controller for L2+ to L4 autonomy. The use of the vision language model (VLM) LLaVA-OneVision allowed for more context-aware decision making.
Vislab founder Alberto Broggi hosted the demo and explained the benefits of leveraging an LLM in this particular use case: “Suppose you have the best perception in the world, so you can perceive everything; you can understand the position of cars, locate pedestrians, and so on. You will still have problems, because there are situations that are ambiguous.” He continued by describing a few of these situations: “If you have a car in front of you in your lane, you don’t really know whether or not you can overtake because it depends on the situation. If it’s a broken-down vehicle, you can obviously overtake it. If it’s a vehicle that is waiting for a red light, you can’t. So you really need some higher-level description and context.”
Figure 8 and the video below show one such example of the contextual awareness that a VLM can offer.
Figure 8: Ambarella VLM AD demo with use case offering some contextual-awareness and suggestions.
Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.
Related Content
- CES 2025: A Chat with Siemens EDA CEO Mike Ellow
- ADI’s efforts for a wirelessly upgraded software-defined vehicle
- CES 2025: Moving toward software-defined vehicles
- CES 2025: Approaches towards hardware acceleration
- CES 2025: Day 2 Wrap and Interview with EdgeCortix’s CEO
CES 2025 coverage
Editors from EDN and our AspenCore sister publications are covering the Consumer Electronics Show (CES). Scroll down to see coverage of this year’s CES!
CES 2025: Day 2 Wrap and Interview with EdgeCortix’s CEO
A constant theme at CES 2025 this week has been around the deployment of AI in all kinds of applications, how to drive as much intelligence as possible to the edge, sensor fusion and making everything smart. We saw many large and small companies developing technologies and products to optimize this process, aiming to get more “smarts” or performance with less effort and power.

CES 2025: Approaches towards hardware acceleration
It is clear that support for some kind of hardware acceleration has become paramount for success in breaking into the intelligent embedded edge. Company approaches to the problem run the full gamut from hardware-accelerated MCUs with abundant software support and reference code, to an embedded NPU.

CES 2025: It’s All About Digital Coexistence, and AI is Real
CES 2025 commenced in Las Vegas, Nev., on Sunday at the Mandalay Bay Convention Center for the trade media with the Consumer Technology Association’s annual tech trends survey and forecast. Plus, there was a sneak preview provided to some of the exhibiting companies at the CES Unveiled event.

Integration of AI in sensors prominent at CES 2025
Miniaturization and power efficiency have long defined sensor designs. Enter artificial intelligence (AI) and software algorithms to dramatically improve sensing performance and enable a new breed of features and capabilities. This trend has been apparent at this year’s CES in Las Vegas, Nevada.

Software-defined vehicle (SDV): A technology to watch in 2025
Software-defined vehicle (SDV) technology has been a prominent highlight in the quickly evolving automotive industry. But how much of it is hype, and where is the real and tangible value? CES 2025 in Las Vegas will be an important venue to gauge the actual progress this technology has made with a motto of bringing code on the road.

CES 2025: Wirelessly upgrading SDVs
SDVs rethink underlying vehicle architecture so that cars are broken into zones that directly service the vehicle subsystems that surround them locally, cutting down wiring, latency, and weight. Another major benefit of this is over-the-air (OTA) updates using Wi-Fi or cellular to update cloud-connected cars; however, bringing Ethernet to the automotive edge comes with its complexities.

CES 2025: Moving toward software-defined vehicles
TI’s automotive innovations are currently focused in powertrain systems; ADAS; in-vehicle infotainment (IVI); and body electronics and lighting. The recent announcements fall into ADAS with the AWRL6844 radar sensor as well as IVI with the AM275 and AM62D processors and the class-D audio amplifier.

CES 2025: Day 1 Recap with Synaptics, Ceva
EE Times and AspenCore staff are on-site at CES 2025, providing expert coverage on the latest and greatest developments at one of the largest tech events in the world.

CES 2025: A Chat with Siemens EDA CEO Mike Ellow
Siemens showcased its latest PAVE360 digital twin solution this year at CES 2025, lowering the barrier between design efforts that are typically siloed. EE Times had an opportunity to chat with Siemens EDA CEO Mike Ellow about how this approach to design is relevant for the semiconductor industry—especially considering the recent uptick in using AI tools at every level of a system to dynamically assess the trickle up/down effects of design adjustments.

CES 2025: An interview with Si Labs’ Daniel Cooley
At the forefront of many of the CES wireless solutions is WiFi’s newest iteration (WiFi 6), BLE and BLE audio for their already-established place in consumer devices. A chat with Silicon Labs CTO Daniel Cooley illuminated the company’s presence and future in IoT and the intelligent edge.
SemiLEDs’ revenue falls slightly in December quarter
VueReal appoints VP of semiconductor engineering
Plessey and Meta develop brightest red micro-LED display for AR glasses
Teledyne HiRel releases high-power 30MHz–5GHz RF GaN switch
Integration of AI in sensors prominent at CES 2025
Miniaturization and power efficiency have long defined sensor designs. Enter artificial intelligence (AI) and software algorithms to dramatically improve sensing performance and enable a new breed of features and capabilities. This trend has been apparent at this year’s CES in Las Vegas, Nevada.
See full story at EDN’s sister publication, Planet Analog.
Related Content
- The sensor parade at CES 2025
- Unlocking New Possibilities with Smart Sensors
- Designer’s Guide to Industrial IoT Sensor Systems
- Smart sensors need smart power integrity analysis
- Smart sensor emulation speed design of signal conditioning systems
Exploring Artificial General Intelligence: A Leap Toward Thinking Machines
Artificial General Intelligence (AGI) represents the ultimate frontier in the world of artificial intelligence—a vision of machines that think, learn, and understand as flexibly and broadly as humans do. Unlike today’s narrow AI systems that excel in specific tasks, such as translating languages or diagnosing diseases, AGI aims to bridge the gap between computational efficiency and human-like cognition. It’s the dream of creating an AI so versatile that it can seamlessly adapt to any intellectual challenge across diverse domains.
What Exactly is AGI?
AGI isn’t just about making machines smarter in specific ways; it’s about giving them brainpower equivalent to our own. Imagine an AI that not only plays chess like a grandmaster but also writes poetry, learns to cook, solves intricate physics problems, and holds deep, meaningful conversations—all without needing to be reprogrammed for each task. AGI aspires to be this all-encompassing, adaptable system that can reason, learn, and apply knowledge to new situations, much like a human.
The Difference Between AGI and Narrow AI
To understand AGI, it’s essential to contrast it with what we currently have: “Narrow AI”.
Narrow AI dominates our lives today, powering virtual assistants like Alexa, recommendation algorithms on Netflix, and even self-driving cars. These systems are exceptionally good at what they’re designed to do but lack the ability to generalize or step outside their predefined capabilities. A narrow AI trained to diagnose diseases, for example, can’t suddenly start solving math equations.
AGI, in contrast, has the potential to overcome these constraints. It wouldn’t just perform tasks; it would learn how to approach them, adapt to new ones, and even innovate solutions we humans might never conceive.
The Path to AGI: Still a Theoretical Dream
At present, AGI remains a theoretical concept, with scientists and engineers dedicating their efforts to unraveling the complexities of human-like cognition. Progress is being made in areas like neural networks, reinforcement learning, and natural language processing, but creating a machine that truly “understands” remains elusive.
The challenge isn’t just computational—it’s deeply philosophical. How do we model consciousness, creativity, and abstract thinking? How do we design a machine capable of ethical reasoning or emotional intelligence? AGI isn’t just about programming; it’s about unraveling the mysteries of human thought itself.
The Promise and Peril of AGI
If achieved, AGI could revolutionize every facet of society. It could accelerate scientific discovery, solve complex global challenges like climate change, and redefine education and healthcare. Imagine a world where machines collaborate with humans to unlock limitless potential.
However, this vision isn’t without risks. AGI raises profound ethical questions: How do we ensure it aligns with human values? How do we prevent misuse? And how do we safeguard against scenarios where AGI outpaces our control? These are questions that must be addressed alongside technological progress.
The Road Ahead
AGI represents the culmination of human ambition—a synthesis of technology and intellect that mirrors our own capabilities. While it may still be a distant goal, its pursuit inspires us to explore the very essence of intelligence, creativity, and ethics. The journey to AGI isn’t just about building machines; it’s about redefining what it means to be human in a world of infinite possibilities.