Feed aggregator
The next AI frontier: AI inference for less than $0.002 per query

Inference is rapidly emerging as the next major frontier in artificial intelligence (AI). Historically, AI development and deployment have focused overwhelmingly on training, with approximately 80% of compute resources dedicated to it and only 20% to inference.
That balance is shifting fast. Within the next two years, the ratio is expected to reverse to 80% of AI compute devoted to inference and just 20% to training. This transition is opening a massive market opportunity with staggering revenue potential.
Inference has a fundamentally different profile: it demands lower latency, greater energy efficiency, and predictable real-time responsiveness. Training-optimized hardware pressed into inference service instead delivers excessive power consumption, underutilized compute, and inflated costs.
When deployed for inference, training-optimized computing resources yield a cost per query one or even two orders of magnitude higher than the $0.002-per-query benchmark established by a 2023 McKinsey analysis, which was based on Google’s 2022 search activity of an estimated 100,000 queries per second on average.
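To put that benchmark in perspective, here is a back-of-the-envelope sketch; any assumption beyond the cited figures (100,000 queries/s, $0.002/query) is mine:

```python
# Back-of-the-envelope: what the $0.002/query benchmark implies at the
# cited Google-scale traffic. Illustrative only.
QUERIES_PER_SECOND = 100_000   # McKinsey's estimate of 2022 Google search load
COST_PER_QUERY = 0.002         # USD, the benchmark cost per query

seconds_per_year = 365 * 24 * 3600
queries_per_year = QUERIES_PER_SECOND * seconds_per_year
annual_cost = queries_per_year * COST_PER_QUERY

print(f"Queries/year: {queries_per_year:.2e}")                    # ~3.15e12
print(f"Annual serving cost at benchmark: ${annual_cost:,.0f}")   # ~$6.3 billion
# At 10-100x higher cost per query (training-optimized hardware), the same
# traffic would cost roughly $63B-$630B per year -- the gap described above.
```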
Today, the market is dominated by a single player whose quarterly results reflect its stronghold. While a competitor has made some inroads and is performing respectably, it has yet to gain meaningful market share.
One reason is architectural similarity: by taking much the same approach as the incumbent rather than offering a differentiated, inference-optimized alternative, the competitor inherits the same limitations. To lead in the inference era, a fundamentally new processor architecture is required. The most effective approach is to build dedicated, inference-optimized infrastructure: an architecture specifically tailored to the operational realities of processing generative AI models like large language models (LLMs).
This means rethinking everything from compute units and data movement to compiler design and LLM-driven architectures. By focusing on inference-first design, it’s possible to achieve significant gains in performance-per-watt, cost-per-query, time-to-first-token, output-token-per-second, and overall scalability, especially for edge and real-time applications where responsiveness is critical.
This is where the next wave of innovation lies—not in scaling training further, but in making inference practical, sustainable, and ubiquitous.
The inference trinity
AI inference hinges on three critical pillars: low latency, high throughput and constrained power consumption, each essential for scalable, real-world deployment.
First, low latency is paramount. Unlike training, where latency is relatively inconsequential—a job taking an extra day or costing an additional million dollars is still acceptable as long as the model is successfully trained—inference operates under entirely different constraints.
Inference must happen in real time or near real time, with extremely low latency per query. Whether it’s powering a voice assistant, an autonomous vehicle or a recommendation engine, the user experience and system effectiveness hinge on sub-millisecond response times. The lower the latency, the more responsive and viable the application.
Second, high throughput at low cost is essential. AI workloads involve processing massive volumes of data, often in parallel. To support real-world usage—especially for generative AI and LLMs—AI accelerators must deliver high throughput per query while maintaining cost-efficiency.
Vendor-specified throughput often falls short of peak targets in AI workload processing due to low-efficiency architectures like GPUs, especially now that the economics of inference are under intense scrutiny. These are high-stakes battles, where cost per query is not just a technical metric—it’s a competitive differentiator.
Third, power efficiency shapes everything. Inference performance cannot come at the expense of runaway power consumption. This is not only a sustainability concern but also a fundamental limitation in data center design. Lower-power devices reduce the energy required for compute, and they ease the burden on the supporting infrastructure—particularly cooling, which is a major operational cost.
The trade-off can be viewed from the following two perspectives (a back-of-the-envelope sketch follows the list):
- A new inference device that delivers the same performance at half the energy consumption can dramatically reduce a data center’s total power draw.
- Alternatively, maintaining the same power envelope while doubling compute efficiency effectively doubles the data center’s performance capacity.
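Numerically, the two perspectives are two uses of the same factor-of-two efficiency gain. A trivial sketch, with a hypothetical 10-MW facility as the example:

```python
# Illustrative only: two ways to spend a 2x performance-per-watt gain.
DC_POWER_BUDGET_MW = 10.0      # hypothetical data-center power envelope
EFFICIENCY_GAIN = 2.0          # the 2x performance-per-watt improvement

# Option 1: hold throughput constant, halve the power draw.
power_needed_mw = DC_POWER_BUDGET_MW / EFFICIENCY_GAIN

# Option 2: hold the power envelope constant, double the throughput.
relative_throughput = EFFICIENCY_GAIN

print(f"Option 1: same work in {power_needed_mw:.0f} MW instead of {DC_POWER_BUDGET_MW:.0f} MW")
print(f"Option 2: {relative_throughput:.0f}x throughput within the same {DC_POWER_BUDGET_MW:.0f} MW")
```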
Bringing inference to where users are
A defining trend in AI deployment today is the shift toward moving inference closer to the user. Unlike training, inference is inherently latency-sensitive and often needs to occur in real time. This makes routing inference workloads through distant cloud data centers increasingly impractical—from both a technical and economic perspective.
To address this, organizations are prioritizing edge-based inference, processing data locally or near the point of generation. Shortening the network path between the user and the inference engine significantly improves responsiveness, reduces bandwidth costs, enhances data privacy, and ensures greater reliability, particularly in environments with limited or unstable connectivity.
This decentralized model is gaining traction across industry. Even AI giants are embracing the edge, as seen in their development of high-performance AI workstations and compact data center solutions. These innovations reflect a clear strategic shift: enabling real-time AI capabilities at the edge without compromising on compute power.
Inference acceleration from the ground up
One high-tech company, for example, is setting the engineering pace with a novel architecture designed specifically to meet the stringent demands of AI inference in data centers and at the edge. The architecture breaks away from legacy designs optimized for training workloads, delivering near-theoretical performance in latency, throughput, and energy efficiency. More entrants are certain to follow.
Below are some of the highlights of this inference technology revolution in the making.
Breaking the memory wall
The “memory wall” has challenged chip designers since the late 1980s. Traditional architectures attempt to mitigate the performance impact of data movement between external memory and processing units by layering memory hierarchies, such as multi-level caches, scratchpads, and tightly coupled memory, each offering tradeoffs between speed and capacity.
In AI acceleration, this bottleneck becomes even more pronounced. Generative AI models, especially those based on incremental transformers, must constantly reprocess massive amounts of intermediate state data. Conventional architectures struggle here. Every cache miss—or any operation requiring access outside in-memory compute—can severely degrade performance.
One approach collapses the traditional memory hierarchy into a single, unified memory stage: a massive SRAM array that behaves like a flat register file. From the perspective of the processing units, any register can be accessed anywhere, at any time, within a single clock. This eliminates costly data transfers and removes the bottlenecks that hamper other designs.
Flexible computational tiles, each with 16 high-performance processing cores dynamically reconfigurable at run-time, execute either AI operations, such as multi-dimensional matrix operations (ranging from 2D to N-dimensional), or advanced digital signal processing (DSP) functions.
Precision is also adjustable on-the-fly, supporting formats from 8 bits to 32 bits in both floating point and integer. Both dense and sparse computation modes are supported, and sparsity can be applied on the fly to either weights or data—offering fine-grained control for optimizing inference workloads.
Each core features 16 million registers. While such a vast register file presents challenges for traditional compilers, two key innovations come to the rescue:
- Native tensor processing, which handles vectors, tensors, and matrices directly in hardware, eliminates the need to reduce them to scalar operations and manually implement nested loops, as required in GPU environments like CUDA.
- With high-level abstraction, developers interact with the system at a high level—PyTorch and ONNX for AI, Matlab-like functions for DSP—without writing low-level code or managing registers manually. This simplifies development and significantly boosts productivity and hardware utilization (a sketch of this flow follows the list).
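As an illustration of that high-level flow, the sketch below defines a toy model in PyTorch and exports it to ONNX with stock tooling. The final compile step is a hypothetical placeholder, since the vendor toolchain is not public:

```python
# Minimal sketch of the high-level flow described above: define a model in
# PyTorch, export to ONNX, and hand the graph to a vendor toolchain.
# The export is stock PyTorch; vendor_compile() is a hypothetical stand-in.
import torch
import torch.nn as nn

class TinyMLP(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 64))

    def forward(self, x):
        return self.net(x)

model = TinyMLP().eval()
dummy = torch.randn(1, 256)                 # example input for graph tracing
torch.onnx.export(model, dummy, "tiny_mlp.onnx", opset_version=17)

# vendor_compile("tiny_mlp.onnx", target="edge-2chiplet")  # hypothetical call
```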
Chiplet-based scalability
A physical implementation leverages a chiplet architecture, with each chiplet comprising two computational cores. By combining chiplets with high-bandwidth memory (HBM) chiplet stacks, the architecture enables highly efficient scaling for both cloud and edge inference scenarios.
- Data center-grade inference: a configuration pairing eight VSORA chiplets with eight HBM3e chiplets delivers 3,200 TFLOPS of compute performance in FP8 dense mode, optimized for large-scale inference workloads in data centers.
- Edge AI configurations tailor compute resources and lower memory requirements to suit edge constraints: two chiplets plus one HBM chiplet deliver 800 TFLOPS, and four chiplets plus one HBM chiplet deliver 1,600 TFLOPS (the per-chiplet arithmetic is checked in the sketch below).
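Taken together, the published configurations imply near-linear chiplet scaling. A quick consistency check, using only the numbers above:

```python
# Sanity check on the published configurations: all three imply the same
# per-chiplet compute (FP8 dense), i.e., near-linear chiplet scaling.
configs = {
    "data center (8 chiplets + 8x HBM3e)": (8, 3200),
    "edge (4 chiplets + 1x HBM)":          (4, 1600),
    "edge (2 chiplets + 1x HBM)":          (2, 800),
}
for name, (chiplets, tflops) in configs.items():
    print(f"{name}: {tflops / chiplets:.0f} TFLOPS per chiplet")
# All three work out to 400 TFLOPS per chiplet.
```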
Power efficiency as a side effect
The performance gains are clear, as is power efficiency. The architecture delivers twice the performance-per-watt of comparable solutions. In practical terms, the chip’s power draw tops out at just 500 watts, compared to over one kilowatt for many competitors.
When combined, these innovations provide multiple times the actual performance at less than half the power—offering an overall advantage of 8 to 10 times compared to conventional implementations.
CUDA-free compilation
One often-overlooked advantage of the architecture lies in its streamlined and flexible software stack. From a compilation perspective, the flow is simplified compared to traditional GPU environments like CUDA.
The process begins with a minimal configuration file—just a few lines—that defines the target hardware environment. This file enables the same codebase to execute across a wide range of hardware configurations, whether that means distributing workloads across multiple cores, chiplets, full chips, boards, or even across nodes in a local or remote cloud. The only variable is execution speed; the functional behavior remains unchanged. This makes on-premises and localized cloud deployments seamless and scalable.
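The actual configuration format is not public; the Python stand-in below is purely hypothetical, meant only to illustrate the idea that a few lines of target description select where otherwise unchanged code runs:

```python
# Hypothetical stand-in for the "few-line configuration file" described
# above -- the real format is not public. The point: only the target
# description changes; the code and its functional behavior do not.
target_config = {
    "topology": "chip",        # core | chiplet | chip | board | rack
    "devices": 1,              # how many units to distribute across
    "location": "on_prem",     # on_prem | local_cloud | remote_cloud
}
# compile(model, target_config) would behave identically on any topology;
# only execution speed differs.
```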
A familiar flow without complexity
Unlike CUDA-based compilation processes, the flow appears almost basic: the layers of manual tuning and complexity are replaced by a more automated and hardware-agnostic compilation approach.
The flow begins by ingesting standard AI inputs, such as models defined in PyTorch. These are processed by a proprietary graph compiler that automatically performs essential transformations such as layer reordering or slicing for optimal execution. It extracts weights and model structure and then outputs an intermediate C++ representation.
This C++ code is then fed into an LLVM-based backend, which identifies the compute-intensive portions of the code and maps them to the architecture. At this stage, the system becomes hardware-aware, assigning compute operations to the appropriate configuration—whether it’s a single A tile, an edge device, a full data center accelerator, a server, a rack or even multiple racks in different locations.
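The vendor’s graph compiler and C++ backend are proprietary, but stock PyTorch tooling gives a feel for the graph stage. This sketch (my substitution, not the actual toolchain) uses torch.fx to trace a model into a graph whose nodes can be inspected, reordered, or sliced before code generation:

```python
# Flavor of a graph-compiler pass using stock torch.fx (not the vendor's
# proprietary tool): trace the model, walk its graph, rewrite as needed,
# then regenerate runnable code.
import torch.nn as nn
from torch import fx

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
gm = fx.symbolic_trace(model)        # model -> editable graph IR

for node in gm.graph.nodes:          # each layer is a node a compiler could
    print(node.op, node.target)      # reorder, slice, or fuse at this point

gm.recompile()                       # emit runnable code from the (edited) graph
```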
Invisible acceleration for developers
From a developer’s point of view, the accelerator is invisible. Code is written as if it targets the main processor. During compilation, the flow identifies the code segments best suited for acceleration and transparently handles their transformation and mapping to hardware, lowering the barrier to adoption and requiring no low-level register manipulation or specialized programming knowledge.
The instruction set is high-level and intuitive, carrying over capabilities from its origins in digital signal processing. The architecture supports AI-specific formats such as FP8 and FP16, as well as traditional DSP operations like FP16 arithmetic, all handled automatically on a per-layer basis. Switching between modes is instantaneous and requires no manual intervention.
Pipeline-independent execution and intelligent data retention
A key architectural advantage is pipeline independence—the ability to dynamically insert or remove pipeline stages based on workload needs. This gives the system a unique capacity to “look ahead and behind” within a data stream, identifying which information must be retained for reuse. As a result, data traffic is minimized, and memory access patterns are optimized for maximum performance and efficiency, reaching levels unachievable in conventional AI or DSP systems.
Built-in functional safety
To support mission-critical applications such as autonomous driving, functional safety features are integrated at the architectural level. Cores can be configured to operate in lockstep mode or in redundant configurations, enabling compliance with strict safety and reliability requirements.
In the final analysis, a memory architecture that eliminates traditional bottlenecks, compute units tailored for tensor operations, and unmatched power efficiency sets a new standard for AI inference.
Lauro Rizzatti is a business advisor to VSORA, an innovative startup offering silicon IP solutions and silicon chips, and a noted verification consultant and industry expert on hardware emulation.
Related Content
- AI at the edge: It’s just getting started
- Custom AI Inference Has Platform Vendor Living on the Edge
- Partitioning to optimize AI inference for multi-core platforms
- Revolutionizing AI Inference: Unveiling the Future of Neural Processing
The post The next AI frontier: AI inference for less than $0.002 per query appeared first on EDN.
Bosch Propels Advanced ADAS Forward With Pair of Radar SoCs
Beijing IP Court denies Innoscience’s appeal against EPC’s compensated-gate patent
Who needs DC-DC converters anyway?
Why modulate a power amplifier?—and how to do it

We recently saw how certain audio power amplifiers can be used as oscillators. This Design Idea shows how those same parts can be used for simple amplitude modulation, which is trickier than it might seem.
Wow the engineering world with your unique design: Design Ideas Submission Guide
The relevant device is the TDA7052A, which we explored in some detail while making it oscillate. It has a so-called logarithmic gain-control input, the gain in dBs being roughly proportional to the voltage on that pin over a limited range.
However, we may want a reasonably linear response, which would mean undoing some of the chip designers’ careful work.
First question: why—what’s the application?
The original purpose of this circuit was to amplitude-modulate the power output stage of an infrasonic microphone. That gadget generated both the sub-10‑Hz baseband signal and an audio tone whose pitch varied linearly with it, allowing one to hear at least a proxy for the infrasonics. The idea was to keep the volume low during relatively inactive periods and only increase it during the peaks, whether those were positive or negative, so that frequency and amplitude modulation would work hand in hand.
The two basic options are to use the device’s inherent “log” law (more like antilog), so that the perceived loudness was modulated, or to feed the control pin with a logarithmically-squashed signal—the inverse of the gain-control curve—to linearize the modulation. The former is simpler but sounded rather aggressive; the latter, more complicated but smoother, so we’ll concentrate on that. The gain-control curve from the datasheet, overlaid with real-life measurements, is shown in Figure 1. Because we need gain to drive the speaker, we can only use the upper, more bendy, part of the curve, with around 26 dB of gain variation available.
Figure 1 The TDA7052A’s control voltage versus its gain, with the theoretical curve and practical readings.
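The “log-squash” linearization is easy to model. A minimal sketch, with an assumed slope and unity-gain point rather than datasheet values: because the device’s gain in dB is roughly linear in Vcon (an antilog amplitude response), feeding the pin the logarithm of the desired amplitude cancels the curve and yields near-linear modulation.

```python
# Model of the linearization trick. DB_PER_VOLT and V_REF are assumed,
# illustrative values -- not taken from the TDA7052A datasheet.
import math

DB_PER_VOLT = 40.0       # assumed slope of the gain-control curve
V_REF = 1.0              # assumed Vcon giving 0 dB (unity) gain

def gain_db(vcon):
    """Approximate device behavior over its usable control range."""
    return DB_PER_VOLT * (vcon - V_REF)

def vcon_for_amplitude(a):
    """Pre-distortion: log-squash the desired amplitude a (0 < a <= 1)."""
    return V_REF + 20.0 * math.log10(a) / DB_PER_VOLT

for a in (0.1, 0.25, 0.5, 1.0):
    v = vcon_for_amplitude(a)
    recovered = 10 ** (gain_db(v) / 20.0)
    print(f"target {a:4.2f} -> Vcon {v:5.3f} V -> actual {recovered:4.2f}")
```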
For accurate linear performance, an LM13700 OTA configured as an amplitude modulator worked excellently, but needed a separate power output stage and at least ±6-V supplies rather than the single, split 5-V rail used for the rest of the circuitry. An OTA’s accuracy and even precision are not needed here; we just want the result to sound right, and can cut some corners. (The LM13700’s datasheet is full of interesting applications.)
Next question: how?
At the heart of this DI is an interesting form of full-wave rectifier. We’ll look at it in detail, and then pull it to pieces.
If we take a paralleled pair of current sources, one inverting and the other not, we can derive a current proportional to the absolute value of the input: see Figure 2.
Figure 2 A pair of current sources can make a novel full-wave rectifier.
The upper, inverting, section sources current towards ground when the input is positive (with respect to the half-rail point), and the lower, non-inverting part does so for negative half-cycles. R1 sets the transconductance for both stages. Thus, the output current is a function of the absolute value of the input voltage. It’s shown as driving R4 to produce a voltage with respect to 0 V, which sounds more useful than it really is.
Conventional full-wave rectifiers usually have a voltage output, stored on a capacitor, and representing the peak levels. This circuit can’t do that: connecting a capacitor across R4 merely averages the signal. To extract the peaks, another stage would be needed: pointless. By the way, the original thoughts for this stage were standard precision rectifiers with incorporated or added current sources, but they proved to be more complicated while performing no better—except for inputs below ~5 mV, where they had less “crossover distortion.”
The maximum output voltage swing is limited by the ratios of R4 to R2 (or R3). Excessive positive inputs will tend to saturate Q1, so VOUT can approach Vs/2. (The transistor’s emitter is servoed to Vs/2.) With R4 = R2 = R3, negative swings saturate Q2, but the ratio of R3 and R4 means that VOUT can only approach Vs/4. Q1 and Q2 respond differently to overloads, with Q2’s circuit folding back much sooner. If R2, R3, and R4 are all equal, the maximum unclipped voltage swing across R4 is just less than a quarter of the supply rail voltage.
Increasing R1 and making R4 much greater than R2 or R3 allows a greater swing for those negative inputs, but at the expense of increased offset errors. Adding an extra gain stage would give those same problems while needing more parts.
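An idealized numerical model ties the above together. Component values follow the article (R1 sets the transconductance; R2 = R3 = R4 assumed), and the clipping limits are the Vs/2 and Vs/4 bounds just described; real-device offsets are ignored.

```python
# Idealized model of the Figure 2 rectifier: output current tracks the
# absolute value of the input (relative to the half-rail point), with the
# asymmetric clipping described above. Simplified; offsets ignored.
VS = 5.0                 # single split 5-V rail, per the article
R1 = 100e3               # transconductance-setting resistor (value from Figure 3)
R4 = 100e3               # load resistor, with R2 = R3 = R4 assumed

def rectifier_vout(vin):
    """vin is measured relative to the half-rail point."""
    i_out = abs(vin) / R1                     # full-wave rectified current
    vout = i_out * R4
    limit = VS / 2 if vin >= 0 else VS / 4    # Q1 vs. Q2 clipping behavior
    return min(vout, limit)

for vin in (-2.5, -1.0, 0.0, 1.0, 2.5):
    i_ua = abs(vin) / R1 * 1e6
    print(f"Vin {vin:+.1f} V -> I = {i_ua:5.1f} uA, Vout = {rectifier_vout(vin):.2f} V")
# At +/-2.5 V in, the ideal current is ~25 uA -- close to the 23 uA quoted
# later for the real circuit once device offsets are accounted for.
```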
Applying the current source to the power amp
Conclusion: This circuit is great for sourcing a current to ground, but if you need a linear voltage output, it’s less useful. We don’t want linearity but something close to a logarithmic response, or the inverse of the power amp’s control voltage. Feeding the current through a network containing a diode can do just that, and the resulting circuit is shown in Figure 3.
Figure 3 Schematic of a power amplifier that is amplitude-modulated using the dual current source.
The current source is just as described above. With R1 = 100k, the output peaks at 23 µA for ±2.5 V inputs. That current feeds the network R4/R5/D3, which suitably squashes the signal, ready for buffering into A2’s Vcon input. The gain characteristic is now much more linear, as the waveforms in Figure 4 indicate. The TDA7052A’s Vcon pin normally either sinks or sources current, but emitter follower Q3 overrides that as well as buffering the output from the network.
Figure 4 Some waveforms from Figure 3, showing its operation.
To show the operation more cleanly, the plots were made using a 10-Hz tri-wave to modulate a 700-Hz sine wave. (The target application would have an infrasonic signal—from, say, 300 mHz to 10 Hz—modulating a pitch-linear audio tone ranging from about 250 to 1000 Hz depending on the signal’s absolute level.)
Some further notes on the circuitry
The values for R4/R5/D3 were optimized by a process of heuristic iteration, which is fancy-speak for lots of fiddling with trimmers until things looked right on the ’scope. These worked for me with the devices to hand. Others gave similar results; the absolute values are less important than the overall combination.
R7 and R8 may seem puzzling: there’s nothing like them on the PA’s datasheet. I found that applying a little bias to the audio input pin helps minimize the chip’s internal offsets, which otherwise cause some (distorted) feedthrough from the control voltage to the outputs. With a modulating input but no audio present, trim R7 for minimum signal at the output(s). The difference is barely audible, but it shows up clearly on a ’scope as traces that are badly slewed.
The audio feed needs to come from a volume-control pot. While it might seem more obvious to incorporate gain control in the network driving A2.4—after all, that’s the primary function of that pin—that proved over-complicated, and introduced yet more temperature effects.
Temperature effects! The current source is (largely) free of them, but D3, Q3, and A2 aren’t, and I have made no attempt to compensate for their contributions. The practical solution is to make R6 variable: a large, user-friendly knob labeled “Effect”, thus turning the problem into A Feature.
A2’s Vcon pin sinks/sources some (temperature-dependent) current, so varying R6 allows reasonable, if manual, temperature compensation. Because its setting affects both the gain and the part of the gain curve that we are using, the effective baseline is shifted, allowing more or less of the audio corresponding to low-level modulating signals to pass through. Figure 5 shows its effect on the output at around 20°C.
Figure 5 Varying R6 helps compensate for temperature problems and allows different audible effects.
Don’t confuse this circuit with a “proper” amplitude modulator! But for taking an audio signal, modulating it reasonably linearly, and driving the result directly into a speaker, it works well. The actual result can be seen in Figure 6, which shows both the detected infrasonic signal resulting from a gusty day and the audio output, whose frequency changes are invisible with the timebase used, but whose amplitude can be seen to track the modulating signal quite nicely.
Figure 6 A real-life infrasonic signal with the resulting audio modulated in both frequency (too fast to show up here) and amplitude.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- Power amplifiers that oscillate— Part 1: A simple start.
- Power amplifiers that oscillate—deliberately. Part 2: A crafty conclusion.
- Revealing the infrasonic underworld cheaply, Part 1
- Revealing the infrasonic underworld cheaply, Part 2
- Ultra-low distortion oscillator, part 1: how not to do it.
- Ultra-low distortion oscillator, part 2: the real deal
The post Why modulate a power amplifier?—and how to do it appeared first on EDN.
Wolfspeed appoints Bret Zahn as general manager of Automotive business
5N Plus scales up and expands critical materials supply agreement with First Solar
Remote control of construction machinery with Yachiyo Engineering Co., Ltd
🇺🇦🇯🇵 Igor Sikorsky Kyiv Polytechnic Institute will cooperate with Japanese companies and organizations on the remote control of construction machinery.
Alcoa exploring feasibility of gallium production in Western Australia by 2026
Kyma and Novel Crystal Technology collaborate on gallium oxide epiwafers
Anritsu and AMD Showcase Electrical PCI Express Compliance up to 64 GT/s
Anritsu Corporation announced that it has helped AMD accelerate electrical compliance testing to the PCI Express (PCIe) specification for pre-production AMD EPYC CPUs. A maximum data rate of 64 GT/s was achieved using the high-performance Anritsu BERT Signal Quality Analyzer-R MP1900A, with testing done under challenging backchannel conditions whose insertion loss exceeded the 27 dB specified in the CEM specification, along with stress conditions applied using Spread Spectrum Clocking (SSC).
“In collaboration with Anritsu, we have achieved a stable demonstration of electrical compliance up to 64 GT/s,” said Amit Goel, corporate vice president, Server Engineering, AMD. “This early validation furthers our commitment to delivering reliable, high-speed I/O for future platforms powered by our next-generation AMD EPYC™ CPUs.”
“AMD is a key technology partner in advancing PCIe technology,” said Takeshi Shima, Director, Senior Vice President, Test & Measurement Company President of Anritsu Corporation. “We will continue to respond to various test needs and expand functions for PCIe compliance testing, while also contributing to quality evaluation and design efficiency of PCIe devices through proposals to standards organizations.”
PCIe 6.0 technology is the next-generation standard that provides a bandwidth of 64 GT/s per lane and up to 256 GB/s in a x16 configuration as a high-speed interface between internal devices such as CPUs, GPUs, SSDs, and network cards. While maintaining compatibility with previous standards, it achieves highly reliable and efficient communications in fields such as AI (Artificial Intelligence), HPC (High Performance Computing), and high-speed storage, greatly contributing to improving the performance of next-generation data centers and analytical systems.
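The headline bandwidth figures follow from simple raw-rate arithmetic (encoding and FLIT-framing overheads ignored):

```python
# Where "64 GT/s per lane, up to 256 GB/s in x16" comes from, raw rate only.
GT_PER_S = 64          # PCIe 6.0 per-lane signaling rate
LANES = 16             # x16 configuration

gb_per_lane_per_dir = GT_PER_S / 8              # 1 bit per transfer -> /8 for bytes
per_direction = gb_per_lane_per_dir * LANES     # 128 GB/s each way
bidirectional = per_direction * 2               # 256 GB/s aggregate

print(f"{per_direction:.0f} GB/s per direction, {bidirectional:.0f} GB/s bidirectional")
```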
The post Anritsu and AMD Showcase Electrical PCI Express Compliance up to 64 GT/s appeared first on ELE Times.
Broadcom Reveals Ethernet Fabric Router IC for Distributed AI Computing
onsemi’s EliteSiC technology powering 800V drive platform in Xiaomi SUVs
Half Adder PCB DIY
Me and my friend made this PCB that adds 2 bits to 2 bits and gives the result with 3 LEDs! It’s the first PCB we’ve designed :)
Disassembling a LED-based light that’s not acting quite right…right?

A few months back, I came across an LED-based desk lamp queued up to go out to the trash. When I asked my wife about it, she said (or at least my recollection is that she said) that it had gone dim, so she’d replaced it with another one. But the device didn’t include any sort of “dimmer” functionality, and I wasn’t (at the time, at least) aware that LED lighting’s inherent intensity could fade over time, only that it would inevitably flat-out fail at some point.
My curiosity sufficiently piqued, especially since I’d intercepted it on the way to the landfill anyway, I decided to take it apart first. It’s Hampton Bay’s 15.5 in. Black Indoor LED Desk Lamp, originally sold by Home Depot and currently “out of stock” both in-store and online; I assume it’s no longer available for purchase. Here are some stock shots of what it looks like, to start:
See: no dimmer. Just a simple on/off toggle:
I don’t remember when we bought it or what we paid for it; it had previously resided on my wife’s sewing table. The Internet Archive has four “snapshots” of the page, ranging from the end of June 2020, when it was apparently on sale for $14.71 versus the $29.97 MSRP (I hope we snagged it then!), through early December of last year. My wife took up sewing during the COVID-19 lockdown, so a 2020-era acquisition sounds about right.
Here’s what it looks like in “action” (if you can call it that) in my furnace room, striving (and effectively failing) to differentiate its “augmentation” of the baseline overhead lighting:
Turn off the room light, and the lamp’s standalone luminary capabilities still aren’t impressive:
And here’s a close-up of the light source in “action”, if you can call it that, in my office:
Scan through the reviews on the product page and, unless I overlooked something, you won’t find anyone complaining that it’s not bright enough. Several of the positive reviews go so far as to specifically note that it’s very bright. And ironically, one of the (few) negative ones indicates that it’s too bright. The specs claim that it has a 3W output (with no explicit lumens rating, much less a color temperature), which roughly translates to a 30W incandescent equivalent.
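That 30W equivalence is consistent with typical luminous-efficacy figures. A rough check, with assumed (not manufacturer-stated) lumens-per-watt values:

```python
# Rough check of the "3 W ~= 30 W incandescent" claim. Efficacy values are
# typical ballpark assumptions, not from the lamp's specs.
LED_LM_PER_W = 100            # common for budget LED sources
INCANDESCENT_LM_PER_W = 10    # typical tungsten bulb

led_lumens = 3 * LED_LM_PER_W                        # ~300 lm
equivalent_w = led_lumens / INCANDESCENT_LM_PER_W    # ~30 W incandescent

print(f"~{led_lumens} lm, roughly a {equivalent_w:.0f} W incandescent")
```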
Time to dive in. Let’s begin with the underside, where a label is attached to a felt “foot”:
A Google search on “Arcadia AL40165” reveals nothing meaningful results-wise aside from the Home Depot product page. “Intertek 4000145” isn’t any more helpful. And, regardless of when we actually bought it, this particular unit was apparently manufactured in December 2016.
Peeling back the felt “foot”, I was initially confused by the three closed-end crimp connectors revealed underneath:
until I peeled it away the rest of the way and…oh yeah, the on/off switch:
Note the wiring colors. Typically, in my experience, the “+” DC feed corresponds to the white wire, with the “-“ return segment handled by the black wire, and the “+” portion of the circuit is what’s switched. This implementation seems opposite of that convention. Hold that thought.
Now for the light source. With the lamp switched off, you can see what appears to be a central LED surrounded by several others in circumference. Conceptually, this matches the arrangement I’ve seen before with LED light bulbs, so my initial working theory was that whatever circuitry was driving the LEDs in the perimeter had failed, leaving only the central one still operational. Why there would be such a two-stage arrangement at all wasn’t obvious, although I postulated that this same hardware might find use in another lamp with a three-position (bright/dim/off) toggle switch.
Removing the diffuser:
unfortunately dashed that theory; there was only a single LED in the center:
Here’s what it looks like illuminated, this time absent the diffuser:
A brief aside at this point: what’s with the second “right?” in the title? Well, when I mentioned to my wife the other day that I’d completed the teardown but hadn’t definitively figured out why the lamp had dimmed over time, she now said that to the best of her recollection, it had always been dim. Hmmm. If indeed I’d previously misunderstood her (and of course, my default stance is to always assume my wife is right…right?), then what we have is a faulty LED from the get-go. But just for grins, let’s pretend my dimmer-over-time recollection is correct and proceed.
One other root cause possibility is that the power supply feeding the LED is in the process of failing, thereby outputting under-spec voltage and/or current. Revisiting the earlier white-vs-black wire discussion, when I initially probed the solder connections with my multimeter using standard polarity conventions, I got a negative voltage reading:
The LED theoretically could have been operating in reverse-bias breakdown (albeit presumably not for long). But more likely, in conjunction with the earlier-mentioned switch location in the circuit, the wire colors were just reversed. Yep, that’s more like it:
Note that their connections to the LED might still be reversed, however. Or perhaps the lamp’s power supply was current output-compromised. To test both of these suppositions, I probe-connected and fueled the LED with my inexpensive-and-passable power supply instead:

With the connections using standard white vs. black conventions, I got…nothing. Reversed, the LED light output weakly matched that delivered when driven by the lamp’s own power supply. And my standalone power supply also informed me that the lamp pulls 180 mA at 10 V.
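That measurement is itself telling: the lamp draws well under its rated power.

```python
# Bench measurement vs. the 3 W spec -- the LED isn't even drawing its
# rated power, consistent with a degraded (or underdriven) emitter.
v_meas, i_meas = 10.0, 0.180       # volts, amps, from the bench supply
p_meas = v_meas * i_meas           # = 1.8 W
print(f"Measured: {p_meas:.1f} W vs. 3 W rated")
```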
About that “lamp’s own power supply”, by the way (as-usual accompanied by a 0.75″/19.1 mm diameter U.S. penny for size comparison purposes):
The label refers to it as an “LED Driver,” but I’m guessing that it’s just a normal “wall wart”, absent a plug on the output end. And a Google search of “JG-LED1-5UPPL” (that’s the number 5, not an S, by the way) further bolsters that hypothesis (“Intertek 4002637” conversely wasn’t helpful at all, aside from suggesting that this power supply unit (PSU) was originally intended for a different lamp model). But I’m still baffled by the “DC5-10V MAX” notation in the labeled output specs…???
And removing two more screws, here’s what the plate the LED is mounted to looks like when separated from the “heatsink” behind it (note the trivial dab of thermal paste between them):
All leaving me with the same question I had at the start: what caused the LED-based desk lamp’s light output to dim, either over time or from the very beginning (depending on which spouse’s story you’re going with)? The most likely remaining culprit, I’m postulating, is the phosphor layer above the LED. I’ve already noted the scant-at-best heat-transfer interface between the LED and the metal plate behind it. More generally, as this device definitely exemplifies, my research suggests that inexpensive designs skimp on the number of LEDs to keep the BOM cost down, compensating by overdriving the one(s) that remain. The resulting thermal stress prematurely breaks down the phosphor, resulting in color temperature shifts and reduced light output, along with eventual complete component failure.
That’s my take; what’s yours? Let me know your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- LDR PC Desk Lamp
- Constant-current wall warts streamline LED driver design for lamps, cabinet lights
- Magic carpets come alive with LEDs
- Can GE’s new LED bulbs help you get to sleep?
- Six LED challenges that still remain
- LED lamp cycles on and off, why?
The post Disassembling a LED-based light that’s not acting quite right…right? appeared first on EDN.
Promising Longer Run Times, TI Rolls Out Predictive Battery Management ICs
San’an Optoelectronics and Inari to acquire Lumileds
Having fun Calibrating my Nau7802 inside my freezer
Before building a full temperature-controlled chamber for slow/“natural” temp variance… I’m trying to see how my scale behaves in various environments, haha
TrainingKit (3rd try)
I’m so proud of making this :)
Top 10 TMT Bar in India
With India undergoing rapid urbanisation and infrastructural growth, there is demand for materials that are strong, resilient, and sustainable. TMT bars form the crux of modern construction. Construction across the country spans many varieties: residential buildings, flyovers, industrial plants, skyscrapers, and bridges. TMT bars provide strength with a degree of flexibility, enhancing both durability and safety.
A whole range of TMT bar brands competes for buyers who seek high-performance bars with the most advanced features. The bars differ in raw materials, manufacturing technology, strength, ductility, and corrosion resistance.
Here follows a comprehensive guide to the top 10 TMT bar brands in India, covering their distinguishing features, applications, and technological advantages.
- TATA Tiscon 550SD:
TATA Tiscon, backed by TATA Steel, pioneered the Indian TMT industry as the country’s first rebar brand. Its super-ductile 550SD TMT bar is produced using advanced technology from Morgan, USA. Being GreenPro certified, TATA Tiscon 550SD bars are environmentally friendly. Their high tensile strength and flexibility make them ideal for earthquake-prone zones and heavy-duty infrastructure.
- SAIL’s SeQR:
SAIL’s SeQR TMT bars, manufactured by Steel Authority of India Limited, offer outstanding ductility, fire resistance, and a high UTS/YS ratio. The bars are heat resistant up to 600°C, and special corrosion-resistant varieties (HCR) are available for coastal or damp environments.
SAIL’s TMT bars also absorb energy well, a property desired for resisting seismic shocks and other sudden structural stresses.
- JSW Neosteel:
Produced from virgin iron ore, JSW Neosteel 500D/550D bars offer superior metallurgical quality. Because of their weldability and flexibility, they can resist seismic forces, making them sought after in regions prone to earthquakes.
Their low carbon content maintains structural integrity and allows easy fabrication, especially for large projects.
- Jindal Panther:
Jindal Panther TMT bars have ductility and bonding strength imparted by German TEMPCORE technology. Uniform rib patterns allow a better grip for concrete, which serves the objectives for high-rise buildings.
The FE 500D grade is said to embody the right mix of strength and flexibility.
- SRMB Steel:
SRMB uses special X-pattern ribs to ensure a better grip with cement, minimizing slippage and improving structural performance. The bars are ISO and BIS certified, offer good corrosion resistance, and suit a variety of residential and commercial applications.
- Kamdhenu TMT:
These micro-alloyed steel bars from Kamdhenu are ISI-certified and supplied all over India. Classified as 550D TMT bars, they offer good elongation, flexibility, and fire resistance.
They are a cost-effective option for home construction, real estate, and semi-urban projects.
- Vizag Steel:
Produced by RINL (Rashtriya Ispat Nigam Limited), Vizag Steel TMT bars find wide use in government and public infrastructure projects. Their uniform quality and corrosion resistance suit large-scale civil projects, including metros, flyovers, and industrial buildings.
- Shyam Steel:
Shyam Steel FE 500D TMT bars offer fire resistance, corrosion protection, and good elongation values; they are ISO-certified and made with German technology. They are recommended for high-rise residential complexes as well as commercial buildings.
- Electrosteel:
Electrosteel TMT bars, widely renowned for their bendability, low carbon content, and rust resistance, give buyers durability at a good price. Private contractors and small-scale builders seeking value for money favor them.
- Essar TMT Bars:
Essar TMT bars, processed with Thermex technology, ensure uniformity, weldability, and a good finish; hence, they find widespread use in commercial buildings, real estate projects, and infrastructure, with long durability.
Comparison:
| Brand | Key Strengths | Grades | Technological Edge |
| --- | --- | --- | --- |
| TATA Tiscon | GreenPro certified, super ductility, earthquake resistant | FE 415, FE 500, 550SD | Morgan USA tech, automated production |
| SAIL SeQR | Fire-resistant up to 600°C, corrosion & seismic resistant | FE 500, EQR, HCR | High UTS/YS ratio |
| JSW Neosteel | Made from virgin iron ore, high strength-to-weight ratio | 500D, 550D | Thermo-Mechanical Treatment, low carbon content |
| Jindal Panther | German technology, ductile, strong bonding | FE 500D | TEMPCORE technology |
| SRMB Steel | X-pattern ribs for superior grip, BIS & ISO certified | FE 500, 550 | X-rib technology, corrosion resistance |
| Kamdhenu TMT | Micro-alloyed steel, pan-India reach | FE 500, 550D | ISI-certified |
| Vizag Steel | Corrosion-resistant, government-preferred | FE 500D | Integrated steel plant production |
| Shyam Steel | Weldability, fire resistance, high elongation | FE 500D | German machinery, ISO certified |
| Electrosteel TMT | Rust-proof, strong bendability, BIS certified | FE 500D | Uniform heat treatment |
| Essar TMT Bars | Excellent finish, good weldability, long life | FE 500D | Thermex process |
Choosing the right TMT bar is the cornerstone of structural integrity and long-term durability. Consider the following factors when choosing a TMT bar:
Grade of the Bar:
FE 415 is suitable for small residential buildings. FE 500 and 550D are used for high-rises, bridges, and commercial structures because of their higher tensile strength.
Corrosion Resistance:
Bars such as SAIL SeQR HCR or JSW Neosteel offer corrosion resistance in coastal or humid environments.
Earthquake Resistance:
In seismic zones, bars with high ductility and a high UTS/YS ratio, such as TATA Tiscon 550SD or Jindal Panther, are required.
Certifications & Quality Assurance:
Look for brands certified by BIS, ISO, or GreenPro; such certification assures compliance with Indian construction standards.
Conclusion:
India’s future infrastructure will rely on materials that combine strength and safety with sustainability. The right choice of TMT bar brand is therefore important for structural integrity. Whether it be for a small house or a mega commercial complex, the above-mentioned brands provide features suitable for a whole range of applications.
The post Top 10 TMT Bar in India appeared first on ELE Times.