Feed aggregator
I spent so long making this as artistically as possible only to realize I connected the op amp's output to the non-inverting pin while trying to make a buffer....
I hate myself I hate myself I hate myself I hate myself I hate myself I hate myself
How to control your impulses—part 1
Editor’s note: The first part of this two-part design idea (DI) shows how modifications to an oscillator can produce a useful and unusual pulse generator. The second part will extend this to step function generation.
The principle behind testing the impulse response of circuits is simple: hit them with a sharp pulse and see what happens. As usual, Wikipedia has an article detailing the process. It notes that the ideal pulse—a unit impulse, or Dirac delta—is infinitely high and infinitely narrow with unit area beneath it, so it's infinitely tricky to generate; that's just as well, considering the effects one would have on everything from protection diodes to slew rates. Fortunately, it's just an extreme case of the normal or Gaussian distribution, or bell curve, which is a tad easier to generate, or at least emulate, which this DI shows how to do.
Wow the engineering world with your unique design: Design Ideas Submission Guide
In the real world, the best testing impulses come from arbitrary waveform generators. An older technique is to filter narrow rectangular pulses, but if you change the pulse width, the filter's characteristics also need to change to maintain the pulse shape. The approach detailed here avoids that problem by generating raised cosine pulses (not to be confused with raised-cosine filters), which are close enough to the ideal to be interesting. But let's be honest: simple rectangles, slightly slugged to avoid those slew-rate problems, are normally quite adequate.
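As a point of reference, the raised-cosine pulse the circuit emulates has a simple closed form. The sketch below (plain Python; the 50 µs period and 1 µs sample step are arbitrary illustrative choices) generates one such pulse numerically:

```python
import math

def raised_cosine_pulse(t, T):
    """Raised-cosine pulse: rises from 0 to a peak of 1 and back over period T."""
    if 0.0 <= t <= T:
        return 0.5 * (1.0 - math.cos(2.0 * math.pi * t / T))
    return 0.0  # flat baseline outside the single cycle

# Sample one 50 us pulse at 1 us steps
T = 50e-6
samples = [raised_cosine_pulse(i * 1e-6, T) for i in range(51)]
```

The pulse starts and ends at the baseline with zero slope, which is what makes it gentler on slew rates than a filtered rectangle.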
Producing our pulses
We make our pulses by taking the core of a squashed-triangle sine-wave oscillator and adding some logic and gating so that, when triggered, it produces a single cycle that rises from a baseline to its peak and then falls back again, following a cosine curve. The schematic in Figure 1 shows the essentials.
Figure 1 A simple oscillator with some added logic generates single pulses when triggered.
How the oscillator works
The oscillator's core is almost identical to the original, though it looks different, having been redrawn. Its basic form is that of an integrator-with-Schmitt, where C1 is charged up through resistors R2 and R3 until its voltage reaches a positive threshold defined by D3, which flips A1b's polarity, so that C1 starts to discharge towards D4's negative threshold. D1/D2 provide bootstrapping to give linear charge/discharge ramps while compensating for variations in D3/D4's forward voltages with temperature (and supply voltage, though that should not worry us here). The resulting triangle wave on A2's output is fed through R7 into D5/D6, which squash it into a reasonable (co)sine wave (<0.5% THD). The diode pairs' forward voltages need to be matched to maintain symmetry and so minimize even-harmonic distortion. A4 amplifies the signal across D5/6 so that the pulse just spans the supply rails, with thermistor Th1 giving adequate compensation for temperature changes.
If A2’s output were connected directly to R1’s input, the circuit would oscillate freely—and we’ll allow it to later on—but for now we need it to start at its lowest point, make one full cycle, and then stop.
In the resting condition, U2a is clear and A1b’s output is high, producing a positive reference voltage across D3. (That’s positive with respect to the common, half-supply internal rail.) That voltage is inverted by A2a and applied through U1a to R1, so that there is negative feedback round the circuit, which stabilizes at the negative reference. (Using a ‘4053 for U1 may seem wasteful, but the other sections of it will come in handy in Part 2.)
When U2a's D input sees a (positive-going) trigger, its outputs change state, and U1a connects R1 to A1b's (still high) output, starting the cycle; the feedback is now positive. After a full cycle, A1b's output goes high again, triggering U2b and resetting U2a, thus stopping the cycle and restoring the circuit to its resting state. The relevant waveforms are shown in Figure 2.
Figure 2 Some waveforms from the circuit in Figure 1.
Comparing raised cosines with ideal normal-distribution pulses is instructive, and Figure 3 shows both. While most of each curve matches reasonably well, the bottom third or so is somewhat wanting, though it can be improved with some extra complexity—but that's for later.
Figure 3 A comparison between an ideal normal-distribution curve and a raised cosine, including the output from Figure 1.
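The mismatch in Figure 3 can be put into numbers. Assuming the two curves are aligned by matching the Gaussian's half-height width to the raised cosine's (an assumption about how the figure is drawn, not something stated in the article), a quick numerical comparison looks like this:

```python
import math

def raised_cosine(t, T):
    return 0.5 * (1.0 - math.cos(2.0 * math.pi * t / T))

def gaussian(t, T):
    # Gaussian matched to the raised cosine's half-height width (FWHM = T/2)
    fwhm = T / 2.0
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-((t - T / 2.0) ** 2) / (2.0 * sigma ** 2))

T = 1.0
devs = [abs(raised_cosine(t, T) - gaussian(t, T))
        for t in (i / 1000.0 for i in range(1001))]
worst = max(devs)  # roughly 0.08 of full scale, occurring near the baseline
```

The worst-case error of around 8% of full scale sits low on the curve, consistent with the "bottom third" observation above.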
As previously mentioned, and apparent from the schematic, the circuit works as a simple oscillator if U2a's operation is disabled by inhibiting its trigger input and jamming its preset input low to force its Q output high and its Q̅ output low. U1a then connects A1b's output to R1, and the circuit runs freely. Apart from being a useful feature in itself, this helps us to set it up.
Trimming the oscillator
A few trims, in the oscillator mode, are needed to get the best results.
- R3 must be set to give equal tri-wave amplitudes at the maximum and minimum settings of R2, or distortion will vary with frequency (or pulse width). Set R2 to max (lowest frequency) and R3 to min (towards the right on the schematic), then measure the amplitude at A1’s output. Now set R2 to min and adjust R3 to give the same amplitude as before. (Thanks to Steve Woodward for the idea behind this.)
- R7 defines the drive to the squashing diodes D5/6 and thus the distortion. Using a 'scope's FFT is the best approach: adjust R7 to minimize the third and fifth harmonics. (The seventh remains fairly constant.) Failing that, set R7 so that the voltage across the diodes is precisely 2/3 of the tri-wave's value. As a last resort, a 30 kΩ fixed resistor may be close enough, as it was in my build.
- Set the output level using R9. The waveform should run from rail to rail, just shaving the tips of the residual pips (which are mainly responsible for those seventh harmonics) from the peaks. Don’t overdo it, or the third and fifth harmonics will start to increase. This depends on using RRO op-amps for at least A1b and A2b and carefully-split rails for symmetry.
Once trimmed as an oscillator, it’s good to go as a pulse generator, which relies on exactly the same settings, so that each pulse will be a single cycle of a cosine wave, offset by half its amplitude.
The schematic in Figure 1 gives the bare bones of the circuit, which will be fleshed out in Part 2. The op-amps used are Microchip MCP6022s, which are dual, 5-V, 10-MHz CMOS RRIO devices with <500 µV input offsets. Power is at 5 V, with the central "common" rail derived from another op-amp used as a rail-splitter, shown in Figure 4 together with a suitable output buffer.
Figure 4 A simple rail-splitter to derive the 2.5-V “common” rail, and an output level control and buffer with both AC- and DC-coupled outputs.
C1 can be switched to give several ranges, allowing use from way over 20 kHz (for 25 µs pulses, measured at half their height) down to as low as you like. R3 then also needs to be switched; see Figure 5 for a three-range version. (The lowest range probably won’t need an HF trim.) While the tri-wave performance is good to around 1 MHz, the squashing diodes’ capacitance starts to introduce waveform distortion well before that, at least for the 1N4148 or the like.
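The quoted 25 µs width at 20 kHz follows directly from the pulse shape: a raised cosine sits above half height for exactly half its period, so the half-height width is 1/(2f). A one-line check:

```python
# 0.5*(1 - cos(2*pi*t/T)) exceeds 0.5 only for T/4 < t < 3T/4, a span of T/2
def half_height_width(osc_freq_hz):
    return (1.0 / osc_freq_hz) / 2.0

width_s = half_height_width(20e3)  # 25 us at a 20 kHz oscillator setting
```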
Figure 5 For multi-range use, timing capacitor C1 is switched. To trim the HF response for each range, R3 must also vary.
Improving the pulse shape
Now for that extra complexity to improve the pulse shape. In very crude terms, the top half of the desired pulse looks (co)sinusoidal but the bottom looks more exponential, and that part must be squashed even further if we want a better fit. We can do that by bridging D6 with a series pair of Schottky diodes, D7 and D8. The waveform's resulting asymmetry needs offsetting, necessitating a slightly higher gain and different temperature compensation in the buffer stage A2b. These mods are shown in Figure 6.
Figure 6 Bridging D6 with a pair of Schottky diodes gives a better fit to the desired curve, though the gain and offset need adjusting.
In this mode, R16 sets the offset and R9A the gain. The three sections of U3 will:
- Switch Schottkys D7/8 into circuit
- Select the gain- and offset-determining components according to the mode
- Short out R8 to place the thermistor directly across R12 and optimize the temperature compensation of the pulse’s lower half
Figure 7 shows the modified pulse shape. Different diodes or combinations thereof could well improve the fit, but this seems close enough.
Figure 7 The improved pulse shape resulting from Figure 6.
To set this up, adjust R16 and R9A (which interact; sorry about that) so that the bottom of the waveform is at 0 V while the peaks are at a little less than 5 V. Because the top and bottom halves of each pulse rely on different diodes, their tempcos will be slightly different. The 0-V baseline is now stable, but the peak height will increase slightly with temperature.
To be continued…
By now, we've probably passed the point at which it would be simpler, cheaper, and more accurate to reach for a microcontroller (Arduino? RPi?) and add a DAC—or just use a PWM output at these low frequencies—equip it with look-up tables (probably calculated and formatted using Python, rather like the reference curves in these figures), and then worry about how to get continuous control of the repetition rate and pulse width. Or even just buy a cheap AWG, which is cheating, though practical.
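As a taste of that microcontroller route, generating and formatting a raised-cosine look-up table really is only a few lines of Python. The 256-point length, 12-bit full scale, and C array name below are arbitrary choices for illustration, not anything specified in the article:

```python
import math

def make_raised_cosine_lut(n_points=256, full_scale=4095):
    """Raised-cosine pulse table scaled for a hypothetical 12-bit DAC (0..4095)."""
    return [round(full_scale * 0.5 * (1.0 - math.cos(2.0 * math.pi * i / (n_points - 1))))
            for i in range(n_points)]

lut = make_raised_cosine_lut()
# Format as a C array initializer for pasting into firmware
c_table = "const uint16_t pulse_lut[%d] = {%s};" % (len(lut), ", ".join(map(str, lut)))
```

The table starts and ends at zero and peaks at full scale, so a timer-paced DAC (or PWM) sweep through it reproduces one offset cosine cycle per trigger.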
But all that is a different kind of fun, and we have not yet finished with this approach. Part 2 will show how to add more tweaks so that we can also generate well-behaved step-functions.
—Nick Cornford built his first crystal set at 10, and since then has designed professional audio equipment, many datacomm products, and technical security kit. He has at last retired. Mostly. Sort of.
Related Content
- Squashed triangles: sines, but with teeth?
- Dual RRIO op amp makes buffered and adjustable triangles and square waves
- Arbitrary waveform generator waveform creation using equations
- 555 triangle generator with adjustable frequency, waveshape, and amplitude; and more!
- Adjustable triangle/sawtooth wave generator using 555 timer
- Voltage-controlled triangle wave generator
The post How to control your impulses—part 1 appeared first on EDN.
Chiplets diary: Controller IP complies with UCIe 1.1 standard
While physical layer (PHY) interconnect IP has been making headlines after the emergence of the Universal Chiplet Interconnect Express (UCIe) specification, a Korean design house has announced the availability of controller IP that complies with the UCIe 1.1 standard.
The PHY part in UCIe encompasses link initialization, training, power management states, lane mapping, lane reversal, and scrambling. On the other hand, UCIe’s controller part includes the die-to-die adapter layer and the protocol layer.
Openedges Technology calls its offering OUC, short for Openedges UCIe controller. Openedges, a supplier of memory subsystem IP, is based in Seoul, South Korea. Its controller IP extends on-chip AXI interconnect to die-to-die links, delivering multi-die connectivity across diverse applications.
The chiplet controller IP employs flits (flow control units) to manage reliability and latency, preventing overflow at the receiver buffer. It also ensures seamless communication by synchronizing AXI parameters with its link partner, accommodating different AXI configurations through padding and cropping as per the default operation rules defined in AXI.
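The announcement does not detail Openedges' buffer-management scheme, but credit-based flow control is the standard way flit-based links prevent receiver overflow. A generic sketch (all names invented for illustration):

```python
from collections import deque

class FlitReceiver:
    """Bounded receive buffer; each consumed flit frees one credit."""
    def __init__(self, depth):
        self.buffer = deque()
        self.depth = depth
    def accept(self, flit):
        assert len(self.buffer) < self.depth, "overflow: sender ignored credits"
        self.buffer.append(flit)
    def consume(self):
        self.buffer.popleft()
        return 1  # one credit returned to the sender

class FlitSender:
    """Transmits only while it holds credits, so the receiver can never overflow."""
    def __init__(self, receiver):
        self.receiver = receiver
        self.credits = receiver.depth
    def send(self, flit):
        if self.credits == 0:
            return False  # back-pressure: wait for a credit
        self.receiver.accept(flit)
        self.credits -= 1
        return True
    def on_credit(self, n):
        self.credits += n

rx = FlitReceiver(depth=4)
tx = FlitSender(rx)
sent = sum(tx.send(i) for i in range(6))  # only 4 of 6 attempts succeed
tx.on_credit(rx.consume())                # receiver drains one flit
ok = tx.send(99)                          # and transmission can resume
```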
The highly configurable UCIe controller IP facilitates die-to-die interconnect and protocol connections. Source: Openedges Technology
In short, the new controller IP effortlessly integrates with the company’s on-chip interconnect IP. That synergy simplifies multi-chiplet interconnects while facilitating efficient bandwidth transfer capabilities.
Related Content
- TSMC, Arm Show 3DIC Made of Chiplets
- Chiplets Get a Formal Standard with UCIe 1.0
- How the Worlds of Chiplets and Packaging Intertwine
- Cadence and Arm launch ADAS chiplet development platform
- Imec’s Van den hove: Moving to Chiplets to Extend Moore’s Law
The post Chiplets diary: Controller IP complies with UCIe 1.1 standard appeared first on EDN.
🏠 Procedure for moving into the dormitories of Igor Sikorsky Kyiv Polytechnic Institute (KPI) for the 2024/2025 academic year
🟡 Given the particulars of how the academic process is organized at the university and the available number of places in the shelters of the student campus of Igor Sikorsky KPI
VisIC adds Daimler/Mercedes Benz veteran Wolfgang Wondrak as consultant
SweGaN secures frame agreements for QuanFINE GaN-on-SiC epiwafers
The Energy Crisis in AI and the Analog Chip Solution
Artificial Intelligence (AI) has ushered in a new era of innovation, transforming industries with its ability to process vast amounts of data, make complex decisions, and automate tasks. However, this rapid advancement comes at a significant cost: AI’s intense computational demands are raising alarm bells about energy consumption and environmental sustainability. Currently, AI technologies account for approximately 7% of global electricity usage, a figure comparable to the entire annual electricity consumption of India. As AI continues its exponential growth, it becomes increasingly urgent to explore more sustainable alternatives in AI hardware. One promising solution lies in the development and adoption of analog chips.
Why Pursue Sustainable AI?
The dramatic rise in AI applications has led to a corresponding surge in energy consumption, primarily due to the vast computational resources required. Traditional digital computing, the backbone of most AI systems today, is notoriously energy-intensive, contributing significantly to the global carbon footprint. Data centers, which are central to AI computations, currently consume about 1% of the world's electricity—a figure projected to rise to between 3% and 8% in the coming decades if current trends continue.
The environmental impact of AI extends beyond just energy use. The production and disposal of electronic hardware contribute to the growing problem of electronic waste (e-waste), which poses serious environmental hazards. Furthermore, the cooling systems required to maintain large data centers exacerbate water consumption and environmental degradation. These challenges underscore the need for sustainable AI technologies that can reduce energy and resource use while minimizing e-waste. Developing energy-efficient hardware and optimizing algorithms to lower power consumption are critical steps toward achieving sustainable AI. Analog chips, which have the potential to significantly reduce energy consumption, offer a promising path forward.
IBM and Startups Lead Analog Chip Innovation
IBM has been a leader in the development of analog chips for AI, pioneering innovations with its brain-inspired designs. IBM's analog chip utilizes phase-change memory (PCM) technology, which operates with much lower energy consumption than traditional digital chips. PCM technology works by altering the material state between crystalline and amorphous forms, enabling high-density storage and rapid access times—key qualities for efficient AI data processing. In IBM's design, PCM is employed to replicate synaptic weights in artificial neural networks, enabling energy-efficient learning and inference processes.
Beyond IBM, various startups and research institutions are also exploring the potential of analog chips in AI. For instance, Austin-based startup Mythic has developed analog AI processors that integrate memory and computation. This integration allows AI tasks to be performed directly within the memory, reducing data movement and enhancing energy efficiency. Additionally, Rain Neuromorphics is focused on neuromorphic computing, using analog chips designed to mimic biological neural networks. These chips process signals continuously and perform neuronal computations, making them ideal for scalable and adaptable AI systems that can learn and respond in real-time.
Applications of Analog Chips in AI
Analog chips could revolutionize several AI applications by providing energy-efficient and scalable hardware solutions. Some key areas where analog chips could have a significant impact include:
- Edge Computing: Edge computing involves processing data near the source, such as sensors or IoT devices, rather than relying on centralized data centers. This approach can reduce latency, enhance real-time decision-making, and lower the energy costs associated with data transmission. Analog chips, with their low power consumption and compact designs, are well-suited for edge computing applications. They allow AI-powered devices to execute complex computations directly at the edge, thereby cutting down on data transfer requirements and significantly lowering energy consumption.
- Neuromorphic Computing: Neuromorphic computing aims to replicate the structure and function of the human brain to create more efficient and adaptive AI systems. Analog chips are particularly well-suited for neuromorphic computing because they can process continuous signals and perform parallel computations. By mimicking the analog nature of neural processes, analog chips can enable energy-efficient and scalable AI systems capable of learning and adapting in real time.
- Efficiency in AI Inference and Training: Analog chips are inherently well-equipped for AI inference and training, not just as an application but as a core design feature. These chips excel at performing matrix multiplication operations—a fundamental component of neural network computations—with far greater efficiency than digital chips. This efficiency translates into substantial energy savings during AI training and inference, allowing for the scalable deployment of AI models without the prohibitive energy costs typically associated with digital chips. As a result, analog chips are a natural choice for enhancing the sustainability and scalability of AI technologies.
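The matrix-multiplication point above can be made concrete with a behavioral sketch. This is not any vendor's design; the conductances, voltages, and noise level are arbitrary, and the "analog" part is simply that each output is one Ohm's-law-and-Kirchhoff summation rather than a sequence of digital multiply-accumulates:

```python
import random

def analog_matvec(G, V, noise_sigma=0.0):
    """Model of an analog matrix-vector product: conductance array G (the
    stored weights) times applied voltages V, with optional read noise."""
    out = []
    for row in G:
        current = sum(g * v for g, v in zip(row, V))  # currents sum on the wire
        out.append(current + random.gauss(0.0, noise_sigma))
    return out

G = [[0.2, 0.5], [0.1, 0.4]]
V = [1.0, 2.0]
ideal = analog_matvec(G, V)                      # noiseless: [1.2, 0.9]
noisy = analog_matvec(G, V, noise_sigma=0.05)    # precision limited by noise
```

The noise term models the precision problem discussed below: the same physics that makes the multiply nearly free also makes the result only approximately repeatable.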
While the potential of analog chips for sustainable AI is immense, several challenges must be addressed to fully realize their potential. A major challenge lies in developing analog computing architectures that can match the precision and accuracy of digital computations. Analog computations are naturally prone to noise and variations, potentially impacting the reliability of AI models.
Ongoing research is focused on developing techniques to mitigate these concerns and improve the robustness of analog AI systems. Despite these challenges, analog chips remain highly suitable for applications such as sensor data processing and real-time environmental monitoring, where slight variability introduced by noise does not outweigh the benefits of reduced power consumption and faster processing speeds. Another challenge is integrating analog chips into the predominantly digital infrastructure of current AI systems. This transition will require significant modifications to both hardware and software stacks.
Efforts are underway to create hybrid architectures that combine the strengths of analog and digital computing, facilitating a smoother transition to more sustainable AI hardware. Despite these obstacles, the future of analog chips in AI looks promising. Ongoing progress in materials science, circuit design, and AI algorithms is fueling the creation of more efficient and scalable analog AI systems. As the demand for environmentally friendly AI solutions grows, analog chips are poised to play a critical role in powering energy-efficient AI technologies.
Case Study: IBM's Brain-Inspired Analog Chip
Generative AI technologies such as ChatGPT, DALL-E, and Stable Diffusion have dramatically impacted various fields, from marketing to drug discovery. Despite their innovative potential, these systems are substantial energy consumers, demanding data centers that emit considerable carbon dioxide and use enormous amounts of energy. As neural networks grow more complex and their usage expands, energy consumption is expected to rise even more.
IBM has made a significant advancement in tackling this issue with a novel 14-nanometer analog chip equipped with 35 million memory units. Unlike conventional chips, where data must constantly move between processing and memory units, IBM's chip performs computations directly within these memory units, drastically reducing energy consumption. Typically, data transfer can drive energy usage to between 3 and 10,000 times the actual computational requirement.
This chip showcased remarkable energy efficiency in two speech recognition tasks. The first task, Google Speech Commands, is relatively small but requires high-speed processing. The second, Librispeech, is a more extensive system designed for converting speech into text, testing the chip’s ability to handle large volumes of data. When compared to traditional computing systems, IBM’s chip delivered comparable accuracy but completed tasks more quickly and with significantly lower energy consumption—using as little as one-tenth of the energy required by standard systems for certain tasks.
Analog Chips: Bridging the Gap Between Digital and Neuromorphic Computing
This analog chip is part of IBM's broader efforts to push neuromorphic computing from theory to practicality—a chip that could one day power everyday devices with efficiency approaching that of the human brain.
Traditional computers are built on the Von Neumann architecture, which separates the central processing unit (CPU) and memory, requiring data to be shuttled between these components. This process consumes time and energy, reducing efficiency. In contrast, the brain combines computation and memory in a single unit, allowing it to process information with far greater efficiency.
IBM’s analog chips mimic this brain-like structure, using phase-change materials that can encode multiple states, not just binary 0s and 1s. This ability to exist in a hybrid state allows the chip to perform multiple calculations without moving a single bit of data, dramatically increasing efficiency.
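The multi-state idea can be illustrated with a toy quantizer; the 16-level count and the [-1, 1] weight range here are arbitrary assumptions, not IBM's actual cell parameters:

```python
def quantize_to_levels(weight, n_levels=16, w_min=-1.0, w_max=1.0):
    """Map a continuous weight onto one of n_levels stored states, rather
    than the two states (0/1) a conventional binary memory cell offers."""
    weight = max(w_min, min(w_max, weight))     # clip to the representable range
    step = (w_max - w_min) / (n_levels - 1)
    level = round((weight - w_min) / step)
    return level, w_min + level * step          # stored state and its analog value

level, stored = quantize_to_levels(0.37)        # lands on the nearest of 16 states
```

A cell that holds one of 16 states stores four bits' worth of a synaptic weight in the space and energy budget of a single device, which is where the density and efficiency gains come from.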
Overcoming Challenges in Analog AI Chips
Despite the promise of analog chips, they are still in their early stages of development. One major challenge is the initialization of the AI chip, given the vast number of parameters involved. IBM addressed this issue by pre-programming synaptic weights before computations begin, akin to "seasoning" the chip for optimal performance. The results were impressive, with the chip achieving energy efficiency tens to hundreds of times greater than the most powerful CPUs and GPUs.
However, the path forward for analog chips requires overcoming several hurdles. One key area for improvement is the design of the memory technology and its surrounding components. IBM’s current chip does not yet contain all the elements needed for full functionality. The next crucial step involves consolidating all components into a single chip without compromising its effectiveness.
On the software side, developing algorithms specifically tailored to analog chips and creating software that can readily translate code into machine-understandable language are essential. As these chips become more commercially viable, developing dedicated applications will be crucial to keeping the dream of an analog chip future alive.
Building the computational ecosystems in which CPUs and GPUs operate successfully took decades, and it will likely take years to establish a similar environment for analog AI. Nevertheless, the enormous potential of analog chips for combating AI’s sustainability challenges suggests that the effort will be well worth it.
The post The Energy Crisis in AI and the Analog Chip Solution appeared first on ELE Times.
Highest power density for DCDC converters demonstrated by Vitesco Technologies using Infineon’s CoolGaN Transistors
DCDC converters are essential in any electric or hybrid vehicle to connect the high-voltage battery to the low-voltage auxiliary circuits. At 12 V this includes power for headlights, interior lights, wiper and window motors, and fans; at 48 V, pumps, steering drives, lighting systems, electrical heaters, and air-conditioning compressors. The DCDC converter is also important for developing more affordable and energy-efficient vehicles with an increasing number of low-voltage functions. According to TechInsights, the global automotive DC-DC converter market was valued at USD 4 billion in 2023 and is projected to grow to USD 11 billion by 2030, a CAGR of 15 percent over the forecast period.

Gallium nitride (GaN) plays a crucial role here, as it can be used to improve the power density of DCDC converters and on-board chargers (OBCs). For this reason, Vitesco Technologies, a leading supplier of modern drive technologies and electrification solutions, has selected GaN to improve the power efficiency of its Gen5+ GaN Air DCDC converter. The CoolGaN Transistors 650 V from Infineon Technologies AG significantly improve overall system performance while minimizing system cost and increasing ease of use. As a result, Vitesco has created a new generation of DCDC converters that set new standards in power density (efficiency of over 96 percent) and sustainability for power grids, power supplies, and OBCs.
The advantages of GaN-based transistors in high-frequency switching applications are considerable; most important is the higher switching frequency, which has been increased from 100 kHz to over 250 kHz. This enables very low switching losses, even in hard-switched half-bridges, with minimized thermal and overall system losses. Infineon's CoolGaN Transistors also feature high turn-on and turn-off speeds and are housed in a top-cooled TOLT package. They are air-cooled, eliminating the need for liquid cooling and thereby reducing overall system costs. The 650 V devices also improve power efficiency and density, enabling 800 V architectures. They feature an on-resistance (RDS(on)) of 50 mΩ, a transient drain-to-source voltage rating of 850 V, an IDS,max of 30 A, and an IDS,max,pulse of 60 A.
“We are delighted to see industry leaders like Vitesco Technologies using our GaN devices and innovating with their applications,” said Johannes Schoiswohl, Senior Vice President & General Manager, GaN Systems Business Line Head at Infineon. “The ultimate value of GaN is demonstrated when it changes paradigms, as in this example of moving from a liquid-cooled system to an air-cooled system.”
With GaN transistors, Vitesco Technologies was able to design its Gen5+ GaN Air DCDC converters with passive cooling, which reduces the system's overall cost. The GaN devices also allow for simplified converter design and mechanical integration. As a result, the DCDC converters can be flexibly positioned in the vehicle, reducing the workload for manufacturers. The use of GaN also allows the power of the converters to be scaled up to 3.6 kW and the power density to be increased to over 4.2 kW/l. The Gen5+ GaN Air DCDC converters offer an efficiency of over 96 percent and improved thermal behavior compared to the Gen5 liquid-cooled converters. They provide a two-phase output of 248 A at 14.5 V continuous. The phases can be combined to achieve the maximum output power; alternatively, one phase can be switched off under partial-load conditions and the switching interleaved between the two phases. In addition, by connecting the inputs of two phases in series, the converters based on the CoolGaN power transistors 650 V can be used to implement 800 V architectures without exceeding the maximum blocking voltage of the device. The converters also feature an isolated half-bridge topology consisting of a GaN-based half-bridge, a fully isolated transformer, and an active rectifier unit for each phase.
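A quick sanity check of those figures (my arithmetic, not from the press release) shows they hang together:

```python
# Two-phase output: 248 A continuous at 14.5 V
p_out_w = 248.0 * 14.5      # = 3596 W, matching the quoted 3.6 kW scaled-up power
# At the quoted power density of over 4.2 kW/l, the implied converter volume:
volume_l = 3.6e3 / 4.2e3    # just under 0.86 litres
```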
Availability
Infineon's CoolGaN Transistors 650 V are available now. More information about Infineon's GaN solutions can be found at www.infineon.com/gan.
The post Highest power density for DCDC converters demonstrated by Vitesco Technologies using Infineon’s CoolGaN Transistors appeared first on ELE Times.
ASMPT Wins Exclusive Texas Instruments Supplier Excellence Award for Second Year Running
ASMPT Limited, the world’s leading maker of integrated solutions for the manufacture of semiconductors and electronics, was recently honoured with the 2023 Supplier Excellence Award from Texas Instruments (TI) for the second year running. This prestigious recognition underscores ASMPT’s unmatched commitment to excellence in supplying products and services that consistently meet TI’s high standards.
Texas Instruments' extensive global supplier network comprises over 10,000 companies; ASMPT is among an elite group of suppliers selected for exemplary performance in the areas of cost, environmental and social responsibility, technology, responsiveness, assurance of supply, and quality.
“ASMPT is honoured to receive this prestigious award from Texas Instruments for the second year running,” said Joseph Poh Tson Cheong, Senior Vice President, ASMPT. “The standards for this award are very high, and to achieve it for two years is a wonderful testament to our commitment to sustained customer success and strong partnerships.”
“As a key supplier to TI, we are committed to offering continuous, comprehensive support for assembly equipment across all types of supply chain conditions, thereby helping them to drive progress with their roadmap and achieve their goals.”
The post ASMPT Wins Exclusive Texas Instruments Supplier Excellence Award for Second Year Running appeared first on ELE Times.
Infineon presents high-performance CIPOS Maxi Intelligent Power Modules for industrial motor drives of up to 4 kW
Infineon Technologies AG expands its 7th generation TRENCHSTOP IGBT7 product family with the CIPOS Maxi Intelligent Power Module (IPM) series for low-power motor drives. The new IM12BxxxC1 series is based on the new TRENCHSTOP IGBT7 1200 V and rapid diode EmCon 7 technology. Thanks to the latest micro-pattern trench design, it offers exceptional control and performance. This results in significant loss reduction, increased efficiency, and higher power density. The portfolio includes three new products in variants ranging from 10 A to 20 A for power ratings of up to 4.0 kW: IM12B10CC1, IM12B15CC1 and IM12B20EC1.
The IM12BxxxC1 series is packaged in a DIP 36x23D housing. It integrates various power and control components to increase reliability, optimize PCB size and reduce system costs. This makes it the smallest package for 1200 V IPMs with the highest power density and best performance in its class. The IM12BxxxC1 series is particularly suitable for low-power drives in applications such as motors, pumps, fans, heat pumps and outdoor fans for heating, ventilation, and air conditioning.
The new IPM series offers an isolated dual-in-line molded housing for excellent thermal performance and electrical isolation. It also meets the EMI and overload protection requirements of demanding designs. In addition to the protection features, the IPM is equipped with an independent UL-certified temperature thermistor. The CIPOS Maxi integrates a rugged 6-channel SOI gate driver that provides built-in dead time to prevent damage from transients. It features under-voltage lockout on all channels and over-current shutdown. With its multi-function pin, this IPM allows high design flexibility for various purposes. The low-side emitter pins can be accessed for monitoring all phase currents, making the device easy to control.
Availability
The three variants of the CIPOS Maxi IM12BxxxC1 portfolio can be ordered now. More information is available at www.infineon.com/CIPOS-Maxi.
The post Infineon presents high-performance CIPOS Maxi Intelligent Power Modules for industrial motor drives of up to 4 kW appeared first on ELE Times.
Data linkage- Meaning, Aims, Applications, and Tools for the collection of data
Meaning of data linkage
Data linkage is the exercise of collating data about a particular entity from multiple sources in one place. This exercise generates entirely new data about that entity.
The benefit of this newly generated data is that it is more comprehensive, organized, scientific, rational, and logical. Besides, it can be arranged according to any preferred criterion for analysis or reporting about that entity.
Aim of data linkage
The aim of data linkage is to collate data about a particular entity at one place from multiple sources.
These entities can range from an individual, place, company, country, performance over various indices or parameters, per capita income, national income of different countries, criminal records of an individual, tax paid by different companies, prevalence of any disease across different countries or in a community, etc.
The sources of information about these entities can range from research papers, books, magazines, government’s statistical data such as census, surveys, and reports published on various media platforms such as print, electronic, digital, etc.
What does collating data from multiple sources in one place achieve?
Once data about a particular entity from multiple sources is collated in one place, it provides an entirely new dataset about that entity.
This arrangement of data brings new analyses to the fore. It makes our understanding of the entity more lucid and logical. Often, it surfaces hitherto unexplored aspects or dimensions of that entity.
Approaches to data linkage
There are four approaches to data linkage. They are as follows:
First, the clerical approach. Each record that matches the concerned entity is entered manually. Since matching and entry are done by hand, this is a very laborious and time-consuming process.
Second, the deterministic approach, also called the rule-based approach. The attributes of a given record pair are matched according to some rule, and all records that satisfy the rule are compiled to generate the new data.
Third, the probabilistic approach, also called the score-based approach. The probability that a record pair refers to the same entity is calculated as a score over its attributes; records whose score crosses a set threshold are collated to generate the new data.
Fourth, a combination of any of the three approaches above.
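As a rough illustration of the difference between the deterministic and probabilistic approaches, the sketch below matches two toy records in Python; the field names, weights, and threshold are invented for illustration, not drawn from any real linkage system:

```python
# Hypothetical record-matching sketch: deterministic (rule-based) vs.
# probabilistic (score-based) linkage of two toy records.

WEIGHTS = {"name": 0.6, "birth_year": 0.3, "city": 0.1}  # assumed weights

def deterministic_match(rec_a, rec_b):
    """Rule-based: link only on exact agreement of name AND birth year."""
    return rec_a["name"] == rec_b["name"] and rec_a["birth_year"] == rec_b["birth_year"]

def probabilistic_score(rec_a, rec_b, weights=WEIGHTS):
    """Score-based: sum the weights of the fields on which the pair agrees."""
    return sum(w for field, w in weights.items() if rec_a.get(field) == rec_b.get(field))

source_1 = {"name": "A. Kumar", "birth_year": 1980, "city": "Delhi"}
source_2 = {"name": "A. Kumar", "birth_year": 1980, "city": "New Delhi"}

print(deterministic_match(source_1, source_2))             # True
print(round(probabilistic_score(source_1, source_2), 2))   # 0.9 (city disagrees)
```

Note how the probabilistic score still links the pair despite the differing city field, which a strict rule on all three fields would reject; this tolerance of noisy attributes is why the score-based approach is favoured for real-world data.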
Earliest examples of data linkage in modern times
In the modern-day world, the earliest examples of data linkage that can be cited are census and reporting about revenue collection from agricultural land.
Earlier, census data was collected manually and its details entered by clerical methods. Every census profiled various aspects of each and every citizen of a country.
The computation of revenue collection from agricultural land, meanwhile, was based on the quantum of different crops grown on a piece of land in a year.
Applications of data linkage in today’s world
In today’s information age, data is the new gold. It is the new power. Access to scientifically arranged data is the prerequisite for any analysis and research. All strategic decisions rest on access to original, scientifically arranged, and well-analysed data.
All such applications are possible only if data linkage is successfully accomplished. Data linkage has therefore become a necessity; there is no scope to evade it. As long as humanity uses data as a decisive factor in decision-making, data linkage will remain essential.
Data linkage is playing a decisive role in the fourth industrial revolution, that is, Industry 4.0. Three industrial revolutions have taken place so far, and this fourth one rests squarely on the automation and data exchange enabled by new technologies.
Challenges to data linkage
Data linkage is a complex process. It involves analysing data about a particular entity from multiple sources of information, and the methodology involved is complicated.
As a result, many challenges to data linkage emerge. A few of them are as follows:
First, different software is used across different departments of the government and across private-sector companies, which makes data linkage difficult. There is thus an urgent need for convergence among these tools. Besides, commonly used open-source software should be developed for data linkage; this would reduce the cost of data linkage tools and make them readily available to government departments and the private sector.
Second, the workforces of different government departments and private-sector companies apply different data linkage methodologies and skills. Hence, there is a need for scientific rationalisation of the methodologies and skills used for data linkage.
Third, new software is needed to eliminate errors in data linkage. For instance, longitudinal linkage, that is, linking data across time, is a cumbersome process that leads to errors. Such errors can be reduced by developing better tools for linkage and analysis, complementing packages already in use such as the Statistical Package for the Social Sciences (SPSS).
Examples of data linkage in India
India is undertaking massive data linkage projects. A few examples are as follows:
First, all criminal records of an individual are being compiled at one place. This includes all the FIRs, sections of various laws under which an individual has been arraigned, sections of different laws under which that individual has been convicted or acquitted. This is being done under the Crime and Criminal Tracking Network & Systems (CCTNS).
Second, data pertaining to research undertaken by different scientists and academics are being compiled at one place. The aim behind this is to create a database of research undertaken by an individual professor or scientist and compilation of all research papers pertaining to a specific research topic/ area.
Third, the medical history of an individual from all hospitals across the length and breadth of our country is being compiled at one place under the initiative Online Registration System (ORS) under the Digital India programme. This data has also been linked with the Ayushman Bharat Health Account.
A new era in electrochemical sensing technology
At the forefront of scientific exploration, electrochemical sensing is an indispensable and adaptable tool that impacts a diverse range of industries. From life and environmental science to industrial material and food processing, the ability to quantify chemicals can provide greater insight, elevating safety, efficiency and awareness.
In this era of advanced interconnected technology, the significance of low power and highly accurate electrochemical sensors cannot be overstated. In our homes, connected devices allow us to monitor the quality of our air, water, and soil for our plants.
Across the industry, there is even greater demand. Smart medical devices, including wearables, move healthcare into the 21st century by providing real-time continuous monitoring of patient vital signs both inside and outside of clinical facilities, improving insight and increasing quality of care.
Similarly, the expanse of Industry 4.0 in manufacturing and industrial automation has seen many sectors deploy extensive networks of sensing nodes in order to improve their efficiency and safety. Sensors can monitor toxic gases created during various industrial processes and enable feedback systems in industrial equipment. In food processing, the detection of spoilage and allergenic substances is essential—electrochemical sensors can help to automate pre-cooking taste verification, reporting pH levels and detecting histamines.
Whether it’s monitoring glucose levels in diabetic patients, assessing environmental pollutants, ensuring food safety, or characterizing materials at the atomic level, electrochemical sensors play a pivotal role in advancing scientific knowledge and improving our quality of life.
This article will explore the principles that support electrochemical sensing, the requirements for effective sensor performance, how an analog front-end (AFE) device can be a bridge for current measurement and analysis and delve into specific examples of how these sensors are utilized in medical, environmental, food, and material science applications.
Electrochemical sensor requirements
The typical setup for an electrochemical sensor in electronic engineering involves a three-electrode system, an arrangement seen across many other sensor types (Figure 1).
Figure 1 Two diagrams indicate the construction of a typical electrochemical sensor. Source: onsemi
Within the sensor, there is a substrate surface material which acts as a protective layer for the sensing electrode. This material’s primary function is to regulate the quantity of molecules that can access the electrode surface and filter out any undesirable particles that may impact the accuracy of the sensor.
At the core of the sensor are three main parts. The working electrode (WE) is where the electrochemical reaction takes place. As particles impact the WE, a reaction occurs, creating either a loss or gain of electrons, leading to electron flow and the production of current. Maintaining a constant potential at the WE is vital, as it enables accurate measurement of the current generated by redox reactions (Figure 1).
The counter electrode (CE) supplies sufficient current to balance out the redox reactions happening at the WE, creating a complementary pair. While the reference electrode (RE) is employed for measuring the potential of the WE and offering feedback to establish the CE voltage.
Figure 2 The circuit diagram highlights an electrochemical sensor design. Source: onsemi
The high-side resistance in an electrochemical sensor (Figure 2) is an undesired factor that should be minimized, which can be achieved by positioning the RE near the WE. The current flowing through the lower-side resistance indicates the output of the electrochemical measurement and is therefore used to derive the sensor’s output voltage.
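As a simple numerical illustration of the measurement described above, the sketch below applies Ohm's law to the low-side sense resistance to derive the output voltage; the current and resistance values are assumptions for illustration, not figures from this article:

```python
# Hypothetical illustration: the redox current produced at the working
# electrode (WE) flows through the low-side sense resistance, and the
# voltage it develops (V = I * R) is the sensor's output.

def sensor_output_voltage(redox_current_a, sense_resistance_ohm):
    """Voltage developed across the low-side resistance by the WE current."""
    return redox_current_a * sense_resistance_ohm

i_we = 250e-9   # 250 nA redox current at the WE (assumed value)
r_sense = 1e6   # 1 MOhm low-side sense resistance (assumed value)

v_out = sensor_output_voltage(i_we, r_sense)
print(f"{v_out * 1e3:.1f} mV")   # 250.0 mV
```

The nanoamp-scale currents and megohm-scale resistances in this example hint at why low-noise, high-accuracy AFEs matter: small parasitic currents or offset voltages corrupt the reading directly.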
Whether an electrochemical sensor is being used in consumer, healthcare, or industrial applications, there are several key technical requirements set by designers that sensors must meet. Factors like high accuracy and low noise go without saying, but alongside this, electrochemical sensors must allow for simple calibration to help cater for the wide range of applications—as packaging or usage may influence calibration, either immediately or over time.
Moreover, with many electrochemical sensors being deployed in portable or low-power solutions, such as wearable medical technology or industrial technology nodes, there are a number of packaging requirements that must be addressed.
Engineers require solutions that feature low-power operation, thus supporting battery-powered applications, and that are miniaturized and flexible, allowing for various sensor configurations and easy system integration. Intelligent pre-processing is another important feature on many engineers’ radars, as it can enable more sophisticated calibration and noise filtering, supporting more accurate data delivery.
Common sensor applications
Electrochemical sensors are extensively utilized for several purposes in life science and healthcare, including in the detection of blood alcohol levels and facilitating continuous glucose monitoring (CGM)—a critical component in the management of diabetes, a chronic illness that affects 1 in 11 people worldwide. The CGM device market is projected to grow at a compound annual growth rate (CAGR) of 9% from 2023 to 2032.
Targeting the latest clinical and portable medical devices, a miniaturized AFE is employed for highly accurate measurement of electrochemical currents. The combination of ultra-low-power consumption, flexible configuration, and small size makes it a compelling solution wherever an electrochemical sensor is used.
Beyond medical sciences, electrochemical sensors are ideal for detecting toxic gases in industrial applications, or for measuring pollution and air quality in environmental applications. They employ a chemical reaction between the target gas and an electrode, generating an electrical current proportional to the target gas concentration.
Standard 20-mm electrochemical sensors are widespread and are available for several toxic gases, including carbon monoxide, hydrogen sulfide, and oxides of nitrogen and sulfur, and allow for simple ‘drop-in’ replacement. These sensors are utilized in a diverse array of applications, spanning from air quality sensors in urban settings to smart agricultural applications for monitoring plant growth.
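Because the output current is proportional to gas concentration, a reading reduces to a linear calibration against the sensor's sensitivity. A minimal sketch, with an assumed baseline and an assumed sensitivity figure (the 70 nA/ppm value is hypothetical, not a quoted specification):

```python
# Hypothetical linear calibration for an electrochemical gas sensor:
# concentration = (measured current - zero-gas baseline) / sensitivity.

def concentration_ppm(output_current_na, baseline_na, sensitivity_na_per_ppm):
    """Current above the zero-gas baseline, scaled by sensitivity (nA/ppm)."""
    return (output_current_na - baseline_na) / sensitivity_na_per_ppm

# e.g., a CO sensor with an assumed sensitivity of 70 nA/ppm and a
# 20 nA zero-gas baseline, measuring 3520 nA:
print(concentration_ppm(output_current_na=3520, baseline_na=20,
                        sensitivity_na_per_ppm=70))   # 50.0 ppm
```

In practice, the baseline and sensitivity drift with temperature and sensor age, which is why the article stresses simple recalibration as a key sensor requirement.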
Similarly, electrochemical sensors such as potentiostat or corrosion sensors are crucial in environments such as laboratories, mining operations, and material production. They serve as important tools for providing feedback within production systems and managing hazardous substances, ensuring the safety of the operation.
In search of increased yield and production efficiency, food production has also turned to electrochemical sensors. Here, both handheld portable devices and larger automations are deployed for food quality control, ensuring taste and identifying spoilage, allergens or hazardous chemicals.
Sensor design blueprint
Sensors based on electrochemical measurements are readily available. From healthcare and glucose monitoring to broader environmental applications, these sensors provide a complete solution that is designed to increase reliability, accuracy and improve the user experience of wearables and portable medical devices.
These solutions, for instance, can pair an AFE for continuous electrochemical measurement with a microcontroller enabled for Bluetooth Low Energy 5.2 technology. Such integrations play a crucial role in making devices smaller and ensuring long-lasting functionality, a vital factor for battery-powered solutions.
The solution, built around CEM102 AFE and RSL15 microcontroller, is complemented with development support, firmware and software, including iOS and Android demo applications (Figure 3).
Figure 3 Example screens display demo applications for iOS and Android platforms. Source: onsemi
There is also a CEM102 evaluation board complete with sample code for setting up and conducting measurements with CEM102, making it easier to begin system development. This combined offering is designed to streamline development and promote greater integration and innovation for the next generation of amperometric sensor technologies.
During operation, the CEM102’s function is to connect the sensor network to the digital processing. It is responsible for conditioning the sensor by applying the necessary signals to the electrodes and ensuring accurate measurement from the sensor network, while the RSL15 connects the sensor to wireless Bluetooth LE networks (Figure 4).
Figure 4 Here is how the CEM102 + RSL15 combo facilitates a wireless electrochemical sensing solution. Source: onsemi
Advancing scientific research
The precise measurement provided by electrochemical sensors is a critical enabler for advancing scientific knowledge. For example, by carefully examining factors such as glucose levels, researchers can obtain valuable insight into chronic illnesses like diabetes. This knowledge can enhance our understanding and expedite innovation, ultimately benefiting a significant portion of the global population.
In the ever-evolving world of electronics, companies require pioneering solutions that not only redefine expectations but also allow for shorter time to market and increased flexibility to provide scope for new applications. From remote healthcare to environmental monitoring and industrial safety, electrochemical sensors fulfill a diverse range of applications and have a significant impact on society.
And the potential of this versatility extends far beyond current applications. Through manufacturing support and collaboration, electrochemical sensors can contribute to advancing research and enhancing comprehension in the medical field and beyond.
The ongoing development of smart technology, along with complementary technologies such as artificial intelligence and machine learning, will drive the growing influence of electrochemical sensors on our lives, resulting in the emergence of new innovations and the effective resolution of many longstanding global challenges.
Hideo Kondo is product marketing engineer at onsemi’s Analog Mixed-Signal Group.
Related Content
- Alchimer improves electrochemical coating process
- NEXX licenses Alchimer’s electrochemical coating process
- AFE facilitates electrochemical and impedance measurement
- ADI: impedance & potentiostat AFE for biological and chemical sensing
- Configurable sensor AFE solution aims to simplify sensor systems designs, speeds time-to-market
The post A new era in electrochemical sensing technology appeared first on EDN.
Ground strikes and lightning protection of buried cables
There was a recent lightning incident where fifty people were hurt while standing on wet soil at the moment of a nearby lightning strike that caused an electrical current to flow through the ground. Seven people were hospitalized but fortunately, there were no fatalities.
The incident raises a point that I have seen made as to whether overhead power lines are more prone or less prone to lightning strike damage than buried power lines.
The issue is not as simple as some would have you believe.
Consider the following image in Figure 1.
Figure 1 An elevated power line and a lightning strike where the power line is isolated from the wet soil current. Source: John Dunn
Apart from a direct strike to the power line itself (I once saw that very thing happen but that’s a separate story), an overhead power line is pretty much isolated and protected from the wet soil’s current paths.
However, if the power line is buried, the wet soil’s current paths can impinge on the power line in much the same way that lightning currents impinged on those fifty people (see Figure 2).
Figure 2 A buried power line and a lightning strike where the power line is subjected to the wet soil current. Source: John Dunn
It has been suggested from time to time that power line burial is a guaranteed way to protect any power line from a lightning event. That may or may not be true depending on many circumstances, but power line burial is NOT an absolute panacea, not by any means.
Soil composition, the presence or absence of nearby structures, the presence or absence of water mains, various lightning arrestor arrangements, dollar expenditures for excavation efforts, and so forth must all be assessed by experts of which I most definitely am NOT.
In the midst of many buildings—many tens of stories tall—within the borough of Manhattan, New York City, many power lines are located below ground, underneath all sorts of concrete and asphalt. In the borough of Queens, however, where I grew up (Rego Park, to be precise), overhead power lines are found all over the place.
There are no simple answers and no clear-cut conclusions. Rather, this essay’s purpose is merely to dispel any simplistic thinking about the issue.
John Dunn is an electronics consultant, and a graduate of The Polytechnic Institute of Brooklyn (BSEE) and of New York University (MSEE).
Related Content
- Lightning rod ball
- Teardown: Zapped weather station
- No floating nodes
- Why do you never see birds on high-tension power lines?
- Birds on power lines, another look
- A tale about loose cables and power lines
- Shock hazard: filtering on input power lines
- Misplaced insulator proves fatal
NANO Nuclear Energy: Pioneering Portable Microreactors and Vertically Integrated Fuel Solutions for Sustainable Power
NANO Nuclear Energy Inc. is making significant strides in the nuclear energy sector, focusing on becoming a diversified and vertically integrated company. On July 18, the company successfully closed an additional sale of 135,000 common stock shares at $20.00 per share, marking a significant financial milestone. NANO Nuclear, recognized as the first publicly listed portable nuclear microreactor company in the U.S., according to its website, is dedicated to advancing sustainable energy solutions through four main business areas: portable microreactor technology, nuclear fuel fabrication, nuclear fuel transportation, and consulting services within the nuclear industry.
NANO Nuclear is led by a team of world-class nuclear engineers who are developing cutting-edge products like the ZEUS solid core battery reactor and the ODIN low-pressure coolant reactor. These cutting-edge nuclear microreactors are engineered to provide clean, portable, and on-demand energy solutions, effectively meeting both present and future energy demands.
In a recent interview, NANO Nuclear Energy’s CEO, James Walker, outlined the company’s ambitious plans to establish a vertically integrated nuclear fuel business through its subsidiaries, Advanced Fuel Transportation Inc. (AFT) and HALEU Energy Fuel Inc. (HEF). The goal is to secure a reliable supply chain for high-assay, low-enriched uranium (HALEU) fuel, which is crucial for advanced nuclear reactors. HALEU, enriched to contain 5-19.9% of the fissile isotope U-235, enhances reactor performance, allowing for smaller designs with higher power density. Recognizing these advantages, HEF is planning to invest in fabrication facilities to meet the growing demand for advanced reactor fuel.
AFT, a key subsidiary of NANO Nuclear, is led by former executives from the world’s largest transportation companies. The subsidiary aims to establish a North American transportation network to supply commercial quantities of fuel to small modular reactors, microreactor companies, national laboratories, the military, and Department of Energy (DoE) programs. AFT’s position is strengthened by its exclusive license for a patented high-capacity HALEU fuel transportation basket, developed in collaboration with three prominent U.S. national nuclear laboratories and funded by the DoE. Concurrently, HEF is dedicated to establishing a domestic HALEU fuel fabrication pipeline to cater to the expanding advanced nuclear reactor market.
Walker acknowledged several challenges that the company faces and outlined strategies to overcome them. One of the main challenges lies in navigating the intricate regulatory landscape. Obtaining numerous permits and licenses from bodies like the Nuclear Regulatory Commission (NRC) and the DoE is essential for nuclear fuel operations. To address this, NANO Nuclear plans to invest in a dedicated regulatory affairs team to manage the licensing process and ensure ongoing compliance with stringent safety and environmental standards. Early and consistent engagement with regulators will also be crucial to align operations with regulatory expectations.
Technical and engineering challenges are also a significant focus for NANO Nuclear. Walker emphasized the importance of developing and optimizing the deconversion process to safely and efficiently handle enriched uranium hexafluoride (UF6) and convert it into other uranium fuel forms. Meeting reactor specifications requires attaining the high precision and quality essential in HALEU fuel fabrication. To overcome these challenges, NANO Nuclear intends to leverage expertise from experienced nuclear engineers and collaborate with research institutions for technology development. Rigorous quality control systems and continuous improvement practices will be key components in addressing these technical hurdles.
Another set of challenges relates to supply chain and logistics. Given the stringent safety protocols required for handling radioactive materials, ensuring the secure and safe transport of HALEU fuel is of utmost importance. Walker noted the importance of synchronizing activities across multiple facilities to avoid bottlenecks and delays. To effectively manage the supply chain, NANO Nuclear intends to establish strong transportation and security protocols in collaboration with specialized logistics companies, along with implementing advanced tracking and coordination systems.
Economic and financial viability is another critical consideration. Building facilities for deconversion, fuel fabrication, and transportation demands significant capital investment. To ensure the economic viability of the integrated supply chain, managing operational costs is essential. Walker highlighted the need to secure a range of funding sources, such as government grants, private investments, and strategic partnerships. To support these efforts, NANO Nuclear will develop detailed financial models to forecast costs and revenues and implement cost-control measures.
Market and demand uncertainties also pose challenges for the company. It is crucial to secure adequate demand for HALEU fuel, especially from microreactor manufacturers and other potential clients. To tackle this, NANO Nuclear intends to carry out market research to identify and secure long-term contracts with key customers. By differentiating its product offerings through quality, reliability, and integrated services, the company aims to compete effectively with existing fuel suppliers and new market entrants.
Addressing human resources and expertise is equally important for NANO Nuclear’s success. Recruiting and retaining highly skilled personnel with expertise in nuclear technology, engineering, and regulatory compliance is critical. To this end, Walker mentioned that the company will develop a comprehensive human resources strategy focusing on recruitment, training, and career development to ensure the necessary talent is in place.
The company’s advancements in microreactor technology are particularly noteworthy. The latest advanced microreactors, with a thermal energy output ranging from 1 to 20 megawatts, provide a flexible and portable option compared to traditional nuclear reactors. Microreactors can generate clean and reliable electricity for commercial use while also supporting a range of non-electric applications, such as district heating, water desalination, and hydrogen fuel production.
NANO Nuclear is at the forefront of this technology with its innovative ZEUS microreactor. ZEUS boasts a distinctive design with a fully sealed core and a highly conductive moderator matrix for effective dissipation of fission energy. The entire core and power conversion system are housed within a single shipping container, making it easy to transport to remote locations. Engineered to deliver continuous power for a minimum of 10 years, ZEUS provides a dependable and clean energy solution for isolated areas, utilizing conventional materials to lower costs and expedite time to market.
The ZEUS microreactor’s completely sealed core design eliminates in-core fluids and associated components, significantly impacting overall system reliability and maintenance requirements. By reducing the number of components prone to failure, such as pumps, valves, and piping systems, the reactor’s design decreases the likelihood of mechanical failures and leaks, thereby enhancing overall reactor reliability. This inherently safer design also eliminates coolant loss scenarios, which are among the most severe types of reactor incidents.
With fewer moving parts, the maintenance intervals for ZEUS are significantly reduced. Components that avoid exposure to corrosive and erosive fluids have an extended service life, leading to fewer and less extensive maintenance activities. The absence of fluids simplifies inspections and replacements, making routine maintenance easier and quicker, ultimately reducing reactor downtime and operational costs.
Using an open-air Brayton cycle for power conversion in the ZEUS microreactor presents both significant benefits and challenges. The cycle’s high thermodynamic efficiency and mechanical robustness make it suitable for remote locations. By using air as the working fluid, the need for water is eliminated, reducing corrosion risk and making the reactor ideal for arid regions. However, challenges include managing high temperatures and ensuring material durability. Efficient heat exchanger design and advanced control systems are crucial, along with robust filtration and adaptable systems to handle dust and temperature extremes in remote areas.
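For context on the efficiency trade-off described above, the ideal (cold-air-standard) Brayton cycle efficiency depends only on the compressor pressure ratio and the specific-heat ratio of the working fluid. A back-of-envelope sketch, using an assumed pressure ratio that is not a NANO Nuclear design figure:

```python
# Back-of-envelope illustration of ideal open-air Brayton cycle efficiency:
# eta = 1 - r ** ((1 - gamma) / gamma), where r is the pressure ratio and
# gamma is the ratio of specific heats (~1.4 for air). The pressure ratio
# below is an arbitrary assumption for illustration only.

def brayton_ideal_efficiency(pressure_ratio, gamma=1.4):
    """Ideal (cold-air-standard) Brayton cycle thermal efficiency."""
    return 1.0 - pressure_ratio ** ((1.0 - gamma) / gamma)

print(round(brayton_ideal_efficiency(10.0), 3))   # 0.482
```

Real open-air cycles fall well short of this ideal figure due to compressor and turbine losses, which is why the article highlights heat-exchanger design, material durability, and filtration as the practical engineering challenges.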
The highly conductive moderator matrix in the ZEUS microreactor significantly enhances safety and efficiency in dissipating fission energy compared to traditional reactor designs. This advanced matrix ensures superior thermal conductivity, allowing for rapid and efficient heat transfer away from the reactor core. The matrix’s thermal properties also support passive cooling mechanisms, such as natural convection, that operate without external power, adding a critical safety layer during emergencies.
NANO Nuclear is also developing the ODIN advanced nuclear reactor to diversify its technology portfolio. The ODIN design will use conventional fuel with up to 20% enrichment, minimizing development and testing costs. With its low-pressure coolant system, the design improves structural reliability and extends service life. ODIN’s high-temperature operation ensures resilient performance and high power-conversion efficiency. Utilizing natural convection for heat transfer and decay heat removal, it offers robust safety features that align with the company’s commitment to advancing nuclear technology.
In summary, NANO Nuclear Energy Inc. is pioneering advancements in nuclear energy through its focus on portable microreactor technology and a vertically integrated supply chain. The company’s innovative ZEUS and ODIN reactors, along with its strategic approach to addressing regulatory, technical, and market challenges, position it as a key player in the future of sustainable energy solutions.
Budget 24-25 calls for India-first schemes & policies to boost industries and morale of the nation
With the idea of “Viksit Bharat” in the making, the Union Budget 24-25 has brought a sense of motive and accomplishment to the social and economic fabric of the country. The compelling vision towards upskilling, research and development, employment, and women-centric opportunities seems to be a just and progressive way forward.
Not to mention, the government has stepped up substantially to elevate the electronics and technology industry. The intention is crystal clear and the focus is sharp. The allocation of Rs 21,936 crore to the Ministry of Electronics and Information Technology (MeitY) marks a significant 52% increase from the revised estimates of FY24, which were Rs 14,421 crore. This boost supports various incentive schemes and programs under MeitY, including semiconductor manufacturing, electronics production, and the India AI Mission.
Speaking of the ministry’s departments, the modified scheme for establishing compound semiconductors, silicon photonics, sensor fabs, discrete semiconductor fabs, and facilities for semiconductor assembly, testing, marking, and packaging (ATMP) and outsourced semiconductor assembly and testing (OSAT) received the highest allocation of Rs 4,203 crore, up from Rs 1,424 crore in FY24. Additionally, the scheme for setting up semiconductor fabs in India has been allocated Rs 1,500 crore for FY25, a big shout-out.
The production-linked incentive (PLI) scheme for large-scale electronics manufacturing also increased, with its outlay rising from Rs 4,489 crore in the revised estimates to Rs 6,125 crore for FY25. For the India AI Mission, the government has allocated Rs 511 crore for FY25.
Furthermore, the National Informatics Centre (NIC), responsible for e-governance and digital infrastructure, has received an increased outlay of Rs 1,748 crore, up from Rs 1,552 crore in the previous fiscal year’s revised estimates. The substantial rise in MeitY’s budget, reaching Rs 21,936.9 crore for 2024-25, compared to Rs 14,421.25 crore for 2023-24, is largely due to the capital allocation towards the Modified Programme for Development of Semiconductors and Display Manufacturing Ecosystem in India, which saw a 355% increase to Rs 6,903 crore from Rs 1,503.36 crore in FY24.
Incentive schemes for semiconductors and large-scale electronics manufacturing, as well as IT hardware, are providing significant support to large companies like Micron and Tata Electronics to establish facilities in India. Additionally, Rs 551.75 crore has been allocated for the “India AI Mission” to enhance the country’s AI infrastructure. The previous NDA cabinet had approved over Rs 10,300 crore for the India AI Mission in March, aimed at catalyzing various components, including IndiaAI Compute Capacity, IndiaAI Innovation Centre, IndiaAI Datasets Platform, IndiaAI Application Development Initiative, IndiaAI FutureSkills, IndiaAI Startup Financing, and Safe & Trusted AI.
The other aspects of the budget concentrating on the Prime Minister’s “Vocal for Local” vision including “PM Surya Ghar Muft Bijli Yojana” among others is both timely and commendable. Overall, the budget is sure to empower the Indian spirit in action and open new growth avenues for indigenous players. I am excited to see how things will pan out for us as a nation in the next decade.
The post Budget 24-25 calls for India-first schemes & policies to boost industries and morale of the nation appeared first on ELE Times.
AXT’s Q2 revenue up 50% year-on-year
When did short range radio waves begin to shape our daily life?
Courtesy: u-blox
The roots of short-range wireless communication
You arrive at your smart home after a long day. The phone automatically connects to the local network and the temperature inside is perfect, neither too cold nor too hot. As you settle into your favourite couch and plug in your headphones, ready to enjoy a good song, a family member asks you to connect your devices to share some files. While waiting, you are drawn to an old radio that once belonged to your grandmother. For a moment, everything vanishes, and you catch a glimpse into the past, imagining a distant decade when none of these short-range wireless technologies existed.
All of these activities require the transmission of data via radio waves traveling through the air at the speed of light. Although we cannot observe them, radio waves carry information between transmitters and receivers at different frequencies and distances. As a fundamental and ubiquitous information carrier, short-range wireless technology is now part of our daily lives. For this to happen, many scientific and technological developments had to come first.
A peek into short-range prehistory
The electric telegraph was the first step – a revolutionary development that took shape in the first decades of the 19th century. Then, in the 1880s, Heinrich Hertz demonstrated the existence of electromagnetic waves (including radio waves), proving the possibility of transmitting and receiving electrical waves through the air. Building on Hertz’s work, Guglielmo Marconi succeeded in sending a wireless message in 1895.
At the turn of the century, the application of radio waves for communication was a significant innovation. Thanks to the discovery of the radio and the development of transmitters and receivers, by the 1920s, it was possible to send messages, broadcast media, and listen to human voices and music remotely.
Radios penetrated millions of homes within just a few decades. While audio transmissions opened a new chapter in communications, visual broadcasting became the next challenge. Television quickly emerged as the next widely available communication technology.
The common denominator of these early communications and broadcasting tools was the use of high-power transmitters and radio frequency channels in the lower part of the spectrum. At the time, they were defined as long, medium, and short waves. But since the 1960s, using the specific frequency band or channel for each communication link has been more common than referring to the wavelength.
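The shift from naming waves by length to naming channels by frequency is just the relation λ = c/f in disguise. A quick sketch makes the correspondence concrete (the band values below are illustrative examples, including the license-free frequencies mentioned later in this article):

```python
# Free-space wavelength for a given carrier frequency: lambda = c / f
C = 299_792_458  # speed of light in m/s

def wavelength_m(freq_hz: float) -> float:
    """Return the free-space wavelength in metres for a given frequency."""
    return C / freq_hz

# "Short wave" broadcast (~10 MHz) vs. the license-free short-range bands
for label, f in [("short wave, 10 MHz", 10e6),
                 ("900 MHz ISM", 900e6),
                 ("2.4 GHz ISM", 2.4e9),
                 ("5.8 GHz ISM", 5.8e9)]:
    print(f"{label}: {wavelength_m(f):.3f} m")
```

At 2.4 GHz the wavelength is only about 12.5 cm, which is part of why short-range antennas fit inside phones and headphones while broadcast antennas were towers.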
For decades, these developments focused on perfecting broadcast technologies, exploring the scope of long range communication, and reaching ever farther away places. The story didn’t stop there, though. Scientists and engineers went several steps further and began experimenting with cellular technology for mobile applications in the licensed spectrum and short range radio and wireless technologies in the license-free spectrum, opening up new personal and data communications possibilities.
The history of short range and cellular radio technology is rich. For this reason, we will focus on the former for now, while a future blog will cover the latter.
Short range radio
When we talk about short-range wireless technologies, we refer to technologies that let devices communicate typically within a range of 10–30 m. Bluetooth and Wi-Fi are the most common short-range technologies. This communication is made possible by short-range wireless chips and modules embedded in smartphones and many other devices, enabling them to connect and communicate with others nearby.
Once the long-range transmission infrastructure and broadcast systems were in place, a sudden interest in short-range communications occurred about forty years ago. The expansion of the radio spectrum frequencies by the U.S. Federal Communications Commission allowed civilian devices to transmit at 900 MHz, 2.4 GHz, and 5.8 GHz. With the development of various communication technologies, the short-range wireless technology era was about to commence.
Wi-Fi
We are all familiar with this term, and today, the first thing we do when we arrive at a new place, be it a friend’s house, a restaurant, or a train station, is to request the Wi-Fi password. Once your phone is ‘in,’ high-speed data transfer via radio waves begins.
What were you up to in the 1980s? While many of us were immersed in 80s culture, including fashion, music, and movies, technology companies were busy building the infrastructure for wireless local area networks (WLANs). Relying on this infrastructure, manufacturers began producing tons of devices. Soon, the incompatibility between devices from different brands led to an uncertain period that yearned for a common wireless standard.
This period came to an end with an agreement in 1997. The Institute of Electrical and Electronics Engineers released the common 802.11 standard, uniting some of the largest companies in the industry and paving the way for the Wireless Ethernet Compatibility Alliance (WECA). With the 802.11 standard, the technology soon to be known as Wi-Fi was born.
In 2000, the Wi-Fi Alliance organization continued promoting the new wireless networking technology, popularizing the term Wi-Fi (Wireless Fidelity). In the years that followed, Alliance members devoted much effort to secure applications, use cases, and interoperability for Wi-Fi products.
Bluetooth
An iconic piece of technology from the 80s was the Walkman. It was everywhere and everyone loved it. Mixing your tapes to listen to music for at least an hour was like creating your favourite lists on Spotify.
Invented in the late 1970s, the Walkman was so revolutionary that it remained on the market for about 40 years, with sales peaking in the first two decades.
While highly innovative, this technology had one major drawback: the cord. When you exercised or engaged in any activity that required movement, you would inevitably get stuck or tangled in the objects around you.
The idea for Bluetooth technology originated from a patent issued in 1989 by Johan Ullman, a Swedish physician born in 1953. He obtained this patent while researching analog cordless headsets for mobile phones, possibly inspired by the inconvenience of tangled wires while using a Walkman. His work was the seed that laid the foundation for wireless headsets.
One of Ericsson’s most ambitious endeavors in the 1990s was materializing Ullman’s idea. Building upon his patent and another one from 1992, Nils Rydbeck, then CTO of Ericsson Mobile, commissioned a team of engineers led by Sven Mattisson to develop what we know today as Bluetooth technology. The innovation is captured as a modern runestone replica erected by Ericsson in Lund in 1999 in memory of Harald Bluetooth.
Thread
Although not defined as a short-range technology, this networking protocol is a newer tool for smart home and Internet of Things (IoT) applications. It is highly advantageous because it can provide reliable, low-power, and secure connectivity.
Thread’s origins date back to 2013, when a team at Nest Labs set out to develop a new networking protocol for smart home devices. The company had previously created an earlier version called Nest Weave. Much like the early days of Wi-Fi, this version showed a significant shortcoming: a lack of interoperability between devices from different manufacturers.
With the advent of IoT devices, the need for a specific networking protocol became evident. In 2015, the Thread Group – initially consisting of seven companies, including Samsung, and later joined by Google and Apple ‒ released the Thread Specification 1.0.
This specification defined the details of the networking protocol designed for IoT devices. Critical for manufacturers, this protocol enables the development of secure and reliable Thread-compatible devices and facilitates communication between smart devices in home environments.
This networking protocol is unique because of its mesh networking architecture, a key differentiator. The architecture enables multiple devices, or nodes, to form a mesh network in which each device can communicate with the other members of the network. A mesh topology keeps communication efficient and reliable, even when specific nodes fail or are unavailable.
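The resilience of a mesh topology can be shown with a toy model: flooding a message hop by hop through neighbour links still reaches every live node even when one node drops out. This is a minimal sketch of the idea, not the actual Thread routing protocol:

```python
# Toy mesh network: flood a message from one node; delivery survives
# the failure of an intermediate node because multiple paths exist.
def flood(links: dict[str, set[str]], start: str, failed: set[str]) -> set[str]:
    """Return the set of nodes a flooded message reaches, skipping failed nodes."""
    reached, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node in failed or node in reached:
            continue
        reached.add(node)
        frontier.extend(links[node])
    return reached

# A small mesh in which every node has at least two neighbours.
mesh = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C"},
}
print(flood(mesh, "A", failed=set()))   # reaches all four nodes
print(flood(mesh, "A", failed={"B"}))   # still reaches D, via C
```

With node B down, the message from A still arrives at D through C, which is the property that makes mesh networks robust to individual node failures.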
Thread technology has gained traction and support over the past decade, particularly among companies developing solutions for the smart home and IoT ecosystem. Device manufacturers, semiconductor companies, software developers, and service providers all recognize the relevance of this protocol for building connected and interoperable smart home systems.
Wave me up before you go!
The amount of data transmitted over the air has never been as extreme as today. Signal transmission between electronic devices has increased exponentially. Both long- and short-range waves enable transmission and communication to, from, and between devices to join networks for accessing the Internet, for instance. Now, a myriad of radio waves surrounds us.
Over the past 34 years, each of these short-range technologies and their protocols has contributed to the advancement of connectivity in various industries, including automotive, industrial automation, and many others. Until recently, they have done so independently.
Today, the challenge for manufacturers and other stakeholders is choosing the most appropriate technology for each application, such as Bluetooth or Thread. They have also realized that combining these technologies can further advance the possibilities of IoT connectivity.
Next time you connect your smartphone to your wireless headphones, ask for the network password at a coffee shop, or communicate with colleagues on a Thread network, take a moment to remember the steps needed to live in such a connected world.
The post When did short range radio waves begin to shape our daily life? appeared first on ELE Times.
STM32CubeProgrammer 2.17 simplifies serial numbering and option byte configurations
Author : STMicroelectronics
STM32CubeProgrammer 2.17 is the very definition of a quality-of-life improvement. While it ensures support for the latest STM32s, it also brings features that make a developer’s workflow more straightforward, such as writing ASCII strings in memory, automatic increment in serial numbering, and exporting and importing option bytes. This new release also shows how ST listens to its community, which is why we continue to bring better support to Segger probes. In its own way, each release of STM32CubeProgrammer is a conversation we have with STM32 developers, and we can’t wait to hear what everyone has to say.
What’s new in STM32CubeProgrammer 2.17?
New MCU support
This latest version of STM32CubeProgrammer supports STM32C0s with 128 KB of flash. It also recognizes the STM32MP25, which includes a 1.35-TOPS NPU, and all the STM32WB0s we recently released, including the STM32WB05, STM32WB05xN, STM32WB06, and STM32WB07. In the latter case, we announced their launch just a few weeks ago, thus showing that STM32CubeProgrammer keeps up with the latest releases to ensure developers can flash and debug their code on the newest STM32s as soon as possible.
New quality-of-life improvements
The other updates in STM32CubeProgrammer 2.17 aim to make a developer’s job easier by tailoring our utility to their workflow. For instance, we continue to build on the existing support for Segger’s J-Link and Flasher probes to ensure they support read protection level (RDP) regression with a password, thus bridging the gap between what’s possible with an ST-LINK probe and what’s available on the Segger models. Consequently, developers already using our partner’s probes won’t feel like they are missing out. Another addition in version 2.17 is the ability to generate serial numbers and automatically increment them within STM32CubeProgrammer, thus hastening the process of flashing multiple STM32s in one batch.
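The auto-increment feature amounts to a loop around a per-device programming step. The sketch below illustrates the idea only; `flash_device` is a hypothetical placeholder for whatever actually programs each board, not an STM32CubeProgrammer API:

```python
# Sketch of batch flashing with auto-incremented serial numbers.
# flash_device() is a hypothetical stand-in for the real programming step.
def make_serials(prefix: str, start: int, count: int, width: int = 6):
    """Yield zero-padded serial strings, e.g. SN-000042."""
    for i in range(start, start + count):
        yield f"{prefix}{i:0{width}d}"

def flash_device(serial: str) -> str:
    # Placeholder: a real tool would write the serial into a
    # reserved flash location on the target here.
    return f"flashed {serial}"

logs = [flash_device(sn) for sn in make_serials("SN-", start=41, count=3)]
print(logs)  # three boards, serials SN-000041 .. SN-000043
```

Having the tool maintain and increment the counter removes the error-prone manual step of editing the serial between each board in a batch.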
Other quality-of-life improvements aim to make STM32CubeProgrammer more intuitive. For instance, it is now possible to export an STM32’s option bytes. Very simply, they are a way to store configuration options, such as read-out protection levels, watchdog settings, power modes, and more. The MCU loads them early in the boot process, and they are stored in a specific part of memory that’s only accessible by debugging tools or the bootloader. By offering the ability to export and import option bytes, STM32CubeProgrammer enables developers to configure MCUs much more easily. Similarly, version 2.17 can now edit memory fields in ASCII to make certain sections a lot more readable.
What is STM32CubeProgrammer?
An STM32 flasher and debugger
At its core, STM32CubeProgrammer helps debug and flash STM32 microcontrollers. As a result, it includes features that optimize these two processes. For instance, version 2.6 introduced the ability to dump the entire register map and edit any register on the fly. Previously, changing a register’s value meant changing the source code, recompiling it, and flashing the firmware. Testing new parameters or determining whether a value is causing a bug is much simpler today. Similarly, engineers can use STM32CubeProgrammer to flash all external memories simultaneously. Traditionally, flashing the external embedded storage and an SD card required developers to launch each process separately. STM32CubeProgrammer can do it in one step.
Another challenge for developers is parsing the massive amount of information passing through STM32CubeProgrammer. Anyone who flashes firmware knows how difficult it is to track all logs. Hence, we brought custom traces that allow developers to assign a color to a particular function. It ensures developers can rapidly distinguish a specific output from the rest of the log. Debugging thus becomes a lot more straightforward and intuitive. Additionally, it can help developers coordinate their color scheme with STM32CubeIDE, another member of our unique ecosystem designed to empower creators.
What are some of its key features?
New MCU support
Most new versions of STM32CubeProgrammer support a slew of new MCUs. For instance, version 2.16 brought compatibility with the 256 KB version of the STM32U0s. The device was the new ultra-low power flagship model for entry-level applications thanks to a static power consumption of only 16 nA in standby. STM32CubeProgrammer 2.16 also brought support for the 512 KB version of the STM32H5, and the STM32H7R and STM32H7S, which come with less Flash so integrators that must use external memory anyway can reduce their costs. Put simply, ST strives to update STM32CubeProgrammer as rapidly as possible to ensure our community can take advantage of our newest platforms rapidly and efficiently.
SEGGER J-Link probe support
To help developers optimize workflow, we’ve worked with SEGGER to support the J-Link probe fully. This means that the hardware flasher has access to features that were previously only available on an ST-LINK module. For instance, the SEGGER system can program internal and external memory or tweak the read protection level (RDP). Furthermore, using the J-Link with STM32CubeProgrammer means developers can view and modify registers. We know that many STM32 customers use the SEGGER probe because it enables them to work with more MCUs, it is fast, or they’ve adopted software by SEGGER. Hence, STM32CubeProgrammer made the J-Link vastly more useful, so developers can do more without leaving the ST software.
Automating the installation of a Bluetooth LE stack
Until now, developers updating their Bluetooth LE wireless stack had to figure out the address of the first memory block to use, which varied based on the STM32WB and the type of stack used. For instance, installing the basic stack on the STM32WB5x would start at address 0x080D1000, whereas a full stack on the same device would start at 0x080C7000, and the same package starts at 0x0805A000 on the STM32WB3x with 512 KB of memory. Developers often had to find the start address in STM32CubeWB/Projects/STM32WB_Copro_Wireless_Binaries. The new version of STM32CubeProgrammer comes with an algorithm that determines the right start address based on the current wireless stack version, the device, and the stack to install.
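That selection logic boils down to a table lookup keyed on the device and the stack type. The sketch below uses only the three addresses quoted above; every other device/stack combination is deliberately omitted rather than guessed:

```python
# First-block start addresses quoted in the text, keyed by (device, stack).
# Only the combinations mentioned above are included.
FIRST_BLOCK = {
    ("STM32WB5x", "basic"):      0x080D1000,
    ("STM32WB5x", "full"):       0x080C7000,
    ("STM32WB3x-512K", "full"):  0x0805A000,
}

def stack_start(device: str, stack: str) -> int:
    """Return the first memory block address for a wireless-stack install."""
    try:
        return FIRST_BLOCK[(device, stack)]
    except KeyError:
        raise ValueError(f"no known start address for {device}/{stack}")

print(hex(stack_start("STM32WB5x", "full")))
```

Encoding the mapping in the tool, as version 2.17 does, spares developers from digging through STM32CubeWB/Projects/STM32WB_Copro_Wireless_Binaries for the right address by hand.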
A portal to security on STM32
Readers of the ST Blog know STM32CubeProgrammer as a central piece of the security solutions present in the STM32Cube Ecosystem. The utility comes with Trusted Package Creator, which enables developers to upload an OEM key to a hardware secure module and to encrypt their firmware using this same key. OEMs then use STM32CubeProgrammer to securely install the firmware onto the STM32 SFI microcontroller. Developers can even use an I2C or SPI interface, which gives them greater flexibility. Additionally, the STM32H735, STM32H7B, STM32L5, STM32U5, and STM32H5 also support external secure firmware install (SFIx), meaning that OEMs can flash the encrypted binary on memory modules outside the microcontroller.
Secure Manager
Secure Manager has been officially supported since STM32CubeProgrammer 2.14 and STM32CubeMX 1.13. Currently, the feature is exclusive to our new high-performance MCU, the STM32H573, which supports a secure ST firmware installation (SSFI) without requiring a hardware secure module (HSM). In a nutshell, it provides a straightforward way to manage the entire security ecosystem on an STM32 MCU thanks to binaries, libraries, code implementations, documentation, and more. Consequently, developers enjoy turnkey solutions in STM32CubeMX while flashing and debugging them with STM32CubeProgrammer. It is thus an example of how STM32H5 hardware and Secure Manager software come together to create something greater than the sum of its parts.
Other security features for the STM32H5
STM32CubeProgrammer enables many other security features on the STM32H5. For instance, the MCU now supports secure firmware installation on internal memory (SFI) and an external memory module (SFIx), which allows OEMs to flash encrypted firmware with the help of a hardware secure module (HSM). Similarly, it supports certificate generation on the new MCU when using Trusted Package Creator and an HSM. Finally, the utility adds SFI and SFIx support on STM32U5s with 2 MB and 4 MB of flash.
Making SFI more accessible
The STM32HSM used for SFI with STM32CubeProgrammer
Since version 2.11, STM32CubeProgrammer has received significant improvements to its secure firmware install (SFI) capabilities. For instance, in version 2.15, ST added support for the STM32WBA5. Additionally, we added a graphical user interface highlighting addresses and HSM information. The GUI for Trusted Package Creator also received a new layout under the SFI and SFIx tabs to expose the information needed when setting up a secure firmware install. Trusted Package Creator also got a graphical representation of the various option bytes to facilitate their configuration.
Secure secret provisioning for STM32MPx
Since 2.12, STM32CubeProgrammer has a new graphical user interface to help developers set up parameters for the secure secret provisioning available on STM32MPx microprocessors. The mechanism has similarities with the secure firmware install available on STM32 microcontrollers. It uses a hardware secure module to store encryption keys and uses secure communication between the flasher and the device. However, the nature of a microprocessor means more parameters to configure. STM32CubeProgrammer’s GUI now exposes those settings previously available in the CLI version of the utility to expedite workflows.
Double authentication
Since version 2.9, STM32CubeProgrammer supports a double authentication system when provisioning encryption keys via JTAG or a Boot Loader for the Bluetooth stack on the STM32WB. Put simply, the feature enables makers to protect their Bluetooth stack against updates from end-users. Indeed, developers can update the Bluetooth stack with ST’s secure firmware if they know what they are doing. However, a manufacturer may offer a particular environment and, therefore, may wish to protect it. As a result, the double authentication system prevents access to the update mechanism by the end user. ST published the application note AN5185 to offer more details.
PKCS#11 support
Since version 2.9, STM32CubeProgrammer supports PKCS#11 when encrypting firmware for the STM32MP1. The Public-Key Cryptography Standards (PKCS) 11, also called Cryptoki, is a standard that governs cryptographic processes at a low level. It is gaining popularity as APIs help embedded system developers exploit its mechanisms. On an STM32MP1, PKCS#11 allows engineers to segregate the storage of the private key and the encryption process for the secure secret provisioning (SSP).
SSP is the equivalent of a Secure Firmware Install for MPUs. Before sending their code to OEMs, developers encrypt their firmware with a private-public key system with STM32CubeProgrammer. The IP is thus unreadable by third parties. During assembly, OEMs use the provided hardware secure module (HSM) containing a protected encryption key to load the firmware that the MPU will decrypt internally. However, until now, developers encrypting the MPU’s code had access to the private key. The problem is that some organizations must limit access to such critical information. Thanks to the new STM32CubeProgrammer and PKCS#11, the private key remains hidden in an HSM, even during the encryption process by the developers.
Supporting new STM32 MCUs
Access to the STM32MP13’s bare metal
Microcontrollers demand real-time operating systems because of their limited resources, and event-driven paradigms often require a high level of determinism when executing tasks. Conversely, microprocessors have a lot more resources and can manage parallel tasks better, so they use a multitasking operating system, like OpenSTLinux, our Embedded Linux distribution. However, many customers familiar with the STM32 MCU world have been asking for a way to run an RTOS on our MPUs as an alternative. In a nutshell, they want to enjoy the familiar ecosystem of an RTOS and the optimizations that come from running bare metal code while enjoying the resources of a microprocessor.
Consequently, today we are releasing STM32CubeMP13, which comes with the tools to run a real-time operating system on our MPU. We go into more detail about what’s in the package in our STM32MP13 blog post. Additionally, to make this initiative possible, ST updated its STM32Cube utilities, such as STM32CubeProgrammer. For instance, we had to ensure that developers could flash the NOR memory. Similarly, STM32CubeProgrammer enables the use of an RTOS on the STM32MP13 by supporting a one-time programmable (OTP) partition.
Traditionally, MPUs can use a bootloader, like U-Boot, to load the Linux kernel securely and efficiently. It thus serves as the ultimate first step in the boot process, which starts by reading the OTP partition. Hence, as developers move from a multitasking OS to an RTOS, it was essential that STM32CubeProgrammer enable them to program the OTP partition to ensure that they could load their operating system. The new STM32CubeProgrammer version also demonstrates how the ST ecosystem works together to release new features.
STM32WB and STM32WBA support
Since version 2.12, STM32CubeProgrammer has brought numerous improvements to the STM32WB series, which is increasingly popular in machine learning applications, as we saw at electronica 2022. Specifically, the ST software brings new graphical tools and an updated wireless stack to assist developers. For instance, the tool has more explicit guidelines when encountering errors, such as when developers try to update a wireless stack with the anti-rollback activated but forget to load the previous stack. Similarly, new messages will ensure users know if a stack version is incompatible with a firmware update. Finally, STM32CubeProgrammer provides new links to download STM32WB patches and get new tips and tricks so developers don’t have to hunt for them.
Similarly, STM32CubeProgrammer supports the new STM32WBA, the first wireless Cortex-M33. Made official a few months ago, the MCU supports Bluetooth Low Energy 5.3 and SESIP Level 3 certification. The MCU also has a more powerful RF stage that can reach up to +10 dBm output power to create a more robust signal.
STM32H5 and STM32U5
The support for STM32H5 began with STM32CubeProgrammer 2.13, which added compatibility with devices carrying anything from 128 KB up to 2 MB of flash. Initially, the utility brought security features like debug authentication and authentication key provisioning, which are critical when using the new life management system. The utility also supported key and certificate generation, firmware encryption, and signature. Over time, ST added support for the new STM32U535 and STM32U545 with 512 KB and 4 MB of flash. The MCUs benefit from RDP regression with a password to facilitate developments and SFI secure programming.
Additionally, STM32CubeProgrammer includes an interface for read-out protection (RDP) regression with a password for STM32U5xx. Developers can define a password and move from level 2, which turns off all debug features, to level 1, which protects the flash against certain reading or dumping operations, or to level 0, which has no protections. It will thus make prototyping vastly simpler.
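On many STM32 families the RDP level is encoded in a single option byte: 0xAA selects level 0, 0xCC selects level 2, and any other value yields level 1 (exact encodings vary by family, so the device reference manual is the authority). A sketch of that decode, mirroring the three levels described above:

```python
# Decode an RDP option byte as documented for many STM32 families:
#   0xAA -> level 0 (no protection)
#   0xCC -> level 2 (all debug features disabled)
#   any other value -> level 1 (flash read/dump protection)
# Encodings can differ per family; check the reference manual.
def rdp_level(option_byte: int) -> int:
    if option_byte == 0xAA:
        return 0
    if option_byte == 0xCC:
        return 2
    return 1

print(rdp_level(0xAA), rdp_level(0x55), rdp_level(0xCC))
```

The "any other value means level 1" rule is why accidentally corrupting this byte locks the flash rather than leaving it open, and why the password-based regression from level 2 back to level 1 or 0 is such a practical prototyping aid.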
STLINK-V3PWR
In many instances, developers use an ST-LINK probe with STM32CubeProgrammer to flash or debug their device. Hence, we quickly added support for our latest STLINK-V3PWR probe, the most extensive source measurement unit and programmer/debugger for STM32 devices. If users want to see energy profiles and visualize the current draw, they must use STM32CubeMonitor-Power. However, STM32CubeProgrammer will serve as an interface for all debug features. It can also work with all the probe’s interfaces, such as SPI, UART, I2C, and CAN.
Script mode
The software includes a command-line interface (CLI) to enable the creation of scripts. Since the script manager is part of the application, it doesn’t depend on the operating system or its shell environment. As a result, scripts are highly sharable. Another advantage is that the script manager can maintain connections to the target. Consequently, STM32CubeProgrammer CLI can keep a connection live throughout a session without reconnecting after every command. It can also handle local variables and even supports arithmetic or logic operations on these variables. Developers can thus create powerful macros to automate complex processes. To make STM32CubeProgrammer CLI even more powerful, the script manager also supports loops and conditional statements.
A unifying experience
STM32CubeProgrammer aims to unify the user experience. ST brought all the features of utilities like the ST-LINK Utility, DFUs, and others to STM32CubeProgrammer, which became a one-stop shop for developers working on embedded systems. We also designed it to work on all major operating systems and even embedded OpenJDK8-Liberica to facilitate its installation. Consequently, users do not need to install Java themselves and struggle with compatibility issues before experiencing STM32CubeProgrammer.
Qt 6 support
Since STM32CubeProgrammer 2.16, the ST utility uses Qt 6, the framework’s latest version. Consequently, STM32CubeProgrammer no longer runs on Windows 7 and Ubuntu 18.04. However, Qt 6 patches security vulnerabilities, brings bug fixes, and comes with significant quality-of-life improvements.
The post STM32CubeProgrammer 2.17 simplifies serial numbering and option byte configurations appeared first on ELE Times.
How Synopsys IP and TSMC’s N12e Process are Driving AIoT
Hezi Saar | Synopsys
Artificial intelligence (AI) is revolutionizing nearly every aspect of our lives across all industries, driving the transformation of technology from development to consumption and reshaping how we work, communicate, and interact. Meanwhile, the Internet of Things (IoT) connects everyday objects to the internet, enabling a network of interconnected devices that brings improved efficiency and enhanced convenience to our lives.
The union of AI and IoT, known as AIoT, integrates AI capabilities into IoT devices and is further poised to change our lives and drive the semiconductor industry’s expansion in the foreseeable future. AIoT devices can analyze and interpret data in real-time, enabling smart decisions, autonomously adapting to observed conditions. Promising heightened intelligence, connectivity, and device interactivity, AIoT is capable of handling vast data volumes without needing to rely on cloud-based processing methods.
Within AIoT devices, AI seamlessly integrates into infrastructure components, including programs and chipsets, all interconnected via IoT networks. From smart cities to smart homes and industrial automation, AIoT applications require real-time data processing that is powered by high-capacity on-chip memories, compute power, and minimal power consumption.
Read on to learn more about the opportunities and challenges of AIoT applications at the edge as well as Synopsys IP on TSMC’s N12e process and how it supports pervasive AI at the edge.
AIoT Applications at the Edge
AI is truly everywhere and can be found in data centers, cars, and high-end compute devices. However, processing data at or close to the source of information complements the cloud-based AI approach and allows for immediate processing of data and speedy results for optimal service, more personalized functions for the user, protection of information and additional privacy, and additional reliability.
Smartwatches, security cameras, smart fridges, automation-enabled factory machinery, smart traffic lights, and more are all considered AIoT devices. Each of these devices is unique in some way, which requires chip designers to find the right balance between performance, power usage, and cost.
For an application like smart cities, low power is the bigger factor, although performance can’t be completely ignored. For example, consider a smart streetlamp with sensing capabilities that is programmed to turn on at sunset and off at sunrise. With an average streetlamp standing around 30 feet tall, replacing a burnt-out bulb or any other component is a costly and time-consuming task. Dimming the lights late at night is also more cost-effective and environmentally friendly, and it reduces the light pollution these streetlamps usually cause. That’s why designing these smart devices to consume as little power as possible over years of use is so important; it extends the life of the streetlamp and enables a smart-city environment.
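The streetlamp behavior described above can be sketched as a small controller routine. This is a hypothetical illustration only: the function name, threshold, and duty-cycle values are invented for the example and are not from any real product.

```c
#include <stdint.h>

/* Hypothetical sketch: a smart streetlamp picks a PWM dimming level
 * from ambient light and time of day, so it runs at full brightness
 * in the evening, dims late at night to save power and cut light
 * pollution, and stays off (with the MCU sleeping) in daylight. */

#define LUX_NIGHT_THRESHOLD 50   /* below this, it is dark enough to light */

/* Returns a lamp-driver duty cycle from 0 to 100. */
uint8_t lamp_duty(uint16_t ambient_lux, uint8_t hour_of_day)
{
    if (ambient_lux >= LUX_NIGHT_THRESHOLD)
        return 0;                /* daylight: lamp off, controller can deep-sleep */
    if (hour_of_day >= 23 || hour_of_day < 5)
        return 40;               /* late night: dimmed to reduce power draw */
    return 100;                  /* evening/early morning: full brightness */
}
```

Because the controller spends most of its time idle between sensor polls, the average current draw stays low, which is exactly the property that stretches the device’s service life.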
Additionally, minimizing power consumption naturally leads to smaller cost, size, and weight. It also helps maximize the user experience, increase silicon reliability, extend the lifespan of the IoT device, and lessen environmental impact. Overall, AIoT applications are driving demand for high-performance, low-latency memory interfaces on low-leakage nodes.
AIoT Products and Their Corresponding Power-Saving Approaches

Many different power-saving approaches can be built into the IP, and ultimately the chip, depending on how the AIoT device is powered.
- Battery-Powered: Sensors that detect water, fire/smoke, intruders, etc. stay idle until the alarm, camera, or Wi-Fi trigger fires. Often the entire sensor must be replaced once its job is done, so external power gating (read more on that below) is the best solution. Other battery-powered applications such as door locks and key fobs allow for battery replacement and may require USB 1.1/2.0 connectivity with a power island from Vbus, plus non-volatile memory (NVM).
- Battery-Powered with Energy Harvesting: Examples include doorbells, security cameras, environment sensors, electronic price tags, remote controls, and more. IP opportunities for these products include CSI for the camera, M-PHY or eMMC for storage, SPI/PCIe for Wi-Fi, DSI for the display, and USB 2.0 in advanced products to assist with charging and firmware download.
- Portable: Users charge these products when needed based on the use case. For instance, wearables, personal infotainment devices, audio headsets, e-readers, etc. need charging every few days to several weeks depending on how often they are used. Other devices, like laptops and phones, must save power whenever they are not connected to an external power source, which calls for fast sleep/resume and power gating where applicable.
- Stationary: Devices that facilitate home networking, home automation, and security, as well as home hubs like the Amazon Echo Show or Google Nest, are either powered most of the time in a docking station or plugged in all the time with battery backup to retain settings. Fast sleep/resume and dynamic voltage and frequency scaling (DVFS) are both useful for saving power here.
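The four device classes above can be summarized as a simple policy table. The sketch below is illustrative only; the enum and function names are invented for this example, and a real power-management framework would expose far richer options.

```c
#include <stdbool.h>

/* Hypothetical mapping from the device classes listed above to a
 * default low-power policy; names are illustrative, not a real API. */

typedef enum { DEV_BATTERY, DEV_HARVESTING, DEV_PORTABLE, DEV_STATIONARY } device_class_t;
typedef enum { POLICY_POWER_GATE, POLICY_DUTY_CYCLE, POLICY_SLEEP_RESUME, POLICY_DVFS } power_policy_t;

power_policy_t pick_policy(device_class_t dev, bool on_external_power)
{
    switch (dev) {
    case DEV_BATTERY:    return POLICY_POWER_GATE;    /* idle until a trigger fires */
    case DEV_HARVESTING: return POLICY_DUTY_CYCLE;    /* live within the harvested budget */
    case DEV_PORTABLE:   return on_external_power ? POLICY_DVFS
                                                  : POLICY_SLEEP_RESUME;
    case DEV_STATIONARY: return POLICY_DVFS;          /* mostly docked or plugged in */
    }
    return POLICY_SLEEP_RESUME;                       /* safe default */
}
```

The point of the table is that no single technique fits every product: the right policy follows from how the device is powered and how often it is active.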
The semiconductor industry has considered 16nm and 12nm “long nodes” (nodes that will be around for many years to come) for consumer, IoT, wireless, and certain automotive applications. These nodes can leverage AI because the FinFET process gives them strong performance while remaining cost-effective and low power.
TSMC has invested in boosting the performance and power efficiency of these nodes, making them even more appealing for power-conscious designs. For example, N12e offers a device boost for higher density with good performance/power tradeoffs and ultra-low-leakage static random-access memories (SRAMs).
Not only does this provide approximately 15% power savings and the memory required to process all that data at the edge, but it is also compatible with existing design rules to minimize IP investment. That’s where Synopsys comes in.
Synopsys IP reduces leakage even further through a variety of different techniques:
- Power Gating: This technique can be entered from an active state or from a disabled, “turned-off” state. Entering from an active state requires retention and restore circuitry so that the IP’s current state can be saved and recovered when power gating is exited; this mode needs always-on domain retention and power-control logic. Entering power gating from a disabled state requires IP that supports power collapse and must be restarted after power gating is exited.
- Voltage Scaling: IP is also available to scale the supply voltage down in order to reduce leakage; the best-known form is dynamic voltage and frequency scaling (DVFS). With DVFS, the IP continues performing functional activity, but the clock frequency is lowered along with the voltage so that timing requirements are still met.
- Retention: In this technique, the voltage is reduced to a level where registers still hold their current values, with the expectation that the IP performs no functional activity; there is no toggling, the block stays in IDLE mode, and no setup/hold sign-off is required at the retention voltage.
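The voltage-scaling idea above amounts to choosing an operating point that matches the current workload. The following sketch shows the concept under stated assumptions: the voltage/frequency pairs and thresholds are invented for illustration and are not N12e characterization data.

```c
#include <stdint.h>

/* Hypothetical DVFS sketch: pick a voltage/frequency operating point
 * (OPP) from recent utilization. Lower voltage means lower leakage,
 * but the clock must slow down so timing is still met. */

typedef struct { uint16_t mv; uint16_t mhz; } opp_t;

static const opp_t opps[] = {
    { 600, 100 },   /* low point: minimal leakage, reduced clock */
    { 720, 400 },   /* balanced point */
    { 800, 800 },   /* full-performance point */
};

/* utilization_pct: how busy the IP block has recently been, 0..100. */
opp_t select_opp(uint8_t utilization_pct)
{
    if (utilization_pct < 30) return opps[0];
    if (utilization_pct < 70) return opps[1];
    return opps[2];
}
```

Retention goes one step further than the lowest point here: the voltage drops below the functional minimum, so state is preserved but no work is done until the supply is raised again.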
The IP used to design AIoT chips must be versatile in order to support the many different use cases and applications powered by the N12e process and other low-power nodes. Higher-performance chips require a more sophisticated low-power strategy that combines the various techniques described above.
As AIoT devices become even more prevalent in our homes, workplaces, and cities, Synopsys and TSMC will continue to develop even more sophisticated high-performance, low-power solutions to fuel further innovation in this space.
The post How Synopsys IP and TSMC’s N12e Process are Driving AIoT appeared first on ELE Times.