Microelectronics world news

Thermoelectric cooler efficiency and heatsink thermal impedance

EDN Network - Wed, 01/17/2024 - 12:53

Thermoelectric coolers (TECs) are common and look simple, but simple (and usefully accurate) design models and equations for them are less common. This design model has served well in a variety of applications, and its input needs only numbers provided in typical TEC datasheets. Though it simplifies TEC physics, it's realistic and accurate enough to be useful. It predicts TEC thermal load temperature (T) as a function of TEC datasheet parameters, drive current (I), thermal load power dissipation, thermal conductivity, heatsink thermal impedance, and ambient temperature (T3).

Wow the engineering world with your unique design: Design Ideas Submission Guide

The model is summarized in a single second-order equation for T = TEC output temperature.

T = (−P·I + I²·Rp/2 + Q1)/(C1 + Cp) + Zh·(Q1 + I²·Rp) + T3

Where:

  • P (Watts/Amp) = Peltier constant = (Qmax + Imax²·Rp/2)/Imax
  • Qmax (Watts) = maximum heat transfer across zero delta T (from TEC datasheet)
  • Imax = current for max cooling with perfect (Zh = 0) heatsink (from TEC datasheet)
  • Vmax = TEC voltage drop at Imax (from TEC datasheet)
  • Rp = TEC resistance = Vmax/Imax
  • Q1 = heat produced by thermal load
  • C1 (W/°C) = thermal conductivity of thermal load to ambient
  • Cp = TEC thermal conductivity = Qmax/DeltaTmax
  • DeltaTmax  = max cooling with Imax and perfect heatsink (from TEC datasheet)
  • Zh (°C/W) = heatsink thermal impedance to ambient
  • T3 = ambient temperature

For a typical example of how this math applies to a real TEC, consider the Laird Thermal Systems 430007-509:

  • Qmax: 3 W
  • Imax: 1.5 A
  • Vmax: 3.4 V
  • DeltaTmax: 67°C

Then:

Rp = 3.4/1.5 = 2.27 Ω

P = (3 + 1.5² × 2.27/2)/1.5 = 5.55/1.5 = 3.7 W/A

Cp = 3 W/67°C = 0.0448 W/°C

A useful relationship quantified by the design model math is the effect of heatsink thermal impedance on the optimum TEC drive current that generates maximum cooling. It results when the T equation is differentiated with respect to I and then solved for the maximum at dT/dI = 0. It yields:

Io = (P·Zh⁻¹)/{Rp·[Zh⁻¹ + 2(C1 + Cp)]}
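
Packaged as a short script, the model and the worked example above look like this. This is a sketch; the function names are mine, and the input values are the Laird 430007-509 datasheet numbers:

```python
# Design-model equations from the article, exercised with the
# Laird Thermal Systems 430007-509 datasheet values.

def tec_params(Qmax, Imax, Vmax, dTmax):
    """Derive model constants from TEC datasheet values."""
    Rp = Vmax / Imax                       # TEC resistance (ohms)
    P = (Qmax + Imax**2 * Rp / 2) / Imax   # Peltier constant (W/A)
    Cp = Qmax / dTmax                      # TEC thermal conductivity (W/degC)
    return P, Rp, Cp

def load_temp(I, P, Rp, Cp, Q1, C1, Zh, T3):
    """Thermal-load temperature T at drive current I."""
    return (-P * I + I**2 * Rp / 2 + Q1) / (C1 + Cp) + Zh * (Q1 + I**2 * Rp) + T3

def optimum_current(P, Rp, Cp, C1, Zh):
    """Drive current Io for maximum cooling (from dT/dI = 0)."""
    return P / (Rp * (1 + 2 * Zh * (C1 + Cp)))

P, Rp, Cp = tec_params(Qmax=3.0, Imax=1.5, Vmax=3.4, dTmax=67.0)
print(round(P, 2), round(Rp, 2), round(Cp, 4))   # 3.7 2.27 0.0448

# Example: optimum current and load temperature with a 1 degC/W
# heatsink, no load dissipation (Q1 = 0, C1 = 0), 25 degC ambient:
Io = optimum_current(P, Rp, Cp, C1=0.0, Zh=1.0)
T = load_temp(Io, P, Rp, Cp, Q1=0.0, C1=0.0, Zh=1.0, T3=25.0)
print(round(Io, 2), round(T, 1))   # roughly 1.5 A and -36.9 degC
```

Note that with a perfect heatsink (Zh = 0) the formula reduces to Io = P/Rp.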

Io(Zh⁻¹) is plotted for the Laird TEC in Figure 1 (black) with the corresponding maximum Delta T (blue). Note how both curves trend to zero as Zh⁻¹ is reduced. This is mainly because the I²·Rp heat dissipated by the TEC must be dumped to ambient by the heatsink, which raises its temperature, and therefore that of the TEC, in direct proportion to Zh.

Figure 1 TEC max-cooling drive current (black) and resulting cooling (blue) as functions of heatsink thermal admittance (Zh⁻¹).

Even when TEC cooling ability remains adequate and DeltaT constant, the effect on TEC current draw and power consumption is dramatic, as illustrated in Figure 2 for an example DeltaT of 40°C (Q1 and C1 = 0).

Figure 2 TEC current draw I (black) versus heatsink thermal admittance (Zh⁻¹) for constant 40°C DeltaT.

Note that current consumption increases by 63% and power by 165% as Zh⁻¹ declines from 1.0 to 0.13 W/°C.

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.

Related Content


The post Thermoelectric cooler efficiency and heatsink thermal impedance appeared first on EDN.

Connecting Continents: Exploring the Essence, Operations, and Future Trajectory of GSM Technology

ELE Times - Wed, 01/17/2024 - 12:21

What is GSM Technology:

GSM, or Global System for Mobile Communications, is a standard that has revolutionized the way we communicate wirelessly. Developed in the 1980s and first deployed in the early 1990s, GSM quickly became the predominant technology for mobile networks globally. It defined 2G and laid the groundwork for 3G and even 4G networks, offering a standardized platform for seamless communication across borders.

How GSM Technology Works:

At its essence, GSM operates on a foundation of time and frequency division. This is achieved through a technique known as Time-Division Multiple Access (TDMA), where time is divided into slots, and frequencies are segmented into channels. This efficient use of spectrum enables multiple users to share the same frequency without interference.
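
As a rough illustration of the TDMA idea (not actual GSM signaling), one carrier's eight-slot frame can be modeled as below. The scheduling helper and user names are hypothetical, though the eight-slot frame of roughly 4.615 ms is standard GSM:

```python
# Toy illustration of TDMA slot sharing, loosely modeled on GSM's
# 8-slot frame. The slot_owner helper is a simplification of my own,
# not real GSM channel assignment.

SLOTS_PER_FRAME = 8     # GSM divides each carrier into 8 time slots
FRAME_MS = 4.615        # approximate GSM TDMA frame duration, ms

def slot_owner(assignments, t_ms):
    """Return which user owns the carrier at time t_ms, given a
    slot -> user assignment dict for one carrier frequency."""
    slot_ms = FRAME_MS / SLOTS_PER_FRAME
    slot = int(t_ms / slot_ms) % SLOTS_PER_FRAME
    return assignments.get(slot)   # None means the slot is idle

# Three users sharing one carrier, each granted one slot per frame:
users = {0: "alice", 3: "bob", 5: "carol"}
print(slot_owner(users, 0.1))   # alice (slot 0, early in the frame)
print(slot_owner(users, 2.0))   # bob (2.0 ms falls in slot 3)
```

Because each user transmits only during its assigned slot, up to eight users can share the same frequency without interfering, which is the spectrum efficiency the paragraph above describes.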

The GSM system comprises several key components. Mobile stations, or phones, connect to Base Transceiver Stations (BTS) that send and receive signals. Base Station Controllers (BSC) manage these BTS, ensuring smooth handovers between different stations as a mobile device moves. Mobile Switching Centers (MSC) oversee call routing and communication between mobile devices, while the Home Location Register (HLR) stores crucial subscriber information.

Digital transmission lies at the heart of GSM. When a GSM-enabled device is turned on, it searches for the nearest available BTS to establish a connection. Voice and data are transmitted in digital form, using a combination of circuit-switching for voice calls and packet-switching for data transmission. This dynamic approach ensures efficient use of the network, maintaining call quality and enabling data transfer.

GSM Technology Architecture:

The architecture of GSM is designed to facilitate reliable and widespread mobile communication. The Mobile Switching Center (MSC) plays a pivotal role in call routing, handovers, and overall network management. The Home Location Register (HLR) stores subscriber information, including user profiles and current locations, enabling efficient call routing.

A distinctive feature of GSM is the Subscriber Identity Module (SIM) card. This small chip, inserted into mobile devices, holds user information, allowing users to switch devices while retaining their identity and personal data. This modular approach enhances user flexibility and security, contributing to the widespread adoption of GSM.

GSM Technology Uses:

GSM technology has transcended traditional voice communication, finding applications in diverse sectors. Short Message Service (SMS) was one of its initial breakthroughs, allowing users to send text messages. The introduction of General Packet Radio Service (GPRS) enabled faster data transmission, opening the doors to internet access and multimedia messaging.

Beyond communication, GSM has made significant inroads into various industries. In healthcare, GSM-enabled devices are employed for remote patient monitoring, facilitating real-time data transmission from medical devices to healthcare providers. The financial sector leverages GSM for mobile banking and secure transactions, ensuring reliable and secure communication. Transportation systems utilize GSM technology for tracking and managing fleets, improving efficiency and safety.

Future of GSM Technology:

As technology continues to evolve, the future of GSM technology holds promise. While newer generations like 5G are gaining traction, GSM remains a critical player in the connectivity landscape. Its widespread infrastructure and compatibility make it a reliable choice for many regions. Moreover, the legacy of GSM is likely to persist as networks transition and upgrade, ensuring backward compatibility and a seamless user experience.

In conclusion, GSM technology stands as a beacon of wireless communication, connecting people across the globe. Its robust working mechanism, architectural design, and diverse applications have shaped the way we communicate and paved the way for further innovations. As we look ahead, GSM is poised to continue playing a significant role in the evolving landscape of mobile technology.

The post Connecting Continents: Exploring the Essence, Operations, and Future Trajectory of GSM Technology appeared first on ELE Times.

Lenovo’s Smart Clock 2: A “charged” device that met a premature demise

EDN Network - Wed, 01/17/2024 - 11:46

Back in April 2022, EDN published my teardown of Lenovo’s first-generation Google Assistant-cognizant diminutive smart speaker, the Smart Clock Essential:

As I mentioned at the time, the second-generation Smart Clock Essential had already been introduced, and switched to supporting Amazon’s Alexa ecosystem. Lenovo had at that point also already introduced two generations’ worth of its larger touchscreen-inclusive Smart Clock, both Google Assistant-based, most recently (nearly a year earlier, to be exact) the redesigned Smart Clock 2, which initially cost $69.99 standalone:

or $89.99 bundled with an optional charging dock that powers the clock itself, a wirelessly charged (MagSafe-compatible, to boot) smartphone or other device, and a USB-tethered device (the USB charging port moved from the back of the speaker itself to the dock in this second-generation design):

That said, I bought mine brand new direct from Lenovo at the end of 2022 for only $29.99, complete with the docking station (which I'll save for another teardown to come). Why the substantial price drop, along with the broader Google-to-Amazon voice interface service switch? I don't have direct insight, but I suspect it had something to do with coverage that appeared a few months later, like "Google cuts off third-party smart displays as Assistant support dwindles":

Google has stopped pushing updates for some third-party smart displays, reflecting a broader shift away from Assistant products. On a support page spotted by 9to5Google, the company says it will no longer provide software updates for the Lenovo Smart Display, JBL Link View, and the LG Xboom AI ThinQ WK9 Smart Display.

 All three displays made their debut in 2018, just months after Google first announced the Smart Display platform and its own Home Hub (now called Nest Hub) as it sought to compete with Amazon’s Alexa. While Google provided some new features to these devices in the years that followed, they never received the same kind of attention it gives to its Nest Hub displays.

 Google’s decision to end support for its third-party devices doesn’t mean they’ll stop working, but it is indicative of what’s been happening over the past few years: they just won’t receive any new features or updates.

That’s unfortunate, because the Smart Clock 2 (along with its Smart Clock 1 and Smart Clock 1 Essential predecessors) were generally quite well-reviewed. And, as you’ll soon see, it’s also interesting internally.

Let’s dive in, beginning with some external packaging shots:

Pop open the (outer) box lid:

and inside, you’ll find a box at the bottom, which I assume is the dock:

a cardboard space-filler:

and the box containing our victim (perhaps obviously, the packaging modularity affords optional smart speaker standalone sale absent the dock, as per the earlier noted pricing differential):

(As shown here, the smart speaker comes in three color options: Shadow Black (which is oddly not listed on the product spec sheet), Heather Gray, and Abyss Blue. Mine's Heather Gray. The charging dock comes only in white.)

Open sesame, once again:

To the right, obviously, is the power adapter, with a barrel connector to the smart speaker:

Below are two pieces of documentation:

And last but not least, here’s the star of the show:

Now freed from its clear-plastic confines, and as-usual accompanied by a 0.75″ (19.1 mm) diameter U.S. penny for size comparison purposes. The Smart Clock 2 has dimensions of 3.67″ x 4.47″ x 2.81″ (93.30mm x 113.48mm x 71.33mm) with a weight “starting at” (whatever that means) 0.66 lbs. (298 grams):

Compare it to the first-generation Smart Clock, released in January 2019 (and available only in gray and black, by the way):

and the particulars of the dock-friendly redesign will likely be immediately obvious. Here are some more perspectives on our patient:

(note the “pogo plug” contacts for the charging dock interface)

I’m betting that the rubber “foot” around the bottom-side edges is our pathway inside. Agree or disagree, readers?

Ayuh (or, if you prefer, ayup):

Putting aside for the moment the now-revealed bulk of the device’s “guts”, let’s check out the inside of the bottom panel first:

Perhaps obviously, considering its proximity, the circuitry on this mini-PCB handles two main tasks: managing the power coming in either via the barrel connector (if used standalone) or the "pogo pins" (when used with the dock), and muting the microphone array as appropriate based on the next-door switch setting. The flex cable you likely already saw earlier transfers both signals (analog DC power and digital on/off) to the main PCB.

Let’s get that mini-PCB out of the bottom chassis piece for a closer look:

Now-revealed underside of the mini-PCB first:

And now the (standalone) other side you saw already:

Not much to write home about, eh? (post your thoughts in the comments if you disagree!)

At this point, returning to the main assembly, I admittedly was a bit stuck:

Let’s first orient ourselves; at bottom of this photo, resting on my desk, is the front of the device. See those two screws toward the bottom?

Unfortunately, removing them didn’t get me anywhere. The insides wouldn’t budge. But diving into the photos included with the FCC certification documentation (ID O57CD24502F) clued me in that the touchscreen display was enclosed within an assembly that, with a bit of dexterity, could be popped out the front of the device:

Voila:

Removing those two screws, by the way, had absolutely been necessary; they’re what held the display assembly in “permanent” place.

Here’s a closeup of the currently-reinforcement-taped flex PCB cable connection to the remainder of the system:

Along with another closeup, this our first peek at the two-microphone-array PCB:

(note that one of the two rubber seals between the mic ports and the display assembly ended up still attached to the former, with the other instead clinging to the latter)

A quick aside, while we’re staring at this particular circuit board. If you look closely at the tech specs on the product page, you’ll see two sensors listed:

  • L-Sensor
  • G-Sensor

Neither is described in any detail whatsoever. I’m guessing (limited online chatter concurs) that:

  • “L” stands for “luminance”, i.e. “ambient light”, and it’s probably the sensor in-between the two microphones, one on either end. It adjusts the LCD backlight based on the ambient illumination (or lack thereof) in whatever environment the speaker’s located.
  • “G” does not stand for “gyro”, because whatever would be the reason to discern the orientation of a device designed to sit on a flat surface? Instead, I think it stands for “g-force”, i.e., an accelerometer. It senses when the Smart Clock 2 has been tapped (or slapped? Punched?) by the (cranky, sleepy) owner and shuts off the alarm. Dunno if we’re going to be able to definitively locate (nearby the top-side volume switches would be a reasonable guess, methinks) and ID this one…

Onward, again. Let’s disconnect that touchscreen LCD assembly:

That’s all the further I’m going here. The display on the first-generation Smart Clock is specified as a “4″ HD+ (800 x 480) IPS, touch compatible” unit, but the one on this successor is only spec’d as being a “4.0″ LCD IPS Touchscreen” (I’m betting the resolution’s the same, and reviews concur).

Now let’s get that mic array PCB outta there:

The microphones are MEMS-based (and PCB backside-mounted), as I suspected:

And now once again to the front for a standalone (and closer) inspection:

Look back at the earlier shot of the LCD assembly separated from the main chassis and within the latter you’ll see four more screws (in addition to the four holding the speaker in place, which we’ll get to later), one in each newly revealed corner. Let’s get those out next:

And now the “guts” slide easily out of the remainder of the device chassis:

Let’s again orient ourselves at this point:

You’re looking at the front of the device; the speaker points toward and slightly downward below the backside of the display, and the (majority of the) sound exits through the mesh below the display. The flex cables for the display (to the left) and mic array PCB (right) come out the top, and the one going to the power-and-mute-switch PCB comes out the bottom.

Side views of the assembly:

Note the antenna on this side:

The bottom:

And finally, on the back, there’s the main system PCB!

Let’s free it from its captivity, unhooking the antenna connection while we’re at it:

Here’s the antenna peeled off and standalone:

The Smart Clock 2 supports 2.4 GHz-only 802.11 Wi-Fi, along with Bluetooth 4.2. I can’t find any other discrete antennae anywhere, and (foreshadowing) I don’t see any embedded in the main PCB, either, so I assume this antenna does RF double-duty for both 2.4 GHz technologies. The Smart Clock 1 precursor had handled 5 GHz Wi-Fi, too; was this a Gen2 cost-reduction decision?

Once again, onward. I’m betting that thick two-wire cable harness goes to the speaker:

I’d also wager a bet that the square hole the speaker harness goes through also acts as a rear bass port for the speaker:

And here we are, first showing the now-revealed heat sink attached to the PCB backside:

which pops right off:

Oh goodie, a Faraday Cage!

To orient (and educate) myself, I did an after-the-fact partial reassembly to remind myself how these pieces fit together in the first place. Here are the photos I snapped, which may also be useful to at least some of you:

And here’s the PCB initially-exposed side again, standalone this time:

Pressing “pause” on the inevitable Faraday cage top lid peel-off to come, I returned my attention to the transducer at this point:

The spec sheet says that it’s a “1.5″ 3 W front-firing speaker”, although to be precise, it says “speakers” (unless the other one’s invisible, that’s a typo). As comparison, the specs for the first-generation Smart Clock state “1.5″ 3 W speaker, with 2x passive radiators”:

And now, back to that Faraday cage:

Time for a closer view:

Along the top, left to right, are first two ICs with Micron logos and the following package marks:

IFP77
D9SHD

Google tells me that they’re MT41K256M16LY-093 4 Gbit DDR3-1066 SDRAMs, which jibes with the Smart Clock 2 spec sheet claim that the device contains 1 GByte of total system DRAM. Next to them at far right is a Toshiba-now-Kioxia THGBMJg6C1LBAIL 8 GByte eMMC flash memory. And in the bottom row are two MediaTek SoCs. The larger one at left is the MT8167A application processor, in contrast to Lenovo’s product spec sheet, which claims that it’s the “MT8167S Processor (1.50 GHz)”. And at right is the smaller MT6392A, information on which is quite difficult to find but which appears to be a power management IC.

One other region of this side of the PCB, in the upper right corner, begs for closeup attention:

The notable IC here is Texas Instruments’ TAS5805M 23 W (stereo) and 45 W (mono, in this configuration) Class D amplifier. Its function is unsurprising given that it’s in close proximity to one of the volume switches (above it; the other one is in the upper left corner of the PCB) and the speaker wiring harness cable below it.

Flipping the PCB back over, the only thing of note is a closeup of the connector for the flex cable that mates this PCB to the power-and-mute one you saw earlier:

I’ll close with some side views:

and an interesting link that I found during my research to the documented progress of some enterprising hackers that have attempted to tap into the Smart Clock 2’s hardware and Android 10 firmware foundations. And with that, it’s over to you for your thoughts in the comments!

Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.

Related Content


The post Lenovo’s Smart Clock 2: A “charged” device that met a premature demise appeared first on EDN.

SweGaN appoints Stefan Axelsson as CFO and Anders Lundskog as R&D manager

Semiconductor today - Wed, 01/17/2024 - 11:41
SweGaN AB of Linköping, Sweden — which develops and manufactures custom gallium nitride on silicon carbide (GaN-on-SiC) epitaxial wafers (based on a unique growth technology) for telecoms, satcoms, defense and power electronics applications — has appointed Stefan Axelsson as chief financial officer and Anders Lundskog as R&D manager. Axelsson has joined the executive management team, and Lundskog has joined in a new R&D role...

Infineon’s CoolGaN enables OMRON’s new V2X charging systems

Semiconductor today - Wed, 01/17/2024 - 11:40
Infineon Technologies AG of Munich, Germany has partnered to combine its gallium nitride (GaN)-based power solutions with the circuit topology and control technology of Tokyo-based OMRON Social Solutions Co Ltd, enabling what is reckoned to be one of Japan’s smallest and lightest vehicle-to-everything (V2X) charging systems. The partnership is expected to further drive innovation towards wide-bandgap materials in power supplies, helping to accelerate the transition to renewable energies, a smarter grid, and the adoption of electric vehicles, while fostering decarbonization and digitalization...

Infineon and OMRON Social Solutions Collaborate to Revolutionize Electric Vehicle Charging in Japan

ELE Times - Wed, 01/17/2024 - 09:26

In a groundbreaking announcement, Infineon Technologies AG has joined forces with OMRON Social Solutions Co. Ltd., a trailblazing company in social systems technology. This strategic partnership integrates Infineon’s cutting-edge gallium nitride (GaN) based power solutions with OMRON’s innovative circuit topology and control technology, resulting in the creation of one of Japan’s smallest and lightest vehicle-to-everything (V2X) charging systems.

The collaboration leverages Infineon’s CoolGaN technology within the KPEP-A series, a multi-V2X system by OMRON Social Solutions. This system achieves a 60% reduction in size and weight compared to conventional designs while offering a charging capability of 6 kW. With the integration of Infineon’s CoolGaN solution, the V2X system exhibits increased power efficiency, with improvements exceeding 10% at light load and approximately 4% at rated load.

OMRON Social Solutions has enhanced its EV charger and discharger system, enabling bi-directional charging and discharging paths between renewable energy sources, the grid, and EV batteries. This development aligns with broader efforts to accelerate the transition to renewable energies, promote a smarter grid, and facilitate the widespread adoption of electric vehicles, thereby advancing decarbonization and digitalization initiatives.

Adam White, Division President of Power & Sensor Systems at Infineon, expressed excitement about the collaboration, stating, “Our CoolGaN-based solutions directly contribute to speeding up the transition to renewable energies, reducing CO2 emissions, and driving decarbonization. It will also make charging electric vehicles easier and more convenient for consumers, helping to overcome one of the biggest barriers to EV adoption.”

Atsushi Sasawaki, Managing Executive Officer and Senior General Manager for the Energy Solutions Business of OMRON Social Solutions highlighted the significance of the collaboration, stating, “Having access to a broad portfolio of wide bandgap (WBG) solutions significantly increases the functionality, performance, and quality of our products. We look forward to further developing GaN- and SiC-based power solutions with Infineon to help drive renewable energy and electric vehicles.”

Wide bandgap semiconductors made of silicon carbide and gallium nitride play a pivotal role in this collaboration, offering greater power efficiency, smaller size, lighter weight, and lower overall cost than conventional semiconductors. With over two decades of heritage in SiC and GaN technology development, Infineon is positioned as a leading power supplier, addressing the need for smarter, more efficient energy generation, transmission, and consumption.

The post Infineon and OMRON Social Solutions Collaborate to Revolutionize Electric Vehicle Charging in Japan appeared first on ELE Times.

STM32WBA, 1st wireless Cortex-M33 for more powerful and more secure Bluetooth applications #STM32InnovationLive

ELE Times - Wed, 01/17/2024 - 08:15

Author: STMicroelectronics

Update, December 21, 2023

The STM32WBA52xx are now available in a QFN32 package measuring only 5 mm x 5 mm as opposed to the QFN48 package of 7 mm x 7 mm. Integrators will gravitate towards the models with fewer pins for projects that use fewer interfaces and timers, which are often used for wake-up capabilities, among other things. While the first STM32WBA maximized features, we also know that not all developers need 16 wake-up pins and would rather get the benefits of a smaller package. The QFN32 housing can thus help them tailor their systems to save space to create a more compact and cost-effective design.

Original publication, March 10, 2023

The STM32WBA is the first wireless STM32 to open the way for a Bluetooth Low Energy 5.3 and SESIP Level 3 certification. At its heart, the new series uses an architecture inspired by the STM32U5. We find a similar Cortex-M33, but running at 100 MHz, and flash capacities varying from 512 KB to 1 MB. While the STM32WBA has a dedicated firmware package (STM32CubeWBA), it supports current profiles for STM32WB microcontrollers, thus vastly facilitating the transition from the STM32WB to the STM32WBA. ST also improved the radio to reach +10 dBm in output power, making it the first wireless MCU of its kind to provide such a robust link.

A new architectural foundation: a Cortex-M33

The STM32WBA represents a new approach to our wireless MCUs. The original STM32WB had a Cortex-M0+ running the radio stack and a Cortex-M4 for the application. The STM32WBA uses a single Cortex-M33 with a score of 407 in CoreMark, which is twice the performance of the previous generation. Beyond computational improvements, the new architecture simplifies developments and provides new features. For instance, the STM32WBA offers an interface for touch sensors that could serve industrial applications and one advanced timer for motor control.

Similarly, the new device supports a background autonomous mode (BAM). It enables peripherals to remain functional and use direct memory access (DMA) without waking the CPU. Engineers can perform sensor monitoring operations using BAM through I2C, SPI, or UART, increasing the usefulness while keeping the power consumption low. Additionally, the STM32WBA supports low-power STOP0, STOP1, and standby modes that developers find in the STM32U5, but ST tweaked them to go rapidly from a running mode with connectivity to Standby mode with the radio context written in the memory. Standby mode with RTC only needs 200 nA, and the Stop mode with 64 KB of RAM demands 16.3 µA.

A more robust signal

The radio also received significant optimizations as it’s the first in this kind of product to reach +10 dBm in output power, thus offering a more robust wireless link. The new performance can make a significant difference when connecting to a device despite an obstruction diminishing the signal. The STM32WBA also supports important features like long-range transmissions, a high-speed connection of up to 2 Mbps, and advertising extensions to optimize communication management. Moreover, while the STM32WBA52 relies on LDOs, future models in the series will also feature a switched-mode power supply. Similarly, while the STM32WBA52 only supports Bluetooth Low Energy, future devices will support Matter, OpenThread, and Zigbee.

A new security paradigm

The presence of a Cortex-M33 in STM32WBA devices also means that, for the first time, our wireless microcontrollers can help provide a SESIP Level 3 certification. Developers can use functionalities like TrustZone, Trusted Firmware, Secure Boot, Secure Debug, and more to bolster their security and protect sensitive applications from the radio stack. Thanks to ST software packages and firmware, developers can more easily implement privileged and unprivileged sections to safeguard sensitive information like cloud credentials or user data.

Existing solutions within the STM32Trust initiative will help users implement these safeguards. Furthermore, because the STM32WBA takes cues from the STM32U5, developers can reuse some of the information or documentation. Nevertheless, ST will have specific content on the STM32 Wiki to address issues related to wireless stacks. The new devices will also include mechanisms protecting against physical attacks, such as anti-tamper pins, a unique hardware key, and more.

Getting started with the NUCLEO-WBA52CG

The best way to start creating a proof-of-concept is to grab the new NUCLEO-WBA52CG, a new type of Nucleo board where the microcontroller sits on a removable daughter card. The solution can help engineers more easily swap between microcontrollers, making the device more portable. By using the board, developers can determine whether their application can use ST’s basic Bluetooth stack, which helps save memory, or whether they require the full-featured version. ST will also provide bare-metal middleware and firmware using AzureRTOS. A software package using FreeRTOS will also be available on the STM32 Hotspot GitHub, which already contains a repository for a web interface supporting the new device.

Read the full article at https://blog.st.com/stm32wba/

The post STM32WBA, 1st wireless Cortex-M33 for more powerful and more secure Bluetooth applications #STM32InnovationLive appeared first on ELE Times.

Onsemi Designs DC SiC Power Modules to Fast Track EV Charging

AAC - Wed, 01/17/2024 - 02:00
One critical barrier to EV adoption is their notoriously long charge times. Onsemi aims to address one piece of the puzzle with new power-integrated modules.

Bosch Claims Smallest MEMS Accelerometers for Hearables and Wearables

AAC - Tue, 01/16/2024 - 20:00
The new MEMS devices have a 76% smaller footprint than Bosch’s current generation of acceleration sensors.

IQE appoints chief financial officer

Semiconductor today - Tue, 01/16/2024 - 15:59
Epiwafer and substrate maker IQE plc of Cardiff, Wales, UK has appointed Jutta Meier to its board as chief financial officer, effective from 22 January...

IQE appoints VP of government affairs

Semiconductor today - Tue, 01/16/2024 - 15:54
Epiwafer and substrate maker IQE plc of Cardiff, Wales, UK has appointed Rina Pal-Goetzen as VP of government affairs...

Bistable switch made on comparators

EDN Network - Tue, 01/16/2024 - 15:30

The bistable load switch is built from two comparators. The load is switched on and off sequentially by applying a voltage at one of two different levels to the input of the device.

Earlier in [1], a new class of bistable elements was proposed: two-threshold thyristors, which switch from one state to another when nonzero control voltages of two levels (“High” or “Low”) are applied to the input of the thyristor.

Wow the engineering world with your unique design: Design Ideas Submission Guide

The bistable load switch, Figure 1, is designed to switch the load when a Uon or Uoff voltage is applied to the input of the device. The device contains two comparators, U1.1 and U1.2, as well as an output transistor Q1 (for example, a 2N7000).

Figure 1 A bistable switch controlled by input voltage levels, with separately adjustable load on and off thresholds.

The device works as follows. Its input (the inverting inputs of comparators U1.1 and U1.2) is briefly supplied with a voltage of a certain level (Uon or Uoff). The noninverting (reference) inputs of the comparators are supplied with two different voltage levels from potentiometers R2 and R3. When the switching voltage Uon (Uon < Uoff) is applied to the input of the device, comparator U1.1 switches. At its output Uout1, the voltage switches from a logic-high to a logic-low level. The LED indicates the enabled state of the device. At the drain of transistor Q1 (Uout2), the voltage instead changes from logic low to logic high. Through resistor R10, this high-level voltage feeds back to the inverting input of comparator U1.1, latching its state.

To return the device to its initial state (disconnecting the load), a voltage of a higher level (Uoff) is applied to the input, which switches the second comparator, U1.2. When this comparator switches, the voltage at the inverting input of comparator U1.1 drops to zero and the circuit returns to its original state.
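The latching behavior described above can be captured in a short behavioral model. The threshold values below are illustrative assumptions, since the actual levels are set by potentiometers R2 and R3:

```python
# Behavioral sketch of the two-comparator bistable switch described above.
# UTHR1/UTHR2 values are illustrative assumptions, not taken from the schematic.

UTHR1 = 1.0   # lower threshold (R2 setting): an input pulse below this latches ON
UTHR2 = 3.0   # upper threshold (R3 setting): an input pulse above this latches OFF

class BistableSwitch:
    """Latches ON when Vin < UTHR1, OFF when Vin > UTHR2; holds state otherwise."""
    def __init__(self):
        self.load_on = False

    def apply(self, vin):
        if vin < UTHR1:        # Uon pulse: U1.1 trips, feedback via R10 latches ON
            self.load_on = True
        elif vin > UTHR2:      # Uoff pulse: U1.2 trips, the latch is released
            self.load_on = False
        return self.load_on    # state is held between pulses
```

A brief Uon pulse turns the load on; intermediate input levels leave the state unchanged; a brief Uoff pulse turns it off again, mirroring the circuit’s operation.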

Such a device, with some simplification and modification, can be placed in a DIP6 package, Figure 2. The output switches from conditional level 0 to 1 when a low-level voltage Uon is briefly applied to the input; it returns to the initial state when a high-level voltage Uoff is applied.

A typical circuit for connecting such a chip is shown in Figure 3. External elements R1 and R2 set the on and off switching thresholds (Uthr1 and Uthr2).

Figure 2 A bistable switch, as well as a possible integrated circuit based on it.

Figure 3 Variants of a bistable switch chip with external threshold-control circuits or internal fixed ones, including the possibility of a DIP4 package for the unregulated version with fixed switching thresholds.

If a resistive divider R1–R3 is used to set fixed on and off levels Uthr2 and Uthr1, the bistable switch can be placed in a DIP4 package, Figure 3, with only power, input, and output terminals. To obtain switching levels that do not depend on the supply voltage, a simple voltage regulator (Zener diode) built into the microcircuit can power the resistive divider R1–R3.
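The divider’s two thresholds follow from simple ratios. A minimal sketch, assuming illustrative resistor values and a Zener reference voltage (the article gives no component values):

```python
# Threshold levels from a three-resistor divider R1-R2-R3 fed by a Zener
# reference. All component values below are illustrative assumptions.

V_REF = 5.1                      # V, Zener-stabilized divider supply (assumed)
R1, R2, R3 = 10e3, 22e3, 10e3    # ohms, top / middle / bottom (assumed)

r_total = R1 + R2 + R3
u_thr1 = V_REF * R3 / r_total           # lower threshold (turn-on level)
u_thr2 = V_REF * (R2 + R3) / r_total    # upper threshold (turn-off level)

print(f"Uthr1 = {u_thr1:.2f} V, Uthr2 = {u_thr2:.2f} V")
```

With these assumed values the thresholds land near 1.2 V and 3.9 V; both scale directly with the reference, which is why regulating the divider supply makes them independent of the main supply voltage.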

Michael A. Shustov is a doctor of technical sciences, candidate of chemical sciences and the author of over 800 printed works in the field of electronics, chemistry, physics, geology, medicine, and history.


References

  1. Shustov, M.A., “Two-threshold ON/OFF thyristors, switchable by the input signal level,” International Journal of Circuits and Electronics, vol. 6, pp. 60–63, December 9, 2021. https://www.iaras.org/iaras/home/computer-science-communications/caijce/two-threshold-on-off-thyristors-switchable-by-the-input-signal-level

The post Bistable switch made on comparators appeared first on EDN.

IQE expecting full-year 2023 revenue of £115m after 20% growth from first half to second half

Semiconductor today - Tue, 01/16/2024 - 14:51
In a pre-close trading update for full-year 2023, epiwafer and substrate maker IQE plc of Cardiff, Wales, UK says that it expects revenue to be at least £115m. This is down 31% on 2022’s £167.5m. However, it reflects a more than 20% increase from first-half to second-half 2023, in line with previously issued guidance. IQE expects this to result in an adjusted EBITDA (earnings before interest, tax, depreciation and amortization) of at least £3m and a net debt position of about £3m...

Latest Littelfuse Sub-miniature 12.7 mm Reed Switches Provide High-Reliability, Longer Life Cycles

ELE Times - Tue, 01/16/2024 - 14:14

Ideal limit-sensing solution for appliances and automatic testing equipment (ATE) applications

Littelfuse, Inc., an industrial technology manufacturing company empowering a sustainable, connected, and safer world, is excited to announce the availability of the MATE-12B Reed Switch Series. These sub-miniature reed switches provide longer life and higher reliability than existing 12.7 mm reed switches, achieving millions of cycles. Their longevity exceeds the requirements of automatic test equipment and appliance applications.

The MATE-12B is a normally open switch with a 12.7 mm x 1.8 mm (0.276” x 0.071”) glass envelope that can switch up to 200 Vdc at 10 W. It provides a high insulation resistance of 10¹² ohms (minimum) and a low contact resistance of less than 100 milliohms.

The MATE-12B Reed Switch Series is ideally suited for markets that require long-life cycles and high reliability, such as:

  • Automatic Test Equipment (ATE) for power semiconductor testing,
  • Appliances, and
  • Other limit switching applications.

The MATE-12B key benefits and differentiators include:

  • High reliability and prolonged lifecycle: Extensively tested and proven to achieve millions of operating cycles, a significant advantage over currently available 12.7 mm reed switches.
  • Design flexibility: The sub-miniature magnet size and hermetically sealed glass envelope enable use in more challenging environments and applications.
  • PCB space savings: Extremely compact size and light weight help reduce the end product’s size.
  • Suitable for harsh environments: Hermetically sealed and meets cULus requirements.

“The MATE-12B is an extension of our existing product line, which helps our end customers with significantly higher efficiency and longer lifetime,” said Wayne Wang, Global Product Manager at Littelfuse. “The minimal risk of failure is especially critical to limit switching applications such as in appliances and power semiconductor automatic test equipment.”

Availability

The MATE-12B Reed Switch Series is available in bulk quantities of 1000 pieces. Place sample requests through authorized Littelfuse distributors worldwide. For a listing of Littelfuse distributors, please visit Littelfuse.com.

The post Latest Littelfuse Sub-miniature 12.7 mm Reed Switches Provide High-Reliability, Longer Life Cycles appeared first on ELE Times.

SemiLEDs’ quarterly revenue rebounds to $1.65m

Semiconductor today - Tue, 01/16/2024 - 12:54
For its fiscal first-quarter 2024 (to end-November 2023), LED chip and component maker SemiLEDs Corp of Hsinchu, Taiwan has reported revenue of $1.65m, down slightly on $1.695m a year ago but up on $1.453m last quarter...

Building Blocks for IIoT Edge Nodes

ELE Times - Tue, 01/16/2024 - 12:03

Courtesy: Mouser Electronics

Early-stage Internet of Things (IoT) concepts defined sensors that linked directly to the cloud. However, as vertical industries started seriously evaluating IoT architectures to extract greater business value, it became clear that this one-size-fits-all approach was impractical for various reasons.

Consider just a few of the implications of a cloud-first model in industrial IoT (IIoT) deployments:

  • Data and device security: The potential of insecure endpoints communicating directly with the cloud meant hackers could exploit vulnerabilities to access sensitive industrial networks.
  • Runaway networking costs: Sensor-to-server data transmissions (especially over public networks) can be so costly they prohibit scaling to the thousands of nodes required by many IIoT deployments. Add large volumes of measurement and status data generated by industrial sensors, and network congestion, packet delays, and inefficient bandwidth usage abound.
  • Power consumption of always-on sensor nodes: Remote sensor nodes require a continuous connection to the network and an energy source. This is particularly challenging in remote settings like mining and agriculture, where limited access can make replacing batteries or troubleshooting networks cost thousands of dollars.

New classes of secure hardware, networking, and battery technology emerged from these challenges to redefine how IoT systems were architected and industrial devices were designed. The technology revolution began by combining security and energy efficiency in edge-centric silicon.

The Low-Power Foundations of IoT Processors

Introduced in 2009, as real-world IoT requirements were being defined, Arm Cortex-M0 CPUs offered the ability to operate solely on 16-bit Thumb instructions rather than the 32-bit instructions required by their predecessors.

Thumb instructions’ compact encoding enables code-density improvements of roughly 30 percent on processors like the Cortex-M0, which has a cascading effect on memory usage (lower), die size (smaller), power consumption (less), and ultimately cost (reduced). Fast-forward to today, and devices based on the Arm Cortex-M33 architecture combine Thumb instructions with built-in hardware security via features like TrustZone.

TrustZone delivers hardware-based data and device security through a secure root of trust (RoT). When combined with the energy efficiency of Cortex-M33 CPU cores, TrustZone creates secure, battery-powered IoT devices that can operate for extended periods in remote settings. It also doesn’t detract from CPU performance, as Cortex-M33 processors deliver an impressive 1.5 DMIPS/MHz and 4.09 CoreMark/MHz for handling complex tasks at the edge to reduce reliance on centralized cloud processing.

From the beginning of IoT rollouts through today, Cortex-M-class chips continue to deliver possibilities for various IoT use cases.

The Rise of LPWAN

The success of energy-efficient IIoT edge nodes is not only a result of their host processor but also how they connect. In the late 2000s, the advent of 4G technology signaled the decline of earlier networks, highlighting the need for a new low-power, wide-area networking (LPWAN) technology that facilitates long-range communication for IoT devices.

LPWAN technologies such as LoRa have emerged as an appealing method for linking battery-powered IoT devices to networks. Its long-range capabilities and low energy consumption make it an ideal choice for IIoT applications like asset tracking, environmental monitoring, industrial automation, smart agriculture, and smart cities.

Today’s LoRa transceiver modules facilitate LPWAN communications over distances of up to 15 km while consuming approximately 40 mA during transmission. Typically, LoRa modules interface with host processors like Cortex-M-class devices through a UART and communicate via ASCII commands, streamlining integration with IoT devices.
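That UART/ASCII integration style can be sketched without hardware. The command strings and OK/ERROR replies below are hypothetical, since each module defines its own command set:

```python
# Minimal sketch of framing ASCII commands for a UART-attached LoRa module.
# The command text and "OK"/"ERROR" reply format are hypothetical; consult
# your module's datasheet for its actual command set.

def frame_command(cmd: str) -> bytes:
    """Terminate an ASCII command with CR+LF, as most serial modules expect."""
    return (cmd.strip() + "\r\n").encode("ascii")

def parse_response(raw: bytes) -> tuple:
    """Return (success, payload) from a CRLF-terminated module reply."""
    text = raw.decode("ascii", errors="replace").strip()
    return (not text.upper().startswith("ERROR"), text)

# With a real module, frame_command(...) output would be written to a serial
# port (e.g., via pyserial) and the raw reply fed to parse_response(...).
print(frame_command("AT+SEND=1,hello"))
```

Keeping framing and parsing in small pure functions like these makes the protocol layer testable on a desktop before the host MCU and radio are connected.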

These transceivers pair with sub-GHz antennas that meet the frequency requirements of LPWAN networks, many of which are available in compact SMD form factors that fit the space constraints of edge devices. In addition to supporting protocols like LoRaWAN, some of these antennas also support short-range wireless technologies like Wi-Fi, Zigbee, and Bluetooth to enable the creation of backhaul-enabled wireless sensor networks.

Lithium Battery Technology Advances for IoT Edge Nodes

Thanks to the availability of secure, energy-efficient computing technology and LPWAN networking, the idea of battery-powered IIoT sensor nodes became a reality. The IIoT industry embraced the concept of battery-powered sensors, and demand for dependable, high-density power sources increased.

Lithium-ion batteries emerged as the preferred choice for powering these sensors thanks to consistent power density and reliability improvements. These advancements yielded the ability for IoT devices to operate for extended periods on a single battery charge—a critical requirement for many agriculture, mining, and industrial applications. Meanwhile, the improved reliability of lithium-ion battery technology led to reductions in maintenance and operational expenses while ensuring uninterrupted data collection and communication.

A Qoitech study on the compatibility of LoRaWAN technology and coin cell batteries highlighted the pairing’s potential in enduring, low-power wireless IoT sensor nodes. In the study, researchers tested the performance of coin cell batteries using a battery-profiling tool. The tool measured a 40 mA (peak current) LoRaWAN power profile with an exit condition that triggered when the voltage dropped below 0.6 V or 2 V. The study provides insightful results, revealing disparities in coin cell performance among manufacturers that are particularly evident at higher current levels. It also proved that CR2032 and CR2450 cells are viable options for powering LoRaWAN devices.
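A back-of-envelope calculation shows why coin cells suffice for duty-cycled LoRaWAN nodes. In the sketch below, only the ~40 mA transmit peak comes from the study; the sleep current, timing, and CR2450 capacity are assumed, typical values:

```python
# Rough battery-life estimate for a duty-cycled LoRaWAN node on a coin cell.
# Only the 40 mA transmit peak comes from the study cited above; the sleep
# current, timing, and CR2450 capacity are illustrative assumptions.

I_TX_MA    = 40.0    # mA during transmission (per the study's profile)
T_TX_S     = 1.5     # s of airtime per uplink (assumed)
I_SLEEP_UA = 5.0     # uA sleep current between uplinks (assumed)
PERIOD_S   = 600.0   # one uplink every 10 minutes (assumed)
CAP_MAH    = 620.0   # nominal CR2450 capacity, mAh (typical datasheet value)

# Time-weighted average current over one uplink period
avg_ma = (I_TX_MA * T_TX_S
          + (I_SLEEP_UA / 1000.0) * (PERIOD_S - T_TX_S)) / PERIOD_S
lifetime_h = CAP_MAH / avg_ma

print(f"average current: {avg_ma:.3f} mA -> ~{lifetime_h / 24:.0f} days")
```

With these assumptions the average draw is on the order of 0.1 mA, putting the estimated lifetime in the range of months on a single cell; real-world figures depend heavily on sleep current, spreading factor, and temperature.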

This harmony between LPWAN technology and high-density lithium-ion batteries has helped propel the IIoT landscape, enabling new energy-efficient wireless sensor nodes. Lithium coin cell batteries have emerged as the go-to power source for these devices due to their compact size, impressive energy density, and extended lifespan. The availability of diverse lithium coin cell battery options—available in various chemistries and configurations tailored to specific IoT applications—gives developers freedom of choice.

Mouser Electronics offers a comprehensive selection of coin cell batteries, enabling developers to select the most suitable power source for their IoT projects. Additionally, many tools are available to help developers evaluate battery performance under practical conditions. These can ensure IoT sensor nodes operate reliably over long lifecycle deployments and help identify the most efficient and cost-effective power solutions for a given application.

Future of Technology for the Industrial IoT

Recent IIoT technology advancements have not been limited to the edge; they’ve also extended to the control layer. These improvements have led to multicore systems-on-chips (SoCs) featuring multiple CPU or graphics cores, integrated neural network accelerators, and dedicated IP blocks for executing analog, security, and other workloads.

These high-performance chipsets almost always contain multiple high-speed I/O interfaces that streamline system integration in a number of deployment contexts. They are also candidates for embedded virtualization using technologies like hypervisors and single-root I/O virtualization (SR-IOV) that partition on-chip cores, memory, and I/O resources. As a result, multiple mixed-criticality workloads can run and execute simultaneously on a single physical processor, maximizing resource utilization and reducing overall size, weight, power consumption, and cost versus multiprocessor solutions.

Elsewhere, networking standards like Ethernet Time-Sensitive Networking (TSN) are rising. TSN introduces deterministic communication capabilities from the control layer to sensor nodes and enterprise systems for fine-grained timing control, precision device management, and task-oriented workflows like virtual programmable logic controllers (vPLCs). The convergence of these technologies is expanding functionality as IIoT nodes continue to evolve.

The evolution of IIoT technology building blocks started at the far edge and continues today at the control layer. For instance, the emergence of multicore SoCs with integrated accelerators and the adoption of networking standards like Ethernet TSN have paved the way for improved device management and the implementation of containerized enterprise applications.

The post Building Blocks for IIoT Edge Nodes appeared first on ELE Times.

Littelfuse Unveils Advanced Overtemperature Detection Solution for Electric Vehicle Li-ion Battery Packs

ELE Times - Tue, 01/16/2024 - 09:22

TTape revolutionizes the EV industry by delivering a unique capability to detect overtemperature at every Li-ion cell, offering superior safety and battery life enhancement.

Littelfuse, Inc., an industrial technology manufacturing company empowering a sustainable, connected, and safer world, is excited to introduce TTape™, a groundbreaking overtemperature detection platform designed to transform the management of Li-ion battery systems. With its innovative features and unparalleled benefits, TTape helps vehicle systems manage premature cell aging effectively while reducing the risks associated with thermal runaway incidents.

TTape is ideally suited for a wide range of applications, including automotive EV/HEVs, commercial vehicles, and Energy Storage Systems (ESS). Its distributed temperature monitoring capabilities enable superior detection of localized cell overheating, thereby improving battery life and enhancing the safety of battery installations.

TTape’s key benefits and differentiators include:

  • Premature Cell Aging Management: TTape aids vehicle systems in managing premature cell aging, significantly reducing the risks associated with thermal runaway.
  • Extended Battery Pack Life: TTape ensures that the battery pack remains serviceable for an extended period by initiating temperature management at an earlier stage.
  • Efficient Multi-cell Monitoring: With a single TTape device, multiple cells can be monitored, thus alerting the BMS sooner in case of overtemperature scenarios.
  • Ultra-fast Response: With a response time of less than one second, TTape guarantees quicker alerts, signaling the potential onset of thermal runaway conditions.
  • Seamless Integration: Calibration isn’t necessary. TTape can easily integrate with existing BMS, making it a go-to solution for many battery applications.

Moreover, the extremely thin design of TTape makes it ideal for conformal installations. With a single MCU input, its distributed temperature monitoring capability drastically improves the detection of localized cell overheating. This approach enables efficient cooling measures to prolong battery life and significantly heightens the safety standards of battery installations.

“Distinguishing itself from NTCs, TTape is a stellar addition to the Littelfuse product family. The profound advantage of localized cell overheating detection ensures quicker alerts to the BMS compared to traditional NTC setups,” explained Tong Kiang Poo, Global Product Manager at Littelfuse. “The TTape Platform is a distributed temperature monitoring device for battery packs that helps to improve the detection of localized cell overheating. With no calibration or temperature lookup tables required, and only one MCU input needed, it integrates seamlessly with current BMS solutions alongside NTCs, delivering an enhanced detection of cell overheating.”

This groundbreaking product builds upon the Littelfuse legacy of innovation. It leverages the company’s rich research, design, and development expertise in PPTCs, bringing forth a temperature monitoring solution that the industry eagerly awaits.

TTape promises to be a game-changer for the Li-ion battery pack market, emphasizing safety and efficiency. As the industry moves rapidly towards more sustainable and safe energy solutions, Littelfuse products like TTape are a testament to the company’s commitment to innovation and excellence.

The post Littelfuse Unveils Advanced Overtemperature Detection Solution for Electric Vehicle Li-ion Battery Packs appeared first on ELE Times.
