“Half & Half” piezo drive algorithm tames overshoot and ringing
Piezoelectric actuators (benders, stacks, chips, etc.) are excellent fast and precise means for generation and control of micro, nano, and even atomic scale movement on millisecond and faster timescales. Unfortunately, they are also excellent high-Q resonators. Figure 1 shows what you can expect if you’re in a hurry to move a piezo and simply hit it with a unit step. Result: a massive (nearly 100%) overshoot with prolonged follow-on ringing.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1 Typical piezo actuator response to squarewave drive with ringing and ~100% overshoot.
Don’t worry. It’ll get there. Eventually. But don’t hold your breath. Clearly something has to be done to modify the drive waveshape if we’re at all interested in speed and settling time. Many possibilities exist, but Figure 2 illustrates a remarkably simple yet effective trick that actually takes advantage of the piezo’s natural 2x overshoot: Half and Half step drive.
Figure 2 Half & Half drive step with half amplitude and half resonance period kills overshoot and ringing.
The surprisingly simple trick is to split the drive step into an initial step with half the desired movement amplitude and a duration of exactly half the piezo resonance period. Hence: “Half & Half”(H&H) drive. The half-step is then followed by application of the full step amplitude to hold the actuator in its new position.
The physics underlying H&H relies on the kinetic energy imparted to the actuator’s mass during the first quarter cycle being just sufficient to overcome the actuator’s elasticity during the second quarter, thus bringing the actuator to a graceful stop at the half cycle’s end. The drive voltage is then stepped to the full value, holding the actuator stationary at the final position.
Figure 3 shows how H&H would work for a sequence of arbitrary piezo moves.
Figure 3 Example of three arbitrary H&H moves: (T2 – T1) = (T4 – T3) = (T6 – T5) = ½ piezo resonance period.
If implemented in software, the H&H algorithm would be simplicity itself and look something like this:
Let DAC = current contents of DAC output register
N = new content for DAC required to produce desired piezo motion
Step 1: replace DAC = (DAC + N) / 2
Step 2: wait one piezo resonance half-period
Step 3: replace DAC = N
Done
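As a concrete sketch, the three steps above might look like this in Python. The `write_dac()` helper and the half-period constant are hypothetical stand-ins for your own DAC driver and a value measured on your actuator; neither comes from the article:

```python
import time

# Hypothetical register-access helper; replace with your DAC driver.
def write_dac(value):
    print(f"DAC <- {value}")

# Half the piezo resonance period, in seconds (measure for your actuator).
HALF_PERIOD_S = 0.0005

def hh_move(dac_now, n_target):
    """Half & Half move: half-amplitude step held for half a resonance
    period, then the full step to park the actuator at the target."""
    half = (dac_now + n_target) // 2   # Step 1: halfway point
    write_dac(half)
    time.sleep(HALF_PERIOD_S)          # Step 2: wait one half-period
    write_dac(n_target)                # Step 3: full amplitude
    return n_target

# Example: three arbitrary moves, as in Figure 3
dac = 0
for target in (3000, 1200, 4095):
    dac = hh_move(dac, target)
```

Each move costs only half a resonance period of settling time, which is the whole appeal of the technique.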
If implemented in analog circuitry, H&H might look like Figure 4. Here’s how it works.
Figure 4 The analog implementation of H&H.
The C1, R1, C2, R2||R3 voltage divider performs the half-amplitude division function of the H&H algorithm, while dual-polarity comparators U2 detect the leading edge of each voltage step. Step detection triggers U3a, which is adjusted via the TUNE pot to have a timeout equal to half the piezo resonance period, giving us the other “half”.
U3a’s timeout triggers U3b, which turns on U1, outputting the full step amplitude and completing the move. The older metal-gate CMOS 4066 is used for its superior low-leakage Roff spec, while the parallel connection of all four of its internal switches yields an adequately low Ron.
U4 is just a place keeper for a suitable piezo drive amplifier to translate from the 5-V logic of the H&H circuitry to piezo drive voltage and power levels.
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Piezoelectric Technology: A Primer
- Reducing MLCCs’ piezoelectric effects and audible noise
- Increase piezoelectric transducer acoustic output with a simple circuit
- Reduce acoustic noise from capacitors
The post “Half & Half” piezo drive algorithm tames overshoot and ringing appeared first on EDN.
NXP software bolsters edge AI development
NXP has expanded its eIQ AI and ML software development environment with two new tools to simplify AI deployment at the edge. The software supports low-latency, energy-efficient, and privacy-focused AI, enabling ML algorithms to run on a range of edge processors, from small MCUs to advanced application processors.
The eIQ Time Series Studio introduces an automated machine learning workflow for efficiently developing and deploying time-series ML models on MCU-class devices, including NXP’s MCX series of MCUs and i.MX RT crossover MCUs. It supports various input signals—voltage, current, temperature, vibration, pressure, sound, and time of flight—as well as multi-modal sensor fusion.
GenAI Flow provides the building blocks for creating Large Language Models (LLMs) that power generative AI applications. With Retrieval Augmented Generation (RAG), it securely fine-tunes models on domain-specific knowledge and private data without exposing sensitive information to the model or processor providers. By linking multiple modules in a single flow, users can customize LLMs for specific tasks and optimize them for edge deployment on MPUs like the i.MX 95 application processor.
To learn more and access the newest version of the eIQ machine learning development environment, click on the product page link below.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
RS-485 transceivers operate in harsh environments
Half-duplex RS-485 transceivers from MaxLinear offer extended ESD and EFT protection to ensure reliable communication in industrial environments. The MxL8312x and MxL8321x families include nine product SKUs with three speed options—250 kbps, 500 kbps, and 50 Mbps—and three package variants, including small 3×3-mm types.
These families expand MaxLinear’s portfolio with mid- and high-tier products alongside the MxL8310x and MxL8311x lineup of RS-485 transceivers. Smaller form-factor packages, higher speeds, and enhanced system-level ESD and EFT protection make them well-suited for delivering high performance under harsh conditions. Key applications include factory automation, industrial motor drives, robotics, and building automation.
The transceivers’ bus pins tolerate up to ±4 kV of electrical fast transients (IEC 61000-4-4) and up to ±12 kV of electrostatic discharge (IEC 61000-4-2). A supply range of 3.3 V to 5 V supports reliable operation in systems with potential power drops, while an extended common-mode range of up to ±15 V ensures stable communication over long distances or in applications with significant ground-plane shifts between devices. MxL83214 devices are capable of supporting 50-Mbps data rates with strong pulse symmetry and low propagation delays.
In addition to conventional 4.9×3.9-mm NSOIC-8 packages, the transceivers are offered in 3×3-mm MSOP-8 and VSON-8 packages. The MxL83121, MxL83122, MxL83211, MxL83212, and MxL83214 are available now.
IGBT and MOSFET drivers manage high peak current
Vishay’s VOFD341A and VOFD343A IGBT and MOSFET drivers, available in stretched SO-6 packages, provide peak output current of up to 4 A. Their high peak output current allows for faster switching by eliminating the need for an additional driver stage. Each device contains an AlGaAs LED optically coupled to an integrated circuit with a power output stage, specifically designed for driving power IGBTs and MOSFETs in motor control inverters.
The drivers support an operating voltage range of 15 V to 30 V and feature an extended temperature range of -40°C to +125°C, ensuring a sufficient safety margin for more compact designs. They also have a maximum propagation delay of 200 ns, which minimizes switching losses and facilitates more precise PWM regulation.
Additionally, the drivers’ high noise immunity of 50 kV/µs helps prevent failures in fast-switching power stages. Their stretched SO-6 package provides a maximum rated withstanding isolation voltage of 5000 V RMS.
Samples and production quantities of the 3-A VOFD341A and 4-A VOFD343A are available now, with lead times of six weeks.
Infineon shrinks silicon wafer thickness
With a thickness of only 20 µm and a diameter of 300 mm, Infineon’s silicon power wafers are the thinnest in the industry. These ultra-thin wafers are a quarter the thickness of a human hair and half the thickness of typical wafers, which range from 40 µm to 60 µm. This achievement in semiconductor manufacturing technology will increase energy efficiency, power density, and reliability in power conversion for applications such as AI data centers, motor control, and computing.
Infineon reports that reducing the thickness of a wafer by half lowers the substrate resistance by 50%, resulting in over a 15% reduction in power loss in power systems compared to conventional silicon wafers. For high-end AI server applications, the ultra-thin wafer technology enhances vertical power delivery designs based on vertical trench MOSFETs, enabling a close connection to the AI chip processor. This minimizes power loss and improves overall efficiency.
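To see why halving the substrate resistance yields a smaller-than-50% drop in total conduction loss, here is a back-of-the-envelope sketch. All numbers are illustrative placeholders (not Infineon datasheet values), splitting on-resistance into a substrate term and a remainder that thinning does not change:

```python
# Illustrative numbers only -- not Infineon datasheet values.
r_substrate = 0.4e-3   # ohms, substrate contribution at conventional thickness
r_rest      = 1.0e-3   # ohms, channel + metallization (unchanged by thinning)
i_load      = 30.0     # amperes

def conduction_loss(r_sub, r_other, i):
    return (r_sub + r_other) * i ** 2   # P = I^2 * R

p_before = conduction_loss(r_substrate, r_rest, i_load)
p_after  = conduction_loss(r_substrate / 2, r_rest, i_load)  # thinned wafer halves R_sub
print(f"loss reduction: {100 * (p_before - p_after) / p_before:.1f}%")
```

With these particular numbers the loss drops by roughly 14%; the exact figure depends on how large a share of total on-resistance the substrate contributes.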
Infineon’s wafer technology has been qualified and integrated into its smart power stages, which are now being delivered to initial customers. As the ramp-up of ultra-thin wafer technology progresses, Infineon anticipates that it will replace existing conventional wafer technology for low-voltage power converters within the next three to four years.
Infineon will present the first ultra-thin silicon wafer publicly at electronica 2024.
Stretchable printed circuit enhances medical devices
Murata’s stretchable printed circuit (SPC) offers both flexibility and the ability to stretch or deform without losing functionality. It can be used in wearable therapeutic devices and vital monitoring tools, providing improved accuracy, durability, and patient comfort compared to current devices.
Many existing devices are too rigid for certain applications, leading to patient discomfort, poor contact with surfaces, and unstable data acquisition. The SPC’s flexibility, stretchability, and adaptability support multi-sensing capabilities and a wide range of user requirements. Its soft material is gentle on the skin, making it well-suited for disposable EEG, EMG, and ECG electrodes that meet ANSI/AAMI EC12 standards. The stretchable design allows a single device to fit various body areas, like elbows and knees, and accommodate patients of different sizes.
SPC technology ensures seamless integration and optimal performance through telescopic component mounting and hybrid bonding between substrates. Its shield layer effectively blocks electromagnetic noise, providing reliable signal-path protection. The substrate construction also enhances moisture resistance and supports sustained high-voltage operation.
Murata’s SPC is customizable based on application requirements.
PIC Microcontrollers with Integrated FPGA Features in TME
The new PIC16F131xx microcontrollers from Microchip, available in TME’s offer, are ideal for the evolving and miniaturizing electronic equipment market, offering efficient power management and predictable response times for controllers.
Key features include core independent peripherals (CIPs) like the configurable logic block (CLB), which allows for predictable circuit behavior without burdening the CPU, thereby saving energy. These microcontrollers, based on the classic 8-bit Harvard architecture, come in various packages (DIP, DFN, SSOP, TSSOP, SOIC, and VQFN) with 6 to 18 I/O pins, and support a wide voltage range (1.8 V to 5.5 V DC). They operate at a 32-MHz clock frequency, with instruction execution times as low as 125 ns, and offer 256 to 1024 bytes of SRAM and up to 14 kB of flash program memory.
The microcontrollers are equipped with an array of peripherals, including PWM generators, counters/timers, EUSART serial bus controllers, and MSSP modules for I2C or SPI operation. They also feature configurable comparators, an 8-bit DAC, and a 10-bit ADC with hardware processing capabilities (ADCC).
The core independent peripherals (CIPs) allow the microcontrollers to handle tasks like sensor communication without using the CPU, thus enhancing efficiency and simplifying programming. The CLB technology, a highlight of the PIC16F131xx series, uses basic logic gates configurable by the designer, facilitating functional safety and rapid response times.
The Curiosity Nano development kit for the PIC16F131xx series offers a convenient platform for exploring the microcontrollers’ capabilities, featuring an integrated debugger, programming device, and access to microcontroller pins. The EV06M52A kit, equipped with the PIC16F13145 microcontroller, includes a USB-C port for power and programming, an LDO MIC5353 regulator, a green LED for power and programming status, a yellow LED, and a button for user interaction.
Additionally, adapters like the AC164162 extend the functionality of the Curiosity Nano boards, offering compatibility with mikroBUS standard connectors and an integrated charging system for lithium-ion and lithium-polymer cells.
The new microcontroller series from Microchip offers efficient power management, predictable response times, and innovative features like core independent peripherals (CIPs) and configurable logic blocks (CLB). These microcontrollers, ideal for modern embedded systems, come in various packages and support a wide voltage range, enhancing their versatility and performance. The Curiosity Nano development kit and its adapters further facilitate easy development and prototyping.
These products are available in TME’s offer, providing a comprehensive solution for designers and developers looking to leverage the latest advancements in microcontroller technology.
Text prepared by Transfer Multisort Elektronik Sp. z o.o.
Understanding and combating silent data corruption
The surge in memory-hungry artificial intelligence (AI) and machine learning (ML) applications has ushered in a new wave of accelerated computing demand. As new design parameters ramp up processing needs, more resources are being packed into single units, resulting in complex processes, overburdened systems, and higher chances of anomalies. In addition, the demands of these complex chips present challenges in meeting reliability, availability, and serviceability (RAS) requirements.
One major, yet often overlooked, RAS concern and root cause of computing errors is silent data corruption (SDC). Unlike software-related issues, which typically trigger alerts and fail-safe mechanisms, SDC issues in hardware can go undetected. For instance, a compromised CPU may miscalculate data, leading to corrupt datasets that can take months to resolve and cost organizations significantly more to fix.
Figure 1 A compromised CPU may lead to corrupt datasets that can take months to resolve. Source: Synopsys
Meta Research highlights that these errors are systemic across generations of CPUs, stressing the importance of robust detection mechanisms and fault-tolerant hardware and software architectures to mitigate the impact of silent errors in large-scale data centers. Anything above zero errors is an issue given the size, speed, and reach of hyperscalers. Even a single error can result in a significant issue.
This article will explore the concept of SDC, why it continues to be a pervasive issue for designers, and what the industry can do to prevent it from impacting future chip designs.
The multifaceted hurdle
Industry leaders are often hesitant to invest in resources to address SDC because they don’t fully understand the problem. This reluctance can lead to higher costs in the long run, as organizations may face significant operational setbacks due to undetected SDC errors. Debugging these issues is costly and not scalable, often resulting in delayed product releases and disrupted production cycles.
To put this into perspective, today’s machine learning algorithms run on tens of thousands of chips, and if even one in 1,000 chips is defective, the resulting data corruption can obstruct entire datasets, leading to massive expenditures for repairs. While cost is a large factor, the hesitation to invest in SDC prevention and fixes is not the only challenge. The complexity and scale of the problem also make it difficult for decision makers to take proactive measures.
Figure 2 Defect screening rate is shown using DCDIAG test to assess a processor. Source: Intel
Chips have long production cycles, and addressing SDC can take several years before fixes are reflected in new hardware. Beyond the lengthy product lifecycles, it’s also difficult to measure the scale of SDC errors, presenting a big challenge for chipmakers. Communicating the magnitude and urgency of an issue to decision makers without solid evidence or data is a daunting task.
How to combat silent data corruption
When a customer receives a faulty chip, the chip is typically sent back to the manufacturer for replacement. However, this process is merely a remedy for the larger SDC issue. To shift from symptom mitigation to a problem-solving solution, here are some avenues the industry should consider:
- Research investments: SDC is an area the industry is aware of but lacks comprehensive understanding. We need researchers and engineers to focus on SDC despite how costly the investment will be. This involves generating and sharing extensive data for analysis, identifying anomalies, and diagnosing potential issues like time delays or data leaks. All things considered, enhanced research will help clarify and manage SDC effectively.
- Incentive models: Establishing stronger incentives with more data for manufacturers to address SDC will help tackle the growing problem. Like the cybersecurity industry, creating industry-wide standards for what constitutes a safe and secure product could help mitigate SDC risks.
- Sensor implementation: Implementing sensors in chips that alert chip designers to a potential problem is another solution to consider, similar to automotive sensors that alert the owner when tire pressure is low. A faulty chip can go one to two years without being detected, but sensors will be able to detect a problem before it’s too late.
- AI and ML toolbox: AI algorithms, an option that is still in the early stages, could flag conditions indicative of SDC, though this requires substantial data for training. Effective implementation would necessitate careful curation of datasets and intentional design of AI models to ensure accurate detection.
- Silicon lifecycle management (SLM) strategy: SLM is a process that allows chip designers to monitor, analyze, and optimize their semiconductor devices throughout their lives. Executing this strategy makes it easier for designers to track and gain actionable insights into their devices’ RAS in real time and, ultimately, to detect SDC before it’s too late.
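One classic detection idea underlying several of these avenues is redundancy plus comparison: run the same computation more than once (ideally on different cores or units) and flag any disagreement. This toy sketch is mine, not from Synopsys or Meta, and real fleet screeners are far more elaborate:

```python
# Minimal redundant-execution check: run a computation twice and compare.
# Disagreement suggests a hardware fault (e.g., silent data corruption).

def checked(fn, *args, runs=2):
    results = [fn(*args) for _ in range(runs)]
    if len(set(results)) != 1:
        raise RuntimeError(f"silent data corruption suspected: {results}")
    return results[0]

def work(x):
    return x * x + 1

print(checked(work, 12345))  # 152399026
```

In practice the redundant runs must land on different physical hardware for the comparison to catch a defective core; on a single healthy machine this check simply passes.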
Partly due to its stealthy nature, SDC has become a growing problem as the scale of computing has increased over time, and the first step to solving a problem is recognizing that a problem exists.
Now is the time for action, and we need stakeholders from all areas—academics, researchers, chip designers, manufacturers, software and hardware engineers, vendors, government and others—to collaborate and take a closer look at underlying processes. Together, we can develop solutions at every step of the chip lifecycle that effectively mitigate the lasting impacts of SDC.
Jyotika Athavale is the director for engineering architecture at Synopsys, leading quality, reliability and safety research, pathfinding, and architectures for data centers and automotive applications.
Randy Fish is the director of product line management for the Silicon Lifecycle Management (SLM) family at Synopsys.
Related Content
- Uncovering Silent Data Errors with AI
- Avoid corruption in nonvolatile memory
- A systems approach to embedded code fault detection
- Understanding the effects of power failure on flash-based SSDs
- Protecting your embedded software against memory corruption
EEVblog 1649 - Mailbag: M5stack, Beelink Mini PC, ZOYI LCR, ToughOn Battery
Sanan Semiconductor adds 1700V and 2000V devices to silicon carbide portfolio
JST appoints new chief technology officer
Latest issue of Semiconductor Today now available
III–V Epi’s CTO Richard Hogg chairing International Workshop on PCSELs
🔥 A creative works competition for Anti-Corruption Day is announced!
On the occasion of Anti-Corruption Day, observed annually on December 9, and with the aim of promoting the ideas of integrity, dignity, and anti-corruption, a university competition of creative works will be held at the Faculty of Sociology and Law (FSP) from October 1 through December 9, 2024.
Real examples of the IoT edge: A guide of NXP’s Innovation Lab
Most tradeshow experiences tend to be limited to the exhibition floor and a couple of breakout sessions, all housed within the spacious convention center floor plan. However, embedded world North America diverged from this, with a number of companies offering tours of their facilities; one of these companies was NXP. EDN was able to tour its Austin campus with a guided walkthrough of the “Smart Home Innovation Lab”. This lab is a proving ground for IoT and edge computing applications, where systems and applications engineers can take NXP microcontrollers (MCUs) and microprocessors (MPUs), as well as the company’s RF and sensor tech, and see how they might build a prototype. However, “Smart Home Innovation Lab” might be a bit of a misleading name, since many of the proof-of-concept designs fell into the medical and automotive realms, where many of the underlying technologies would naturally find use cases extending well beyond these fields.
The concept and implementation of the internet of things (IoT) has been a very well-discussed topic, especially within the smart home, where endless companies have found (and are continuing to find) innovative ways to automate home functions. However, using inference at the edge is relatively nascent, and therefore the set of use cases where existing IoT applications can be augmented or improved by AI is growing rapidly. In all of these demos, NXP engineers integrated one of the company’s i.MX crossover MCUs for local edge processing. So, the tour was geared more toward the use cases of TinyML.
The tour spanned over an hour, with Austin-based systems engineers walking the group through demonstrations that took place in a “garage”, “kitchen”, “living room”, “media room/theater”, and a “gym”. Many of the demonstrations involved modified off-the-shelf appliances, while some prototypes were co-developed in partnership with customers.
Home mode automations
Many of the solutions focused on using more unified application-level connectivity standards, such as Thread and Matter, to simplify integration, allowing smart home devices from different vendors to be used in a single smart home “fabric”. The lab contained two Matter fabrics, including a commercially available Thread border router and an NXP open Thread border router that used the i.MX 93. An NXP open-source home automation system connects many of the IoT devices and acts as a backend to the “dashboard” that appears in Figure 1.
Figure 1 NXP Innovation Lab tour with the home dashboard appearing on the screen and door lock device to the left.
Their proprietary home control system has two main “home mode” automations: one for when the user is away from home and one for when they are present. The “away from home” demo included automated functions such as dimming the lab lights, lowering the blinds, pausing any audio streaming, and locking the door. When the user is present, all of these processes are essentially reversed, automating some very basic home functions.
A touch-free lighting experience
The ultra-wideband (UWB) technology found in the recent SR150 chip includes a ranging feature that can, for instance, track a person as they walk through their home. In this demonstration, a systems engineer held a UWB-enabled mobile device, and the lights and speakers within the lab essentially followed them: the lights turned on and the radio station streamed through the speakers in the room they were physically occupying, while all lights and speakers in the rooms they had exited turned off. Other use cases include agriculture, for locating sprinklers covered in mud, and medical applications, to kick off automations or check-ups when a nurse walks into a patient’s room. This could also be extended to the automotive space, automatically opening the door that the user is approaching.
Door lock
As with many smart home appliances, smart locks are nothing new. Typically, though, these door locks are engaged remotely with an app, a comparatively manual approach. The door lock prototype used five different technologies–keypad, fingerprint, face recognition, NFC, and UWB–as well as the i.MX RT1070 MCU/MPU to lock or unlock (Figure 2). The lock used a face recognition algorithm with depth perception, while the UWB tech used an angle-of-arrival (AoA) algorithm to ascertain whether the user is approaching the lock from outside the facility or from within it. This way, the door lock can be engaged only with multiple forms of identification for building security management; or, in smart home applications, the door lock can open automatically upon approach from the outside.
Figure 2 Smart door lock using the SR150 and i.MX RT1070 with integrated keypad, fingerprint, face recognition, NFC, and UWB.
The garage: Automotive automations
The “garage” included a model EV where i.MX MCUs run the cluster and infotainment systems, demonstrating the graphics capability of the platform. There was also a system that displayed a bird’s-eye view of the vehicle, where the MCU takes the warped images from four cameras mounted at different angles, dewarps them, and stitches them together to recreate an inclusive view of the vehicle’s surroundings.
Figure 3 Garage demos showing the EV instrument cluster and infotainment running on i.MX MCUs.
The demo in Figure 4 shows a potential solution to a problem in current EVs: a large, singular human-machine interface (HMI) that both the driver and passenger are meant to use. While it does offer a clean, sleek aesthetic, the single screen can be inconvenient when one user needs it as a dashboard while the other wants it for entertainment. The dual-view display simultaneously shows two entirely different images to users sitting on the right-hand or left-hand side of the screen. This is made possible by the display’s large viewing angles, so the driver and passenger can each view their specific application on the entire screen without impacting the other’s experience. The technology involves sending two outputs interleaved together; the screen then deinterleaves them and displays each one.
This comes with the additional ability to independently control the screen using the entire space available within the HMI without impacting the application of the driver or passenger. In other words, a passenger could essentially play Tetris on the screen without disturbing the driver’s map view. This is achieved through electrodes installed under the seats, where each electrode is connected to the driver’s or passenger’s respective touch controller. Another quite obvious application for this would be gaming, removing the need for two screens or a split-screen view.
Figure 4 A single dual-view display that simultaneously offers two different views for users sitting to the left or right of it. Electrodes installed under the seats allow one user to independently control the screens via touch without impacting the application of the other user.
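The interleave/deinterleave scheme described above can be illustrated with a toy row-interleaver. The function and frame data here are hypothetical, purely to show the idea, not NXP’s actual implementation:

```python
# Toy row-interleave of two frames for a dual-view panel; the panel's
# deinterleaver then routes even rows to one viewing angle, odd rows
# to the other.
def interleave(frame_a, frame_b):
    assert len(frame_a) == len(frame_b)
    out = []
    for row_a, row_b in zip(frame_a, frame_b):
        out.append(row_a)  # even output row: driver's view
        out.append(row_b)  # odd output row: passenger's view
    return out

driver    = [[1, 1], [2, 2]]  # tiny 2x2 stand-in for the driver's frame
passenger = [[9, 9], [8, 8]]  # stand-in for the passenger's frame
print(interleave(driver, passenger))  # [[1, 1], [9, 9], [2, 2], [8, 8]]
```

The cost of the scheme is that each view gets only half the panel’s rows, which is why the wide viewing angles matter: each eye position must see its half-resolution image across the full screen.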
Digital intrusion alarm
The digital intrusion alarm prototype seen in Figure 5 could be added to a consumer access point or router to protect it from malicious traffic, such as a faulty IoT device that might jam the network. The design uses the i.MX 8M Plus, where an ML model is trained on familiar network traffic over a period of time, so that when unfamiliar traffic is observed, it is flagged as malicious and blocked from the network. The demo showcased a denial-of-service (DoS) attack being blocked. If the system detects and blocks a known device, the user can fix the issue and unblock the device so that it can reconnect to the network.
Figure 5 Digital intrusion alarm that is first trained to monitor the traffic specific to the network for a period of time before beginning the process of monitoring network traffic for any potential bad actors.
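The train-then-monitor idea can be sketched with a simple statistical baseline. The real prototype uses an ML model on the i.MX applications processor; this toy example, with made-up packet rates, only illustrates the concept of learning “normal” traffic and flagging large deviations:

```python
# Toy anomaly detector: learn a packets-per-second baseline during a
# quiet training period, then flag traffic far outside it.
import statistics

def baseline(samples):
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(rate, mean, sd, k=4.0):
    # Flag anything more than k standard deviations from the mean.
    return abs(rate - mean) > k * sd

mean, sd = baseline([100, 110, 95, 105, 90, 100])  # normal pkt/s observations
print(is_anomalous(104, mean, sd))   # typical traffic -> False
print(is_anomalous(5000, mean, sd))  # DoS-like burst  -> True
```

A real detector would look at many features beyond raw packet rate (ports, destinations, timing), which is why the prototype trains an ML model rather than a single threshold.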
Smart cooktop, coffee maker, and pantry
A smart cooktop can be seen in Figure 6. The prototype uses face detection to determine whether a chef is present, with all of this information processed locally on the device itself. In the event of unsafe conditions (e.g., water boiling over, a burner left on without cookware present, excessive smoke, or burning food), the system could potentially detect the problem and shut off. Once shut off, the home dashboard will show that the cooktop is turned off. Naturally, the entire process can be done without AI; however, AI can massively speed up the time it takes for the cooktop to recognize that a cook is present. Other sensors can be integrated to either fine-tune the performance of the system or eliminate the potential intrusion of having a camera.
Figure 6 Smart cooktop demo with facial recognition to sense if a cook is present.
The guide continued to a “smart barista” that uses facial recognition on the i.MX’s eIQ neural processing unit (NPU) to customize the type of coffee delivered from the coffee maker. A pantry classification system also uses the i.MX RT1170, along with a classification and detection model, to run inference on video streams of the pantry and inform the user of the items that are taken out of it. The system could potentially be used in the refrigerator as well, offering the user recipe or grocery-list recommendations. However, as one member of the tour noted, pantries are generally packed with goods that would not necessarily be within view of this vision-based classification system.
Current state indicator
Another device was trained, at a very basic level, on car maintenance using a GM car manual, and used a large language model (LLM) to respond to prompts such as “How do I use cruise control?” or “Why isn’t my car turning on?” The concept was presented as a potential candidate for the smart home, where smart speakers could be trained on the maintenance of various appliances, e.g., washing machines, dryers, dishwashers, coffee makers, etc., so that the user can ask questions about maintenance or use.
The natural question is: how is this concept any different from established smart speakers? Like many of the devices already described, everything is processed locally, with no interaction with the cloud to process data and present an answer. This concept can also be expanded to preventive or predictive maintenance, where appliances are outfitted with sensors that transmit status information to, for instance, show a continuous record of the service life of motor bearings within a CNC machine, or the estimated life of a drive belt in a washing machine.
An automated media room
The Innovation Lab houses a living room space that experiments with automated media control using UWB, millimeter-wave, vision, and voice activation (Figure 7). In this setup, the multiple sensing modalities first detect the presence of individuals seated on the couch to trigger a sequence of events, including the lights turning on, the blinds going up, and the TV turning on to a preferred program. A system using the i.MX 8M+ and an attached Basler camera, as well as another system with an overhead camera, uses vision to detect persons and perform personalizations such as switching from a show with more adult content to one catered to a younger audience if a child walks into the area. For those who would find that particular personalization vexing (myself included), the system is meant to be trained to the preferences of the individual.
Another demo in this area included NXP’s “Audio Vista,” or sonic optimization. This solution uses UWB ranging to detect the precise location of the person or people sitting on the couch and communicates with the four speakers located throughout the space to tell the user where and how the speakers should be moved for an optimal audio setup. The same underlying UWB technology can be trained to detect heart arrhythmias, breathing, or falls for home health applications. Another media control experiment used echo cancellation to extract a voice from a noisy environment so that users do not have to speak over audio to, for instance, ask a smart speaker to pause a TV program.
Figure 7 The living room space that experiments with automated media control using UWB, millimeter-wave, vision, and voice activation. The UWB system can be seen up front, the millimeter-wave transmitter and receiver sit above the speakers, and the Basler camera is to the far right.
The home theater: Downsizing the AV receiver
In the second-to-last stop, everyone sat in a theater to experience immersive Dolby Atmos surround sound, an experience provided by the i.MX 8M Mini (Figure 8). A traditional AV receiver design involves a dedicated audio codec IC as well as an MCU and MPU to handle functions such as the various connectivity options and the rendering of video. The multicore i.MX 8M Mini’s Arm Cortex-A53s have processing capability to spare: the audio portion of a traditional AV receiver's workload consumes only ~30% of the chip's capacity, all while the 8M Mini handles its own controls, processing, and many other rendering tasks as well.
Dolby Atmos has previously been considered a premium sound feature that was not easily provided by products such as soundbars or low- to mid-tier AV receivers. Powerful processors such as the 8M Mini can integrate these functions to lower the barrier to entry for companies, providing not only Dolby Atmos decoding but MPEG and DTS:X as well. The i.MX also runs a Linux operating system in conjunction with a real-time operating system (RTOS), allowing users to easily integrate Matter, Thread, and other home automation connectivity protocols into the AV receiver or soundbar.
Figure 8 Theater portion of the Innovation Lab with the Dolby Atmos immersive surround sound experience processed on the i.MX 8M Mini.
The gym: Health and wellbeing demos
The gym showcased a number of medical solutions, starting with devices with embedded NTAGs that users can scan and commission via NFC to, for example, verify the authenticity of the medication being injected. Other devices included insulin pouches using NXP’s BLE MCUs that can be scanned with a phone so a user can learn the last time they took an insulin shot. Smart watches and fitness trackers based on NXP’s RTD boards were also shown that run for up to a week without being charged.
Another embedded device that measures ECG was demonstrated (Figure 9); it can take ECG data, encrypt it, and send the information to the doctor of choice. There are three main secure aspects of this process:
- Authentication that establishes the OEM credentials
- Verification of insurance details through NFC
- Encryption of health data being sent
The screen in the image shows what a doctor might view on a daily basis to track their patients. The system could, for instance, sense a heart attack and call an ambulance. The concept could also be extended to diabetic patients who must track insulin and blood sugar levels as they change through the day.
Figure 9 Tour of health and wellness devices with a monitor displaying patient information for a doctor that has authenticated themselves through an app.
Aalyia Shaukat, associate editor at EDN, has worked in the engineering publishing industry for over 8 years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.
Related Content
- An engineer’s playground: A tour of Silicon Labs’ labs
- Analog Devices’ approach to heterogeneous debug
- Ultra-wideband tech gets a boost in capabilities
- New i.MX MCU serves TinyML applications
The post Real examples of the IoT edge: A guide of NXP’s Innovation Lab appeared first on EDN.
The search for "well enough" (not perfection)
I have mad respect for anyone who nails a well-designed PCB on the first go. Meanwhile, I'm embracing the 'iterative approach'—which is a fancy way of saying I make a lot of prototypes and have a constant love-hate relationship with my own designs.
Take, for instance, my simple mixed-mode display side project. All I wanted was a nice combo of 7-segment displays, LEDs, and a bargraph, controlled by a MAX7221, for some other projects. Easy, right? Well, fast forward two years, and I've got a beautiful timeline of my trials, errors, and the occasional "Aha!" moments. Honestly, it's been a journey. My first design was basically a cry for help, but now it's evolved to the point where I'm okay with it, and it works for my main projects.
DARPA awards University of Michigan’s Zetian Mi $3m to scale III–V materials on silicon
NS Nanotech releases first solid-state semiconductor to produce human-safe disinfecting UV light
🎥 IV International Conference "Biosafety and Modern Rehabilitation Technologies: Theory, Practice, Prospects"
This is the fourth international scientific and practical conference organized by the Faculty of Biomedical Engineering (FBMI) of Igor Sikorsky Kyiv Polytechnic Institute. This time it brought together scientists and specialists from the medical and engineering fields from Ukraine, Australia, the USA, Italy, and the Netherlands to discuss prospects for the development of rehabilitation technologies and to exchange best practices in training qualified rehabilitation specialists.
PUF security IPs bolstered by test suite, PSA certification
Internet of Things (IoT) security, one of the biggest challenges for embedded developers, is paving the way for physical unclonable functions (PUFs) in microcontroller (MCU) and system-on-chip (SoC) designs. And a new design ecosystem is emerging to make PUF implementation simpler and more cost-effective.
PUF, which creates secure, unclonable identities based on manufacturing variations unique to each semiconductor chip, facilitates the essential hardware root-of-trust IP required in security implementations. A cryptographic root-of-trust forms the security foundation of modern hardware infrastructures.
Here, PUF creates random numbers on demand, so there is no need to store cryptographic keys in flash memory. That, in turn, eliminates the danger of side-channel memory attacks revealing the keys. But PUF’s technical merits aside, where does it stand as a cost-effective hardware security solution?
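The regenerate-on-demand idea can be sketched in a few lines of Python. This is purely illustrative and not any vendor's implementation: the PUF response is simulated with a keyed hash so the example runs anywhere, and all names are hypothetical.

```python
import hashlib
import hmac

# Stand-in for the die's physical uniqueness. On real silicon this value
# arises from manufacturing variations and is never stored anywhere.
CHIP_FINGERPRINT = bytes.fromhex("3a91de07" * 8)

def puf_response(challenge: bytes) -> bytes:
    """Return the device's (simulated) response to a challenge."""
    return hmac.new(CHIP_FINGERPRINT, challenge, hashlib.sha256).digest()

def derive_key(challenge: bytes, context: bytes) -> bytes:
    """Regenerate a 256-bit key on demand from the PUF response,
    so no key material needs to live in non-volatile memory."""
    return hmac.new(puf_response(challenge), context, hashlib.sha256).digest()

# The same challenge/context pair always regenerates the same key...
assert derive_key(b"boot", b"secure-boot-key") == derive_key(b"boot", b"secure-boot-key")
# ...while different contexts yield independent keys.
assert derive_key(b"boot", b"secure-boot-key") != derive_key(b"boot", b"storage-key")
```

Because the key exists only transiently in working memory, there is nothing in flash for a side-channel memory attack to read out.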
Below are two design case studies relating to PUF’s certification and testing. They provide anecdotal evidence of how this hardware security technology for IoT and embedded systems is gaining traction.
PUF certification
PUFsecurity, a supplier of PUF-based security solutions and a subsidiary of eMemory, has achieved Level 3 Certification from PSA for its PUF security IP, which it calls a crypto coprocessor. PSA Certified is a security framework that tests and verifies the reliability of secure boot, secure storage, firmware update, secure boundary, and crypto engines.
PUFsecurity teamed up with Arm to test its crypto coprocessor IP, subsequently passing the PSA Certified Level 3 RoT Component evaluation. Its PUFcc crypto coprocessor IP, incorporated into the Arm Corstone-300 IoT reference design platform, was evaluated under the Security Evaluation Standard for IoT Platforms (SESIP) profile.
Figure 1 The PUF security IP has been certified on Arm’s reference platform. Source: PUFsecurity
The PSA Certified framework—a globally recognized security standard platform for ensuring that the security features of IoT devices are addressed during the design phase—guarantees that all connected devices are built upon a root-of-trust. “PSA Certified has become the platform of choice for our partners to swiftly meet regional cybersecurity and regulatory requirements,” said Paul Williamson, senior VP and GM for IoT Line of Business at Arm.
The evaluation, carried out by an independent laboratory, used five mandatory and five optional security functional requirements (SFRs). The mandatory requirements verify platform identity, secure platform update, physical attacker resistance, secure communication support, and secure communication enforcement.
On the other hand, the optional requirements include verification of platform instance identity, attestation of platform genuineness, cryptographic operation, cryptographic random number generation, and cryptographic key generation.
PUF testing
PUFs used in semiconductors for secure, regenerable random number generation pose unique testing challenges. Their output provides a basis for unique device identities and cryptographic key generation, but unlike traditional random number generators (RNGs), PUFs produce a fixed-length output.
That makes existing tests inadequate for determining randomness, a fundamental requirement for a secure device root-of-trust. Crypto Quantique, a supplier of quantum-driven security solutions for IoT devices, has developed a randomness test suite tailored specifically for PUFs.
Figure 2 Test suite overcomes the limitations of NIST 800-22 in evaluating PUF randomness. Source: Crypto Quantique
The new test suite adapts existing tests from the NIST 800-22 suite and makes them suitable for unique PUF characteristics like spatial dependencies and limited output length. It also introduces a test to ensure the independence of PUF outputs, a vital consideration for maintaining cryptographic security by identifying correlated outputs.
In short, the test suite ensures that PUFs meet randomness requirements without excessive data demands. It runs tests in different data orderings to account for potential spatial correlations in PUF outputs, and by reducing the number of bits required for certain tests, it enables more efficient testing while minimizing the risk of misrepresenting PUF quality.
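As a concrete illustration, the frequency (monobit) test from NIST SP 800-22—one of the tests such a suite adapts—can be written in a few lines of Python. This sketch omits the short-length and spatial-ordering adjustments that make Crypto Quantique's suite PUF-specific.

```python
import math

def monobit_test(bits: list[int], alpha: float = 0.01) -> bool:
    """NIST SP 800-22 frequency (monobit) test: for a random sequence,
    the sum of bits mapped to +/-1 should stay near zero. Returns True
    if the p-value clears the significance level alpha."""
    n = len(bits)
    s_obs = abs(sum(1 if b else -1 for b in bits)) / math.sqrt(n)
    p_value = math.erfc(s_obs / math.sqrt(2))
    return p_value >= alpha

# A 256-bit response with an even bit balance passes...
assert monobit_test([i % 2 for i in range(256)])
# ...while a heavily biased (stuck-at) response fails.
assert not monobit_test([1] * 256)
```

Note that a single pass of this test would not catch the spatial correlations the article mentions (the alternating sequence above passes despite being perfectly predictable), which is exactly why a PUF-specific suite reruns tests under different data orderings.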
The availability of PUF-centric test solutions shows that the design ecosystem around this security technology is steadily taking shape. The certification of PUF IPs further affirms its standing as a reliable root-of-trust subsystem.
Related Content
- PUF up your IoT security
- How PUF Technology is Securing IoT
- Building a path through the IoT security maze
- Microcontroller with ChipDNA PUF Technology for IoT
- Hardware Root of Trust: The Key to IoT Security in Smart Homes
The post PUF security IPs bolstered by test suite, PSA certification appeared first on EDN.