Feed aggregator
Spent the weekend making a logic simulation
submitted by /u/flippont
ADI’s efforts for a wirelessly upgraded software-defined vehicle
In-vehicle systems have grown massively in complexity, with more installed speakers, microphones, cameras, and displays, and a greater compute burden to process the necessary information and provide the proper, often time-sensitive output. The unfortunate side effect of this complexity is a massive increase in ECUs and in the cabling running to and from each allocated subsystem (e.g., engine, powertrain, braking). The impracticality of this approach has become apparent, and more OEMs are shifting away from these domain-based architectures and, with them, traditional automotive buses such as the local interconnect network (LIN) and controller area network (CAN) for ECU communications, FlexRay for x-by-wire systems, and media oriented systems transport (MOST) for audio and video systems. SDVs rethink the underlying vehicle architecture so that cars are broken into zones that directly service the vehicle subsystems surrounding them locally, cutting down wiring, latency, and weight. Another major benefit is over-the-air (OTA) updates using Wi-Fi or cellular to update cloud-connected cars; however, bringing Ethernet to the automotive edge comes with its own complexities.
ADI’s approach to zonal architectures
This year at CES, EDN spoke with Yasmine King, VP of automotive cabin experience at Analog Devices (ADI). The company is closely involved in the underlying connectivity solutions that allow vehicle manufacturers to shift from domain architectures to zonal ones with its ethernet-to-edge bus (E2B), automotive audio bus (A2B), and gigabit multimedia serial link (GMSL) technologies. “Our focus this year is to show how we are adding intelligence at the edge and bringing the capabilities of bridging the analog of the real world into the digital world. That’s the vision of where automotive wants to get to; they want to be able to create experiences for their customers, whether it’s the driving experience or the back-seat passenger experience. How do you help create these immersive and safe experiences that are personalized to each occupant in the vehicle? In order to do that, there has to be a fundamental change in what the architecture of the car looks like,” said King. “So in order to do this in a way that is sustainable, for mobility to remain green, with long battery range and good fuel efficiency, you have to find a way of transporting that data efficiently, and the E2B bus is one of those connectivity solutions where it allows for body control, ambient lighting.”
E2B: Remote control protocol solution for 10BASE-T1S
Based on the OPEN Alliance 10BASE-T1S physical layer (PHY), the E2B bus aims to remove the need for MCUs by centralizing the software in the high-performance compute (HPC), or central compute (Figure 1). “The E2B bus is the only remote control protocol solution available on the market today for 10BASE-T1S, so it’s a very strong position for us. We just released our first product in June of this past year, and we see this as a very fundamental way to help the industry transform to zonal architecture. We’re working with the OPEN Alliance to be part of that remote control definition.” These transceivers will integrate low complexity ethernet (LCE) hardware for remote operation and, naturally, can be used on the same bus as any other 10BASE-T1S-compliant product.
BMW has already adopted the E2B bus for its ambient lighting system, and King mentioned that there has been further adoption by other OEMs, though these were not yet public. “The E2B bus is one of those connectivity solutions where it allows for body control, ambient lighting. Honestly, there’s about 50 or 60 different applications inside the vehicle.” She noted that E2B is often used for ambient lighting today, but there are many other potential applications, such as driver monitoring systems (DMSs) that might detect a sleeping driver via in-vehicle biometric capabilities and respond with a series of measures to wake them up; E2B allows OEMs to apply these measures with an OTA update. “Without E2B, you’d have to not only update the DMS, but you’d have to update the multiple nodes that are controlling the ambient light. The owner might have to take it back into the shop to apply the updates; it just takes longer and is more of a hassle. With E2B, it’s a single OTA update that is an easy, quick download to add safety features, so it’s more realistic to get that safer, more immersive driver experience.” The goal for ADI is to move all the software from the edge nodes to the central location for updates.
Figure 1: EDN editor Aalyia Shaukat (left) and VP of automotive cabin experience Yasmine King (right) in front of a suspension-control demo in which four edge nodes sense the location of a weighted ball and send the information back to the HPC, which sends commands back to control the motors.
A2B: Audio system based on 100BASE-T1
Based upon the 100BASE-T1 standard, the A2B audio bus follows a similar concept of connecting edge nodes, with a specialization in sound, limiting the installation of weighty shielded analog cables running to and from the many speakers and microphones in today’s vehicles for modern functions such as active noise cancellation (ANC) and road noise cancellation (RNC). “We have RNC algorithms that are connected through A2B, and it’s a very low latency, highly deterministic bus. It allows you to get the inputs from, say, the wheel base, where you’re listening for the noise, to the brain of the central compute very quickly.” King mentioned that audio systems require extremely low latencies for an enhanced user experience: “your ears are very susceptible to any small latency or distortion.” The technology is more mature than the newer E2B bus and has therefore seen more adoption: “A2B is a technology that is utilized across most OEMs, the top 25 OEMs are all using it, and we’ve shipped millions of ICs.” ADI is working on a second iteration of the A2B bus that multiplies the data rate of the previous generation; this is likely due to the maturation of the 1000BASE-T1 standard for automotive applications, which is meant to reach 1 Gbps. When asked about the data rate, King responded, “I’m not sure exactly what we are publicly stating yet, but it will be a multiplier.”
GMSL: Single-wire SerDes display solution
GMSL is the in-vehicle serializer/deserializer (SerDes) video solution that shaves off the significant wiring typically required by camera and related sensor infrastructure (Figure 2). “As you’re moving towards autonomous driving and you want to replace a human with intelligence inside the vehicle, you need additional sensing capabilities along with radar, LiDAR, and cameras to be that perception sensing network. It’s all very high bandwidth, and it needs a solution that can be transmitted over a low-cost, lightweight cable.” Following a similar theme as the E2B and A2B buses, using a single cable to manage a cluster display or an in-vehicle infotainment (IVI) human-machine interface (HMI) minimizes the potential weight issues that could hurt range/fuel efficiency. King finished by mentioning one overlooked benefit of reducing the weight of vehicle harnessing: “The other piece that often gets missed is it’s very heavy during manufacturing; when you move over 100 pounds within the manufacturing facilities you need different safety protocols. This adds expense and safety concerns for the individuals who have to pick up the harness, where now you have to get a machine over to pick up the harness because it’s too heavy.”
Figure 2: GMSL demo aggregating feeds from six cameras into a deserializer board going into a single MIPI port on the Jetson HPC-platform.
Aalyia Shaukat, associate editor at EDN, has worked in the design publishing industry for six years. She holds a Bachelor’s degree in electrical engineering from Rochester Institute of Technology, and has published works in major EE journals as well as trade publications.
Related Content
The post ADI’s efforts for a wirelessly upgraded software-defined vehicle appeared first on EDN.
Happy workbench Wednesday! Today, I wanted to share how terrifying the exhaust fan module of a Keysight/Ixia XGS12 mainframe is.
If you’re not careful with this thing, it’ll probably lop off your fingers: https://imgur.com/a/XuLKBF1
PWMpot approximates a Dpot
Digital potentiometers (“Dpots”) are a diverse and useful category of digital/analog components with up to 10-bit resolution, element resistances from 1 kΩ to 1 MΩ, and voltage capability up to and beyond ±15 V. However, most are limited to 8 bits, monopolar (typically 0 V to +5 V) signal levels, and 5 kΩ to 100 kΩ resistances with loose tolerances of ±20 to 30%.
Wow the engineering world with your unique design: Design Ideas Submission Guide
This design idea describes a simple and inexpensive Dpot-like alternative. It has limitations of its own (mainly being restricted to relatively low signal frequencies) but offers useful and occasionally superior performance in areas where actual Dpots tend to fall short. These include parameters like bipolar signal range, terrific differential nonlinearity, tight resistance accuracy, and programmable resolution. See Figure 1.
Figure 1 PWM drives opposing-phase CMOS switches and RC network to simulate a Dpot
RC ripple filtering limits frequency response to typically tens to hundreds of Hz.
Switch U1b connects wiper node W to node B when PWM = 1, and to A when PWM = 0. Letting the PWM duty factor, P = 0 to 1, and assuming no excessive loading of W:
Vw = P(Vb – Va) + Va
Meanwhile, switch U1a connects W to node A when PWM = 1, and to B when PWM = 0, thus 180° out of phase with U1b. Due to AC coupling, this has no effect on pot DC output, but the phase inversion relative to U1b delivers active ripple attenuation as described in “Cancel PWM DAC ripple with analog subtraction.”
The minimum RC time-constant required to attenuate ripple to no more than 1 least significant bit (lsb) for any given N = number of PWM bits of resolution and Tpwm = PWM period is given by:
RC = Tpwm · 2^(N/2 – 2)
For example:
for N = 8, Fpwm = 10 kHz:
RC = (10 kHz)^-1 · 2^(8/2 – 2) = 100 µs · 2^2 = 400 µs
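The wiper relation and the minimum-RC rule above are easy to check numerically; here is a minimal sketch (function names are my own, not from the article):

```python
def wiper_voltage(p, va, vb):
    """Ideal wiper voltage from the text: Vw = P*(Vb - Va) + Va."""
    return p * (vb - va) + va

def min_rc(n_bits, f_pwm):
    """Minimum RC time constant (seconds) that keeps ripple
    below 1 lsb: RC = Tpwm * 2^(N/2 - 2)."""
    return (1.0 / f_pwm) * 2 ** (n_bits / 2 - 2)

# Worked example from the text: N = 8 bits, Fpwm = 10 kHz
print(min_rc(8, 10e3))                # 100 µs * 2^2 = 400 µs
print(wiper_voltage(0.5, -5.0, 5.0))  # bipolar midscale -> 0.0
```

Note that the required RC grows as 2^(N/2): each extra bit of resolution costs a factor of √2 in settling time, which is why the idea is limited to low signal frequencies.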
The maximum acceptable value for R is dictated by the required Vw voltage accuracy under load. Minimum R is determined by:
- Required resistance accuracy after factoring in the variability of the U1b switch on-resistance r, which is 40 ±40 Ω for the HC4053 powered as in Figure 1.
- Required integral nonlinearity (INL) as affected by switch-to-switch Ron variation, which is just 5 Ω for the HC4053 as powered here.
R = 1 kΩ to 10 kΩ would be a workable range of choices for N = 8-bit resolution. N is programmable.
The net result is the equivalent circuit shown in Figure 2. Note that, unlike a mechanical pot or Dpot, where output resistance varies dramatically with wiper setting, the PWMpot’s output resistance (R + r) is nominally constant and independent of setting.
Figure 2 The PWMpot’s equivalent circuit where r = switch Ron, P = PWM duty factor, and where the ripple filter capacitors are not shown.
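The constant-output-resistance point can be made concrete: a conventional pot of total resistance Rt presents a Thevenin output resistance of Rt·P·(1−P) at the wiper, peaking at mid-setting, while the PWMpot stays at R + r for any P. A small sketch (function names are mine):

```python
def pot_rout(r_total, p):
    """Thevenin output resistance of a conventional pot at wiper setting p:
    the two resistance segments p*Rt and (1-p)*Rt appear in parallel."""
    return r_total * p * (1 - p)

def pwmpot_rout(r, r_on):
    """PWMpot output resistance: constant R + r, independent of setting."""
    return r + r_on

# A 10 kΩ mechanical pot swings from 0 Ω at the ends to 2.5 kΩ at midscale;
# a PWMpot with R = 10 kΩ and r = 40 Ω sits at 10.04 kΩ everywhere.
print(pot_rout(10e3, 0.5))    # 2500.0
print(pwmpot_rout(10e3, 40))  # 10040
```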
Funny footnote: While pondering a name for this idea, I initially thought “PWMpot” was too long and considered making it shorter and catchy-er by dropping the “WM.” But then, after reading the resulting acronym out loud, I decided it was maybe a little too catchy.
And put the “WM” back!
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Cancel PWM DAC ripple with analog subtraction
- A faster PWM-based DAC
- Parsing PWM (DAC) performance: Part 1—Mitigating errors
- PWM power DAC incorporates an LM317
- Cancel PWM DAC ripple with analog subtraction but no inverter
- Cancel PWM DAC ripple with analog subtraction—revisited
The post PWMpot approximates a Dpot appeared first on EDN.
Aledia makes available micro-LED technology for immersive AR
Network Switch Meaning, Types, Working, Benefits & Applications
A network switch is a hardware device that connects devices within a Local Area Network (LAN) to enable communication. It operates at the data link layer (Layer 2) or network layer (Layer 3) of the OSI model and uses MAC or IP addresses to forward data packets to the appropriate device. Unlike hubs, switches efficiently direct traffic to specific devices rather than broadcasting to all network devices.
Types of Network Switch
- Unmanaged Switch:
- Basic plug-and-play device with no configuration options.
- Suitable for small or home networks.
- Managed Switch:
- Allows advanced configuration, monitoring, and control.
- Used in enterprise networks for better security and performance management.
- Smart Switch:
- A middle ground between unmanaged and managed switches.
- Provides limited management features for smaller networks.
- PoE Switch (Power over Ethernet):
- Delivers power to connected devices such as VoIP phones and IP cameras.
- Layer 3 Switch:
- Integrates routing functions with Layer 2 switching capabilities.
- Ideal for larger, more complex networks.
How Does a Network Switch Work?
A network switch operates by analyzing incoming data packets, determining their destination addresses, and forwarding them to the correct port. It maintains a MAC address table that maps devices to specific ports, ensuring efficient communication.
Steps in operation:
- Receives data packets.
- Reads the packet’s destination MAC or IP address.
- Matches the address with its internal table to find the correct port.
- Forwards the packet only to the intended recipient device.
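The steps above amount to MAC learning plus selective forwarding; this toy model (class and port numbering are illustrative, not from any vendor API) sketches the behavior, including flooding when the destination is unknown:

```python
class Layer2Switch:
    """Toy Layer-2 switch: learn source MACs per port, forward known
    destinations to a single port, flood unknown ones to all other ports."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port  # learn/refresh the source mapping
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]  # known: forward to one port
        # unknown destination: flood to every port except the ingress port
        return [p for p in range((self.num_ports)) if p != in_port]

sw = Layer2Switch(4)
print(sw.handle_frame(0, "aa:aa", "bb:bb"))  # unknown dst -> flood [1, 2, 3]
print(sw.handle_frame(1, "bb:bb", "aa:aa"))  # aa:aa learned on port 0 -> [0]
```

This is also why a switch beats a hub: after the first exchange, traffic between two hosts occupies only their two ports.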
Network Switch Uses & Applications
- Home Networks: Connect devices like PCs, printers, and smart home systems.
- Enterprise Networks: Facilitate communication across servers, workstations, and other IT infrastructure.
- Data Centers: Support high-speed communication and load balancing.
- Industrial Applications: Manage devices in IoT and automation systems.
- Surveillance Systems: Power and connect IP cameras via PoE switches.
How to Use a Network Switch
- Select the Right Switch: Choose based on your network size and requirements (e.g., unmanaged for simple networks, managed for complex ones).
- Connect Devices: Insert Ethernet cables from your devices into the available ports on the switch.
- Connect to a Router: Link the switch to a router for internet access.
- Power On the Switch: If using PoE, ensure the switch supports the connected devices.
- Configure (if applicable): For managed switches, use the web interface or CLI to set up VLANs, QoS, or security settings.
Network Switch Advantages
- Improved Network Efficiency: Directs traffic only to the intended recipient device.
- Scalability: Allows multiple devices to connect and communicate.
- Enhanced Performance: Supports higher data transfer rates and reduces network congestion.
- Security Features: Managed switches offer advanced security controls.
- Flexibility: PoE switches provide power to connected devices, removing the requirement for individual power sources.
The post Network Switch Meaning, Types, Working, Benefits & Applications appeared first on ELE Times.
eSIM Meaning, Types, Working, Card, Architecture & Uses
An eSIM (embedded SIM) is a SIM integrated directly into a device’s hardware, removing the need for a physical SIM card. It enables users to activate a mobile network plan without inserting a removable card. This technology simplifies connectivity and is gaining popularity in smartphones, wearables, IoT devices, and automotive applications.
How Does eSIM Work?
An eSIM functions through a reprogrammable SIM chip that is built into the device’s hardware. In contrast to traditional SIM cards that require physical replacement, eSIMs can be activated or reconfigured using software. Mobile network operators (MNOs) provide QR codes or activation profiles that users scan or download to enable network connectivity.
The process typically involves the following steps:
1. Provisioning: The user receives a QR code or activation data from the MNO.
2. Activation: The eSIM-capable device connects to the MNO’s server to download and install the profile.
3. Switching Networks: Users can store multiple profiles and switch between them as needed.
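The provision/activate/switch flow above can be sketched as a toy profile store; the class, method names, and ICCID strings below are purely illustrative and not part of any real GSMA or operator API:

```python
class ESimManager:
    """Toy model of eSIM profile storage and switching (illustrative only)."""

    def __init__(self):
        self.profiles = {}  # ICCID -> operator name
        self.active = None

    def download_profile(self, iccid, operator):
        """Steps 1-2: provision and install a profile from activation data."""
        self.profiles[iccid] = operator

    def enable(self, iccid):
        """Step 3: make one of the stored profiles the active network."""
        if iccid not in self.profiles:
            raise KeyError("profile not installed")
        self.active = iccid
        return self.profiles[iccid]

sim = ESimManager()
sim.download_profile("iccid-01", "CarrierA")  # e.g., scanned from a QR code
sim.download_profile("iccid-02", "CarrierB")
print(sim.enable("iccid-02"))  # switch networks without touching hardware
```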
eSIM Architecture
The architecture of an eSIM integrates hardware and software components to ensure seamless connectivity:
1. eUICC (Embedded Universal Integrated Circuit Card): This is the hardware component that houses the eSIM profile.
2. Profile Management: eSIM profiles are managed remotely by MNOs using Over-the-Air (OTA) technology.
3. Security Framework: Ensures secure provisioning, activation, and data transmission.
4. Interoperability Standards: Governed by GSMA specifications to ensure compatibility across devices and networks.
Types of eSIM
1. Consumer eSIM: Designed for smartphones, tablets, and wearables to provide seamless personal connectivity.
2. M2M (Machine-to-Machine) eSIM: Designed for IoT devices to enable seamless global connectivity.
3. Automotive eSIM: Implemented in connected cars for telematics, navigation, and emergency services.
eSIM Uses & Applications
1. Smartphones and Wearables:
– Enables dual SIM functionality.
– SMakes it easy to switch between carriers without needing to replace SIM cards.
2. IoT Devices:
– Powers smart meters, trackers, and sensors with global connectivity.
3. Automotive:
– Supports connected car applications like real-time navigation, diagnostics, and emergency calls.
4. Travel:
– Allows travelers to activate local plans without buying physical SIMs.
5. Enterprise:
– Facilitates centralized management of employee devices.
How to Use eSIM
1. Verify Device Compatibility: Confirm that the device is equipped with eSIM support.
2. Obtain an eSIM Plan: Contact an MNO to get an eSIM-enabled plan.
3. Activate the eSIM:
– Use the QR code supplied by the network operator for activation.
– Adhere to the displayed prompts to download and set up the eSIM profile.
4. Manage Profiles: Use the device settings to switch between profiles or add new ones.
Advantages of eSIM
1. Convenience: Removes the dependency on physical SIM cards for connectivity.
2. Flexibility: Supports multiple profiles, enabling seamless switching between carriers.
3. Compact Design: Saves space in devices, allowing for sleeker designs or additional features.
4. Remote Provisioning: Simplifies activation and profile management.
5. Eco-Friendly: Reduces plastic waste from physical SIM cards.
Disadvantages of eSIM
1. Limited Compatibility: eSIM technology is not universally supported across all devices.
2. Dependency on MNOs: Activation relies on operator support.
3. Security Concerns: Potential vulnerability during OTA provisioning.
4. Complexity in Migration: Switching devices requires transferring eSIM profiles, which can be less straightforward than swapping physical SIMs.
What is an eSIM Card?
An eSIM card is a built-in chip integrated into the device’s hardware, functioning as a replacement for conventional SIM cards. It operates electronically, allowing devices to connect to networks without physical card insertion.
eSIM Module for IoT
In IoT, eSIM modules are integral for providing reliable, scalable, and global connectivity. They:
– Enable remote management of IoT devices.
– Streamline logistics by removing the necessity for region-specific SIM cards.
– Provide a robust solution for devices operating in diverse environments.
Conclusion
eSIM technology represents a significant step forward in connectivity, offering unmatched flexibility and convenience. From smartphones to IoT devices, its applications are broad and transformative. While it has limitations, advancements in compatibility and security are likely to drive its widespread adoption in the coming years.
The post eSIM Meaning, Types, Working, Card, Architecture & Uses appeared first on ELE Times.
AI at the edge: It’s just getting started
Artificial intelligence (AI) is expanding rapidly to the edge. This generalization conceals many more specific advances—many kinds of applications, with different processing and memory requirements, moving to different kinds of platforms. One of the most exciting instances, happening soonest and with the most impact on users, is the appearance of TinyML inference models embedded at the extreme edge—in smart sensors and small consumer devices.
Figure 1 The TinyML inference models are being embedded at the extreme edge in smart sensors and small consumer devices. Source: PIMIC
This innovation is enabling valuable functions such as keyword spotting (detecting spoken keywords) or performing environmental-noise cancellation (ENC) with a single microphone. Users treasure the lower latency, reduced energy consumption, and improved privacy.
Local execution of TinyML models depends on the convergence of two advances. The first is the TinyML model itself. While most of the world’s attention is focused on enormous—and still growing—large language models (LLMs), some researchers are developing really small neural-network models built around hundreds of thousands of parameters instead of millions or billions. These TinyML models are proving very capable on inference tasks with predefined inputs and a modest number of inference outputs.
The second advance is in highly efficient embedded architectures for executing these tiny models. Instead of a server board or a PC, think of a die small enough to go inside an earbud and efficient enough to not harm battery life.
Several approaches
There are many important tasks involved in neural-network inference, but the computing workload is dominated by matrix multiplication operations. The key to implementing inference at the extreme edge is to perform these multiplications with as little time, power, and silicon area as possible. The key to launching a whole successful product line at the edge is to choose an approach that scales smoothly, in small increments, across the whole range of applications you wish to cover.
It is the nature of the technology that models get larger over time.
System designers are taking different approaches to this problem. For the tiniest of TinyML models in applications that are not particularly sensitive to latency, a simple microcontroller core will do the job. But even for small models, MCUs with their constant fetching, loading, and storing are not an energy-efficient approach. And scaling to larger models may be difficult or impossible.
For these reasons many choose DSP cores to do the processing. DSPs typically have powerful vector-processing subsystems that can perform hundreds of low-precision multiply-accumulate operations per cycle. They employ automated load/store and direct memory access (DMA) operations cleverly to keep the vector processors fed. And often DSP cores come in scalable families, so designers can add throughput by adding vector processor units within the same architecture.
But this scaling is coarse-grained, and at some point, it becomes necessary to add a whole DSP core or more to the design, and to reorganize the system as a multicore approach. And, not unlike the MCU, the DSP consumes a great deal of energy in shuffling data between instruction memory and instruction cache and instruction unit, and between data memory and data cache and vector registers.
For even larger models and more latency-sensitive applications, designers can turn to dedicated AI accelerators. These devices, generally either based on GPU-like SIMD processor arrays or on dataflow engines, provide massive parallelism for the matrix operations. They are gaining traction in data centers, but their large size, their focus on performance over power, and their difficulty in scaling down significantly make them less relevant for the TinyML world at the extreme edge.
Another alternative
There is another architecture that has been used with great success to accelerate matrix operations: processing-in-memory (PiM). In this approach, processing elements, rather than being clustered in a vector processor or pipelined in a dataflow engine, are strategically dispersed at intervals throughout the data memory. This has important benefits.
First, since processing units are located throughout the memory, processing is inherently highly parallel. And the degree of parallel execution scales smoothly: the larger the data memory, the more processing elements it will contain. The architecture need not change at all.
In AI processing, 90–95% of the time and energy is consumed by matrix multiplication, as each parameter within a layer is computed with those in subsequent layers. PiM addresses this inefficiency by eliminating the constant data movement between memory and processors.
By storing AI model weights directly within memory elements and performing matrix multiplication inside the memory itself as input data arrives, PiM significantly reduces data transfer overhead. This approach not only enhances energy efficiency but also improves processing speed, delivering lower latency for AI computations.
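The claim that matrix multiplication dominates can be sanity-checked by counting operations in a small dense network. The layer sizes below are my own assumption of a TinyML-scale model, not PIMIC figures:

```python
def matmul_macs(layer_sizes):
    """Multiply-accumulate count for dense layers: each layer contributes
    (inputs x outputs) products, i.e., one per weight."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def other_ops(layer_sizes):
    """Rough count of the remaining per-neuron work
    (one bias add plus one activation per output neuron)."""
    return sum(2 * n for n in layer_sizes[1:])

# Assumed model: 64 inputs, two 128-unit hidden layers, 10 outputs
sizes = [64, 128, 128, 10]
macs, rest = matmul_macs(sizes), other_ops(sizes)
print(macs, rest, macs / (macs + rest))  # MAC share is ~98% even at this scale
```

Every one of those MACs needs a weight fetched from memory, which is exactly the traffic PiM eliminates by computing where the weights already live.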
To fully leverage the benefits of PiM, a carefully designed neural network processor is crucial. This processor must be optimized to seamlessly interface with PiM memory, unlocking its full performance potential and maximizing the advantages of this innovative technology.
Design case study
The theoretical advantages of PiM are well established for TinyML systems at the network edge. Take the case of the Listen VL130, a voice-activated wake-word inference chip, which is also PIMIC’s first product. Fabricated on TSMC’s standard 22-nm CMOS process, the chip’s always-on voice-detection circuitry consumes 20 µA.
This circuit triggers a PiM-based wake word-inference engine that consumes only 30 µA when active. In operation, that comes out to a 17-times reduction in power compared to an equivalent DSP implementation. And the chip is tiny, easily fitting inside a microphone package.
Figure 2 Listen VL130, connected to external MCU in the above diagram, is an ultra-low-power keyword-spotting AI chip designed for edge devices. Source: PIMIC
PIMIC’s second chip, Clarity NC100, takes on a more ambitious TinyML model: single-microphone ENC. Consuming less than 200 µA, which is up to 30 times more efficient than a DSP approach, it’s also small enough for in-microphone mounting. It is scheduled for engineering samples in January 2025.
Both chips depend for their efficiency upon a TinyML model fitting entirely within an SRAM-based PiM array. But this is not the only way to exploit PiM architectures for AI, nor is it anywhere near the limits of the technology.
LLMs at the far edge?
One of today’s undeclared grand challenges is to bring generative AI—small language models (SLMs) and even some LLMs—to edge computing. And that’s not just to a powerful PC with AI extensions, but to actual edge devices. The benefit to applications would be substantial: generative AI apps would have greater mobility while being impervious to loss of connectivity. They could have lower, more predictable latency; and they would have complete privacy. But compared to TinyML, this is a different order of challenge.
To produce meaningful intelligence, LLMs require training on billions of parameters. At the same time, the demand for AI inference compute is set to surge, driven by the substantial computational needs of agentic AI and advanced text-to-video generation models like Sora and Veo 2. So, achieving significant advancements in performance, power efficiency, and silicon area (PPA) will necessitate breakthroughs in overcoming the memory wall—the primary obstacle to delivering low-latency, high-throughput solutions.
Figure 3 Here is a view of the layout of Listen VL130 chip, which is capable of processing 32 wake words and keywords while operating in the tens of microwatts, delivering energy efficiency without compromising performance. Source: PIMIC
At this technology crossroads, PiM technology is still important, but to a lesser degree. With these vastly larger matrices, the PiM array acts more like a cache, accelerating matrix multiplication piecewise. But much of the heavy lifting is done outside the PiM array, in a massively parallel dataflow architecture. And there is a further issue that must be resolved.
At the edge, in addition to facilitating model execution, it is of primary importance to resolve the bandwidth and energy issues that come with scaling to massive memory sizes. Meeting all these challenges can improve an edge chip’s power-performance-area efficiency by more than 15 times.
PIMIC’s studies indicate that models with hundreds of millions to tens of billions of parameters can in fact be executed on edge devices. It will require 5-nm or 3-nm process technology, PiM structures, and most of all a deep understanding of how data moves in generative-AI models and how it interacts with memory.
PiM is indeed a silver bullet for TinyML at the extreme edge. But it’s just one tool, along with dataflow expertise and deep understanding of model dynamics, in reaching the point where we can in fact execute SLMs and some LLMs effectively at the far edge.
Subi Krishnamuthy is the founder and CEO of PIMIC, an AI semiconductor company developing processing-in-memory (PiM) technology for ultra-low-power AI solutions.
Related Content
- Getting a Grasp on AI at the Edge
- Tiny machine learning brings AI to IoT devices
- Why MCU suppliers are teaming up with TinyML platforms
- Open-Source Development Comes to Edge AI/ML Applications
- Edge AI: The Future of Artificial Intelligence in embedded systems
The post AI at the edge: It’s just getting started appeared first on EDN.
Aehr receives initial FOX-XP system order from GaN power semi supplier
Keysight Expands Novus Portfolio with Compact Automotive Software Defined Vehicle Test Solution
Keysight Technologies announces the expansion of its Novus portfolio with the Novus mini automotive, a quiet small form-factor pluggable (SFP) network test platform that addresses the needs of automotive network engineers as they deploy software defined vehicles (SDVs). Keysight is expanding the capability of the Novus platform by offering a next-generation vehicle interface that includes 10BASE-T1S and multi-gigabit BASE-T1 support for 100 megabits per second, 2.5 gigabits per second (Gbit/s), 5 Gbit/s, and 10 Gbit/s. Keysight’s SFP architecture provides a flexible platform to mix and match speeds for each port, with modules plugging into existing cards rather than requiring a separate card, as many current test solutions necessitate.
As vehicles move to zonal architectures, connected devices are a critical operational component. As a result, any system failures caused by connectivity and network issues can impact safety and potentially create life-threatening situations. To mitigate this risk, engineers must thoroughly test the conformance and performance of every system element before deploying them.
Key benefits of the Novus mini automotive platform include:
- Streamlines testing – The combined solution offers both traffic generation and protocol testing on one platform. With both functions on a single platform, engineers can optimize the testing process, save time, and simplify workflows without requiring multiple tools. It also accelerates troubleshooting and facilitates efficient remediation of issues.
- Helps lower costs and simplify wiring – Supports native automotive interfaces BASE-T1 and BASE-T1S that help lower costs and simplify wiring for automotive manufacturers, reducing the amount of required cabling and connectors. BASE-T1 and BASE-T1S offer a scalable and flexible single-pair Ethernet solution that can adapt to different vehicle models and configurations. These interfaces support higher data rates compared to traditional automotive communication protocols for faster, more efficient data transmission as vehicles become more connected.
- Compact, quiet, and affordable – Features the smallest footprint in the industry with outstanding cost per port, and ultra-quiet, fan-less operation.
- Validates layers 2-7 in complex automotive networks – Provides comprehensive performance and conformance testing that covers everything from data link and network protocols to transport, session, presentation, and application layers. Validating the interoperability of disparate components across layers is necessary in complex automotive networks where multiple systems must work together seamlessly.
- Protects networks from unauthorized access – Supports full line rate and automated conformance testing for TSN 802.1AS 2011/2020, 802.1Qbv, 802.1Qav, 802.1CB, and 802.1Qci. The platform tests critical timing standards for automotive networking, as precise timing and synchronization are crucial for the reliable and safe operation of ADAS and autonomous vehicle technologies. Standards like 802.1Qci help protect networks from unauthorized access and from faulty or insecure devices.
Ram Periakaruppan, Vice President and General Manager, Network Test & Security Solutions, Keysight, said: “The Novus mini automotive provides real-world validation and automated conformance testing for the next generation of software defined vehicles. Our customers must trust that their products consistently meet quality standards and comply with regulatory requirements to avoid costly fines and penalties. The Novus mini allows us to deliver this confident assurance with a compact, integrated network test solution that can keep pace with constant innovation.”
Keysight will demonstrate its portfolio of test solutions for automotive networks, including the Novus mini automotive, at the Consumer Electronics Show (CES), January 7-10 in Las Vegas, NV, West Hall, booth 4664 (inside the Intrepid Controls booth).
The post Keysight Expands Novus Portfolio with Compact Automotive Software Defined Vehicle Test Solution appeared first on ELE Times.
Soft Soldering Definition, Process, Working, Uses & Advantages
Soft soldering is a popular technique in metal joining, known for its simplicity and versatility. It involves the use of a low-melting-point alloy to bond two or more metal surfaces. The process is widely used in electronics, plumbing, and crafting due to its ease of application and the reliability of the joints it produces.
What is Soft Soldering?
Soft soldering refers to the process of joining metals using a filler material, known as solder, that melts and flows at temperatures below 450°C (842°F). Unlike brazing or welding, the base metals are not melted during this process. The bond is achieved by the solder adhering to the surface of the base metals, which must be clean and properly prepared to ensure a strong joint.
The solder typically consists of tin-lead alloys, although lead-free alternatives are now common due to health and environmental concerns. Flux is often used alongside solder to remove oxides from the metal surfaces, promoting better adhesion and preventing oxidation during heating.
How Soft Soldering Works
Soft soldering is a straightforward process that follows these basic steps:
- Preparation:
- Clean the surfaces to be joined by removing dirt, grease, and oxidation. This can be done using sandpaper, a wire brush, or chemical cleaners.
- Apply flux to the cleaned surfaces to prevent oxidation during heating and enhance solder flow.
- Heating:
- Utilize a soldering iron, soldering gun, or any appropriate heat source to warm the joint. Make sure the temperature is adequate to liquefy the solder while keeping the base metals intact.
- Application of Solder:
- After heating the joint, introduce the solder to the targeted area. The solder will melt and flow into the joint by capillary action, creating a strong bond upon cooling.
- Cooling:
- Let the joint cool down gradually without being disturbed. This ensures the integrity of the bond and prevents the formation of weak spots.
The essential tools and materials for soft soldering include:
- Soldering iron or gun
- Solder (tin-lead or lead-free)
- Flux
- Cleaning tools (e.g., sandpaper, wire brush)
- Heat-resistant work surface
Soft Soldering Process
- Surface Preparation: Clean the metal surfaces thoroughly. Apply flux to prevent oxidation and enhance solder adherence.
- Preheating: Warm the area to ensure uniform heating and improve solder flow.
- Solder Application: Melt the solder onto the heated joint, ensuring it flows evenly.
- Inspection: Examine the joint for uniformity and proper adhesion.
- Cleanup: Remove excess flux residue to prevent corrosion.
Uses of Soft Soldering
Soft soldering is widely employed in various industries and applications, including:
- Electronics:
- Circuit board assembly
- Wire connections
- Repair of electrical components
- Plumbing:
- Joining copper pipes
- Creating watertight seals in plumbing joints for water supply systems
- Jewellery Making:
- Crafting and repairing delicate metal items
- Arts and Crafts:
- Creating stained glass
- Assembling small metal models
- Automotive Repairs:
- Fixing radiators and other small components
Advantages of Soft Soldering
- Ease of Use: The process is simple and does not require extensive training.
- Low Temperature: Operates at lower temperatures, reducing the risk of damaging components.
- Versatility: Capable of accommodating diverse materials and a variety of applications.
- Cost-Effective: Requires minimal equipment and materials.
- Repairability: Joints can be easily reworked or repaired.
Disadvantages of Soft Soldering
- Weak Joint Strength: The bond is not as strong as those produced by welding or brazing.
- Temperature Limitations: Joints may fail under high temperatures.
- Toxicity: Lead-based solders pose health risks, necessitating the use of proper ventilation and safety measures.
- Corrosion Risk: Residual flux can lead to corrosion if not cleaned properly.
- Limited Material Compatibility: Not suitable for all types of metals, especially those with high melting points.
Soft soldering remains a valuable technique for joining metals in numerous applications, particularly where ease of use and low-temperature operation are essential. Its advantages make it ideal for delicate tasks in electronics, plumbing, and crafting, while its limitations must be considered when high strength or temperature resistance is required. With advancements in soldering materials and techniques, soft soldering continues to be a reliable and accessible method for metal joining.
The post Soft Soldering Definition, Process, Working, Uses & Advantages appeared first on ELE Times.
Researchers enhance longevity of neural implants with protective coating
Logic Simulator in JavaScript
I've spent the past couple of days making a logic simulation inspired by Sebastian Lague's video series. It's missing quite a few features I wanted to initially add, but I wanted to share my progress. This is the link to the github repository: https://github.com/flippont/simple-program-editor The controls are in the README file.
Unconventional headphones: Sonic response consistency, albeit cosmetically ungainly
Back in mid-2019, I noted that the ability to discern high quality music and other audio playback (both in an absolute sense and when relatively differentiating between various delivery-format alternatives) was dependent not only on the characteristics of the audio itself but also on the equipment used to audition it. One key link in the playback chain is the speakers, whether integrated (along with crossover networks and such) into standalone cabinets or embedded in headphones, the latter particularly attractive because (among other reasons) they eliminate any “coloration” or other alteration caused by the listening room’s own acoustical characteristics (not to mention ambient background noise and imperfect suppression of its deleterious effects).
However, as I wrote at the time, “The quality potential inherent in any audio source won’t be discernable if you listen to it over cheap (i.e., limited and uneven frequency response, high noise and distortion levels, etc.) headphones.” To wit, I showcased three case study examples from my multi-headphone stable: the $29.99 (at the time) Massdrop x Koss Porta Pro X:
$149.99 Massdrop x Sennheiser HD 58X Jubilee:
and $199.99 Massdrop x Sennheiser HD 6XX:
I’ve subsequently augmented the latter two products with optional balanced-connection capabilities via third-party cables. Common to all three is an observation I made about their retail source, Drop (formerly Massdrop): the company “partners with manufacturers both to supply bulk ‘builds’ of products at cost-effective prices in exchange for guaranteed customer numbers, and (in some cases) to develop custom variants of those products.” Hold that thought.
And I’ve subsequently added another conventional-design headphone set to the menagerie: Sony’s MDR-V6, a “colorless” classic that dates from 1985 and is still in widespread recording studio use to this day. Sony finally obsoleted the MDR-V6 in 2020 in favor of the MDR-7506, the more recent MDR-M1, and other successor models, which motivated my admitted acquisition of several gently used MDR-V6 examples off eBay:
One characteristic that all four of these headphones share is that, exemplifying the most common headphone design approach, they’re all based on electrodynamic speaker drivers:
At this point, allow me a brief divergence; trust me, its relevance will soon be more obvious. In past writeups I’ve done on various kinds of both speakers and microphones, I’ve sometimes intermingled the alternative term “transducer”, a “device that converts energy from one form to another,” for both words. Such interchange is accurate; even more precise would be an “electroacoustic transducer”, which converts between electrical signals and sound waves. Microphones input sound waves and output electrical signals; with speakers, it’s the reverse.
I note all of this because electrodynamic speaker drivers, specifically in their most common dynamic configuration, are the conceptual mirror twins to the dynamic microphones I more recently wrote about in late November 2022. As I explained at the time, in describing dynamic mics’ implementation of the principle of electromagnetic induction:
A dynamic microphone operates on the same basic electrical principles as a speaker, but in reverse. Sound waves strike the diaphragm, causing the attached voice coil to move through a magnetic gap creating current flow as the magnetic lines are broken.
Unsurprisingly, therefore, the condenser and ribbon microphones discussed in that late 2022 piece also have (close, albeit not exact, in both of these latter cases) analogies in driver design used for both standalone speakers and in headphones. Condenser mics first; here’s a relevant quote from my late 2022 writeup, corrected thanks to reader EMCgenius’s feedback:
Electret condenser microphones (ECMs) operate on the principle that the diaphragm and backplate interact with each other when sound enters the microphone. Either the diaphragm or backplate is permanently electrically charged, and this constant charge in combination with the varying capacitance caused by sound wave-generated varying distance between the diaphragm and backplate across time results in an associated varying output signal voltage.
Although electret drivers exist, and have found use both in standalone speakers and within headphones, their non-permanent-charge electrostatic siblings are more common (albeit still not very common). To wit, an excerpt from a relevant section of Wikipedia’s headphones entry:
Electrostatic drivers consist of a thin, electrically charged diaphragm, typically a coated PET film membrane, suspended between two perforated metal plates (electrodes). The electrical sound signal is applied to the electrodes creating an electrical field; depending on the polarity of this field, the diaphragm is drawn towards one of the plates. Air is forced through the perforations; combined with a continuously changing electrical signal driving the membrane, a sound wave is generated…A special amplifier is required to amplify the signal to deflect the membrane, which often requires electrical potentials in the range of 100 to 1,000 volts.
Now for ribbon microphones; here’s how Wikipedia and I described them back in late 2022:
A type of microphone that uses a thin aluminum, duraluminum or nanofilm of electrically conductive ribbon placed between the poles of a magnet to produce a voltage by electromagnetic induction.
Looking at that explanation and associated image, you can almost imagine how the process would work in reverse, right? Although ribbon speakers do exist, my focus for today is their close cousins, planar magnetic (also known as orthodynamic) speakers. Wikipedia again:
Planar magnetic speakers (having printed or embedded conductors on a flat diaphragm) are sometimes described as ribbons, but are not truly ribbon speakers. The term planar is generally reserved for speakers with roughly rectangular flat surfaces that radiate in a bipolar (i.e. front and back) manner. Planar magnetic speakers consist of a flexible membrane with a voice coil printed or mounted on it. The current flowing through the coil interacts with the magnetic field of carefully placed magnets on either side of the diaphragm, causing the membrane to vibrate more or less uniformly and without much bending or wrinkling. The driving force covers a large percentage of the membrane surface and reduces resonance problems inherent in coil-driven flat diaphragms.
I’ve chronologically ordered electrostatic and planar magnetic driver technologies based on their initial availability dates, not based on when examples of them came into my possession. Specifically, I found a good summary of the two approaches (along with their more common dynamic driver forebear) on Ken Rockwell’s always-informative website, which is also full of lots of great photography content (it’s always nice to stumble across a kindred interest spirit online!). Rockwell notes that electrostatics were first introduced in 1957 [editor note: by Stax, who’s still in the business], and “have been popular among enthusiasts since the late 1950s, but have always been on the fringe as they are expensive, require special amplifiers and power sources and are delicate—but they sound flawless.” Conversely, regarding planar magnetics, which date from 1972, he comments, “Planar magnetic drivers were invented in the 1970s and didn’t become popular until modern ultra-powerful magnet technology become common in the 2000s. Planar magnetics need tiny, ultra powerful magnets that didn’t used to exist. Planar magnetics offer much of the sound quality of electrostatics, with the ease-of use and durability of conventional drivers, which explains why they are becoming more and more popular.”
Which takes us, roughly 1,200 words in, to the specifics of my exotic headphone journey, which began with two sets containing planar magnetic drivers. Back in late May 2024, Woot! was selling the Logitech for Creators Blue Ella headset (Logitech having purchased Blue in mid-2018) for $99.99, versus the initial $699.99 MSRP when originally introduced in early January 2017. The Ella looked (and still looks) weird, and is also heavy, albeit surprisingly comfortable; the only time I’ve ever seen anyone actually using one was a brief glimpse on Trey Parker and Matt Stone’s heads while doing voice tracks for South Park within the recently released Paramount+ documentary ¡Casa Bonita Mi Amor!. But reviewers rave about the headphones’ sound quality, a headphone amplifier is integrated for use in otherwise high impedance-unfriendly portable playback scenarios, and my wife was bugging me for a Father’s Day gift suggestion. So…
A couple of weeks later, a $10-off promotional coupon from Drop showed up in my email inbox. Browsing the retailer’s inventory, I came across another set of planar magnetic headphones, the Drop + HIFIMAN HE-X4 (remember my earlier comments about Drop’s longstanding history of partnering with name-brand suppliers to come up with custom product variants?), at the time selling for $99.99. They were well reviewed by the Drop community, and looked much less…err…alien…than the Blue Ella, so…(you’ve already seen one stock photo of ‘em earlier):
Look how happy she is (in spite of how big they are on her head)!
And of course, with two planar magnetic headsets now in the stable, I just had to snag an electrostatic representative too, right? Koss, for example, has been making (and evolving) them ever since 1968’s initial ESP/6 model:
The most recent ESP950 variant came out in 1990 and is still available for purchase at $999.99 (or less: Black Friday promotion-priced at $700 on Amazon as I type these words). Believe it or not, it’s one of the most cost-effective electrostatic headphone options currently in the market. Still, its price tag was too salty for my curiosity taste, lifetime factory warranty temptation aside.
That box to the right is the “energizer”, which tackles both the aforementioned high voltage generation and output signal amplification tasks. Koss includes with the ESP950 kit, believe it or not, a 6 C-cell battery pack to alternatively power the energizer (therefore enabling use of the headphones) when away from an AC outlet. Portability? Hardly, although in fairness, the ESP950 was originally intended for use in live recording settings.
But then I stumbled across the fact that back in April 2019, Drop (doing yet another partnership with a brand-name supplier, this one reflective of a long-term multi-product engagement also exemplified by the earlier-shown Porta Pro X) had worked with Koss to introduce a well-reviewed $499.99 version of the kit called the Massdrop x Koss ESP/95X Electrostatic System:
Drop tweaked the color scheme of both the headphones themselves (to midnight blue) and the energizer, swapped out the fake leather (“pleather”) earpads for foam ones wrapped in velour, and dropped both the battery pack and the leather case (the latter still available for purchase standalone for $150) from Koss’s kit to reduce the price point:
Bad news: at least for the moment, the ESP/95X is no longer being sold by Drop. Good news: I found a gently used kit on eBay for $300 plus shipping and tax (and for likely obvious reasons, I also purchased a two-year extended warranty for it).
And what did all of this “retail therapy” garner me? To set the stage for this section, I’ll again quote from the introduction to Ken Rockwell’s earlier mentioned writeup:
Almost all speakers and headphones today are “dynamic.”
Conventional speakers and headphones stick a coil of wire inside a magnet, and glue this coil to a stiff cone or dome that’s held in place with a springy suspension. Current passes through this coil, and electromagnetism creates force on the coil while in the field of the magnet. The resulting force vibrates the coil, and since it’s glued to a heavy cone, moves the whole mess in and out. This primitive method is still used today because it’s cheap and works reasonably well for most purposes.
Dynamic drivers are the standard today and have been the standard for close to a hundred years. These systems are cheap, durable and work well enough for most uses, however their heavy diaphragms and big cones lead to many more sound degrading distortions and resonances absent in the newer systems below.
By “newer systems below”, of course, he’s referring to alternative electrostatic and planar magnetic approaches. And although he’s not totally off-base with his observations, the choice of words like “primitive method” reveals a bias, IMHO. It’s true that the large, flat, thin and lightweight membrane-based approaches have inherent (theoretical, at least) advantages when it comes to metrics such as distortion and transient response, leading to descriptions such as “unmatched clarity and impressive detail”, which admittedly concur with my own ears-on impressions. That said, theoretical benefits are moot if they don’t translate into meaningful real-life enhancements. To wit, for a more balanced perspective, I’ll close with a (fine-tuned by yours truly) post within an August 2023 discussion thread titled “Is there really any advantage to planar magnetics or electrostats?” at Audio Science Review, a site that I regularly reference:
For electrostatics, the strong points are the low membrane weight and drive across the entire membrane. The disadvantage is output level. The driver surface area is big, which has advantages and disadvantages. One can play with shape to change modal behavior. Electrostatics are difficult to drive in the sense that they require a bias voltage (or electret charge) and high voltage on the plates, which necessitates mains voltage or converters. Mechanical tension is a must and ‘sticking’ to one stator is a potential problem.
For planar magnetics, the strong points are the maximum sound pressure level (SPL), linearity and the driver size. The latter can be both a blessing and (frequency-dependent) downside. Fewer tuning methods are available, and it is difficult to get a bass boost in a passive way. The magnets obstruct the sound waves more than does the stator of electrostatic planars, which has an influence on mid to high frequencies. Planar magnetics are easier to drive than electrostatics but in general are inefficient compared to dynamic drivers, especially when high SPL is needed with good linearity. They are heavy (weight) due to the magnets compared to other drivers. They can handle a lot of power. They need closed front volume to work properly.
Dynamics can have a much higher efficiency, at the expense of maximum undistorted SPL. They can be used directly from low power sources. There are many more ways to ‘shape’ the sound signature of the driver, and the headphone containing it. They are less expensive to make, and lighter in weight. Membrane size and shape can both find use in controlling modal issues. Linearity (max SPL without distortion) can be much worse than planar alternatives, although for low to normal SPLs, this usually is not an issue.
Balanced armature drivers [editor note: an alternative to dynamic drivers not discussed here, commonly found in earbuds] are smaller and can be easily used close to the ear canal. These drivers too have strong and weak points and are quite different from dynamic drivers. They are easier to make custom molds for due to their size.
In closing, speaking of “balance” (along with the just-mentioned difference between theoretical benefits and meaningful real-life enhancements), I found it interesting that none of the electrostatic or planar magnetic headphones discussed here offer the balanced-connection output (even optional) that I covered at length back in December 2020:
And with that, having just passed through the 2,500-word threshold, I’ll close for today with an as-usual invitation for your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Balanced headphones: Can you hear the difference?
- Microphones: An abundance of options for capturing tones
- Is high-quality streamed audio an oxymoron?
- Earbud implementation options: Taking a test drive(r)
- Teardown: Analog rules over digital in noise-canceling headphones
- Audio Perceptibility: Mind The Headphone Sensitivity
The post Unconventional headphones: Sonic response consistency, albeit cosmetically ungainly appeared first on EDN.
A brief history and technical background of heat shrink tubing
Heat shrink tubing, rarely referred to simply as “HST” even in our acronym-intensive world, is made of cross-linked polymers and is primarily used to cover and protect wire splices. EDN and Planet Analog contributor Bill Schweber provides a sneak peek of this important but often underrated technology in his latest blog.
Read the full story at EDN’s sister publication, Planet Analog.
Related Content
- Consumer connectors get ruggedized
- Be aware of connector mating-cycle limits
- Read this and give electric insulation a second thought
- Fry’s: Will hands-on opportunities shrink as component stores close?
The post A brief history and technical background of heat shrink tubing appeared first on EDN.
Reflow Oven Definition, Types, Working, Temperature & Machine
Reflow ovens are essential tools in the world of electronics manufacturing, particularly in the soldering process of Surface Mount Technology (SMT). Their precision and efficiency make them indispensable in ensuring the integrity and reliability of printed circuit boards (PCBs). This guide offers a comprehensive overview of reflow ovens, covering their various types, operational principles, temperature profiles, and additional key aspects.
What is a Reflow Oven?
A reflow oven is a specialized machine used to solder electronic components onto PCBs. The process involves heating solder paste applied to the board until it melts, allowing the components to adhere securely to their respective pads. Once cooled, the solder solidifies, creating strong electrical and mechanical connections.
Reflow ovens are commonly used in SMT assembly, where components are placed on the surface of PCBs rather than through holes. This method is widely preferred due to its high efficiency and suitability for miniaturized, densely packed designs.
How Does a Reflow Oven Work?
A reflow oven operates by exposing PCBs to controlled heating cycles. These cycles are meticulously designed to gradually heat the solder paste, reflow it, and then cool it down without causing thermal stress to the board or components. Here’s an overview of the process:
- Preheating Zone:
- The PCB enters the oven and is gradually heated to prevent thermal shock. During this phase, the flux in the solder paste becomes active, helping to eliminate oxides from the metal surfaces.
- Soak Zone:
- The temperature is held steady to ensure uniform heating of the entire board and stabilization of the solder paste.
- Reflow Zone:
- The temperature is raised above the melting point of the solder paste, causing it to liquefy and form bonds between components and PCB pads.
- Cooling Zone:
- The temperature is quickly reduced to solidify the solder, ensuring strong and reliable connections.
The process is controlled by a temperature profile, which is a graph showing the temperature over time as the PCB moves through the oven.
Reflow Oven Temperature Profile
Establishing a precise temperature profile is essential for achieving effective and reliable results in the reflow soldering process. A standard profile consists of four main stages:
- Ramp-Up (Preheat):
- Typical temperature range: 150°C to 200°C.
- Time: 60-120 seconds.
- Thermal Soak:
- Typical temperature range: 200°C to 210°C.
- Time: 60-120 seconds.
- Reflow (Peak):
- Peak temperature: 230°C to 260°C (depending on the solder paste).
- Time above melting point: 30-90 seconds.
- Cooling:
- Gradual cooling to ambient temperature.
- Rapid cooling can lead to thermal stress, so a controlled rate is preferred.
Maintaining the correct temperature profile is crucial to avoid defects such as cold joints, tombstoning, or component damage.
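The staged profile above lends itself to a quick software sanity check. Below is a minimal Python sketch (not from the original article) that encodes an illustrative four-stage profile and flags any segment whose average ramp rate exceeds a commonly cited ~3°C/s guideline; all the numbers are assumptions for demonstration, and real limits should come from the solder paste datasheet.

```python
# Sketch of the four-stage reflow profile as data, plus a ramp-rate check.
# All values are illustrative assumptions, not datasheet figures.

PROFILE = [
    # (stage, start_temp_C, end_temp_C, duration_s)
    ("ramp_up", 25, 150, 90),
    ("soak",    150, 200, 90),
    ("reflow",  200, 245, 60),
    ("cooling", 245, 50, 120),
]

MAX_RAMP_RATE_C_PER_S = 3.0  # a common guideline to limit thermal shock

def ramp_rate(start_c, end_c, duration_s):
    """Average heating (positive) or cooling (negative) rate in °C/s."""
    return (end_c - start_c) / duration_s

def validate(profile, max_rate=MAX_RAMP_RATE_C_PER_S):
    """Return (stage, rate) pairs whose average rate exceeds max_rate."""
    return [
        (stage, abs(ramp_rate(start, end, dur)))
        for stage, start, end, dur in profile
        if abs(ramp_rate(start, end, dur)) > max_rate
    ]

if __name__ == "__main__":
    for stage, start, end, dur in PROFILE:
        rate = ramp_rate(start, end, dur)
        print(f"{stage:8s} {start:>3}→{end:>3} °C over {dur:>3}s ({rate:+.2f} °C/s)")
    print("rate violations:", validate(PROFILE))
```

A profiler-measured curve could be checked the same way, segment by segment, before committing a board to the oven.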
Types of Reflow Ovens
Reflow ovens come in various types, each suited to specific applications and production scales:
- Infrared (IR) Reflow Ovens:
- Rely on infrared radiation as a heating method for PCBs and solder paste.
- Advantages: Simple and cost-effective.
- Drawbacks: Non-uniform heating due to differences in component absorption rates.
- Convection Reflow Ovens:
- Use hot air to achieve uniform heating.
- Advantages: Consistent temperature distribution.
- Drawbacks: Higher energy consumption compared to IR ovens.
- Vapor Phase Reflow Ovens:
- Use a boiling liquid (e.g., Galden) to transfer heat.
- Advantages: Precise temperature control and reduced oxidation risk.
- Drawbacks: High cost and limited throughput.
- Combination Ovens:
- Combine IR and convection heating methods for better efficiency and uniformity.
- Batch Reflow Ovens:
- Process a single batch of PCBs at a time.
- These are ideal for prototype development and managing limited production batches.
- Inline Reflow Ovens:
- Continuously process PCBs on a conveyor belt.
- Suitable for high-volume production.
How to Make a Reflow Oven
Creating a DIY reflow oven is a popular choice for hobbyists and small-scale manufacturers. Here’s a step-by-step guide:
- Acquire a Toaster Oven:
- Choose one with adjustable temperature controls and sufficient interior space.
- Install a Thermocouple and Controller:
- Attach a thermocouple to monitor temperature.
- Use a PID controller to manage heating cycles accurately.
- Modify Heating Elements:
- Ensure even heat distribution by adjusting or replacing heating elements.
- Add Insulation:
- Improve heat retention with additional insulation around the oven.
- Test and Calibrate:
- Run test cycles with a temperature profiler to ensure consistent results.
While DIY reflow ovens are cost-effective, they may lack the precision of commercial models, making them suitable for small-scale or experimental projects.
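To make the PID step above concrete, here is a minimal sketch of the control loop such a DIY controller might run each sample period. The gains, loop period, and setpoint are hypothetical placeholders rather than values from any specific build; real ovens need the gains tuned against their actual thermal response.

```python
# Minimal positional PID loop for a hypothetical DIY reflow controller.
# Gains and setpoints below are illustrative assumptions only.

class PID:
    """Simple PID controller producing a heater duty cycle in [0.0, 1.0]."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint_c, measured_c):
        """Compute one control step from the current temperature error."""
        error = setpoint_c - measured_c
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        raw = (self.kp * error
               + self.ki * self.integral
               + self.kd * derivative)
        return max(0.0, min(1.0, raw))  # clamp to a valid duty cycle


# Hypothetical usage: one control step per second toward a 150 °C preheat
# setpoint; in a real build, the measured value would come from the
# thermocouple and the returned duty cycle would drive the heating element.
pid = PID(kp=0.05, ki=0.001, kd=0.0, dt=1.0)
duty = pid.update(setpoint_c=150.0, measured_c=25.0)  # far below setpoint
# → 1.0 (heater at full duty until the oven closes the gap)
```

In practice the setpoint itself is stepped through the ramp-up, soak, reflow, and cooling targets over time, which is why a dedicated PID controller rather than the toaster oven's thermostat is needed.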
How to Use a Reflow Oven
- Prepare the PCB:
- Spread solder paste onto the designated pads with the help of a stencil.
- Position the components precisely on the PCB in their designated spots.
- Set the Temperature Profile:
- Configure the oven based on the solder paste’s specifications.
- Load the PCB:
- Place the PCB on the conveyor belt or tray.
- Run the Reflow Process:
- Monitor the oven to ensure the profile is followed.
- Inspect the Board:
- Check for soldering defects using visual inspection or X-ray analysis.
Reflow Oven Machine Features
Modern reflow ovens include advanced features such as:
- Multiple Heating Zones: Independent control over preheating, soak, reflow, and cooling zones.
- Conveyor Systems: The speed can be adjusted to ensure precise control over the process.
- Data Logging: Record temperature profiles for quality assurance.
- Nitrogen Atmosphere: Reduce oxidation during soldering.
Reflow Oven Zones
The performance and versatility of a reflow oven are influenced by the number of zones it has. Typically, reflow ovens have 4-12 zones, divided into:
- Heating Zones: These include the preheating, soaking, and reflow phases.
- Cooling Zones: Gradual temperature reduction.
More zones allow for finer temperature control and accommodate complex profiles.
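For an inline oven, the time a board spends in each zone follows directly from the zone length and the conveyor speed (dwell = zone length / belt speed), which is how zone count and belt speed together realize a given temperature profile. A short sketch with made-up illustrative dimensions:

```python
# Hypothetical sketch: dwell time per zone for an inline reflow oven.
# Zone lengths and belt speed are made-up illustrative values.

ZONE_LENGTHS_CM = {"preheat": 60, "soak": 60, "reflow": 45, "cooling": 45}
BELT_SPEED_CM_PER_MIN = 50.0

def dwell_seconds(length_cm, speed_cm_per_min=BELT_SPEED_CM_PER_MIN):
    """Time a board spends in one zone, in seconds."""
    return 60.0 * length_cm / speed_cm_per_min

for zone, length in ZONE_LENGTHS_CM.items():
    print(f"{zone:8s}: {dwell_seconds(length):.0f} s")
```

Slowing the belt lengthens every stage proportionally, so matching a paste's required soak and time-above-liquidus windows usually means adjusting zone temperatures and belt speed together.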
Reflow Oven for PCB Soldering
Reflow ovens are crucial in soldering PCBs, ensuring consistent and reliable connections. They excel in handling:
- High-density SMT assemblies.
- Fine-pitch components and BGAs.
- Complex multi-layer boards.
Reflow Oven for SMT Soldering
In SMT soldering, reflow ovens streamline the assembly process by:
- Minimizing thermal stress on components.
- Ensuring uniform soldering across the board.
- Supporting high-volume, automated production lines.
Conclusion
Reflow ovens are vital tools in modern electronics manufacturing, offering precision, reliability, and efficiency in soldering SMT components. Whether you’re using a high-end inline oven or a DIY setup, understanding their operation, temperature profiles, and types is key to achieving optimal results. As the demand for miniaturized and high-performance electronics grows, reflow ovens will remain a cornerstone of PCB assembly processes.
The post Reflow Oven Definition, Types, Working, Temperature & Machine appeared first on ELE Times.
Almae orders Riber GSMBE production system
USPTO issues Notice of Allowance for AmpliTech’s MMIC LNA patent application
InnoScience floats in IPO on Hong Kong Stock Exchange
New edge AI-enabled radar sensor and automotive audio processors from TI empower automakers to reimagine in-cabin experiences
Texas Instruments (TI) today introduced new integrated automotive chips to enable safer, more immersive driving experiences at any vehicle price point. TI’s AWRL6844 60GHz mmWave radar sensor supports occupancy monitoring for seat belt reminder systems, child presence detection and intrusion detection with a single chip running edge AI algorithms, enabling a safer driving environment. With TI’s next-generation audio DSP core, the AM275x-Q1 MCUs and AM62D-Q1 processors make premium audio features more affordable. Paired with TI’s latest analog products, including the TAS6754-Q1 Class-D audio amplifier, engineers can take advantage of a complete audio amplifier system offering. TI is showcasing these devices at the 2025 Consumer Electronics Show (CES), Jan. 7-10, in Las Vegas, Nevada.
“Today’s drivers expect any car – entry-level to luxury, combustion to electric – to have enhanced in-cabin experiences,” said Amichai Ron, senior vice president, TI Embedded Processing. “TI continues to provide innovative technologies to enable the future of the automotive driving experience. Our edge AI-enabled radar sensors allow automakers to make vehicles safer and more responsive to the driver, while our audio systems-on-chip elevate the drive through more immersive audio. Together they create a whole new level of in-cabin experiences.”
Edge AI-enabled, three-in-one radar sensor increases detection accuracy
Original equipment manufacturers (OEMs) are gradually designing in more sensors to enhance the in-vehicle experience and meet evolving safety standards. TI’s edge AI-enabled AWRL6844 60GHz mmWave radar sensor enables engineers to incorporate three in-cabin sensing features to replace multiple sensor technologies, such as in-seat weight mats and ultrasonic sensors, lowering total implementation costs by an average of US$20 per vehicle.
The AWRL6844 integrates four transmitters and four receivers, enabling high-resolution sensing data at an optimized cost for OEMs. This data feeds into application-specific AI-driven algorithms running on a customizable on-chip hardware accelerator and DSP, improving decision-making accuracy and reducing processing time. Examples of the sensor’s edge intelligence capabilities include:
- While driving, it supports occupant detection and localization with 98% accuracy to enable seat belt reminders.
- After parking, it monitors for unattended children in the vehicle, using neural networks that detect micromovements in real time with over 90% classification accuracy. This direct sensing capability enables OEMs to meet 2025 European New Car Assessment Program (Euro NCAP) design requirements.
- When parked, it adapts to different environments through intelligent scanning, reducing false intrusion detection alerts caused by car shaking and external movement.
To learn more, read the technical article, “Reducing In-Cabin Sensing Complexity and Cost with a Single-Chip 60GHz mmWave Radar Sensor.”
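To make the occupancy-monitoring feature above concrete, here is a minimal sketch of the kind of per-seat decision such a system produces. The real AWRL6844 runs vendor-supplied neural networks on-chip; the seat zones, energy values, and threshold below are invented for illustration only.

```python
# Hypothetical per-seat occupancy decision from radar zone energies.
# Seat layout, threshold, and the idea of a simple dB threshold are
# illustrative assumptions; production devices use on-chip AI models.
SEAT_ZONES = ["driver", "front_passenger", "rear_left", "rear_right"]

def detect_occupancy(zone_energy: dict[str, float],
                     threshold_db: float = 10.0) -> dict[str, bool]:
    """Flag a seat occupied when its reflected micro-Doppler energy
    (in dB relative to an empty-cabin baseline) exceeds a threshold."""
    return {seat: zone_energy.get(seat, 0.0) > threshold_db
            for seat in SEAT_ZONES}

# Example: a strong return in the driver zone, weak elsewhere.
result = detect_occupancy({"driver": 18.2, "rear_left": 3.1})
```

In practice this thresholding step would be replaced by the classifier the press release describes, which is what lifts detection accuracy to the quoted 98%.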
Deliver premium automotive audio with TI’s complete audio portfolio
As driver expectations grow for elevated in-cabin experiences across vehicle models, OEMs aim to offer premium audio while minimizing design complexity and system cost. AM275x-Q1 MCUs and AM62D-Q1 processors reduce the number of components required for an automotive audio amplifier system by integrating TI’s vector-based C7x DSP core, Arm cores, memory, audio networking and a hardware security module into a single, functional safety-capable SoC. The C7x core, coupled with a matrix multiply accelerator, together form a neural processing unit that processes both traditional and edge AI-based audio algorithms. These automotive audio SoCs are scalable, allowing designers to meet memory and performance needs, from entry-level to high-end systems, with minimal redesign and investment.
TI’s next-generation C7x DSP core achieves more than four times the processing performance of other audio DSPs, allowing audio engineers to manage multiple features within a single core. AM275x-Q1 MCUs and AM62D-Q1 processors enable immersive audio inside the cabin with features such as spatial audio, active noise cancellation, sound synthesis and advanced vehicle networking, including Audio Video Bridging over Ethernet.
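Active noise cancellation, one of the DSP features named above, is built on adaptive filtering. The toy LMS loop below illustrates the core idea; production in-cabin ANC uses filtered-x LMS with secondary-path modeling on the DSP, and the filter length, step size, and test signals here are illustrative assumptions only.

```python
import numpy as np

# Toy LMS adaptive filter illustrating the principle behind active
# noise cancellation. Parameters are illustrative; real ANC uses FxLMS
# with acoustic secondary-path compensation, not this bare loop.
def lms_cancel(reference: np.ndarray, disturbance: np.ndarray,
               taps: int = 16, mu: float = 0.01) -> np.ndarray:
    """Adapt an FIR filter so its output tracks the disturbance;
    the residual (error signal) is what the occupant would hear."""
    w = np.zeros(taps)
    error = np.zeros_like(disturbance)
    for n in range(taps, len(disturbance)):
        x = reference[n - taps:n][::-1]   # most recent samples first
        y = w @ x                         # anti-noise estimate
        error[n] = disturbance[n] - y     # residual after cancellation
        w += mu * error[n] * x            # LMS weight update
    return error
```

For a correlated reference (e.g., engine-order noise picked up by a reference microphone), the residual energy drops well below the uncancelled disturbance once the weights converge.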
“Dolby’s longtime collaboration with Texas Instruments has enabled incredible audio experiences in the home, which we’re now bringing into the car,” said Andreas Ehret, senior director of Automotive Business at Dolby Laboratories. “With TI’s C7x DSP core, we can now deliver the latest Dolby Atmos capabilities more efficiently, including support for even smaller form factor audio systems so nearly all vehicles can have Dolby Atmos. Together, these products can help turn every car ride into an immersive entertainment experience.”
To further optimize their automotive audio designs, engineers can use TI’s TAS6754-Q1 audio amplifier with innovative 1L modulation technology to deliver class-leading audio performance and power consumption, with half the number of inductors compared to existing Class-D amplifiers. The TAS67xx-Q1 family of devices, which integrates real-time load diagnostics required by OEMs, helps engineers simplify designs, decrease costs, and increase efficiency without sacrificing audio quality.
The post New edge AI-enabled radar sensor and automotive audio processors from TI empower automakers to reimagine in-cabin experiences appeared first on ELE Times.