Feed aggregator
Aledia makes available micro-LED technology for immersive AR
Network Switch Meaning, Types, Working, Benefits & Applications
A network switch is a hardware device that connects devices within a Local Area Network (LAN) to enable communication. It operates at the data link layer (Layer 2) or network layer (Layer 3) of the OSI model and uses MAC or IP addresses to forward data packets to the appropriate device. Unlike hubs, switches efficiently direct traffic to specific devices rather than broadcasting to all network devices.
Types of Network Switch
- Unmanaged Switch:
- Basic plug-and-play device with no configuration options.
- Suitable for small or home networks.
- Managed Switch:
- Allows advanced configuration, monitoring, and control.
- Used in enterprise networks for better security and performance management.
- Smart Switch:
- A middle ground between unmanaged and managed switches.
- Provides limited management features for smaller networks.
- PoE Switch (Power over Ethernet):
- Delivers power to connected devices such as VoIP phones and IP cameras.
- Layer 3 Switch:
- Integrates routing functions with Layer 2 switching capabilities.
- Ideal for larger, more complex networks.
How Does a Network Switch Work?
A network switch operates by analyzing incoming data packets, determining their destination addresses, and forwarding them to the correct port. It maintains a MAC address table that maps devices to specific ports, ensuring efficient communication.
Steps in operation:
- Receives data packets.
- Reads the packet’s destination MAC or IP address.
- Matches the address with its internal table to find the correct port.
- Forwards the packet only to the intended recipient device.
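In code, this learn-and-forward loop is compact. The following Python sketch is illustrative only (the class and field names are invented, not any vendor's firmware): it learns source MAC addresses as frames arrive and floods unknown destinations, which is exactly the behavior that separates a switch from a hub.
```python
# Minimal sketch of Layer 2 learn-and-forward logic (illustrative names,
# not any vendor's implementation).

class SwitchFabric:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}  # MAC address -> port number

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Learn: remember which port the source MAC was seen on.
        self.mac_table[src_mac] = in_port
        # Forward: send to the known port only, or flood if unknown.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]  # unicast to a single port
        return [p for p in range(self.num_ports) if p != in_port]  # flood

switch = SwitchFabric(num_ports=4)
print(switch.handle_frame("aa:aa", "bb:bb", in_port=0))  # unknown dst -> flood [1, 2, 3]
print(switch.handle_frame("bb:bb", "aa:aa", in_port=2))  # learned dst -> [0]
```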
Network Switch Uses & Applications
- Home Networks: Connect devices like PCs, printers, and smart home systems.
- Enterprise Networks: Facilitate communication across servers, workstations, and other IT infrastructure.
- Data Centers: Support high-speed communication and load balancing.
- Industrial Applications: Manage devices in IoT and automation systems.
- Surveillance Systems: Power and connect IP cameras via PoE switches.
How to Use a Network Switch
- Select the Right Switch: Choose based on your network size and requirements (e.g., unmanaged for simple networks, managed for complex ones).
- Connect Devices: Insert Ethernet cables from your devices into the available ports on the switch.
- Connect to a Router: Link the switch to a router for internet access.
- Power On the Switch: If using PoE, ensure the switch supports the connected devices.
- Configure (if applicable): For managed switches, use the web interface or CLI to set up VLANs, QoS, or security settings.
Network Switch Advantages
- Improved Network Efficiency: Directs traffic only to the intended recipient device.
- Scalability: Allows multiple devices to connect and communicate.
- Enhanced Performance: Supports higher data transfer rates and reduces network congestion.
- Security Features: Managed switches offer advanced security controls.
- Flexibility: PoE switches provide power to connected devices, removing the requirement for individual power sources.
The post Network Switch Meaning, Types, Working, Benefits & Applications appeared first on ELE Times.
eSIM Meaning, Types, Working, Card, Architecture & Uses
An eSIM (embedded SIM) is a SIM built directly into a device’s hardware, eliminating the need for a physical SIM card. It enables users to activate a mobile network plan entirely through software. This technology simplifies connectivity and is gaining popularity in smartphones, wearables, IoT devices, and automotive applications.
How Does eSIM Work?
An eSIM functions through a reprogrammable SIM chip that is built into the device’s hardware. In contrast to traditional SIM cards that require physical replacement, eSIMs can be activated or reconfigured using software. Mobile network operators (MNOs) provide QR codes or activation profiles that users scan or download to enable network connectivity.
The process typically involves the following steps:
1. Provisioning: The user receives a QR code or activation data from the MNO.
2. Activation: The eSIM-capable device connects to the MNO’s server to download and install the profile.
3. Switching Networks: Users can store multiple profiles and switch between them as needed.
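This profile lifecycle can be modeled in a few lines. The Python sketch below is a simplified illustration; the Profile and EUICC classes and their fields are hypothetical stand-ins, not the GSMA data model.
```python
# Illustrative model of an eUICC holding several downloaded profiles.
# Class and field names are hypothetical, not the GSMA SGP.22 API.

from dataclasses import dataclass, field

@dataclass
class Profile:
    iccid: str        # profile identifier
    operator: str     # issuing mobile network operator
    enabled: bool = False

@dataclass
class EUICC:
    profiles: dict = field(default_factory=dict)

    def install(self, profile: Profile):
        # Step 2: profile downloaded from the MNO's server and stored.
        self.profiles[profile.iccid] = profile

    def enable(self, iccid: str):
        # Step 3: only one profile is active at a time; switch by ID.
        for p in self.profiles.values():
            p.enabled = (p.iccid == iccid)

euicc = EUICC()
euicc.install(Profile("8901-HOME", "HomeTelecom"))
euicc.install(Profile("8901-ROAM", "TravelMobile"))
euicc.enable("8901-ROAM")  # switch networks without touching hardware
```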
eSIM Architecture
The architecture of an eSIM integrates hardware and software components to ensure seamless connectivity:
1. eUICC (Embedded Universal Integrated Circuit Card): This is the hardware component that houses the eSIM profile.
2. Profile Management: eSIM profiles are managed remotely by MNOs using Over-the-Air (OTA) technology.
3. Security Framework: Ensures secure provisioning, activation, and data transmission.
4. Interoperability Standards: Governed by GSMA specifications to ensure compatibility across devices and networks.
Types of eSIM
1. Consumer eSIM: Designed for smartphones, tablets, and wearables to provide seamless personal connectivity.
2. M2M (Machine-to-Machine) eSIM: Designed for IoT devices to enable seamless global connectivity.
3. Automotive eSIM: Implemented in connected cars for telematics, navigation, and emergency services.
eSIM Uses & Applications
1. Smartphones and Wearables:
– Enables dual SIM functionality.
– Makes it easy to switch between carriers without needing to replace SIM cards.
2. IoT Devices:
– Powers smart meters, trackers, and sensors with global connectivity.
3. Automotive:
– Supports connected car applications like real-time navigation, diagnostics, and emergency calls.
4. Travel:
– Allows travelers to activate local plans without buying physical SIMs.
5. Enterprise:
– Facilitates centralized management of employee devices.
How to Use eSIM
1. Verify Device Compatibility: Confirm that the device is equipped with eSIM support.
2. Obtain an eSIM Plan: Contact an MNO to get an eSIM-enabled plan.
3. Activate the eSIM:
– Use the QR code supplied by the network operator for activation.
– Adhere to the displayed prompts to download and set up the eSIM profile.
4. Manage Profiles: Use the device settings to switch between profiles or add new ones.
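For illustration: the QR code in step 3 typically encodes an activation string of the form LPA:1$<SM-DP+ address>$<matching ID> (the GSMA SGP.22 convention). Below is a minimal Python parser for that format; the server address and matching ID in the example are invented.
```python
# Sketch of parsing an eSIM activation code as encoded in QR codes
# (GSMA SGP.22 style). The example values are made up.

def parse_activation_code(code: str) -> dict:
    if not code.startswith("LPA:1$"):
        raise ValueError("not an eSIM activation code")
    fields = code.split("$")
    return {
        "smdp_address": fields[1],  # server the device contacts for the profile
        "matching_id": fields[2],   # identifies the prepared profile
    }

print(parse_activation_code("LPA:1$smdp.example.com$ABC-123-XYZ"))
# {'smdp_address': 'smdp.example.com', 'matching_id': 'ABC-123-XYZ'}
```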
Advantages of eSIM
1. Convenience: Removes the dependency on physical SIM cards for connectivity.
2. Flexibility: Supports multiple profiles, enabling seamless switching between carriers.
3. Compact Design: Saves space in devices, allowing for sleeker designs or additional features.
4. Remote Provisioning: Simplifies activation and profile management.
5. Eco-Friendly: Reduces plastic waste from physical SIM cards.
Disadvantages of eSIM
1. Limited Compatibility: eSIM technology is not universally supported across all devices.
2. Dependency on MNOs: Activation relies on operator support.
3. Security Concerns: Potential vulnerability during OTA provisioning.
4. Complexity in Migration: Switching devices requires transferring eSIM profiles, which can be less straightforward than swapping physical SIMs.
What is an eSIM Card?
An eSIM card is a built-in chip integrated into the device’s hardware, functioning as a replacement for conventional SIM cards. It operates electronically, allowing devices to connect to networks without physical card insertion.
eSIM Module for IoT
In IoT, eSIM modules are integral for providing reliable, scalable, and global connectivity. They:
– Enable remote management of IoT devices.
– Streamline logistics by removing the necessity for region-specific SIM cards.
– Provide a robust solution for devices operating in diverse environments.
Conclusion
eSIM technology represents a significant step forward in connectivity, offering unmatched flexibility and convenience. From smartphones to IoT devices, its applications are broad and transformative. While it has limitations, advancements in compatibility and security are likely to drive its widespread adoption in the coming years.
The post eSIM Meaning, Types, Working, Card, Architecture & Uses appeared first on ELE Times.
AI at the edge: It’s just getting started

Artificial intelligence (AI) is expanding rapidly to the edge. This generalization conceals many more specific advances—many kinds of applications, with different processing and memory requirements, moving to different kinds of platforms. One of the most exciting instances, happening soonest and with the most impact on users, is the appearance of TinyML inference models embedded at the extreme edge—in smart sensors and small consumer devices.
Figure 1 TinyML inference models are being embedded at the extreme edge, in smart sensors and small consumer devices. Source: PIMIC
This innovation is enabling valuable functions such as keyword spotting (detecting spoken keywords) or performing environmental-noise cancellation (ENC) with a single microphone. Users treasure the lower latency, reduced energy consumption, and improved privacy.
Local execution of TinyML models depends on the convergence of two advances. The first is the TinyML model itself. While most of the world’s attention is focused on enormous—and still growing—large language models (LLMs), some researchers are developing really small neural-network models built around hundreds of thousands of parameters instead of millions or billions. These TinyML models are proving very capable on inference tasks with predefined inputs and a modest number of inference outputs.
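To make "hundreds of thousands of parameters" concrete, here is an illustrative PyTorch sketch of a keyword-spotting-sized network. The layer sizes and the 12-class output are assumptions for the sake of example, not any vendor's actual model.
```python
# Illustrative TinyML-scale network; layer sizes are invented, not any
# vendor's model. Prints a parameter count around 240k.

import torch.nn as nn

model = nn.Sequential(  # e.g., input: 1 x 40 MFCC features x 101 frames
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 12),  # e.g., 10 keywords + "silence" + "unknown"
)
print(sum(p.numel() for p in model.parameters()))  # ~240,000 parameters
```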
The second advance is in highly efficient embedded architectures for executing these tiny models. Instead of a server board or a PC, think of a die small enough to go inside an earbud and efficient enough to not harm battery life.
Several approaches
There are many important tasks involved in neural-network inference, but the computing workload is dominated by matrix multiplication operations. The key to implementing inference at the extreme edge is to perform these multiplications with as little time, power, and silicon area as possible. The key to launching a whole successful product line at the edge is to choose an approach that scales smoothly, in small increments, across the whole range of applications you wish to cover.
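Whatever the platform, the inner loop being fought over is the low-precision multiply-accumulate. Here is a NumPy sketch, with illustrative shapes, of one layer's int8 matrix-vector product accumulated in int32:
```python
# The operation every approach must accelerate: low-precision MACs.
# Shapes are illustrative; int32 accumulation avoids int8 overflow.

import numpy as np

rng = np.random.default_rng(0)
weights = rng.integers(-128, 128, size=(64, 256), dtype=np.int8)   # one layer's parameters
activations = rng.integers(-128, 128, size=(256,), dtype=np.int8)  # incoming input vector

acc = weights.astype(np.int32) @ activations.astype(np.int32)
print(acc.shape)  # (64,) -- one MAC-reduced output per neuron
```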
It is the nature of the technology that models get larger over time.
System designers are taking different approaches to this problem. For the tiniest of TinyML models in applications that are not particularly sensitive to latency, a simple microcontroller core will do the job. But even for small models, MCUs with their constant fetching, loading, and storing are not an energy-efficient approach. And scaling to larger models may be difficult or impossible.
For these reasons many choose DSP cores to do the processing. DSPs typically have powerful vector-processing subsystems that can perform hundreds of low-precision multiply-accumulate operations per cycle. They employ automated load/store and direct memory access (DMA) operations cleverly to keep the vector processors fed. And often DSP cores come in scalable families, so designers can add throughput by adding vector processor units within the same architecture.
But this scaling is coarse-grained, and at some point, it becomes necessary to add a whole DSP core or more to the design, and to reorganize the system as a multicore approach. And, not unlike the MCU, the DSP consumes a great deal of energy in shuffling data between instruction memory and instruction cache and instruction unit, and between data memory and data cache and vector registers.
For even larger models and more latency-sensitive applications, designers can turn to dedicated AI accelerators. These devices, generally either based on GPU-like SIMD processor arrays or on dataflow engines, provide massive parallelism for the matrix operations. They are gaining traction in data centers, but their large size, their focus on performance over power, and their difficulty in scaling down significantly make them less relevant for the TinyML world at the extreme edge.
Another alternative
There is another architecture that has been used with great success to accelerate matrix operations: processing-in-memory (PiM). In this approach, processing elements, rather than being clustered in a vector processor or pipelined in a dataflow engine, are strategically dispersed at intervals throughout the data memory. This has important benefits.
First, since processing units are located throughout the memory, processing is inherently highly parallel. And the degree of parallel execution scales smoothly: the larger the data memory, the more processing elements it will contain. The architecture need not change at all.
In AI processing, 90–95% of the time and energy is consumed by matrix multiplication, as every parameter in a layer must be multiplied with the activations passed on to the next layer. PiM addresses this inefficiency by eliminating the constant data movement between memory and processors.
By storing AI model weights directly within memory elements and performing matrix multiplication inside the memory itself as input data arrives, PiM significantly reduces data transfer overhead. This approach not only enhances energy efficiency but also improves processing speed, delivering lower latency for AI computations.
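A toy cost model makes the point. In the Python sketch below the parameter and activation counts are invented for illustration: a conventional core re-fetches every weight across the memory bus on each inference, while a PiM array keeps weights resident and moves only activations.
```python
# Toy data-movement comparison (numbers are illustrative, not measured).

PARAMS = 200_000           # TinyML-scale model weights
INPUTS, OUTPUTS = 256, 64  # per-inference activations

def words_moved_conventional():
    # Weights and inputs cross the bus, outputs are written back,
    # on every single inference.
    return PARAMS + INPUTS + OUTPUTS

def words_moved_pim():
    # Weights stay inside the memory array; only activations move.
    return INPUTS + OUTPUTS

print(words_moved_conventional() / words_moved_pim())  # 626.0 -> ~626x fewer words moved
```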
To fully leverage the benefits of PiM, a carefully designed neural network processor is crucial. This processor must be optimized to seamlessly interface with PiM memory, unlocking its full performance potential and maximizing the advantages of this innovative technology.
Design case study
The theoretical advantages of PiM are well established for TinyML systems at the network edge. Take the case of Listen VL130, a voice-activated wake-word inference chip, which is also PIMIC’s first product. Fabricated on TSMC’s standard 22-nm CMOS process, the chip’s always-on voice-detection circuitry consumes 20 µA.
This circuit triggers a PiM-based wake-word inference engine that consumes only 30 µA when active. In operation, that comes out to a 17-times reduction in power compared to an equivalent DSP implementation. And the chip is tiny, easily fitting inside a microphone package.
Figure 2 The Listen VL130, shown connected to an external MCU in the diagram above, is an ultra-low-power keyword-spotting AI chip designed for edge devices. Source: PIMIC
PIMIC’s second chip, Clarity NC100, takes on a more ambitious TinyML model: single-microphone ENC. Consuming less than 200 µA, which is up to 30 times more efficient than a DSP approach, it’s also small enough for in-microphone mounting. It is scheduled for engineering samples in January 2025.
Both chips depend for their efficiency upon a TinyML model fitting entirely within an SRAM-based PiM array. But this is not the only way to exploit PiM architectures for AI, nor is it anywhere near the limits of the technology.
LLMs at the far edge?
One of today’s undeclared grand challenges is to bring generative AI—small language models (SLMs) and even some LLMs—to edge computing. And that’s not just to a powerful PC with AI extensions, but to actual edge devices. The benefit to applications would be substantial: generative AI apps would have greater mobility while being impervious to loss of connectivity. They could have lower, more predictable latency; and they would have complete privacy. But compared to TinyML, this is a different order of challenge.
To produce meaningful intelligence, LLMs require training on billions of parameters. At the same time, the demand for AI inference compute is set to surge, driven by the substantial computational needs of agentic AI and advanced text-to-video generation models like Sora and Veo 2. So, achieving significant advancements in performance, power efficiency, and silicon area (PPA) will necessitate breakthroughs in overcoming the memory wall—the primary obstacle to delivering low-latency, high-throughput solutions.
Figure 3 A view of the layout of the Listen VL130 chip, which is capable of processing 32 wake words and keywords while operating in the tens of microwatts, delivering energy efficiency without compromising performance. Source: PIMIC
At this technology crossroads, PiM technology is still important, but to a lesser degree. With these vastly larger matrices, the PiM array acts more like a cache, accelerating matrix multiplication piecewise. But much of the heavy lifting is done outside the PiM array, in a massively parallel dataflow architecture. And there is a further issue that must be resolved.
At the edge, in addition to facilitating model execution, it is of primary importance to resolve the bandwidth and energy issues that come with scaling to massive memory sizes. Meeting all these challenges can improve an edge chip’s power-performance-area efficiency by more than 15 times.
PIMIC’s studies indicate that models with hundreds of millions to tens of billions of parameters can in fact be executed on edge devices. It will require 5-nm or 3-nm process technology, PiM structures, and most of all a deep understanding of how data moves in generative-AI models and how it interacts with memory.
PiM is indeed a silver bullet for TinyML at the extreme edge. But it’s just one tool, along with dataflow expertise and deep understanding of model dynamics, in reaching the point where we can in fact execute SLMs and some LLMs effectively at the far edge.
Subi Krishnamuthy is the founder and CEO of PIMIC, an AI semiconductor company developing processing-in-memory (PiM) technology for ultra-low-power AI solutions.
Related Content
- Getting a Grasp on AI at the Edge
- Tiny machine learning brings AI to IoT devices
- Why MCU suppliers are teaming up with TinyML platforms
- Open-Source Development Comes to Edge AI/ML Applications
- Edge AI: The Future of Artificial Intelligence in embedded systems
The post AI at the edge: It’s just getting started appeared first on EDN.
Aehr receives initial FOX-XP system order from GaN power semi supplier
Keysight Expands Novus Portfolio with Compact Automotive Software Defined Vehicle Test Solution
Keysight Technologies announces the expansion of its Novus portfolio with the Novus mini automotive, a quiet, small form-factor pluggable (SFP) network test platform that addresses the needs of automotive network engineers as they deploy software defined vehicles (SDVs). Keysight is expanding the capability of the Novus platform by offering a next-generation vehicle interface that includes 10BASE-T1S and multi-gigabit BASE-T1 support for 100 megabits per second (Mbit/s), 2.5 gigabits per second (Gbit/s), 5 Gbit/s, and 10 Gbit/s. Keysight’s SFP architecture provides a flexible platform to mix and match speeds for each port, with modules plugging into existing cards rather than requiring a separate card, as many current test solutions do.
As vehicles move to zonal architectures, connected devices are a critical operational component. As a result, any system failures caused by connectivity and network issues can impact safety and potentially create life-threatening situations. To mitigate this risk, engineers must thoroughly test the conformance and performance of every system element before deploying them.
Key benefits of the Novus mini automotive platform include:
- Streamlines testing – The combined solution offers both traffic generation and protocol testing on one platform, letting engineers optimize the testing process, save time, and simplify workflows without requiring multiple tools. It also accelerates troubleshooting and facilitates efficient remediation of issues.
- Helps lower costs and simplify wiring – Supports native automotive interfaces BASE-T1 and BASE-T1S that help lower costs and simplify wiring for automotive manufacturers, reducing the amount of required cabling and connectors. BASE-T1 and BASE-T1S offer a scalable and flexible single-pair Ethernet solution that can adapt to different vehicle models and configurations. These interfaces support higher data rates compared to traditional automotive communication protocols for faster, more efficient data transmission as vehicles become more connected.
- Compact, quiet, and affordable – Features the smallest footprint in the industry with outstanding cost per port, and ultra-quiet, fan-less operation.
- Validates layers 2-7 in complex automotive networks – Provides comprehensive performance and conformance testing that covers everything from data link and network protocols to transport, session, presentation, and application layers. Validating the interoperability of disparate components across layers is necessary in complex automotive networks where multiple systems must seamlessly work together.
- Protects networks from unauthorized access – Supports full line rate and automated conformance testing for TSN 802.1AS 2011/2020, 802.1Qbv, 802.1Qav, 802.1CB, and 802.1Qci. The platform tests critical timing standards for automotive networking, as precise timing and synchronization are crucial for the reliable and safe operation of ADAS and autonomous vehicle technologies. Standards like 802.1Qci help protect networks from unauthorized access and faulty or unsecure devices.
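As background on why these timing tests matter: the core of PTP-style synchronization, on which 802.1AS builds, is a simple offset-and-delay calculation over exchanged timestamps. The Python sketch below uses invented timestamps and assumes a symmetric link.
```python
# Classic PTP-style offset/delay math (timestamps invented; a symmetric
# link is assumed, which is what makes the algebra work).

t1 = 1_000_000  # master sends Sync (master clock, ns)
t2 = 1_000_650  # slave receives Sync (slave clock, ns)
t3 = 1_002_000  # slave sends Delay_Req (slave clock, ns)
t4 = 1_002_150  # master receives Delay_Req (master clock, ns)

link_delay   = ((t2 - t1) + (t4 - t3)) / 2  # 400.0 ns one-way delay
clock_offset = ((t2 - t1) - (t4 - t3)) / 2  # 250.0 ns slave ahead of master

print(link_delay, clock_offset)
```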
Ram Periakaruppan, Vice President and General Manager, Network Test & Security Solutions, Keysight, said: “The Novus mini automotive provides real-world validation and automated conformance testing for the next generation of software defined vehicles. Our customers must trust that their products consistently meet quality standards and comply with regulatory requirements to avoid costly fines and penalties. The Novus mini allows us to deliver this confident assurance with a compact, integrated network test solution that can keep pace with constant innovation.”
Keysight will demonstrate its portfolio of test solutions for automotive networks, including the Novus mini automotive, at CES, January 7-10, in Las Vegas, NV, West Hall, booth 4664 (inside the Intrepid Controls booth).
The post Keysight Expands Novus Portfolio with Compact Automotive Software Defined Vehicle Test Solution appeared first on ELE Times.