Feed aggregator

CHIPX to establish 8-inch GaN-on-SiC wafer fab in Malaysia

Semiconductor today - Mon, 12/15/2025 - 11:41
Semiconductor and photonics manufacturer CHIPX of Dublin, Ireland plans to establish an 8-inch wafer fabrication facility in Malaysia, the first of its kind in the ASEAN (Association of Southeast Asian Nations) region. The facility will introduce gallium nitride on silicon carbide (GaN/SiC) manufacturing, driving Malaysia’s entry into front-end semiconductor production and accelerating domestic capability in photonics, high-bandwidth optical interconnects, and advanced materials engineering essential to next-generation AI and high-performance compute systems...

What Are Memory Chips—and Why They Could Drive TV Prices Higher From 2026

ELE Times - Mon, 12/15/2025 - 09:47

As the rupee continues to depreciate, crossing the 90 mark, India's electronics manufacturing industry is set to take a blow. India imports nearly 70 percent of the components used in TVs, whether open-cell panels, glass substrates, or memory chips, so the continued rise in memory chip prices combined with a rupee beyond 90 is set to push TV prices up by around 3-4 percent by January 2026. The situation is further aggravated by rising demand for High-Bandwidth Memory (HBM) for AI servers, which is driving a global chip shortage.

Chipmakers are also focusing largely on high-profit AI chips, reducing the supply of chips for legacy devices like TVs. Like any high-end device, smart TVs are heavily dependent on memory chips for everything from storing preferences to running services.

What are Memory Chips? 

Memory chips are integrated circuits used in TVs to store firmware, settings, apps, and user data. They primarily use Electrically Erasable Programmable Read-Only Memory (EEPROM) for basic adjustments (like picture settings) and modern eMMC flash memory (as in smartphones) for the smart TV OS, apps, and video buffering. EEPROM retains data even when power is off, which makes it essential for storing configuration settings and the system's fundamental settings.

Why is it so important? 

Because these chips store small but crucial data needed for a smart TV to function, they act as the brains of the TV's memory. Most importantly, they not only hold data but also ensure that the TV starts up correctly, storing its operating system and software. In essence, this is integrated memory that, like any memory, runs smart features, remembers preferences, and helps display content smoothly.

7–10% Price Hike

According to media reports, Super Plastronics Pvt Ltd (SPPL)—a TV manufacturing company that holds licences for several global brands, including Thomson, Kodak and Blaupunkt—has said that memory chip prices have surged by nearly 500 per cent over the past three months. The company’s CEO, Avneet Singh Marwah, added that television prices could rise by 7–10 per cent from January, driven largely by the memory chip crunch and the impact of a depreciating rupee.

According to a recent Counterpoint Research report, India’s smart TV shipments fell 4 per cent year-on-year in Q2 2025, weighed down by saturation in the smaller-screen segment, a lack of fresh demand drivers, and subdued consumer spending.

The post What Are Memory Chips—and Why They Could Drive TV Prices Higher From 2026 appeared first on ELE Times.

Anritsu & HEAD Launch Acoustic Evaluation Solution for Next-Gen Automotive eCall Systems

ELE Times - Mon, 12/15/2025 - 07:15

ANRITSU CORPORATION and HEAD acoustics have jointly launched an advanced acoustic evaluation solution for next-generation automotive emergency call systems (“NG eCall”).

The new solution is compliant with ITU-T Recommendation P.1140, enabling precise assessment of voice communication quality between vehicle occupants and Public Safety Answering Points (PSAPs), supporting faster and more effective emergency response.

With NG eCall over 4G (LTE) and 5G (NR) now mandatory in Europe as of 1 January, 2026, ensuring high-quality, low-latency voice communication during vehicle emergencies has become essential. After a collision, calls are conducted hands-free inside the vehicle cabin, where high noise levels, echoes, and other acoustic challenges can significantly degrade speech clarity. Reliable voice performance is therefore critical to accurately conveying the situation and enabling rapid rescue operations.

The solution integrates Anritsu’s MD8475B (for 4G LTE base-station simulation) or MT8000A (for both 4G LTE and 5G NR simulation) with HEAD acoustics’ ACQUA voice quality analysis platform. This combination enables comprehensive evaluation of transmitted (microphone) and received (speaker) audio under a wide range of realistic operating conditions.

Example Evaluation Scenarios
• Echo and double-talk situations where speaker output re-enters the microphone or simultaneous speech may affect intelligibility
• Cabin noise simulations representing real driving environments, including road, wind, and engine noise
By delivering a reliable and repeatable approach to voice-quality assessment, Anritsu reinforces its commitment to supporting automotive manufacturers and suppliers in the development of NG eCall and advanced in-vehicle audio systems, contributing to a safer and more secure mobility ecosystem.

The post Anritsu & HEAD Launch Acoustic Evaluation Solution for Next-Gen Automotive eCall Systems appeared first on ELE Times.

Siren circuit I made

Reddit:Electronics - Sun, 12/14/2025 - 11:35
Siren circuit I made

Last year at a social get-together, I got immensely bored and heard a fire truck siren in the distance. I began brainstorming ways to model the ramping-up and ramping-down of the Q-siren and came up with this simple VCO design and a large capacitor. Like the physical sirens, the circuit has a power button (to ramp up the frequency) and a brake button (to quickly reduce the frequency).

A fun side effect of the way I designed the controls is that when both buttons are depressed, the steady state frequency falls somewhere lower than it otherwise would, which mimics what would probably happen if you tried accelerating the turbine while the brake was engaged. (I have never heard this actually happen, but it’s a fun thought.)
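
As an illustration of that two-button behaviour (this is not the poster's actual schematic), a minimal first-order sketch assumes the control capacitor is charged toward the supply through one resistor while "power" is held and discharged through another while "brake" is held, with a linear VCO mapping the capacitor voltage to pitch. All component values are assumptions chosen for illustration.

```python
VCC = 9.0          # supply voltage, volts (assumed)
R_CHARGE = 22e3    # charge path resistance, ohms (assumed)
R_BRAKE = 10e3     # brake (discharge) path resistance, ohms (assumed)
C = 470e-6         # "large capacitor" controlling the ramp, farads (assumed)
K_VCO = 120.0      # VCO gain, hertz per volt (assumed linear VCO)

def steady_pitch(power: bool, brake: bool, seconds: float = 120.0, dt: float = 1e-3) -> float:
    """Integrate the control-capacitor voltage and return the final VCO frequency in Hz."""
    v = 0.0
    for _ in range(int(seconds / dt)):
        i = 0.0
        if power:
            i += (VCC - v) / R_CHARGE   # charge current into the capacitor
        if brake:
            i -= v / R_BRAKE            # discharge current out of the capacitor
        v += (i / C) * dt
    return K_VCO * v

print(f"power only:    {steady_pitch(True, False):6.1f} Hz")  # ramps toward K_VCO * VCC
print(f"power + brake: {steady_pitch(True, True):6.1f} Hz")   # settles lower, as described above
```

With both buttons held, this model settles at the divider voltage VCC * R_BRAKE / (R_CHARGE + R_BRAKE), which is why the combined steady-state pitch lands below the power-only value.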

I’m sad that I’m not allowed to post a video on here, but if someone asks for one I’ll figure out a way to share it.

submitted by /u/Nissingmo

Hot LEDs glow on their own

Reddit:Electronics - Sun, 12/14/2025 - 11:22
Hot LEDs glow on their own

These are on aluminum boards that I reflow with a hot plate. Just setting down a raw LED on the hot plate causes the glow to begin and ramp up as it gets hotter, and stops glowing when you take it off the heat as it cools. The boards next to this one didn't glow because they had already cooled down, so I know it isn't from a glow in the dark effect from the building lights.

I did not test how long it glows for. I would expect it to fade out eventually. Maybe the heat just lets it drop to a lower energy state and it has to recharge from ambient light. Like glow in the dark, but with heat required.

submitted by /u/Strostkovy

I don't think it's supposed to look like this

Reddit:Electronics - Sat, 12/13/2025 - 23:37
I don't think it's supposed to look like this

The temperature sensor of the heating station gave up and now it heats up indefinitely. Perfect for making your PCBs very crispy and crunchy.

submitted by /u/Mmichex

Intend to buy huge lot of electronic components.

Reddit:Electronics - Sat, 12/13/2025 - 20:40
Intend to buy huge lot of electronic components.

I am offered a huge lot of electronic components from a former TV repair shop that was active from 1973 to 2015. Resistors, capacitors, transistors, ICs and many other components. HV transformers (TV), switches, knobs, inductors, subassemblies, ... Most of it is sorted in over 40 Raaco bins, and the rest is partially sorted/unsorted. They are asking 400 euro and I have to decide tomorrow by noon. I think I will buy it, but it will take time to move it all and sort it again.

submitted by /u/Few_Hornet5864

Weekly discussion, complaint, and rant thread

Reddit:Electronics - Sat, 12/13/2025 - 18:00

Open to anything, including discussions, complaints, and rants.

Sub rules do not apply, so don't bother reporting incivility, off-topic, or spam.

Reddit-wide rules do apply.

To see the newest posts, sort the comments by "new" (instead of "best" or "top").

submitted by /u/AutoModerator

Vintage white ceramic ICs are absolutely beautiful!

Reddit:Electronics - Sat, 12/13/2025 - 15:40
Vintage white ceramic ICs are absolutely beautiful!

Black thermoset resin packaging is probably far superior from an industrial standpoint, but I’m in love with the beauty of white ceramic IC packages from around the 1970s.

submitted by /u/NEET_FACT0RY

Professor Viktor Demydovych Romanenko has passed away

News - Sat, 12/13/2025 - 13:32
Professor Viktor Demydovych Romanenko has passed away

It is with deep sorrow that we announce the untimely passing of an outstanding scientist and educator, Professor Viktor Demydovych Romanenko, Deputy Director for Academic and Research Affairs of the Educational and Scientific Institute for Applied System Analysis.

The 1972 LEDs are Red

Reddit:Electronics - Sat, 12/13/2025 - 08:00
The 1972 LEDs are Red

This is in response to "light them up" from mr. blueball. Finally figured out how to light it up with an AA battery. These are red LEDs. Please forgive me for any sacred electronic transgressions I may have committed in making this picture; I did not intend to harm or decrease the value of these amazing objects. I am a biologist dammit, not an engineer. In 1972, I visited my father's lab. After turning off the lights, he started turning on rows and rows of red, green and yellow LEDs. It was an amazing sight. Thank you to all commenters for the great information and feedback on my first post titled: Interesting old Monsanto LED's 1972.

submitted by /u/DuffmeisterBee

Building high-performance robotic vision with GMSL

EDN Network - Fri, 12/12/2025 - 15:00

Robotic systems depend on advanced machine vision to perceive, navigate, and interact with their environment. As both the number and resolution of cameras grow, the demand for high-speed, low-latency links capable of transmitting and aggregating real-time video data has never been greater.

Gigabit Multimedia Serial Link (GMSL), originally developed for automotive applications, is emerging as a powerful and efficient solution for robotic systems. GMSL transmits high-speed video data, bidirectional control signals, and power over a single cable, offering long cable reach and deterministic microsecond-level latency with an extremely low bit error rate (BER). It simplifies the wiring harness and reduces the total solution footprint, making it ideal for vision-centric robots operating in dynamic and often harsh environments.

The following sections discuss where and how cameras are used in robotics, the data and connectivity challenges these applications face, and how GMSL can help system designers build scalable, reliable, and high-performance robotic platforms.

Where are cameras used in robotics?

Cameras are at the heart of modern robotic perception, enabling machines to understand and respond to their environment in real time. Whether it’s a warehouse robot navigating aisles, a robotic arm sorting packages, or a service robot interacting with people, vision systems are critical for autonomy, automation, and interaction.

These cameras are not only diverse in function but also in form—mounted on different parts of the robot depending on the task and tailored to the physical and operational constraints of the platform (see Figure 1).

Figure 1 An example of a multimodal robotic vision system enabled by GMSL. Source: Analog Devices

Autonomy

In autonomous robotics, cameras serve as the eyes of the machine, allowing it to perceive its surroundings, avoid obstacles, and localize itself within an environment.

For mobile robots—such as delivery robots, warehouse shuttles, or agricultural rovers—this often involves a combination of wide field-of-view cameras placed at the corners or edges of the robot. These surround-view systems provide 360° awareness, helping the robot navigate complex spaces without collisions.

Other autonomy-related applications use cameras facing downward or upward to read fiducial markers on floors, ceilings, or walls. These markers act as visual signposts, allowing robots to recalibrate their position or trigger specific actions as they move through structured environments like factories or hospitals.

In more advanced systems, stereo vision cameras or time of flight (ToF) cameras are placed on the front or sides of the robot to generate three-dimensional maps, estimate distances, and aid in simultaneous localization and mapping (SLAM).

The location of these cameras is often dictated by the robot’s size, mobility, and required field of view. On small sidewalk delivery robots, for example, cameras might be tucked into recessed panels on all four sides. On a drone, they’re typically forward-facing for navigation and downward-facing for landing or object tracking.

Automation

In industrial automation, vision systems help robots perform repetitive or precision tasks with speed and consistency. Here, the camera might be mounted on a robotic arm—right next to a gripper or end-effector—and the system can visually inspect, locate, and manipulate objects with high accuracy. This is especially important in pick-and-place operations, where identifying the exact position and orientation of a part or package is essential.

Other times, cameras are fixed above a work area—mounted on a gantry or overhead rail—to monitor items on a conveyor or to scan barcodes. In warehouse environments, mobile robots use forward-facing cameras to detect shelf labels, signage, or QR codes, enabling dynamic task assignments or routing changes.

Some inspection robots, especially those used in infrastructure, utilities, or heavy industry, carry zoom-capable cameras mounted on masts or articulated arms. These allow them to capture high-resolution imagery of weld seams, cable trays, or pipe joints—tasks that would be dangerous or time-consuming for humans to perform manually.

Human interaction

Cameras also play a central role in how robots engage with humans. In collaborative manufacturing, healthcare, or service industries, robots need to understand gestures, recognize faces, and maintain a sense of social presence. Vision systems make this possible.

Humanoid and service robots often have cameras embedded in their head or chest, mimicking the human line of sight to enable natural interaction. These cameras help the robot interpret facial expressions, maintain eye contact, or follow a person’s gaze. Some systems use depth cameras or fisheye lenses to track body movement or detect when a person enters a shared workspace.

In collaborative robot (cobot) scenarios, where humans and machines work side by side, machine vision is used to ensure safety and responsiveness. The robot may watch for approaching limbs or tools, adjusting its behavior to avoid collisions or pause work if someone gets too close.

Even in teleoperated or semi-autonomous systems, machine vision remains key. Front-mounted cameras stream live video to remote operators, enabling real-time control or inspection. Augmented reality overlays can be added to this video feed to assist with tasks like remote diagnosis or training.

Across all these domains, the camera’s placement—whether on a gripper, a gimbal, the base, or the head of the robot—is a design decision tied to the robot’s function, form factor, and environment. As robotic systems grow more capable and autonomous, the role of vision will only deepen, and camera integration will become even more sophisticated and essential.

Robotics vision challenges

As vision systems become the backbone of robotic intelligence, opportunity and complexity grow in parallel. High-performance cameras unlock powerful capabilities—enabling real-time perception, precise manipulation, and safer human interaction—but they also place growing demands on system architecture.

It’s no longer just about moving large volumes of video data quickly. Many of today’s robots must make split-second decisions based on multimodal sensor input, all while operating within tight mechanical envelopes, managing power constraints, avoiding electromagnetic interference (EMI), and maintaining strict functional safety in close proximity to people.

These challenges are compounded by the environments robots face. A warehouse robot may shuttle in and out of freezers, enduring sudden temperature swings and condensation. An agricultural rover may crawl across unpaved fields, absorbing constant vibration and mechanical shock. Service robots in hospitals or public spaces may encounter unfamiliar, visually complex settings, where they must quickly adapt to safely navigate around people and obstacles.

Solve the challenges with GMSL

GMSL is uniquely positioned to meet the demands of modern robotic systems. The combination of bandwidth, robustness, and integration flexibility makes it well-suited for sensor-rich platforms operating in dynamic, mission-critical environments. The following features highlight how GMSL addresses key vision-related challenges in robotics.

High data rate 

The GMSL2 and GMSL3 product families support forward-channel (video path) data rates of 3 Gbps, 6 Gbps, and 12 Gbps, covering a wide range of robotic vision use cases. These flexible link rates allow system designers to optimize for resolution, frame rate, sensor type, and processing requirements (Figure 2).

Figure 2 Sensor bandwidth ranges with GMSL capabilities. Source: Analog Devices

A 3 Gbps link is sufficient for most surround view cameras using 2 MP to 3 MP rolling shutter sensors at 60 frames per second (FPS). It also supports other common sensing modalities, such as ToF sensors and light detection and ranging (LIDAR) units with point-cloud outputs and radar sensors transmitting detection data or compressed image-like returns.

The 6 Gbps mode is typically used for the robot’s main forward-facing camera, where higher resolution sensors (usually 8 MP or more) are required for object detection, semantic understanding, or sign recognition. This data rate also supports ToF sensors with raw output, or stereo vision systems that either stream raw output from two image sensors or output a processed point cloud stream from an integrated image signal processor (ISP). Many commercially available stereo cameras today rely on this data rate for high frame-rate performance.

At the high end, 12 Gbps links enable support for 12 MP or higher resolution cameras used in specialized robotic applications that demand advanced object classification, scene segmentation, or long-range perception. Interestingly, even some low-resolution global shutter sensors require higher speed links to reduce readout time and avoid motion artifacts during fast capture cycles, which is critical in dynamic or high-speed environments.
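
As a rough sanity check of these pairings, raw sensor bandwidth can be estimated as resolution x bits per pixel x frame rate plus protocol overhead and compared against the link rates. The sensor formats, bit depths, and 20% overhead factor in the sketch below are illustrative assumptions, not Analog Devices figures.

```python
GBPS = 1e9
LINK_RATES = {"GMSL 3 Gbps": 3 * GBPS, "GMSL 6 Gbps": 6 * GBPS, "GMSL 12 Gbps": 12 * GBPS}

def sensor_bw(width: int, height: int, bits_per_pixel: int, fps: int, overhead: float = 0.2) -> float:
    """Approximate raw video bandwidth in bits per second, with a fixed overhead factor."""
    return width * height * bits_per_pixel * fps * (1 + overhead)

cameras = {
    "3 MP surround @ 60 fps, RAW12": sensor_bw(2048, 1536, 12, 60),
    "8 MP front @ 30 fps, RAW16":    sensor_bw(3840, 2160, 16, 30),
    "12 MP @ 30 fps, RAW16":         sensor_bw(4096, 3072, 16, 30),
}

for name, bw in cameras.items():
    # Pick the smallest link rate that still fits the estimated bandwidth.
    fits = [link for link, rate in LINK_RATES.items() if bw <= rate]
    print(f"{name}: ~{bw / GBPS:.1f} Gbps -> fits {fits[0] if fits else 'none'}")
```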

Determinism and low latency

Because GMSL uses frequency-domain duplexing to separate the forward (video and control) and reverse (control) channels, it enables bidirectional communication with deterministic low latency, without the risk of data collisions.

Across all link rates, GMSL maintains impressively low latency: the added delay from the input of a GMSL serializer to the output of a deserializer typically falls in the lower tens of microseconds—negligible for most real-time robotic vision systems.

The deterministic reverse-channel latency enables precise hardware triggering from the host to the camera—critical for synchronized image capture across multiple sensors, as well as for time-sensitive, event-driven frame triggering in complex robotic workflows.

Achieving this level of timing precision with USB or Ethernet cameras typically requires the addition of a separate hardware trigger line, increasing system complexity and cabling overhead.
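
For context, the sketch below compares those delays against one frame period at 60 FPS; the 30 µs link delay and 2 ms host-stack figure are assumed, illustrative values rather than measured numbers.

```python
# Back-of-the-envelope latency comparison (illustrative assumptions only).
frame_period_us = 1e6 / 60      # ~16,667 us per frame at 60 fps
gmsl_link_delay_us = 30         # "lower tens of microseconds" (assumed midpoint)
host_stack_delay_us = 2000      # assumed ~2 ms for a USB/Ethernet camera driver stack

print(f"GMSL link delay as a share of one frame: {gmsl_link_delay_us / frame_period_us:.2%}")
print(f"Host-stack delay as a share of one frame: {host_stack_delay_us / frame_period_us:.2%}")
```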

Small footprint and low power

One of the key value propositions of GMSL is its ability to reduce cable and connector infrastructure.

GMSL itself is a full-duplex link, and most GMSL cameras utilize the power-over-coax (PoC) feature, allowing video data, bidirectional control signals, and power to be transmitted over a single thin coaxial cable.

This significantly simplifies wiring, reduces the overall weight and bulk of cable harnesses, and eases mechanical routing in compact or articulated robotic platforms (Figure 3).

Figure 3 A typical GMSL camera architecture using the MAX96717. Source: Analog Devices

In addition, the GMSL serializer is a highly integrated device that combines the video interface (for example, MIPI-CSI) and the GMSL PHY into a single chip. The power consumption of the GMSL serializer, typically around 260 mW in 6 Gbps mode, is favorably low compared to alternative technologies with similar data throughput.

All these features will translate to smaller board areas, reduced thermal management requirements (often eliminating the need for bulky heatsinks), and greater overall system efficiency, particularly for battery-powered robots.

Sensor aggregation and video data routing

GMSL deserializers are available in multiple configurations, supporting one, two, or four input links, allowing flexible sensor aggregation architectures. This enables designers to connect multiple cameras or sensor modules to a single processing unit without additional switching or external muxing, which is especially useful in multicamera robotics systems.

In addition to the multiple inputs, GMSL SERDES also supports advanced features to manage and route data intelligently across the system. These include:

  • I2C and GPIO broadcasting for simultaneous sensor configuration and frame synchronization
  • I2C address aliasing to avoid I2C address conflicts in passthrough (illustrated conceptually in the sketch after this list)
  • Virtual channel reassignment, allowing multiple video streams to be mapped cleanly into the frame buffer of the system on chip (SoC)
  • Video stream duplication and virtual channel filtering, enabling selected video data to be delivered to multiple SoCs—for example, to support both automation and interaction pipelines from the same camera feed or to support redundant processing paths for enhanced functional safety
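
The following is a purely conceptual model of I2C address aliasing, not the actual GMSL register interface: the host addresses each remote camera at a unique alias, and the link hardware rewrites that alias to the sensor's real (shared) address on the far side of the corresponding link, so several identical sensors can coexist behind one deserializer.

```python
REAL_SENSOR_ADDR = 0x10  # all cameras use the same fixed sensor address (assumed)

class LinkAliasTable:
    """Maps host-side alias addresses to the camera link that owns them (conceptual model)."""
    def __init__(self) -> None:
        self.alias_to_link: dict[int, int] = {}

    def add_camera(self, link_id: int, alias_addr: int) -> None:
        self.alias_to_link[alias_addr] = link_id

    def route(self, alias_addr: int, register: int, value: int) -> str:
        link = self.alias_to_link[alias_addr]
        # The transaction is forwarded on the selected link, re-addressed to the sensor's real address.
        return (f"link {link}: write reg 0x{register:02X}=0x{value:02X} "
                f"to sensor 0x{REAL_SENSOR_ADDR:02X}")

table = LinkAliasTable()
table.add_camera(link_id=0, alias_addr=0x30)
table.add_camera(link_id=1, alias_addr=0x32)
print(table.route(0x30, register=0x0A, value=0x01))  # host talks to alias 0x30, link 0 sensor responds
print(table.route(0x32, register=0x0A, value=0x01))  # host talks to alias 0x32, link 1 sensor responds
```
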
Safety and reliability

Originally developed for automotive advanced driver assistance systems (ADAS) applications, GMSL has been field-proven in environments where safety, reliability, and robustness are non-negotiable. Robotic systems, particularly those operating around people or performing mission-critical industrial tasks, can benefit from the same high standards.

| Feature/Criteria | GMSL (GMSL2/GMSL3) | USB (for example, USB 3.x) | Ethernet (for example, GigE Vision) |
| --- | --- | --- | --- |
| Cable Type | Single coax or STP (data + power + control) | Separate USB + power + general-purpose input/output (GPIO) | Separate Ethernet + power (PoE optional) + GPIO |
| Max Cable Length | 15+ meters with coax | 3 m reliably | 100 m with Cat5e/Cat6 |
| Power Delivery | Integrated (PoC) | Requires separate cable or USB-PD | Requires PoE infrastructure or separate cable |
| Latency (Typical) | Tens of microseconds (deterministic) | Millisecond-level, OS-dependent | Millisecond-level, buffered + OS/network stack |
| Data Rate | 3 Gbps/6 Gbps/12 Gbps (uncompressed, per link) | Up to 5 Gbps (USB 3.1 Gen 1) | 1 Gbps (GigE), 10 Gbps (10 GigE, uncommon in robotics) |
| Video Compression | Not required (raw or ISP output) | Often required for higher resolutions | Often required |
| Hardware Trigger Support | Built-in via reverse channel (no extra wire) | Requires extra GPIO or USB communications device class (CDC) interface | Requires extra GPIO or sync box |
| Sensor Aggregation | Native via multi-input deserializer | Typically point-to-point | Typically point-to-point |
| EMI Robustness | High—designed for automotive EMI standards | Moderate | Moderate to high (depends on shielding, layout) |
| Environmental Suitability | Automotive-grade temp, ruggedized | Consumer-grade unless hardened | Varies (industrial options exist) |
| Software Stack | Direct MIPI-CSI integration with SoC | OS driver stack + USB video device class (UVC) or proprietary software development kit (SDK) | OS driver stack + GigE Vision/GenICam |
| Functional Safety Support | ASIL-B devices, data replication, deterministic sync | Minimal | Minimal |
| Deployment Ecosystem | Mature in ADAS, growing in robotics | Broad in consumer/PC, limited industrial options | Mature in industrial vision |
| Integration Complexity | Moderate—requires SERDES and routing config | Low for development (plug and play); high for production | Moderate—needs switch/router config and sync wiring |

Table 1 A comparison between GMSL, USB, and Ethernet in terms of trade-offs in robotic vision. Source: Analog Devices

Most GMSL serializers and deserializers are qualified to operate across a –40°C to +105°C temperature range, with built-in adaptive equalization that continuously monitors and adjusts transceiver settings in response to environmental changes.

This provides system architects with the flexibility to design robots that function reliably in extreme or fluctuating temperature conditions.

In addition, most GMSL devices are ASIL-B compliant and exhibit extremely low BERs. Under compliant link conditions, GMSL2 offers a typical BER of 10⁻¹⁵, while GMSL3, with its mandatory forward error correction (FEC), can reach a BER as low as 10⁻³⁰. This exceptional data integrity, combined with safety certification, significantly simplifies system-level functional safety integration.
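
To put those exponents in perspective, the expected interval between bit errors is simply the reciprocal of line rate x BER; the 6 Gbps line rate in the sketch below is an assumption chosen for illustration.

```python
line_rate_bps = 6e9  # assume a 6 Gbps link streaming continuously

for name, ber in {"GMSL2 (BER 1e-15)": 1e-15, "GMSL3 + FEC (BER 1e-30)": 1e-30}.items():
    errors_per_second = line_rate_bps * ber
    seconds_per_error = 1 / errors_per_second
    print(f"{name}: ~1 bit error every {seconds_per_error / 3600:.3g} hours "
          f"({seconds_per_error / (3600 * 24 * 365):.3g} years)")
```

On those assumptions, a BER of 10⁻¹⁵ corresponds to roughly one bit error every couple of days of continuous streaming, while 10⁻³⁰ is effectively error-free over any practical system lifetime.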

Ultimately, GMSL’s robustness leads to reduced downtime, lower maintenance costs, and greater confidence in long-term system reliability—critical advantages in both industrial and service robotics deployments.

Mature ecosystem

GMSL benefits from a mature and deployment-ready ecosystem, shaped by years of high volume use in automotive systems and supported by a broad network of global ecosystem partners.

This includes a comprehensive portfolio of evaluation and production-ready cameras, compute boards, cables, connectors, and software/driver support—all tested and validated under stringent real-world conditions.

For robotics developers, this ecosystem translates to shorter development cycles, simplified integration, and a lower barrier to scale from prototype to production.

GMSL vs. legacy robotics connectivity

In recent years, GMSL has become increasingly accessible beyond the automotive industry, opening new possibilities for high performance robotic systems.

As the demands on robotic vision grow with more cameras, higher resolution, tighter synchronization, and harsher environments, traditional interfaces like USB and Ethernet often fall short in terms of bandwidth, latency, and integration complexity.

GMSL is now emerging as a preferred upgrade path, offering a robust, scalable, and production-ready solution that is gradually replacing USB and Ethernet in many advanced robotics platforms. Table 1 compares the three technologies across key metrics relevant to robotic vision design.

An evolution in robotics

As robotics moves into increasingly demanding environments and across diverse use cases, vision systems must evolve to support higher sensor counts, greater bandwidth, and deterministic performance.

While legacy connectivity solutions will remain important for development and certain deployment scenarios, they introduce trade-offs in latency, synchronization, and system integration that limit scalability.

GMSL, with its combination of high data rates, long cable reach, integrated power delivery, and bidirectional deterministic low latency, provides a proven foundation for building scalable robotic vision systems.

By adopting GMSL, designers can accelerate the transition from prototype to production, delivering smarter, more reliable robots ready to meet the challenges of a wide range of real-world applications.

Kainan Wang is a systems applications engineer in the Automotive Business Unit at Analog Devices in Wilmington, Massachusetts. He joined ADI in 2016 after receiving an M.S. in electrical engineering from Northeastern University in Boston, Massachusetts. Kainan has been working with 2D/3D imaging solutions from hardware development and systems integrations to application development. Most recently, his work focus has been to expand ADI automotive technologies into other markets beyond automotive.

Related Content

The post Building high-performance robotic vision with GMSL appeared first on EDN.

Dell Technologies’ 2026 Predictions: AI Acceleration, Sovereign AI & Governance

ELE Times - Fri, 12/12/2025 - 14:22

Dell Technologies hosted its Predictions: 2026 & Beyond briefing for the Asia Pacific Japan & Greater China (APJC) media, where the company’s Global Chief Technology Officer & Chief AI Officer, John Roese, and APJC President, Peter Marrs, outlined the transformative technology trends and Dell’s strategies for accelerating AI adoption and innovation in the region.

John Roese’s vision on the trends set to shape the technology industry in 2026 and beyond (Image Credits: Dell Technologies)

According to Roese, the rapid acceleration of AI is set to profoundly reengineer the entire fabric of enterprise and industry, driving new ways of operating, building, and innovating at an unprecedented scale and pace.

Focus on scalability and real adoption

A key trend is the shift in focus towards scaling AI for tangible business outcomes.  “Conversations are on very real adoption, and AI is creating a truly transformational opportunity,” said Marrs. “We are working with customers across the region to build AI at scale.”

Marrs noted that growing deployment of agentic AI is an example of this transformation, with organizations such as Zoho in India already working with Dell to accelerate agentic AI adoption by delivering contextual, privacy-first and multimodal enterprise AI solutions. “AI has become more accessible for all companies in the region, and what we’ve been doing is successfully building foundations with customers to deploy AI at scale.”

Roese highlighted that the industry is now entering the autonomous agent era, where agentic AI is evolving from a helpful assistant to an integral manager of complex, long-running processes. “We expect that as people go on the agentic journey into 2026, they will be surprised by how much more agents do for them than they anticipated. Its very presence will bring value to make humans more efficient, and make the non-AI work, work better,” he noted.

As the industry continues to build and deploy more enterprise AI, Roese also emphasized the need for businesses to rethink how they treat and make resilient AI factories.

Sovereign AI and governance as the foundation for innovation

With the light-speed acceleration of AI development, there is a degree of volatility. Roese predicted that the demand for robust governance frameworks and private, controlled AI environments will become undeniable, urging the industry to build on both internal and external AI guardrails that allow organizations to innovate safely and sustainably.

“Last year, we predicted that ‘Agentic’ would be the word of 2025. This year, the word ‘Governance’ is going to play a much bigger role,” Roese said. “The technology and its use cases are not going to be successful if you do not have discipline and governance around how you operate your AI strategy as either an enterprise, a region, or a country.”

At national levels, the rapid rise of sovereign AI ecosystems will continue as AI becomes critical to state-level interests. Marrs discussed this trend’s momentum, noting that enterprises, like many countries in the region, are also actively building their own frameworks to drive local innovation, with strong foundations already in place.

Building the ecosystem for impact and progress

To bridge that gap, Marrs reiterated the importance of a collaborative ecosystem in nurturing a skilled talent pool and advancing the region’s AI competitiveness, citing the APJ AI Innovation Hub as an initiative that is delivering impact through the combination of Dell’s capabilities, talent, and ecosystem.

“By working with experts, government, and industry peers, we’ve made unbelievable headway in fostering skill development and advancing our collective expertise,” said Marrs. “Together, we are accelerating Asia’s leadership as an AI region, identifying key steps to bolster the region’s growth. Dell is excited about how we’re participating and helping with this transformation.”

The post Dell Technologies’ 2026 Predictions: AI Acceleration, Sovereign AI & Governance appeared first on ELE Times.

NAL-CSIR Advances Field testing of Indigenous Defence Tech

ELE Times - Fri, 12/12/2025 - 14:01

The Council of Scientific and Industrial Research (CSIR)-National Aerospace Laboratories (NAL), in collaboration with private industry partner Solar Defence & Aerospace Ltd. (SDAL), has co-developed a 150-kg-class Loitering Munition UAV (LM-UAV).

The drone system is powered by an indigenous Wankel engine that blends efficiency and reliability, both essential for defence products. With an operational range of up to 900 kilometres and an endurance of 6-9 hours, it is suited to long missions, and it can operate at service ceilings of up to 5 kilometres, providing altitude flexibility. The system is fitted with cutting-edge features such as GPS-denied navigation, essential for situations where GPS is compromised, along with a low radar cross-section that enhances its stealth characteristics.

The drone system also incorporates the latest AI technology for target identification, which will boost precision and autonomy during missions.

Next is field testing of High Altitude Platforms (HAPs), solar-powered unmanned aircraft capable of sustained flight above a 20-kilometre altitude. These HAPs act as pseudo-satellites, used for extended surveillance, communication, and reconnaissance.

The field testing of the 150-kg class LM-UAV and the development of solar-powered HAPs mark important milestones in India’s evolving indigenous defence technology landscape. These advancements are testament to the country’s commitment to building resilient and self-sustaining defence assets through collaborative public-private partnerships and cutting-edge aerospace research.

They are part of India’s efforts to develop self-reliant defence technologies under the ‘Atmanirbhar Bharat’ Initiative.

The post NAL-CSIR Advances Field testing of Indigenous Defence Tech appeared first on ELE Times.

Hardware security to bolster interconnect IPs for SoCs, chiplets

EDN Network - Fri, 12/12/2025 - 13:38

Hardware security vulnerabilities have greatly expanded the attack surface beyond traditional software exploits, making hardware security assurance crucial in modern system-on-chip (SoC) designs. Chip interconnect specialist Arteris’ acquisition of semiconductor cybersecurity assurance supplier Cycuity is the latest reminder of how hardware security is becoming an inflection point in SoC design.

Arteris delivers data-movement IP hardware and IP block integration software to connect on-chip components and chiplets. On the other hand, Cycuity ensures the security of these semiconductor design building blocks and their interactions. Charles Janac, president and CEO of Arteris, claims that Cycuity’s technology and expertise will add to Arteris’ product portfolio, enabling chip designers to better understand and improve data movement security in chiplets and SoCs.

Figure 1 A security solution, built around a coverage metric tailored for hardware designs, enables chip designers to precisely measure the effectiveness of security protocols. Source: Cycuity

Cycuity’s hardware security solutions prevent vulnerabilities throughout chip development—from IP blocks to RTL design to full systems—with systematic security assurance in software configuration via scalable, repeatable security verification. The San Jose, California-based firm specifies, integrates, and verifies security across a chip’s hardware development lifecycle.

Security is becoming critical to all types of chip designs because the attack potential has expanded to the hardware layer. As a result, silicon vulnerabilities can compromise electronic systems and expose unprotected information. The National Institute of Standards and Technology (NIST) has recently released data showing common vulnerabilities and exposures (CVEs) in hardware grew by more than 15 times over the last five years.

For Arteris’ network-on-chip (NoC) IPs, which provide the backbone for data movement across SoCs and chiplets, Cycuity’s offerings can help mitigate security vulnerabilities throughout the SoC hardware development cycle. They can uncover security weaknesses across firmware, IP blocks, chip subsystems, chiplets, and full SoCs.

Figure 2 This hardware security solution identifies secure design assets and ensures they are properly managed during secure boot. Source: Cycuity

Cycuity—which works closely with leading EDA toolmakers such as Cadence, Siemens EDA, and Synopsys—has its hardware security tools integrated with leading EDA environments. That allows chip designers to identify, verify, and resolve security risks before silicon implementation and production. For instance, they can safeguard against attacks exploiting microarchitectural side channels, logic bugs, third-party and open-source IP, unsecured interconnects, debug backdoors, and supply-chain gaps.

The acquisition deal, subject to regulatory approval, is expected to close in the first quarter of 2026.

Related Content

The post Hardware security to bolster interconnect IPs for SoCs, chiplets appeared first on EDN.

Toyota & NISE Test Mirai Hydrogen FCEV in India Conditions

ELE Times - Fri, 12/12/2025 - 11:56

Toyota Kirloskar Motor (TKM) and the National Institute of Solar Energy (NISE), under the Ministry of New and Renewable Energy, have signed an MoU to collaborate on testing Toyota’s Mirai, a hydrogen fuel cell EV, in Indian conditions.

The MoU was signed in the presence of the Union Minister of New and Renewable Energy and Consumer Affairs, Food and Public Distribution, Prahlad Joshi, in New Delhi. Under the agreement, Toyota has handed over a Toyota Mirai to NISE, which will conduct comprehensive real-world testing of the vehicle. The study will assess the Mirai’s performance across Indian climates, terrains, and driving conditions.

NISE will study fuel efficiency, real-world range, refuelling patterns, drivability, and environmental resilience, along with the overall adaptability of the vehicle to Indian roads and traffic conditions. The results are expected to guide the early-stage adoption of hydrogen mobility technologies in the country.

This initiative will be a breakthrough in the advancement of hydrogen mobility in the country. It will also support India’s Green Hydrogen Mission and strengthen the country’s decarbonisation and clean transportation goals.

India’s decarbonization goals revolve around its Panchamrit targets: reaching 500 GW non-fossil fuel capacity and 50% renewable energy by 2030, cutting emissions intensity by 45% (vs. 2005) and reducing total emissions by 1 billion tonnes by 2030, all leading to Net-Zero by 2070, driven by massive solar, wind, green hydrogen, battery storage, and grid improvements.

The post Toyota & NISE Test Mirai Hydrogen FCEV in India Conditions appeared first on ELE Times.

Advanced Energy unveils dual-output 400W module for NeoPower configurable power supplies

Semiconductor today - Fri, 12/12/2025 - 10:43
Advanced Energy Inc of Denver, CO, USA (which designs and manufactures precision power conversion, measurement and control solutions) has introduced a new dual-output 24V/24V module for its NeoPower family of configurable power supplies, delivering up to 400W (200W per output) in a compact 2.5-inch form factor...
