The new performance bottleneck: How more GPU memory unlocks next-gen gaming and AI PCs
Courtesy: Micron
The next era of PC performance will be defined not by more compute, but by memory scale. The rising size of game assets and AI models has, until now, outpaced GPU memory capacity. Micron’s latest evolution of GDDR7 marks a pivotal shift for next-generation GPUs by combining higher memory density with the scalability that modern gaming and AI workloads now demand. With expanded capacity options built to support configurations up to 96GB of graphics memory, this generation of GDDR empowers systems to keep vastly larger worlds, richer textures, and growing AI models resident in memory, reducing bottlenecks and unlocking more consistent real-time performance across high-fidelity games and AI-enhanced applications.
Visual computing: The convergence of graphics and intelligence
Visual computing is entering a new era as graphics and intelligence converge. Modern systems must not only render high-fidelity scenes in real time, but also interpret, enhance, and generate content using increasingly complex AI models. Two forces are accelerating this shift: the push toward cinematic quality gaming and the rapid emergence of AI-powered PCs. As worlds grow larger, textures more detailed, and on-device AI more integral to responsiveness and personalisation, the demands placed on GPU memory have surged. As a result, memory capacity and efficiency now determine how smoothly a system can deliver immersive gameplay, intelligent creation tools, and real-time simulation, making memory a foundational enabler of next-generation visual computing.
Delivering unprecedented performance for high-resolution gaming
Modern games are pushing GPU architectures harder than ever. Real-time ray tracing demands continuous access to massive datasets (geometry, materials, lighting maps, and shadows), while high refresh rate displays and ultra-resolution textures multiply the data the GPU must process each frame. Add in sprawling open worlds and increasingly AI-assisted rendering techniques, and the result is a workload that easily overwhelms traditional memory limits.
The problem is that when GPU memory can’t hold all this data at once, the system is forced to constantly swap assets in and out. That leads to the issues gamers know too well: texture pop-in, mid-frame stutters, uneven frame times, and sudden drops during intense ray-traced scenes. AI-generated frames and upscaling pipelines also become less consistent when memory is constrained, because the models and intermediate buffers they rely on are constantly competing for space.
This is where next-generation GDDR capacity and bandwidth become critical. By enabling far larger datasets to remain resident in memory, GDDR7 keeps the entire visual pipeline fed (textures, lighting data, geometry sets, and AI inference models) without the bottlenecks that cause visual artefacts or performance instability. The result is smoother, more predictable real-time rendering at 4K, 5K, and 8K, even in the most demanding scenes.
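To make the scale concrete, even the per-frame render targets, before counting textures, geometry, or AI buffers, reach hundreds of megabytes at these resolutions. A quick sketch under illustrative assumptions; the pipeline layout and byte counts below are not any particular engine's budget:

```python
# Rough render-target memory at high resolutions. Assumptions (illustrative,
# not any specific engine): four render targets in a deferred pipeline at
# 8 bytes/pixel (e.g. RGBA16F), plus a 4-byte depth/stencil buffer.

RESOLUTIONS = {"4K": (3840, 2160), "5K": (5120, 2880), "8K": (7680, 4320)}

def framebuffer_mb(width, height, targets=4, bytes_per_pixel=8, depth_bytes=4):
    pixels = width * height
    return pixels * (targets * bytes_per_pixel + depth_bytes) / (1024 ** 2)

for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: ~{framebuffer_mb(w, h):.0f} MB of render targets per frame")
```

Under these assumptions, 8K render targets alone exceed a gigabyte per frame, before a single texture is resident.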
To keep these visual pipelines running efficiently, the memory subsystem must deliver data rapidly and consistently.
Enabling larger, more detailed worlds with 24Gb die density
As game environments expand and visual assets grow, memory capacity becomes critical to maintaining seamless, artefact-free experiences. Micron’s new 24Gb die density enables up to 96GB of graphics memory, giving GPUs significantly more space for high-resolution textures, expansive worlds, and advanced visual effects.
This increased capacity matters to gamers because it:
- Reduces asset swapping and texture pop-in
- Supports larger frame buffers for high-resolution displays
- Enables richer, more detailed environments with fewer loading transitions
Creators and professional users also benefit from faster real-time rendering, more responsive GPU-accelerated workflows, and improved handling of large datasets.
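The headline capacity follows from simple arithmetic: a 24Gb die holds 3GB, so 32 devices reach 96GB. A back-of-the-envelope sketch; the 512-bit bus, 32-bit device width, and clamshell arrangement are illustrative assumptions, not a disclosed configuration:

```python
# Back-of-the-envelope GDDR7 capacity arithmetic. Bus width, device width,
# and the clamshell option are illustrative assumptions.

GB_PER_DEVICE = 24 / 8   # a 24 Gb die holds 3 GB

def total_capacity_gb(bus_width_bits, device_width_bits=32, clamshell=False):
    devices = bus_width_bits // device_width_bits
    if clamshell:
        devices *= 2     # two devices share each channel
    return devices * GB_PER_DEVICE

print(total_capacity_gb(512))                  # 16 devices -> 48.0 GB
print(total_capacity_gb(512, clamshell=True))  # 32 devices -> 96.0 GB
```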
Fueling AI-enhanced graphics and the rise of AI PCs
AI is rapidly becoming integral to personal computing. Neural rendering, real-time media enhancement, content generation, and AI-assisted workflows place new demands on system memory. Micron GDDR7 is built to support these emerging workloads with increased bandwidth, lower latency, and improved efficiency.
Why GDDR7 matters for AI PCs
AI-driven graphics and compute tasks rely on continuous movement of large datasets. GDDR7 accelerates these operations by improving throughput and responsiveness across GPU pipelines.
Systems built with GDDR7 benefit from:
- Faster on-device AI inference for creation, media, and collaboration
- Lower-latency performance across hybrid CPU-GPU-NPU workflows
- Higher throughput for neural graphics and generative AI models
- Improved power efficiency thanks to architectural refinements and reduced operating voltages
As AI becomes embedded into everyday PC tasks such as writing, coding, editing, presenting, and gaming, memory performance will heavily influence the immediacy, intelligence, and fluidity of the experience.
Enabling the future of immersive and intelligent computing
Micron GDDR7 is more than a performance improvement; it is a foundational technology for the next decade of visual and AI computing. With 36 Gbps bandwidth, 24Gb die density, and improved efficiency, GDDR7 empowers GPU and AI PC vendors to deliver richer, more dynamic, and more intelligent computing experiences.
While NPUs are becoming essential for power-efficient, on-device AI acceleration, the most demanding visual and AI workloads still rely on the scale and parallelism of a discrete GPU. NPUs excel at sustained, low-power inference, but GPUs deliver significantly higher throughput for large models, neural graphics, advanced rendering, and gaming workloads. By pairing NPUs with discrete GPUs equipped with GDDR7, AI PCs can intelligently distribute tasks, assigning lightweight inference to the NPU while leveraging the GPU’s computing power and memory bandwidth for operations that require maximum performance. This combination unlocks capabilities far beyond what NPUs can achieve alone.
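The NPU/GPU division of labour can be sketched as a toy routing heuristic. The parameter threshold and device labels below are hypothetical; real schedulers also weigh power state, memory pressure, and where model weights already reside:

```python
# Toy dispatcher for hybrid AI PCs: sustained lightweight inference goes to
# the NPU, large or throughput-critical work to the discrete GPU. The 3B
# parameter threshold is an invented illustration.

def pick_device(model_params_billion: float, throughput_critical: bool) -> str:
    if model_params_billion <= 3 and not throughput_critical:
        return "NPU"   # low-power, always-on inference
    return "GPU"       # high bandwidth and memory capacity for big models

print(pick_device(1.0, throughput_critical=False))   # NPU
print(pick_device(70.0, throughput_critical=False))  # GPU
```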
Together, Micron GDDR7 and the next wave of discrete GPUs set the stage for a new era of immersive graphics and high-performance AI computing.
The post The new performance bottleneck: How more GPU memory unlocks next-gen gaming and AI PCs appeared first on ELE Times.
Thermal Management in 3D-IC: Modelling Hotspots, Materials, & Cooling Strategies
Courtesy: Cadence
As three-dimensional integrated circuit (3D-IC) technology becomes the architectural backbone of AI, high-performance computing (HPC), and advanced edge systems, thermal management has shifted from a downstream constraint to a fundamental design driver. The dense vertical integration that enables unprecedented performance also concentrates heat at levels that traditional two-dimensional design methodologies cannot anticipate or mitigate. In fact, the temperatures and heat fluxes inside localised 3D-IC hotspots can approach a meaningful fraction of those encountered in rocket-engine thermal zones, only here the challenge unfolds on a microscopic silicon landscape rather than within a combustion chamber. This extreme thermal intensity makes early and predictive planning essential rather than optional.

Effective thermal management now begins at the architecture definition stage, where designers evaluate stack feasibility, power distribution, and allowable thermal envelopes before committing to partitioning decisions. These early insights directly shape block placement, power-delivery topology, and the choice of materials, interposers, and packaging technologies. As the industry increasingly relies on vertically integrated systems to achieve performance-per-watt gains, thermal awareness emerges as an architectural discipline in its own right, one that guides every subsequent stage of the 3D-IC design flow.
This article covers modelling, estimating, and mitigating thermal challenges in dense stacks and interposer-based 3D-ICs, with an emphasis on early electrothermal strategies that scale with complexity.
Sources of Heat in Stacked Architectures
Heat in 3D-ICs arises from a combination of device activity, vertical power density, and material constraints. When logic, memory, and accelerators are stacked, the total power per unit footprint increases dramatically. Upper dies, which are furthest from the heatsink, experience higher thermal resistance and reduced cooling efficiency, creating natural hotspots even when their individual power numbers appear modest.
The placement of through-silicon via (TSV) arrays, micro-bumps, and interconnect pillars also shapes the heat landscape. These structures act not only as electrical conduits but also as thermal conduits, depending on the material and density. Die-to-die interfaces with bonding layers often introduce thermal bottlenecks, and when chiplets operate at different power states, steep thermal gradients can trigger stress and reliability concerns. Understanding these interactions early is essential for setting realistic thermal limits and performance expectations.
Early Compact Models and Power Map Estimation
Thermal analysis must begin in parallel with the architectural definition itself. Early-stage compact models enable architects to approximate temperature distributions using only high-level power budgets, long before physical implementation. By capturing the combined influence of die thickness, material stacks, bonding interfaces, and interposer conductivity, these models reveal whether planned power densities or proposed die-stack configurations are thermally realistic. They help flag infeasible assumptions early, ensuring that functional partitioning and stacking choices are guided by thermally credible boundaries rather than late-stage surprises.

Creating usable power maps at this stage does not require full register transfer level (RTL) activity vectors. Coarse workload profiles can yield first-order estimates of dynamic and leakage power. When combined with simplified geometry models, they highlight thermally sensitive regions, enabling design teams to adjust block partitioning, die assignment, and approximate placement before entering the detailed implementation phase.
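A minimal sketch of such an early-stage compact model is a 1-D thermal resistance stack: each die's temperature accumulates across the interfaces between it and the heatsink. The powers and interface resistances below are illustrative placeholders, not real silicon data:

```python
# Minimal 1-D compact thermal model of a two-die stack. Powers (W) and
# interface resistances (K/W) are illustrative placeholders.

def stack_temperatures(ambient_c, powers_w, resistances_k_per_w):
    """Junction temperatures, bottom die (nearest the heatsink) first.
    All heat escapes downward, so each interface carries the combined
    power of every die above it; temperature accumulates layer by layer."""
    temps = []
    t = ambient_c
    heat_crossing = sum(powers_w)   # heat through the next interface down
    for p, r in zip(powers_w, resistances_k_per_w):
        t += heat_crossing * r      # temperature rise across this interface
        temps.append(t)
        heat_crossing -= p          # this die's heat has now exited the stack
    return temps

# 60 W logic die on the heatsink side, 15 W memory die stacked above it
temps = stack_temperatures(45.0, [60.0, 15.0], [0.2, 0.5])
print(f"logic die ~{temps[0]:.1f} °C, memory die ~{temps[1]:.1f} °C")
```

Even this crude model reproduces the effect described above: the low-power upper die runs hotter than the high-power lower die because every interface between it and the heatsink adds resistance.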

Cadence’s multiphysics system analysis ecosystem connects power estimation, compact thermal model (CTM) generation, and system-level thermal analysis, ensuring that signal, power, electromagnetic (EM), and thermal assumptions remain aligned throughout the early design phase. This early visibility reduces late-stage thermal surprises, which are often the costliest to rectify.
Heat Paths Through Dies, Interposers, and Package
Heat does not follow a single escape route in a 3D-IC. Instead, it propagates through a network of vertical and lateral paths whose efficiency depends on materials, die arrangement, and the package environment. Lower dies may benefit from direct contact with the heatsink, while upper dies rely on indirect conduction through intermediate layers. Thermal resistance builds cumulatively across each interface.
Interposers, whether made of silicon, glass, or organic materials, play a significant role in the heat flow picture. Silicon interposers offer superior thermal conductivity, enabling heat spreading but also concentrating thermal load where chiplets cluster. Organic interposers introduce more thermal resistance but offer other integration advantages. Achieving the correct tradeoff means modelling these layers as active participants in heat distribution, not static mechanical components.
The entire package, including substrate layers, heat spreaders, and lid materials, must also be included in thermal simulation. When package effects are omitted in early analysis, temperature predictions often skew optimistic, masking hotspots that emerge only after assembly-level modelling is performed.
Materials, TIMs, and Cooling Options for Stacks
Thermal simulation heavily relies on the structural definition of a product because the geometry, material properties, and assembly details directly dictate how heat is generated, transferred, and dissipated.
High-conductivity silicon, optimised interconnect materials, and improved underfill or bonding layers can lower the vertical thermal resistance of a stack. Thermal interface materials (TIMs) exhibit significant variations in performance, and even slight differences in thickness or coverage can result in substantial temperature differences across dies.
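The sensitivity to TIM thickness is easy to quantify with the 1-D conduction relation ΔT = P·t/(k·A). A quick sketch with illustrative numbers (the die size, power, and TIM properties are placeholders, not vendor data):

```python
# Temperature drop across a thermal interface material (TIM) via 1-D
# conduction: delta_T = P * t / (k * A). All values are illustrative.

def tim_delta_t(power_w, thickness_m, conductivity_w_mk, area_m2):
    return power_w * thickness_m / (conductivity_w_mk * area_m2)

area = 0.01 * 0.01   # a 10 mm x 10 mm die
print(tim_delta_t(50, 50e-6, 5.0, area))    # ~5 K across a 50 um TIM
print(tim_delta_t(50, 100e-6, 5.0, area))   # doubling thickness doubles the drop
```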
Cooling strategies for 3D-ICs are evolving rapidly. Traditional air cooling can be sufficient for moderate power budgets, but high-performance AI and HPC systems often require advanced approaches such as direct liquid cooling or vapour chamber solutions. The choice of cooling strategy should align with the power roadmap, not just the current generation’s requirements. Once a die stack is assembled, cooling options become constrained, so decisions made early influence the thermal feasibility of future product iterations.
Co-Optimisation with Placement and PDN Design
Thermal constraints directly influence floorplanning, macro placement, and power delivery network (PDN) topology in 3D-ICs. Efficient heat spreading is achieved when high-power blocks are positioned to maximise vertical conduction paths and lateral spreading through metal layers. If a block is placed too far from major thermal conduits, even robust cooling cannot compensate for the heat.

The PDN adds additional complexity. Power delivery structures, including TSVs, bumps, and interposer redistribution layers, introduce their own resistive heating. When modelled jointly with thermal effects, the combined electro-thermal behaviour reveals interactions that neither analysis can capture alone. Co-optimisation across these domains ensures that thermal mitigation does not compromise power integrity and vice versa.
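This coupling is a fixed-point problem: temperature raises metal resistance, which raises I²R loss, which raises temperature again. A minimal sketch with illustrative PDN values (only the copper temperature coefficient is a standard material constant):

```python
# Electro-thermal fixed-point sketch: copper resistance rises with
# temperature, increasing Joule heating, which feeds back into temperature.
# Current, resistance, and thermal resistance values are illustrative.

ALPHA_CU = 0.0039   # copper temperature coefficient of resistance (1/K)

def electrothermal_fixed_point(current_a, r_25c_ohm, r_th_k_per_w,
                               ambient_c=25.0, iters=50):
    t = ambient_c
    for _ in range(iters):
        r = r_25c_ohm * (1 + ALPHA_CU * (t - 25.0))   # R(T)
        p = current_a ** 2 * r                        # Joule heating
        t = ambient_c + p * r_th_k_per_w              # resulting temperature
    return t, p

t, p = electrothermal_fixed_point(current_a=10.0, r_25c_ohm=0.05,
                                  r_th_k_per_w=2.0)
print(f"converged at ~{t:.1f} °C with ~{p:.2f} W of PDN loss")
```

Solving thermal or electrical behaviour alone would miss the extra loss the feedback introduces, which is the point of joint electro-thermal analysis.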
A tightly integrated workflow enables round-trip refinement as power, placement, and package assumptions evolve. Without this iterative co-design, late-stage violations become inevitable, requiring disruptive redesigns.
Electro-Thermal Readiness for Signoff
Before committing a 3D-IC to final signoff, teams must verify that the design can withstand realistic thermal stress across operating modes and process corners. This includes validating that estimated power profiles align with actual activity, ensuring that predicted peak temperatures remain within safe limits, and confirming that no layer or interface exceeds its thermal reliability threshold.

Die-to-die boundaries, micro-bump arrays, TSV clusters, and package interconnects must be evaluated holistically, since minor thermal mismatches can accumulate into significant mechanical strain. Long-term reliability also depends on understanding how temperature interacts with electromigration, ageing, and performance drift over the product lifetime.
A complete electro-thermal signoff process provides the confidence needed before entering manufacturing, reducing field failures and ensuring long-term stability.
Designing for Thermal Scalability
3D-ICs deliver unprecedented performance, but they require a disciplined and predictive approach to thermal management. Success depends on treating heat as a first-order design variable, not a late-stage correction. Early modelling, accurate power estimation, careful material and stack selection, and co-optimisation across placement, PDN, interposer, and package all contribute to thermal resilience.
As system complexity continues to climb, teams that embed electro-thermal planning into their architecture and implementation flows will deliver higher-performing, more reliable, and scalable 3D-IC designs. Thermal awareness is no longer a specialisation; it is a foundational competency for next-generation semiconductor design.
ROHM introduces reference designs for three-phase inverters featuring new SiC power modules
Keysight Launches AI Inference Emulation Platform to Validate and Optimise AI Infrastructure
Keysight Technologies has introduced Keysight AI Inference Builder (KAI Inference Builder), an emulation and analytics platform designed to validate inference-optimised AI infrastructure at scale. Keysight will demonstrate the solution at NVIDIA GTC, showcasing operation within NVIDIA DSX Air AI factory simulation environments to model and optimise AI data centre infrastructure, architectures, and performance.
As the AI industry shifts from training large language models (LLMs) to deploying them, optimising inference has become a crucial factor for ROI. However, inference behaviour is highly dynamic and difficult to emulate. Traditional testing methods like synthetic traffic generation or GPU benchmarks cannot accurately reproduce the latency-sensitive workload behaviour of AI inferencing across compute, networking, memory, storage, and security layers.
KAI Inference Builder closes that gap by recreating realistic inference workload patterns and modelling industry-specific usage patterns to validate AI infrastructure, applications, and data centre deployments. The platform gives AI cloud providers, hardware vendors, and application developers a scalable solution for measuring, validating, and optimising real-world inference performance.
Key benefits of KAI Inference Builder include:
- Built for the Inference Era: As part of the Keysight Artificial Intelligence (KAI) portfolio, KAI Inference Builder emulates AI inference workloads at scale and validates full-stack deployments under real-world conditions to optimise performance, scale, and security.
- Industry- and Application-Specific Benchmarking: Instead of generic emulations, KAI Inference Builder emulates industry-specific usage patterns and LLM architectures for AI models seen in finance, healthcare, and other verticals, enabling organisations to model and analyse infrastructure and application behaviour across different types of AI data centre deployments.
- End-to-End Validation and Optimisation: KAI Inference Builder evaluates inference workflows from user request to model response, helping teams reduce costly rework by identifying and resolving bottlenecks early across compute, network, and security layers.
- Subsystem Isolation and Root-Cause Precision: KAI Inference Builder also supports client-only emulation, which identifies where performance bottlenecks emerge across the AI infrastructure stack under load, enabling targeted optimisation that reduces overprovisioning, lowers costs, and improves overall efficiency.
- NVIDIA DSX Air Integration and Live GTC Demo: Keysight will showcase KAI Inference Builder’s turnkey integration with NVIDIA Air at NVIDIA GTC, generating realistic inference workloads throughout NVIDIA’s data centre simulation environment so operators can validate inference infrastructure before deploying physical equipment.
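The core of client-side inference emulation is measuring latency-sensitive metrics such as time-to-first-token (TTFT) and decode throughput. A minimal sketch of that measurement loop against a simulated streaming endpoint (the endpoint here is an invented stand-in, not Keysight's implementation or any real API):

```python
# Sketch of client-side inference workload measurement: time-to-first-token
# (TTFT) and decode throughput. The endpoint is a simulated stand-in.

import time

def fake_llm_stream(n_tokens=20, prefill_s=0.05, per_token_s=0.01):
    """Simulated streaming LLM endpoint: a prefill delay, then steady decode."""
    time.sleep(prefill_s)
    for i in range(n_tokens):
        time.sleep(per_token_s)
        yield f"tok{i}"

def measure(stream):
    start = time.perf_counter()
    ttft, count = None, 0
    for _ in stream:
        if ttft is None:
            ttft = time.perf_counter() - start
        count += 1
    total = time.perf_counter() - start
    decode_tps = (count - 1) / (total - ttft)   # tokens/s after the first token
    return ttft, decode_tps

ttft, tps = measure(fake_llm_stream())
print(f"TTFT {ttft * 1000:.0f} ms, decode rate {tps:.0f} tok/s")
```

Running many such clients concurrently, with industry-specific prompt and response-length distributions, is what separates workload emulation from synthetic traffic generation.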
Ram Periakaruppan, Vice President and General Manager, Network Test & Security Solutions at Keysight, said: “Inference is the key to unlocking AI’s ROI, but that can be challenging to achieve when system resources aren’t optimised for capacity and performance. KAI Inference Builder provides visibility into real-world inference performance across the full stack, enabling customers to validate and optimise deployments before hardware reaches the rack. Showcasing this capability at NVIDIA GTC using NVIDIA’s Air platform demonstrates how organisations can accelerate the path to production while reducing risk and cost.”
Amit Katz, VP of Networking at NVIDIA, said: “As AI data centres scale to unprecedented levels, pre-deployment validation has transitioned from a best practice to a mission-critical requirement. The integration of KAI Inference Builder with NVIDIA DSX Air provides the essential environment needed to eliminate performance volatility and enables NVIDIA AI Factory partners and customers to emulate real inference workloads and preemptively resolve bottlenecks, ensuring optimised AI services reach the market quickly.”
POET and LITEON to co-develop optical modules for AI applications
STMicroelectronics accelerates global adoption and market growth of Physical AI with NVIDIA
STMicroelectronics announced the acceleration of global development and adoption of physical AI systems, including humanoid, industrial, service and healthcare robots. ST is integrating its comprehensive portfolio for advanced robotics into the reference set of components compatible with the NVIDIA Holoscan Sensor Bridge (HSB). In parallel, high-fidelity NVIDIA Isaac Sim models of ST components are being integrated into both companies’ robotics ecosystems to support faster, more accurate sim-to-real research and development. The first deliverables available to developers today include the integration of Leopard Imaging’s depth camera enabled by ST with the NVIDIA HSB and the high-fidelity model of an ST IMU into NVIDIA’s Isaac Sim ecosystem.
“ST is well engaged within the robotics community, providing robust support and a well-established ecosystem,” said Rino Peruzzi, Executive Vice President, Sales & Marketing, Americas & Global Key Account Organization at STMicroelectronics. “Our collaboration with NVIDIA aims to unleash the next wave of cutting-edge robotics innovation with developer and customer experience streamlined at every step, from the inception of AI algorithms to the seamless integration of sensors and actuators. This will accelerate the evolution of sophisticated AI-driven physical platforms.”
“Accelerating the development of next-generation autonomous systems requires high-fidelity simulation and seamless hardware integration to bridge the gap between virtual training and real-world deployment,” said Deepu Talla, Vice President of Robotics and Edge AI at NVIDIA. “The integration of STMicroelectronics’ sensor and actuator technologies with NVIDIA Isaac Sim, Holoscan Sensor Bridge and Jetson platforms provides developers with a unified foundation to build, simulate and deploy physical AI at scale.”
Simplifying sensor and actuator integration with the Holoscan Sensor Bridge
With the NVIDIA HSB, developers can unify, standardise, synchronise, and streamline data acquisition and logging from multiple ST sensors and actuators, a critical foundation for building high-fidelity NVIDIA Isaac models, accelerating learning, and minimising the sim-to-real gap.
The goal is to simplify the process of connecting ST sensors and actuators to NVIDIA Jetson platforms through pre-integrated solutions for the combination of STM32 MCUs, advanced sensors (including IMUs, imagers, and ToF devices) and motor‑control solutions, particularly for humanoid robot designs. Leopard Imaging’s stereo depth camera for robots is the perfect example. Using ST imaging, depth and motion-sensing technologies, it is expected to support a broad wave of designs across Physical AI OEMs, academic research groups and the industrial robotics community.
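A sketch of the kind of synchronisation a sensor bridge performs is resampling a high-rate IMU stream onto camera frame timestamps. The signals and rates below are synthetic placeholders, not the HSB API:

```python
# Aligning a 1 kHz IMU stream to 30 fps camera frames by linear
# interpolation. Data and rates are synthetic placeholders.

def interpolate(ts, values, t):
    """Linearly interpolate the (ts, values) series at time t."""
    for i in range(1, len(ts)):
        if ts[i] >= t:
            w = (t - ts[i - 1]) / (ts[i] - ts[i - 1])
            return values[i - 1] + w * (values[i] - values[i - 1])
    return values[-1]

imu_ts = [i * 0.001 for i in range(1000)]   # 1 kHz IMU timestamps (s)
gyro_z = [0.1 * t for t in imu_ts]          # synthetic ramp signal
cam_ts = [i / 30 for i in range(10)]        # 30 fps camera frames

aligned = [interpolate(imu_ts, gyro_z, t) for t in cam_ts]
print(f"gyro at frame 1 ({cam_ts[1]:.4f} s): {aligned[1]:.6f}")
```

When the bridge hardware timestamps all streams against one clock, this alignment is exact rather than estimated, which is what makes the resulting datasets usable for high-fidelity model building.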
Reducing cost, complexity, and challenges with high-fidelity modelling for Omniverse Isaac
Advanced robotics developers face high development costs, in addition to modelling challenges. High‑fidelity simulations with extensive randomisation demand substantial GPU and CPU resources and large datasets. Selecting which parameters to randomise, and over what ranges, requires deep domain expertise. Poor choices can result in unrealistic scenarios or inefficient training. Finally, excessive variability can confuse models, slow convergence, and degrade real‑world performance when randomisation no longer reflects plausible conditions.
ST and NVIDIA’s objective is to provide accurate, hardware-calibrated models for the comprehensive portfolio of ST components, matching the requirements of advanced robotics. Following the availability of the first model of an IMU, ST is working to bring developers models of ToF sensors, actuators and other ICs derived from benchmark data collected on real ST hardware, using ST tools to capture accurate parameters and realistic behaviour, resulting in models optimised to NVIDIA’s Isaac Sim ecosystem. NVIDIA HSB is being integrated into ST’s toolchain collaboratively.
As a result, ST and NVIDIA envision that more accurate models will significantly improve robot learning. With models that closely mirror real-world device behaviour, robots can learn from simulations that better reflect actual conditions, shortening training cycles and lowering the cost of building and refining humanoid robotics applications.
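The domain-randomisation idea discussed above can be sketched as a simulated IMU whose error parameters are sampled per training episode. The ranges below are invented for illustration, not taken from any ST datasheet; a hardware-calibrated model would constrain them to measured behaviour:

```python
# Domain randomisation for a simulated IMU: each episode samples bias and
# noise parameters. Ranges are invented for illustration only.

import random

def make_randomised_imu(rng):
    bias = rng.uniform(-0.02, 0.02)        # accelerometer bias (m/s^2)
    noise_std = rng.uniform(0.001, 0.01)   # white-noise sigma (m/s^2)
    def measure(true_accel):
        return true_accel + bias + rng.gauss(0.0, noise_std)
    return measure

rng = random.Random(42)                    # seeded for reproducibility
imu = make_randomised_imu(rng)
print(imu(9.81))                           # one noisy reading of gravity
```

Too-wide ranges produce the implausible scenarios the article warns about; calibrated models keep randomisation inside the envelope of real devices.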
Coherent demos InP technology innovation at OFC
Chiplet innovation isn’t waiting for perfect standards

Across markets such as AI, high-performance computing (HPC), and automotive, the demand for computational power continues to accelerate. This demand spans everything from compact edge devices to massive data center servers. Traditionally, that capacity was delivered by monolithic systems-on-chip (SoCs) implemented on a single silicon die. While manufacturing trade-offs can ease some pressures, a large die still limits optimization, forcing designers to balance power and performance across the entire chip rather than fine-tuning each function individually.
The problem is structural. Monolithic SoCs have reached physical and economic limits. As shown in Figure 1, reticle size is fixed, yields decline as die size grows, and the cost of large devices is prohibitively high.
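The yield argument follows directly from a standard defect model. A minimal sketch assuming a Poisson yield model and a hypothetical defect density:

```python
# Poisson yield model showing why several small dies out-yield one large
# die. The 0.1 defects/cm^2 density is a hypothetical illustration.

import math

def die_yield(area_cm2, defect_density_per_cm2=0.1):
    return math.exp(-area_cm2 * defect_density_per_cm2)   # Poisson model

print(f"800 mm^2 monolithic die: {die_yield(8.0):.1%} yield")
print(f"200 mm^2 chiplet:        {die_yield(2.0):.1%} yield")
# Known-good-die testing before packaging means each defect scraps one
# small chiplet rather than an entire reticle-sized die.
```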

Figure 1 Multi-die architectures are emerging as monolithic scaling reaches its limits. Source: Arteris Inc.
Multi-die systems offer a practical path forward. By breaking a large SoC into smaller chips, teams gain better yields, leverage proven components, and combine diverse process technologies in a single package. Additionally, chiplets can be reused across product lines, improving scalability and reducing cost.
The semiconductor industry has long envisioned chiplets as modular and interoperable, backed by fully proven standards. Companies are not waiting for that vision to materialize fully. They are already moving ahead with chiplet adoption while standards remain in flux.
Why chiplets, and why now?
Until recently, the world’s largest semiconductor companies were the predominant users of chiplet technology. These companies could control every aspect of the design, integration, and packaging processes.
Mid-size companies and startups also want this modular chiplet future realized. However, lacking the resources of industry giants, they must adapt and take incremental steps today, even as the whole framework evolves.
Disaggregating a monolithic design into chiplets offers multiple advantages. By mounting these components on a common silicon substrate, each die in the resulting multi-die system can be manufactured at the most appropriate technology node: memory at 28 nm, a high-performance processor at 7 nm, and a cutting-edge CPU at 2 nm, for example. Combining all dies into a single package creates a multi-die system that outperforms a monolithic design.
Standards: Ideal vs. actual
One of the issues is that the standards needed to make chiplets broadly interchangeable are not yet fully baked. They still need to be implemented, validated, and tested across different pieces of silicon before designers can count on them.
Even when two companies follow the exact specification, small details such as sideband signals or initialization steps can differ enough to cause unexpected failures. Until compatibility is proven at scale, design teams need to remain pragmatic in their approach to developing multi-die systems.
The ideal case is often described as chiplets that fit together like Lego bricks, highlighting the requirement that they are straightforward to combine and verified so that they work reliably together. Achieving that vision will ultimately depend on widely adopted industry standards that enable dies from different sources to function as one system.
Initiatives such as AMBA CHI Chip-to-Chip (C2C), Bunch of Wires (BoW), and Universal Chiplet Interconnect Express (UCIe) are helping to define the physical and protocol layers for die-to-die (D2D) links. Yet many challenges remain in areas such as system-level verification, latency optimization, power efficiency, security, and ensuring that chiplets from different vendors perform cohesively, as shown in Figure 2.

Figure 2 Multi-die SoC adoption is expanding across multiple markets. Source: Arteris Inc.
Companies can turn to multi-die systems
Progress can’t be delayed until standards are finalized, so design teams are advancing with innovation. Some of the ways system architects are tackling multi-die design are as follows:
- Design for modularity: Partition compute, memory, and IO into reusable blocks. Utilize silicon-proven network-on-chip (NoC) interconnect IP that supports multiple die-to-die (D2D) protocols and topologies.
- Build with interoperability in mind: Utilize tools and IP that are co-validated with major electronic design automation (EDA), physical layer (PHY), and foundry partners to align chiplet workflows and ensure IP, tool, and foundry compatibility.
- Automate integration: Hand-stitching chiplets together is a time-consuming and error-prone nightmare. Employ tools that automate HW/SW interface definition and assembly, which is essential for fast iteration and derivative design creation.
- Use coherency only where it matters: Certain functions, such as CPU and GPU clusters, may require coherent chiplets and D2D interfaces that necessitate the use of a coherent NoC. By comparison, functions like AI/ML accelerators may be satisfied by non-coherent chiplets and D2D interfaces. These are simpler and more power-efficient and can be addressed with a non-coherent NoC.
- Reuse what works: Adopt chiplet templates that can scale across product families and incorporate proven monolithic dies alongside new multi-die IP in derivative designs.
- Accept that the ecosystem is co-evolving: Standards are years away from full maturity. And companies are just beginning to explore building modular, standard-aware designs, laying the groundwork for the ecosystem’s future.
Build now, don’t wait
Multi-die system development teams should adopt modular design principles, utilize proven IP blocks with flexible D2D support, implement automated integration tools, and embrace ecosystem-aware development flows. Designers should also collaborate with like-minded innovators, partners, and customers to deliver tomorrow’s complex systems today.
Chiplets design solutions show how multi-die architectures can be built and deployed now. They enable companies to address today’s performance and scalability needs while laying the groundwork for seamless interoperability in the future.
Andy Nightingale, VP of Product Management and Marketing at Arteris, has over 39 years of experience in the high-tech industry, including 23 years in various engineering and product management roles at Arm.
Special Section: Chiplets Design
The post Chiplet innovation isn’t waiting for perfect standards appeared first on EDN.
Socomec Expands Power Solutions Portfolio in India, Launches MASTERYS GP4 UPS and ATyS a M Automatic Transfer Switch
Socomec has announced the launch of its new advanced MASTERYS GP4 UPS and ATyS a M Automatic Transfer Switch, further strengthening its portfolio of reliable power management solutions. The launch, backed by the company’s more than 25 years in the industry, reinforces its focus on innovative, efficient technologies for modern infrastructure.
Mr. Meenu Singhal, Regional Managing Director, Socomec Innovative Power Solutions, said,
“The launch of the MASTERYS GP4 UPS and ATyS a M Automatic Transfer Switch strengthens our portfolio with solutions that drive operational continuity and efficiency. From data centres and IT rooms to commercial buildings, organisations require resilient power infrastructure to ensure uninterrupted operations and protect critical systems. These products help optimise power supply while supporting reliable performance. We remain focused on innovation and committed to delivering dependable, future-ready power solutions for our customers.”
MASTERYS GP4 UPS: Designed for Critical Power Environments
The Socomec MASTERYS GP4 200–250 kVA UPS is a high-performance uninterruptible power supply designed to ensure reliable power continuity for mission-critical environments. Built around advanced power protection and high-efficiency SiC technology, it delivers superior energy efficiency, consistent power quality, and reliable performance for data centres, industrial operations, and commercial infrastructure requiring uninterrupted operations.
•Reliable power protection: Ensures uninterrupted power for critical infrastructure such as data centres, IT rooms, industrial processes, and commercial facilities, helping maintain operational continuity during grid disturbances.
•Advanced double-conversion technology: Provides stable and high-quality power output while minimising energy losses and supporting lower CO₂ emissions.
•High efficiency and robust design: Combines high efficiency levels with a resilient architecture, leveraging advanced SiC technology to reduce downtime and support continuous operations in demanding environments.
•Optimised for modern digital infrastructure: Designed to meet the growing power reliability needs of expanding digital ecosystems and industrial facilities.
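To make the double-conversion efficiency claim concrete, here is a hedged back-of-envelope Python sketch of what a few percentage points of UPS efficiency are worth over a year. The load, efficiency figures, and comparison points are illustrative assumptions, not Socomec specifications.

```python
# Back-of-envelope look at what a few points of UPS efficiency are worth.
# All figures (load, efficiencies, hours) are illustrative assumptions,
# not Socomec specifications.

def annual_loss_kwh(load_kw, efficiency, hours=8760.0):
    """Energy dissipated inside the UPS per year at a constant load."""
    input_kw = load_kw / efficiency          # power drawn from the grid
    return (input_kw - load_kw) * hours      # everything above the load is loss

load = 200.0                                  # kW delivered to the IT load
loss_legacy = annual_loss_kwh(load, 0.94)     # hypothetical older unit
loss_sic = annual_loss_kwh(load, 0.97)        # hypothetical SiC-based unit
print(f"Saved per year: {loss_legacy - loss_sic:,.0f} kWh")
```

At a constant 200 kW load, the three-point efficiency gap works out to tens of thousands of kilowatt-hours per year, which is why converter efficiency dominates UPS operating cost.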
ATyS a M Automatic Transfer Switch: Compact, Reliable Source Switching
Socomec’s ATyS a M Automatic Transfer Switch enables automatic and seamless switching between two power sources, such as the main utility supply and a backup generator, ensuring uninterrupted power for commercial buildings, industrial facilities, and other critical installations where continuous operations are essential.
•Automatic Source Transfer: Automatically switches between the main power source and backup supply, ensuring continuity of operations during power interruptions.
•Compact Modular Design: More compact than similar solutions, enabling easier integration within electrical panels and helping save valuable installation space.
•Quick & Easy Commissioning: Integrated pre-configured controller automatically manages parameters and source transfers, reducing setup time and risk of manual error.
•Proven Reliability for Low-Voltage Installations: Designed and tested according to international standards, supporting reliable switching for commercial and industrial facilities.
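The automatic source-transfer behaviour described above can be sketched as a small state machine. The following Python snippet is an illustrative model only; the thresholds, delays, and names are assumptions, not the ATyS a M's actual control logic.

```python
# Minimal sketch of automatic transfer switch (ATS) control logic.
# Thresholds and delays are illustrative, not Socomec specifications.

NOMINAL_V = 230.0
UNDERVOLT = 0.85 * NOMINAL_V     # transfer if mains sags below 85% of nominal
RETRANSFER_DELAY = 5             # ticks mains must stay healthy before returning

class TransferSwitch:
    def __init__(self):
        self.source = "mains"    # currently selected source
        self.healthy_ticks = 0   # consecutive ticks mains has been healthy

    def step(self, mains_voltage):
        """Evaluate mains once per control tick and pick a source."""
        healthy = mains_voltage >= UNDERVOLT
        if self.source == "mains":
            if not healthy:
                self.source = "backup"   # immediate transfer on sag or outage
                self.healthy_ticks = 0
        else:
            # Only retransfer after mains is stable for the full delay,
            # avoiding oscillation on a flickering supply.
            self.healthy_ticks = self.healthy_ticks + 1 if healthy else 0
            if self.healthy_ticks >= RETRANSFER_DELAY:
                self.source = "mains"
        return self.source

ats = TransferSwitch()
print(ats.step(230.0), ats.step(100.0))   # healthy tick, then an outage tick
```

The asymmetry is the point of the sketch: transfer to backup is immediate, while retransfer to mains waits out a stabilisation delay, mirroring how real ATS controllers avoid chattering between sources.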
Socomec offers support in design and commissioning, ensuring compliant, high-performing, and sustainable electrical installations. These solutions improve power supply continuity and strengthen resilience across data centres, industrial facilities, commercial buildings, and other critical infrastructure.
The post Socomec Expands Power Solutions Portfolio in India, Launches MASTERYS GP4 UPS and ATyS a M Automatic Transfer Switch appeared first on ELE Times.
Ihor Ivitskyi: "KPI taught me to think like a scientist, and business let me scale the power of that thinking"
For the first time, a Ukrainian expert has been included in "The Top 100 Most Influential PPC Experts", the leading global ranking in digital advertising and marketing, which each year names the world's most influential experts in the field.
Lumentum demos technologies and products for scale-out, scale-up and scale-across applications
Lumentum showcases optical scale-up demo at OFC using VCSELs
EDOM Technology Strengthens Its Role in Integrating the Physical AI Ecosystem
EDOM Technology continues to expand its edge computing and intelligent system integration capabilities. Built on NVIDIA IGX Thor Development Kit, EDOM enables industries to adopt safety-critical computing platforms, accelerating the deployment of intelligent devices and autonomous computing systems while strengthening the physical AI competitiveness of Taiwan and the broader Asia-Pacific region.
With extensive system integration expertise, EDOM provides end-to-end hardware and software architecture planning, complemented by value-added engineering services that help customers streamline product development and deployment. Beyond computing platform advisory, EDOM also provides specialised services for safety-critical and sensor-rich edge environments, including functional safety architecture consulting, peripheral sensor and control module selection and integration, as well as ecosystem partner solution enablement. Through these capabilities and close collaboration with ecosystem partners, EDOM helps strengthen the industry value chain, lower technical integration barriers, and accelerate customers’ time to market.
The NVIDIA IGX Thor Platform is powered by NVIDIA Blackwell GPU architecture and supports NVIDIA Multi-Instance GPU (MIG) technology, enabling multiple AI workloads to run concurrently for improved resource utilisation. Designed to deliver high-performance edge AI computing, the platform achieves up to 5,581 FP4 TFLOPS of AI compute performance. Integrated with an Arm Neoverse CPU architecture, NVIDIA IGX Thor balances high-throughput AI inference with real-time data processing requirements, making it well-suited for deployment in a safety-critical environment.
NVIDIA IGX Thor Developer Kit also supports functional safety architecture implementation, incorporating compute monitoring and protection mechanisms to enhance long-duration operational stability. It also provides high-speed I/O connectivity and diverse expansion interfaces, facilitating integration with a wide range of sensing devices and industrial control modules. These capabilities address key edge computing requirements, including low latency, high reliability, and multi-sensor data integration.
As intelligent industry applications extend from cloud computing environments to physical devices, NVIDIA IGX Thor is well-suited for deployment in smart healthcare, industrial automation, and autonomous robotics. In smart manufacturing and industrial inspection scenarios, it supports real-time quality monitoring, predictive maintenance, and intelligent production line management. In healthcare environments, it enables high-precision imaging analysis and clinical decision support workloads. For autonomous mobile machines and service robots, its multi-sensor data fusion and real-time inference capabilities enhance navigation accuracy and safe obstacle avoidance.
Looking ahead, EDOM will continue to deepen its physical AI ecosystem integration services by combining hardware value-added integration expertise with an open partner collaboration model. Working alongside technology providers, system integrators, and application developers, EDOM aims to accelerate the deployment of edge AI computing solutions across diverse industry scenarios. By supporting customers from proof-of-concept validation through commercial rollout, EDOM enables the realisation of high-value AI-driven solutions and advances the evolution of next-generation smart industry value chains.
The post EDOM Technology Strengthens Its Role in Integrating the Physical AI Ecosystem appeared first on ELE Times.
Indian HVAC Market Poised to Double in Five Years with 15% Annual Growth
Industry leaders at ACREX India 2026 highlight that the Indian HVAC sector is poised for significant expansion, with the market expected to grow at 15% annually and potentially double within five years.
The industry is shifting toward local manufacturing and AI-driven predictive maintenance to capture massive growth potential in residential and infrastructure sectors. With residential AC penetration at just 10%, leaders are prioritising sustainability through humidity-optimised, super-efficient systems that can cut energy use by 60%. Ultimately, the sector is evolving beyond equipment sales to focus on the entire system lifecycle, emphasising energy efficiency, indoor air quality, and environmental impact.
Organised by ISHRAE, ACREX India 2026 served as a global hub where more than 400 exhibitors representing 40 nations and over 30,000 attendees gathered for South Asia’s premier HVAC and intelligent building exhibition. During the event, prominent industry leaders such as LG, Carrier, Daikin, Voltas, Danfoss, Schneider Electric, Panasonic, Johnson Controls and Tecumseh presented their latest advancements in next-generation cooling technology.
Speaking at the event, Mr Mukundan Menon, Managing Director, Voltas Limited, said, “The Indian HVAC industry is at the cusp of significant expansion, with the sector expected to grow at nearly 15% annually, potentially doubling within the next five years. Currently, about 15 million residential AC units are sold in India each year, and this is projected to reach around 30 million units by 2030. On the commercial side, India continues to build rapidly, creating strong opportunities across data centres, district cooling and infrastructure development. The recent GST reduction on ACs from 28% to 18% is a welcome policy step that will stimulate demand, with the first visible impact expected during the summer of 2026.”
Emphasizing the strong transformation and long-term growth potential of the HVAC industry in India, Mr Ravichandran Purushothaman, President, Danfoss Industries Private Limited, said, “Over the past three years, the HVAC industry in India has nearly doubled in size, with a significant shift toward local manufacturing, reflecting the momentum of Atmanirbhar Bharat and the government’s focus on reducing import dependence in the cooling sector. Looking ahead, the opportunity is substantial. As per the India Cooling Action Plan, cooling demand in India is expected to grow eightfold by 2038. On the commercial side, rapid expansion in semiconductor facilities, advanced manufacturing and data centres is driving demand for high-performance cooling solutions. At the same time, the industry is focusing on energy-efficient, water-efficient and carbon-efficient technologies, increasing localisation in electronics and strengthening new skill capabilities.”
Mr Jayanta Kumar Das, Society President, ISHRAE, said, “ACREX India brings the entire HVAC ecosystem onto one platform, enabling companies to showcase innovations and engage with the broader industry community. As cooling demand grows rapidly in India, the focus must move beyond equipment to the entire lifecycle of systems, where installation, operation and maintenance account for nearly 90% of the total cost. Through nationwide training programs, research and industry partnerships across 55+ locations, ISHRAE is working to strengthen skills, encourage innovation and support the sector’s mission of delivering more cooling with lower energy consumption and reduced environmental impact.”
Mr Amod Dikshit, Chairman, ACREX India, said, “India’s rapid infrastructure expansion and growing dependence on cooling across sectors such as data centres, district cooling, airports, hospitals, hotels and metro rail projects is creating a significant opportunity for the HVAC industry over the next decade. As this demand accelerates, the industry is focusing on localising the production of key sub-assemblies, strengthening capabilities and advancing more energy-efficient technologies. Cooling already accounts for nearly 40% of India’s electricity demand, which makes efficiency and sustainability critical priorities. With the Bureau of Energy Efficiency progressively upgrading standards every two years by about 7–10%, the industry continues to move towards more energy-efficient solutions.”
Ricardo Maciel, CEO of Tecumseh, said, “India is one of the world’s fastest-growing HVAC and refrigeration markets, fueled by urbanisation, food security, and expanding cold chain infrastructure. At ACREX, we highlighted our commitment to the Indian market through advanced, sustainable technology and strengthened local manufacturing. By combining global engineering with local production, we are meeting the market’s growing demand for energy efficiency, helping customers lower operating costs while supporting India’s long-term sustainability goals. Tecumseh introduced the new TC3 premium-efficiency compressor platform, ranging from 3 to 12 cc, delivering more than 30% energy savings compared to current platforms.”
Mr. Sanjeev Seth, Sr Vice President and Business Head, Systems Air Conditioning Division, LG Electronics India Limited, said, “India’s HVAC sector is on a strong growth trajectory, driven by rapid urbanisation, infrastructure expansion and increasing demand for efficient climate control. At the same time, AI is beginning to revolutionise HVAC system management in India, as customers seek higher energy efficiency and reduced downtime. Technologies such as predictive maintenance, cloud-based remote monitoring and intelligent controls are improving reliability and optimising energy consumption in VRF and chiller systems. This integration of digital technologies will significantly boost system performance and operational efficiency. At ACREX India 2026, LG Electronics India showcased its latest intelligent HVAC innovations designed for India’s evolving cooling landscapes.”
Mr. Abhishek Verma, Head – Products Marketing & Planning, Panasonic Life Solutions India Pvt. Ltd., said, “The air conditioning industry in India continues to offer strong growth potential. With residential AC penetration at around 8%, the segment has significant headroom for expansion and is expected to grow at a CAGR of nearly 15%, while the commercial AC market is also witnessing robust demand. As climate needs evolve, energy efficiency and indoor air quality are becoming key priorities. At Panasonic, we are advancing AI-driven cooling technologies to enhance energy efficiency without compromising comfort. At ACREX India, Panasonic showcased these intelligent and sustainable cooling solutions for India’s evolving needs.”
Globally, the HVAC industry is entering a significant expansion phase, with the market projected to reach nearly $445 billion by 2033, while HVAC systems already account for close to 40% of total building energy consumption worldwide. In response, the industry is rapidly shifting toward sustainable and energy-efficient technologies. Innovations such as AI-enabled smart HVAC systems, Variable Refrigerant Flow (VRF) technologies, natural refrigerants, district cooling systems, and advanced data centre cooling solutions are transforming the sector.
The post Indian HVAC Market Poised to Double in Five Years with 15% Annual Growth appeared first on ELE Times.
This is so easy. First board, first try
submitted by /u/IHaveThreeBedrooms
DIY Lighthouse tracker using custom PCB and ESP32-C3
Hey everyone! For this I built a custom PCB in the simplest way possible, as I am still quite a beginner in electronics. I am using 2 BPW-34 photodiodes; they have no IR filter built in, so I'm using floppy disk film as a cheap IR bandpass, which works surprisingly well. To amplify and filter the signal I used an op-amp, since better options such as the TS4231 were not easily sourceable for me; most of these chips seem to be sold out or hard to come by. But even with just that, a very basic tracking setup that captures the laser pulses from the lighthouse worked!
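For readers curious how captured laser pulses become position data, here is a hedged Python sketch of the basic Lighthouse (SteamVR v1-style) angle decoding. The 60 Hz rotor rate is the commonly cited figure for these base stations; the function name and example values are mine, not from the post.

```python
# Hedged sketch of Lighthouse (SteamVR v1-style) sweep-angle decoding.
# The rotor spins at a commonly cited 60 Hz, so the delay between the
# sync flash and the sweeping laser hitting the photodiode maps
# linearly to an angle around the base station.

ROTOR_HZ = 60.0   # one full laser-sweep revolution every 1/60 s

def sweep_angle_deg(t_sync_s, t_hit_s):
    """Angle of the photodiode from sweep start, from pulse timestamps."""
    dt = t_hit_s - t_sync_s          # time between sync flash and laser hit
    return (dt * ROTOR_HZ * 360.0) % 360.0

# A hit 1/240 s after the sync flash sits a quarter turn into the sweep:
print(sweep_angle_deg(0.0, 1.0 / 240.0))   # ~90 degrees
```

Two such angles from perpendicular rotors give a ray from the base station to the sensor, which is why even a two-photodiode board like the one in the post can recover useful tracking information.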
KPI students win the Brave1 engineering challenge
The "Oleniachi Rohy" ("Deer Antlers") team from the Faculty of Electronics (FEL) of Igor Sikorsky Kyiv Polytechnic Institute won the Brave1 engineering challenge held as part of the Brave Students initiative.



