Feed aggregator

Infineon Upgrades Its Control MCUs for Post-Quantum Cryptography Transition

AAC - 14 min 6 sec ago
Announced today, the new MCUs future-proof industrial and data center systems in anticipation of the post-quantum world.

OpenLight raises $34m in Series A funding round to scale integrated photonics for AI data centers

Semiconductor Today - 17 min 18 sec ago
Photonic application-specific integrated circuit (PASIC) chip designer and manufacturer OpenLight of Santa Barbara, CA, USA (which launched as an independent company in June 2022, introducing the first open silicon photonics platform with heterogeneously integrated III-V lasers, modulators, amplifiers and detectors) has closed its oversubscribed $34m Series A fundraising round, which was co-led by Xora Innovation and Capricorn Investment Group. Other participants include Mayfield; Juniper Networks (now part of HPE); Lam Capital (the corporate venture arm of Lam Research Corp); New Legacy Ventures; and K2 Access...

Top 10 Federated Learning Companies in India

ELE Times - 1 hour 46 min ago

Federated learning is transforming AI’s potential in India by allowing models to be trained without compromising the privacy of decentralized data. It is critically important in healthcare, finance, and consumer technology, where industries increasingly need AI that is secure, compliant with regulations, and privacy-preserving. With a flourishing technology ecosystem and a strong pool of AI talent, India is emerging as a leader in this field. This article discusses the top 10 companies in India that focus on federated learning.

  1. TCS Research

As the innovation wing of Tata Consultancy Services, TCS Research works with federated learning for enterprise AI. Its initiatives cover healthcare, banking, and smart city projects, centering on the safe training of models over distributed data silos.

  2. Wipro HOLMES

Wipro’s AI platform uses federated learning to provide intelligent automation and edge AI. Its applications in telecommunications, manufacturing, and IT services aid the development of AI models without eroding data privacy.

  3. Infosys Nia

An all-in-one AI platform, Infosys Nia also offers federated learning for decentralized data modeling, which is especially beneficial in retail and finance, where data sensitivity is high and compliance is critical.

  4. SigTuple

Headquartered in Bengaluru, SigTuple is a health tech company that employs federated learning to streamline the analysis of medical images and diagnostics while maintaining patient data privacy. Its AI solutions not only save time but also improve the decision-making of pathologists and radiologists.

  5. Qure.ai

With over a decade of specialization in AI-driven radiology, Qure.ai is a clear leader. It is a notable example of applying federated learning in radiology, advancing diagnostic precision while safeguarding critical medical information.

  6. Vaidik AI

Vaidik AI marks a new chapter in India’s federated learning story. It offers an extensive selection of AI services, including LLM fine-tuning and multilingual AI, and is well known for its multidisciplinary expertise in data annotation and its privacy-first approach to AI model development. It provides the healthcare, finance, and education sectors with economical, scalable solutions.

  7. ActionLabs AI

ActionLabs AI, based in Bengaluru, works with federated learning, edge AI, and generative model development. Though healthcare and fintech startups appear to be its primary focus, the company’s small size allows it to cater efficiently to a wider range of clients.

  8. Accenture India

Accenture adapts federated learning to its Responsible AI framework, assisting clients ranging from the energy sector to public services in securely training models on decentralized data.

  9. Fractal Analytics

Fractal applies federated learning to generate consumer insights for retail and CPG. Its solutions enable brands to analyze consumer behavior without pooling sensitive data.

  10. Intel India

Intel India, with its offices in Bengaluru and Hyderabad, is pivotal in advancing federated learning as it refines secure hardware platforms such as Trusted Execution Environments (TEEs) and furthers AI research through Intel Labs. It also champions privacy-preserving AI in healthcare, smart cities, and edge computing.

Conclusion:

The federated learning ecosystem in India is evolving rapidly, with global technology leaders such as Intel alongside innovative local startups such as ActionLabs AI, Vaidik AI, and SigTuple. These firms not only expand the frontiers of privacy-preserving AI but also position the ecosystem to thrive on data collaboration without security risks. With growing demand across healthcare, finance, and edge computing, federated learning is becoming a cornerstone of ethical AI development in India.


Cadence Accelerates Development of Billion-Gate AI Designs with Innovative Power Analysis Technology Built on NVIDIA

ELE Times - 2 hours 2 min ago

New Cadence Palladium Dynamic Power Analysis App enables designers of AI/ML chips and systems to create more energy-efficient designs and accelerate time to market

Cadence announced a significant leap forward in the power analysis of pre-silicon designs through its close collaboration with NVIDIA. Leveraging the advanced capabilities of the Cadence Palladium Z3 Enterprise Emulation Platform and the new Cadence Dynamic Power Analysis (DPA) App, Cadence and NVIDIA have achieved what was previously considered impossible: hardware-accelerated dynamic power analysis of billion-gate AI designs, spanning billions of cycles within a few hours with up to 97 percent accuracy. This milestone enables semiconductor and systems developers targeting AI, machine learning (ML) and GPU-accelerated applications to design more energy-efficient systems and accelerate their time to market.

The massive complexity and computational requirements of today’s most advanced semiconductors and systems present a challenge for designers, who have until now been unable to accurately predict their power consumption under realistic conditions. Conventional power analysis tools cannot scale beyond a few hundred thousand cycles without requiring impractical timelines. In close collaboration with NVIDIA, Cadence has overcome these challenges through hardware-assisted power acceleration and parallel processing innovations, enabling previously unattainable precision across billions of cycles in early-stage designs.

“Cadence and NVIDIA are building on our long history of introducing transformative technologies developed through deep collaboration,” said Dhiraj Goswami, corporate vice president and general manager at Cadence. “This project redefined boundaries, processing billions of cycles in as few as two to three hours. This empowers customers to confidently meet aggressive performance and power targets and accelerate their time to silicon.”

“As the era of agentic AI and next-generation AI infrastructure rapidly evolves, engineers need sophisticated tools to design more energy-efficient solutions,” said Narendra Konda, vice president, Hardware Engineering at NVIDIA. “By combining NVIDIA’s accelerated computing expertise with Cadence’s EDA leadership, we’re advancing hardware-accelerated power profiling to enable more precise efficiency in accelerated computing platforms.”

The Palladium Z3 Platform uses the DPA App to accurately estimate power consumption under real-world workloads, allowing functionality, power usage and performance to be verified before tapeout, when the design can still be optimized. Especially useful in AI, ML and GPU-accelerated applications, early power modeling increases energy efficiency while avoiding delays from over- or under-designed semiconductors. Palladium DPA is integrated into the Cadence analysis and implementation solution to allow designers to address power estimation, reduction and signoff throughout the entire design process, resulting in the most efficient silicon and system designs possible.


Here Comes the First Industrial Edge AI Computer Built on Raspberry Pi

AAC - 10 hours 14 min ago
Sixfab’s ALPON X5 AI aims to make real-world edge AI deployment faster, cheaper, and far less frustrating.

CGD appoints Robin Lyle as VP R&D

Semiconductor Today - Tue, 08/26/2025 - 23:04
Fabless firm Cambridge GaN Devices Ltd (CGD) — which was spun out of the University of Cambridge in 2016 to design, develop and commercialize power transistors and ICs that use GaN-on-silicon substrates — has appointed Robin Lyle, a 30-year veteran of the power semiconductor industry, as vice president of R&D...

Toshiba and SICC sign MOU on silicon carbide power semi wafer collaboration

Semiconductor Today - Tue, 08/26/2025 - 22:59
Toshiba Electronic Devices & Storage Corp of Kawasaki, Japan and Chinese silicon carbide (SiC) supplier SICC Co Ltd have signed a memorandum of understanding (MOU) to explore collaboration in improving the characteristics and quality of SiC power semiconductor wafers developed and manufactured by SICC, and in expanding the supply of stable, high-quality wafers from SICC to Toshiba. The two firms will discuss the scope of their joint efforts and mutual support...

Infineon introduces 75mΩ industrial CoolSiC MOSFETs 650V G2 for medium-power applications with high power density

Semiconductor Today - Tue, 08/26/2025 - 22:50
Infineon Technologies AG of Munich, Germany is expanding its CoolSiC MOSFETs 650V G2 portfolio with new 75mΩ variants to meet the demand for more compact and powerful systems...

Microchip Unveils Bottleneck-Busting RAID Storage Accelerator Cards

AAC - Tue, 08/26/2025 - 20:00
The new series targets CPU-attached NVMe deployments with disaggregated architecture and hardware offload.

❤️ UAH 500,000 for the best digital solutions for communities: EGAP Ideathon 2025

News - Tue, 08/26/2025 - 17:14

The East Europe Foundation, within its flagship EGAP Program implemented with the support of Switzerland, announces registration for EGAP Ideathon 2025, a national competition of ideas for improving the СВОЇ and e-DEM digital platforms.

Simple but accurate 4 to 20 mA two-wire transmitter for PRTDs

EDN Network - Tue, 08/26/2025 - 15:40

Accurate, inexpensive, and mature platinum resistance temperature detectors (PRTDs) with an operating range extending from the cryogenic to the incendiary are a gold (no! platinum!) standard for temperature measurement.

Similarly, the 4 to 20 mA analog current loop is a legacy, but still popular, noise- and wiring-resistance-tolerant interconnection method with good built-in fault detection and transmitter “phantom-power” features.


Figure 1 combines them in a simple, cheap, and cheerful temperature sensor using just eight off-the-shelf (OTS) parts, counting the PRTD. Here’s how it works.

Figure 1 PRTD current-loop sensor with Ix = 500 µA constant-current excitation.
Ix = 2.5 V/R2; PRTD resistance = R1(Io/Ix – 1)
R1 and R2 are 0.1% tolerance (ideally)

The key to measurement accuracy is the 2.50-V LM4040x25 shunt reference, available with accuracy grade suffixes of 0.1% (x = A), 0.2% (B), 0.5% (C), and 1% (D). The “B” grade is consistent (just barely) with a temperature measurement accuracy of ±0.5°C.

R1 and R2 should have similar precision. R2 throttles the 2.5 V to provide Ix = 2.5/R2 = 500 µA of excitation to T1. Because A1 continuously servos the output current Io to hold pin 3 = pin 4 = the LM4040 anode, the 2.5 V across R2 is held constant, and therefore so is Ix.

Thus, the voltage across output sense resistor R1 is forced to Vr1 = Ix(Rprtd) and Io = Ix(Rprtd/R1 + 1). This makes Io/Ix = Rprtd/R1 + 1 and Rprtd/R1 = Io/Ix – 1 for Rprtd = R1(Io/Ix – 1).

Wrapping it all up with a bow: Rprtd = R1(Io/(2.5/R2) – 1). Note that accommodation of different Rprtd resistance (and therefore temperature) ranges is a simple matter of choosing different R1 and/or R2 values.

Conversion of the Io reading to Rprtd is an easy chore in software, and the step from there to temperature isn’t much worse, thanks to Callendar-Van Dusen math.
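As a rough illustration of that software step, here is a minimal Python sketch that converts a loop-current reading Io to Rprtd using the article’s Rprtd = R1(Io/(2.5/R2) − 1) relation, then to temperature with the positive-temperature branch of the Callendar-Van Dusen equation. The R1 value and the Pt1000 element are assumptions chosen only to make the example self-consistent; scale them to the actual design.

```python
import math

# Circuit relations from the article: Ix = 2.5 V / R2 (= 500 uA in Figure 1),
# Rprtd = R1 * (Io/Ix - 1). R1 and the sensor type below are illustrative only.
R2 = 5000.0                    # ohms -> Ix = 2.5 / 5000 = 500 uA
R1 = 100.0                     # ohms (assumed; choose R1, R2 for the desired span)
R0 = 1000.0                    # ohms at 0 degC (assumes a Pt1000 element)
A, B = 3.9083e-3, -5.775e-7    # standard IEC 60751 Callendar-Van Dusen coefficients

def io_to_rprtd(io_amps: float) -> float:
    """Loop current (A) -> PRTD resistance (ohms): Rprtd = R1*(Io/Ix - 1)."""
    ix = 2.5 / R2
    return R1 * (io_amps / ix - 1.0)

def rprtd_to_temp_c(r_ohms: float) -> float:
    """Invert R(T) = R0*(1 + A*T + B*T^2) for the T >= 0 degC branch."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r_ohms / R0))) / (2.0 * B)

io = 10e-3                     # example: a 10 mA loop reading
r = io_to_rprtd(io)
print(f"Io = {io*1e3:.1f} mA -> Rprtd = {r:.1f} ohm -> T = {rprtd_to_temp_c(r):.1f} degC")
```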

Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.



Filtronic secures record order from SpaceX

Semiconductor Today - Tue, 08/26/2025 - 15:13
Filtronic plc of Sedgefield and Leeds, UK — which designs and manufactures RF and millimeter-wave (mmWave) transmit & receive components and subsystems — has secured its largest ever contract, valued at £47.3m ($62.5m), with its long-standing customer SpaceX, for the Starlink high-speed internet service...

Skyworks names Phil Carter as CFO

Semiconductor Today - Tue, 08/26/2025 - 14:51
Skyworks Solutions Inc of Irvine, CA, USA (which manufactures analog and mixed-signal semiconductors) says that Philip Carter has been appointed senior VP & chief financial officer, effective 8 September, responsible for financial strategy, investor relations, treasury and leadership of the global finance and information technology organizations...

Navitas names Chris Allexandre as president & CEO and board member

Semiconductor Today - Tue, 08/26/2025 - 14:42
Gallium nitride (GaN) power IC and silicon carbide (SiC) technology firm Navitas Semiconductor Corp of Torrance, CA, USA has appointed Chris Allexandre as president & chief executive officer, effective 1 September. He will also join the board of directors...

Top 10 Federated Learning Algorithms

ELE Times - Tue, 08/26/2025 - 14:16

Federated Learning (FL) has been called a revolutionary approach to machine learning because it enables collaborative model training across decentralized devices while preserving data privacy. Instead of transferring data to a centralized server for training, devices train locally and share only their model updates. This makes FL applicable in sensitive areas like healthcare, finance, and mobile applications. As federated learning continues to evolve, an increasingly diverse array of algorithms has emerged, each designed to enhance communication efficiency, boost model accuracy, and strengthen resilience against data heterogeneity and adversarial challenges. This article covers the types, examples, and top 10 federated learning algorithms.

Types of federated learning algorithms:

Federated Learning algorithms are classified by how the data is distributed, by the system structure, and by the privacy requirements. Horizontal FL covers clients that share the same features but hold distinct data points. Vertical FL covers the case where the features differ but the clients overlap. When both users and features differ, Federated Transfer Learning is used. Decentralized FL, as opposed to Centralized FL, does not use a central server and instead relies on peer-to-peer communication. In terms of deployment, Cross-Silo FL involves powerful participants such as hospitals and banks, while Cross-Device FL targets lightweight devices such as smartphones. In addition, Privacy-Preserving FL protects user data with encryption, differential privacy, and other techniques, and Robust FL attempts to protect the system from malicious, adversarial, or broken clients.

Examples of federated learning algorithms:

A number of algorithms have been created to overcome challenges specific to federated learning. The baseline approach is FedAvg, which averages the clients’ locally trained models. FedProx, designed to cope with data heterogeneity, is a more advanced approach. For personalization, FedPer customizes the top layers for each client, and pFedMe applies meta-learning techniques. Communication-efficient algorithms like SCAFFOLD and FedPAQ reduce bandwidth usage and client drift. Robust algorithms such as Krum, Bulyan, and RFA filter out malicious or noisy updates to maintain model integrity. Privacy-focused methods like DP-FedAvg and Secure Aggregation ensure data confidentiality during training. These algorithms are often tailored or combined to suit specific domains like healthcare, finance, and IoT.

Top 10 Federated Learning Algorithms:

  1. Federated Averaging (FedAvg)

FedAvg is the founding algorithm of federated learning. Models are trained locally on each client, and their weights are then averaged to update the global model. Thanks to its simple design and ease of scaling, it has been widely implemented in practice.
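For concreteness, the following is a minimal, framework-free sketch of FedAvg on a toy linear-regression problem. The model, the local trainer, the client data, and all hyperparameters are illustrative assumptions, not part of the article.

```python
import numpy as np

def local_sgd(global_w, X, y, lr=0.1, epochs=5):
    """One client's local training: a few full-batch gradient steps on a linear
    least-squares model (a stand-in for any local trainer)."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """clients: list of (X, y). Returns the data-size-weighted average of the
    locally trained weights -- the FedAvg aggregation step."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_sgd(global_w, X, y))
        sizes.append(len(y))
    return np.average(np.stack(updates), axis=0, weights=np.asarray(sizes, float))

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for n in (40, 60, 100):                    # three clients with different data sizes
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.1 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):                        # 20 communication rounds
    w = fedavg_round(w, clients)
print("global model after 20 FedAvg rounds:", w)
```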

  2. FedProx

FedProx builds upon FedAvg by introducing a proximal term in the loss function. By penalizing local updates that diverge too much from the global model, this term helps stabilize training in settings with widely differing client data distributions. It is especially helpful in fields like healthcare and finance, where heterogeneous data is prevalent.
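Below is a sketch of the FedProx local objective, reusing the toy linear model from the FedAvg sketch above; the proximal weight mu is an illustrative choice. Server-side aggregation stays the same as in FedAvg — only the local update changes.

```python
import numpy as np

def fedprox_local_update(global_w, X, y, mu=0.1, lr=0.1, epochs=5):
    """Minimize local loss + (mu/2) * ||w - global_w||^2 by gradient descent."""
    w = global_w.copy()
    for _ in range(epochs):
        grad_loss = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of the local MSE loss
        grad_prox = mu * (w - global_w)               # pulls w back toward the global model
        w -= lr * (grad_loss + grad_prox)
    return w
```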

  3. FedNova (Federated Normalized Averaging)

To address client drift, FedNova normalizes updates with respect to the number of local steps and the learning rate. This ensures each client contributes equally to the global model regardless of its computational capability or data volume, which favors convergence and fairness in heterogeneous setups.

  4. SCAFFOLD

SCAFFOLD, an abbreviation for Stochastic Controlled Averaging for Federated Learning, employs control variates to correct the clients’ updates. This limits the variance caused by non-IID data and speeds up convergence. It is particularly effective in edge computing environments, where data come from diverse sources.
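Here is a rough sketch of a SCAFFOLD-style corrected local step on the same toy model, where c is the server control variate and c_i the client’s; the control-variate refresh follows the simpler “option II” rule from the SCAFFOLD paper. Treat it as an illustration rather than a full implementation.

```python
import numpy as np

def scaffold_local_update(global_w, c, c_i, X, y, lr=0.1, steps=5):
    """Local steps use the variance-corrected gradient g - c_i + c."""
    w = global_w.copy()
    for _ in range(steps):
        g = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * (g - c_i + c)
    # Refresh the client control variate from the net movement (option II):
    # c_i+ = c_i - c + (global_w - w) / (steps * lr)
    c_i_new = c_i - c + (global_w - w) / (steps * lr)
    return w, c_i_new
```

The server then averages the returned weights as in FedAvg and folds the clients’ control-variate changes into c.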

  5. MOON (Model-Contrastive Federated Learning)

MOON brings contrastive learning into FL by aligning local and global model representations. It enforces model consistency, which is particularly necessary when client data are highly divergent. MOON is often used for image and text classification tasks with very heterogeneous user bases.

  6. FedDyn (Federated Dynamic Regularization)

FedDyn incorporates a dynamic regularization term in the loss function to enable the global model to accommodate client-specific updates better. Because of this, it can withstand situations involving extremely diverse data, such as user-specific recommendation systems or personalized healthcare.

  7. FedOpt

FedOpt replaces vanilla averaging with advanced server-side optimizers like Adam, Yogi, and Adagrad. These optimizers lead to faster and more stable convergence, which is paramount in deep learning tasks with large neural networks.
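A sketch of the FedOpt idea with Adam as the server optimizer: the averaged client update is treated as a “pseudo-gradient” and fed to an Adam step instead of being applied by plain averaging. All hyperparameters here are illustrative.

```python
import numpy as np

class ServerAdam:
    """Server-side Adam applied to the averaged client update (FedAdam-style sketch)."""
    def __init__(self, dim, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m, self.v, self.t = np.zeros(dim), np.zeros(dim), 0

    def step(self, global_w, client_weights):
        # Pseudo-gradient: how far the server model sits from the averaged client model.
        pseudo_grad = global_w - np.mean(np.stack(client_weights), axis=0)
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * pseudo_grad
        self.v = self.b2 * self.v + (1 - self.b2) * pseudo_grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)
        v_hat = self.v / (1 - self.b2 ** self.t)
        return global_w - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```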

  8. Per-FedAvg (Personalized Federated Averaging)

Per-FedAvg aims to balance global generalization with local adaptation by allowing clients to fine-tune the global model locally. This makes it suitable for personalized recommendations, mobile apps, and wearable health monitors.

  9. FedMA (Federated Matched Averaging)

The distinguishing feature of this method is the matching of neurons across client models before averaging. This retains the architecture of a deep neural network and hence allows for much more meaningful aggregation, especially for convolutional and recurrent architectures.

  10. FedSGD (Federated Stochastic Gradient Descent)

A simpler alternative to FedAvg, FedSGD sends gradients instead of model weights. It’s more communication-intensive but can be useful when frequent updates are needed or when model sizes are small.
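A minimal sketch of one FedSGD round on the same toy linear model as above: each client sends a single gradient, and the server applies one aggregated step per communication round. The model and hyperparameters are illustrative.

```python
import numpy as np

def fedsgd_round(global_w, clients, lr=0.1):
    """clients: list of (X, y). Each client sends one local gradient; the server
    takes a data-size-weighted average and performs a single global step."""
    grads, sizes = [], []
    for X, y in clients:
        grads.append(2.0 * X.T @ (X @ global_w - y) / len(y))
        sizes.append(len(y))
    avg_grad = np.average(np.stack(grads), axis=0, weights=np.asarray(sizes, float))
    return global_w - lr * avg_grad
```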

Conclusion:

These algorithms represent the cutting edge of federated learning, each tailored to address specific challenges like data heterogeneity, personalization, and communication efficiency. As FL continues to grow in importance, especially in privacy-sensitive domains, these innovations will be crucial in building robust, scalable, and ethical AI systems.


Integrated voltage regulator (IVR) for the AI era

EDN Network - Tue, 08/26/2025 - 13:20

A new integrated voltage regulator (IVR) claims to expand the limits of current density, conversion efficiency, voltage range, and control bandwidth for artificial intelligence (AI) processors without hitting thermal and space limits. This chip-scale power converter can sit directly within the processor package to free up board and system space and boost current density for the most power-hungry digital processors.

Data centers are grappling with rising energy costs as AI workloads scale with modern processors demanding over 5 kW per chip. That’s more than ten times what CPUs and GPUs required just a few years ago. Not surprisingly, therefore, in a data center, power can account for more than 50% of the total cost of ownership.

“This massive jump in power consumption of data centers calls for a fundamental rethink of power delivery networks (PDNs),” said Noah Sturcken, co-founder and CEO of Ferric. He claims that his company’s new IVR addresses both the chip-level bottleneck and the system-level PDN challenge in one breakthrough.

Fe1766—a single-output, self-contained power system-on-chip (SoC)—is a 16-phase interleaved buck converter with a fully integrated, high-switching-frequency powertrain in which high-performance FETs and capacitors drive ferromagnetic power inductors.

Figure 1 The new IVR features a digital interface that provides complete power management and monitoring with fast and precise voltage control, fast transient response times, and high bandwidth regulation. Source: Ferric

Fe1766 delivers 160 A in an 8 × 4.4 mm form factor to bolster power density and reduce board area, layout complexity, and component count. The new IVR achieves one to two levels of miniaturization compared to a traditional DC/DC converter by taking a collection of discrete components that would otherwise be laid out on a motherboard and replacing them with a much smaller chip-scale power converter.

Moreover, these IVRs can be directly integrated into the packaging of a processor, which improves the efficiency of the PDN by reducing transmission losses. It also brings the power converter much closer to the processor, leading to a cleaner power supply and a reduction in board area. “That means more processing can occur in the same space, and in some cases, design engineers can place a second processor in the same space,” Sturcken added.

Fe1766 claims to provide more power within the processor package while cutting energy losses through vertical power delivery. That makes it highly suitable for ultra-dense AI chips like GPUs. AI chip suppliers like Marvell have already started embedding IVRs in their processor designs.

Figure 2 Marvell has started incorporating IVRs in its AI processor packages. Source: Ferric

Ferric, which specializes in advanced power conversion technologies designed to optimize power delivery in next-generation compute, aims to establish a new benchmark for integrated power delivery in the AI era. And it’s doing that by providing dynamic control over power at the core level.



Hon’ble PM Shri. Narendra Modi to inaugurate fourth edition of SEMICON India 2025

ELE Times - Tue, 08/26/2025 - 12:33
  • Bharat set to welcome delegates from 33 Countries, 50+ CXOs, 350 Exhibitors
  • At country’s biggest Semiconductors & Electronics Show in New Delhi from 2-4 September 2025
  • 50+ Eminent Global Visionary Speakers
  • Event To Highlight Robust Local Semiconductor Ecosystem Expansion and Industry Trends

The fourth edition of SEMICON India 2025 will be officially inaugurated by Hon’ble Prime Minister Shri. Narendra Modi on 2nd September 2025 at Yashobhoomi (India International Convention and Expo Centre), New Delhi. Staying true to its legacy of positioning India as a global Semiconductor powerhouse, the fourth edition of SEMICON India 2025 will convene key stakeholders including global leaders, semiconductor industry experts, academia, government officials and students.

Under the Semicon India program, 10 strategic projects have been approved across high-volume fabs, 3D heterogeneous packaging, compound semiconductors (including SiC), and OSATs, marking a significant milestone for the country. Recognizing semiconductors as a foundational technology, over 280 academic institutes and 72 startups have been equipped with state-of-the-art design tools, while 23 startups have already been approved under the DLI scheme. These initiatives are driving innovations in critical applications such as CCTV systems, navigation chips, motor controllers, communication devices, and microprocessors—strengthening India’s journey towards Atmanirbhar Bharat.

Accelerating India’s semiconductor revolution, SEMI, the global industry association promoting the semiconductor industry, and the India Semiconductor Mission (ISM), Ministry of Electronics and Information Technology (MeitY), announced the programming for SEMICON India 2025 at a press conference held in the national capital.

Under the theme Building the Next Semiconductor Powerhouse, the event will offer valuable insights into innovations and trends in key areas such as fabs, advanced packaging, smart manufacturing, AI, supply chain management, sustainability, workforce development, design and startups, along with 6 country roundtables.

The SEMICON India exhibition will feature nearly 350 exhibitors from across the global semiconductor value chain, including 6 country roundtables, 4 country pavilions, participation from 9 states and over 15,000 expected visitors, providing South Asia’s single largest platform for showcasing the latest advancements in the semiconductor and electronics industries, said Shri S Krishnan, Secretary, MeitY.

“SEMI is bringing the combined expertise and capabilities of our member companies across the global electronics design and manufacturing supply chain to SEMICON India, helping to advance both India’s semiconductor ecosystem expansion and industry supply chain resiliency,” said Ajit Manocha, President and CEO, SEMI. “The event will feature signature SEMICON opportunities for professional networking, business development, and insights into technology and market trends from a star-studded lineup of leading industry experts.”

SEMICON India 2025 is designed to maximize technological advancements in the semiconductor and electronics domain and highlight India’s policies aimed at strengthening its semiconductor ecosystem.

The event is a remarkable convergence of ideas, collaboration and innovation, and provides a unique opportunity to address the complex challenges of tomorrow while fostering collaboration across the semiconductor ecosystem. We are looking forward to an astounding level of participation this year, said Shri Amitesh Kumar Sinha, Additional Secretary, MeitY and CEO, ISM.

“India’s semiconductor industry is poised for a breakthrough, with domestic policies and private sector capacity finally aligning to propel the nation to global prominence. As we navigate this transformative landscape, collaboration and ecosystem building will be key to unlocking the next wave of growth and breakthroughs, and SEMICON India 2025 is the catalyst for this,” said Ashok Chandak, President, SEMI India and IESA.

In addition to distinguished government officials, this year’s event will also feature an impressive lineup of industry leaders from top companies including Applied Materials, ASML, IBM, Infineon, KLA, Lam Research, MERCK, Micron, PSMC, Rapidus, Sandisk, Siemens, SK Hynix, TATA Electronics, Tokyo Electron, and many more.

Over the span of three days, the flagship event will feature a diverse range of activities including high-profile keynotes, panel discussions, fireside chats, paper presentations, 6 international roundtables and more that will converge to drive the next wave of semiconductor innovation and growth. The event will also include a ‘Workforce Development Pavilion’ to showcase microelectronics career prospects and attract new talent.

SEMICON India is one of eight annual SEMICON expositions worldwide hosted by SEMI that bring together executives and leading experts in the global semiconductor design and manufacturing ecosystem. The upcoming event marks the beginning of an exciting journey into the future of technological innovation, fostering collaboration and sustainability in the global semiconductor ecosystem.


Rohde & Schwarz extends the broadband amplifier range to 18 GHz

ELE Times - Tue, 08/26/2025 - 09:24

The new BBA series features higher field strengths for critical test environments up to 18 GHz

Rohde & Schwarz, a leading global supplier of test and measurement equipment and a reliable partner for turnkey EMC solutions, has expanded its broadband amplifier portfolio of the R&S BBA300 family with the two innovative amplifier series R&S BBA300-F for 6 to 13 GHz and R&S BBA300-FG for 6 to 18 GHz with additional power classes such as 90 W, 180 W and 300 W.

Together with the already successfully introduced broadband amplifier series R&S BBA300-CDE for 380 MHz to 6 GHz and R&S BBA300-DE for 1 to 6 GHz, Rohde & Schwarz now offers compact dual-band amplifiers covering the entire frequency range from 380 MHz to 18 GHz in 4 HU desktop models only.

The R&S BBA300 family is the new generation of compact, solid-state broadband amplifiers, designed for high availability and a linear output across an ultra-wide frequency range. It supports amplitude, frequency, phase, pulse and complex OFDM modulation modes and is extremely robust under all mismatch conditions, providing reliable test results in all circumstances.

Typical applications include EMC, co-existence and RF component tests during development, compliance test and production. The very wide frequency range makes them ideal for wireless and ultra-wideband testing.

The R&S BBA300-F series is a cost-effective solution for applications between 6 GHz and 13 GHz; the R&S BBA300-FG series covers a continuous frequency band from 6 GHz to 18 GHz. The two amplifier series can be used for ultra-wideband applications as well as to address various EMC standards within mobile communications (FCC, ETSI), automotive (ISO), aerospace (DO-160), and military (MIL-STD-461). Both the R&S BBA300-F and the R&S BBA300-FG are now available in the power classes 30 W, 50 W, 90 W, 180 W and 300 W.

The R&S BBA300 broadband amplifier family offers two powerful tools for tailoring the RF output signal to the application: adjusting the amplifier either for excellent linearity or for faithful reproduction of pulse signals by shifting the operating point between class A and class AB, and setting the amplifier for maximum tolerance to output mismatch or for maximum RF output power to utilize the power reserves for the application.

This allows users like developers, test engineers, integrators, or operators to optimize the output signal and react flexibly to a wide variety of requirements. Both parameters can be changed during amplifier operation.

“In addition to high linearity and excellent harmonic properties, our users also need extremely wide, continuous frequency bands at high RF output power,” said Michael Hempel, product manager for amplifier systems at Rohde & Schwarz. “The BBA300 series is our direct response to these requirements, offering outstanding bandwidth with high output power.”

Rohde & Schwarz also provides fully compliant EMI test receivers, signal generators, antennas, software and other essential system components and service for EMC testing.


EDOM Strengthens NVIDIA Jetson Thor Distribution Across APAC

ELE Times - Tue, 08/26/2025 - 08:56

Empowering a New Era of Physical AI and Robotics Development in the Asia-Pacific Region

EDOM Technology announced the official distribution of NVIDIA’s latest Jetson Thor module and developer kit, built for physical AI and general robotics. This move is set to accelerate technological upgrades and the local deployment of applications such as intelligent robotics, AMRs (autonomous mobile robots), AIoT, and smart manufacturing across the region.

Jetson Thor is the most powerful edge AI module in the NVIDIA Jetson series. Built on NVIDIA Blackwell GPU architecture, it delivers over 2,070 TFLOPS of AI inference capability, specifically designed for humanoid robots, AMRs, and industrial smart devices. Its highly integrated computing architecture supports multi-sensor fusion, Transformer model inference, and real-time motion control, enabling a deep integration of generative AI and the physical world. Jetson Thor seamlessly integrates with NVIDIA Isaac ROS, NVIDIA Omniverse, and NVIDIA Isaac GR00T, forming a complete AI toolchain from data generation and simulation training to edge deployment. This significantly accelerates the adoption and commercialization of Physical AI applications, making it a key enabler of next-generation edge AI and robotics intelligence.

As NVIDIA’s long-standing partner and authorized distributor of Jetson series modules in the Asia-Pacific, EDOM brings around 30 years of experience in distribution and technical integration, covering AI modules, embedded systems, sensor integration, industrial automation, and component applications.

EDOM provides comprehensive product offerings of the Jetson Thor platform, including the Jetson AGX Thor Developer kit and Jetson T5000 module. Equipped with NVIDIA Holoscan Sensor Bridge for real-time data processing, along with high-speed interfaces such as GMSL, MIPI, 25GbE, 5G, and Wi-Fi modules, as well as high-performance storage interfaces, these solutions effectively meet the stringent low-latency and high-bandwidth demands of edge computing. Additionally, EDOM supports custom hardware design and system integration reference solutions, fully assisting customers in accelerating product development and deployment processes.

Jeffrey Yu, CEO at EDOM Technology, stated:
“Jetson Thor represents a major breakthrough in NVIDIA’s physical AI and robotics applications. We are honored to be the authorized distributor for Jetson Thor in the Asia-Pacific. By combining technical support, educational resources, and platform ecosystems, we aim to help customers accelerate innovation and advance the deployment of generative and physical AI technologies.”

With the launch of Jetson Thor, the module is expected to see wide adoption in fast-growing physical AI and robotics sectors across Asia-Pacific, including smart manufacturing, AMRs, smart transportation, and service robots. For example:

  • In high-precision AOI (Automated Optical Inspection), Jetson Thor can process large-scale image data in real time and perform inference, improving yield rates and automation in factories.
  • In AMR factory logistics, through multi-sensor fusion and real-time motion control, it enables autonomous navigation and smart scheduling in complex environments.
  • In humanoid and companion robots, Jetson Thor’s integration with GR00T multimodal models and visual recognition enables highly interactive scenarios, ideal for applications in aging societies and public services.
  • With support for multiple GMSL cameras and high-speed Ethernet, Jetson Thor is also well-suited for smart city traffic nodes, performing real-time image analysis and behavior recognition.

These applications demonstrate Jetson Thor’s powerful computing capabilities and provide developers and enterprises in Asia-Pacific with a complete path from AI training to edge deployment.

EDOM will continue to act as a critical bridge between technology and the market, working with system developers, integrators, and academic institutions. By driving the local deployment of the NVIDIA Jetson platform across key sectors—such as smart transportation, AIoT, and smart manufacturing—EDOM is accelerating the development and implementation of generative AI and Physical AI throughout the Asia-Pacific region.


Pages

Subscribe to the Кафедра Електронної Інженерії content aggregator