Top 10 Federated Learning Applications and Use Cases
Nowadays, individuals own an increasing number of devices, such as fitness trackers and smartphones, that continuously generate valuable data. At the same time, organizations like banks, hospitals, and enterprises produce vast amounts of sensitive information. However, due to strict privacy regulations, this data cannot be openly shared for centralized processing. In such scenarios, federated learning offers a transformative solution: it enables machine learning models to be trained directly on-device or within institutional boundaries, without transferring raw data. This approach preserves privacy while unlocking powerful, collaborative AI capabilities. As a result, data from diverse sources, both personal and institutional, can be securely leveraged to extract insights and drive smarter decisions. Below are 10 compelling real-world applications where federated learning is making a significant impact.
- Telecommunications
Federated learning enables telecommunications firms to study customer usage patterns, enhance network performance, and make accurate service projections across their distributed systems, fostering efficient networks while safeguarding customer information. Similarly, mobile operators can improve calling services by learning from user data held on spatially dispersed devices.
- Autonomous Vehicles
Self-driving cars and connected vehicles use federated learning to collaboratively improve navigation, obstacle identification, and safety measures without consolidating personal driving data. Fleet operators and vehicle makers train models on each vehicle's local sensor data, such as camera and LIDAR feeds, so object detection improves across the fleet while raw recordings stay on board.
- Finance
Banks and fintech companies use federated learning for fraud detection, credit scoring, and credit-risk modeling. One example is training a multi-bank fraud detection model to recognize suspicious transactions while safeguarding user information.
- Smart Devices & IoT
Smartphones and wearable devices use federated learning to enhance voice recognition, keyboard prediction, and health tracking. A well-known example is Google's Gboard keyboard, which leverages federated learning to improve its autocorrect and next-word prediction features based on users' typing patterns.
- Cybersecurity
Federated learning enables multiple organizations to collaboratively train intrusion detection models using local network logs. This approach enhances threat detection accuracy while keeping sensitive data in place and complying with privacy regulations.
- Manufacturing
Factories use federated learning for predictive maintenance, defect detection, and process optimization. For instance, multiple production lines can train a model to predict equipment failure using local sensor data, reducing downtime.
- Energy & Utilities
Energy companies and power grids forecast demand and anticipate system failures by learning from distributed sensor data across substations and smart meters. Use case: a national utility company uses federated learning to predict peak electricity usage across cities, helping balance load distribution without accessing individual household data.
- Retail & E-commerce
Retailers customize product recommendations, cross-sell and up-sell suggestions, and basket-level purchase analytics across different store locations without sharing shoppers' item-level purchase data. A classic use case is a global fashion retailer that wants to suggest outfit combinations based on current trends in different geographies: the federated approach lets it train a model across all stores in those regions while protecting shopper and purchase data.
- Content Platforms
With less risk to user privacy, platforms can better personalize user feeds and automatically moderate content by learning locally from user interactions. Use case: a video streaming app improves its recommendation system by training locally on watch histories stored on devices, delivering tailored recommendations without uploading any viewing data to the cloud.
- Aviation
Airlines and aircraft manufacturers train models on flight-operations and maintenance records across different fleets to improve safety and cut downtime, with the added benefit of keeping proprietary data private. One use case is federated model training across airlines to predict engine wear from flight conditions, enabling proactive maintenance scheduling without sharing sensitive operational data.
Conclusion:
Federated learning protects privacy while facilitating cooperative model training across dispersed data sources. It lowers the risks associated with data transfers, complies with data protection laws, and enables businesses to leverage insights without jeopardizing user privacy.
The post Top 10 Federated Learning Applications and Use Cases appeared first on ELE Times.
Infineon Upgrades Its Control MCUs for Post-Quantum Cryptography Transition
OpenLight raises $34m in Series A funding round to scale integrated photonics for AI data centers
Top 10 Federated Learning Companies in India
Federated learning is transforming AI's potential in India by allowing models to be trained without infringing on the privacy of decentralized data. It is of critical importance in healthcare, finance, and consumer technology, where industries increasingly need AI that is secure, regulation-compliant, and privacy-preserving. With a flourishing technology ecosystem and a strong pool of AI talent, India is emerging as a leader in this technology. This article discusses the 10 leading companies in India that focus on federated learning.
- TCS Research
The innovation wing of Tata Consultancy Services, TCS Research applies federated learning to enterprise AI. Its initiatives cover healthcare, banking, and smart city projects, centering on the safe training of models over distributed data silos.
- Wipro HOLMES
Wipro's AI platform HOLMES uses federated learning to provide intelligent automation and edge AI. Its applications in telecommunications, manufacturing, and IT services aid the development of AI models without eroding data privacy.
- Infosys Nia
An all-in-one AI platform, Infosys Nia also offers federated learning for decentralized data modeling, which is especially beneficial in retail and finance, where data sensitivity is high and compliance is critical.
- SigTuple
Headquartered in Bengaluru, SigTuple is a health tech company that employs federated learning to streamline the analysis of medical images and diagnostics while maintaining patient data privacy. Its AI solutions save time and improve the decision-making of pathologists and radiologists.
- Qure.ai
With over a decade of specialization in AI-driven radiology, Qure.ai is a clear leader. The company is a notable example of applying federated learning in radiology, advancing diagnostic precision while safeguarding critical medical information.
- Vaidik AI
Vaidik AI marks a new chapter in India's federated learning narrative, offering an extensive selection of AI services, including LLM fine-tuning and multilingual AI. Well known for its multidisciplinary expertise in data annotation and its privacy-first approach to AI model development, it provides the healthcare, finance, and education sectors with economical and scalable solutions.
- ActionLabs AI
ActionLabs AI, located in Bengaluru, works with federated learning, edge AI, and generative model creation. Though healthcare and fintech startups appear to be its primary focus, the company's small size allows it to efficiently serve a wider range of clients.
- Accenture India
Accenture adapts federated learning to its Responsible AI framework, assisting clients spanning the energy sector to public services in securely training models on decentralized data.
- Fractal Analytics
Fractal applies federated learning to generate consumer insights for retail and CPG. Its solutions enable brands to analyze consumer behavior without pooling sensitive data.
- Intel India
Intel India, with its offices in Bengaluru and Hyderabad, is pivotal in advancing federated learning as it refines secure hardware platforms such as Trusted Execution Environments (TEEs) and furthers AI research through Intel Labs. It also champions privacy-preserving AI in healthcare, smart cities, and edge computing.
Conclusion:
The federated learning ecosystem in India is evolving rapidly, driven by global technology leaders such as Intel and innovative local startups such as ActionLabs AI, Vaidik AI, and SigTuple. These firms not only expand the frontiers of privacy-preserving AI but also enable data collaboration without security risks. With growing demand across healthcare, finance, and edge computing, federated learning is becoming a cornerstone of ethical AI development in India.
The post Top 10 Federated Learning Companies in India appeared first on ELE Times.
Cadence Accelerates Development of Billion-Gate AI Designs with Innovative Power Analysis Technology Built on NVIDIA
New Cadence Palladium Dynamic Power Analysis App enables designers of AI/ML chips and systems to create more energy-efficient designs and accelerate time to market
Cadence announced a significant leap forward in the power analysis of pre-silicon designs through its close collaboration with NVIDIA. Leveraging the advanced capabilities of the Cadence Palladium Z3 Enterprise Emulation Platform, utilizing the new Cadence Dynamic Power Analysis (DPA) App, Cadence and NVIDIA have achieved what was previously considered impossible: hardware-accelerated dynamic power analysis of billion-gate AI designs, spanning billions of cycles within a few hours with up to 97 percent accuracy. This milestone enables semiconductor and systems developers targeting AI, machine learning (ML) and GPU-accelerated applications to design more energy-efficient systems and accelerate their time to market.
The massive complexity and computational requirements of today’s most advanced semiconductors and systems present a challenge for designers, who have until now been unable to accurately predict their power consumption under realistic conditions. Conventional power analysis tools cannot scale beyond a few hundred thousand cycles without requiring impractical timelines. In close collaboration with NVIDIA, Cadence has overcome these challenges through hardware-assisted power acceleration and parallel processing innovations, enabling previously unattainable precision across billions of cycles in early-stage designs.
“Cadence and NVIDIA are building on our long history of introducing transformative technologies developed through deep collaboration,” said Dhiraj Goswami, corporate vice president and general manager at Cadence. “This project redefined boundaries, processing billions of cycles in as few as two to three hours. This empowers customers to confidently meet aggressive performance and power targets and accelerate their time to silicon.”
“As the era of agentic AI and next-generation AI infrastructure rapidly evolves, engineers need sophisticated tools to design more energy-efficient solutions,” said Narendra Konda, vice president, Hardware Engineering at NVIDIA. “By combining NVIDIA’s accelerated computing expertise with Cadence’s EDA leadership, we’re advancing hardware-accelerated power profiling to enable more precise efficiency in accelerated computing platforms.”
The Palladium Z3 Platform uses the DPA App to accurately estimate power consumption under real-world workloads, allowing functionality, power usage and performance to be verified before tapeout, when the design can still be optimized. Especially useful in AI, ML and GPU-accelerated applications, early power modeling increases energy efficiency while avoiding delays from over- or under-designed semiconductors. Palladium DPA is integrated into the Cadence analysis and implementation solution to allow designers to address power estimation, reduction and signoff throughout the entire design process, resulting in the most efficient silicon and system designs possible.
The post Cadence Accelerates Development of Billion-Gate AI Designs with Innovative Power Analysis Technology Built on NVIDIA appeared first on ELE Times.
Here Comes the First Industrial Edge AI Computer Built on Raspberry Pi
CGD appoints Robin Lyle as VP R&D
Toshiba and SICC sign MOU on silicon carbide power semi wafer collaboration
Infineon introduces 75mΩ industrial CoolSiC MOSFETs 650V G2 for medium-power applications with high power density
Microchip Unveils Bottleneck-Busting RAID Storage Accelerator Cards
UAH 500,000 for the best digital solutions for communities: EGAP Ideathon 2025
The Eastern Europe Foundation, within its flagship EGAP Program implemented with the support of Switzerland, announces registration for EGAP Ideathon 2025, a national competition of ideas for improving the СВОЇ and e-DEM digital platforms.
Simple but accurate 4 to 20 mA two-wire transmitter for PRTDs

Accurate, inexpensive, and mature platinum resistance temperature detectors (PRTDs) with an operating range extending from the cryogenic to the incendiary are a gold (no! platinum!) standard for temperature measurement.
Similarly, the 4 to 20 mA analog current loop is a legacy, but still popular, noise- and wiring-resistance-tolerant interconnection method with good built-in fault detection and transmitter “phantom-power” features.
Wow the engineering world with your unique design: Design Ideas Submission Guide
Figure 1 combines them in a simple, cheap, and cheerful temperature sensor using just eight off-the-shelf (OTS) parts, counting the PRTD. Here’s how it works.
Figure 1 PRTD current loop sensor with Ix = 500 µA constant current excitation.
Ix = 2.5 V/R2, PRTD resistance = R1(Io/Ix – 1)
R1 and R2 are 0.1% tolerance (ideally)
The key to measurement accuracy is the 2.50-V LM4040x25 shunt reference, available with accuracy grade suffixes of 0.1% (x = A), 0.2% (B), 0.5% (C), and 1% (D). The “B” grade is consistent (just barely) with a temperature measurement accuracy of ±0.5°C.
R1 and R2 should have similar precision. R2 throttles the 2.5 V to provide Ix = 2.5/R2 = 500 µA excitation to T1. Because A1 continuously servos the Io output current to hold pin3 = pin4 = LM4040 anode, the 2.5 V across R2 is held constant, therefore Ix is likewise.
Thus, the voltage across output sense resistor R1 is forced to Vr1 = Ix(Rprtd) and Io = Ix(Rprtd/R1 + 1). This makes Io/Ix = Rprtd/R1 + 1 and Rprtd/R1 = Io/Ix – 1 for Rprtd = R1(Io/Ix – 1).
Wrapping it all up with a bow: Rprtd = R1(Io/(2.5/R2) – 1). Note that accommodation of different Rprtd resistance (and therefore temperature) ranges is a simple matter of choosing different R1 and/or R2 values.
Conversion of the Io reading to Rprtd is an easy chore in software, and the step from there to temperature isn’t much worse, thanks to Callendar Van Dusen math.
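That software chore can be sketched in a few lines of Python. The R2 value follows from Ix = 2.5 V/R2 = 500 µA; the R1 value and the Pt100 parameters below are illustrative assumptions rather than values from Figure 1, and the standard IEC 60751 Callendar–Van Dusen inversion shown is valid only for temperatures at or above 0°C:

```python
import math

# Circuit constants: Ix = 2.5 V / R2 = 500 uA as in the article
R2 = 5000.0          # ohms, so that 2.5/R2 = 500 uA
R1 = 1000.0          # ohms, hypothetical sense-resistor value for illustration
IX = 2.5 / R2        # excitation current, A

# IEC 60751 Callendar-Van Dusen coefficients (T >= 0 C), Pt100: R0 = 100 ohms
R0, A, B = 100.0, 3.9083e-3, -5.775e-7

def io_to_rprtd(io):
    """Recover PRTD resistance from the loop current: Rprtd = R1*(Io/Ix - 1)."""
    return R1 * (io / IX - 1.0)

def rprtd_to_temp(r):
    """Invert R(T) = R0*(1 + A*T + B*T^2) for T >= 0 C (quadratic formula)."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - r / R0))) / (2.0 * B)

io = 550e-6                # example loop current reading, A
r = io_to_rprtd(io)        # 100 ohms, i.e. a Pt100 at 0 C
t = rprtd_to_temp(r)
```

As the article notes, accommodating a different resistance range is just a matter of changing the two constants R1 and R2.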
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- A two-wire temperature transmitter using an RTD sensor
- Improved PRTD circuit is product of EDN DI teamwork
- The power of practical positive feedback to perfect PRTDs
- Hotwire thermostat: Using fine copper wire as integrated sensor and heater for temperature control
- High-accuracy temperature measurements call for PRTDs and precision delta-sigma ADCs
- Minimize measurement errors in RTD circuits
- DIY RTD for a DMM
The post Simple but accurate 4 to 20 mA two-wire transmitter for PRTDs appeared first on EDN.
Filtronic secures record order from SpaceX
Skyworks names Phil Carter as CFO
Navitas names Chris Allexandre as president & CEO and board member
Top 10 Federated Learning Algorithms
Federated Learning (FL) has been termed a revolutionary approach to machine learning because it enables collaborative model training across devices in a decentralized manner while preserving data privacy. Instead of transferring data to a centralized server for training, devices train locally, and only their model updates are shared. This makes FL applicable in sensitive areas like healthcare, finance, and mobile applications. As Federated Learning continues to evolve, an increasingly diverse array of algorithms has emerged, each designed to enhance communication efficiency, boost model accuracy, and strengthen resilience against data heterogeneity and adversarial challenges. This article delves into the types, examples, and top 10 Federated Learning algorithms.
Types of federated learning algorithms:
Federated Learning algorithms are classified by how data is laid out, by the system structure, and by the privacy requirements. Horizontal FL covers clients with the same features but distinct data points; Vertical FL covers the case where features differ but clients overlap; when both users and features differ, Federated Transfer Learning is used. Decentralized FL, as opposed to Centralized FL, does not use a central server and instead allows peer-to-peer communication. In terms of deployment, Cross-Silo FL consists of powerful participants like hospitals and banks, while Cross-Device FL focuses on lightweight devices such as smartphones. In addition, Privacy-Preserving FL protects user data with encryption, differential privacy, and other techniques, and Robust FL attempts to protect the system from malicious, adversarial, or broken clients.
Examples of federated learning algorithms:
A number of algorithms have been created to overcome challenges specific to Federated Learning. The basic approach is FedAvg, which averages client model updates. FedProx, designed to tolerate data heterogeneity, is a more advanced approach. For personalization, FedPer customizes top layers for each client, and pFedMe applies meta-learning techniques. Communication-efficient algorithms like SCAFFOLD and FedPAQ reduce bandwidth usage and client drift. Robust algorithms such as Krum, Bulyan, and RFA filter out malicious or noisy updates to maintain model integrity. Privacy-focused methods like DP-FedAvg and Secure Aggregation ensure data confidentiality during training. These algorithms are often tailored or combined to suit specific domains like healthcare, finance, and IoT.
Top 10 Federated Learning Algorithms:
- Federated Averaging (FedAvg):
FedAvg is the founding algorithm of Federated Learning: models are trained locally on each client, and their weights are averaged to update the global model. Its simple design and ease of scaling have led to wide adoption in practice.
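The local-train-then-average loop can be illustrated with a minimal NumPy sketch on a toy linear least-squares problem; the client data, learning rate, and helper names here are hypothetical stand-ins, not part of any FL framework:

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    """One client's local phase: a few epochs of gradient descent on a
    linear least-squares model (a stand-in for any local learner)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One FedAvg round: every client trains locally, then the server
    averages the returned weights, weighted by client dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_train(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.asarray(sizes, dtype=float))

# Two hypothetical clients holding disjoint samples of y = 2*x
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 1))
    clients.append((X, X @ np.array([2.0])))

w = np.zeros(1)
for _ in range(20):
    w = fedavg_round(w, clients)
# w approaches the true coefficient 2.0; neither client's raw data moves
```

Only the weight vectors cross the network here, which is the whole point of the scheme.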
- FedProx
FedProx builds upon FedAvg by introducing a proximal term in the loss function. By penalizing local updates that diverge too much from the global model, this term helps stabilize training in settings with widely differing client data distributions. It is especially helpful in fields like healthcare and finance, where heterogeneous data is prevalent.
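The proximal term amounts to one extra gradient component in the local update. A hedged sketch on a toy linear least-squares model (the data, function name, and mu values are illustrative):

```python
import numpy as np

def fedprox_local_train(global_w, X, y, mu=0.1, lr=0.1, epochs=5):
    """FedProx local update: gradient descent on the local least-squares
    loss plus a proximal penalty (mu/2)*||w - global_w||^2 that keeps
    the client from drifting far from the current global model."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y) + mu * (w - global_w)
        w -= lr * grad
    return w

# One hypothetical client whose local data follow y = 2*x
rng = np.random.default_rng(3)
X = rng.normal(size=(50, 1))
y = X @ np.array([2.0])
w0 = np.zeros(1)

# A larger mu holds the local update closer to the global weights
drift_small_mu = np.linalg.norm(fedprox_local_train(w0, X, y, mu=0.0) - w0)
drift_large_mu = np.linalg.norm(fedprox_local_train(w0, X, y, mu=10.0) - w0)
```

With mu = 0 this reduces exactly to the FedAvg local step, which is why the two methods are usually presented together.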
- FedNova (Federated Normalized Averaging)
To address client drift, FedNova normalizes updates with respect to the number of local steps and the learning rate. This ensures each client contributes equally to the global model regardless of its computational capability or data volume, which further favors convergence and fairness in heterogeneous setups.
- SCAFFOLD
SCAFFOLD, short for Stochastic Controlled Averaging for Federated Learning, employs control variates to correct client updates. This limits the variance caused by non-IID data and speeds up convergence. It is particularly effective in edge computing environments, where data come from various sources.
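The control-variate correction can be sketched as follows; this is a simplified reading of the algorithm (the variant where the client control variate is re-estimated from the net local movement), with hypothetical data and function names:

```python
import numpy as np

def scaffold_local(global_w, c_global, c_client, X, y, lr=0.1, steps=5):
    """One SCAFFOLD-style local update: each gradient step is corrected
    by the difference between the server and client control variates,
    counteracting drift on non-IID data."""
    w = global_w.copy()
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)  # local least-squares gradient
        w -= lr * (grad - c_client + c_global)
    # Re-estimate this client's control variate from the net movement
    new_c_client = c_client - c_global + (global_w - w) / (steps * lr)
    return w, new_c_client

# One hypothetical client with data drawn from y = 2*x
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 1))
y = X @ np.array([2.0])
w, c = scaffold_local(np.zeros(1), np.zeros(1), np.zeros(1), X, y)
# With zero control variates this first update reduces to plain local GD
```

In a full deployment the server also aggregates the `new_c_client` terms into `c_global` each round; that bookkeeping is omitted here for brevity.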
- MOON (Model-Contrastive Federated Learning)
MOON brings contrastive learning into FL by aligning local and global model representations. It enforces consistency between models, which is particularly necessary when client data are highly divergent. MOON is often used for image and text classification tasks over very heterogeneous user bases.
- FedDyn (Federated Dynamic Regularization)
FedDyn incorporates a dynamic regularization term in the loss function to help the global model accommodate client-specific updates. This makes it robust in situations involving extremely diverse data, such as user-specific recommendation systems or personalized healthcare.
- FedOpt
FedOpt replaces the vanilla averaging mechanism with advanced server-side optimizers like Adam, Yogi, and Adagrad. These optimizers lead to faster and more stable convergence, which is paramount in deep learning tasks with large neural networks.
- Per-FedAvg (Personalized Federated Averaging)
Per-FedAvg balances global generalization with local adaptation by allowing clients to fine-tune the global model locally. This makes it suitable for personalized recommendations, mobile apps, and wearable health monitors.
- FedMA (Federated Matched Averaging)
The distinguishing feature of this method is the matching of neurons across client models before averaging. This retains the architecture of a deep neural network and hence allows for much more meaningful aggregation, especially for convolutional and recurrent architectures.
- FedSGD (Federated Stochastic Gradient Descent)
A simpler alternative to FedAvg, FedSGD sends gradients instead of model weights. It’s more communication-intensive but can be useful when frequent updates are needed or when model sizes are small.
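The contrast with weight averaging can be made concrete: in FedSGD the server averages gradients and takes the step itself. A hedged NumPy sketch on a toy linear problem (client data and the function name are illustrative):

```python
import numpy as np

def fedsgd_round(global_w, clients, lr=0.1):
    """One FedSGD round: each client computes a single gradient of its
    local loss at the current global weights; the server averages the
    gradients (weighted by sample count) and takes one descent step."""
    grads, sizes = [], []
    for X, y in clients:
        grads.append(2.0 * X.T @ (X @ global_w - y) / len(y))
        sizes.append(len(y))
    avg_grad = np.average(grads, axis=0, weights=np.asarray(sizes, dtype=float))
    return global_w - lr * avg_grad

# Two hypothetical clients holding disjoint samples of y = 2*x
rng = np.random.default_rng(2)
clients = []
for _ in range(2):
    X = rng.normal(size=(40, 1))
    clients.append((X, X @ np.array([2.0])))

w = np.zeros(1)
for _ in range(100):           # one communication round per gradient step
    w = fedsgd_round(w, clients)
```

Note the trade-off the text describes: every gradient step here costs a communication round, whereas weight-averaging schemes amortize many local steps per round.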
Conclusion:
These algorithms represent the cutting edge of federated learning, each tailored to address specific challenges like data heterogeneity, personalization, and communication efficiency. As FL continues to grow in importance especially in privacy-sensitive domains these innovations will be crucial in building robust, scalable, and ethical AI systems.
The post Top 10 Federated Learning Algorithms appeared first on ELE Times.
Integrated voltage regulator (IVR) for the AI era

A new integrated voltage regulator (IVR) claims to expand the limits of current density, conversion efficiency, voltage range, and control bandwidth for artificial intelligence (AI) processors without hitting thermal and space limits. This chip-scale power converter can sit directly within the processor package to free up board and system space and boost current density for the most power-hungry digital processors.
Data centers are grappling with rising energy costs as AI workloads scale, with modern processors demanding over 5 kW per chip. That’s more than ten times what CPUs and GPUs required just a few years ago. Not surprisingly, therefore, in a data center, power can account for more than 50% of the total cost of ownership.
“This massive jump in power consumption of data centers calls for a fundamental rethink of power delivery networks (PDNs),” said Noah Sturcken, co-founder and CEO of Ferric. He claims that his company’s new IVR addresses both the chip-level bottleneck and the system-level PDN challenge in one breakthrough.
Fe1766—a single-output, self-contained power system-on-chip (SoC)—is a 16-phase interleaved buck converter with a fully-integrated powertrain that includes ferromagnetic power inductors. The high-switching-frequency powertrain also includes high-performance FETs and capacitors that drive ferromagnetic power inductors.
Figure 1 The new IVR features a digital interface that provides complete power management and monitoring with fast and precise voltage control, fast transient response times, and high bandwidth regulation. Source: Ferric
Fe1766 delivers 160 A in an 8 × 4.4 mm form factor to bolster power density and reduce board area, layout complexity, and component count. The new IVR achieves one to two levels of miniaturization over a traditional DC/DC converter by replacing a collection of discrete components that would otherwise be laid out on a motherboard with a much smaller chip-scale power converter.
Moreover, these IVRs can be directly integrated into the packaging of a processor, which improves the efficiency of the PDN by reducing transmission losses. It also brings the power converter much closer to the processor, leading to a cleaner power supply and a reduction in board area. “That means more processing can occur in the same space, and in some cases, design engineers can place a second processor in the same space,” Sturcken added.
Fe1766 enables vertical power delivery within the processor package, providing more power while cutting energy losses. That makes it highly suitable for ultra-dense AI chips like GPUs. AI chip suppliers like Marvell have already started embedding IVRs in their processor designs.
Figure 2 Marvell has started incorporating IVRs in its AI processor packages. Source: Ferric
Ferric, which specializes in advanced power conversion technologies designed to optimize power delivery in next-generation compute, aims to establish a new benchmark for integrated power delivery in the AI era. And it’s doing that by providing dynamic control over power at the core level.
Related Content
- ISSCC: Voltage regulators stacked in 3-D
- Magnetics for Integrated Voltage Regulators
- Understanding isolated DC/DC converter voltage regulation
- Low-power chip-level dc/dc converter modules see plenty of action
- Integrated voltage regulator enables DC/DC conversion without discrete components
The post Integrated voltage regulator (IVR) for the AI era appeared first on EDN.
Hon’ble PM Shri. Narendra Modi to inaugurate fourth edition of SEMICON India 2025
- Bharat set to welcome delegates from 33 Countries, 50+ CXOs, 350 Exhibitors
- At country’s biggest Semiconductors & Electronics Show in New Delhi from 2-4 September 2025
- 50+ Eminent Global Visionary Speakers
- Event To Highlight Robust Local Semiconductor Ecosystem Expansion and Industry Trends
The fourth edition of SEMICON India 2025 will be officially inaugurated by Hon’ble Prime Minister Shri. Narendra Modi on 2nd September 2025 at Yashobhoomi (India International Convention and Expo Centre), New Delhi. Staying true to its legacy of positioning India as a global Semiconductor powerhouse, the fourth edition of SEMICON India 2025 will convene key stakeholders including global leaders, semiconductor industry experts, academia, government officials and students.
Under the Semicon India program, 10 strategic projects have been approved across high-volume fabs, 3D heterogeneous packaging, compound semiconductors (including SiC), and OSATs, marking a significant milestone for the country. Recognizing semiconductors as a foundational technology, over 280 academic institutes and 72 startups have been equipped with state-of-the-art design tools, while 23 startups have already been approved under the DLI scheme. These initiatives are driving innovations in critical applications such as CCTV systems, navigation chips, motor controllers, communication devices, and microprocessors—strengthening India’s journey towards Atmanirbhar Bharat.
Accelerating India’s semiconductor revolution, SEMI, the global industry association promoting the semiconductor industry, and the India Semiconductor Mission (ISM), Ministry of Electronics and Information Technology (MeitY), announced the programming for SEMICON India 2025 at a press conference held in the national capital.
Under the theme Building the Next Semiconductor Powerhouse, the event will offer valuable insights into innovations and trends in key areas such as fabs, advanced packaging, smart manufacturing, AI, supply chain management, sustainability, workforce development, design, and startups, along with six country roundtables.
The SEMICON India exhibition will feature nearly 350 exhibitors from across the global semiconductor value chain, including six country roundtables, four country pavilions, participation from nine states, and over 15,000 expected visitors, providing South Asia’s single largest platform for showcasing the latest advancements in the semiconductor and electronics industries, said Shri S Krishnan, Secretary, MeitY.
“SEMI is bringing the combined expertise and capabilities of our member companies across the global electronics design and manufacturing supply chain to SEMICON India, helping to advance both India’s semiconductor ecosystem expansion and industry supply chain resiliency,” said Ajit Manocha, President and CEO, SEMI. “The event will feature signature SEMICON opportunities for professional networking, business development, and insights into technology and market trends from a star-studded lineup of leading industry experts.”
SEMICON India 2025 is designed to maximize technological advancements in the semiconductor and electronics domain and highlight India’s policies aimed at strengthening its semiconductor ecosystem.
“The event is a remarkable convergence of ideas, collaboration and innovation, and provides a unique opportunity to address the complex challenges of tomorrow while fostering collaboration across the semiconductor ecosystem. We are looking forward to an astounding number of participants this year,” said Shri Amitesh Kumar Sinha, Additional Secretary, MeitY and CEO, ISM.
“India’s semiconductor industry is poised for a breakthrough, with domestic policies and private sector capacity finally aligning to propel the nation to global prominence. As we navigate this transformative landscape, collaboration and ecosystem building will be key to unlocking the next wave of growth and breakthroughs and SEMICON India 2025 plays the catalyst for this.” said Ashok Chandak, President, SEMI India and IESA.
In addition to distinguished government officials, this year’s event will also feature an impressive lineup of industry leaders from top companies including Applied Materials, ASML, IBM, Infineon, KLA, Lam Research, MERCK, Micron, PSMC, Rapidus, Sandisk, Siemens, SK Hynix, TATA Electronics, Tokyo Electron, and many more.
Over the span of three days, the flagship event will feature a diverse range of activities including high profile keynotes, panel discussions, fireside chats, paper presentations, 6 international roundtables and more that will converge to drive the next wave of semiconductor innovation and growth. The event will also include a ‘Workforce Development Pavilion’ to showcase microelectronics career prospects and attract new talent.
SEMICON India is one of eight annual SEMICON expositions worldwide hosted by SEMI that bring together executives and leading experts in the global semiconductor design and manufacturing ecosystem. The upcoming event marks the beginning of an exciting journey into the future of technological innovation, fostering collaboration and sustainability in the global semiconductor ecosystem.
The post Hon’ble PM Shri. Narendra Modi to inaugurate fourth edition of SEMICON India 2025 appeared first on ELE Times.