News from the world of micro- and nanoelectronics

Pickering Interfaces Introduces 9 kV High Voltage PXI and LXI Switching Modules

ELE Times - Tue, 09/13/2022 - 12:31

Pickering Interfaces has launched new ranges of switching test modules that deliver high performance up to 9 kV.

Their 4x-323 PXI range and 65-23x LXI range are available in various topologies such as multiplexers and smaller ‘building block’ uncommitted SPST switches, enabling complex test setups to be implemented. Hardware interlock is provided on all models in addition to loop-thru ports on multiplexer and matrix products to facilitate simple expansion.

The 40-323 (PXI) and 42-323 (PXIe) SPST high voltage power relay modules feature up to 14 high-quality Pickering reed relays per module. The modules can cold switch up to 9 kilovolts DC/AC peak, hot switch up to 7.5 kilovolts DC/AC peak at 50 Watts maximum, and carry up to 250 mA.

The 65-23x LXI high voltage switch families are based on a 2U Ethernet-controlled modular chassis that can be configured with up to six plug-in switch modules. Each plug-in module can hold up to 50 high voltage relays with the same 9-kilovolt specification as the PXI range mentioned above. The plug-ins are offered in several variants, including:

  • In the 65-231 version, each plug-in is configured as a 1-pole multiplexer with various channel counts and bank quantities. These can be easily interconnected with external cables to form larger multiplexers up to 288 channels.
  • The 65-233 version has up to 50 SPST uncommitted switches per plug-in for general-purpose HV applications.

All modules, both PXI and LXI, feature industry-leading reed relays from Pickering’s reed relay division, Pickering Electronics. The modules also include RFI suppression components to extend relay contact life and control surges caused by high voltage transients. Connections are made via Redel K or S series high voltage connectors.

“Introducing 9 kilovolt SPST switching into the PXI platform provides high voltage switching in a small form-factor, allowing modular and scalable test systems to be constructed with minimal rack space compared to older rack and stack style instruments”, comments Steve Edwards, Switching Product Manager at Pickering Interfaces. “And the large number of configurations now offered in the LXI platform allows a switching solution to be tailored closely to the test system requirements.”

Pickering also offers a standard range of high-quality interconnection accessories to support the Redel connector series and provides custom cable manufacturing services through its connection division. Pickering’s standard three-year warranty covers all modules.

The post Pickering Interfaces Introduces 9 kV High Voltage PXI and LXI Switching Modules appeared first on ELE Times.

STMicroelectronics Announces Status of Common Share Repurchase Program

ELE Times - Tue, 09/13/2022 - 12:11

STMicroelectronics announces full details of its common share repurchase program (the “Program”) disclosed via a press release dated July 1, 2021. The Program was approved by a shareholder resolution dated May 27, 2021, and by the supervisory board.

STMicroelectronics announces the repurchase (by a broker acting for the Company) on the regulated market of Euronext Paris, in the period from July 11, 2022, through July 15, 2022 (the “Period”), of 210,462 ordinary shares (equal to 0.02% of its issued share capital) at a weighted average purchase price per share of EUR 31.2095, for an overall price of EUR 6,568,407.90.

Below is a detailed summary of the repurchase transactions made in the course of the Period in relation to the ordinary shares of STM (ISIN: NL0000226223).
Transactions in Period

Date of transaction | Number of shares purchased | Weighted average purchase price per share (EUR) | Total amount paid (EUR) | Market (MIC code)
11-Jul-22 | 42,236 | 30.8342 | 1,302,313.27 | XPAR
12-Jul-22 | 42,565 | 30.9220 | 1,316,194.93 | XPAR
13-Jul-22 | 42,336 | 31.0429 | 1,314,232.21 | XPAR
14-Jul-22 | 42,101 | 31.3137 | 1,318,338.08 | XPAR
15-Jul-22 | 41,224 | 31.9554 | 1,317,329.41 | XPAR
Total for Period | 210,462 | 31.2095 | 6,568,407.90 |
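As a quick arithmetic check, the totals and the weighted average can be reproduced from the daily figures. The following Python sketch uses values transcribed from the table above (illustrative only, not fetched from any official source):

```python
# Daily buyback figures transcribed from the table above:
# (shares purchased, total amount paid in EUR)
daily = [
    (42_236, 1_302_313.27),
    (42_565, 1_316_194.93),
    (42_336, 1_314_232.21),
    (42_101, 1_318_338.08),
    (41_224, 1_317_329.41),
]

total_shares = sum(shares for shares, _ in daily)
total_paid = sum(paid for _, paid in daily)
weighted_avg_price = total_paid / total_shares

print(total_shares)                  # 210462
print(round(total_paid, 2))          # 6568407.9
print(round(weighted_avg_price, 4))  # 31.2095
```

The per-day totals sum exactly to the reported overall price, and dividing by the share count reproduces the reported weighted average to four decimal places.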

Following the share buybacks detailed above, the Company holds in total 4,217,560 treasury shares, which represents approximately 0.5% of the Company’s issued share capital.
In accordance with Article 5(1)(b) of Regulation (EU) 596/2014 (the Market Abuse Regulation) and Article 2(3) of Commission Delegated Regulation (EU) 2016/1052, a full breakdown of the individual trades under the Program is disclosed on the ST website at investors.st.com/buyback-program.

The post STMicroelectronics Announces Status of Common Share Repurchase Program appeared first on ELE Times.

5G vs 6G – What We Can Expect from 6G Technology

ELE Times - Tue, 09/13/2022 - 12:02

While we are still in the process of the global rollout of 5G, the hype is already building about the next generation of wireless technology: 6G. In fact, many telecommunications vendors are already investing heavily in 6G technology, which is currently in the research phase. Although implementation may still be many years away, 6G is tipped to become an integral part of communications in the next decade.

6G technology is predicted to provide faster speeds, lower latency, and more bandwidth than 5G, which will increase productivity and create new opportunities in automation, artificial intelligence, and the internet of things by delivering huge amounts of data almost instantaneously across decentralized networks.

What are the Features of 6G vs 5G?

6G is the next generation of cellular technology, following on from 5G technology, but with the ability to use higher frequencies and provide significantly higher capacity. With increased performance, 6G will expand the scope of 5G abilities to support fresh and innovative applications in wireless connectivity, cognition, sensing, and imaging.

The following are five key things to know regarding the similarities and differences between 5G and 6G technologies.

  1. Both Generations Have Very Low Latency

As generational technology evolves, latency is reduced. While 4G latency is around 50 milliseconds, 5G drops to 5 milliseconds, and 6G latency is estimated to drop again to just one millisecond. This means that huge transmissions of data will be possible almost instantaneously.

  2. 5G and 6G Use Two Different Parts of the Spectrum

Both 5G and 6G technology use the higher frequencies of the wireless spectrum to transmit more data more quickly, and signals at the higher end of the radio spectrum will be used to power 6G networks. It is too early to predict 6G data transmission rates; however, early calculations suggest that a top wireless data rate of 1 terabyte per second may be possible. This means that 6G has the potential to deliver speeds 1,000 times faster than 5G.

  3. 6G Opens New Frontiers of Connectivity

The deployment of 5G has been slow due to the necessary infrastructure requirements that have had to be set up. 6G will not have the same difficulties, as it will build on the infrastructure that has already been installed for 5G technology.

  4. 6G Will Speed Up the Internet of Things

4G frequencies are too narrow and crowded to transfer data at the speeds required by smart devices to support IoT. With quicker speeds, 5G is tipped to support IoT and make it a practical everyday reality for users. 6G will further enhance performance, which will likely lead to widespread adoption of IoT.

  5. 6G Will Not Replace 5G

4G technology was basically a faster version of 3G technology. However, 5G and 6G technologies are different versions of wireless connectivity, and as such, will run concurrently. Experts in the field currently think that it won’t be practical to have every device using 6G technology, and as such, 6G technology is tipped to be reserved for business, military, and industrial sectors, powering only some consumer uses such as immersive entertainment. However, technological advancements may change this.

6G Technology Capabilities

6G technology will enhance the performance of data transmission across the globe. The following are some of the key things that 6G technology will enable:

  • Technology convergence: 6G technology will enable the integration of previously separate technologies, such as deep learning and big data analytics.
  • Edge computing: 6G will support the deployment of edge computing to ensure overall throughput and low latency for extremely reliable communications.
  • Internet of things (IoT): 6G technology is tipped to support the machine-to-machine communication necessary for operating IoT.
  • High-performance computing (HPC): There is a strong relationship between 6G technology and high-performance computing, where 6G technology supports centralized HPC resources for processing.

When will 6G be available?

The research and development of 6G technology started in 2020. In order to launch the technology, advanced mobile communications technologies will have to be developed, such as cognitive and highly secure data networks. In addition, spectral bandwidth will also have to be expanded. China has already launched a 6G test satellite equipped with a terahertz system. However, it is predicted that 6G technology will not launch commercially until 2030.

Shirley Lim, Channel Marketing Manager, APAC, Viavi Solutions

The post 5G vs 6G – What We Can Expect from 6G Technology appeared first on ELE Times.

How to make better measurements with your oscilloscope or digitizer

EDN Network - Mon, 09/12/2022 - 19:41

Modern oscilloscopes and digitizers are getting better and better: higher bandwidth, better vertical resolution, longer acquisition memories, and more firmware tools for application-specific measurements. With all these advanced analysis capabilities, it is sometimes hard to remember some very old and simple rules that can improve the accuracy and precision of your measurements. Here are a few good ideas to help.

Use the full dynamic range of your instrument’s front end

Digital instruments feed their input signals to an analog-to-digital converter (ADC). The dynamic range of the ADC is related to its number of bits of resolution. The instrument matches the input signal to the ADC input voltage range using attenuators or amplifiers. If the signal presented to the ADC spans less than its full input range, the usable dynamic range of the ADC is reduced. This can happen when users set up multiple traces on the screen.

Some oscilloscope and digitizer display software offers only a single display grid. If you try to display more than one signal trace at full dynamic range, the signals overlap, making them hard to view. Most people faced with this problem reduce the vertical scaling of each channel: with four traces, increase the volts-per-division setting by a factor of four so that each trace occupies only a quarter of the screen and all four fit with no overlap. Problem solved? Not really. You just reduced the dynamic range by two bits, turning your eight-bit oscilloscope into a six-bit oscilloscope. You attenuated the signal, but the internal noise of the instrument is unchanged, so the signal-to-noise ratio is now worse by two bits. Figure 1 shows the effect of the loss of dynamic range.

Figure 1 An example of the decrease in signal-to-noise ratio due to reducing the signal amplitude in order to fit multiple traces on a single grid.

The bottom grid shows the original signal acquired at 50 mV/division. The top trace shows the same signal acquired at one quarter of full screen, or 200 mV/division. If you vertically expand the attenuated trace and display it at the original 50 mV/division, the vertical noise has increased significantly, as you can tell from the thickening of the displayed trace. Measurements made on the attenuated trace will have increased uncertainty due to the poorer signal-to-noise ratio. This is not a problem for an oscilloscope or digitizer that has multiple grid displays: each grid displays a signal at full dynamic range, and multiple signals can be compared, each in its own grid. If you don’t have access to a multiple-grid oscilloscope, make sure that any measurements are made on the full-amplitude signals, and reserve the attenuated signals for visual comparison only.
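The bit-count arithmetic behind this rule is easy to sketch. The short Python snippet below (illustrative only; the numbers mirror the eight-bit example above) shows how scaling a signal down to a fraction of full scale costs bits of resolution:

```python
import math

def bits_lost(attenuation_factor: float) -> float:
    """Bits of ADC dynamic range lost when the signal is scaled down
    to occupy 1/attenuation_factor of the full-scale input range."""
    return math.log2(attenuation_factor)

# An 8-bit ADC digitizing a signal at one quarter of full scale:
adc_bits = 8
effective_bits = adc_bits - bits_lost(4)  # 8 - 2 = 6 effective bits

# Equivalent view in terms of usable quantization levels:
full_scale_codes = 2 ** adc_bits          # 256 codes available
used_codes = full_scale_codes / 4         # only 64 codes span the signal

print(effective_bits)  # 6.0
print(used_codes)      # 64.0
```

The same arithmetic explains why a multiple-grid display, which keeps each trace at full scale, preserves all eight bits.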

Improve dynamic range and measurement accuracy by eliminating noise

Use signal processing in the form of averaging or filtering to reduce noise and improve dynamic range and measurement accuracy. Ensemble averaging, where the nth samples of each acquisition are averaged together over multiple acquisitions, reduces Gaussian noise in proportion to the square root of the number of averaged acquisitions. This can bring a low-level signal out of the background noise for better measurements. It does, however, require multiple acquisitions.
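The square-root law for ensemble averaging is easy to verify numerically. Here is a small NumPy sketch (synthetic data, not an actual instrument capture) that averages 64 noisy acquisitions of a damped sine and checks that the residual noise drops by roughly √64 = 8:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic acquisition: a damped sine buried in Gaussian noise.
n_samples, n_acqs = 2000, 64
t = np.linspace(0, 1, n_samples)
signal = np.exp(-3 * t) * np.sin(2 * np.pi * 25 * t)
noise_rms = 0.5

# Ensemble average: the nth sample of each acquisition is averaged
# across all acquisitions, so Gaussian noise falls by sqrt(n_acqs).
acquisitions = signal + rng.normal(0.0, noise_rms, size=(n_acqs, n_samples))
averaged = acquisitions.mean(axis=0)

residual_rms = np.std(averaged - signal)
improvement = noise_rms / residual_rms
print(improvement)  # close to sqrt(64) = 8
```

Doubling the number of averages therefore buys only a √2 improvement, which is why averaging pays off quickly at first and slowly thereafter.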

For a single acquisition, you can reduce noise by bandwidth limiting the signal. The improvement of dynamic range is proportional to the square root of the bandwidth reduction. Reduce the bandwidth by a factor of four to achieve a two-to-one improvement in dynamic range. This assumes that the signal has a low bandwidth and is not affected by the bandwidth reduction. Figure 2 shows the improvement that can be achieved using either averaging or filtering.

Figure 2 Averaging multiple acquisitions or filtering a single acquisition can improve the dynamic range of the acquisition by eliminating noise.

The acquired signal is an exponentially damped sine wave. The top trace shows a raw acquisition. Note that the signal disappears into the noise about three quarters of the way across the screen. The center trace shows the average of multiple acquisitions. In the bottom trace, a Gaussian low pass filter has been applied to the acquired signal. Both averaging and filtering reduce the noise and improve the dynamic range of the measurement. The signal is clearly discernible after either type of signal processing.

Improving the accuracy of cursor measurements

Cursors are vertical and/or horizontal lines that can be moved over the oscilloscope or digitizer display to mark significant points on a waveform. Cursor readouts show the time or amplitude of the waveform at the cursor location, as shown in Figure 3. The waveform is a keyed RF carrier, and the horizontal relative cursors are used to measure the width of the RF burst. This is a measurement that cannot be made with the instrument’s automatic measurement parameters. The cursor horizontal readout appears in the lower right-hand corner under the timebase annotation box; it reads 8.06275 µs.

Figure 3 Horizontal relative cursors are used to measure the duration of an RF pulse burst.

Is that really the duration of the burst? The answer is no. This waveform has two million samples in the acquisition, but the horizontal screen resolution is 1920 pixels, so not all the samples can be shown on the screen. Instrument manufacturers apply compaction algorithms to reduce the number of displayed points. They manage to show significant points like peaks, but there is still a lot that you can’t see unless you expand the display.
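To see what compaction does, consider a simple min/max scheme (a sketch of the general idea only; actual instrument algorithms are proprietary). Each pixel column keeps the minimum and maximum of its bucket of samples, which preserves narrow peaks that plain decimation would drop:

```python
import numpy as np

def minmax_compact(samples: np.ndarray, n_pixels: int) -> np.ndarray:
    """Peak-preserving display compaction: each pixel column keeps the
    min and max of its bucket of samples, so narrow peaks stay visible.
    (An illustrative sketch, not any vendor's actual algorithm.)"""
    buckets = np.array_split(samples, n_pixels)
    return np.array([[b.min(), b.max()] for b in buckets])

# A 2,000,000-sample record squeezed onto a 1920-pixel-wide grid:
samples = np.zeros(2_000_000)
samples[1_234_567] = 1.0             # a one-sample glitch
compacted = minmax_compact(samples, 1920)

# Plain decimation (keeping every kth sample) misses the glitch;
# min/max compaction keeps it.
decimated = samples[:: len(samples) // 1920]
print(decimated.max())               # 0.0 -> glitch lost
print(compacted[:, 1].max())         # 1.0 -> glitch preserved
```

Even so, each pixel column stands in for roughly a thousand samples, which is exactly why cursor placement on the compacted display is imprecise.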

A more accurate way to make this measurement is to use the zoom traces to horizontally expand the waveform at the start and end of the RF burst, as shown in Figure 4.

Figure 4 Using zoom traces to make more accurate placements of the cursors at the first and last sample points of the burst.

Zoom traces Z1 and Z2 horizontally expand the beginning and end of the burst. The sample counts in the zoom traces are smaller than the screen resolution, so the compaction algorithm is not used. The cursors track on both the acquired signal and the zoom traces. The cursor on zoom trace Z1 (yellow trace) marks the beginning of the RF burst, which starts at a zero crossing. The cursor on zoom trace Z2 (red trace) marks the end of the burst. The cursor horizontal readout shows the burst length as 8.33295 µs, a more accurate result.

Built-in measurement parameters

Oscilloscope and digitizer support software offers built-in measurement parameters. Most oscilloscopes include about twenty or more common measurement parameters like amplitude, frequency, rise time, and fall time to mention a few. Application-specific software packages can increase the number of available parameters to over a hundred. The standard parameter measurements are usually based on IEEE standard 181 which employs statistical techniques to make the measurements on pulse waveforms as shown in Figure 5.

Figure 5 IEEE Standard 181 bases pulse measurement parameters on a statistical determination of the top and base values of a measured pulse.

The amplitude values of the pulse top and base are determined by forming a histogram of the acquired samples of the waveform, shown as an inset on the right of the screen. A square wave or pulse waveform has a histogram with two distinct peaks. The mean value of the upper histogram peak is called the “top”; the mean of the lower-valued peak is called the “base” of the waveform. Using the statistical mean of many samples suppresses the effects of waveform aberrations like noise, overshoot, and ringing.

The pulse amplitude is the difference between the top and base. The maximum value of the waveform minus the top is the positive overshoot; likewise, the difference between the base and the waveform minimum value is the negative overshoot. The pulse width is the time difference between the leading and trailing edges crossing the mid-amplitude level midway between the top and base. The peak-to-peak value of the waveform is the difference between the maximum and minimum amplitudes. The transition-time measurements measure the time to transition from 10% to 90% of the pulse amplitude for rise time, and from 90% to 10% for fall time.

If the waveform is not a pulse, the measurement engine detects this because the waveform histogram has more or fewer than the two peaks that define a pulse. In that case, the amplitude measurement reverts to a peak-to-peak measurement, and the fact that the waveform is not a pulse is indicated by a measurement status icon under the parameter readout.
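A much-simplified version of this histogram-based determination can be sketched in a few lines of Python (illustrative only; the full IEEE 181 procedure is more involved). Here the samples clustered above and below mid-range stand in for the two histogram peaks:

```python
import numpy as np

def top_base(waveform: np.ndarray) -> tuple[float, float]:
    """Estimate the pulse 'top' and 'base' as the means of the samples
    clustered above and below mid-range -- a simplified stand-in for
    the two-peak histogram method described above."""
    mid = (waveform.max() + waveform.min()) / 2
    top = waveform[waveform > mid].mean()
    base = waveform[waveform <= mid].mean()
    return float(top), float(base)

# A noisy square wave: base near 0 V, top near 1 V, 20 cycles.
rng = np.random.default_rng(1)
clean = np.tile(np.concatenate([np.zeros(500), np.ones(500)]), 20)
noisy = clean + rng.normal(0.0, 0.02, clean.size)

top, base = top_base(noisy)
amplitude = top - base        # the statistical mean suppresses the noise
print(round(amplitude, 2))    # 1.0
```

Note how the mean over thousands of samples recovers the 1 V amplitude despite the added noise, which is the point of the statistical approach.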

In almost all cases, measurements made using measurement parameters are far more accurate than those made using cursors. They are also made automatically, saving a great deal of time.

Measurement Statistics

How do instrument measurements vary from measurement to measurement? Measurement statistics answer that question. Many instruments include statistics reporting along with the basic measurement parameters as shown in Figure 6.

Figure 6 Measurement statistics record how measurement values vary over multiple measurements showing the last value, mean, minimum, maximum, standard deviation, and total population.

Some oscilloscopes include all-instance measurements. Time-related measurements, like frequency and width, report one value for each cycle of the measured waveform: if you have 100 cycles of a signal on the screen, the measurement engine adds 100 measurements for each acquisition. Amplitude-related measurements add only a single value per acquisition. Over many acquisitions you can accumulate a great many measurement values, and measurement statistics provide very useful views of this data. The table under the waveform display, shown expanded in the blue box, lists the last value measured in the acquisition, the mean of all the acquired values, the minimum and maximum values of the set, the standard deviation of the set, and the total population of all the measurements. It also includes a status indicator and an iconic histogram of all the measurement values.

The amplitude measurement reports that 11,873 values are included in the statistics. The mean or average value is 237.5457 mV. The mean value is reported with higher resolution than the last value because the mean is an averaged value. Just as averaging improved the vertical resolution of the waveform, averaging multiple measurements improves the resolution of the measurement, hence the additional significant figures in the mean value.

The largest value, reported as the maximum, is 241.5 mV; the smallest value, the minimum, is 234.8 mV. These values can help detect transient events that occur during the acquisitions. Other tools can plot measured values versus time to see when transient events occur and match them in time with possible sources.

The standard deviation describes the distribution of the measured values about the mean; in this case it is 826 µV. The mean and standard deviation are useful in understanding the distribution of the measured values, as is the iconic histogram. The iconic histogram can be expanded into a full-sized histogram for more detailed analysis, with its own set of histogram measurements. All of these tools help you understand the dynamics of a particular measurement, and a knowledge of the distribution of measurements enables you to establish test limits for signals.
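The statistics themselves are ordinary descriptive statistics and are easy to reproduce. Here is a minimal Python sketch using hypothetical amplitude readings (the values below are invented for illustration and are not the ones in Figure 6):

```python
import statistics

# Hypothetical per-acquisition amplitude measurements, in volts.
measurements = [0.2375, 0.2381, 0.2369, 0.2415, 0.2348, 0.2377, 0.2372]

stats = {
    "last": measurements[-1],                    # most recent value
    "mean": statistics.mean(measurements),       # average of the set
    "min": min(measurements),                    # smallest value seen
    "max": max(measurements),                    # largest value seen
    "sdev": statistics.stdev(measurements),      # spread about the mean
    "num": len(measurements),                    # total population
}

# Extremes hint at transients; mean +/- a few standard deviations is a
# reasonable starting point for pass/fail test limits.
lower_limit = stats["mean"] - 3 * stats["sdev"]
upper_limit = stats["mean"] + 3 * stats["sdev"]
print(stats["num"], stats["min"], stats["max"])
```

An instrument's statistics readout accumulates exactly these quantities, only continuously and over far larger populations.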


These tools and techniques can help improve the measurement accuracy and reliability of your instruments. Other tricks can be gleaned from manufacturer’s webinars and application notes. The more you learn about your instrument, the more accurate and reliable your measurement results will be.

Arthur Pini is a technical support specialist and electrical engineer with over 50 years of experience in electronics test and measurement.


The post How to make better measurements with your oscilloscope or digitizer appeared first on EDN.

Apple’s Beats Powerbeats Pro: a repair attempt blow-by-blow

EDN Network - Mon, 09/12/2022 - 17:53

By now, you’ve hopefully seen (and enjoyed!) one of last month’s posts, where I shared the insights from Christoph Riehl. In his past professional life, Riehl was a member of the development team (focused mostly on the RF subsystem) at Siemens VDO for the Volvo key fob that I’ve been covering in several of my recent writeups.

This month, I’m once again “crowdsourcing” my content, this time from Huan Nguyen, a firmware engineer at local firm Halleck-Willard, Inc. (HWI), a Steripack company. Back in early June, I’d discussed how the premature failure of the embedded batteries in a pair of Beats Powerbeats Pro earbuds had resulted in them being useful only as lightweight paperweights, and that HWI had offered to attempt to resurrect them using replacement parts and video instructions from another company, Joe’s Gaming and Electronics:

I shipped the dead earbuds and replacement batteries, along with the tube of glue and tools, to HWI at the end of April, and Huan got back to me in mid-June with some disappointing news: “I’m sad to say I failed to resurrect the earbuds”. However, he’d thoroughly documented his travails, and he acquiesced to my request to republish and share them all with you here. Without further ado, over to you, Huan!

In a recent article for EDN, Brian Dipert writes about his experience suffering the result of obsolescence by design of the Powerbeats Pro earbuds. He reached out to our team to have somebody attempt the repair and document the process. I volunteered and this past weekend ensconced myself in the lab to try to bring the earbuds back to life. I pulled up the Joe’s Gaming and Electronics video repair guide that Brian mentions, fired up the soldering iron, and got to work.

Spoiler alert: I unfortunately failed in my attempt. However, the result of this attempt is documented so that readers can learn from my mistakes, and hopefully extend the life of their $200+ earbuds beyond just a couple of years.

The first step was to remove the housing cover from the first earbud. The instructor in the video recommends using a razor blade for this delicate process.

When I hear the word “razor”, I think “razor-sharp” or “cuts things like a hot knife through butter”. While working the blade through the housing plastic, I looked up and realized that, on the workbench, we have a sonic cutter available. I threw its switch and lowered the blade into the housing plastic.

It sliced easily through the housing plastic, with much less force required than for the razor blade. However, the cut it made was thicker than the razor blade, and the plastic was slightly deformed all along the cut.

When I finally broke through the seam all around the housing cover and popped it off, I discovered that the cut was quite rough. The housing cover itself looked ragged around the edges, and I feared (rightly, as it would turn out) that re-assembling the housing and cover would prove more difficult due to the deformed plastic.

The cut already having been made, I moved onto the next step of removing the battery shield. First the contact cable needed to be detached from the shield: I resorted to sliding the edge of the razor’s blade and gently levering the cable off the shield.

With the cable detached, the shield could be removed. The flathead screwdriver proved to be the perfect tool for levering the shield out of the housing.

With the battery shield out of the way, I had access to the battery and its lead wires. There was glue covering the terminals of the lead wires, as well as some holding the battery into the housing.

Removing the glue was easily done: a bit of heat from a hot air gun made it a simple task to pop the glue off from the terminals, and an X-ACTO knife sliced through the glue on the side of the battery (be careful not to short any connections, as I later did!).

With the glue removed, the soldering iron was enough to remove the battery wires. I moved the wire out of the solder blob with the iron until the blob cooled.

In hindsight, I should have set up a third hand tool or other fixture to hold the earbud while I manipulated the wire with a pair of tweezers in one hand, with the soldering iron in the other.

That done, I inserted the flathead screwdriver into the gap between the battery and PCBA/housing and levered the battery out of the enclosure.

The new battery slid right in without issue. However, its lead wires were just a bit too long, needing a bit of trimming before soldering to the terminals.

In retrospect, it might have been easier to solder the long wire to the terminal and then tuck the extra length into the gap between the housing and battery—cutting the wire required stripping it again, and I was required to strip the wire with a gentle touch using the edge cutters (rather than the convenience of a wire stripper tool).

That done, I slid the battery cover back into place, sliding it so that the tab at the top slotted back into place into the enclosure.

The remaining step in the process was to glue the housing cover back on to the housing. Applying the glue and holding the cover in place necessitated both hands, resulting in a lack of photographs for this step—but we’ll see the results later in a comparison to the second earbud.

For the second earbud, I remembered my resolution to forgo the sonic cutter and use the razor blade instead. That resulted in a much cleaner separation between the housing and its cover. I found that the best technique was to gently insert the blade’s edge into the seam, and then rock the razor back and forth while pressing it into the plastic, as opposed to trying to slide the blade back-and-forth in a cutting or sawing motion.

Unfortunately, I damaged the device in the process of swapping the second battery. I tore the cable off the shield and out of the device, as well as shorting something (as much as we like to call it “magic smoke”, I prefer keeping it inside any devices I touch!) and causing a spark inside.

Deeming the electronics irreparably damaged, I thought at least I might attempt a better re-assembly than I had performed for the first earbud. The first time, I added a bit too much glue and a bit too much around the outer edge, resulting in the glue oozing outside of the housing and creating a visible and colorful gap between the cover and housing. In this second attempt, I laid a line of glue around the entire length of the seam between the cover and the housing, making sure to bias the line towards the inside of the housing.

The end result of that was an earbud that looked close to new, with a barely-perceptible widening in the gap between the housing and the cover compared to how it was prior to disassembly. It certainly looked much better than the first earbud.

All in all, I would consider the repair not technically difficult nor lengthy (requiring me about 90 minutes to get through this process for the first time, undoubtedly less for any future attempts), but requiring a gentler touch than it might seem.

Part of the challenge in my eyes was not having all the information about the device I would like. I spent a lot of unnecessary time on the first earbud trying to delicately cut through the seam between the housing and cover because I wasn’t sure how thick it was and didn’t want to damage anything by cutting too quickly.

As these devices are designed for obsolescence rather than repair, I accepted that I couldn’t have found drawings or other information for them. Perhaps in the future such information will be available for others attempting to repair their earbuds, or even their laptops and their phones – New York state has just recently passed a right-to-repair bill, perhaps paving the way for other states in the future.

Brian, I, and many others will be looking forward to seeing users go from virtually having to toss their old tech into the landfill to being able to extend its useful life and save money while doing so.

I guess we now know why Joe’s Gaming and Electronics doesn’t do onsite repairs anymore!

Abundant thanks go to Huan for his time and effort (over the weekend!), including his extensive text and image documentation of the project, in attempting the repair of the Beats Powerbeats Pro. Thanks, too, to his managers at HWI for supporting my request and his involvement. Speaking of documentation, Huan assembled a sizeable tranche of images and videos, which I’ve linked to here if you’re interested in downloading and perusing them for yourself.

powerbeats full media (part 1)

powerbeats full media (part 2)

I asked Huan to re-open both earbuds’ housings before sending them back to me, as I plan to do a complete teardown soon. Until then, Huan and I both welcome your thoughts in the comments!

Brian Dipert is Editor-in-Chief of the Embedded Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.


The post Apple’s Beats Powerbeats Pro: a repair attempt blow-by-blow appeared first on EDN.

5G SoCs boost performance in consumer phones

EDN Network - Mon, 09/12/2022 - 17:26

Qualcomm brings advancements in photography, connectivity, and AI to entry-level and midrange mobile phones with its Snapdragon 4 Gen 1 and Snapdragon 6 Gen 1 SoCs. The chips offer improved processing performance with Kryo CPUs and Adreno GPUs, and both Motorola and iQOO will be adopting the new mobile platforms.

For the entry-level market, the 6-nm Snapdragon 4 Gen 1 increases CPU and GPU performance by up to 15% and 10%, respectively, compared to the Snapdragon 480, allowing smooth multitasking and immersive entertainment. It also features a triple image signal processor (ISP) that enables concurrent photo or video capture from three cameras and multi-frame noise reduction for crisp 108-Mpixel photos. An intuitive AI engine powers responsive always-on voice assistants, far-field detection, and echo cancellation. Fast 2.5-Gbps peak 5G download speeds are supported, along with 2×2 Wi-Fi and Bluetooth 5.2.

With up to 35% quicker graphics rendering and up to 40% faster processing than the Snapdragon 695, the Snapdragon 6 Gen 1 powers HDR gaming at 60 fps on mid-range phones. The device’s triple ISP allows 200-Mpixel photos and video capture with computational HDR using staggered HDR image sensors—a first in the Snapdragon 6 series. A 7th generation AI engine delivers a 3x improvement over its predecessor for intelligent assistance, including AI-based activity tracking. The SoC supports 3GPP Release 16 with 2.9-Gbps peak 5G download speeds, Bluetooth 5.2, and 2×2 WiFi 6E—another first in the Snapdragon 6 series.

Devices based on Snapdragon 6 Gen 1 are expected to be commercially available in Q1 2023, while devices based on Snapdragon 4 Gen 1 are expected to be commercially available in Q3 2022.

Snapdragon 4 product page

Snapdragon 6 product page

Qualcomm Technologies

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post 5G SoCs boost performance in consumer phones appeared first on EDN.

Montage samples DDR5 clock driver for PC memory

EDN Network - Mon, 09/12/2022 - 17:24

Montage Technology’s Gen1 DDR5 clock driver aids the development of memory modules for the next generation of desktop and notebook computers. The company has begun shipping its first engineering samples of the Gen1 DDR5 clock driver—CKD or DDR5CK01 as defined by JEDEC—to leading DRAM module vendors.

Montage explains that clock driver functions have long been integrated in the register clock driver (RCD) device, which is used in server platforms rather than PCs. As DDR5 data rates climb and clock frequencies rise, maintaining clock signal integrity becomes increasingly challenging. As the DDR5 data rate reaches 6400 MT/s and above, UDIMMs and SODIMMs used in desktop and notebook computers will need an on-DIMM clock driver to buffer and redrive the memory modules’ clock signal in order to meet the signal integrity and reliability requirements of the high-speed clock.

The Montage DDR5CK01 chip is designed to buffer the clock signal coming from the desktop or notebook CPU and then redrive the output clocks to the DRAM chips on the memory module. Compliant with the JEDEC DDR5CK01 standard, the driver supports data rates up to 6400 MT/s and the low-power management mode.

A datasheet for the Montage Gen1 DDR5CK01 clock driver is only available under a nondisclosure agreement.

Montage Technology

Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.


The post Montage samples DDR5 clock driver for PC memory appeared first on EDN.

Arm and Arteris join hands to align automotive IP integration

EDN Network - Mon, 09/12/2022 - 16:03

The system-on-chips (SoCs) catering to autonomous driving, advanced driver assistance systems (ADAS), cockpit and infotainment, vision, radar and lidar, body and chassis control, and other automotive subsystems mandate high performance and power efficiency for complex and demanding safety-critical tasks with differing workloads.

So, at a time when smart compute is quickly making its way to automotive, Arm and Arteris IP have decided to expand their existing automotive partnership to speed up SoC design innovation with a robust alignment of IPs and automation toolset integration. While Arm and Arteris have been long-term partners, the question is, what’s the strategic importance of this tie-up between two IP suppliers? “It’s all about how we are pre-integrating and validating SoC IPs and automation tools,” said Michal Siwinski, chief marketing officer at Arteris IP.

Ian Smythe, VP of product marketing at Arm, acknowledged that the automotive industry is at a critical inflection point with demand for autonomy, more capable ADAS, richer driver experiences, and electrification driving the need for more capable SoCs and microcontroller units (MCUs). “The collaboration between Arm and Arteris IP will facilitate integrated and optimized automotive solutions to enable faster time to market.”

Source: Arteris IP

The expanded relationship means that automotive designers working around Arm’s Cortex-A, Cortex-R, Cortex-M, and Mali processors as well as Arteris’ FlexNoC and Ncore interconnect IPs and Magillem IP deployment software will have highly optimized design flows. “In the past, you could buy all the design blocks and try to figure out how to integrate them,” said Frank Schirrmeister, VP of Solutions & Business Development. “This partnership provides the ability to optimize the IP integration from the get-go, resulting in better SoC design productivity.”

An average car now has at least 20 advanced SoCs, Siwinski noted. “Here you have a huge continuum of different types of automotive SoC configurations.” So, a choice of pre-integrated and pre-optimized IPs for high-performance compute and system-on-chip connectivity will make life easier for SoC designers working at semiconductor companies, Tier 1 suppliers, automotive OEMs, and ride-sharing companies.

Related Content


The post Arm and Arteris join hands to align automotive IP integration appeared first on EDN.

Advanced Oscilloscope Analysis – 4 Unique Capabilities with the 6 Series B MSO

ELE Times - Mon, 09/12/2022 - 15:25

With the lowest input noise and up to 10 GHz analog bandwidth, the 6 Series B MSO provides the best signal fidelity for analyzing and debugging today’s embedded systems with GHz clock and bus speeds. Its remarkably innovative pinch-swipe-zoom touchscreen user interface is coupled with a large high-definition display and up to eight FlexChannel inputs, each of which lets you measure one analog or eight digital signals, making the 6 Series B MSO ready for today’s toughest challenges (and tomorrow’s too).

The 6 Series B MSO also delivers simplified, advanced measurement and analysis. Characterize jitter on GHz clocks and serial buses with ease. Bring statistics into your everyday toolkit with integrated measurements. Use the same simple drag and drop action to add advanced and everyday measurements.

Four unique analysis capabilities in particular help the 6 Series B MSO stand out, including:

  • Spectrum View Synchronized Multi-channel Spectrum Analysis.

It is often easier to debug an issue by viewing one or more signals in the frequency domain. Oscilloscopes have included math-based FFTs for decades in an attempt to address this need. However, FFTs are notoriously difficult to use for two primary reasons.

First, when performing frequency-domain analysis, you think about controls like Center Frequency, Span, and Resolution Bandwidth (RBW), as you would typically find on a spectrum analyzer. But then you use an FFT, where you are stuck with traditional scope controls like sample rate, record length, time/div, and must perform all the mental translations to try to get the view you’re looking for in the frequency-domain.
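As an illustration of that mental translation, the sketch below converts spectrum-analyzer controls (center frequency, span, RBW) into the minimum oscilloscope acquisition settings needed to realize them with a plain FFT. The window factor and the RBW-to-acquisition-time relation are common FFT rules of thumb, not Tektronix specifications:

```python
# Sketch of the "mental translation" between spectrum-analyzer controls
# and oscilloscope FFT acquisition settings. Constants are illustrative
# assumptions, not instrument specifications.

def scope_settings_for_spectrum(center_hz, span_hz, rbw_hz, window_factor=2.0):
    """Return (min_sample_rate, min_record_length) needed to realize
    the requested frequency-domain view with a plain FFT.

    window_factor models RBW widening from the FFT window
    (~2.0 for a Blackman-Harris-like window; an assumption here).
    """
    # Nyquist: sample at least twice the highest frequency of interest.
    f_max = center_hz + span_hz / 2
    min_sample_rate = 2 * f_max

    # RBW is set by acquisition time: RBW ~ window_factor / T_acq,
    # so T_acq = window_factor / RBW and N = sample_rate * T_acq.
    acq_time = window_factor / rbw_hz
    min_record_length = round(min_sample_rate * acq_time)

    return min_sample_rate, min_record_length

# Example: 1 GHz center, 100 MHz span, 10 kHz RBW
rate, length = scope_settings_for_spectrum(1e9, 100e6, 10e3)
print(f"sample rate >= {rate/1e9:.2f} GS/s, record length >= {length:,} points")
```

Even this simple case shows why the translation is tedious: a narrower RBW or wider span forces a new record length and sample rate, which in turn change the time-domain view.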

Second, FFTs are driven by the same acquisition system that’s delivering the analog time-domain view. When you optimize acquisition settings for the analog view, your frequency-domain view isn’t what you want. When you get the frequency-domain view you want, your analog view is not what you want. With traditional oscilloscope FFTs, it is virtually impossible to get optimized views in both domains.

Spectrum View changes all of this. Tektronix’ patented technology provides both a decimator for the time-domain and a digital downconverter for the frequency domain behind each FlexChannel. The two different acquisition paths let you simultaneously observe both time and frequency-domain views of the input signal with independent acquisition settings for each domain. Other manufacturers offer various ‘spectral analysis’ packages that claim ease-of-use, but they all exhibit the limitations described above. Only Spectrum View provides both exceptional ease-of-use and the ability to achieve optimal views in both domains simultaneously.


Using Spectrum View Spectrum Analysis on Multiple Channels

  • Advanced Jitter Analysis Quickly Characterizes Clock Signal Quality

The 6 Series B MSO has seamlessly integrated the DPOJET Essentials jitter and eye pattern analysis software package, extending the oscilloscope’s capabilities to take measurements over contiguous clock and data cycles in a single-shot real-time acquisition. This enables measurement of key jitter and timing parameters such as Time Interval Error and Phase Noise to help characterize possible system timing issues.

Analysis tools, such as plots for time trends and histograms, quickly show how timing parameters change over time, and spectrum analysis quickly shows the precise frequency and amplitude of jitter and modulation sources.
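To make the Time Interval Error measurement mentioned above concrete, here is a simplified sketch of what such a measurement computes: measured clock-edge timestamps are compared against an ideal clock recovered by a least-squares fit. This is a conceptual illustration only, not the DPOJET algorithm:

```python
# Minimal sketch of a Time Interval Error (TIE) computation: compare
# measured rising-edge timestamps against an ideal clock recovered by a
# least-squares line fit (a simplification of what jitter packages do).

def time_interval_error(edge_times):
    """Return per-edge TIE values (seconds) for a list of edge times."""
    n = len(edge_times)
    idx = list(range(n))
    # Least-squares line fit: ideal_edge[k] = period * k + offset
    mean_i = sum(idx) / n
    mean_t = sum(edge_times) / n
    num = sum((i - mean_i) * (t - mean_t) for i, t in zip(idx, edge_times))
    den = sum((i - mean_i) ** 2 for i in idx)
    period = num / den
    offset = mean_t - period * mean_i
    return [t - (period * k + offset) for k, t in enumerate(edge_times)]

# 100 MHz clock with a small timing error injected on one edge
edges = [k * 10e-9 for k in range(8)]
edges[4] += 50e-12                       # 50 ps of displacement on edge 4
tie = time_interval_error(edges)
print(max(tie))                          # peak TIE lands on the disturbed edge
```

Histograms, time-trend plots, and spectra of exactly this kind of per-edge TIE record are what reveal whether jitter is random or driven by a periodic modulation source.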

Option 6-DJA adds additional jitter analysis capability to better characterize your device’s performance. The 31 additional measurements provide comprehensive jitter and eye-diagram analysis and jitter decomposition algorithms, enabling the discovery of signal integrity issues and their related sources in today’s high-speed serial, digital, and communication system designs. Option 6-DJA also provides eye diagram mask testing for automated pass/fail testing.


High Speed Serial and Jitter Test for 6 Series B MSO

  • Advanced Power Analysis Delivers Fast, Repeatable Power Supply Measurements

The 6 Series B MSO has also integrated the optional 6-PWR power analysis package into the oscilloscope’s automatic measurement system to enable quick and repeatable analysis of power quality, in-rush current, harmonics, switching loss, safe operating area (SOA), ripple, magnetics measurements, efficiency, Control Loop Response (Bode Plot), and Power Supply Rejection Ratio (PSRR).

Measurement automation delivers measurement quality and repeatability at the touch of a button, without the need for an external PC or complex software setup.
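The basic power-quality arithmetic behind such automated measurements can be sketched from sampled voltage and current waveforms. This is a simplified stand-in (real packages also handle harmonics, switching loss, SOA, and so on); the waveform values are illustrative:

```python
import math

# Sketch of basic power-quality math on sampled v(t), i(t) waveforms:
# RMS values, true power, apparent power, and power factor.

def power_quality(v_samples, i_samples):
    """Return (true power W, apparent power VA, power factor)."""
    n = len(v_samples)
    v_rms = math.sqrt(sum(v * v for v in v_samples) / n)
    i_rms = math.sqrt(sum(i * i for i in i_samples) / n)
    true_power = sum(v * i for v, i in zip(v_samples, i_samples)) / n
    apparent = v_rms * i_rms
    return true_power, apparent, true_power / apparent

# 50 Hz sine voltage with current lagging by 30 degrees,
# sampled over exactly one cycle (1000 points)
dt = 1 / 50 / 1000
v = [325 * math.sin(2 * math.pi * 50 * k * dt) for k in range(1000)]
i = [10 * math.sin(2 * math.pi * 50 * k * dt - math.pi / 6) for k in range(1000)]
p, s, pf = power_quality(v, i)
print(f"P={p:.0f} W, S={s:.0f} VA, PF={pf:.3f}")   # PF ~ cos(30 deg) ~ 0.866
```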

Measuring Bode/Control Loop Response of a Power Supply

  • Vector Signal Analysis for Examining Modulated Signals

The 6 Series B MSO, combined with available analysis software, offers cost-effective mid-range performance as either a 4-channel, 10 GHz bandwidth or an 8-channel, 5 GHz bandwidth multichannel, multi-domain Vector Signal Analysis (VSA) solution.

When analysis needs go beyond basic spectrum, amplitude, frequency, and phase-versus-time measurements, you can employ the SignalVu-PC vector signal analysis application. This enables in-depth transient RF signal analysis, detailed RF pulse characterization, and comprehensive analog and digital RF modulation analysis.

For example, Tektronix’ mixed signal oscilloscope-based approach to 5G New Radio testing, with dedicated digital down converters on each channel and SignalVu-PC VSA software, offers a novel approach to validate 5G NR designs that the traditional RF engineer may not have considered previously due to technical limitations in traditional FFT-based oscilloscopes.


The post Advanced Oscilloscope Analysis – 4 Unique Capabilities with the 6 Series B MSO appeared first on ELE Times.

RF Connectors in Smart Agriculture and Beyond

ELE Times - Mon, 09/12/2022 - 15:23

Mouser Electronics, Inc., announces the launch of a new content stream in collaboration with Molex, exploring the capabilities, challenges, and transformative potential of RF connectors. The content stream features more than a dozen in-depth resources on RF technology, including podcast episodes, white papers, blog posts, and product guides. Each piece of content links directly to product information from Mouser and Molex, connecting device manufacturers to the tools they need for their design applications.

The new RF Connectors content stream from Molex and Mouser features The Power of Smart Agriculture & the Role of RF Technology, a new podcast episode putting technological advancements in the context of agriculture’s significant role in society. The episode features a conversation between Darren Schauer, Product Manager of RF and Microwave Products at Molex, and David Pike, Content Director at Connector Geek. The two subject matter experts describe the potential applications of RF connectors in smart agriculture, including the dramatic expansion of information and data through connected machinery and sensors.

Cracking the Rural Access Nut: Is 5G the Answer?, a blog post from Molex Senior Director of Product Management and Marketing, Roger Kaufmann, explains how 5G radio frequencies can bring reliable connectivity to previously unserved rural areas. The blog post links to 5G technologies from Molex, including RF flex-to-board connectors, Ethernet transceivers, and cellular external antennas. The new RF Connectors content stream also includes a Smart Ag White Paper, which analyzes the newest applications in smart agriculture, as well as the technology advancements making them possible. The white paper explores a range of relevant topics, including precision agriculture, vision and sensor technology, and smart greenhouse applications.

From Design Chain to Supply Chain, Mouser offers customers a wide selection of the most advanced technology, including a comprehensive lineup of Molex products, helping designers avoid costly redesigns, manufacturing delays or even project terminations.

For more information, visit www.mouser.com

The post RF Connectors in Smart Agriculture and Beyond appeared first on ELE Times.

Mouser Now Shipping UnitedSiC (Now Qorvo) 750V UJ4C/SC SiC FETs

ELE Times - Mon, 09/12/2022 - 15:03

Mouser Electronics, Inc. is now stocking the UJ4C/SC FETs from UnitedSiC (now Qorvo) in an industry-standard, D2PAK-7L surface-mount package. The UJ4C/SC series devices are 750 V silicon carbide field-effect transistors (SiC FETs) that capitalize on the D2PAK-7L package option to deliver low switching loss, increased efficiency at higher speeds, and improved system power density. The FETs are optimized for applications such as onboard chargers, soft-switched DC/DC converters, battery charging, and IT/server power supplies.

The UJ4C/SC devices, available from Mouser Electronics, leverage a unique cascode SiC FET technology in which a normally-on SiC JFET is co-packaged with a silicon MOSFET to produce a normally off SiC FET. The FETs reduce inductance from compact internal connection loops, which, along with the included Kelvin source connection, results in low switching loss, enabling higher frequency operation and improved system power density.

The D2PAK-7L version of the UJ4C/SC series is available in on-resistance options from 9 mΩ to 60 mΩ, delivering design flexibility while maintaining generous design margins and circuit robustness. Rated to 750 V, the FETs offer a best-in-class on-resistance × area (RDS(on) × A) figure of merit, a low-drop body diode, ultra-low gate charge, and a 4.8 V threshold voltage that allows a 0 V to 15 V gate drive for ultimate versatility with low conduction losses.

To learn more, visit www.mouser.com

The post Mouser Now Shipping UnitedSiC (Now Qorvo) 750V UJ4C/SC SiC FETs appeared first on ELE Times.

The Internet of E-mobility

ELE Times - Mon, 09/12/2022 - 14:53

Connecting to the internet changed our lives, and now it’s changing our EVs

The Internet of Things has emerged as a defining paradigm, filling gaps across an ever-expanding technological field. It has gradually become an important part of people’s lives and can be felt everywhere. A considerable portion of IoT devices are created for consumer use, including connected vehicles, home automation, wearable technology, connected health, and appliances with remote monitoring capabilities. Driving smart is no longer just a fancy dream. The increasing use of EVs is driving vehicle digitization as well: when making the switch to electric vehicles, people expect more technological innovation, and demand for connectivity and mobility solutions is rising.

When electric vehicles were introduced, they had a plethora of issues. With time and technological advancements, progress was made and people started warming up to EVs. But issues like the availability of charging infrastructure, and technical malfunctions still persisted. To accelerate mass adoption, the EV industry decided to take the internet road.

Advancing electrification is amplifying the need for vehicles to be smart. EVs need the automobile, power grid, telecom, and digital industries to work together to create coherent user experiences that lead to faster adoption, and anchoring this development in IoT is the right way forward. Compared with conventional vehicles, EVs have fewer mechanical components, so software plays a key role in creating product differentiation, and the cutting-edge technology that IoT offers helps them stay competitive. Applying IoT to infrastructure also has great potential to meet sustainability goals.


Safety and smart driving

ADAS (advanced driver-assistance systems) is one of the best features offered through IoT. It enables real-time monitoring of vehicles and supports preventive maintenance, making them more reliable. It computes absolute and relative driving parameters and provides real-time tips to ensure better performance. Smart ADAS units connected to the Internet of Things over mobile data or Wi-Fi also take inputs from other vehicles (vehicle-to-vehicle, V2V) and from surrounding infrastructure such as buildings and roads (vehicle-to-everything, V2X). Features like real-time tracking and geo-fencing can enhance vehicle security, and the feedback data collected on performance helps companies improve their features.

Companies like Robert Bosch (Germany), Continental AG (Germany), ZF Friedrichshafen (Germany), Denso (Japan), Aptiv (UK), Valeo (France), and Magna International (Canada) dominate the global ADAS market, which is currently valued at an estimated $11.83 billion and expected to grow at a 15.9% compound annual growth rate (CAGR) through 2025. Combined with the rapid growth of the EV market, this means the majority of cars on future roads will come equipped with these safety and enhancement technologies.

India, however, is still not an ideal market for ADAS technology development. ADAS requires good road infrastructure to succeed. Most local roads have little to no lane markings, which limits lane keeping to expressways and well-paved highways. A lack of data about road conditions, traffic patterns, and driving habits remains a primary hurdle, and India also has stray animals wandering on roads, something far less prevalent in more developed countries. It will take some time for Indian roads and vehicles to be ready for this technology.
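The geo-fencing feature mentioned above reduces to a simple distance check against a virtual boundary. Here is a minimal sketch using a circular fence and the haversine formula; the depot coordinates and 500 m radius are illustrative values, not from any particular product:

```python
import math

# Sketch of a geo-fencing check: flag when a tracked vehicle's GPS fix
# leaves a circular fence. Coordinates and radius are illustrative.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6371000.0                         # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_fence(lat, lon, fence_lat, fence_lon, radius_m):
    return haversine_m(lat, lon, fence_lat, fence_lon) <= radius_m

# Fence: 500 m around a depot in Delhi (illustrative coordinates)
depot = (28.6139, 77.2090)
print(inside_fence(28.6142, 77.2095, *depot, 500))   # nearby fix -> True
print(inside_fence(28.7041, 77.1025, *depot, 500))   # far away fix -> False
```

A real tracker would run this check on every incoming GPS fix and raise a security alert on the first transition from inside to outside.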

Battery management

The BMS controls and monitors the battery’s processes and performance, and feeds the data to algorithms that assess the battery’s overall health. Managing the charging and discharging cycle ensures optimal battery health and reduces battery damage. The BMS takes various sensor readings from the vehicle and monitors the data; it must cope with the complex nature of power batteries, including high capacity, high power, wide temperature variation, and harsh driving conditions. IoT helps estimate power, state-of-charge, and state-of-health, and maintains them by enabling remote data logging. When a battery malfunctions, onboard sensor data obtained through IoT can help manage the problem: models are characterized using the data collected at each step, integrated with AI-based performance evaluation, and deployed on a server. The EV sends crucial sensor data to the server, which provides insights on the next course of action and performance, and detects and reacts to ground faults. Evolute Cleantech Solutions, Spark Innovations, Bacancy, Ziptrax Cleantech, and Grinntech are some Indian companies providing good battery management systems.
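The state-of-charge bookkeeping described above can be sketched with simple coulomb counting over periodic current samples. Real BMS firmware fuses this with voltage and temperature models; the pack capacity and currents here are illustrative:

```python
# Sketch of BMS state-of-charge tracking via coulomb counting.
# Capacity and current values are illustrative, not from any product.

class CoulombCounter:
    def __init__(self, capacity_ah, soc=1.0):
        self.capacity_c = capacity_ah * 3600.0   # amp-hours -> coulombs
        self.soc = soc                           # 0.0 .. 1.0

    def update(self, current_a, dt_s):
        """current_a > 0 means discharge, < 0 means charging."""
        self.soc -= (current_a * dt_s) / self.capacity_c
        self.soc = min(1.0, max(0.0, self.soc))  # clamp to valid range
        return self.soc

bms = CoulombCounter(capacity_ah=50.0, soc=0.80)
for _ in range(3600):                            # one hour at a 25 A draw
    bms.update(current_a=25.0, dt_s=1.0)
print(f"SoC after 1 h at 25 A: {bms.soc:.0%}")   # 25 Ah from a 50 Ah pack
```

Streaming these periodic SoC estimates to a server is exactly the remote data logging the text describes; the server-side models then judge long-term state-of-health from the accumulated record.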

Fault Alert and Preventive Maintenance System

An electric vehicle, being a machine, is bound to experience technical glitches, and the more advanced the working mechanism, the more opportunities for faults its complex structure invites. IoT-based fault alert systems warn drivers of EV faults, giving them time to act, by monitoring internal parameters such as heating rate, engine oil level, and the vehicle’s CO status. It is also necessary to know the overall temperature and moisture conditions across geographies and to keep a check on remote performance. This leads to a better customer experience, as users will find the vehicle more reliable. With IoT-based telematics, data collected from vehicle sensors can be rapidly displayed through widgets, sent as instant notifications, and compiled into automatic reports. Though EVs are well designed to prevent errors, parts can still fail or stop; IoT-based applications focused on early fault detection also help maintain the overall health of the vehicle so that rectifications can be made before the damage is severe.
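At its simplest, such a fault-alert system is a set of threshold rules applied to incoming telemetry. The sketch below shows the pattern; the parameter names and limits are illustrative, not from any vehicle:

```python
# Minimal sketch of an IoT fault-alert rule set: compare telemetry
# readings against (min, max) limits and emit alerts early, before
# damage becomes severe. Names and limits are illustrative.

LIMITS = {
    "motor_temp_c":      (None, 95.0),    # (min, max); None = unbounded
    "coolant_level_pct": (20.0, None),
    "cell_voltage_v":    (3.0, 4.2),
}

def check_telemetry(reading):
    """Return a list of human-readable alerts for one telemetry sample."""
    alerts = []
    for key, (lo, hi) in LIMITS.items():
        value = reading.get(key)
        if value is None:
            continue                      # parameter not reported this cycle
        if lo is not None and value < lo:
            alerts.append(f"{key}={value} below minimum {lo}")
        if hi is not None and value > hi:
            alerts.append(f"{key}={value} above maximum {hi}")
    return alerts

print(check_telemetry({"motor_temp_c": 102.0, "cell_voltage_v": 3.6}))
```

In a deployed system these alerts would be the instant notifications pushed to the driver, while the raw samples feed the automatic reports and dashboards.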

EV charging infrastructure

Electric vehicle users face challenges like knowing when and where to charge. Electric vehicle monitoring solutions with telematics notify the user when the vehicle’s battery is low and point to nearby charging stations. With IoT technology, EV charging stations become smart, connected, and easily accessible for remote support and maintenance. Charging stations are integrated with various third-party service providers such as energy suppliers, e-MSPs, and charge point operators, and use various protocols, connectivity options, and back-end cloud infrastructure to ensure seamless charging operations such as payment processing, software updates, scheduling, predictive maintenance, and usage analytics. YoCharge, ABB, Tata Power, and Delta Electronics are among the companies that have successfully developed EV charging infrastructure in India.

EVs are the future of the road, and bringing the internet into the picture will push them to new heights, enabling mass popularity and a smooth driving experience. The EV experience is not complete without digital interfaces, like access through mobile phones and rich intelligence from vehicle data collected over time. Route planning and optimization using these rich data sets and algorithms is key to success. India is gradually accepting EVs, and the government is working to increase awareness of the advantages of switching to electric.

Tanya Tyagi | Technology Journalist | ELE Times

The post The Internet of E-mobility appeared first on ELE Times.

AI-Based Security at the Endpoint

ELE Times - Mon, 09/12/2022 - 14:25

The Internet of Things (IoT) has transformed the fabric of the world into a smarter and more responsive one by merging the digital and physical universes into one. Over the past few years, the IoT has exhibited exponential growth across a wide range of applications. According to a McKinsey study, the IoT will have an economic impact of $4 – $11 trillion by 2025. The edge continues to become more intelligent, and vendors are racing to support more connected and smart endpoint devices.

Combining high-performance IoT devices with ML capabilities has unlocked new use cases and applications, resulting in the phenomenon known as the Artificial Intelligence of Things (AIoT). The possibilities of AIoT — AI at the edge — are endless.

AIoT can automate the processing of data, converting raw IoT data into meaningful information. It gives manufacturing operations the ability to improve production by increasing efficiency, raising product quality, and smoothing processes. AI-based industrial applications include maintenance, predictive quality control, fault detection, inventory management, and production planning and optimization.

The AIoT market is forecast to grow at a significant compound annual growth rate (CAGR) of 39.1% through 2026. Other market reports estimate that the AIoT market will exceed $100 billion by 2026. According to the Industrial AI and AIoT Market Report 2021–2026, the AI adoption rate in industrial settings has increased significantly from 19% to 31% in about two years, and the global market for IoT data-as-a-service solutions will reach $8.89 billion by 2026. These estimates identify edge devices as the fastest-growing segment of the AIoT market.

With this ever-increasing usage of AI, the need to secure IoT data has become a top-most priority.

Securing the Endpoint

An endpoint is any device that sits physically at the edge of a network. Endpoint devices can collect data, analyse it, perform computations, and make decisions based on the results. Applying AI at the endpoint has clear advantages, the main one being real-time processing: responses arrive with no network transmission delay.

Security is the biggest challenge confronting the Internet of Things and an impediment to its expansion. The major concern is that hackers will find new ways to exploit vulnerabilities and cause damage or steal information from connected devices. Endpoint security, part of the overall cybersecurity solution, protects endpoint devices from cyberattacks. Notably, not all attacks on a company come from outside: insiders with malicious intent and access to the network can also steal information or infect it with malware. Security risks can lead to huge financial losses for companies and even governments, as the impact of an attack on IoT is greater than one on traditional networks.

The IoT also presents a scalability challenge: systems expand by adding endpoint devices, and a large number of new connected devices can come online in a very short time. Endpoint devices are weak points because they are the entry point for incoming security threats. As the types of endpoints have evolved and multiplied, the security solutions that protect them have had to adapt as well.

One of the main problems in IoT security is that these devices are not only the weakest link in the cybersecurity chain, but they also offer a way for attackers to bypass perimeter defences. This increases the need for organizations that benefit from IoT technologies to discover new ways of securing IoT devices from exploitation by hackers.

Designing in security needs to be considered from the beginning at the architectural stage. Some of the key security principles include:

  1. Least privilege
    This first, basic principle ensures that users have only the limited access they need to carry out their jobs.
  2. Design for security
    Embedded systems design processes include threat analysis, security requirements, secure design, implementation, test/verification for security, and a response plan in case a problem arises after the product is released.
  3. Multi-layer protection
    To guarantee protection and prevent different types of attacks, protection should be applied in multiple layers through multi-layered platforms.
  4. Product life-cycle protection
    Beyond layered design, protection can be provided across the product life cycle, guaranteeing both security and integrity from production through shipment, deployment, and ultimately end-of-life.
  5. Root of trust (RoT) at the hardware level
    A RoT is the foundation for assuring a device’s trustworthiness. RoTs are the core of security, built from hardware, firmware, and/or software that provide a set of trusted, security-critical functions.
  6. Minimizing attack surface area
    This principle is about removing parts of a system to make it more secure, asking whether a feature is truly necessary. Redesigning a feature to be simpler often improves the overall system.


Leverage AI to identify and prevent cyber threats

AI can automate actions based on specific rules, detect threats, improve response times, identify recurring patterns, and analyze data at the endpoints. According to a recent TechRepublic report, a medium-sized organization can face more than 200,000 cyber alerts daily. While that volume is impossible for humans to handle, AI can manage it.

Every AI cybersecurity solution works in a unique way, and these models become smarter over time. AI uses machine learning and deep learning techniques to assess the behaviour of network components over time, identifying reliable and suspicious patterns and categorizing them accordingly. One of AI’s biggest advantages in cybersecurity is the ability to analyze massive amounts of data in a very short time with high performance and a low error rate, something impossible to do manually.
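The pattern-learning idea can be illustrated with a deliberately tiny stand-in for the ML models the text describes: learn the "normal" range of a metric from past observations, then flag large deviations. The metric and the z-score threshold are illustrative assumptions:

```python
import math

# Toy sketch of learning "normal" behaviour from past network metrics
# and flagging suspicious deviations (a z-score stand-in for the
# ML/deep-learning models used in real products).

def fit_baseline(history):
    """Learn mean and standard deviation from past observations."""
    n = len(history)
    mean = sum(history) / n
    var = sum((x - mean) ** 2 for x in history) / n
    return mean, math.sqrt(var)

def is_suspicious(value, mean, std, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    return abs(value - mean) > threshold * std

# Requests per minute seen from one endpoint over a quiet period
history = [118, 122, 120, 125, 117, 121, 119, 123, 120, 122]
mean, std = fit_baseline(history)
print(is_suspicious(124, mean, std))    # within normal range -> False
print(is_suspicious(480, mean, std))    # sudden spike -> True
```

Production systems replace the z-score with learned models and track many metrics at once, but the categorize-against-a-learned-baseline loop is the same.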

IoT devices connected to the internet are not as secure as they should be, making them a new frontier for hackers. Therefore, companies ought to explore ways to discover, predict, justify, act, and learn to ensure the protection of their customers’ security and privacy.

Security is the foundation for realizing connected AIoT applications across any segment. As the number of connected devices increases, the level of security for embedded devices must also be enhanced. Renesas offers a wide range of security software solutions and products based on the Root of Trust concept that contribute to easier integration of systems and robust security for embedded devices.

Renesas RA MCUs offer customers the ultimate IoT security by combining our secure crypto engine IP with NIST CAVP [Cryptographic Algorithm Validation Program] certifications on top of Arm TrustZone for Armv8-M, while also providing tamper detection and reinforcing resistance to side-channel attacks.

The RA MCU family is certified to PSA (Platform Security Architecture) Level 2 and offers Security Evaluation Standard for IoT Platforms (SESIP) certifications. The RA family includes solid, hardware-based security features, including integrated crypto subsystems within the MCU. Renesas’ Secure Crypto Engine, an isolated subsystem of the MCU, provides symmetric and asymmetric encryption and decryption, hash functions, true random number generation (TRNG), and advanced key handling (including key generation and key wrapping unique to the MCU). In addition, the RA MCU’s integrated SCE (Secure Crypto Engine) access-management circuit shuts down the crypto engine if the correct access protocol is not followed, and a dedicated RAM ensures that plaintext keys are never exposed to any CPU or peripheral bus. All these features are well integrated within our FSP (Flexible Software Package), which provides integrated, easy-to-configure support and a collection of Application Projects that enable you to incorporate security into your design effortlessly.

Endpoints are considered an entrance to an IoT system, making them an attractive target for hackers. Hence, it is imperative that designers employ security as a foundation by leveraging IoT-ready compute devices and relevant deep learning techniques to achieve complete end-to-end security. At Renesas Electronics, we invite you to take advantage of our high-performance MCUs and A&P portfolio, combined with a complete SW platform and tools, to build highly secure endpoint applications with added intelligence.

Kaushal Vora, Sr. Director, Head of Business Acceleration & Ecosystem, Renesas Electronics

The post AI-Based Security at the Endpoint appeared first on ELE Times.

Achieving Greater Safety for Tomorrow’s Autonomous Vehicles

ELE Times - Mon, 09/12/2022 - 14:23

With the evolution of autonomous vehicles, today’s cars are becoming both more connected and complex. Consumers and suppliers worldwide are demanding much more intelligence and customization, which adds pressure on product development teams to validate the underlying technology and start their design processes months earlier. Enhancements in hardware and software features also mean that the way designers think about automotive safety and security at the system-on-chip (SoC) level must evolve.

While fully autonomous vehicles are still a ways off, there’s a good chance that your car already has driver assistance features such as adaptive cruise control, lane guidance, or active braking. However, as the number of sensors being integrated in automotive systems increases to enable new capabilities, building security and quality into all stages of the design’s lifecycle becomes integral.

The requirements for automotive design are changing, from the silicon all the way to the fully assembled vehicle. Going forward, security and safety are inseparable considerations for automotive SoCs.

Forging a Safe Path to Vehicle Autonomy

Today, every vehicle requires a different blend of sensors to operate at each driving automation level. For instance, a combination of LiDAR, radar, ultrasound, infrared, and camera sensors delivers an increasingly comprehensive picture of the driving environment through object detection and scene segmentation. The role of these sensors is not only to track and identify objects. They must also be able to discern a person from a building in real time and predict their consequent actions within the context of the “scene.”

As you might expect, this level of intelligence calls for a remarkable amount of processing power to take place locally or be distributed within the vehicle itself. This means connecting the vehicle solely to the cloud isn’t a favorable option since any amount of delay between sending and receiving the information has the potential to endanger the safety of the passenger(s).

All of this has major implications for the future of processor architecture and automotive SoC design. In line with the ISO 26262 functional safety standard for production vehicles, any underlying hardware and software must have functional safety built in to minimize the risk of failure — and potential catastrophe. If that isn’t enough, security of an equally high level is essential for the automotive system to work as designed.

Redundancy of functionality is one way to achieve safety in a manner that also simplifies hardware and software integration, with both “safety” and “mission” mode applications running on a single chip. This makes for easier verification and certification than is possible when functionality is distributed over a board, throughout the car, or across different electronic control units (ECUs).

Taking a Holistic Approach to Automotive Safety and Security

While safety is one key consideration, it must go hand-in-hand with security. It is no coincidence that in a number of languages, like German or Chinese, the same word describes both. Combining the two is becoming a critical design criterion for teams worldwide.

As cars become more automated and gain access to over-the-air updates, they naturally become more connected. The nature of their operations means they constantly collect and transmit valuable information, which makes them potential prey for malicious attacks by cybercriminals. An attack might take the form of stealing “key” information from a keyless car system to enable a break-in; running a chip in a test or debug mode to gain system privileges; or hacking an infotainment system with a virus via a mobile handset. Whatever the attack approach, if a system is easily hacked, it is simply unsafe. Going forward, this will impact the entire supply chain, from IP blocks to the final assembled vehicles themselves.

An essential component in high-performing automotive SoCs is a functional safety manager. It acts as the brain of the vehicle for monitoring and escalating system failures in real time, independent of other processing occurring in the chip. This is necessary to meet the top-level ISO 26262 safety standards, including Automotive Safety Integrity Level (ASIL) D, a risk classification that dictates functional safety for a vehicle’s electrical and electronic systems.

In most cases, the safety manager is implemented as a dual-processor setup in which both cores operate in lockstep with a small temporal shift, so that a single fault cannot strike both cores at the same point of execution; the two sets of results are then compared to detect errors. Synopsys ARC Functional Safety Processor IP is certified for both ASIL B and ASIL D operation, and these processors are increasingly used as a chip’s safety manager. The IP supports holistic design with an ASIL D-compliant processor and security to resist attack. It also detects physical tampering and supports a trusted execution environment, while providing the comprehensive documentation required for ISO 26262 certification. This in turn is backed with functional safety software to help prioritize and optimize functional safety, add flexibility, and reduce the effort required for implementation and development.
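The shifted-lockstep idea can be sketched in a few lines of Python. This is a conceptual model, not Synopsys IP behavior; the two-cycle skew, the accumulator "program", and the fault-injection hook are invented for illustration:

```python
# Sketch of delayed (temporally shifted) lockstep, assuming a fixed
# two-cycle skew between the main core and the checker core. The main
# core's results are buffered, re-aligned against the checker's delayed
# results, and compared; any mismatch flags a fault.
from collections import deque

SKEW_CYCLES = 2  # illustrative shift between the two cores

def run_lockstep(program, fault_at=None):
    """Execute `program` (a list of ints to accumulate) on two model
    cores. Returns True if all compared outputs matched, False on a
    detected fault. `fault_at` optionally flips a bit in the checker
    core's input at that cycle, modeling a transient hardware fault."""
    main_history = deque()
    acc_main = acc_check = 0
    for cycle, value in enumerate(program):
        acc_main += value
        main_history.append(acc_main)  # buffer main-core results
        # The checker core runs the same step SKEW_CYCLES later.
        if cycle >= SKEW_CYCLES:
            v = program[cycle - SKEW_CYCLES]
            acc_check += v ^ (1 if fault_at == cycle else 0)
            if main_history.popleft() != acc_check:
                return False  # comparator flags the error
    return True

assert run_lockstep([1, 2, 3, 4, 5]) is True
assert run_lockstep([1, 2, 3, 4, 5], fault_at=3) is False
```

Because the checker trails the main core, a single transient disturbance hits the two cores at different points of their execution, which is exactly what makes the comparison able to catch it.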

Our ASIL D-compliant ARC SEM130FS Processor adds safety-critical hardware features, such as dual-core lockstep, to meet strict automotive safety requirements as well as mitigate random hardware faults and avoid system failures.


Advancing Vehicle Safety and Security Features at the SoC Level

Secure systems require SoCs with integrated security features. With automobiles becoming the next-generation application hub, embedding a hardware root of trust enables devices to protect their identity, uniquely identify and authenticate themselves, and create secure channels for remote device management and service deployment.

At Synopsys, we make automotive SoC design and verification highly predictable, helping teams achieve target ASILs with the least impact on quality of results. The ASIL B-compliant Synopsys tRoot Hardware Secure Module (HSM) for the automotive segment combines the Root of Trust security solution with hardware safety mechanisms, protecting SoCs against rising data tampering and physical attacks.


tRoot Hardware Secure Modules with Root of Trust

Both our IP products fit safety process and documentation requirements, targeting a broad range of applications for telematics, radar, advanced driver assistance systems (ADAS), V2X communications, and industrial SoCs. This allows SoC designers to find ways to implement advanced levels of security while eliminating points of failure.


The automotive industry is transforming on many levels, but its development of smarter, safer cars is perhaps the most noteworthy. Teams responsible for the automotive applications of today and tomorrow will need to think more holistically to factor in both safety and security. As designers continue to address both parameters during the early stages of the design cycle, a “fully” autonomous vehicle on a road near you is no longer far from reality.

Yankin Tanurhan, Senior Vice President Engineering, Synopsys Inc

The post Achieving Greater Safety for Tomorrow’s Autonomous Vehicles appeared first on ELE Times.

TERAKI Selects Infineon AURIX TC4x for ML-Based Radar Detection Software

ELE Times - Mon, 09/12/2022 - 13:25

Autonomous driving (AD) and advanced driver assistance systems (ADAS) rely on the precise sensing of the vehicle’s surrounding environment to safely navigate. Manufacturers around the world have turned to advanced sensors and algorithms to enhance perception and reach unprecedented levels of safety. TERAKI, a market leader in edge sensor processing, today released the latest radar detection software that accurately identifies static and moving objects with increased accuracy and less computational power. The real traffic solution runs on ASIL-D-compliant AURIX TC4x microcontrollers from Infineon Technologies AG.

“Automotive radar system performance has drastically increased over the last product generations,” said Marco Cassol, Director of Product Marketing for Infineon Automotive Microcontrollers. “Edge AI processing is one of the many innovations that has helped us drive this increase in radar performance. TERAKI’s unique radar algorithms are now being implemented in Infineon’s new parallel processing unit (PPU) to showcase next-generation radar performance from Infineon’s AURIX TC4x devices.”

“We have refined our algorithm to achieve more with less,” said Daniel Richart, TERAKI’s CEO. “With the minimum amount of data, our solutions detect and correctly classify static and moving objects with radar signals, providing AD and ADAS applications with the essential information for situational awareness and decision-making. Ultimately, we aim to ensure safety, at the edge, by reducing inference time and the required processing power of constrained devices.”

As radar turns into the industry standard for cost-effective signal processing, overcoming the limitations of this sensor technology becomes a priority. For example, interference can severely lower radar detection performance, leading to invalid detections in difficult multi-target situations, which also carry high processing requirements. Additionally, the precision required for reliable radar classification involves more data points per frame and sub-1-degree angular resolution if static and moving objects are to be correctly detected and classified.

TERAKI’s machine learning (ML) approach solves this challenge by working on raw data: it reduces noise, acts as a cognitive function that dissects information from the radar, identifies targets amid clutter and other interference in a noisy environment, and decreases the processing capacity needed at the edge. TERAKI’s ML detection delivers more points per object, leading to fewer false positives and thus increased safety, particularly when compared to conventional radar processing techniques such as CFAR (constant false alarm rate) detection.
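For context, CFAR itself is a simple adaptive thresholding scheme. The sketch below implements a basic one-dimensional cell-averaging CFAR (CA-CFAR) detector; the window sizes and scaling factor are illustrative choices, not parameters from TERAKI or Infineon:

```python
# A minimal cell-averaging CFAR (CA-CFAR) detector on a 1-D range
# profile: each cell is compared against a threshold derived from the
# average power of its surrounding training cells, skipping the guard
# cells immediately next to it.

def ca_cfar(power, num_train=8, num_guard=2, scale=5.0):
    """Return indices of cells whose power exceeds `scale` times the
    local noise estimate (mean of num_train training cells split
    across both sides of the cell under test)."""
    detections = []
    half = num_train // 2 + num_guard
    for i in range(half, len(power) - half):
        left = power[i - half : i - num_guard]
        right = power[i + num_guard + 1 : i + half + 1]
        noise = sum(left + right) / (len(left) + len(right))
        if power[i] > scale * noise:
            detections.append(i)
    return detections

# Flat noise floor of ~1.0 with a strong target at bin 20:
profile = [1.0] * 40
profile[20] = 50.0
assert ca_cfar(profile) == [20]
```

The weakness the article alludes to is visible in this scheme: a second target falling inside the training window inflates the noise estimate and can mask nearby detections, which is where learned detectors aim to do better.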

Ported to Infineon’s AURIX TC4x, TERAKI’s ML-based algorithm reduces radar signals after the first Fast Fourier Transform (FFT), achieving up to 25 times lower error rates for missed objects at the same RAM/fps. Compared to CFAR, classification is up to 20 percent more precise, and valid detections increase by up to 15 percent. With this release, TERAKI is improving the chipset architecture of edge devices, ensuring real-time processing performance on the AURIX TC4x and alleviating computing requirements by using 4- or 5-bit data representations instead of 8- or 32-bit ones without compromising F1-scores. This results in up to 2 times less memory being required.

The post TERAKI Selects Infineon AURIX TC4x for ML-Based Radar Detection Software appeared first on ELE Times.

Welcoming Diversifications in the Battery Market

ELE Times - Mon, 09/12/2022 - 12:58

An outlook on what could be powering EVs in the future.

Lithium-ion batteries were commercialized in the early 1990s and have been the front-runners in the energy storage segment ever since. High energy density, good rechargeability, and a low self-discharge rate made them favourable for prolonged use. They have been the most preferred option till now, but their high cost and complex extraction process have driven a growing need for alternatives. Let us take a look at some of the evolving battery technologies.

Lithium Ferro Phosphate

The LFP battery operates similarly to other Li-ion batteries, with lithium ions moving between the positive and negative electrodes to charge and discharge. However, phosphate is a non-toxic material compared to the cobalt oxide or manganese oxide used in other lithium-ion batteries. LFP batteries are capable of delivering constant voltage over a higher number of charge cycles, in the range of 2,000–3,000. Unlike many cathode materials, LFP is a polyanion compound composed of more than one negatively charged element. Its atoms are arranged in a crystalline structure forming a 3D network through which lithium ions move.

LFP is known for its low cost, with some estimates putting it as much as 70 per cent lower. The cost advantage comes from its chemical composition: iron and phosphorus are mined at enormous scale across the globe and are widely used in many industries. LFP batteries also have a smaller environmental impact; they don’t contain nickel or cobalt, which are supply-constrained, expensive, and carry a larger environmental footprint.

LFP batteries have a longer lifecycle than other lithium-ion batteries because their cells experience slower rates of capacity loss. Their lower operating voltage also means that cells are less prone to reactions that impact capacity. With a consistent discharge voltage and lower internal resistance, LFP-powered vehicles can deliver power faster and achieve a higher charge/discharge efficiency. However, LFP batteries have a low energy density, which means they require more protection. They also tend not to perform adequately at low temperatures, and they are more susceptible to transportation and ageing effects.

Despite being dated technology, LFP and its associated reduction in battery costs may be fundamental in accelerating mass EV adoption. Major automakers including Ford, VW and Tesla are increasingly leveraging lithium-iron-phosphate for electric vehicle batteries.


Aluminium-Air Batteries

It is neither a battery nor an engine, but rather an electric equivalent of an engine. In this “engine”, the fuel is aluminium metal (the anode), which reacts with the oxygen (the cathode) around it to create power. Since the cathode is just oxygen from the surrounding air, there is no need to carry the weight of another metal as in a conventional battery, which makes it considerably lighter. In the aluminium-air battery, aluminium hydroxide is produced when energy is released as aluminium reacts with oxygen in the ambient air. Due to its light weight and high energy density, an aluminium-air battery increases the driving range of electric vehicles. It is estimated that 25 kg of aluminium will provide a driving range of about 1,600 km, with the entire battery weighing about 90 kg.

These batteries are non-toxic, have a long shelf-life and are expected to make EV adoption more convenient, and accelerate the transition to zero-emission mobility.

A major disadvantage of the aluminium-air battery is that it is not rechargeable. Once the aluminium is consumed, the battery must be replaced, which means that some sort of battery-swapping station or service would need to be readily available to drivers. However, the by-product of the battery’s reaction, aluminium hydroxide, is 100% recyclable. Automobile manufacturers are currently testing this technology. Recently, Aditya Birla Group’s metal flagship, Hindalco, signed an MoU with Phinergy, a leading Israel-based pioneer in metal-air battery technology, and IOC Phinergy Private Limited (IOP), a joint venture between Phinergy and Indian Oil Corporation, to create aluminium-air batteries for electric vehicles.

Lithium-Sulphur Batteries

Li-sulphur batteries can overcome Li-ion battery limitations in terms of cost, thanks to the abundance of available sulphur, while offering a reduced environmental footprint. These batteries have a much higher energy density than Li-ion batteries. Another great advantage of Li-S batteries is their affordability, owing to sulphur being a far more abundant material than cobalt. The Li-S battery system has a redox reaction-based storage mechanism, which delivers higher energy density. Sulphur is sourced from industrial waste, and cardanol is sourced from a bio-renewable feedstock that is easily available, non-toxic, and environmentally friendly.

A few months back, researchers at Monash University in Australia developed a very high-performance and energy-efficient lithium-sulphur (Li-S) battery with a potential electric vehicle (EV) range of 1,000 km.

Li-sulphur battery technology leverages principles of green chemistry. From gadgets to drones, electric vehicles, and other products, the Li-S battery has the potential to aid multi-billion-dollar industries. However, these batteries come with problems of short lifespan and energy loss, along with transportation challenges because of the chemicals used in them. Johnson Matthey, LG Chem, Morrow Batteries, NOHMs Technologies, OXIS Energy, PolyPlus, Sion Power, and Williams Advanced are some of the lithium-sulphur battery manufacturers.


Sodium-Ion Batteries

Sodium-ion batteries have great potential. They are energy dense, non-flammable, and operate well in colder temperatures, and sodium is cheap and abundant. Sodium-based batteries also promise to be more environmentally friendly and even less expensive than lithium-ion batteries. Sodium-ion battery performance has been limited by poor durability and low energy density, but developments are underway to overcome these issues. Some manufacturers have noted that the Li-ion battery is the most expensive component of an EV, accounting for 40-50% of its cost. If sodium-ion cells are manufactured on a mass scale, many in the sector believe they will make EVs more affordable.

Sodium is a common element available in plentiful amounts across the world. It is usually mined from soda ash and can be found almost anywhere, even in seawater, making it the 7th most abundant element in the world. It is much cheaper than lithium because it costs less to extract and purify.

Sodium-ion battery cells can be manufactured with abundant metals such as iron and manganese; Li-ion, on the other hand, requires cobalt, which is limited in reserve, unevenly distributed across the world, and highly expensive to obtain. Na-ion batteries can be built using existing battery equipment and don’t require a massive redesigning effort. They are also able to operate over a wider temperature range.

They are non-flammable and free of thermal runaway, thereby reducing the risk of battery fires. They are lightweight, which will ensure more efficient and agile handling of EVs. Safety risk during transit is also minimal: manufacturers can transport sodium-ion batteries with the battery terminals connected and the voltage held at zero. Top manufacturers of sodium-ion batteries include Faradion Limited, AGM Batteries Ltd., Altris, and NGK Insulators, among others.


Zinc-Based Batteries

Advancements in zinc-based batteries are now presenting potential alternatives to Li-ion batteries. Zinc-based batteries are not a new technology: the first patent was issued in 1901 to Thomas Edison, and they were commercialised in the 1930s. Li-ion batteries use a process called intercalation that allows for lighter, more compact batteries that still have high energy density, properties that make them ideal for EVs.

However, recent developments in zinc-based batteries show considerable promise in matching the feasibility of Li-ion batteries while improving on other parameters like safety and recyclability. Lithium is highly reactive to water, and manufacturing lithium batteries requires complicated and expensive processes to control factory environments. Zinc-ion batteries are water-based and avoid this issue altogether.

Unlike Li-ion batteries, zinc-ion batteries do not require formation cycling after manufacturing to increase longevity, allowing them to travel off the production line quicker. The primary ingredients of zinc-ion batteries are zinc and manganese. Both are widely available in India, which ranks in the top 5 for production of both, making zinc a viable option for domestic battery production. Manufacturing zinc-ion batteries is also comparatively straightforward due to their similarities with Li-ion batteries, as the manufacturing expertise and equipment accrued over time for Li-ion batteries can be leveraged for zinc-ion batteries.

Other zinc-based batteries are showing promise as Li-ion alternatives for EV applications. For example, aqueous 3D sponge zinc-based batteries avoid the rechargeability issues commonly faced by older zinc battery technology.

China’s share of the lithium-ion market is estimated to be around 80%, and China is also the global leader in recycling lithium-ion batteries. India and other countries can greatly benefit from the evolution of battery alternatives because of their abundant availability, which will reduce import costs by a large margin and thereby bring down the overall cost of EVs. Battery technology advancements require thorough governmental support to ensure that no economic or technological obstacles pose a threat to development. New batteries require a considerable amount of research and testing to be approved, and they need proper and widely available charging infrastructure as well. Finding better battery alternatives should be a never-ending process in the quest to make our electric vehicles more driver- and environment-friendly.

Tanya Tyagi | Technology Journalist | ELE Times

The post Welcoming Diversifications in the Battery Market appeared first on ELE Times.

The Evolution of Telecommunications Infrastructure in Factories

ELE Times - Mon, 09/12/2022 - 12:01

Effective and efficient manufacturing is performed in smart factories while exchanging data between various systems and equipment in factories and also between core information systems, clouds, and other technologies in companies. When exchanging data inside and outside a factory, network technology called an “industrial network” is used. This technology is different from that used in regular offices and homes. An industrial network is communications technology that improves data communication quality and reliability, real-time performance, security, and other elements to suit the quantity and content of the data handled and the purpose of use and the usage environment in the factory.

Ethernet, Wi-Fi, and other networks have become fast enough in recent years to transmit high-definition video data in real time. Even so, some aspects still make them difficult to use in factories, particularly communication quality, reliability, and real-time performance.

For example, it is frustrating when video takes time to be displayed despite clicking on the thumbnail of a video you want to watch on a video site. However, this may lead to an even more serious problem in a factory. For instance, what if a situation arises in which a system develops a malfunction and the arrival of an emergency stop signal is delayed despite it being sent? Defective products may continue to be made on-site. This may lead to enormous damage being suffered. A function that ensures such a situation does not arise is incorporated into network technology in industrial networks. Currently, industrial network technology is evolving from conventional control communications technology called a “field network” to Ethernet-based time-sensitive networking (TSN) that is even faster and easier to use in anticipation of the evolution of smart factories.

From the control of systems to line management and operation optimization over entire companies

There are two main types of industrial network used in factories (Fig. 1). One is a field network, in which the control data of systems and equipment on the line, along with the data of sensors that detect the state of work in progress and other information, is exchanged with a programmable logic controller (PLC). The other is a controller-to-controller network that connects multiple PLCs and the systems that monitor lines.


Fig. 1: Industrial network configuration

In recent years, industrial networks in factories have also begun coordinating with the core system that manages the operation status of multiple factories, the procurement of parts and materials across the entire company, inventory information, and other data. The information system network that forms the core system and the industrial network in the factory are located apart, so in many cases the two are connected by a public network. Accordingly, they are connected via a firewall to protect highly confidential industrial information from cyber-attacks and other incidents.

Industrial networks have diverse standards with different characteristics

There are various standards with different characteristics, usage settings, and compatible devices for the protocols of industrial networks.

The controller area network (CAN) in-vehicle network standard, the RS-485 serial communications standard, and other standards were used at first in the fieldbus with the development of factory automation.  After that, FA device manufacturers and manufacturing industry groups proposed multiple unique standards to ensure development into a high-speed and highly reliable standard capable of being applied to even more advanced FA. Profibus, formulated as a standard by an industry group in Germany called PROFIBUS & PROFINET International, and other standards are widely used globally, whereas CC-Link, MECHATROLINK, and other standards are often used in Japan.

There are also many standards for controller-to-controller networks, including PROFINET, EtherNet/IP, EtherCAT, Modbus TCP/IP, CC-Link IE Field, and Sercos III. The adoption of Ethernet-based systems is currently spreading in this field; all of the standards just listed are Ethernet-based technologies. The reason Ethernet-based technologies have spread is the focus on coordination with information system networks. It has become possible to manage and control all connected devices at once with an IP address by utilizing a data exchange standard called OPC UA.
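As a small illustration of how register-oriented fieldbus traffic maps onto Ethernet, the sketch below builds a standard Modbus TCP "Read Holding Registers" request frame in Python; the transaction ID, unit ID, and register addresses are illustrative values:

```python
# Ethernet-based industrial protocols such as Modbus TCP wrap a simple
# register-oriented request in a TCP/IP packet. This builds a standard
# "Read Holding Registers" (function 0x03) request frame.
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Return the raw bytes of a Modbus TCP request frame (MBAP header
    + PDU), ready to be sent over a TCP socket to port 502."""
    pdu = struct.pack(">BHH", 0x03, start_addr, count)  # function, addr, qty
    mbap = struct.pack(">HHHB",
                       transaction_id,  # echoed back by the server
                       0x0000,          # protocol ID: always 0 for Modbus
                       len(pdu) + 1,    # remaining bytes incl. unit ID
                       unit_id)
    return mbap + pdu

frame = modbus_read_holding_registers(1, 0x11, 0x006B, 3)
assert frame.hex() == "0001000000061103006b0003"
```

The same request/response shape carried over TCP/IP is what lets these controller-to-controller protocols coexist with ordinary office networking gear.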

Wireless communication is also widely utilized in factories

Smart factories are being built and digital transformation (DX) in companies is being implemented. This has led to diverse devices and equipment being connected via networks more than ever before. As a result, there is a need for new network technologies that can be applied to even more diverse devices and that can handle even more usage settings.

To meet these needs, a standard technology has appeared that enables real-time communication in an Ethernet environment: TSN (Fig. 2). TSN makes it possible to unify communications in smart factories under a single Ethernet-based standard suitable for industrial use. It does this by incorporating various functions into the Ethernet technology used in office automation and other applications to meet the demands of the times. These functions include time synchronization, the securing of bandwidth to enable real-time communication, improved reliability through redundancy, and increased security through the isolation of nodes that could leak information.

Fig. 2: Unification of networks in a factory on an Ethernet-basis in TSN
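The time-synchronization function underpinning TSN builds on a PTP-style timestamp exchange (IEEE 1588/802.1AS). Assuming a symmetric link, the slave's clock offset and the path delay follow from four timestamps; the sketch below shows the arithmetic with purely illustrative numbers:

```python
# PTP-style sync: t1 = master send time, t2 = slave receive time,
# t3 = slave send time, t4 = master receive time. With a symmetric
# link, two equations give both the clock offset and the path delay.

def ptp_offset_and_delay(t1, t2, t3, t4):
    """Return (slave clock offset, mean path delay), assuming the
    one-way delay is the same in both directions."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay

# Example: slave clock runs 100 us ahead; one-way delay is 10 us.
offset, delay = ptp_offset_and_delay(t1=0, t2=110, t3=200, t4=110)
assert offset == 100 and delay == 10
```

Once each node knows its offset, scheduled (time-aware) traffic shaping can hand each stream its reserved transmission window, which is what gives TSN its deterministic latency.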

There is also growing momentum to connect the systems and equipment placed in factories to networks using wireless LAN, Bluetooth, and other short-range wireless communications. If sensors are connected wirelessly, it becomes possible to install them in places where they could not previously be installed, and it also becomes easier to change the layout of the line. However, Wi-Fi and Bluetooth do not offer the same communication quality and reliability as industrial networks. Accordingly, attempts have been made to use the fifth-generation mobile communications system (5G), which achieves ultra-high speeds, ultra-low delays, and massive simultaneous connections, in place of industrial networks. A local 5G system that grants a 5G base station license only within a specified area has started in Japan. It is anticipated that it will also become possible to use TSN technology over 5G in smart factories, making data utilization in factories even more effective and efficient.

Courtesy: Murata

The post The Evolution of Telecommunications Infrastructure in Factories appeared first on ELE Times.

Optimize Serial-to-Ethernet Communication for Smart Transportation

ELE Times - Mon, 09/12/2022 - 10:54

Thanks to advanced technologies, real-time remote monitoring of transportation systems across a vast number of regions is now a reality, heralding the era of smart transportation. However, many legacy devices at roadsides and stations are still using serial communications. Besides finding a serial-to-Ethernet solution that helps you enable remote monitoring applications from roadsides all the way to the control center, you also need better serial-to-Ethernet communication technology to overcome challenges such as long-distance communications and complex communication requirements in large-scale applications. For an easy-to-use serial-to-Ethernet solution, our NPort serial device servers support a variety of operation modes to make it easy for you to send and receive serial data over TCP/IP networks. In this article, we illustrate communication challenges in different application scenarios and how you can use TCP/UDP operation mode, featured in our NPort serial device servers, to optimize serial-to-Ethernet communication for your smart transportation applications.

Scenario 1: Road Traffic Monitoring 

A variety of controllers and sensors at roadsides collect data on both traffic and environmental conditions. Deployed miles away from each other, these field devices must communicate with traffic control centers to provide operators with real-time road conditions. Correspondingly, operators must provide instant information to road users regarding traffic jams and severe weather. To collect field data in such large-scale applications and transform it into useful information for road users, operators may encounter difficulties dealing with multiple serial data requests from different application programs and longer response times when incidents occur.

Enhance Transmission Accuracy With a Command-by-command Function

Our NPort serial device servers support TCP server mode, often used in remote monitoring applications to connect with field sensors such as traffic controllers, road sensors, and other types of devices. Central systems in the control center running TCP-client programs initiate contact with the NPort, establish a connection, and receive serial data from field devices. When multiple hosts contact the NPort simultaneously, our TCP server mode supports the Max Connection function that enables multiple hosts to collect serial data from the same field device at the same time. Although this function makes multiple command requests possible, it could lead to potential data collisions. Thus, we designed the Command-by-command function to prevent serial data collisions when you enable the Max Connection function. The Command-by-command function allows the NPort to store the commands in its buffer when it receives a command from any of the hosts on the Ethernet. These commands will be sent to the serial ports on a first in first out (FIFO) basis. Once the field device responds, the NPort will save that response to its buffer and then send the response to where the command originated.
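The first-in-first-out behaviour described above can be sketched as a small queue in Python. This is a conceptual model, not NPort firmware; the host names and the echoing "field device" are invented for illustration:

```python
# Sketch of the command-by-command idea: requests arriving from several
# TCP hosts are buffered and forwarded to the single serial port
# strictly first in, first out, so each response can be routed back to
# the host that issued the corresponding command.
from collections import deque

class CommandByCommandBuffer:
    def __init__(self, serial_transaction):
        self.queue = deque()                  # (host_id, command), FIFO
        self.serial_transaction = serial_transaction

    def submit(self, host_id, command):
        """Buffer a command received from one of the Ethernet hosts."""
        self.queue.append((host_id, command))

    def drain(self):
        """Run queued commands one at a time on the serial side;
        yield (host_id, response) pairs in submission order."""
        while self.queue:
            host_id, command = self.queue.popleft()
            yield host_id, self.serial_transaction(command)

# A stand-in field device that just acknowledges each command:
buf = CommandByCommandBuffer(lambda cmd: b"ACK:" + cmd)
buf.submit("host-A", b"READ1")
buf.submit("host-B", b"READ2")
assert list(buf.drain()) == [("host-A", b"ACK:READ1"),
                             ("host-B", b"ACK:READ2")]
```

Serializing the transactions this way is what prevents two hosts' commands from interleaving on the shared serial line.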

Reduce Network Resume Time With TCP Alive Check Timeout Function

When the host takes the active role in establishing a TCP connection (while the NPort acts as a TCP server, passively waiting for a client to connect), the NPort has no way to determine that the network has failed and will carry on as if the connection were still there. Even when the network connection resumes, the client cannot reestablish a connection with the device because the resource is still occupied. Consequently, someone must travel to the field site and reboot the NPort to free up the resource, which is extremely inefficient in terms of both labor and time. To address this issue, TCP server mode includes a TCP Alive Check Timeout function that gives the NPort a fail-safe mechanism when the network is disconnected: by checking the TCP/IP connection status periodically, it keeps track of whether the Ethernet connection is actually alive.
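The same idea can be approximated on an ordinary host with OS-level TCP keepalive probes. The sketch below is analogous in spirit to the NPort's Alive Check, not its actual implementation; the function name and default timings are assumptions:

```python
import socket

def enable_alive_check(sock, idle=60, interval=10, probes=3):
    """Turn on OS-level TCP keepalive so a dead peer is eventually
    detected and the connection resource can be freed."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Fine-tuning options are platform-specific, so guard each one.
    for name, value in (("TCP_KEEPIDLE", idle),
                        ("TCP_KEEPINTVL", interval),
                        ("TCP_KEEPCNT", probes)):
        if hasattr(socket, name):
            sock.setsockopt(socket.IPPROTO_TCP, getattr(socket, name), value)
    return sock
```

With these options set, the kernel probes an idle peer and tears down the connection after repeated failures, so the resource is not held forever by a crashed network segment.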


Scenario 2: Access Control Systems

Many intelligent transportation systems, such as parking systems and entry gates at stations, use access control systems. Such systems usually need to actively collect serial data through card readers and transmit it over TCP/IP back to multiple systems for authentication and payment calculation. When a connection fails, both users and operators lose time and money. To enhance connection reliability, you must ensure your serial-to-Ethernet solution can deliver the correct serial data over TCP/IP networks and provide sufficient transmission bandwidth for backup systems.

Deliver Requested Serial Data With a Data Packing Function

The NPort serial device servers support TCP client mode, often used in access control systems to connect with serial card readers and other devices. In this scenario, the data is sent back to the host application program for further processing. One problem with transporting serial data over TCP/IP networks is that a message can be split across separate Ethernet packets, causing the application program to fail. Our NPort serial device servers provide Data Packing functions to ensure that serial data arrives as a complete, recognizable packet so that the application can receive and process requests properly. Since the application program recognizes a specific character as the end of a data stream, the Delimiter function, one of these data packing functions, makes the NPort immediately pack and send all data in its buffer to the Ethernet when that character is received on its serial port. This way, your payment system receives serial data exactly as requested.
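The delimiter behavior is easy to illustrate: reassemble chunks that TCP may deliver in arbitrary sizes, and emit a message only when the delimiter appears. This is a behavioral sketch, not Moxa's code, and the carriage-return delimiter is just an example:

```python
def pack_by_delimiter(chunks, delimiter=b"\r"):
    """Reassemble serial data that may arrive split across Ethernet
    packets; emit one complete message per delimiter seen."""
    buffer = b""
    messages = []
    for chunk in chunks:
        buffer += chunk
        while delimiter in buffer:
            message, buffer = buffer.split(delimiter, 1)
            messages.append(message)
    return messages, buffer  # leftover bytes wait for the next chunk
```

Even if a card-reader record is split mid-number across two packets, the application only ever sees whole records.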


Enhance Connection Efficiency With Connection Control Function

When the NPort is configured in TCP client mode, it can decide when to establish or drop a TCP connection with the host by enabling the Connection Control function. This function lets you limit TCP connections to those actually required and increases the efficiency of the host server by automatically disconnecting unused connections. Many different events can be defined to establish or disconnect a TCP connection; a very common one is Any Character/Inactivity Timeout. Here, any serial data activity triggers the NPort to establish a TCP connection with the host. If the serial end is idle for a specified time, the NPort drops the TCP connection until serial data activity resumes. In this situation, you can use our Max Connection function to connect a backup host for serial data collection without worrying that it will occupy your transmission bandwidth.
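The Any Character/Inactivity Timeout policy reduces to a small state machine. In this sketch the connect and disconnect actions are just state flags rather than real sockets, and all names are illustrative, not Moxa's implementation:

```python
class ConnectionControl:
    """Any Character: the first serial byte (notionally) opens the TCP
    connection. Inactivity Timeout: the link is dropped after
    `idle_timeout` seconds without serial activity."""

    def __init__(self, idle_timeout):
        self.idle_timeout = idle_timeout
        self.connected = False
        self.last_activity = None

    def on_serial_data(self, now):
        self.connected = True       # any character triggers a connection
        self.last_activity = now

    def poll(self, now):
        if self.connected and now - self.last_activity >= self.idle_timeout:
            self.connected = False  # idle too long: release the connection
        return self.connected
```

The host server therefore only holds connections for devices that are actively producing data.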

Scenario 3: Passenger Information Systems

Smart transportation uses passenger information systems to provide commuters with real-time transport information. Operators need to broadcast (or multicast) the same messages to a set of LED displays to show information such as train schedules at stations or road conditions on highways. This application requires faster transmission so that commuters can receive real-time information to adjust their commute route.

Enhance Transmission Speed With UDP Mode

If the application requires real-time data transmission and the socket program uses the UDP protocol, you can set the NPort to UDP mode. The major difference from TCP server/client modes is that in UDP mode no connection needs to be established before data is transmitted. UDP mode sends data faster because the time required for TCP's three-way handshake is eliminated, making it suitable for applications that need real-time transmission and can tolerate occasional data loss.

In UDP mode, a multicast IP address can be set for every serial port, and all devices that subscribe to the same multicast IP address will receive the message assigned to that IP address. The benefit of multicast is that it not only efficiently sends the message to multiple destinations but also saves valuable bandwidth, because it does not transmit the same data to different destinations multiple times.
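In socket terms, each display subscribes by joining the multicast group, and the sender simply addresses datagrams to it; a single `sendto` then reaches every subscriber. The group address, port, and helper names below are examples, not Moxa settings:

```python
import socket
import struct

MCAST_GRP = "239.1.2.3"   # example multicast group address
MCAST_PORT = 5007         # example port

def make_sender(ttl=1):
    """No handshake needed: the sender just addresses datagrams to the group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, ttl)
    return sock

def make_receiver():
    """Each LED display subscribes by joining the multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

# One sendto reaches every subscribed display without duplicating the data:
# make_sender().sendto(b"LINE 3 DELAYED 10 MIN", (MCAST_GRP, MCAST_PORT))
```

This is exactly the bandwidth saving the paragraph above describes: the network, not the sender, fans the message out to all subscribers.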

Our NPort serial device servers provide a variety of functions for different operation modes to meet your demands in industrial applications. You can download our guide to learn more about other functions. In addition, our NPort serial device servers feature security functions and a variety of OS driver support to ensure your serial devices connect easily and securely to modern systems. Learn more about how our serial connectivity solutions help you take your serial devices into the future of networking.

Courtesy: Moxa

The post Optimize Serial-to-Ethernet Communication for Smart Transportation appeared first on ELE Times.

Dual 200mA op amp from STMicroelectronics Drives Power-Hungry Industrial and Automotive Loads

ELE Times - Mon, 09/12/2022 - 10:27

The STMicroelectronics TSB582 dual high-output amplifier simplifies circuitry for driving inductive and low-ohmic loads like motors, valves, and rotary resolvers in industrial applications and automotive systems such as steer-by-wire and auto-parking.

The TSB582 operates from 4V-36V supplies and contains two operational amplifiers (op amps), each capable of sinking or sourcing up to 200mA. This enables direct connection of a load in bridge-tied mode, allowing one TSB582 to replace two single-channel power op amps or high-current drivers built from discrete components. By integrating two op amps into one package, the TSB582 saves up to 50% of the board space and lowers the bill of materials.
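The bridge-tied benefit follows from simple arithmetic: with the two outputs driven in anti-phase, the load sees twice the single-ended swing. The 0.2V output headroom used in this back-of-the-envelope sketch is an illustrative assumption, not a TSB582 datasheet figure:

```python
def output_swings(supply_v, headroom_v=0.2):
    """Peak-to-peak swing for a single-ended output vs. a bridge-tied
    load driven by two anti-phase outputs of the same amplifier."""
    single_ended = supply_v - 2 * headroom_v   # swing of one output
    bridge_tied = 2 * single_ended             # load sees both outputs
    return single_ended, bridge_tied
```

At a 36V supply the drive across the load roughly doubles, which is why one dual device can stand in for two single-channel drivers.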

Available in industrial- as well as automotive-grade versions, the TSB582 addresses applications such as controlling robot movement and position, conveyor belts, and servo motors. Automotive applications include motor-position sensing in systems such as steer-by-wire and electric-traction motors, as well as tracking road-wheel rotation in advanced driver-assistance systems and self-driving vehicles.

The TSB582 comes with internal short-circuit and over-temperature protection, has rail-to-rail outputs, and offers a gain-bandwidth product (GBW) of up to 3.1MHz. Both the industrial- and automotive-grade versions are qualified over a temperature range of -40°C to 125°C, are EMI hardened, and provide ESD robustness up to 4kV HBM.

There are two package options, each with low thermal resistance: an SO8 with exposed thermal pad and a 3mm x 3mm DFN8 with exposed pad and wettable flanks. The wettable flanks aid inspection after soldering to meet automotive quality-assurance requirements. The DFN8 3mm x 3mm package is available now in industrial grade. The equivalent in automotive grade and the SO8 package in both grades will be released within Q3 2022.

The TSB582 is part of ST’s 10-year longevity program, and free samples are available now on the ST eStore. The unit price is $1.57 for industrial-grade and $1.80 for automotive-qualified versions, in either package style.

For more information, please visit www.st.com/opamps.


