Feed aggregator
20-A models join buck converter lineup
TDK-Lambda expands its i7A series of non-isolated step-down DC/DC converters with seven 500-W models that provide 20 A of output current. The converters occupy a standard 1/16th brick footprint and use a standardized pin configuration.
With an input voltage range of 28 V to 60 V, the new converters offer a trimmable output of 3.3 V to 32 V and achieve up to 96% efficiency. This high efficiency reduces internal losses and allows operation in ambient temperatures ranging from -40°C to +125°C. Additionally, an adjustable current limit option helps manage stress on the converter and load during overcurrent conditions, enabling precise adjustment based on system needs.
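As a quick sanity check (my own back-of-the-envelope arithmetic, not a TDK-Lambda figure), the quoted 96% peak efficiency implies roughly 21 W dissipated inside a 500-W converter at full load:

```python
# Internal dissipation implied by the stated specs: 500 W output at 96% efficiency.
# The two input numbers come from the article; the calculation is a generic estimate.
p_out = 500.0   # W, maximum output power of the 20-A models
eta = 0.96      # peak efficiency, "up to 96%"

p_in = p_out / eta       # input power drawn at full load
p_loss = p_in - p_out    # power dissipated as heat inside the brick
print(round(p_loss, 1))  # ≈ 20.8 W
```

That dissipation is what the baseplate and heatsink mechanical options are there to remove.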
The 20-A i7A models are available in three 34×36.8-mm mechanical configurations: low-profile open frame, integrated baseplate for conduction cooling, and integrated heatsink for convection or forced air cooling.
Samples and price quotes for the i7A series step-down converters can be requested on the product page linked below.
Find more datasheets on products like this one at Datasheets.com, searchable by category, part #, description, manufacturer, and more.
The post 20-A models join buck converter lineup appeared first on EDN.
Discrete GPU elevates in-vehicle AI
A discrete graphics processing unit (dGPU), the Arc A760A from Intel delivers high-fidelity graphics and AI-driven cockpit capabilities in high-end vehicles. According to Intel, the dGPU supports smooth and immersive AAA gaming and responsive, context-aware AI assistants.
The Arc A760A marks Intel’s entry into automotive discrete GPUs, complementing its existing portfolio of AI-enhanced, software-defined vehicle (SDV) SoCs with integrated GPUs. Together, these devices form an open and flexible platform that scales across vehicle trim levels. Automakers can start with Intel SDV SoCs and later add the dGPU to handle larger compute workloads and expand AI capabilities.
Enhanced personalization is enabled by AI algorithms that learn driver preferences, adapting cockpit settings without the need for voice commands. Automotive OEMs can transform the vehicle into a mobile office and entertainment hub with immersive 4K displays, multiscreen setups, and advanced 3D interfaces.
Intel expects the Arc A760A dGPU to be commercially deployed in vehicles as soon as 2025. Read the fact sheet here.
The post Discrete GPU elevates in-vehicle AI appeared first on EDN.
Raspberry Pi SBC touts RISC-V cores
The Raspberry Pi Pico 2 single-board computer is powered by the RP2350 MCU, featuring two Arm cores or optional RISC-V cores. This $5 computer board also boasts higher clock speeds, twice the memory, enhanced security, and upgraded interfacing compared to its predecessor, the Pico 1.
Designed by Raspberry Pi, the RP2350 MCU employs a dual-core, dual-architecture design with a pair of Arm Cortex-M33 cores and a pair of Hazard3 RISC-V cores. Users can select between the architectures via software or by programming the on-chip OTP memory. Both the Arm and RISC-V cores run at clock speeds of up to 150 MHz.
Pico 2 offers 520 kbytes of on-chip SRAM and 4 Mbytes of onboard flash. A second-generation programmable I/O (PIO) subsystem provides 12 PIO state machines for flexible, CPU-free interfacing.
The security architecture of the Pico 2 is built around Arm TrustZone for Cortex-M and includes signed boot support, 8 kbytes of on-chip antifuse OTP memory, SHA-256 acceleration, and a true random number generator. Global bus filtering is based on Arm or RISC-V security/privilege levels.
Preorders of the Pico 2 are being accepted now through Raspberry Pi’s approved resellers. Even though Pico 2 does not offer Wi-Fi or Bluetooth connectivity, Raspberry Pi expects to ship a wireless-enabled version before the end of the year.
The post Raspberry Pi SBC touts RISC-V cores appeared first on EDN.
Microchip Technology Adds ECC20x and SHA10x Families of Secure Authentication ICs to TrustFLEX Platform
Pre-Configured CryptoAuthentication ICs help reduce development time and minimize design costs
Secure key provisioning is vital to protect sensitive keys against third-party tampering and malicious attacks. For consumer, industrial, data center and medical applications, secure key storage is essential, but the process of developing and documenting secure key provisioning can be complex and costly. To lower the barrier to entry into secure key provisioning and enable more rapid prototyping, Microchip Technology has added the ECC204, SHA104 and SHA105 CryptoAuthentication ICs to its TrustFLEX portfolio of devices, services and tools.
ECC20x and SHA10x ICs are hardware-based, secure storage devices that are designed to keep secret keys hidden from unauthorized attacks. As part of the TrustFLEX platform, ECC204, SHA104 and SHA105 ICs are preconfigured with defined use cases, customizable cryptographic keys and code examples to streamline the development process.
“Adding the ECC20x and SHA10x pre-configured devices to our TrustFLEX platform will facilitate leveraging Microchip’s secure provisioning services for a broader set of applications,” said Nuri Dagdeviren, corporate vice president of Microchip’s secure computing group. “With this platform expansion, Microchip is continuing to strengthen its portfolio, making security authentication ICs more accessible and more specifically optimized for high-volume, cost-sensitive applications.”
ECC20x and SHA10x devices meet Common Criteria Joint Interpretation Library (JIL) High rated secure key storage requirements and have been certified by the NIST Entropy Source Validation (ESV) and Cryptographic Algorithm Validation Program (CAVP) in compliance with the Federal Information Processing Standard (FIPS). The secure IC families are designed to implement trusted authentication to maintain the confidentiality, integrity and authenticity of data and communications in a wide range of systems and applications.
Microchip’s CryptoAuthentication ICs are small, low-power devices designed to be compatible with any microprocessor (MPU) or microcontroller (MCU). They provide flexible solutions for securing industrial systems, medical devices, battery-powered equipment and disposable applications. Additionally, the ECC204 is a Wireless Power Consortium (WPC) approved Qi authentication Secure Storage Subsystem (SSS). Visit the Microchip website to learn more about the Trust Platform and its portfolio of security solutions.
Development Tools
ECC20x and SHA10x ICs are supported by Microchip’s Trust Platform Design Suite, which provides code examples and learning materials and enables the secure transfer of credentials to more easily leverage Microchip’s secure key provisioning services. The devices are also supported by the MPLAB® X Integrated Development Environment (IDE), product-specific evaluation boards and CryptoAuthLib library support.
The post Microchip Technology Adds ECC20x and SHA10x Families of Secure Authentication ICs to TrustFLEX Platform appeared first on ELE Times.
New Vishay Intertechnology Silicon PIN Photodiode Improves Sensitivity in Biomedical Applications
Key Features Include Larger Sensitive Area of 6.0 mm², Increased Reverse Light Current, and Small Form Factor of 4.8 mm by 2.5 mm by 0.5 mm
Vishay Intertechnology, Inc. has released a new silicon PIN photodiode that brings a higher level of sensitivity in the visible/near-infrared wavelength range to biomedical applications such as heart rate and blood oxygen monitoring. The new VEMD8082 features increased reverse light current, decreased diode capacitance, and faster rise and fall times compared to previous-generation solutions. Additionally, its small form factor of 4.8 mm by 2.5 mm by 0.5 mm makes it suitable for integration into low-profile devices such as smart watches.
The high sensitivity provided by the VEMD8082 is particularly important in biomedical applications such as photoplethysmography (PPG), where the photodiode is used to detect changes in blood volume and flow by measuring the amount of light absorbed or reflected by blood vessels. In such applications, precise measurements are crucial for diagnosing and monitoring conditions such as cardiovascular disease.
Specifications contributing to the new device’s higher sensitivity compared to previous-generation devices include a radiant-sensitive area of 6.0 mm² and an 18% to 20% increase in reverse light current, depending on wavelength. Diode capacitance decreased from 50 pF to 46 pF, and rise times improved from 110 ns to 40 ns, allowing for higher sampling rates.
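To put the rise-time improvement in perspective, the common 0.35/t_r rule of thumb for first-order systems (my own estimate, not a Vishay specification) translates the quoted rise times into approximate small-signal bandwidths:

```python
# Approximate bandwidth implied by a 10-90% rise time, using the standard
# first-order rule of thumb BW ≈ 0.35 / t_r. Illustrative only; not a
# datasheet figure.
def bandwidth_hz(rise_time_s: float) -> float:
    return 0.35 / rise_time_s

bw_old = bandwidth_hz(110e-9)   # previous generation: 110 ns rise time
bw_new = bandwidth_hz(40e-9)    # VEMD8082: 40 ns rise time
print(f"{bw_old/1e6:.2f} MHz -> {bw_new/1e6:.2f} MHz")  # 3.18 MHz -> 8.75 MHz
```

Nearly tripling the usable bandwidth is what enables the higher PPG sampling rates the article mentions.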
Samples and production quantities of the VEMD8082 are available now.
The post New Vishay Intertechnology Silicon PIN Photodiode Improves Sensitivity in Biomedical Applications appeared first on ELE Times.
RISC-V migration to mainstream one startup at a time
As noted by Kleiner Perkins partner Mamoon Hamid, the migration to RISC-V is in full flight. Kleiner Perkins, along with Fidelity and Mayfield, is a backer of RISC-V startup Akeana, which has officially launched after exiting stealth mode.
Akeana marked this occasion by unveiling RISC-V IPs for microcontrollers, Android clusters, artificial intelligence (AI) vector cores and subsystems, and compute clusters for networking and data centers. Its 100 Series configurable processors come with 32-bit RISC-V cores and support applications spanning from MCUs to edge gateways.
Akeana’s 1000 Series processor line includes 64-bit RISC-V cores and an MMU to support rich operating systems as well as in-order or out-of-order pipelines, multi-threading, vector extension, hypervisor extension and other extensions that are part of recent and upcoming RISC-V profiles.
Next, its 5000 Series features 64-bit RISC-V cores optimized for demanding applications in data centers and cloud infrastructure. These processors are compatible with the Akeana 1000 Series but offer much higher single-thread performance.
Three RISC-V processors come alongside an SoC IP suite. Source: Akeana
Akeana feels especially confident in data center processors, as its team includes the designers of Marvell’s ThunderX2 server chips. “Our team has a proven track record of designing world-class server chips, and we are now applying that expertise to the broader semiconductor market as we formally go to market,” said Rabin Sugumar, Akeana CEO.
Besides RISC-V processors, Akeana offers a collection of IP blocks needed to create processor system-on-chips (SoCs). That includes coherent cluster cache, I/O MMU, and interrupt controller IPs. The company also provides scalable mesh and coherence hub IP compatible with AMBA CHI to build large coherent compute subsystems for data centers and other use cases.
Akeana, another RISC-V startup challenging the semiconductor industry status quo, has officially launched three years after its founding, having raised over $100 million from A-list investors like Kleiner Perkins, Mayfield, and Fidelity.
Related Content
- Navigating the RISC-V Revolution in Europe
- Amidst export restrictions, RISC-V continues to advance
- Accelerating RISC-V development with network-on-chip IP
- RISC-V venture in Germany to accelerate design ecosystem
- RISC-V as you like it: the ‘first’ fully configurable 64-bit processor core
The post RISC-V migration to mainstream one startup at a time appeared first on EDN.
Google’s fall…err…summer launch: One-upping Apple with a sizeable product tranche
Within last month’s hands-on (or is that on-wrist?) coverage of Google’s first-generation Pixel Watch, I alluded to:
…rumors of the third-generation offering already beginning to circulate…
Later in that writeup, I further elaborated:
…the upcoming Pixel Watch 3, per leaked images, will not only be thicker but also come in a larger-face variant.
I (fed by the rumor mill, in my defense) was mostly right, as it turns out. The “upcoming” Pixel Watch 3 was released today (as I write these words on Tuesday, August 13). It does come in both the legacy 41 mm and a new, larger 45 mm size. And, in a twist I hadn’t foreseen, the bezel real estate has decreased by 16%, freeing up even more usable space at both screen sizes (the displays are also claimed to be twice as bright as before). But the watches aren’t thicker than before; they’ve got the same 12.3 mm depth as their second-gen precursor. And anyway, I’m getting ahead of myself: today’s live event (including live demos, full of jabs at competitor Apple’s seemingly scripted, pre-recorded preferences of recent years):
was preceded by several notable preparatory press release-only announcements last week.
4th-generation Nest learning thermostat
I’ve got a Lennox “smart” system, so don’t have personal experience with Nest (now Google Nest) gear, but I know a lot of folks who swear by it, so for them the latest-generation addition will likely be exciting. Aside from various cosmetic and other aesthetic tweaks, it’s AI-centric, which won’t surprise anyone who saw the Google I/O keynote (or my coverage of it):
With the Nest Learning Thermostat we introduced the concept of intelligence to help you save energy and money. It keeps you comfortable while you’re home and switches to an energy-efficient temperature when you’re away. And the Nest Learning Thermostat (4th gen) is our smartest, most advanced thermostat yet.
It uses AI to automatically make micro-adjustments based on your patterns to keep you comfortable while saving both energy and money. Now, AI can more quickly and accurately create your personalized, energy-saving temperature schedules. With Smart Schedule, the thermostat learns which temperatures you choose most often or changes in behavior based on motion detected in your home — like coming home earlier — and automatically adjusts your temperature schedule to match. These energy-saving suggestions can be implemented automatically, or you can accept or reject them in the Google Home app so you’re always in control.
The thermostat also analyzes how the weather outdoors will affect the temperature inside. For example, if it’s a sunny winter day and your home gets warmer on its own, it will pause heating. Or, on a humid day, the indoor temperature may feel warmer than intended, so the thermostat will adjust accordingly.
TV Streamer
This one was admittedly something of a surprise, albeit less so in retrospect. Google is end-of-lifing its entire Chromecast product line, replacing it with the high-end TV Streamer, which not only handles traditional audio and video reception-and-output tasks but also, for example, does double duty as a “smart home” hub for Google Home and Matter devices. The reason it wasn’t a complete surprise was that, as I’d mentioned before, the existing Chromecast with Google TV hardware was getting a bit long in the tooth, understandable given that the original 4K variant dated from September 2020, with the lower-priced FHD version only two years newer.
With its Chromecast line, Google has always strived to deliver not only an easy means of streaming Internet-sourced content to (and displaying it on) a traditional “dumb” TV but also a way to upgrade the conventional buggy, sloth-like “smart” TV experience. As “smart” TVs have beefed up their hardware and associated software over the past four years, however, the gap between them and the Chromecast with Google TV narrowed and, in some cases, collapsed and even flipped. That said, I still wonder why the company decided to make a clean break from the longstanding Chromecast brand equity investment versus, say, calling it the “Chromecast 2”.
The other factor, I’m guessing, has at least something to do with comments I made in my recent teardown of a Walmart Android TV-based onn. UHD streaming device:
Walmart? Why?… I suspect it has at least something to do with the Android TV-then-Google TV commodity software foundation on which Google’s own Chromecast with Google TV series along with the TiVo box I tore down for March 2024 publication (for example) are also based, which also allows for generic hardware. Combine that with a widespread distribution network:
Today, Walmart operates more than 10,500 stores and clubs in 19 countries and eCommerce websites.
And a compelling (translation: impulse purchase candidate) price point ($30 at intro, vs $20 more for the comparable-resolution 4K variant of Google’s own Chromecast with Google TV). And you’ve got, I suspect Walmart executives were thinking, a winner on your hands.
Competing against a foundation-software partner who’s focused on volume at the expense of per-unit profit (even willing to sell “loss leaders” in some cases, to get customers in stores and on the website in the hopes that they’ll also buy other, more lucrative items while they’re there) is a tough business for Google to be in, I suspect. Therefore, the pivot to the high end, letting its partners handle the volume market while being content with the high-profit segment. This is a “pivot” that you’ll also see evidence of in products the company announced this week. To wit…
The Pixel 9 smartphone series
Now’s as good a time as any to discuss the “elephant in the room”. Doesn’t Apple generally (but not always) release new iPhones (and other things) every September? And doesn’t Google generally counter with its own smartphone announcements (not counting “a” variants) roughly one month later? Indeed. But this time, Mountain View-headquartered Google apparently decided to get the jump on its Cupertino-based Silicon Valley competitor (who I anticipate will once again unveil new iPhones next month; as always, stay tuned for my coverage!).
Therefore the “One-upping Apple” phrase in this post’s title. And the already-mentioned repeated snark from Google regarding live-vs-pre-recorded events (and demos at such). Along with plenty of other examples. That said, Google wasn’t above mimicking its competitor, either. Check out the first-time (versus historically curved) flat edges in the above video. Where have you seen them (plenty of times) before? Hmmm? Meanwhile, the Pixels’ back panel camera bar (which I’ll cover more in an upcoming post) currently remains a Google-only characteristic.
Another Apple-reminiscent form factor evolutionary adaptation also bears mentioning. Through the Pixel 4 family generation, Google sold both standard and large screen “XL” smartphone variants. The “XL” option disappeared with the Pixel 5. In its stead, starting with the Pixel 6, a “Pro” version arrived…a larger screen size than the standard, as in the “XL” past, but also accompanied by a more elaborate multi-camera arrangement, among other enhancements.
And now with the Pixel 9 generation, there are two “Pro” versions, both standard (6.3” diagonal, the same size as the conventional Pixel 9, which has grown a bit from the 6.2” Pixel 8 predecessor) and resurrected large-screen “XL” (6.8” diagonal). Remember my earlier comments about media streamers: how Google was seemingly doing a “pivot to the high end, letting its partners handle the volume market while being content with the high-profit segment”? Sound familiar? This is also reminiscent of how Apple dropped its small-screen “mini” variant after the iPhone 13 generation.
Even putting aside standard-vs-“Pro” and standard-vs-large screen product line proliferation prioritization by Google, the overall product line pricing has also increased. The Pixel 7 phones that I’m currently using, for example, started at $599, with the year-later (and year-ago) Pixel 8 starting at $699; the newly unveiled Pixel 9 successor begins at $799. That said, in fairness, you now get 50% more RAM (12 vs 8 GBytes). Further to that point, especially given that the associated software suite is increasingly AI-enhanced (yes, Google Assistant is being replaced by Gemini, including the voice-based and Alexa- and Siri-reminiscent Gemini Live), Google isn’t making the same mistake it initially did with the Pixel 8 line.
At intro, only the 12 GByte RAM-inclusive “Pro” version of the Pixel 8 was claimed capable of supporting Google’s various Gemini deep learning models; the company later rolled out “Nano” Gemini variants that could shoehorn in the Pixel 8’s 8 GBytes. This time, both the Pixel 9 (again, 12 GBytes) and Pixel 9 Pro/Pro XL (16 GBytes) are good to go. And I suspect Apple’s going to be similarly memory-inclusive from an AI (branded Apple Intelligence, in this case) standpoint with its upcoming iPhone 16 product line, given that its current-generation AI support is comparably restrictive, to only the highest-end iPhone 15 Pro and Pro Max.
Accompanying the new-generation phones is, unsurprisingly, a new-generation SoC powering them: the Tensor G4. As usual, beyond Google’s nebulous claim that “It’s our most efficient chip yet,” we’ll need to wait for Geekbench leaks, followed by actual hands-on testing results, to assess performance, power consumption and other metrics, both in an absolute sense and relative to precursor generations and competitors’ alternatives. They all (like Apple for several years now) come with two years of gratis satellite SOS service, which is nice. And they all also, after three generations’ worth of oft-frustrating conventional under-display fingerprint sensor usage (oh, how I miss the back panel fingerprint sensors of the Pixel 5 and precursors!), switch to a hopefully more reliable ultrasonic sensor approach (already successfully in use in Samsung Galaxy devices, which is an encouraging sign).
That said, the displays themselves differ between standard and “Pro” models: the Pixel 9 has a 6.3-inch OLED with 2,424×1,080-pixel resolution (422 pixels per inch, i.e., ppi, density) and a 60-120 Hz variable refresh rate, while its same-size Pixel 9 Pro sibling totes a 2,856×1,280-pixel resolution (495 ppi density), and its low-temperature polycrystalline oxide (LTPO) OLED affords an even broader 1-120 Hz variable refresh rate range to optimize battery life. The Pixel 9 Pro XL’s display is also LTPO OLED in nature, this time with a 2,992×1,344-pixel resolution (486 ppi density). And where the phones also differ, among other things (some already mentioned) and speaking of AI enhancements, is in their front and rear camera allotments and specifications. With the Pixel 9, you get two rear cameras—50 Mpixel main and 48 Mpixel ultrawide—along with a 10.5 Mpixel front-facing camera. The Pixel 9 Pro and Pro XL add a third rear camera, a 48 Mpixel telephoto with 5x optical zoom, as well as bumping up the front camera to 42 Mpixel resolution. And for examples of some of the new and enhanced AI-enabled computational photography capabilities, check out this coverage, along with a first-look video from Becca, at The Verge:
Video Boost’s cloud-processed smoother transitions between lenses, dealing with an issue whose root cause I also discuss in the aforementioned upcoming blog post, is very cool.
In closing (at least for this section), a few words on the Pixel 9 Pro Fold, the successor to last year’s initial Pixel Fold. Read through The Verge’s recently published long-term usage report on the first-generation device (or Engadget’s version of the same theme, for that matter), and you’ll see that one key hoped-for improvement with its successor was increased display brightness. Well, Google delivered here, claiming that it’s “80% brighter than Pixel Fold”.
The Pixel 9 Pro Fold also inherits other Pixel 9 series improvements, such as to the SoC and camera subsystem. After some initial glitches, Google seems to have solved the Pixel Fold’s screen reliability issue, a key characteristic that I assume will carry forward to the second generation. And the company’s also currently offering a generous first-generation trade-in offer, although you’ll still be shelling out $1,000+ for the second-gen upgrade. That all said, as I read through the coverage of both generations of foldable devices, I can’t help but wonder what could have also been, had Google and Microsoft more effectively worked together to harness the Surface Duo’s hardware potential with equally robust (and long-supported) software. Sigh.
Smart watches
I already teased the new Pixel Watch 3 variants at the beginning of this piece, specifically with respect to their dimensions and display characteristics. Interestingly, they run the same SoC found in their second-generation predecessors, Qualcomm’s Snapdragon SW5100, and have the same RAM allocation, 2 GBytes. The new Loss of Pulse Detection capability is compelling, specifically its multi-sensor implementation that strives to prevent “false positives”:
Loss of Pulse Detection combines signals from Pixel Watch 3’s sensors, AI and signal-processing algorithms to detect loss of pulse events, with a thoughtful design — built from user research — to limit false alarms. The feature uses signals from the Pixel Watch 3’s existing Heart Rate sensor, which uses green light to check for a user’s pulse.
If the feature detects signs of pulselessness, infrared and red lights also turn on, looking for additional signs of a pulse, while the motion sensor starts to look for movement. An AI-based algorithm brings together the pulse and movement signals to confirm a loss of pulse event, and if so, triggers a check-in to see if you respond.
The check-in asks if you’re OK while also looking for motion. If you don’t respond and no motion is detected, it escalates to an audio alarm and countdown. If you don’t respond to the countdown, the LTE watch or phone your watch is connected to automatically places a call to emergency services, and shares an automated message that no pulse is detected along with your location.
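The escalation sequence described above can be caricatured as a simple decision chain. This is purely an illustrative sketch of the published description — the function name and boolean inputs are my own, and the real feature fuses raw optical, motion, and AI signals rather than pre-digested flags:

```python
# Toy model of the described escalation path: sensor confirmation -> user
# check-in -> audio alarm/countdown -> automatic emergency call.
# Not Google's code; an illustration of the sequence quoted above.
def escalate(pulse_confirmed_absent: bool, motion_detected: bool,
             responded_to_checkin: bool, responded_to_countdown: bool) -> str:
    if not pulse_confirmed_absent:
        return "monitor"              # IR/red LED recheck found a pulse after all
    if motion_detected or responded_to_checkin:
        return "dismiss"              # movement or a response suggests a false alarm
    if responded_to_countdown:
        return "dismiss"              # user cancelled during the alarm/countdown
    return "call emergency services"  # LTE watch (or paired phone) places the call

print(escalate(True, False, False, False))  # call emergency services
```

The multi-stage structure is what limits false alarms: the emergency call only fires after every earlier opportunity to dismiss has passed.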
And unlike the Pixel 9 smartphones, which will still be “stuck” on Android 14 at initial shipment (Android 15 is still in beta, obviously), the new watches will ship with Wear OS 5, whose claimed power consumption improvements I’m anxious to see in action on my first-gen Pixel Watch, too. Speaking of which, that “free two years of Google Fi-served wireless service for LTE watches” short-term promotion that I’d told you I snagged? It’s now broadly available.
I should note that Google launched a new smartwatch last week, too: the kid-tailored Fitbit Ace LTE. But what about broader-audience new Fitbit smartwatches? Apparently, the Pixel Watch family is the solitary path forward here; in the future, Google will refocus the Fitbit brand specifically on lower-end, activity-tracker-only devices.
Earbuds
Last (but not least), what’s up with Google and earbuds? At the beginning of last year, within a teardown of the first-generation Pixel Buds Pro, I also gave a short historical summary of Google’s to-date activity in this particular product space. Well, two years after the initial Pixel Buds Pro series rolled out, the second-generation successors are here. They check all the predictable improvement-claim boxes:
- “Twice as much noise cancellation”
- “24% lighter and 27% smaller”
- “increase[d]…battery life, despite the smaller and lighter design. When Active Noise Cancellation is enabled, you get up to 8 hours”
And, for the first time, they’re powered by Google-designed silicon, the Tensor A1 SoC (once again reminiscent of Apple’s in-house supply chain strategy). That said, I was happily surprised to see that the “wings” designed to help keep the earbuds in place when in use have returned, albeit thankfully in more subdued form than that in the standard Pixel Buds implementation:
Although I’m blessed to own several sets of earbuds from multiple suppliers, I inevitably find myself repeatedly grabbing my first-gen Pixel Buds Pros in all but critical-music-listening situations. However, even when all I’m doing is weekend vacuuming, they don’t stay firmly in place. The second-gen “wings” should help here. Hear? (abundant apologies for the bad pun).
Having just passed through 2,500 words, I’m going to wrap up at this point and pass the cyber-pen over to you all for your thoughts in the comments!
—Brian Dipert is the Editor-in-Chief of the Edge AI and Vision Alliance, and a Senior Analyst at BDTI and Editor-in-Chief of InsideDSP, the company’s online newsletter.
Related Content
- Playin’ with Google’s Pixel 7
- Pixel smartphone progressions: Latest offerings and personal transitions
- The 2024 Google I/O: It’s (pretty much) all about AI progress, if you didn’t already guess
- The Google Chromecast with Google TV: Realizing a resurrection opportunity
- Google’s Chromecast with Google TV: Car accessory similarity, and a post-teardown resurrection opportunity?
- The Pixel Watch: An Apple alternative with Google’s (and Fitbit’s) personal touch
- Walmart’s onn. UHD streaming device: Android TV at a compelling price
The post Google’s fall…err…summer launch: One-upping Apple with a sizeable product tranche appeared first on EDN.
SemiQ adds S7 package option to QSiC 1200V power module family
EEVblog 1635 - Mailbag: RF book, Probe Bonanza, Beelink Ryzen 9 EQR6 PC, CueCat
Workbench Wednesday! - Soldering Edition
Working for an EMS provider, I get to work with all sorts of nice and fancy soldering tools. We are an assembly house, so our focus is on soldering and not engineering. What do you think? Any questions on tools and equipment or processes, I'll do my best to answer in the comments.
My workbench mess for handling multiple projects at once
submitted by /u/MrSteakhouse
Workshop Wednesday - Home Lab/Office
submitted by /u/einsteinoid
Dpot pseudolog + log lookup table = actual logarithmic gain
The Microchip MCP41xxx digital potentiometer data sheet includes (on page 15, their Figure 4-4) an interesting application circuit: a Dpot-controlled amplifier with pseudologarithmic gain settings. However, as explained in the Microchip text, the gains implemented by this circuit start changing radically as the control setting of the pot approaches 0 or 256. As Microchip puts it: “As the wiper approaches either terminal, the step size in the gain calculation increases dramatically. This circuit is recommended for gains between 0.1 and 10 V/V.”
That’s good advice. Unfortunately, following it would effectively throw away some 48 of the 256 8-bit pot settings, amounting to a loss of nearly 20% of available resolution. The simple modification shown in Figure 1 gets rid of that limitation.
Figure 1 Two fixed resistors are added to bound the gain range to the recommended limits while keeping full 8-bit resolution.
Wow the engineering world with your unique design: Design Ideas Submission Guide
This results in the gain vs code red curve of Figure 2.
Figure 2 Somewhat improved pseudologarithmic gain curve from the simple modification shown in Figure 1.
However, despite this improvement, the key term remains pseudologarithmic. It still isn’t a real log function and, in fact, isn’t quantitatively even that close, deviating by almost a factor of two in places. Can we do better? Yes!
The simple (software) trick is to prepare a 257-byte logarithmic lookup table that translates the 0.1 to 10.0 gain range settings to the Dpot codes needed to logarithmically generate those gains.
Let’s call the table index variable J. Then for a 257-byte table of (abs) gains G from 0.1 to 10.0 inclusive,
J(G) = 128 log10(|G|) + 128
…examples…
J(0.1) = 0,
J(0.5) ≈ 89,
J(1.0) = 128,
J(10.0) = 256,
etc.
Inspection of the gain expression in Figure 1 reveals that the Dpot decimal code N required for (abs) gain G is:
N(G) = (284.4G – 28.4)/(G + 1)
…thus…
N(0.1) = (28.44 – 28.4)/(0.1 + 1) = 0.04/1.1 ≈ 0,
N(0.5) = (142.2 – 28.4)/(0.5 + 1) = 113.8/1.5 ≈ 76,
N(1.0) = (284.4 – 28.4)/(1 + 1) = 256/2 = 128,
N(10.0) = (2844 – 28.4)/(10 + 1) = 2815.6/11 ≈ 256,
etc.
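Composing the two mappings gives the whole lookup table in a few lines of Python (names `dpot_code` and `log_table` are mine for illustration):

```python
def dpot_code(gain):
    """Dpot decimal code N for absolute gain G, from the Figure 1
    gain expression: N = (284.4*G - 28.4) / (G + 1), rounded."""
    return round((284.4 * gain - 28.4) / (gain + 1))

# Build the full 257-entry table by composing the two mappings:
# index J (0..256) -> gain G = 10^(J/128 - 1) -> Dpot code N
log_table = [dpot_code(10 ** (j / 128 - 1)) for j in range(257)]
```

Indexing `log_table` with J then yields the Dpot code that sets the gain logarithmically.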
Figure 3 summarizes the resulting relationship between G, J, and N.
Figure 3 The Dpot settings [N(J)] versus log table indices [J(G)], summarizing the relationship between G, J, and N.
The table of log gains can be found in this Excel sheet. The net result, with as good log conformity as 8 bits will allow, is shown as Figure 4’s lovely green line.
Figure 4 The absolute gain [Gabs = 10^(J/128 − 1)] versus log table index (J).
Stephen Woodward’s relationship with EDN’s DI column goes back quite a long way. Over 100 submissions have been accepted since his first contribution back in 1974.
Related Content
- Keep Dpot pseudologarithmic gain control on a leash
- Synthesize precision Dpot resistances that aren’t in the catalog
- Reducing error of digital potentiometers
- Adjust op-amp gain from -30 dB to +60 dB with one linear pot
- Op-amp wipes out DPOT wiper resistance
The post Dpot pseudolog + log lookup table = actual logarithmic gain appeared first on EDN.
Data transformation- Meaning, Aim, Processes involved, Phases, Classification, and Significance
Meaning of data transformation
Data transformation is the process of converting data from one format or structure into another, for instance converting a raw dataset into a well-arranged, analysed, vetted, and user-friendly form.
Because the aim of data transformation is to present data in a user-friendly format, it often involves converting a dataset from one file format into another, for instance CSV (comma-separated values), Excel spreadsheet, or XML (extensible markup language).
It involves conversion of the format and/or structure of a dataset into a format or structure that meets the requirements of the target system.
Aim of data transformation
The aim of any data transformation process is to ensure that the available data is scientifically arranged, well analysed, vetted against reliable sources and internationally accepted standards, and presented in a user-friendly format.
This ensures that decision making based on the available data is rational, logical, scientific, and correct to the best of current knowledge. Hence, it aids analysis and the development of insights. Further analysis of the transformed data can also bring to the fore hitherto unexplored facts or dimensions of a topic.
Data transformation changes only the format of the data, not its content
An important feature of data transformation is that it only involves conversion of data from one format to another; it does not change the content of the data.
Who are the people involved in the process of data transformation?
Data engineers, data analysts, and data scientists typically collaborate to execute the process of data transformation.
Processes involved in data transformation
Data transformation is executed by accomplishing three processes. They are as follows:
First, data integration.
Second, data migration.
Third, data warehousing.
Phases of data transformation
Data transformation is accomplished over five phases. They are as follows:
First, data discovery.
Second, data mapping.
Third, code generation.
Fourth, code execution.
Fifth, data review.
Classification of the process of data transformation
The process of data transformation is classified into four types. They are as follows:
First, constructive data transformation. In this type, data is copied, replicated, or added.
Second, destructive data transformation. In this type, data pertaining to fields or records is deleted.
Third, structural data transformation. In this type, columns in the data are combined, moved, or renamed.
Fourth, aesthetic data transformation. In this type, certain values in the data are standardized.
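The four types can be illustrated with a small Python sketch on a list-of-dicts dataset (the data and field names here are invented for illustration):

```python
rows = [{"first": "Ada", "last": "Lovelace", "city": "london"}]

# Constructive: data is copied, replicated, or added
constructive = rows + [dict(rows[0])]

# Destructive: a field is deleted from every record
destructive = [{k: v for k, v in r.items() if k != "city"} for r in rows]

# Structural: columns are combined, moved, or renamed
structural = [{"name": r["first"] + " " + r["last"], "city": r["city"]} for r in rows]

# Aesthetic: certain values are standardized
aesthetic = [dict(r, city=r["city"].title()) for r in rows]
```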
Significance of data transformation
First, data transformation is a critical stage of both the ETL (Extract, Transform, Load) and ELT (extract, load, transform) processes.
The difference between the ETL and ELT approaches is that ETL uses fixed criteria to sort data from multiple sources before compiling it in a central place, whereas ELT aggregates the data as-is first and transforms it later, depending on the requirements of the case and the analytics.
Second, data transformation is an important aspect of big data analytics. Hence, it is of immense importance in today’s age of big data, when data is already huge in volume and growing at a gargantuan rate.
Common life examples of data transformation
Data transformation is undertaken in various applications in our lives. A few such examples are as follows:
First, converting file from CSV format to XML format.
Second, conversion of speech into text by means of speech-to-text software.
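The first example can be sketched with Python's standard library (assuming the CSV header names happen to be valid XML tag names):

```python
import csv
import io
import xml.etree.ElementTree as ET

def csv_to_xml(csv_text, root_tag="records", row_tag="record"):
    """Turn CSV text into an XML string: one element per row,
    one child element per column."""
    root = ET.Element(root_tag)
    for row in csv.DictReader(io.StringIO(csv_text)):
        record = ET.SubElement(root, row_tag)
        for field, value in row.items():
            ET.SubElement(record, field).text = value
    return ET.tostring(root, encoding="unicode")
```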
The post Data transformation- Meaning, Aim, Processes involved, Phases, Classification, and Significance appeared first on ELE Times.
Fraunhofer IAF uses MOCVD to fabricate aluminum yttrium nitride
Microchip and Acacia Collaborate to Enable Optimized Terabit-Scale Data Center Interconnect Systems
The companies enable an interoperable coherent optics ecosystem that can help streamline the development of data center interconnect and metro transport systems
The latest data center architectures and increased traffic are driving higher bandwidth requirements between data centers. To address this challenge, system developers must streamline the development of a new generation of 1.2 Tbps (1.2T) transport solutions across a wide range of client configurations. This requires that today’s terabit-scale Ethernet PHY devices and coherent optical modules interoperate with each other in Data Center Interconnect (DCI) and metro transport networks. Microchip Technology today announces that it has worked with Acacia to demonstrate the fourth generation of interoperability between Microchip’s META-DX2 Ethernet PHY family and Acacia’s Coherent Interconnect Module 8 (CIM 8).
The two companies’ interoperable devices enable low-power, bandwidth-optimized, scalable solutions for pluggable optics in DCI and transport networks. They deliver three key benefits as they jointly enable high-capacity, multi-rate muxponders for optical transport platforms:
- Optimized DCI bandwidth: The META-DX2 family, through its META-DX2+ PHY, uses its unique Lambda Splitting feature to split 400 GbE or 800 GbE clients across multiple wavelengths driven by the CIM 8 modules. This maximizes the capacity between data centers in rate configurations such as 3×800 GbE over 2×1.2 Tbps waves or 5×400 GbE over 2×1.0 Tbps waves.
- Reduced design risk: Microchip and Acacia have jointly verified successful SerDes interoperation at up to 112G per lane for Ethernet and OTN clients, which reduces design validation and system qualification requirements.
- Better support for full bandwidth, multi-rate operation: The META-DX2+ crosspoint and gearbox functions enable 100 GbE to 800 GbE client modules to connect with full bandwidth to CIM 8 modules.
“This interoperability extends a long-established partnership with Acacia to help accelerate and optimize the build-out of cloud computing and AI-ready optical networks while reducing development risk for our customers,” said Maher Fahmi, vice president for Microchip’s communications business unit. “Our META-DX2 is the first solution of its kind to integrate 1.6T of encryption, port aggregation and Lambda Splitting into the most compact 112G PAM4 device in the market.”
“With Acacia’s CIM 8 coherent modules verified to interoperate with Microchip’s META-DX2 devices, we see this as a robust solution that reduces system time-to-market,” said Markus Weber, senior director DSP product line management of Acacia. “The compact size and power efficiency of our CIM 8 coherent modules were designed to help network operators deploy and scale capacity of high-bandwidth DWDM connectivity between data centers and throughout transport networks.”
The post Microchip and Acacia Collaborate to Enable Optimized Terabit-Scale Data Center Interconnect Systems appeared first on ELE Times.
Can you spot the DRSSTC stuff?
Welcome to my where's-Waldo-themed workbench. (Also, it's Wednesday in New Zealand, so I'd say this counts.) List of stuff to find: • DRSSTC H-bridge (50 points) • Tesla coil secondary (10 points) • Multimeter (1 point, because it's easy to find) • Drill battery (5 points) • Variac (20 points) • Shunt resistor (you win instantly and gain the achievement: how tf?) Comment your score and try to beat my high score of 0 (I have no idea where anything is anymore, lol).
Infineon expands its Bluetooth portfolio with eight new parts, including the AIROC CYW89829 Bluetooth LE MCU for automotive applications
Infineon Technologies AG has announced the expansion of its Bluetooth portfolio by eight new products in the AIROC CYW20829 Bluetooth Low Energy 5.4 microcontroller (MCU) family, featuring Systems-on-Chip (SoCs) and modules optimized for industrial, consumer, and automotive use cases. The high integration of the CYW20829 product family allows designers to reduce bill-of-material (BOM) cost and device footprint in a wide variety of applications, including PC accessories, low-energy audio, wearables, solar micro inverters, asset trackers, health and lifestyle, home automation and others. Product designers also benefit from Infineon’s rich development infrastructure and commitment to robust security, with support for secure boot and execution environments and cryptography acceleration to safeguard sensitive data.
The latest automotive part in the product family, the AIROC CYW89829 Bluetooth Low-Energy MCU, is ideal for car access and wireless battery management systems (wBMS) applications, due to its robust RF performance, long range and latest Bluetooth 5.4 features including PAwR (Periodic Advertising with Responses). The dual ARM Cortex core design of the chip family features separate application and Bluetooth Low Energy subsystems that deliver full featured support for Bluetooth 5.4, low-power, 10 dBm output power without a PA, integrated flash, CAN FD, crypto accelerators, high security including Root of Trust (RoT), and is PSA level 1 ready.
“Infineon offers one of the industry’s broadest portfolios of IoT solutions. Our Bluetooth solutions offer robust connectivity and the latest features,” said Shantanu Bhalerao, Vice President of the Bluetooth Product Line, Infineon Technologies. “Our automotive AIROC CYW89829 Bluetooth LE MCU, and versatile AIROC Bluetooth CYW20829 LE MCU deliver ultra-low power and a high degree of integration for a better user experience across various applications in automotive, industrial, and consumer markets.”
Infineon has been working with customers to design with current products in the CYW20829 family and has received positive reviews:
“The Infineon CYW20829 is the leading Bluetooth part in the market, which has passed the latest Bluetooth 5.4 certification,” said Kevin Wang, CEO of ITON. “CYW20829 has very good RF performance, supports PAwR and LE Audio. These features bring more possibilities in consumer and industrial markets.”
“CYW20829, with perfect RF performance, flexible API, and good long-range features, provides a good solution for commercial lighting, industrial IoT, and more,” said Cai Yi, CEO of Pairlink.
“Earlier this year, the Bluetooth SIG released version 5.4 of the specification with new features: Periodic Advertising with Responses and Encrypted Advertising Data. These features implemented on Infineon’s CYW20829 chips allow Addverb to develop a secure monitoring and controlling system for a fleet of wireless robots in the industrial warehouse, satisfying safety requirements,” said Tapan Pattnayak, Chief Scientist at Addverb, a global leader in robotics.
The post Infineon expands its Bluetooth portfolio with eight new parts, including the AIROC CYW89829 Bluetooth LE MCU for automotive applications appeared first on ELE Times.
Finally I got my first oscilloscope 🤓
submitted by /u/RoyalDream59
How to reconnect battery?
I'm wondering how to reconnect this so maybe I can make a rechargeable mic?