#gpu

frenchhope@diaspora-fr.org

Raspberry Pi 5 teams up with Radeon GPU to run Doom Eternal with RTX on at 4K — the combo also tackles Crysis Remastered, Red Dead Redemption 2, and Forza Horizon 4 | Tom's Hardware ⬅️ Main URL used for the Diaspora* preview, and the one more likely to stay available.

Archive the page yourself if no archive exists yet, thereby avoiding trackers, then use µBlock Origin to remove any banners that may remain on the saved page.

💾 archive.org
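If you want to script that first step, here is a minimal sketch using the Wayback Machine's public save endpoint (assumes the requests library is installed; the endpoint is rate-limited, so treat this as illustrative):

    import requests

    def archive_page(url: str) -> str:
        """Ask the Wayback Machine to capture `url`; return the snapshot URL."""
        resp = requests.get("https://web.archive.org/save/" + url, timeout=120)
        resp.raise_for_status()
        return resp.url  # final URL after redirects points at the snapshot

    print(archive_page("https://example.com/some-article"))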

#raspberrypi #gpu #performance #jeuvidéo #3d

‼️ Disclaimer v1.0

anonymiss@despora.de

#Chrome Lets #Google Websites #Access Your #System #CPU & #GPU Usage

Source: https://origin.80.lv/articles/chrome-lets-google-websites-access-your-system-cpu-gpu-usage/

Recently, software developer #LucaCasonato found out that Chrome gives all Google websites full access to system/tab CPU, GPU, and memory usage, as well as detailed processor information and a logging backchannel. "This is interesting because it is a clear violation of the idea that #browser vendors should not give preference to their websites over anyone else's," he said on X/Twitter. "Depending on how you interpret the DMA, this additional exposure of information only to Google properties may be considered a #violation of the #DMA."

#news #fail #internet #economy #cybersecurity #problem

danie10@squeet.me

How do Video Game Graphics Work? This is likely why GPUs cost so much!

Image of a steam locomotive with the front end shown in full resolution; towards the rear end the resolution degrades in stages through three primary colours, then black and white, ending in a blueprint-style rendering.
The link below is to a video that explains quite well, with illustrations, how realistic and responsive 3D scenery and objects are generated. Yes, the bulk of the work today is done by third-party game engines like Unreal Engine and others, so from a developer's point of view they no longer have to get their hands dirty with the nitty-gritty mathematics.

But if you consider the number and complexity of calculations made for each pixel (as the video explains), then multiply that by the number of pixels on the screen and the number of screen refreshes every second, it becomes quite mind-blowing. It is no wonder that GPUs are so powerful and cost more than the rest of the PC combined.
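To make that multiplication concrete, here is a rough back-of-the-envelope sketch in Python (the 500 shader operations per pixel is an illustrative assumption, not a measured figure):

    # Back-of-the-envelope shading workload at 4K / 60 Hz.
    width, height = 3840, 2160   # 4K resolution
    refresh_hz = 60              # screen refreshes per second
    ops_per_pixel = 500          # assumed shader operations per pixel (illustrative)

    pixels = width * height                              # ~8.3 million pixels
    ops_per_second = pixels * refresh_hz * ops_per_pixel

    print(f"{pixels:,} pixels -> {ops_per_second / 1e12:.2f} trillion ops per second")
    # ~0.25 trillion ops/s for shading alone, before geometry, lighting, post-processing.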

Watch youtu.be/C8YtdC8mxTU?si=6iNQX3…
#Blog, #gaming, #GPU, #technology

anonymiss@despora.de

The #AI #supply chain:

"It makes visible the #connection between an #engineer training an #algorithm in the #UK, a miner extracting #tantalum in #Kazakhistan, an engineer in #Mexico working in a #data centre, a #worker in #Taiwan #manufacturing GPUs and a worker in #Kenya dismantling e-waste"

source: https://twitter.com/ana_valdi/status/1747200486392950785

#economy #technology #supplyChain #resources #globalization #internet #software #hardware #gpu #labour #map #news

california@diaspora.permutationsofchaos.com

Tinker board with #AI support and a #GPIO connector compatible with #RaspberryPi

URI: https://hub.libre.computer/t/2023-09-25-libre-computer-aml-a311d-cc-alta-ai-sbc-announcement/2905

#AML-A311D-CC features:

  • 4 M Cores
  • 2 E Cores
  • 4 M #GPU Cores
  • 5+ TOPS #NPU Cores (AI)
  • Up to 4GB LPDDR4X
  • #USB Type C Power + 2.0 Data Dual Role
  • #HDMI 2.1
  • 3.5mm Jack with CVBS and Analog Stereo Audio Output
  • Gigabit #Ethernet with WOL
  • 4 USB Type-A 3.0 Hub
  • PoE Connector
  • 40-Pin GPIO Connector
  • IR Receiver Sensor
  • DSI 4-Lane 22-Pin Connector up to 1080P
  • CSI 4-Lane 22-Pin Connector up to 8MP with 2 Cameras
  • 16MB #OpenSource UEFI BIOS
  • #eMMC 5.x Slim Connector
  • #MicroSD Card Slot with UHS SDR104 Support
  • Price $45 (2GB RAM version)

If you want to know how to use the AI module (NPU):
* https://www.cnx-software.com/2020/01/13/getting-started-with-amlogic-npu-on-khadas-vim3-vim3l/
* https://www.cnx-software.com/2023/11/09/libre-computer-aml-a311d-cc-alta-sbc-features-amlogic-a311d-ai-processor/
* https://forum.khadas.com/c/khadas-vim3/30


#hardware #iot #technology #ARM

danie10@squeet.me

3 reasons to ditch Nvidia for AMD in 2023

Black coloured Graphics Processing Unit with two fans on the front and the name RADEON on it. Behind it is a black box with the words AMD Radeon RX 7900 XT on it.
Nvidia's GPUs are considered second to none in the enthusiast PC space, and there are plenty of convincing reasons to go with Team Green for your next build. XDA Developers recently highlighted a few of those reasons to consider an Nvidia GPU over an AMD one, going over things like DLSS and the raw performance of RTX GPUs. AMD graphics cards, however, have also come a long way, and they aren't trailing too far behind in 2023.

In fact, there are some good reasons to consider them over Nvidia’s options, and you certainly can’t count them out of the race. If you are in the market to buy a new graphics card and are split between AMD and Nvidia, then here are a few reasons why you should consider an AMD GPU for your build.

OK, admittedly two of the reasons are related, so this could be more like two good reasons. I did opt for an AMD Ryzen 7 for the last CPU I bought (my first non-Intel in decades), and I've been very happy with that choice. I only realised a month later, after I'd bought an Nvidia GPU, that I never really took a serious look at the AMD GPUs.

I'm certainly going to do so next time I buy a GPU (I don't buy one with every PC upgrade I do). In my case, too, I'm using Linux, so I really don't get to use some of those extra Nvidia Windows-only features. I have way less to lose, actually.

A GPU comparison for Linux users would be quite interesting to see, comparing head-to-head on open-source as well as OEM proprietary drivers.

See https://www.xda-developers.com/reasons-ditch-nvidia-for-amd/
#Blog, #GPU, #technology

faab64@diasp.org

A shocking new discovery: China has successfully manufactured a 7nm System on Chip in a local factory, based on domestic CPU and GPU architectures.

The technology blogs and "experts" are all in shock, because they had no idea that China was able to produce an SoC as advanced as Huawei's HiSilicon Kirin 9000S, which is powering the company's flagship Mate 60 Pro phone.

Ever since the US started its campaign to sanction and boycott Huawei a few years ago, the company has turned away from American-based products and any technology that could be controlled by American/Western sanctions.

The impressive (though not so fast by today's standards) SoC is nothing short of a wake-up call for the western companies dominating the #CPU, #GPU and #SoC market, from #Intel to #AMD, #NVidia and of course #Qualcomm.

The technology war that the US started is not going to end well when China manages to create even more efficient and powerful CPUs to power not only expensive mobile phones but also mid-range and low-cost ones, as well as tablets and ultra-portable devices.

The treatment of #Russia after the war in #Ukraine, and how the whole western world tried to isolate and corner Russia, was a wake-up call for China on many fronts, and they seem to be making giant leaps rather than baby steps.

Huawei's HiSilicon Kirin 9000S looks to be a quite complex SoC, packing four high-performance cores (one at up to 2.62 GHz and three at up to 2,150 MHz) and four energy-efficient cores (up to 1,530 MHz) based on the company's own TaiShan microarchitecture (which still appears to be based on the Armv8a ISA), as well as the Maleoon 910 graphics processing unit operating at up to 750 MHz, based on screenshots by Huawei Central. The CPU and GPU cores run at relatively low clocks compared to the frequencies of Arm's cores featured in previous generations of HiSilicon's SoCs.

But the low frequencies can be explained by the fact that SMIC makes the new SoC on its unannounced 2nd-generation 7nm fabrication process, which could be a breakthrough for #SMIC, Huawei, and China's high-tech industry. Although TechInsights calls this fabrication technology SMIC's 2nd-generation production node, the state-controlled Global Times claims that China's foundry champion uses its 5nm-class manufacturing technology to make the SoC. But these two names seem to describe the same thing, which was once known as SMIC's N+2.

#Technology #SoC #Huawei #7nmTechnology #ChipManufacturing #China #US #Politics #Economy

https://www.tomshardware.com/news/huaweis-new-mystery-7nm-chip-from-chinese-fab-defies-us-sanctions

analysisparalysis@pod.beautifulmathuncensored.de

Petals dropped. Now you can run large models on a single GPU, it says.
Reminds me of the "more expensive setup is better" argument that makes people buy new graphics cards and CPUs.

But what we need to define is application, not numbers.

Just like "With iPod, Apple has invented a whole new category of digital music player that lets you put your entire music collection in your pocket and listen to it wherever you go" and not "we invented an X GB mp3 player with Y KB cache".

Application focus, not specs focus.

My goal is to have a model that gives accurate answers to questions about a document and that has the “decency” to admit that the answer cannot be found in those documents.
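As a sketch of the prompt pattern behind that goal (generate() is a hypothetical stand-in for whatever local LLM call you use, not a real API):

    def answer_from_documents(question: str, context: str) -> str:
        """Ask a local model to answer only from `context`, refusing otherwise."""
        prompt = (
            "Answer the question using ONLY the context below.\n"
            "If the answer is not in the context, reply exactly: "
            "'Not found in the documents.'\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        )
        return generate(prompt)  # hypothetical local-LLM call (llama.cpp, Ollama, ...)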

There are several options out there with local LLMs, yet none of them can be configured.

If a reply is bad, all you can do is choose another model. People hope that bigger is better, so they try to stuff huge and even bigger (Petals) models into their computer, but what do those models really do?

They contain VAST corpora on all kinds of topics. I assume you won't need 99% of those billions of parameters in your entire lifetime.

THIS is where you need to start: limit models by application. If you only search in English, don’t get a model that also contains Urdu.

If you only talk about computer science, don’t get that model that contains psychology.

Now the problem is that there are no models available that are specific - and good at what they do there.

To summarize, we need models that are specializable: define the requirements, create a new model based on them, and get something that is much faster and smaller and gives you the right results.
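A minimal sketch of the first step in that direction, filtering a corpus down to one language and one domain before training (langdetect is a real package; the keyword check is a crude placeholder for a proper domain classifier):

    from langdetect import detect  # pip install langdetect

    CS_KEYWORDS = {"algorithm", "compiler", "gpu", "kernel", "database"}

    def looks_like_computer_science(text: str) -> bool:
        """Crude keyword placeholder for a proper domain classifier."""
        return bool(set(text.lower().split()) & CS_KEYWORDS)

    def specialize_corpus(documents: list[str]) -> list[str]:
        """Keep only English computer-science texts, per the idea above."""
        return [
            doc for doc in documents
            if detect(doc) == "en" and looks_like_computer_science(doc)
        ]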

EDIT: Another issue: what if the answers don't satisfy you, would a human give better answers? Is it the model, or what is it? You only have a few parameters like temperature and that's it. Besides, a huge issue is model entitlement, the dreaded "This is not morally okay, and I will not"… Shut up, you are living on MY computer, I tell you what you do and don't do.
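For the record, here is what that one temperature knob actually does, as a standard softmax-with-temperature sketch (not any particular model's code):

    import math

    def sample_probs(logits: list[float], temperature: float) -> list[float]:
        """Softmax with temperature: low T sharpens the choice, high T flattens it."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numeric stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        return [e / total for e in exps]

    print(sample_probs([2.0, 1.0, 0.1], temperature=0.5))  # peaked distribution
    print(sample_probs([2.0, 1.0, 0.1], temperature=2.0))  # flatter distribution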

Yet I feel like I'm tilting at windmills; nobody debates this anywhere, and people just report on the great, now-10-times-more-parameters model as if it saved humanity. #AI #MachineLearning #Models #ApplicationFocus #SpecsFocus #ComputerScience #English #Urdu #SpecializableModels #Accuracy #DocumentAnalysis #GPU #Hardware #Technology