The AMD Ryzen 3 3300X and 3100 CPU Review: A Budget Gaming Bonanza (anandtech.com)
133 points by jjuhl on May 7, 2020 | hide | past | favorite | 124 comments


So I was recently looking at these as options for a very low-budget video editing PC for someone, and yes, AMD destroys Intel in raw CPU performance at the same price. However, in those low-budget applications Intel has one upper hand: an integrated GPU. With AMD you either have to go with the super crappy 3400G, which is a really poor CPU (but also very cheap), or with literally anything else plus a dedicated GPU (which increases the cost significantly above Intel's offering). I was surprised to find that it's actually cheaper to go with, say, an i5 9400 with its integrated GPU than with a Ryzen 5 3600 plus the cheapest dedicated GPU. Yes, the dedicated GPU will be better than Intel's offering (marginally so), but if you only care about CPU performance, don't have a lot of money, and still need something to drive your monitor, then Intel has that integrated GPU to offer that AMD lacks.


I think this is why you mostly see these cheap chips called out as being ideal for gaming rigs, where you're highly likely to be adding a dedicated GPU anyway. Even a low-end card like a $120 RX 570 is going to be 600+% faster than the built-in Intel GPU: https://gpu.userbenchmark.com/Compare/Intel-UHD-Graphics-630...

Even for video editing the math might change a bit since modern video editing apps can leverage GPU for rendering.

That said, I agree with the folks who say that desktop chips built on Zen 2 with an integrated GPU cannot come fast enough.


$120 is a sensitive amount of money for someone who wants to use a machine only for development. I wonder why AMD doesn't manufacture ultra-low-end potato cards that would cost around $35, just for office use. That would boost sales big time.


You could just buy older-gen AMD cards? They still work. Like Apple with their hardware, AMD sees no reason to produce new "value" SKUs, when you can just buy their 1–3-gen-old flagship cards for "value" prices.


Even old designs on new semiconductor processes would be an improvement - a 3 generation old flagship card will be a bit cheaper to buy but will still have significant power consumption (ergo noise and heat) relative to the current generation.


They still work, until they don't because the vendor stops supporting them.


I've yet to see a graphics card that stopped being supported within a reasonable timeframe. Unless you count 3dfx or other really old stuff, at which point it gets silly. Or awesome; I guess it's kinda retro by now.


OpenCL is no longer supported on Ivy Bridge, for example, even though there are no technical obstacles to providing such support.


Businesses would NEVER buy used cards. They are inefficient in terms of energy consumption, you never know if they were abused for mining, there's no warranty, etc.


Parent didn't say "used" at all.


PCI Express cards based on NVIDIA GeForce GT 710 are available new for <40 EUR. Presumably there's a large clearance and used market with much lower prices. None of that is an option?


But that was kind of my point: a Ryzen 5 3600 I can currently find for £180, while an i5 9400 is £198. If I now have to get a £30-40 dedicated GPU for the former, the Ryzen build ends up being more expensive (not to mention that a super-basic GPU like the GT 710 does actually struggle to drive a 4K screen, while the Intel UHD 630 does not, but that's a minor point).
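To make the arithmetic explicit (prices as quoted above; the £35 card price is just an assumed point in the £30-40 range):

```python
# Total-cost comparison for the two builds quoted above (GBP, illustrative only).
ryzen_3600 = 180
cheap_gpu = 35        # assumed GT 710-class card, somewhere in the £30-40 range
i5_9400 = 198         # iGPU included, so no card needed

amd_total = ryzen_3600 + cheap_gpu
print(amd_total, "vs", i5_9400)  # 215 vs 198: Intel comes out cheaper here
```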


Not everyone is willing to buy old, underpowered yet power-hungry cards. You'd waste your time searching, the card wouldn't support 4K or VP9, and there's no warranty. It gets especially expensive if you're purchasing machines for a software development shop: you want a similar configuration on each machine to save on IT support.


You're not wrong, the GT 710 only does 4k at <=30 Hz, which makes it a terrible choice for general purpose computing at this point. It also doesn't do VP9 acceleration. The GT 1030 does these things, but it's also twice as expensive.


What is "general purpose" computing at 30+ Hz?

710 plays 1080p YouTube and non-graphics intensive games fine.


I looked at this exact thing yesterday and those are currently the best option. They're also passively cooled, which is nice. Having to use NVIDIA is annoying because of the story around Linux support. I mostly worry about these being new old stock and having poor reliability because of that, but maybe it's fine.


New old stock is probably not a big deal for GPUs ... there's not a lot that's going to go bad on one that's just been stored for a couple of years. From arcade-collecting experience, the big things that go bad in storage are batteries, capacitors, and maybe dust accumulation. But a GPU doesn't have batteries, and capacitors usually take 10+ years to degrade in the absence of heat from usage or bad chemicals (and heat makes that worse, so you'd already know if there were another bad-cap scourge).

You can also avoid all that by getting a new design GT 710 https://www.anandtech.com/show/15713/asus-launches-an-old-gp...


Except for a couple of very new cards, most GT 710 cards don't have a DisplayPort output, only HDMI. I also wonder if they support HDCP 2.2.


For modern video editors (aka "not Adobe") the CPU barely matters at all. I suspect "content creation needs strong CPU" is held up largely by Adobe, while more modern systems ditched CPUs for heavy lifting a long time ago or never used CPU for these purposes in the first place.


What video editors are you talking about specifically?


DaVinci Resolve would be one such editor


It still benefits from a faster CPU, at least when paired with a top-end GPU: https://www.pugetsystems.com/recommended/Recommended-Systems...


Yep! CPU is definitely still important.


As you suggested, I think there is a market for desktop parts using the Ryzen Mobile 4000 chips.

This used to be a thing where the motherboard maker would sell the board with the mobile chip soldered on. Not sure if you could still do that.

And it's not like Zen 2 desktop APUs aren't coming, just a tad later. I think the current roadmap from AMD is pretty solid, although I do wish their sales and marketing execution were much better.


There never was a 3600G or 3800G, so I imagine the next APU will also stop at a 4400G, which, given it's priced/placed so low in their stack, may not actually be better as a CPU than some Zen+ parts still hanging around like the 2600X or 1600AF, and almost certainly won't be as good as higher-tier Zen 2 parts like the 3700X or 4900H.

I also think there would be a market for a 4900H in a desktop form factor but I don't think AMD will make one.


Aliexpress has some good mini-PCs based on mobile CPUs. It's been a while since I went shopping for one, so I'm not sure where they are, right now. Since these are typically <=15W parts, they can be passively cooled. Most are fanless, from what I've seen.

I run OpenBSD on an i5-5200U I picked up a few years ago (probably 2015, or early 2016). It serves as my router and runs a bunch of services for me. It lives in my basement next to the fiber line coming in from the road.

I wouldn't be surprised if an AMD -H part comes to market soon in this category.


This is really annoying when building a NAS too. Great value CPUs with ECC support make the configurations great and then you need to spend 50-100€ to just get boot graphics even though the machine will be headless 99% of the time. Buying something second hand or from aliexpress defeats the purpose of trying to build something reliable. The market for low-end discrete GPUs has all but vanished thanks to Intel integrated GPUs. AMD should add a very basic GPU to every CPU part just good enough for boot graphics and consoles. All the motherboards already include the outputs themselves so only a little bit of silicon in the CPU is missing.


> and then you need to spend 50-100€ to just get boot graphics even though the machine will be headless 99% of the time.

Buy a serial-over-USB patch cable; connect it from the NAS to your laptop; open a serial terminal emulator on the laptop. No boot graphics needed.


Still need something for boot graphics for the BIOS, unless non-server motherboards are coming with serial redirection default enabled and nobody mentioned it.


Set it up with a temporary card and then yank it out after it's set up. You don't tend to access the BIOS that much.


I wonder if it's possible to set up the UEFI firmware by setting some EFI variables from the operating system for example by running efivar [1].

[1]: https://github.com/rhboot/efivar


I mean, why not use a server motherboard (i.e. one supporting AMT; not necessarily one that requires RDIMMs) if you're building a NAS? NASes are servers in terms of their workload. Take advantage of the features designed for server customers.


Availability and cost naturally. A big part of the appeal here is the low price point for a great spec bare-bones home server using these consumer parts.


Mostly cost. The board itself, but also server boards are often hard to find for desktop sockets, so that drives cost for the CPU as well. But, if I get a server board, maybe I'm going to get one with IPMI, and those usually have serial consoles and some limited VGA for KVM over IP too.

I could afford a $800 server project, but I'd rather spend $300.


Had no idea that would even boot. I prefer being able to just plug in a monitor but that's an interesting option.


What about the Core i3-8100, with ECC and an iGPU? The real problem with building a NAS out of these low-end parts is finding a motherboard where ECC actually works. For Ryzen it's roulette: you just have to order one and see if ECC works at all. For Intel you can count on a few vendors, but the motherboards cost twice as much as the CPUs and/or have inconvenient shapes.


The point was to use one of these great AMD parts... Motherboard support doesn't seem to be a problem, Asrock lists all their range as supported and has everything you might need. The actual annoying part about motherboards is that some older stuff is technically compatible but you need an old CPU to upgrade the BIOS before you can install one of the newer ones.


A comment above refers to the GT 710 as a cheap option new.


That's the 50 in 50-100€. At this level that's around one step up in CPU within the AMD range that you're wasting on an old GPU just to get the thing to boot.


There's an important market segment though that these will suit well. People on budgets doing upgrades of cheap systems. They'll likely continue to use their old video card, which will probably be better than Intel's integrated GPU, but will swap out the motherboard and CPU (and maybe RAM).

When I was much younger, and on much more of a budget, these "computers of theseus" were my main rigs for a long time. The meta-game was to try to see how few parts you had to swap out from piecemeal upgrade to piecemeal upgrade.


> When I was much younger, and on much more of a budget, these "computers of theseus" were my main rigs for a long time. The meta-game was to try to see how few parts you had to swap out from piecemeal upgrade to piecemeal upgrade.

I still do it like this, despite having plenty of money to spend on hardware. if it ain't broke...


Zen 2 desktop CPUs with integrated GPUs are the next thing AMD will be releasing.


A few thoughts for your friend. This is not a "you're wrong" post, but a "here are some options to consider, with different tradeoffs."

1. The AMD R3-3300X is neck and neck with the Intel i5-9600K in most productivity benchmarks in the GN test suite.[0] The 9600K is a step up from the 9400 you're looking at, so you could likely do an R3 build. With the clock differences, it's likely that the R3 would pull ahead of the 9400. The processor is $120 MSRP, but not available just yet. Also, you'll likely want to pair it with a B550 motherboard. Nevertheless, with a cheapest-available graphics card, this should be price-competitive with an i5-9400. The future upside is huge, as you could get a beast R9-3950X in a couple years when the price has come down a lot.

2. The AMD R5-1600AF is actually a Zen+ part, and MSRPs at $85. The theme of the entire 3300X review video I linked is to get one of these if you can. Availability is limited and resellers are marking up significantly. AMD is committed to providing more, though.

3. Depending on where your price/performance threshold is, dropping to an R5-2600 brings your price difference to $26 ($224 vs $250 for CPU+mobo and CPU+mobo+graphics, respectively). This should edge out the 1600AF mentioned above, and regularly beat the i5-9400.[1][2]

Of the options above, I'd lean toward the R3-3300X. This gives future expansion to a 16-core monster that is a chart-topper for productivity workloads. You cannot make that jump with Intel, because their competitor is the 10980XE on a different chipset than the i5. Additionally, Ryzen 3000 and B550 support PCIe gen 4, for double the bandwidth. Again, this is a future expansion option that you don't get with Intel.

My preference is definitely subjective, and there are good reasons to go for the Intel i5, first among them that you can buy the system today, instead of in a month.

[0] GN 3300x review: https://youtu.be/NM2fFpzPKPg?t=1063

[1] Cheapest i5: https://pcpartpicker.com/list/B3pFtp

[2] Cheapest R5 2600: https://pcpartpicker.com/list/HttVyk


An update. Phoronix's benchmarking includes the i5-9400F, which is the no-iGPU version of the Intel chip. F chips are typically expected to edge ahead of their iGPU siblings. The 3300X beats the 9400F in nearly every single benchmark (I think two or three total exceptions).

Additionally, the 9400F seems to struggle to even beat the 8400. Their composite scores are insignificantly different. Perhaps this is why a 9400 is cheaper than an 8400....

https://www.phoronix.com/scan.php?page=article&item=amd-ryze...


Yes, the article expresses some surprise that the new Ryzen 3 chips are based on binned Ryzen 5/7 3000's, rather than on the new and exciting Ryzen Mobile 4000's which do have integrated graphics and are benchmarking very well. Hopefully some Ryzen Mobile based desktop chips are coming soon too.


Hopefully some inventive friends on AliExpress will retrofit these mobile CPUs for the AM4 socket, like they did with mobile i7s.


If you wait a little longer, AMD is expected to release these as desktop APUs; I don't think I've seen a committed release window yet, though. My guess is sometime Q3, but I would love for it to be sooner.


I totally agree. A lot of these CPU review articles are aimed at gamers and hence ignore integrated graphics entirely, since it's assumed you'll go out and buy a nice discrete graphics card.


I would presume that, if you have a low-budget application with high scale (e.g. you're a company designing a NAS, or other "Turing-complete but still embedded" system), then you'd ignore both Intel and AMD and look at ARM-based SoC solutions, no? You can get significantly better integrated graphics, for cheaper, on e.g. Nvidia's Tegra platform. Thus why Nintendo—ROI penny-pinchers to a fault—chose it.


Definitely. If you're going for something integrated, Intel/AMD (and maybe x86 in general) are out of the question.

For a PC, though, using ARM can be tough if you're trying to use Windows. I'm not sure how many Windows programs have ARM binaries.


You still have the Ryzen 5 3400G with integrated graphics


This is the problem with AMD's intentionally misleading marketing. The Ryzen 5 3400G is an APU with a Zen+ CPU. The CPU core that's actually good, that everybody wants, is the Zen2. The Zen+ is pretty slow. It has performance comparable to $99 Intel CPUs from last year.


Can you name this Intel CPU which is as fast, has an iGPU and costs $99?

Zen2 is faster than Zen+, but Zen+ isn't that slow.


No, I think the nearest thing is probably the $140 Core i3-9100. But look at Anandtech's comparison of Zen and Zen2: at the same clock speed and core count, Zen2 stomps all over Zen; the older core is much slower. https://www.anandtech.com/bench/product/2333?vs=2590


> No, I think the nearest thing is probably the $140 Core i3-9100.

Which has approximately equivalent CPU performance and a slower iGPU than a 3200G which costs $40 less.

> https://www.anandtech.com/bench/product/2333?vs=2590

The 2400G is Zen1. The APUs use different numbering. The 2500X is Zen+:

https://www.anandtech.com/bench/product/2259?vs=2590

Zen2 is something like 15% faster give or take. Zen+ was never really that slow.


If by "15% ±" you meant 30-50% then we agree. I don't see any individual results (other than GPU-limited games) where the Zen2 is not at least 15% faster, and on most of them it's at least 20% faster, on specific relevant workloads like 7-Zip compression the newer part is almost twice as fast, while drawing 2/3rds the power.


It's ~6% faster for GIMP, <4% faster on AES encoding, ~10% faster on Cinebench ST, ~11% faster on Blender, ~3% faster on Luxmark etc. It does really well on 7zip in particular. If you spend all day unzipping things you should be very interested in Zen2.

Also, the 3100 and 2500X are both 65W.


You're right, I looked at the wrong link.


This benchmark compares Zen against Zen2, not Zen+ against Zen2


For this reason, I bought used cheapest GPU with 3 outputs. More than happy.


I like Anandtech. They have some really great deep dive articles on CPUs and system architecture. Which is why this type of 'test' is just weird. None of the CPUs they compare seem to be in the same price bracket. Many aren't even from the same generation. 7700K? 8086K? 4900HS? Yes, sometimes you have to work with what you have, but c'mon - at least one somewhat comparable CPU would've been nice for reference (i3 8100 or i5 9400F).


They are comparing it against previous gen gaming CPUs to help people trying to decide if it is worth upgrading.


Fair enough. But I would imagine the vast majority of those folks will be looking at $300-$700 CPUs in that case - going from a top-of-the-line CPU to a low/mid-grade seems like an unusual choice.


Additionally, AMD hyped beating i7s of yore.


GN has a very thorough review of each of the 3100 and the 3300X. Note, the 3100 review is also positioned against a 7700K. The comparison charts are well-populated, though still have some gaps, as GN just updated their processor testing methodology, so some benchmarks are incomplete.

3100: https://www.youtube.com/watch?v=V4nQpXVTh0g

3300X: https://www.youtube.com/watch?v=NM2fFpzPKPg


It's usually faster than the i5 9400F.


I would expect so, but that's what benchmarks are for!


Intel can't match this performance to price ratio


Maybe, but we can't tell from this article, which should have compared against the $75 Core i3-9100F or the $160 Core i5-9400F, instead of filling its graphs with unavailable, ancient Intel parts like the 4th-gen Core i7 that launched in 2014, and then listing the 2014 MSRP as if that were a relevant point of comparison.


There's a hint as to why they didn't include the i3-9100F in the conclusion section: "Even with this, Intel's ability to provide enough stock of these low-end parts, depending on your location, is questionable as previously mentioned. Case in point: the company never even made it as far as sampling any of the 9th Generation i3 parts for review."


You can literally order this part for next-day delivery on Newegg. They are also, literally, on the shelf at Central Computer.

https://www.centralcomputer.com/intel-core-i3-9100f-3-6ghz-4...


So GN looked at it: the lack of an iGPU basically killed the i3-9100F in Premiere Pro, as did the lower base clock, even compared to the 3100. The i5 parts seem to do better, particularly the 9600K (I don't see the 9400F there, which again would be hampered by the lack of an iGPU).

https://youtu.be/V4nQpXVTh0g?t=1195


There are currently a bunch of these articles (and videos) out there.

Others do compare against those CPUs, and it's still much better to buy the Ryzen than the Intel ones (at least if you go headless or will put in a dedicated GPU anyway).


Linus from LinusTechTips did include that specific processor, and even the $150 Core i5 (I forget which one specifically).

I recommend you see that video for the comparison you're asking for.


Is there a bottom line? I can't find the video and the format of 10 seconds of information wrapped in a 10-minute video doesn't suit me.


R3s outcompete i3s, pretty much across the board. They are bested by i5+ for gaming, but not by much. They are competitive with, or beat i5s in productivity.

GN has timestamps in the video description if you want to jump to graphs: https://www.youtube.com/watch?v=NM2fFpzPKPg&t=0s

The top comment on HN for the Anandtech article (at the time of my posting) is about a comparison for a budget machine. They choose an i5-9400, because this includes an iGPU. If you are going for extreme budget, the low-tier Intel chips may win for this reason.


I wanted to update my system with the Ryzen 5 1600 AF, given its value at $85 - but it's no longer in stock anywhere at that price, so the 3300X is looking appealing.


AMD has committed to restocking to retailers who should be selling at least close to MSRP, but this part is underpriced. We'll likely see it continue to be bought in bulk and resold at a higher price as long as this is true. You're getting what is very nearly a half-price 2600.


3100/3300X will also be compatible with 500 series boards, while it's likely that AMD will drop support for Zen+ (like 1600AF) on B550.


Do you find upgrading low end CPUs to be less expensive overall than going straight to mid range CPUs and skipping the upgrade cycle?


I admit that I'm not as concerned with long-term performance for this system. I've managed to stretch a Phenom II 1055T for more than 10 years now - my interest in PC gaming fell off for a while there - so I've already skipped more than a few upgrade cycles!


I have a 5 year old Intel i7 CPU. Not to diverge too far from the topic, but would I have any reason to upgrade my CPU for gaming?

I assume swapping my GTX 760 with a latest generation card would give me better gaming performance.


This should be easy for you to test. Play the game you want, at the settings you want. If the CPU is running at 100%, get a faster CPU; if it's not, get a faster GPU.
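A rough way to do that check on Linux, sketched in Python (it samples the aggregate counters in /proc/stat; on Windows you'd just watch Task Manager instead):

```python
import time

def cpu_times():
    """Return (idle, total) jiffies from the aggregate 'cpu' line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:9]  # user, nice, system, idle, iowait, irq, softirq, steal
    vals = list(map(int, fields))
    idle = vals[3] + vals[4]  # idle + iowait both count as "not busy"
    return idle, sum(vals)

def cpu_percent(interval=1.0):
    """Overall CPU utilization (%) over the sampling interval."""
    idle1, total1 = cpu_times()
    time.sleep(interval)
    idle2, total2 = cpu_times()
    busy = (total2 - total1) - (idle2 - idle1)
    return 100.0 * busy / (total2 - total1)

if __name__ == "__main__":
    # Run this while the game is active; sustained readings near 100% suggest a CPU bottleneck.
    print(f"CPU busy: {cpu_percent():.1f}%")
```

One caveat: this is the average across all cores, so a single-threaded game can be CPU-bound on one core while this reads ~25% on a quad-core; per-core numbers (e.g. `htop`) are more telling for older engines.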


I'm in a similar-ish boat. I've got an i5-6600K with a solid overclock on it. Single-core performance is still competitive with modern chips, but it falls very flat in multicore.

Most games will get a huge bump just with a new GPU - You'd do well with an RTX 2070, or the next generation AMD GPUs (speculating - but there's a lot of momentum on that front).

That being said - LinusTechTips did a head to head benchmark comparison between the i7-7700k (the latest my Z170 chipset can handle) and this 3300X. Bottom line - the 3300X is an insanely good value for performance.


Depends which one - I have a 4790K + a GTX1080Ti and I see absolutely no reason to upgrade. There are no games that I cannot play at near max settings in 1440p. The only game that has somewhat stretched the CPU is Satisfactory, but then I think all machines struggle with it once you have a large enough factory.


As always, there's no need to upgrade if you're happy with your current setup. You're definitely leaving some performance on the table with that combo though. I just upgraded 4670K -> 9700K, also with a 1080 Ti, and I've seen some nice FPS boosts in most of the games I play. As an added benefit, it's an absolute beast for C++ compilation: builds take less than half the time compared to my actual work computer with a 7700.


You'd be surprised how much better CPUs have gotten over the last five years. Even though the clocks might look roughly similar, overall performance has gone up by quite a bit. You probably only have 4 cores on a 2015 i7, too, and 8+ cores are standard now for high-end CPUs.

Whether it'll make an immediate difference to you is game-dependent, but if you play a lot of AAA titles I'd definitely look to upgrade soon.


For most games, I'd expect a 4700K + RTX 2060 to outdo e.g. an R5 3600 + GTX 760, as similarly priced upgrade options (once you factor in a new motherboard and DDR4 RAM) in the tier of your current GPU.

That said, there are cpu heavy games out there, like MMOs with high on screen player counts, simulations like Cities Skylines or flight sims, so your mileage may vary.


Absolutely no doubt a GPU upgrade would be the way to go in your situation. A 2015 i7 can still keep up with a monster GPU such as the RX5600XT or RX5700XT.


Unless you're streaming or doing something that is causing you to be CPU bound? Unlikely. At a minimum I'd wait for the Zen 3 parts at this point.


In your case, it's quite likely that the GPU is the better place to spend the money for an upgrade.


How about single core performance? Many games are pegged to one or two cores only. I've always found the AMD approach of throwing more cores at the problem not to be optimal.


>> Many games are pegged to one or two cores only

That hasn't been the case for a looong time. But to answer your question, they are much faster than anything from Intel at comparable pricing:

https://www.eurogamer.net/articles/digitalfoundry-2020-amd-r...

"The results here are immediately interesting. Despite costing less than every other CPU represented, the 3100 manages to tie the Core i5 9600K in single-core performance and outperform the Ryzen 2600 and 2700X by around 12 per cent. The 3300X is even more impressive, with a single-core score that exceeds the Core i7 9700K and only falls behind the Ryzen 9 3900X and Core i9 9900K."


I have a Ryzen 3900X, and also an i5 for software development. With unoptimized code the Ryzen is pretty close to the i5. But when you start to really optimize it becomes obvious the i5 has more execution resources and is faster single threaded.


Most benchmarks show the 3900X with single thread performance competitive with Intel's offerings, including for software whose developers spend plenty of time optimizing it. Can you think of a reason they haven't realized this performance advantage in practice?


Zen is great if you have moderate utilization code around the ~2-3 instructions per cycle mark. This is common for most compiled C and C++ where it hasn’t been specifically tuned for ILP. In these scenarios even if Intel has an extra ALU it won’t be used anyways.

It took 3 days of tuning to get to a point where Intel started pulling away from my Zen2 chip. No matter what I did the Zen2 would not get faster because it was already at max utilization.

Lemire had a blog post [1] talking about this on here a few months ago as well. Most people were surprised, because frankly Zen2 is very competitive with Intel for normal compiled code.

[1]: https://lemire.me/blog/2019/12/05/instructions-per-cycle-amd...


Are you measuring actual performance though or just IPC?

We know Intel has that problem with AVX-512. You can get a lot of throughput per cycle with those instructions but the cost is they cause the processor to run hot and have to downclock. It's possible (and really expected) that the same thing happens to some extent at unusually high IPC. Getting 15% higher IPC doesn't really buy you anything if the processor has to lower its clock speed by 15% to execute that type of code.
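The tradeoff described above is just multiplication: sustained throughput is IPC times clock, so an IPC gain is cancelled (or worse) by a proportional downclock. A toy illustration:

```python
def relative_throughput(ipc_gain, clock_penalty):
    """Throughput relative to baseline; arguments are fractions (0.15 == 15%)."""
    return (1.0 + ipc_gain) * (1.0 - clock_penalty)

# 15% more IPC paired with a 15% downclock is a net loss (~2% slower overall):
print(relative_throughput(0.15, 0.15))  # ~0.98
```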


Actual wallclock time. The code is 100% scalar - LZF is very byte oriented and I haven't found a good way to vectorize it. IPC counts were used to help guide tuning but not a determination of success.

I don't really understand why people think this is controversial. Somehow an Intel CPU on a generation-behind process is still competitive with a 7nm CPU with more cache. Intel must have one hell of an architecture to make that possible. AMD did a good job for the most common workloads, but they have work to do with Zen3 if they want to maintain that lead when Intel finally figures out its manufacturing.


Have you looked at what 7zip is doing? That should be a similar type of code and it runs very well on Zen2.


7zip doesn't use LZF. LZF is mandated by the file format I'm working with.


When you optimize for maximum performance, leaving 11 cores idle is very sad.


If your algorithm is already parallel then you go back to making each core do more work per cycle.


Is this including SPECTRE, etc. mitigations?


Yes default settings. The code we’re talking about is a hand rolled LZF decompressor originally written on the Ryzen 9 (so not intentionally Intel optimized).

Spectre doesn't make much difference since it's not making syscalls.

If you can somehow keep the CPU fed it becomes obvious there is more oomph behind the Intel device. That said how often are you running hand rolled assembly routines?

As an additional datapoint liblzf is faster on the Ryzen. But its limited by pipeline stalls so doesn’t get to take advantage of the extra resources.


Why would anyone enable speculative execution mitigations on a computer used to build software?


Building software with lots of opensource libraries is effectively giving shell access to the authors of those libraries. They can stick whatever they like in those build scripts, and there's so many thousands of them I bet you don't check all of them by hand.

Given that, I'd prefer they had shell access as a low privilege user than be able to read my ssh keys from RAM...

Obviously if you compile software as your regular linux user account like most users, you're already a sitting duck, so might as well throw in a few more vulnerabilities.


Because you’re downloading lots of untrustable source from npm and friends when you are fetching your dependencies.


because it's the default on most operating systems


I haven't looked at their newer offerings, but in years past that was definitely a problem that afflicted AMD -- having a fraction of the floating point execution units per core, creating AVX by adding an emulation layer over the existing SSE registers, etc. For a lot of use cases AMD could still be faster according to a wall clock (they weren't worse across the board -- those failings were used to purchase advantages in other chip features), but their peak flops were peanuts in comparison to the Intel offerings.


That part is fixed. They even eliminated most of the downclocking issues with the AVX units on Intel. AMD has been pretty tactical with Zen2.

The issues I’m talking about are execution resources for scalar code. Things like how many ALUs and which execution ports are able to execute which instructions.


It's worth remembering that the PS4 and XB1 consoles both have eight relatively slow x64 cores. Thus any multiplatform title targeting either of these platforms has to be optimised for many cores and cannot rely on single-core performance.


Definitely. But in my case I don't play the latest releases. Most games I play, such as World of Warcraft or CS:GO, use old engines and don't take advantage of many cores.


But that was not your original premise. "Many" games are not single core. Rather, a few old classics that refuse to die are.


https://store.steampowered.com/stats/Steam-Game-and-Player-S...

Probably most games on that list don't use all cores. So yeah not "many" but "most". Thanks for the correction.


Any older single core game will run just fine on probably any (non-embedded) x64 CPU on sale right now. Your original premise simply isn’t a problem.


You might be surprised, "older single-core games" like Crysis and Skyrim really do not run all that well even with "modern" architectures behind them. And in particular they often do quite poorly with AMD due to latency issues.

Per-thread performance really hasn't improved all that much since the days of, say, a 4.7 GHz 2600K. Core counts have gone up drastically, but that doesn't help your single-threaded game from 2007.


Please provide sources for core usage for each of the games referenced in your list.


World of Warcraft definitely takes advantage of multiple cores these days: https://wow.gamepedia.com/CVar_processAffinityMask


I know that, but wow doesn't evenly distribute processing across all cores. Run it and you will see it uses just a couple cores for rendering.


Does WOW need single core performance beyond what AMD offers? Or is this entire thread one big troll and goalpost shift?


CSGO takes advantage of multiple cores. It starts tapering off at around 4 cores, but it still showed gains all the way up to 6 cores: https://www.youtube.com/watch?v=fj9cuHuTNVU

It very definitely takes advantage of far more than 1 or 2 cores, though, with 4 cores having ~2x the FPS compared to 2 cores.


It's really hard to glean from the article, which is I guess just designed to get 500+ ad impressions by spreading the information over 33 pages. This comparison of the same data is a little more informative:

https://www.anandtech.com/bench/product/2589?vs=2250

That's a $199 Intel 6-core part against a $120 AMD 4-core part. Anandtech hasn't bothered to benchmark a 4-core 9th-generation Intel CPU yet, despite the fact that they hit retail a year ago.


'Hasn't bothered' = not been sampled.


AnandTech only reviews parts that manufacturers send them for free? Isn't that the opposite of an ethical policy (e.g. the one followed by Consumer Reports?) As it says on "About AnandTech", "one way or another we'll get our hands on a product for review".



