Hacker News
AMD Ryzen 5 1600X vs Core i5 Review (anandtech.com)
226 points by nedsma on April 11, 2017 | 165 comments


Just a heads up, you can view the entire article as one page by clicking the "Print this Article" button: http://www.anandtech.com/print/11244/the-amd-ryzen-5-1600x-v...


the hero we need


Looks like AMD is the way to go for compilation workloads. Very impressive. Check out the Chromium compile benchmarks:

http://www.anandtech.com/show/11244/the-amd-ryzen-5-1600x-vs...


All these nice graphs, but only a few publications get that an indicator pointing towards "better" (more points, fewer milliseconds, more fps, fewer watts) greatly improves comprehension speed, especially when multiple graphs using different metrics are shown together.

    <= better
       better =>


These are fairly easy to comprehend, as they are sorted from best (at the top) to worst (at the bottom). Having an indicator would be nice, but since all the graphs are sorted, it's fairly obvious.


I'm not sure what you are complaining about but there is such an indicator on the graph.


In the linked example it first says "Points" (implying more is better), and then "Lower is better" for some timed benchmark. That is not an indicator but text one has to slow down and read; when skimming over a lot of graphs an arrow or triangle helps a lot, just like the already color-coded bars do.

I can't remember where I saw it first, but for me at least it makes quite a difference.


Better is on the top for all the graphs...


The one at the top is best, so if the top one has the shortest bar then shorter is better. If the top one has the longest bar then longer is better. Plus it says in the title of each graph which way is better.

But really, the best way to read the graphs is to look at the labels on the left and see the ranking. You can glance at the bars to get an idea of the magnitude of the difference, but these graphs are really designed to emphasize the ranking.


Absolutely. Compiling scales well across threads, especially with solid state drives.
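
To illustrate, independent translation units can be compiled concurrently - that is essentially what make -j exploits. A minimal Python sketch (the file names are made up, and gcc is assumed to be on the PATH):

    import os, subprocess
    from concurrent.futures import ThreadPoolExecutor

    sources = ["parser.c", "lexer.c", "codegen.c", "main.c"]  # hypothetical files

    def compile_one(src):
        # each translation unit compiles independently of the others
        subprocess.run(["gcc", "-c", src], check=True)

    # one compiler process per core, roughly what "make -j$(nproc)" does;
    # threads suffice here since the real work happens in child gcc processes
    with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
        list(pool.map(compile_one, sources))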

Even Piledriver had this advantage, at least considering the price over the last several years. My FX-8320E was ~$120 when I got it a couple years ago, and overclocked to 4.1GHz without any difficulty. Sure, the IPC is not great, but with 8 threads at that price, it's been well worth it. I won't be upgrading this system for a while, and by then I will almost definitely be going to Ryzen.


The good thing about buying a slightly higher-end CPU is that it will last you for quite a few years. I got my CPU close to its release (2009) and don't feel like I am missing out on anything.

Furthermore, it turns out that the Ryzen 5 matches the price that my current CPU is selling for at the moment. I have an Intel i7 860 (1st gen). I still don't think that I would benefit from upgrading at this point.

Of course, I use my system mostly just for writing code. Friends of mine who are into computer animation and design follow changes in the hardware world more closely than I do. (And gamers as well.)


This is true in a different way as well: the $135 off-lease Dell T1600 workstation I bought on eBay with an E3 1245 (v1 / Sandy Bridge) CPU still compares decently to a Kaby Lake i5, so the price/value is sky high. I didn't just get the CPU but also the motherboard, RAM, PSU, and chassis.

Truth is, IPC growth has almost completely stalled, only 20% since Sandy Bridge https://www.hardocp.com/article/2017/01/13/kaby_lake_7700k_v... and so these ancient machines still serve us very well. And the E3 1245 has HT, which the i5 doesn't...


Mid 2013 I built my current workstation with the E3-1245 v3 ($290) that had just come out, and maxed the RAM out at 32GB. It's been rock solid, a great performer, and I expect to be using it for quite a few years more. The prior system was a dual Pentium Pro, again maxed out at the time, that I kept kicking for 10 years.

Everyone has different values, but for me maximizing the time a system is viable and my work environment is stable is most important; I'll pass on the latest shiny until absolutely needed.


Now I can't edit but to clarify: this machine I bought less than a month ago.


I agree. Still running a Phenom II 940. Of course I've been upgrading the HDD all along (went through a Raptor, a VelociRaptor, an early 80GB SSD and finally a 256GB Samsung), and put in a new video card about four years ago (I don't play the latest games, anyway).

The Phenom cost me... $220. So the Ryzen 5 1600 has my name written all over it. But really, it would just make some things I do on my computer a little quicker. It would not have the profound effect that switching to an SSD did, in my opinion. I'll let the idea bake for a while.


I'm the same - Phenom II, upgraded to SSD and new graphics but that's it.

When you say this has your name written on it, are they socket compatible? The idea of changing motherboards/memory etc. isn't compelling - I'd just get a new system - but if you can drop these in then it's really tempting.


Nope, I mean, spending the same amount as I did 8 years ago, and getting 8 years out of this new CPU sounds appealing to me! But I've got a uATX case, so my motherboard selection is mildly limited. I don't mind swapping out a motherboard. I used to do it rather frequently, for myself and for others.

I believe the socket I have now is AM2+, which is not at all compatible with AM4. I'd also need new RAM. Overall, I'd expect to spend close to $400.


Nope, Phenom II could work in an AM3 or AM3+ socket (depending on whether your particular model supported DDR3 RAM) but AM4 doesn't physically fit. Plus the new CPUs don't have DDR3 controllers so they couldn't use your RAM anyway.


You mean Phenom II, right? I have a Phenom II X4 925 from 6 years ago. In the same boat. To think that I worked on Ryzen performance while it was being developed/brought up. I'm a bit emotional but still not pulling the trigger.


Yes, Phenom II X4 940. (I called it a Phenom II 940 in my previous post.)


I'm running a Phenom II 1055T (x6). It actually seems to be ok for pretty much everything, including gaming at 1440p (I've upgraded the GPU 3 times and am currently running a GTX 1070, 16GB of RAM, and an SSD). Not bad for a system that's almost 7 years old.

The only issue is that I tried using the Oculus, and it won't work. Apparently the CPU doesn't have some instructions required for VR. I tried using a hacked version of the Oculus setup program that bypasses the CPU check, but it still won't work.

Which is why I've been thinking about a Ryzen 5...


I had a similar chip since 2011 and only recently upgraded. For thread-heavy workloads even one of the fastest E3 Xeons is only like 40-50% faster (if no special instructions like AES can be used).

Safe to say it had superb price/performance (IIRC I paid like €150 for it).


I'm running a Phenom II X4 965 and I have yet to feel limited by CPU speed. Really snappy for everything I do. This is a 2009 CPU. To put that in perspective, that is like using a 486 DX4/100 in 2002, when the state of the art was Pentium 4's at 3 GHz. Pretty amazing, actually.


> To put that in perspective, that is like using a 486 DX4/100 in 2002, when the state of the art was Pentium 4's at 3 GHz. Pretty amazing, actually.

Stagnation in improvements is amazing? :\


Yeah I upgraded other parts of my machine as well - just did not feel the need to touch the CPU. And actually, I had 8GB of RAM when it started out and I did not change that either. 8GB still runs fine for me.

I should switch to a 256GB SSD as well, though; that is on my list of things I want to upgrade.


Your CPU is slower than the $50 G4560, and current high-end CPUs are about 2-2.5x as fast as yours.


Sure, but whether or not you'll actually notice any difference is very workload dependent.


Completely agree. I am aware that my CPU is not a blazing fast machine. But I do not really notice the difference all that much (if at all) for the work that I am doing on it.

Which mainly consists of running: IntelliJ, Spotify, and maybe a few tabs in Chromium on Ubuntu.


What if you were running a few virtual machines for development (a very common workload)?


According to [1], the i7-860 still has better multi-core performance and is pretty close overall.

[1] http://cpu.userbenchmark.com/Compare/Intel-Pentium-G4560-vs-...


Well yeah, for writing code you might as well use a decade-old CPU and it wouldn't really matter. Only compile times would suffer.

If you use more performance-critical applications, I think the best way to future-proof would be to buy an "unlocked" CPU that you'll be able to overclock when you want to squeeze out a bit more performance (maybe at the cost of a better cooling system). At least that's what I do.


> Well yeah, for writing code you might as well use a decade-old CPU and it wouldn't really matter. Only compile times would suffer.

2007 is the era of the Core 2 Duo, which really is dated and slow compared to today's processors.

5 years is reasonable; the evolution has been kinda slow lately.


Does it matter for writing code though?

I have an old crappy 32-bit Celeron laptop with 1GB of RAM that's about 10 years old and runs Debian. I still use it from time to time when I want to code on the go; it runs Emacs and Firefox just fine. If I were to change something I'd probably replace the crappy HDD with an SSD before I touch the CPU. And the screen resolution is pretty poor too. But performance-wise it just works.

I guess the key element for me is that I basically use the same software stack now that I did 10 years ago. Same shell, same window manager, same OS, same editor, same compiler. Only the web browser has significantly evolved in this time span.


1GB of RAM won't run Eclipse or Visual Studio smoothly. It's definitely not gonna run any test VM.


I remember when I first got Visual Studio, it used so much memory that it was the justification I used to get dad to spring for an extra 128MB of RAM, bringing the total to 256MB. I can't think of much it does now that it couldn't then.


It should be OK for light Docker usage.

Builds might become memory bound. C++ compilers use lots of RAM, and you might not be able to run one per core in parallel (assuming this machine has 2+ cores). I suspect some other languages are similar.


It's only the linking stage that uses a lot of RAM, and if you're building a single binary that's single-threaded anyway. But if you need swap to link, it's going to take forever, and nowadays plenty of things need more than 1GB to link (especially if you enable LTO).


I'll be going on 5 years this fall, which is my planned update period. Aside from some newer games, though, everything is great. Compilation times okay, heavy web browsing feels fine.


It's been a few years since games stopped running well on a Core 2 Duo. I had to upgrade my processor because of that. (Running at 10fps isn't running fine.)


Well, of course when writing code you do want to run the tools you are using without any issues (IntelliJ, Visual Studio, profilers, ...).

So it's not just "pushing text into a file", which you could of course do from the terminal as well if you were so inclined.

The idea of buying an overclockable CPU does sound like a pretty good strategy. ;-)


I have built computers with the intention of overclocking down the road but never got around to it. If you are going to overclock, then do it with that intention from the get-go. Historically, are unlocked CPUs more expensive than locked CPUs?


I have an i7 860 as well. I've added 32GB RAM and a new 1050 (cheap, but very fast for what I do with it, panoramic stitching).

The 860 is feeling a little sluggish these days for me. My laptop is about as fast... I'm excited about the jump once I decide to upgrade, though, and I've pretty much decided that if it isn't a 2x improvement it's not worth jumping.


It's closer than I thought to my 2011-vintage Core i5-2500. I don't see any point in upgrading either...

http://cpuboss.com/cpus/Intel-Core-i7-860-vs-Intel-Core-i5-2...


If you really want bang for your buck, the non-X variants of the Ryzen chips are completely unlocked. So you can overclock/overvolt the chip to get within 3-5% of the performance of the high-end model.

Right now, I think the Ryzen 1700 with some overclocking is the best deal in the market. The R5 1600 and 1400 might be similar.

This video has a quite interesting comparison of the price/performance: https://www.youtube.com/watch?v=-RRt5WkVxuk


Agreed. I went with a 1700, and clocked it the same as an 1800X, saved $170 for effectively the same performance. The thing is a beast, for really cheap.


The performance is lasting just fine for me as well. I've kept up with a couple SSD upgrades and stupid-cheap DDR3 RAM. But my performance bottleneck has always been storage, and the current champion (and next upgrade for me) is a PCIe SSD using NVMe. And old platforms just don't support that.


That's always the trick - what's the best bang for the buck while still future-proofing. I'm still using my Q6600 for development and playing the occasional game. Maybe the 1600 now, to keep it at the 65W level.


Just to mirror your point, my desktop and main work computer has an i5 750 (2009). There hasn't been a single reason for me to replace it, unless it dies. It's an excellent processor and does everything I need it to do.

I also have a 7th gen i7 in my laptop; to be honest there is no noticeable difference between the two for my use cases.


I too have a 750 from 2009, overclocked to 3.5 GHz since the day I bought it. It still benches about half the score of brand-new ~$200 CPUs.

I'll probably get one of these newfangled AMD raisens so I can have USB 3.0 and onboard LAN.


FWIW a Skylake laptop chip still felt "slower" to me than an i7 2600k, which is several generations behind. In my case I know the Skylake chip is being throttled (XPS 9350 runs hot)


Yeah but all the other components on your motherboard will become outdated. Every few years you'd probably want a mobo with the latest memory and storage support.


Same, still going strong on my i5 750 here. That gen was the perfect time to buy.

My year old (cheap) laptop is actually significantly slower.


TL;DR: Intel is still faster for single-threaded tasks, but AMD with all its cores and threads demolishes it on parallel workloads. It doesn't lag far behind on single-threaded performance either, so overall it's a big winner unless your application is stubbornly dependent on single-threaded performance.


I'm extremely excited for a competitive AMD again. Hats off to Lisa Su for turning around a fallen company.

Just to make my wants clearer, I would love to have a Ryzen-based APU in a NUC-like form factor. I have a 6th gen Intel i5 NUC, but would happily scrap it for a graphics system that plays nicer with Linux.


> but would happily scrap it for a graphics system that plays nicer with linux.

Interesting -- Intel integrated graphics have always been the gold standard for graphics systems that play nicely with Linux (and other open source systems). As far as I know, they're still the only vendor with complete documentation.


That's true, although I have had a bunch of problems (mostly related to not being on kernel 4.9 yet, so kinda my fault?). I guess what I really should have said is that what I want is the combination of good graphics performance and good Linux support. Iris is definitely a step up from Intel's past performance, but it's still not good.


> turning around a fallen company

AMD has been going through this cycle of down-and-turnaround for decades now. They tend to be like Microsoft OSes in that every other CPU architecture is the one that does really well, giving Intel some trouble... until Intel decides to crush AMD with lower prices and existing OEM/supplier relationships.

Give it a couple years, AMD stock will be back down pushing new lows, at which point you can scoop up some shares and ride it out a few years for the next big win. AMD sticks around more so because Intel needs them to exist; otherwise Intel would be a straight monopoly.


It hasn't been a cycle in forever, only down, down, down. AMD has not been relevant in high-performance consumer computers for about a decade! (For CPUs; GPUs are another matter entirely.)


Every generation gets its "AMD kicks ass!" moment.


Agreed. Would be super nice to get full ECC support for 64GB of RAM in a NUC form factor.

The RAM capacity is the only reason I don't have 2 or 3 NUCs right now for playing with.


I wasn't actually reading the news at the time, but:

No one ever got fired for buying Intel (2000)[0]

It seems this is still AMD's biggest issue?

[0]: https://www.theregister.co.uk/2000/11/27/no_one_ever_got_fir...


"All the Ryzen 5 parts will support DDR4 ECC and non-ECC memory"

This is huge for the NAS market. Mainstream Intel parts do not support ECC.


What makes a NAS different from other systems in regard to ECC?


There's a presumption that NAS stores more-important-than-average data, so an error is a bigger deal.

Also NAS systems often have check-summing and redundancy on the disk, but if an error happened in RAM, then that error will be propagated into the checksum and redundant copies.
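
A toy sketch of that failure mode in Python (purely illustrative; real NAS filesystems checksum at a lower layer, but the principle is the same):

    import zlib

    data = bytearray(b"precious family photos")
    data[3] ^= 0x04                      # a bit flips in (non-ECC) RAM first
    checksum = zlib.crc32(data)          # checksum now covers the corrupted bytes

    # a later scrub compares the corrupted data to its matching checksum:
    assert zlib.crc32(data) == checksum  # passes -- the corruption is silent

ECC RAM catches the flip before the checksum is ever computed.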


Bit flips when your storage is on the line is a Very Bad Thing(TM).


Due to silent data corruption.


I learnt on another HN thread that Ryzen is now working on Mac. 10.9 only at the moment, but hopefully that will change:

https://www.youtube.com/watch?v=ntJLxbwurK4


Most interesting to me is that this means OS X 10.9 is running on DDR4, even though no Apple product to date that I'm aware of has shipped with anything faster than 2133MHz DDR3.

[edit for highest DRAM frequency offered by Apple]


At least until now, none of the AMD CPUs can be used in a Hackintosh, which is unfortunate (https://www.tonymacx86.com/).


Welcome back, AMD. And of course, no doubt Intel has the power to react. Competition is great!


What are the most comparable benchmarks to web development work (i.e. Visual Studio compile/publish/debugging), or what do you recommend for making educated decisions around purchasing a CPU specifically for building software?


There are compilation benchmarks in TFA.

Edit: http://www.anandtech.com/show/11244/the-amd-ryzen-5-1600x-vs...


There's a single benchmark for measuring how long it takes to compile Chrome. That doesn't tell, say, the average IDE user how well it'll handle their interactive experience - it's reasonable to expect compilation to multithread well - or what the workload the original poster specifically mentioned will be like. Many web developers have e.g. databases, redis, CSS/JS compilers, etc. running (directly, in Docker, etc.) in addition to the app, and many of those bottleneck on single-threaded performance, so it's quite reasonable to ask whether that kind of workload benefits from the extra parallelism which is the main selling point for this processor design.


It seems to average 10-20 compiles a day... and it's Chromium compilation, which probably averages 1-2.5 hours per compile. A compiler as a benchmark depends a lot on moving data and on jumps, from all the parsing, codegen, dataflow analysis, etc. It's definitely not a floating-point-heavy benchmark.



Seems like the answer is that more threads are better, and Ryzen has more threads.


If you do compiling, then more threads are better.

For video editing, more threads are HUGE.

If you have several single-threaded applications open, then more threads are better. With multiple tabs, more threads are better.

Most of the time, though, single-thread performance is more important.


That's why I upgraded at home from a 6700K to an R7 1700X. My build times went down to like 55-65% of what they were, so not quite 2x as fast.


No, but you got an 80% improvement for a similar price.
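
Back of the envelope (a quick sketch, with times normalized so the old build takes 1.0):

    old = 1.0
    best, worst = 0.55, 0.65   # the new build times from the parent comment
    print(old / best)          # ~1.82x throughput, i.e. ~80% faster
    print(old / worst)         # ~1.54x throughput, i.e. ~54% faster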


Higher frequency is better. Most web languages are single-threaded. Most operations - opening your IDE, running tests - are also single-threaded.


> running tests are also single-threaded

Only if your tests modify global state / aren't thread-safe. Good unit tests can be run multi-threaded.
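
For instance, tests that share no mutable state can simply be dispatched to a pool. A minimal Python sketch (the test functions are made up; CPython's GIL means real runners like pytest-xdist fan out to processes instead):

    from concurrent.futures import ThreadPoolExecutor

    # three independent, side-effect-free "tests" (made up for illustration)
    def test_add():   assert 1 + 1 == 2
    def test_upper(): assert "amd".upper() == "AMD"
    def test_sort():  assert sorted([3, 1, 2]) == [1, 2, 3]

    with ThreadPoolExecutor() as pool:
        futures = {pool.submit(t): t.__name__ for t in (test_add, test_upper, test_sort)}
        for fut, name in futures.items():
            fut.result()       # re-raises if that test failed
            print(name, "ok")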


> opening your IDE, running tests are also single-threaded

That very much depends on the IDE and the testing framework.


Yeah, I'm not sure what IDE they're talking about, but certainly IntelliJ will happily use 800% CPU, causing the fans to spin up impressively, when starting up/reindexing/doing any of the slow things that IntelliJ likes to do.


Glad to learn that the reindexing is multi threaded :)


Yes... and he's trying to say a lot of software out there isn't written multi-threaded, because either there wasn't much performance gain or the developers simply chose not to.


Compilation and linting are mostly mono-threaded, at least in the languages I use. That is a lot of direct waiting.


> Compilation and linting are mostly mono-threaded.

Which languages fall on your list? (Just curious)

I use:

* C (make and gcc can thread)

* Python (no real thread story)

* JS (depends)

* Scheme (usually threads)


C, C++. Compilation can be multi-threaded. Linking is inherently mono-threaded. A number of optimization flags are mono-threaded (and take very long).

Python. Lint is mono-threaded. The virtualenv setup is mono-threaded (installing requirements.txt).

Java. Mono-threaded for package generation (jar files). Not sure about compilation and running tests. Most of Eclipse is mono-threaded (last I used it), like indexing and rendering and auto-completion.

Outside of that, opening any IDE, tool, application or browser is a mostly single-threaded operation. The rendering of most GUIs is also single-threaded.


> Linking is inherently mono-threaded.

This is actually not the case -- modern linkers (e.g., lld) can take advantage of multiple cores.

https://lld.llvm.org/


So can the 'gold' linker.


Interesting -- do you have a link on this?


Make definitely requires a well-specified list of dependencies, meaning that the targets and prerequisites need to specify a DAG from the source files through the intermediates to the final target executable(s). If that's not complete then a threaded make will fail.

Most recursive make systems can't be run in jobserver mode because the graph is not well-specified.
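
Conceptually, a parallel make walks that DAG and starts every target whose prerequisites are finished. A toy scheduler in Python (the graph and "recipes" are made up; real make also handles timestamps, cycles, and failures):

    from concurrent.futures import ThreadPoolExecutor

    # target -> prerequisites; in make, the rules declare this DAG
    deps = {"a.o": [], "b.o": [], "app": ["a.o", "b.o"]}

    done = set()

    def build(target):
        print("building", target)  # stand-in for running the recipe
        done.add(target)

    remaining = dict(deps)
    with ThreadPoolExecutor() as pool:
        while remaining:
            # anything whose prerequisites are all built can run concurrently
            ready = [t for t, d in remaining.items() if all(p in done for p in d)]
            list(pool.map(build, ready))
            for t in ready:
                del remaining[t]

If a prerequisite is missing from the declared graph, a target can start too early - which is exactly how an under-specified Makefile breaks under -j.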


This statement is pretty badly outdated. Modern web tech is just as multi-thread capable as other tech.


It's not. There's a reason Mozilla is pushing Servo.


Finally, multicore for the masses. A 6C/12T workstation is also a sweet spot for development work.


Multicore has been 'for the masses' for a decade or more by now, even in mobile phones. What is different here than with other chips made by Intel or AMD that makes you say that?


Intel's offerings with more than 4 cores have been segregated into an "enthusiast"/HEDT platform (X99) with more expensive mainboards, and the CPUs have been overpriced too.

Ryzen is reasonably priced and has one socket for the 4, 6 and 8 core CPUs.


4 cores still is multicore though.

But ok, I see what you mean: anything with more than 4 cores that is affordable.


If anything Intel quad-cores have been regressing in the market over the past few years. The quad-cores have gotten more expensive (in laptops), and Intel has started naming dual-core chips "Core i7", as another trick to charge almost the same prices it charges for quad-cores.


Intel has been putting die space into integrated graphics instead of more cores. Higher core machines have been available only at very high prices.


Which has been a bad move in the desktop market. Who wants to pay for integrated graphics when they are just going to put one or more discrete GPUs in their system?

The only reason LGA 1151 has been so successful is that most of the market consists of "gamers" who are convinced that they need higher clock speed and IPC.

AM4 should only be compared to LGA 2011v3, which has a very inflated price.


Intel's multicore CPUs (hexa-core and octa-core chips) have been available for Intel's socket B/R systems, which are HEDT grade. These systems came with a price premium, which isn't exactly the case for R5 and even R7 builds.


Currently my home 'workstation' is running an X5670 @ 4GHz, a 6C/12T Westmere Xeon from 2010, but it's still competitive and can be bought for peanuts today (bought mine for £70). Its base clock is 2.95GHz, but it easily overclocks to 4GHz and beyond.


I wonder how much more you pay in electricity costs over a Kaby Lake 7700K for example, which idles much lower and wouldn’t need to overclock.


I'm sure that KL consumes significantly less in practice, especially at idle, but for what it's worth they are 91W vs 95W TDP.

Anyway, the machine runs an, admittedly overkill, GTX 1070 with a TDP of 150W, so the CPU power usage is the least concern.


Since TDP is not power consumption, the GTX 1070 will most likely consume less power than your CPU because it has much better power-saving features.

Also, since CPUs are idle more of the time than under load, the savings from using a newer CPU like Kaby Lake or Ryzen could be interesting. There is also the motherboard: newer boards consume less power because the chips are on a smaller node.

But I guess the money you saved on buying the CPU will outweigh the electricity costs even in countries where electricity is expensive.


> But I guess the money you saved on buying the CPU will outweigh the electricity costs even in countries where electricity is expensive.

Unless he's running 24/7 and paying way too much, yeah, by a large margin. Even these older Xeons weren't that bad at energy saving.


To be honest the machine is turned off most of the time, as I have very little time to play with it.


But the overclock is going to affect actual draw. Not sure by how much... possibly a lot.


Isn't the issue with these old Intel CPUs not the CPUs but the motherboards, which can retail for $200-300 as their supplies dwindle?


Yes, it is the motherboard.

Get a second-hand workstation on eBay, don't get individual components.


It might be an issue, but I still got mine (a Gigabyte X58A-UD3) for £140. I was specifically looking for a MB with decent overclocking support and working VT-d, which limits the choice. You can get entry-level X58 Intel MBs for much less (overclocking these Xeons only requires bumping the BCLK to 200MHz and maybe bumping the core voltage a bit, nothing fancy). The RAM is also still fairly cheap.

If you care more about core counts than single-thread clocks, you can find used X58 dual-socket workstation MBs, which also support large amounts of server ECC RAM, which is very cheap.


The R7 1700 is even more of a sweet spot for development, though.


Especially at 65 watts. That's incredible.


Not 65 watts if you overclock to 4 GHz :D


Mildly off topic but still on topic: Last night I was looking at user benchmarks for the Ryzen 7 vs the newer i7s and it looked like the i7s were beating the R7s in almost every practical category. I thought the R7 was supposed to be the 'bigger performance' sort of chip, but it can't seem to beat the i7, which hasn't really had a big performance boost in 5 years. What's going on? Is it that software testing the chips has not been updated to take advantage of the fancy new technology in the R7? Or is Intel just so far ahead that the R7 is a catch-up?


There are a couple of things going on here:

- Ryzen is weaker in single-core IPC than Kaby Lake (it's at Broadwell levels), and many benchmarks are single-core optimised.

- There are lots of optimisations by software vendors, as well as fixes by AMD, yet to come (a couple were already released).

- Most of the coverage focuses on gaming, where Intel wins hands down because of its better single-core performance.

- The Ryzen 7 is really aimed at content creators who export a lot of photos or 4K video and developers who do a lot of compilation, basically tasks that require and benefit from multiple cores. In those categories the $500 Ryzen is neck and neck with the $1000 i7 Extreme CPU; this is where it really shines and is worth considering.

The reason why most of the benchmarks are aimed at gamers and thus favour single-core IPC is because the PC market is such that if you want anything decent, you have to be a gamer.

Want a good mechanical keyboard? It's going to be a gaming keyboard etc., because the Pro market is much smaller than the gaming market, and thus these CPUs are rarely shown in comparisons where they really shine. One exception is here: https://www.youtube.com/watch?v=UIIb5uZfukU


Good non-gaming keyboard: https://www.kinesis-ergo.com/shop/advantage2/

I agree on most other points. It is sad that good equipment is hard to get without a bunch of lights and MEGA-XXXTREME!!! stickers/labeling all over the place.

I think the thing is that the gaming market is the largest "prosumer" market in the space. For real enterprises, they charge ridiculous markups on something similar to the gaming parts but labeled "professional" or "enterprise". For normal people, they produce the budget parts, because most people don't really care as long as Facebook works. The middle market in PC parts is gamers, enthusiasts who are willing to spend a little more to get decent quality/performance.


I think there's plenty of non-gaming high-end peripherals out there. I don't even think they are that hard to find. I bought a Filco Majestouch tenkeyless keyboard about 10 years ago and it's certainly not a gaming LED lightshow. Similarly I have a pair of Sennheiser HD 558 headphones that aren't marketed towards gamers.


> I thought the R7 was supposed to be the 'bigger performance' sort of chip

Not really. Basically, if you want the fastest CPUs money can buy, you go with Intel. If you want a $500 CPU that can hold its own against Intel's $1000 CPUs, you get a Ryzen 7.


https://www.youtube.com/watch?v=aXHlTMKyse8

(Heaven Unigine and Prime95 at the same time)


It's pretty wild that a CPU can have this much of an effect on game framerates. That definitely defied my expectations. I'm also fascinated that it's much worse on some games, and much better on others.

Personally, I use my GPU for video encoding and mostly play games in the categories where Ryzen seems to underperform, so I won't be picking one of these up. But it's nice to know that AMD is bringing the heat on this front.


> I'm also fascinated that it's much worse on some games, and much better on others.

That's due to the all the Intel-specific optimizations that game companies have done. Here's an article about it:

http://www.pcworld.com/article/3185466/hardware/heres-proof-...

"Every processor is different on how you tune it, and Ryzen gave us some new data points on optimization," Oxide’s Dan Baker told PCWorld. "We’ve invested thousands of hours tuning Intel CPUs to get every last bit of performance out of them, but comparatively little time so far on Ryzen."


> It's pretty wild that a CPU can have this much of an effect on game framerates.

The CPU is the thing that provides work for the GPU to do (draw lists or draw bundles). Not every frame is fill bound (GPU bound) but some are and some aren't.


For a home server/NAS (or a PC that runs 24/7), it's a "maybe" for me. According to the article, the power consumption of the Ryzen 1500X (65W TDP) at the core is about 49 watts, while the Intel i5-7400 (65W TDP) sits at 30 watts at the core under full load. The difference is significant, not just in electric power consumption, but also for CPU cooling/heat and noise considerations.

As for desktop, I'd give Ryzen serious consideration, but the motherboards are still at a premium or out of stock.


> the motherboards are still at a premium or out of stock.

Premium? Not at all. Compare AM4 motherboards with Socket 2011v3, and you will see a huge price difference. There are plenty of decent AM4 boards to be had around $100. The cheapest 2011v3 board on newegg.com is $140. Higher-end 2011v3 boards approach $600.

Out of stock? A quick glance at newegg doesn't show any out of stock. Where are you looking?


I'm curious why so many of these charts show the 1600X beating the 1800X. This surprises me and it's not mentioned in the article that I could see.


Off-topic: what's the benefit of the Wraith coolers? A decent pack-in for cheaper than what you'd get aftermarket?


A decent cooler for effectively $0, since it comes included in the boxed version, which is usually not more expensive than the tray version without the cooler.


The metal seems good and upgradable for deep learning, but reports say the software is nowhere near the Intel environment yet.


Software? You mean like firmware? Every new platform has problems with that… IIRC Intel X99 massively sucked at launch


I wish Ryzen were useful for Hackintosh, but it's currently ill-advised to go there...


When considering the switch from Intel to AMD, don't be fooled by comparing clock speeds! Intel has been ahead of AMD in the performance-per-clock game for a long time, and while some of these benchmarks look more promising than my disappointing experience (buying Magny Cours, and later upgrading to Interlagos), less-expensive (and lower-TDP) Intel CPUs still come out ahead of Ryzen in many of these benchmarks. The tests where Ryzen does shine generally seem to involve parallel workloads, but bear in mind that most of the stuff you wait for your computer to do is still bottlenecked by a single thread.


I disagree. It's all dependent on use case, but as a sysadmin I have seen real-world use cases where people are doing multiple things at once, and one bottlenecked program on an Intel will slow the system to a halt whereas the AMD will keep chugging on the rest of the multitasking going on, and that's a very real benefit.

For example, how many people these days actually run with one monitor and do just one thing at a time? I certainly don't, and for example, on my i7 laptop, if I have Firefox up on the second monitor and play CS:GO, I get 20fps less. That never happened on my AMD FX system...

In essence what I am saying is parallel workloads are becoming the norm, and that's why I'm betting on AMD in the long run.

The world where everything is single-threaded is quickly shrinking, and the world where people only do one thing at a time is as well. For example, lots of game engines traditionally had this problem, but they are getting better and better at multi-threading for perf gains.


Developers as well: I generally have 2-4 Vagrant machines running in the background, a heavy IDE, and a bunch of other things all doing work.

6 cores and 12 threads vs my current i5-3570K starts to look interesting, very interesting.


Intel's IPC advantage has mostly evaporated with Ryzen.

They went from a 50-60% advantage to a 5-6% advantage for Kaby Lake, and near parity with Broadwell (the current high-thread-count Xeons and Extreme consumer platforms).


I've used AMD hardware on a very heavily loaded webserver to good effect. This was on a relatively affordable HP server with two CPU sockets, two Opteron 6274s, and a good amount of RAM to go with them.

Super stable and very good performance for relatively little money. Web stuff is one of those places where multi-threading and lots of cores is an easy win due to the nature of the workload.


I imagine unless your workload is MS SQL or Oracle (if for no other reason than that you'll spend more on licensing than on your CPU, so you might as well get the most out of it), Naples is probably going to be very appealing when it comes out later this year. Hell, I'm tempted to replace my 9-month-old ThinkServer TD340 depending on how much the chips cost; I'm looking at $1000 to put a reasonable pair of 2nd generation E5 chips in this already.


If the rumors [0][1] about Ryzen 9 (12 and 16 core HEDT CPUs for under $1000) are true, and Naples starts at a similar price point, things could get really interesting.

[0] https://twitter.com/CPCHardware/status/844618089618722816

[1] https://videocardz.com/67649/amd-ryzen-cpu-with-12-cores-and...


The nice thing about those CPUs is that your argument pretty much does not apply anymore. Yes, Intel is still faster in single-thread workloads, but the gap is not that big anymore. The benchmarks show that in areas that were historically like you describe, especially gaming, the Ryzen R5 has pretty great results. It doesn't win against the i7-7700K, and it looks not too good in Rocket League, but it has good results in GTA 5.

And actually, the difference is less about performance-per-clock now, and more that the higher-end Intel CPUs, the 7600K and 7700K, have a higher clock to start with.


> The tests where Ryzen does shine generally seem to involve parallel workloads, but bear in mind that most of the stuff you wait for your computer to do is still bottlenecked by a single thread.

Who exactly do you think visits these forums? Grandmas with one tab open in the browser?

I imagine most people here can make full use of at least 4 cores/8 threads at all times: compiling, rendering, editing, video streaming, juggling 50 browser tabs and other programs, opening one or two VMs for testing, running local web servers, and so on.


That is kind of the point of the benchmarks. Intel CPUs come out ahead in terms of IPC, but they aren't utterly destroying AMD as they were in the past few generations. That performance leap, coupled with a lower price point/TDP and more cores and threads, does make AMD CPUs a lot more attractive than they have been in years.


Most of the stuff I wait for my computer to do is bottlenecked by latency to webservers.


Nice try Intel social media team!


If you want a decent upgrade path, go for an 1151-based chipset; AMD CPUs are spread out over too many sockets.

I'm considering getting an i3- or i5-based setup and upgrading to an 1151 i7 a year or two later.


It is Intel that is introducing a new socket with each new generation, and now even a new socket for their performance-but-not-X99 line. It will then have that new Socket 2066, plus Socket 1151, plus Socket 2011-3. Except that the latter will be replaced by LGA 3647.

And I'm not even touching on server stuff here.


Many sockets? From 2017 on there is only one - AM4 - and it's future-proof.


Future proof? People still say that? Nothing in computing is future proof as evidenced by the many different connector/socket types in just the last couple years.


How many sockets has AMD had in "just the last couple years"?

Right, the AM3 socket started being sold in February 2009, and AM3+ was entirely compatible with it.

[edit] Also: FM2, targeting a different range of performance, came out in 2012. Again, FM2+ was backwards compatible there too. Still five years since a non-backwards-compatible AMD chipset.

You might be thinking about Intel's practices.


To be fair, after AM3 there was also FM1, FM2, FM2+, AM1 and AM2.


AM1/2 are older than AM3. And you're right that FM1/2/2+ exist, but FM2 came out in 2012 and FM2+ was backwards compatible with FM2 CPUs. FM chipsets targeted a different range of performance, so I consider the updates to those chipsets orthogonal to the AM series.

Thanks for pointing out the FM series, though; I've modified my comment to reflect this.


Made a mistake there; you are right, partly. AM2 is indeed older than AM3. I meant only the AM1, which was a recent platform for some processors with a low TDP. Not very well known, so it is no surprise it wasn't on your radar.


AM1 was a competitor to Intel's Atom chips. Which have no socket at all!

AM1 was very niche, but a socket for ~20W CPUs is outstanding. You can pair any AM1 motherboard (down to $20 crap) with any AM1 CPU ($20 crap to $100 less crap). Good for HTPCs and NAS machines... when you needed a "serious" Raspberry Pi-ish machine with x86 compatibility, AM1 was my choice.


I agree. Those were nice CPUs and it is not a bad thing that they had a socket. I did not list them as a negative point, but to show that AMD has had more recent sockets than AM3+, which could maybe partly explain the OP's misconception that AMD spreads its CPUs over more sockets than Intel. When FM2+ got upgrades while AM3+ got them as well, it was indeed a strange split. That it was nothing compared to what Intel does with their platform segmentation seems to have gotten lost.


"Future" in computing means 3-5 years. Everything else is just a daydream.

In the time between AM2 and AM3, AMD's socket strategy was pretty straightforward. I don't know what they tried with the FM/FS stuff. I am looking forward to the success of AM4.


They say that AM4 will be their platform for the coming years.


1151 already has a massive range of CPUs to select from; AM4 has only got Ryzen 3, 5 and 7, which are not very budget-friendly right now.

1151 has the budget options right now. Yes, you can get a Ryzen 3 and upgrade to a Ryzen 7 in the future, but the bang per buck of that upgrade will be less, in my opinion.


> 1151 already has a massive range of CPUs to select from; AM4 has only got Ryzen 3, 5 and 7, which are not very budget-friendly right now.

AM4 has just been released while 1151 has been around for some time. If I plan to upgrade later on, I would rather buy a newly released socket.


That's odd. Why do you think AMD is spread out over many sockets? Their old CPUs are entirely irrelevant now, so only the AM4 socket is relevant today, no? And Intel is still spread among 1150, 1151 and some 2011 versions.


Hahaha, nope, Intel is the one with too many sockets. 1155 → 1150 → 1151 is just ridiculous, and 2011-v3 is electrically different from 2011 as well.


1151 has the most CPU options right now, no?


That is irrelevant. If you get a proper i5 now, the only upgrade option you are sure to have is the i7-7700K, and if you are lucky there might be an i7 of the next line.

Absolutely buy an Intel processor if it is the right fit for you (= you are a gamer), but don't expect to re-use the mainboard for your next processor - only plan on that if you go with a Pentium G4560 for now. Then you can upgrade to a better processor of the current generation.


Essentially two options: Skylake and Kaby Lake (which is a slightly more efficient update of Skylake). And it seems like there's no future beyond that.

Counting the individual SKUs is kinda pointless; they're just cut-down versions of the same chip.


Is that type of "future-proofing" really worth it? Spending an extra $400 today and using the computer until it's completely obsolete, instead of futile attempts to keep up part by part, seems cheaper and easier to me (unless your goal is to be able to play AAA games at max settings on release).


I'm looking at yearly mid-range GPU upgrades, and the CPU must be able to scale too.



