Welcome to Steelbeasts.com

Skybird03

CPU-type for SBP?

After 7 years on duty, my system's hardware is starting to signal that it wants to be put into pensioner mode. I plan to replace it in the foreseeable future. This has benefits (more performance, VR becomes an option - I do a lot of racing in Assetto Corsa) and also drawbacks (Windows 10 with its technical support and privacy disaster). Obviously, playing the kind of stuff I like on Linux is not an option.

 

I now wonder. For the past 20 years I have usually bought "one generation behind", which was sound economics: I got good performance while avoiding the absurd cost of the newest, latest hardware. My current CPU is a proven i5 2500K, a CPU of almost legendary status. But I probably want to get a taste of VR, and for that I obviously need a beefy CPU and GPU. I also take into account that this will be the last gaming PC I ever build, so it has to last, technically, and should hold some performance reserves. Another 7 years of longevity is my minimum expectation.

 

That leaves me wondering about SBP. I plan to go with either an i7 7700K or a Ryzen 1600, something in this range. Currently I am in waiting mode for the new 8th-generation Intel CPUs, namely the 8700K. First benchmarks from a couple of days ago show a small increase in single-core performance, and up to a 50% boost in multi-threaded performance compared to the 7700K. Now I am wondering: what kind of software is SBP, what does it mostly depend on - a single core, multiple threaded cores, or the GPU anyway? Is it even 64-bit already?

 

I also wonder whether waiting for Coffee Lake is really worth it. The socket may still be 1151, but the new Z370 chipset is required, which is only a compromised Z270; motherboards with the really needed, all-new Z390 were recently listed by AnandTech as not being available before the second half of 2018. I don't think I will wait that long.

 

The Intels have an advantage in single-core performance; the Ryzens have an advantage in multi-threaded performance. The 7700 is known to get very hot, and it sounds as if the 8700 will not solve this issue. The medium-term future will probably belong to games that benefit from multithreading, but that does not necessarily include simulations of the type I prefer, from SBP over FSX to DCS - the latter two are absolutely not interested in HT. The Oculus, as an example, also does not benefit from HT; it is recommended to switch HT off (it costs frames).

 

So, from a strictly SBP point of view, what kind of CPU does SBP benefit from? Can it make use of HT, or does it just use one core anyway? Are there known pros and cons for Intel or Ryzen CPUs with SBP?

 

I suppose a GTX 1060, 1070 or 1080 Ti as GPU is sufficient. :D

Edited by Skybird03


The question is not just about what's best for "right now", but which CPU also offers the greatest promise for the expected lifetime - that is, the next five years or so.

 

Naturally, all questions about the future are difficult to answer. That being said, I think that the Ryzen is a very promising CPU for a number of reasons:

  • supports ECC RAM
  • has more PCI Express lanes, which is good if you want to retain the option to pick two graphics cards, and having a few fast I/O devices like USB3.0, SSD, ...
  • 180W thermal design power at full throttle isn't great, but Intel's direct competitors are at 240W without all cores on full blast
  • As far as SB Pro is concerned, the basic decision has been made to move the code base towards parallelization. This is not something that will be achieved overnight, but at least for all new features that get added we try to take advantage of multithreading, while gradually reworking the existing code as well. The whole transition will probably cost me two or three times as much as the decision to go for high-resolution terrain, which I mention only to illustrate that "management is committed" to this change.
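The incremental approach described in the last point can be sketched roughly like this - a hypothetical illustration, not eSim's actual code: a newly added feature farms its independent work units out to a thread pool, while untouched legacy code keeps running single-threaded alongside it.

```python
from concurrent.futures import ThreadPoolExecutor

def process_tile(tile_id):
    # Hypothetical stand-in for one independent unit of work in a
    # *new* feature, e.g. updating one terrain tile.
    return tile_id * tile_id

def process_all_tiles(tile_ids, workers=4):
    # The new feature distributes its work units across a pool of
    # worker threads; legacy code elsewhere is left single-threaded
    # and reworked only gradually.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_tile, tile_ids))

results = process_all_tiles(range(8))
```

The point of the pattern is that no existing code has to be rewritten up front: only the new, self-contained work units need to be thread-safe.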

At the end of the day, there are no guarantees. But if I had to make a purchase within the next three months, I'd go for a Ryzen.


Thank you. ;)

I take from this that SBP is currently more of a single-core application, and that the use of HT is still some time away.

Games as a general thing do not interest me that much anymore; my sim/game interests are more specific. From that angle it seems I may want to go with single-core performance. The Oculus (cheaper than the Vive) is said to benefit from having HT switched off. So I am really not that sure about my need for HT, at least not at the cost of single-core performance. But as you said, nothing about the future is certain.

I only doubt that HT will become relevant all that quickly, considering how long it has taken for quadcores to become a "standard" today - there are still many dualcores out there, and the number of games that indeed and for sure use more than one or two cores is still very limited. HT was nevertheless already being hyped when I bought my current - now obsolete - CPU in 2010. Seven years later, HT still has not established itself as a standard; it is still relatively rare. Even my beloved Assetto Corsa is not really optimised for HT. Some people report frame rate increases with HT; others say they lose frames if they do not switch it off.

For older sims like FSX or Falcon 4, one could clearly say that they were quite CPU-heavy, and that the CPU was more important than the GPU. DCS to this day cannot make real use of HT, and uses one, at best two cores only. ArmA 3 also has no use for quadcores, not to mention HT. It even runs slower with HT left on, I have read.

Is it the same with SBP as it is today - is CPU power more relevant for its performance than the GPU (as I assume)? Or does SBP hammer the GPU more than the CPU? I recall that I knew stuff like this ten years ago :), but that was not SBP versions 3 and 4, and the recommended specs seem to have grown significantly since then.

Edited by Skybird03


Well, there's a bit of a chicken-and-egg problem. The way I see it - and I feel the need to point out once more that this is my private opinion, and that I am not particularly qualified about anything - a lot of game developers look at what's actually available. My impression - as unfair as it may appear to some Intel execs - is that Intel has artificially held back multi-core CPUs to protect their profit margins on the Xeon, marketed for servers and workstations.

And because no consumer hardware was available that offered more than about two to four cores on average, most game engines only utilize four cores or less (note that Unity for mobile platforms is the exception, simply because you can't get decent performance on smartphones without taking advantage of all eight cores). With Threadripper, AMD is flipping the game table. All of a sudden everybody can have effectively sixteen CPU cores (eight, times two for hyperthreading), AND a lot of I/O; the small Epyc offers 16 x 2 cores, the big one 32 (plus hyperthreading), and I'm not even mentioning that there are also mainboards supporting two large Epyc CPUs for 64 native cores (x2 for HT). This really looks like a game changer to me. Of course you also need a Windows version that supports more than just a handful of cores - another needless barrier, created only to protect fat profit margins in the server market.

 

Of course, the immediate benefits will be available only for computing tasks where parallelization is easy to accomplish, like the simultaneous rendering of ray-traced images (e.g. Cinebench), or the compilation of software source code. But maybe this will also prompt other software developers to conclude that the stars are finally right to pursue parallelization in a more serious manner.
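Ray tracing is the textbook case of "embarrassingly parallel" work: each row of the image depends only on its own coordinates, so rows can be handed to any number of cores in any order with no shared state. A minimal sketch (the `shade` function here is a hypothetical stand-in for an expensive per-pixel computation):

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 8, 4

def shade(x, y):
    # Stand-in for an expensive per-pixel calculation (a ray trace);
    # it reads only its own coordinates, never shared mutable state.
    return (x + y) % 2

def render_row(y):
    # One independent unit of work: an entire image row.
    return [shade(x, y) for x in range(WIDTH)]

def render_parallel(workers=4):
    # Because rows are independent, splitting them across a pool of
    # workers requires no locks and no coordination beyond the join.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_row, range(HEIGHT)))

image = render_parallel()
```

This is why benchmarks like Cinebench scale almost linearly with core count, while game loops - full of ordering dependencies between AI, physics, and rendering - do not.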

CPU clock rates have leveled off at about 4 GHz for a number of years now. CPU architectures continue to improve so as to get more done per cycle, and that's certainly a factor, but the only way to retain the growth rate of raw computational power predicted by a loose interpretation of Moore's law is to add more cores, period. I think that Intel's product policy of the past five years has held back the success of parallelization on a broad front. Threadripper is now tearing down these artificial barriers.


Thanks again for taking the time.

I do not know the CPU business landscape, but for more than two years I have closely followed the controversy around Microsoft's botched Get Windows X (GWX) campaign and their loss of competence in maintaining Windows in a technically functional manner. It led me to move all non-gaming computer activities to a second system with Linux, and to run Windows 7 only as a game launcher; I have not updated Win7 in two years. If Microsoft can run such foul, rotten business practices, then why not others like Intel as well?

But, let's face it, quadcores have been around for a while now. Your argument was that because Intel hindered the release of multi-core CPUs, developers saw no reason to develop for multiple cores. Still, quadcores have been available for years - and the overwhelming majority of developers, especially sim developers, still do not make good use of them, if they even use more than one core at all. Intel's policies cannot have anything to do with that, can they? It's more that developers shied away from the increased workload, or had no idea how to make use of the additional potential of two or three more cores.

Finally, you said "x CPUs x2" to refer to hyperthreading. I think that is a bit misleading. It's not as if, by magic and miracle, the number of cores gets multiplied by two. That would be like claiming that a CPU capable of multitasking could multiply the workload it gets done in a given time. It doesn't. As I understand it, HT compares to a desk worker on a swivel chair, sitting between two desks. He works at one until he has to interrupt his work there because he needs to wait for a form to come in, or for a telephone call. Where a normal CPU would now just stop and wait, he turns around on his swivel chair and starts working at the other desk, until either he gets called back to the first one, or his work at the second desk is interrupted by something he has to wait for, at which point he swings back to the first. As I understand it, the gain is not that he does twice as much work, but that he no longer wastes time repeatedly waiting within the workflow - instead, when the task at hand gets interrupted, he simply does something different, so that that task gets completed earlier as well.

Isn't that the net gain of HT?

This also explains why HT can even slow down work with software that is not optimized for it. It is as if the guy on his swivel chair spends more time turning back and forth than it would have cost him to simply sit at desk 1 and wait until the event he was waiting for actually happened.
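That latency-hiding intuition can be demonstrated in miniature: two threads that each spend most of their time waiting finish in roughly the time of one wait, not the sum of both, because the waits overlap - the swivel-chair effect. A toy sketch (the sleep stands in for a memory stall or I/O wait; all names are hypothetical):

```python
import threading
import time

def wait_for_form(worker_id, results):
    # The "wait": a stall during which the worker does nothing useful.
    time.sleep(0.2)
    results[worker_id] = True

def run_overlapped(n=2):
    # n logical workers sharing the swivel chair: while one waits,
    # the other makes progress, so total wall time is close to one
    # wait (~0.2 s), not n waits (~0.4 s).
    results = {}
    threads = [threading.Thread(target=wait_for_form, args=(i, results))
               for i in range(n)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start, results

elapsed, results = run_overlapped()
```

If the work were pure computation with no waiting, the overlap would buy little - which matches the observation above that HT helps stall-heavy workloads and can hurt software not written for it.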



Edited by Skybird03


For the sake of brevity: the points I raised are my point of view, not necessarily the absolute truth. Yes, I simplified things a bit with hyperthreading, but it's not as if it deserves being called "faux cores". Where the software is designed to take advantage of it, the effect is substantial. Wherever eSim Games implements support for multithreading, we will make sure to take advantage of hyperthreading too - that's all I can say.

I also think that you'll have to make the switch over to WinX if you want to take advantage of more than four cores; you'll need to check how many cores the different Windows license types actually support (there's a limit, unless it's a Windows Server variant).
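One quick way to see what the operating system actually exposes is to query the logical processor count. Note that this counts hyperthreads, not physical cores: with HT enabled, an 8-core CPU reports 16 logical processors. A minimal sketch:

```python
import os

# os.cpu_count() reports the number of *logical* processors the OS
# exposes; with Hyper-Threading enabled this is double the number of
# physical cores. It may also reflect OS or license caps.
logical = os.cpu_count()
print(logical)
```

If the number reported here is lower than the CPU's advertised thread count, something between the silicon and the software (BIOS setting, OS edition, licensing) is holding cores back.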

 

All that being said,

  • I just don't see where growth in raw computing power is supposed to come from, if not parallelization. And Ryzen/Threadripper is a big step in that direction. Simulation developers may be slow to pick up the trend for other reasons, like:
  • most sim developers focus on the PC platform, and here Intel's product policy has definitely delayed progress (unlike the mobile market, where more than four cores became the norm years ago). That's why I see the potential for a successful Ryzen/Threadripper launch to turn out to be a game changer for software development as well.
  • To the extent that sim developers follow a product development strategy similar to eSim Games' - incremental improvement of a core product - a rapid transition towards multithreaded code is not to be expected. Rewriting existing code is very hard and costly, and ties down a lot of development resources that cannot be put toward creating new features ... but it's features that sell the product. It's much easier to create multithreaded code from scratch, but rewriting your own product from the ground up is usually a terrible idea, because you end up with a gazillion new bugs that need to be worked out where the old code may actually have been rather mature.


To be clear, I did not disagree with the outlook - and desired outcome/trend - you described; what you said makes pretty much sense to me. Regarding these future trends, you are probably right; I tend to think in the same direction. I only disagreed a bit because for years game developers have not even made use of the multi-core architectures that were, as a matter of fact, already available. And that leaves me wondering how long this new trend will take to materialize in actual software on a wide front, so that it indeed becomes "mainstream". I do not believe that now that Threadripper is coming, they will all start writing simulators for 16 threads only. It could very well be that the full transition to a new "mainstream standard" will take the full technical lifespan of a computer rig: several years, and not few.

And yes, Win X cannot be avoided on new hardware. I attempted test installations of W7 on two different Skylake notebooks, using the usual recipes for injecting USB code into the installer medium to get the installation to even start properly, plus two or three other tricks to bypass non-recognition of hardware, but I failed in both attempts. That is partly due to technical incompatibilities or non-recognition, and partly due to MS having formed alliances with hardware producers to build in needless hurdles preventing old Windows versions from being installed on new platforms. Likewise, Intel now demands the new Z370 chipset for its 8th generation of CPUs (which use the same old socket 1151...), with the Z370 merely unlocking a >>needless<< block on running the new CPUs, because otherwise the Z370 is nothing but a Z270 - but Asus and others get to sell new mainboards this way. The really new chipset that unlocks the CPUs in full, the Z390, will not be available until the second half of next year.

I think dual systems or dual boot is the way to go these days: one Linux system for emailing, browsing, shopping, work, text writing, photo editing and database storage, and one WinX system as a game launcher on which nothing personal or private is done or stored, with the options set tight and privacy options shut down as much as possible. (Even then you can still get knocked out or negatively affected by the shabby KB updates Microsoft has started to force onto people's property - property which Microsoft now treats as if it were not people's but Microsoft's own: they no longer accept a user's "No" as a no.)

Not all privacy can be protected with WinX; even a totally locked-down Enterprise version of W10 still extracts almost 2000 variables that in one way or another compromise the safety of private data and try to profile the user, and phones them home. But a "game console" cannot reveal more about me than my Steam account (I even buy stuff for Steam, or load up the wallet, via Linux) and what games I play. Any profiling beyond that is not possible when I do not use that system for anything more than launching a game. I still dislike W10 for reasons of principle and for the foul policies of MS, but I can deny them the full scale of their intended intrusion.

It's not all good in Linux land - there are problems like hardware and driver incompatibilities - but still, it is so much better, safer, faster and more stable than Windows ever has been or will be. Certain software I avoid on principle, for privacy and safety reasons: Google, Adobe, Microsoft, Facebook, Twitter, Java, and the like. There is this nice quote by Snowden, and he has it very right: "Arguing that you don't care about the right to privacy because you have nothing to hide is no different than saying you don't care about free speech because you have nothing to say."

Thanks for the replies, I appreciate that you took the time.

Edited by Skybird03


It may have been lost in the wall of text, but I pointed out that game developers HAVE adopted parallelization beyond four cores - in the mobile market, just not on the PC. In fact, major game engines do support eight cores and more there, where engines of the same brand don't under Windows. To me this suggests that limited availability is the bottleneck.


On the subject of hyperthreading, I've only ever had a single processor that did it. I had an old pre-HT PC for pretty much forever, one HT PC, and then my modern multicore.

 

I had some odd experiences with hyperthreading: as the PC began to show its age, turning it off seemed to give better performance than leaving it on. No idea why.


I expect delivery of my new system in 10-14 days. It will be an i7 8700K and a 1080 Ti, due to VR preparations for some other titles. The thread on a benchmark scenario is from summer 2016 - is it still current, is feedback still wanted? I could provide some data on these specs in 3-4 weeks.

"Somebody here is pleased." :)

