
Benchmarking framerates between 4.0 and 4.1?


Scrapper_511


15 hours ago, Ssnake said:

Try to open the scenario in the Mission Editor. When the warning message comes up that you're missing the map for it, click the "Replace map" button and pick the stock "Hannover-Weserbergland" map. Steel Beasts will then load it; after that you can save it and run it.

Okay... That worked, and I was able to run the scenario. Per the ratings I fit into the low/medium category, but only because the minimum frame rate dropped into the twenties once; otherwise it's "Good" to borderline "Great":

 

[screenshot: benchmark frame-rate results]

 

What really surprised me was the consistency of most of the frames (except for the smoke), with an average right at 50 FPS. Overall, it runs well enough that I will run well above the default graphics settings. I'll post anything I find that tips the scales.

 

My Specs:

Alienware Aurora

Win 10 Pro

i7-9700K @ 3.6 GHz

32 GB RAM

NVIDIA GeForce GTX 1080 Ti

Samsung SSD 860EVO - 2 TB

PC400 NVMe SK hynix - 512 GB

 


After reading people's results and testing a large number of different scenarios, it seems unlikely that a decent framerate can be achieved even on the fastest overclocked CPUs and GPUs today unless the missions are fairly small in terms of landscape complexity and units. I've given up on the idea of upgrading to a 9900K @ 5 GHz, as I think it would only provide a few more fps.

It's really hard to pinpoint what is causing the slowdown. I had a scenario where changing ground cover from 44 to 45 made the fps drop 10 frames, from 40 to 30. If I reverted the value to 44, it would go back up to 40...


2 hours ago, inexus said:

I've given up on the idea of upgrading to a 9900K @ 5 GHz, as I think it would only provide a few more fps.

It's really hard to pinpoint what is causing the slowdown. I had a scenario where changing ground cover from 44 to 45 made the fps drop 10 frames, from 40 to 30. If I reverted the value to 44, it would go back up to 40...

I confirm what you wrote. I run at 4K and most of the time the sim runs great at 30-50 fps. Like you, I get sudden drops to single-digit fps. Sometimes it's just traversing the turret left or right, or looking through the optics, and 💥 from 35 to 7 fps. I played with the idea of buying a faster CPU, but like you I came to the conclusion that this would improve the framerate just a bit and not solve the problem. Conclusion: the only options a 4K user has are to...

  • reduce the resolution (which, in my case, will never happen)
  • wait for a solution from eSim

Specs:

At 4K (3840x2160) resolution in a medium-size scenario I get a nice 40-60 fps.

  • System: Win 10 Pro / 64-bit

    • RAM: 32 GB

    • CPU: i7-5960X @ 3 GHz

    • GPU: NVIDIA GTX 1080 Ti


20 hours ago, Hoover said:

I played with the idea of buying a faster CPU, but like you I came to the conclusion that this would improve the framerate just a bit and not solve the problem. Conclusion: the only options a 4K user has are to reduce the resolution (which, in my case, will never happen) or to wait for a solution from eSim.

Changing the resolution has almost no impact (at least if your GPU is not fully utilised at 4K, which is my case). I provided screenshots earlier to demonstrate the difference between resolutions.

 

Over the weekend I tested the same scenario at different CPU speeds: 4.4, 4.0 and 3.5 GHz. I measured only a minor difference in FPS. Extrapolating from that data, I believe a scenario that runs at, say, 30 FPS today on a 4.4 GHz CPU might be 4-5 frames faster at 5 GHz.
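For anyone who wants to repeat that back-of-the-envelope estimate, here is a minimal sketch. It assumes, as a simplification, that FPS scales linearly with CPU clock in a CPU-bound scene, which almost certainly overstates the real gain; the numbers are only illustrative.

```python
# Rough linear extrapolation of CPU-bound FPS versus CPU clock speed.
# Assumption: FPS scales linearly with clock, which overstates the real-world gain.

def extrapolate_fps(fps_measured: float, clock_measured: float, clock_target: float) -> float:
    """Estimate FPS at clock_target, assuming linear scaling with clock speed."""
    return fps_measured * clock_target / clock_measured

if __name__ == "__main__":
    # A scene running at 30 FPS on a 4.4 GHz CPU, projected to 5.0 GHz:
    print(round(extrapolate_fps(30.0, 4.4, 5.0), 1))  # ~34.1 FPS, i.e. only a few frames gained
```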

 

For me, "playable" in a pro sim like this means at least 40 FPS. In any other sim/game I go for 60 FPS.

 

An interesting thing is that going to the map view often brings a good improvement in FPS, so it feels like the CPU is still doing a fair amount of the graphics work.

Edited by inexus

I have been running 4.157, and the slowdown as far as my machine goes has been the pagefile and physical RAM. I haven't bothered getting another 8 GB of RAM, but I did manually increase the swap file size in Windows, which gave me a few extra FPS.

This machine is pretty unique: I am running a Pentium 3258 overclocked to 4.2 GHz with 8 GB of RAM and a Radeon 550 video card. With medium settings I am getting 25-30 FPS; with the Benchmark scenario, 20-40. I am assuming this version, 4.1, is using multiple cores, whereas 4.0 appeared to be using only 2 cores.
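Coming back to the pagefile point: before spending money on more RAM, a quick sanity check with the third-party psutil package (pip install psutil; this is just my own rough check, not an official tool) can show whether memory is actually the pressure point while the sim is running:

```python
# Quick check of physical RAM and pagefile/swap pressure while Steel Beasts runs.
# Requires the third-party "psutil" package (pip install psutil).
import psutil

ram = psutil.virtual_memory()
swap = psutil.swap_memory()

print(f"RAM:  {ram.used / 2**30:.1f} / {ram.total / 2**30:.1f} GiB used ({ram.percent}%)")
print(f"Swap: {swap.used / 2**30:.1f} / {swap.total / 2**30:.1f} GiB used ({swap.percent}%)")

# Rule of thumb (my assumption, not an eSim recommendation): if RAM sits near 100%
# and swap usage climbs during play, more RAM or a larger pagefile may help.
if ram.percent > 90 and swap.percent > 50:
    print("Memory pressure detected - a larger pagefile or more RAM may help.")
```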


So I decided to run some tests to see which AA settings gave better results in quality and performance. I was quite surprised by what happened when changing the nVidia and SB settings. I am pasting the spreadsheet here; it might be hard to follow, but I'll summarize the results below. My machine is a Lenovo laptop with an i7-9750, a GTX 1660 Ti, 16 GB RAM, and an SSD. There were no thermal issues, but RAM usage was 12 GB.

 

Settings per scenario (nVidia / SB):

  Scenario 1: FXAA off, AA application-controlled / SB AA 4, Advanced AA 0
  Scenario 2: FXAA off, AA application-controlled / SB AA 8, Advanced AA 0
  Scenario 3 (best quality): FXAA off, nVidia-controlled AA 4 / SB AA 4, Advanced AA 0
  Scenario 4: FXAA off, nVidia-controlled AA 4 / SB AA 8, Advanced AA 0
  Scenario 5 (worst quality): FXAA off, nVidia-controlled AA 4 / SB AA 0, Advanced AA 0
  Scenario 6: FXAA on, nVidia-controlled AA 4 / SB AA 0, Advanced AA 100

Average FPS per scene (Scenario 1 / 2 / 3 / 4 / 5 / 6):

  Scene 1, M60 observer:        63 / 63 / 62 / 62 / 62 / 63
  Scene 2, M60 gunsight trees:  55 / 45 / 48 / 60 / 62 / 45
  Scene 3, city smoke:          45 / 43 / 45 / 42 / 50 / 43
  Scene 1, Leo looking back:    57 / 60 / 56 / 57 / 60 / 60

 

What I found is that there is some kind of interaction between the nVidia settings and the SB settings. The best quality appears to come from nVidia-controlled AA set to 4 AND SB AA set to 4. Anything above 4 in SB actually appears to give worse quality, with jagged lines on some straight edges, which is kind of weird. nVidia at 4 and SB at 8 gave good performance, but its AA quality was not very good; it almost looked like AA was off or very low.
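To make the spreadsheet easier to compare at a glance, here is a small sketch that just averages the four scenes per configuration (the FPS numbers are copied straight from the table above; the labels are my shorthand):

```python
# Average FPS per AA configuration, using the four test scenes from the table above.
results = {
    "1: app-controlled / SB 4":            [63, 55, 45, 57],
    "2: app-controlled / SB 8":            [63, 45, 43, 60],
    "3: nVidia 4 / SB 4 (best quality)":   [62, 48, 45, 56],
    "4: nVidia 4 / SB 8":                  [62, 60, 42, 57],
    "5: nVidia 4 / SB 0 (worst quality)":  [62, 62, 50, 60],
    "6: nVidia 4 + FXAA / SB 0":           [63, 45, 43, 60],
}
for config, fps in results.items():
    print(f"Scenario {config}: {sum(fps) / len(fps):.1f} FPS average")
```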

Edited by thewood

On 8/6/2019 at 4:54 AM, inexus said:

After reading people's results and testing a large number of different scenarios, it seems unlikely that a decent framerate can be achieved even on the fastest overclocked CPUs and GPUs today unless the missions are fairly small in terms of landscape complexity and units.

Hi inexus,

I did a little bit of benchmarking today. Here are the results.

Cheers Hoover

 

[REPORT]
In Benchmark 1 the GPU was maxed out between 82% and 99%. As I could not believe it, I repeated the test three times; it always came down to the same result. If you look at the FPS only, all seems to be OK, but the GPU usage tells another story. Before drawing a conclusion, others should confirm this result. A benchmark result with a quicker GPU like the GeForce RTX 2080 Ti would be very interesting.

[TEST]
I used MSI Afterburner for testing. Load Afterburner after you have entered the station you want to test, and avoid switching to the map. Always take your position first, then start Afterburner.

[REMARKS]
From minute 02:00 the weather gets worse and visibility drops; seemingly in proportion, the GPU load goes down from 99% to about 57%.

 

[UPDATE]

In my results the GPU was maxed out at 99%, but this finding seems to be an edge case that happens only when you look toward the woods in the day sight at a short distance. I could reproduce this behaviour (GPU maxed out at 99%) in another scenario, but in those cases the FPS stayed within the (for me) playable range (> 30 FPS). When I pan away from the woods, the GPU load drops dramatically.

 

[temp. Conclusion]

In my case, with my hardware, the GPU does NOT seem to be the limiting factor.
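For anyone who wants to check the same thing from a logged run, here is a minimal sketch. It assumes the Afterburner hardware-monitoring log has been exported to CSV and that the columns of interest are called "GPU usage" and "Framerate"; the exact names may differ with your Afterburner version and export settings, so treat them as placeholders.

```python
# Rough GPU-bound vs. CPU-bound check from a monitoring log exported to CSV.
# Assumed column names: "GPU usage" (percent) and "Framerate" (FPS);
# adjust them to match your own export.
import csv

def summarize(path: str, gpu_col: str = "GPU usage", fps_col: str = "Framerate") -> None:
    gpu, fps = [], []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                gpu.append(float(row[gpu_col]))
                fps.append(float(row[fps_col]))
            except (KeyError, ValueError):
                continue  # skip malformed rows
    if not fps:
        print("No usable rows found - check the column names.")
        return
    avg_gpu = sum(gpu) / len(gpu)
    avg_fps = sum(fps) / len(fps)
    print(f"Average GPU usage: {avg_gpu:.0f}%  |  Average FPS: {avg_fps:.1f}")
    # Heuristic: a GPU well below saturation while FPS is low points at a CPU (or engine) limit.
    if avg_gpu < 90:
        print("GPU is not saturated - the limit is likely the CPU or the engine.")
    else:
        print("GPU is close to saturation - a faster GPU or lower settings may help.")

# summarize("benchmark_log.csv")  # hypothetical file name
```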

 

 

benchmark-results.jpg

Deep Forrest (1)_0014_dayview.jpg

Deep Forrest(1)_0024_thermalview.jpg

Deep Forrest(1)_0250_dayview.jpg

Deep Forrest(1)_0259_thermalview.jpg

Edited by Hoover

12 minutes ago, Ssnake said:

How do you determine the "min" framerate?

From the statistics that the screenshot/Alt+F12 frame counter shows?

My way of determining the min rate is from the actual screenshot values, not from those statistics.

I take the Afterburner values.
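Since "min FPS" can mean different things, here is a small sketch of how per-frame times (in milliseconds, e.g. from a frame-time log) could be reduced to average, minimum and 1%-low FPS; the sample numbers are made up purely for illustration.

```python
# Average, minimum and "1% low" FPS from a list of per-frame times in milliseconds.
def fps_stats(frame_times_ms):
    fps = sorted(1000.0 / t for t in frame_times_ms)   # per-frame FPS, ascending
    worst_1pct = fps[:max(1, len(fps) // 100)]         # slowest 1% of frames
    return {
        "avg": sum(fps) / len(fps),
        "min": fps[0],
        "1% low": sum(worst_1pct) / len(worst_1pct),
    }

# Illustrative frame times only: mostly ~20 ms with a few long spikes.
sample = [20.0] * 300 + [25.0] * 50 + [110.0, 140.0, 125.0]
print({k: round(v, 1) for k, v in fps_stats(sample).items()})
```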


On 8/3/2019 at 5:11 PM, Ssnake said:

We are using parallelization in some cases, but not generally/all the time.

Given some comments that have been made about multi-core use, I have been keeping an eye on core utilization.

 

Loading maps definitely utilizes all cores; playing a scenario also increases use across all cores, but leans predominantly on two. The two processes using most of the CPU capacity are SBProPEcm.exe (111 threads) and DecodeProcess.exe (28 threads).

 

That seems to me a reasonable amount of multi-core use.
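If anyone wants to repeat that observation, a minimal sketch with the third-party psutil package (the process name is the one seen in Task Manager above; this is just a quick check, nothing official) would be:

```python
# Thread count and per-core CPU load while Steel Beasts is running.
# Requires the third-party "psutil" package (pip install psutil).
import psutil

TARGET = "SBProPEcm.exe"  # main Steel Beasts process, as observed above

for proc in psutil.process_iter(attrs=["name", "num_threads"]):
    if proc.info["name"] == TARGET:
        print(f"{TARGET}: {proc.info['num_threads']} threads")

# Per-core utilization over a one-second sample: a few heavily loaded cores with the
# rest only lightly used suggests the workload leans on a couple of threads.
per_core = psutil.cpu_percent(interval=1.0, percpu=True)
for i, load in enumerate(per_core):
    print(f"core {i}: {load:.0f}%")
```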


All in all, I'm very happy with the latest update. My PC is somewhat limited with regards to performance, but I am getting 30-40 fps, which is playable. The only real problem area seems to be the zoomed views using the binoculars or gun sight, where I am getting single-digit frames, especially in wooded areas. Just wondering if there are any plans to have a look at this in a future patch? Could it be related to the tree models? Too many polygons? Is it LOD-related, perhaps? I'm not an expert, but it would be nice to see a fix for this.



I can but recommend that you work with the terrain detail sliders, as they also control the distance at which 3D trees get rendered as simple billboards. The deep forest is a test of the "max overdraw" situation where a gazillion triangles all overlap each other, so reducing the number of triangles by way of the LOD balance in the detail sliders should help. At the end of the day, however, the ability of your graphics card to process 3D geometry data is the bottleneck. I suspect that your graphics card is either old, or low end, or both. We can squeeze out what's possible from it, but where there's a hardware limit, only better hardware can solve it. For example, my GTX 980 has no problem whatsoever maintaining a framerate above 60 in the deep forest scene.
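Conceptually (and simplifying heavily; this is an illustrative sketch with made-up distances and names, not our actual renderer code), the detail slider shrinks the range within which a tree is drawn as full 3D geometry rather than a billboard:

```python
# Conceptual sketch of distance-based LOD selection for trees.
# Not Steel Beasts code - thresholds and names are illustrative only.
def tree_lod(distance_m: float, detail_slider: float) -> str:
    """Pick a tree representation; a lower detail slider shrinks the 3D-model range."""
    full_3d_range = 150.0 * detail_slider      # e.g. slider at 1.0 -> full geometry out to 150 m
    billboard_range = 1500.0                   # beyond this the tree is not drawn at all
    if distance_m <= full_3d_range:
        return "full 3D model (many triangles, expensive overdraw)"
    elif distance_m <= billboard_range:
        return "billboard (two triangles with a tree texture)"
    return "culled"

for d in (50, 200, 800, 2000):
    print(d, "->", tree_lod(d, detail_slider=0.75))
```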


1 hour ago, Ssnake said:

At the end of the day, however, the ability of your graphics card to process 3D geometry data is the bottleneck.

If it's geometry, why such a big difference between the thermal and day sights? In my case, 9 FPS in the day sight and 20-21 in thermal, scanning a treeline.

Edited by Raven434th

That's not all: the free camera's thermal view is full resolution and gets better FPS too. I think it's mainly that the thermal view doesn't use a lot of shaders, so while there is still a lot of overdraw on the trees, for example, they're just a plain texture without bump and specular maps. (I have to wonder why we need bump- and specular-mapped leaves, grass, etc., but I suppose that's another discussion.)

