CPU Utilization...?

  • I have a question regarding cpu utilization. I realize that this might be better directed right at Google, and they will be my next port of call - but before I (as one man with limited technical experience in this field) ask a multi-billion dollar industry leader to consider making changes to their browser, I need to check my understanding of what goes on under a browser's hood...

    When I monitor my in-game CPU utilization using events, I find there are occasional spikes (on my laptop) greater than 50%, and these correlate with lots of stuff happening in the JavaScript Console / Timeline - most of them are my own fault, so I have been working hard to optimize my events and layouts. Spikes above 75% tend to be accompanied by drops in framerate that are not GPU related - data manipulation in an array is my suspect here because, when I turn those events off, cpuutilisation drops markedly. I can't do without the array data crunching (well, I would rather not...), so I need to work around this limitation. One thing struck me as rather disappointing: with a cpuutilisation value of 75%+, Chrome's use of my laptop's CPU was just 1.2%, and that didn't noticeably change when my game became more demanding and started to jank. My game was using 75% of 1.2%!

    Has anyone else noticed this / does anyone care / have any suggestions? My thoughts are that Chrome should be capable of simply demanding more CPU access when running a demanding game, so games like mine can run all of their data processing unimpeded. Perhaps a game maker could raise a flag to permit this to happen. Am I being naive - is Chrome's access to processor time controlled by Windows, or can it demand more if it needs it?

  • Just to expand on the above. I have been testing on a reasonably powerful laptop (i7 @ 3.4 GHz, GTX 870 with 6 GB / Intel HD4600); all of my previous testing was done using the HD4600 (I want the game to run well on average hardware if possible). However, after changing to run the game with NVidia card support, in both NW and Chrome the game does not suffer any CPU spikes or slow-downs... which implies that the browser is able to offload some of its work to the more powerful GPU, enabling it to handle the array functions without a hitch. Not what I expected, as other indications are that my Intel card is supported by Chrome. So the lesson is to tell game-playing customers that they need to own high-powered hardware, I guess... :/

  • That's really interesting, Colludium. I am not really sure about this, but aren't those built-in Intel HD graphics supposed to use the Intel processor for their workload as well? I remember when the first Sandy Bridge Intel processors came out, many people were explaining how the built-in graphics increased performance by actually pushing the workload onto the Intel processor. I am not too sure about the newer built-in graphics, though.

  • rekjl, I'm not sure what to make of it really - the point that causes me consternation is that the browser demands an almost negligible amount of CPU time - even when it's in focus and the user is doing nothing else - but it allows the game to become maxed out with jank etc because the allocated CPU time for the JavaScript is not enough... Now, this might just mean that things cannot be done faster on one thread in JavaScript, which doesn't bode well for the future of HTML5. If that's not it, I just don't understand the philosophy behind the design and wondered if anyone could shed light on it, or had thoughts on whether it's worth a discussion with Google. By my rough maths, using the Intel HD4600, my game is allowed to jank in Chrome because of CPU demand even though the computer has 98.8% CPU available (I know, probably not accurate even though it's from the Task Manager, but you get the idea).

  • That's definitely frustrating! To know that your game should be able to run much faster, but because of certain programming "restrictions" in place it is not able to take full advantage of the hardware. One thing that does look bad is that, if Google were serious about improving HTML5, this issue would have been resolved quite some time ago. Perhaps they really aren't that interested in going that way. If that's the case, and with Chrome being the benchmark of browsers, our hopes might really rest on Microsoft Edge. It's still some way off from being stable or the "it" browser, but maybe this is a good thing for us. Since they need to play catch-up, they will have to look at areas that give them an advantage over Chrome, and hopefully this is one aspect they might concentrate on. Not sure if there's anything Intel is able to do about this on their part.

    But that's the problem nowadays with any technology that is not direct or native (not criticizing Scirra here), because everything else has to depend on so many other groups in order to function efficiently. It is worth a shot trying to bring this to Google's attention; not sure if they will take it seriously, but at this point it is the only thing we can do besides waiting.

  • I noticed that Chrome performs very poorly compared to IE when I test my game. My game in IE uses somewhere between 10-15% CPU at all times. In Chrome, my game uses 25% CPU, and even higher just by turning on debug mode. I've been trying to optimise events but I think I've taken it as far as I can. It's kind of funny... when I test my game in the Windows Phone browser it performs better than Chrome on my desktop... :p

  • The stutter and fps drops you are seeing with the HD4600 are a known issue in complex games or ones that use a lot of effects, in particular WebGL shaders. Intel's OpenGL performance is atrocious and their drivers for it are even worse. They seem to only care about DX and not OpenGL for gaming.

    I posted something regarding CPU usage here:

    Essentially, C2 is single-threaded for the most part; all that logic uses 1 out of 8 threads on your i7 CPU. It can max out that one thread, and when it shows 99% CPU use the game will bog down and frames will be dropped, regardless of your GPU.

    So be very careful when you design a bigger, complex game: ensure that the reported CPU use stays less than 50% on your i7, as it's a very strong CPU with high single-thread performance (at 60 fps that's roughly 8 ms of logic out of the ~16.7 ms frame budget). Someone on an older CPU, or an AMD CPU with weaker single-thread performance, will suffer greatly if you are designing your game to go beyond 50% of an i7's single-thread capability.

    Also if you want gamers to play it on Intel HD graphics, don't use WebGL shaders. Maybe do a "Low/High FX" option, low for people on Intel iGPU and High for AMD/NV GPUs.
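
    As a rough sketch of how such a default could be picked in plain JavaScript (the Intel check is only a heuristic I'm assuming here, and browsers may hide or genericise the renderer string, so treat it as a hint only):

        // Sketch: guess a default FX quality by asking WebGL which renderer is in use.
        function guessDefaultFxQuality() {
            var canvas = document.createElement("canvas");
            var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
            if (!gl)
                return "low";    // no WebGL at all: play it safe

            var ext = gl.getExtension("WEBGL_debug_renderer_info");
            var renderer = ext ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) : "";

            // Intel iGPUs (HD 4600 etc.) usually report a string containing "Intel";
            // default those to the Low FX path, everything else to High.
            return /intel/i.test(renderer) ? "low" : "high";
        }

        var fxQuality = guessDefaultFxQuality();    // still let the player override this in an options menu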

  • ... when I test my game in the Windows Phone browser it performs better than Chrome on my desktop... :p

    This is so disappointing, yet somehow hilarious...! HTML5 is such a good idea but never properly embraced. I hope that W10 & Edge & universal apps perform as well as IE11 does now...

    Essentially, C2 is single-threaded for the most part; all that logic uses 1 out of 8 threads on your i7 CPU. It can max out that one thread, and when it shows 99% CPU use the game will bog down and frames will be dropped, regardless of your GPU...

    Also if you want gamers to play it on Intel HD graphics, don't use WebGL shaders....

    This and your following advice are gold. I hadn't really noted the implications of this limitation before. Luckily I think I'm going to get away without using shaders - only lots of blend modes - but the 50% max is good advice.

    So, let's hope W10 provides what it promises, or it'll be a long wait for hardware to improve enough for Chrome to really become viable...

  • Your numbers are very likely wrong. Chrome will happily use up 100% of the CPU if you give it that much work. However, measuring CPU time is actually pretty complicated. C2's 'cpuutilisation' expression is really "time spent in main thread", literally based on timer measurements (a rough sketch of that kind of measurement follows the list below). There isn't a direct way to get CPU usage in JavaScript, so this is used as an estimate, but it has some drawbacks:

    • time spent in the main thread is not necessarily the CPU usage, e.g. if the OS suspends the thread and the CPU goes idle for a while, then resumes the thread, all that time will be included in cpuutilisation, making it look like the CPU usage is higher than it really is
    • the timers only cover the main event loop, and some things like input triggers are not covered
    • the timers only cover the main thread - anything the browser dispatches to another thread is not covered. For example Chrome basically saves all draw calls then forwards them to another thread to run in parallel, whereas some other browsers run the draw calls on the main thread. This means cpuutilisation excludes the draw calls work in Chrome because they happen on a different thread, but includes the draw calls work in other browsers which do run on the main thread. In actual fact the CPU work done may be identical, but this causes different measurements. Similarly there are other features which may or may not be included in cpuutilisation depending on the threading model of the browser.
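
    As a very rough sketch of what a timer-based estimate like that can look like in plain JavaScript (just an illustration of the general idea, not C2's actual code; runGameLogic is a made-up stand-in for the per-tick work):

        // Minimal sketch of a timer-based "main thread busy" estimate.
        // It only sees time spent inside this callback on the main thread:
        // a suspended thread still inflates the number, and work the browser
        // does on other threads never shows up at all.
        var busyTime = 0;
        var lastReport = performance.now();

        function runGameLogic() {
            // Stand-in for the per-tick work being measured (events, array crunching, ...)
        }

        function tick() {
            var start = performance.now();
            runGameLogic();
            busyTime += performance.now() - start;   // wall-clock time, not true CPU time

            var now = performance.now();
            if (now - lastReport >= 1000) {
                // Fraction of the last second the main thread spent "busy".
                var estimate = busyTime / (now - lastReport);
                console.log("estimated main-thread use: " + Math.round(estimate * 100) + "%");
                busyTime = 0;
                lastReport = now;
            }

            requestAnimationFrame(tick);
        }

        requestAnimationFrame(tick);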

    Further, if you look at the per-process CPU usage in Task Manager, there are more gotchas:

    • Chrome is a multi-process browser, so you need to take into account the sum of CPU usage across all of Chrome's processes
    • Windows tends to merge all cores into one percentage reading, so e.g. 25% CPU usage on a four-core system may indicate one core at full usage, two cores at 50% usage, four cores at 25% usage, or something else. It can be hard to tell.
    • Since C2's cpuutilisation is only measuring one thread, it could read 100% when Task Manager indicates 25% (assuming a quad-core system again). Both are correct: the main thread is entirely busy, but only one core out of four is fully utilised.

    If you are not aware of these various complexities, you will simply confuse yourself by making misleading measurements. There are reasons that C2 could measure way higher than Task Manager, or way lower. Your impression that Chrome is not able to use the full system resources is almost certainly incorrect and probably just a result of these complications.

    My advice: just look at the measurements in C2's own profiler. That should give you a reasonable idea of which areas of your events are slow. And only treat the specific numbers as a ballpark, as if they can only read "low", "medium" or "high".

    If you're wondering why C2 doesn't "just" run everything over all cores, then see my blog post Why do events only run on one core?

  • Has anyone else noticed this / does anyone care / have any suggestions? My thoughts are that Chrome should be capable of simply demanding more CPU access when running a demanding game, so games like mine can run all of their data processing unimpeded. Perhaps a game maker could raise a flag to permit this to happen. Am I being naive - is Chrome's access to processor time controlled by Windows, or can it demand more if it needs it?

    You have to be very careful how you test your game when it comes to CPU and FPS, and what conclusions you draw from the results.

    Using the debugger to validate CPU and FPS is extremely unreliable, and depending on how you measure it you get different results that you can't really use for much, I think.

    Here is an example:

    All tests are made from the same place in the game.

    1. The first image shows the stats that I get using the debugger in NW.js when the System tab is selected.

    2. The second image shows how much this information changes simply by switching from the System tab to another tab.

    3. The third is without the debugger, tracking the data in-game, and it fits well with the second test using the debugger.

    So when testing performance you have to take into account which tab you have selected in the debugger. And at least in my opinion, doing any testing in the System tab is completely useless, because you can't really use any of the data you get there for anything; it doesn't reflect your game's performance, but the game's and the debugger's performance combined.

  • Your numbers are very likely wrong... Chrome will happily use up 100% of the CPU if you give it that much work.

    My numbers were wrong because I was reading from the App Running section of the Task Manager; it was late and I was tired. Chrome was actually demanding nearly 14% of my CPU when the in-game main-thread time was indicated as 60% cpuutilisation. Since making the first post I have already reduced the amount of array work being done, at little visual cost to the game, but there is still room for improvement.

    Some good notes in the blog - I will re-read (and try to digest) it! Just to prove that I'm not going bonkers, here's a screenshot from my game (not the demo - the array work happens on a later level and the CPU use in the demo is low). This shows two modes for each of the two graphics processors in use - normal and maxed out. As you can see, Chrome's CPU use sits at approximately 15%; as Ashley says, there's no easy way to determine how much of which core is being allocated to the game, but it might be reasonable to assume that, with 4 cores and 8 logical processors, one of those logical processors is being fully utilized (one fully busy logical processor out of 8 would be 100/8 ≈ 12.5%, which is in the right ballpark for the ~15% reading). I never expected a little array work to cause so much pain! As you can see, the idle and maxed-out values are better when not using Intel integrated graphics...

    Thanks everyone for the responses and thoughts - some good refreshing of stuff I should have already known, and a lesson about Intel integrated graphics...

  • nimos100, very true, and I've asked for the debugger to start on anything other than the System page, by selecting an object or having a setting, because it bogs down anything you're testing.

    Here's a good test to compare how fast our CPUs are: it takes my CPU 7% to run an empty project... (hmm, where should I optimize?)

  • I've noticed this as well, Colludium, but it's not an accurate measurement.

    As Ashley stated, the CPU utilization measurement is not all that accurate, so your CPU % numbers are most likely off as well. Knowing this shouldn't leave you scratching your head, though. Granted, the debugger adds to the CPU usage and can bog your game down in the Inspect tab (All of the game) - but knowing that it adds overhead, it still provides some useful info on a per-event basis. I typically use the Profile tab to see which events are using the most CPU. After I see something that is using a bunch of CPU (accurate %'s or not), I will dig deeper, single out those events, and monitor the Watch tab to see what they are doing so that I can optimize.

    I'm not looking for rock-solid performance in the debugger (though it would be sweet!) - ultimately no one will be playing your game with the debugger running. Knowing that it adds to the CPU usage makes it pretty easy to figure out where the real issues and spikes are. The Profile tab is the key, and then singling out specific events via the Watch tab.

    Out of curiosity (and a lack of knowledge about multi-threading CPU stuff), Ashley, what is the likelihood of multi-threading support in the future or with C3? Or is this effectively an HTML5 limitation for the moment?

  • The Inspect tab bogs down the system... so don't start there; starting at the Profile tab would be better. But hey, I don't have a lot of problems with it - I wonder, you could probably change which tab opens first in the JS code.

  • The inspector is ace for everything but performance measurement - although the profile tab is awesome for identifying dirty events. That's why I always use a debug text object with a huge list of set-text options that I toggle enabled when required.

    If cycling through an array hurts at 60 Hz, how will the owner of a 144 Hz monitor fare? I don't have the luxury... Might see if my TV is 100 Hz when I get home.
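
    One possible work-around, sketched here in plain JavaScript rather than C2 events (doArrayPass, updateAndRender and the 100 ms interval are all made up for illustration), is to run the heavy array pass on its own fixed timer instead of every tick, so a faster monitor doesn't make it run proportionally more often:

        // Sketch: decouple heavy array crunching from the display refresh rate.
        var ARRAY_PASS_INTERVAL = 100;               // ms between heavy passes (arbitrary)
        var nextArrayPass = performance.now();

        function doArrayPass() {
            // Stand-in for the expensive array crunching.
        }

        function updateAndRender() {
            // Stand-in for the cheap per-frame work.
        }

        function tick() {
            var now = performance.now();

            if (now >= nextArrayPass) {
                doArrayPass();
                nextArrayPass = now + ARRAY_PASS_INTERVAL;
            }

            updateAndRender();

            requestAnimationFrame(tick);             // fires at 60 Hz, 144 Hz, whatever the display runs at
        }

        requestAnimationFrame(tick);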
