No Access to Full CPU Usage!

  • In C2 debug mode it says 100%, or even 114%, CPU usage, while Task Manager shows only 13% CPU usage.

    I have an 8-core Intel i7; it's pathetic that I can't use all the CPU available.

  • Well, 1/8th is 12.5%, which is 13% when rounded. So one core is being maxed out. Pathfinding uses Web Workers, so do a lot of pathfinding to engage the other cores.

  • To only use one core today is terrible; it means that on a 4-core mobile device you can only use 25% of its CPU power. Mobile devices don't have much CPU power to start with, so being limited like this is insane.

    Let's look to the future: we'll see mobile devices with 8 and 12 cores, which have already been reported in the tech news, yet the GHz per core is not really increasing because of heat and power limitations. That means for the next five years our Construct 2 games cannot be any more complex than they are now; they cannot be any better, because we are currently limited to single-core processing - PATHETIC!

    All the competitors will roll over Scirra if they do not fix this! I play a Unity 3D game on my PC and it uses all 8 cores of my CPU.

  • What is needed to use all those cores is what's called 'parallelism'. Roughly, that means a program can split its operations into multiple parts which only need to 'sync' with each other periodically. At the moment, JavaScript can only do that under special circumstances (Web Workers).

    However, there is work being done to bring parallel support to modern JavaScript engines, which will eventually be able to optimize incoming code automatically, i.e. the code won't need to explicitly request those optimizations.

    tl;dr: In a few years, when 8- to 12-core processors become more common, especially in mobiles, we'll start seeing support for more than one core.
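
    To make the Web Worker "special circumstance" concrete, here is a minimal sketch of offloading a CPU-heavy job to another core. The worker body, the Blob-URL setup and the prime-counting job are illustrative stand-ins, not anything Construct 2 does internally.

    ```ts
    // Minimal sketch: run a heavy computation on another core via a Web Worker.
    // The worker body is inlined through a Blob URL so the example is self-contained;
    // in a real project it would normally live in its own file.
    const workerSource = `
      self.onmessage = (e) => {
        // Stand-in for any CPU-heavy job (pathfinding, terrain generation, ...):
        // count primes below the requested limit by trial division.
        const limit = e.data;
        let count = 0;
        for (let n = 2; n < limit; n++) {
          let isPrime = true;
          for (let d = 2; d * d <= n; d++) {
            if (n % d === 0) { isPrime = false; break; }
          }
          if (isPrime) count++;
        }
        self.postMessage(count);
      };
    `;
    const blob = new Blob([workerSource], { type: "text/javascript" });
    const worker = new Worker(URL.createObjectURL(blob));

    // The main thread (and therefore the game loop) keeps running while the
    // worker grinds away on another core; only the final result comes back.
    worker.onmessage = (e: MessageEvent<number>) => {
      console.log("primes below 2,000,000:", e.data);
    };
    worker.postMessage(2_000_000);
    ```

    The catch is that the worker only sees a copy of (or a transferred) message, not the live game state, which is why this suits self-contained jobs rather than general game logic.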

  • Thanks Tiam, that's a great answer. It's still a shame that JavaScript has been so slow on the uptake, though; multi-core processors have been around for a very long time now. We didn't go down the track of single-core 5 GHz processors because of heat and power issues.

  • It is very difficult to parallelise game engines. It is not at all a simple matter of "divide the work over N cores", and this is not limited to JavaScript either - it is similarly difficult to parallelise native engines, for the same reason: game engine logic is highly sequential. Take the event sheet, which has to be processed in top-to-bottom order for predictability when defining your game's logic (so you know what happens in what order). Any events referring to objects or variables which were used in any way in prior events simply must run sequentially (i.e. after the previous events have finished running) in order to work correctly. Therefore, that work cannot be split off onto another core - or if it were, the other cores would have to wait for the work to be done before continuing, which is no faster (and probably actually slower) than just running on one core.
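
    As a toy illustration of that ordering constraint (the variable and "event" names here are invented, not Construct's own), the second step reads state the first one wrote, so it cannot usefully start on another core:

    ```ts
    // Toy sketch of event-sheet-style sequential logic (names are invented).
    let score = 0;
    let bonusAwarded = false;

    // "Event 1": modify some shared state.
    function event1(): void {
      score += 10;
    }

    // "Event 2": its condition depends on what event 1 did, so it must not
    // run until event 1 has finished.
    function event2(): void {
      if (score >= 10) {
        bonusAwarded = true;
      }
    }

    // Running event2 on another core before event1 completes would read a stale
    // score; making the other core wait is no faster than staying on one core.
    event1();
    event2();
    console.log(score, bonusAwarded); // 10 true
    ```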

    Further, parallelism comes with a synchronisation overhead. Every time work is sent off to another core, there is a performance cost: sending the work to the other core, waiting for that core to context switch to the thread, probable cache misses while it "warms up" on the new work, and then the same context switch and sending overhead to get the results back. As a result it's actually slower to send work to another core if it's a small amount of work - the overhead of arranging the off-core work will eclipse any benefit. For example, if you have 100 instances running a "Set X to 0" action (which is very quick), trying to split that work over 4 cores running 25 instances each is likely far slower than just running it on the same thread. So not only is it difficult to parallelise the whole event sheet, it's difficult to parallelise individual events as well. For other engines, replace "events" with "logic", and it's similarly challenging for them to get useful performance gains on multi-core systems.
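
    A rough way to see that overhead in the browser (illustrative only - the object shape and timing harness here are made up, and this is not a benchmark of Construct itself): setting 100 x values inline is a trivial loop, while even one worker round trip pays for message copying and scheduling.

    ```ts
    // Rough sketch: tiny work done inline vs. round-tripped through a Web Worker.
    // Timings are illustrative only; the point is the messaging/scheduling overhead.
    const instances = Array.from({ length: 100 }, () => ({ x: Math.random() * 640 }));

    // Inline: trivially cheap.
    const t0 = performance.now();
    for (const inst of instances) inst.x = 0;
    console.log("inline:", (performance.now() - t0).toFixed(3), "ms");

    // Worker: the "same" work, but paying for a structured clone on the way out,
    // a context switch to the worker thread, and another copy on the way back.
    const workerSource = `
      self.onmessage = (e) => {
        const copies = e.data;            // structured clone of the instances
        for (const inst of copies) inst.x = 0;
        self.postMessage(copies);         // copied again on the way back
      };
    `;
    const blob = new Blob([workerSource], { type: "text/javascript" });
    const worker = new Worker(URL.createObjectURL(blob));
    const t1 = performance.now();
    worker.onmessage = () => {
      console.log("via worker:", (performance.now() - t1).toFixed(3), "ms");
    };
    worker.postMessage(instances);
    ```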

    That's not to say there isn't a lot of parallelism going on - here's a list of things which modern browser engines run in parallel:

    • audio processing
    • network requests
    • image/video decoding
    • input (e.g. mouse/keyboard/gamepad input)
    • draw calls (e.g. Chrome bundles up all WebGL calls and runs them on a separate thread)
    • compositing (browser-level rendering of elements)
    • the GPU itself is a large parallel processor running in parallel to the CPU
    • pathfinding is CPU-intensive enough to run on a web worker on another core and benefit performance (this is actually a very nice feature since intense pathfinding does not impact the game framerate)

    Browser developers are well aware of the need to split as much work as possible over different cores to achieve maximum performance, so work is continuing to add more parallel features. We're watching this carefully and will add support where practical.

  • Interesting read, Ashley - it made me wonder: would it be practical to have, for example, eye-candy layers, within which objects have no impact on anything else - no collisions, no bounding boxes, no physics, limited or no behaviours - so the system just knows it needs to draw this layer? For stuff like particles, perhaps the positions of the objects there could be calculated in parallel, and then it all gets rendered when we get to that layer in the Z order.

  • Somebody

    To a lesser extent you could do this. Threading would be of good use for procedural terrain generation, or for background rendering to its own canvas context and then sending it over at, say, 30fps. There are uses, but as you point out, none of them are on the logic layer.

    Might be an interesting test at some point.
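
    A rough sketch of that kind of off-thread background rendering using the OffscreenCanvas API (browser support varies, and the drawing here is just a placeholder, not actual terrain generation):

    ```ts
    // Sketch: hand a canvas to a Web Worker so background drawing happens off the main thread.
    // Requires OffscreenCanvas support; the fillRect below is only a placeholder.
    const canvas = document.querySelector<HTMLCanvasElement>("canvas")!;
    const offscreen = canvas.transferControlToOffscreen();

    const workerSource = `
      self.onmessage = (e) => {
        const surface = e.data.canvas;
        const ctx = surface.getContext("2d");
        // Redraw roughly 30 times per second, independently of the main game loop.
        setInterval(() => {
          ctx.fillStyle = "#223";
          ctx.fillRect(0, 0, surface.width, surface.height);
          // ...procedural terrain / background drawing would go here...
        }, 1000 / 30);
      };
    `;
    const blob = new Blob([workerSource], { type: "text/javascript" });
    const worker = new Worker(URL.createObjectURL(blob));

    // The canvas is transferred (not copied), so the worker owns it from here on.
    worker.postMessage({ canvas: offscreen }, [offscreen]);
    ```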

  • Thank you Ashley and everyone else, great info to ponder on.
