Is C2/C3 good for large 2D desktop games?

Discussion and feedback on Construct 2

Post » Tue May 16, 2017 6:16 am

Ashley wrote: it's the graphics drivers, which are native technology. As I repeatedly have to say to people who don't seem to believe this, graphics drivers are widely accepted to be terrible and can even completely ruin game launches. Here are two links to back this up:
Batman: Arkham Knight had such poor PC performance it virtually ruined the launch - it ran fine on console, where the drivers are good quality and predictable
"How to delay your indie game" specifically calls out struggling with GPU drivers: "Despite claiming to, their drivers just don’t properly implement the OpenGL specifications... Then a new OSX update will hit and everything will change again, yet at the same time I will need to still cater for the previous releases and older machines... it’s unbearable... Would I support Mac again? No."



I always laugh at this argument. "Graphics card drivers are bad!" So you decided to use possibly the most unsupported and experimental platform out there (other than Vulkan, which already seems to perform better), i.e. WebGL?

That still runs on graphics card drivers! You're still facing the same issues, just hiding behind "not my fault now!" when it's Chrome or Node-Webkit/NodeJS that has blacklisted the card instead.

The simple bottom-line is:

1.) Most desktop graphics cards today, integrated and dedicated, are optimized specifically for DirectX (at least up to 9.0c) and have varying degrees of OpenGL support

2.) The same graphics card issues that occur in a native engine will still occur in HTML5 + WebGL, because you're still using the GPU to render graphics, unless you're blacklisted and falling back to software rendering, I guess?

3.) Aside from a few neat tricks like asm.js for converting native C++ to JavaScript, JavaScript will always be slower than mostly-native code by some amount. Sometimes by a very large amount, as we have noticed with collision and physics.

3b (bonus).) If you're going to use asm.js someday anyway, why not also export actual C++ code that could be used in another game engine, one that supports many more platforms and actually lets people make viable, sellable commercial desktop and console games? It could be charged per export and still make plenty of cash for Scirra!

To the people saying "you're just bad at events": if the engine is designed that way, and I optimize my events as best I can while keeping the functionality I need, and it's still slow, yet my fairly average C# in Unity runs blazingly fast with better compatibility on Steam than our previous Construct 2 title, is that not proof enough? It wasn't even a GPU limitation, because we added MORE effects to our game!

It's nice when that last argument is made by people who don't have titles selling commercially on Steam, who don't see the issues that occur and don't have to hear "give me your source code and/or report it to Chrome or NodeJS!" every time they hit an issue in the engine they purchased to make commercial games.

Personally, I'd rather pay $100/year for Scirra to make a commercially sold, large, pixel-perfect 2D retro platformer in their own engine and then release and troubleshoot it on Steam than invest further in HTML5 with C3. And it sucks, because I really like the way Construct does game making; it still has an editor I love very much, which is what keeps me fairly interested in what Scirra does next. I just need C2/C3's exports to play a lot better, and preferably sooner than "the future" where PCs drop in price to the point where everyone runs basically an Intel i7 CPU + NVIDIA GTX 960+ GPU and the problems "suddenly disappear" magically.

@Triforce The arguments made above have been my experience, and the experience I have seen from other prominent Construct 2 games on Steam. You can listen to the people who haven't made large games if you want, but we have all roughly come to the same conclusion: C2 is great for hobbyists, educators, and personal or small games. But when you need to go bigger, make some prototypes and test a LOT before deciding to stick with it.

Post » Tue May 16, 2017 11:10 am

ome6a1717 wrote:Also if it is fillrate-bound, how is that combated? I assume that's what it is since if I minimize to the standard resolution (640 x 360) I get a constant 60 fps, but that's a bit unrealistic to play at, and even using low settings it doesn't change anything.

If using a lower display resolution makes it run faster, you've basically proven it's GPU fillrate, exactly as I suspected. I can take a look at the .capx if you email it to me ([email protected]), but I'm fairly sure that's it, just based on what you've said.

Force-own-texture layers (which includes any with an effect, blend mode or opacity other than 100%) add a lot to fill rate, as do any large sprites or tiled backgrounds. You have to reduce the number of those you use.
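As a rough illustration of why that matters, here is a back-of-envelope sketch (my own hypothetical numbers and helper, not anything the engine actually computes): fill rate cost scales with how many times each screen pixel gets touched per frame, so every full-screen layer, force-own-texture pass, or large overlapping sprite adds another full pass.

```javascript
// Hypothetical fill rate estimate: pixels written per frame grows linearly
// with the number of full-screen passes (layers, effects, big sprites).
function pixelsFilledPerFrame(width, height, fullScreenPasses) {
  return width * height * fullScreenPasses;
}

// A 1920x1080 display with 6 full-screen passes:
const perFrame = pixelsFilledPerFrame(1920, 1080, 6);   // 12,441,600 pixels
const perSecond = perFrame * 60;                        // ~746 million pixels/s at 60fps

// The same scene at 640x360 touches 9x fewer pixels, which is why dropping
// the display resolution restores 60fps on a fillrate-bound game.
const ratio =
  pixelsFilledPerFrame(1920, 1080, 6) / pixelsFilledPerFrame(640, 360, 6); // 9
```

The numbers are illustrative only, but they show why cutting one full-screen effect layer can matter more than optimizing thousands of events.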

Jayjay wrote:I always laugh at this argument. "Graphics card drivers are bad!"

Ask any low-level graphics programmer how they would characterise drivers. Watch them bang their head against the nearest desk/wall while wailing. This is an industry recognised issue, and I backed it up with references in my earlier post. What makes you think this is not actually the case? Do you think those references I provided are wrong? If so, why?

so you decided to use possibly the most unsupported and experimental platform

This is ridiculous. WebGL is a robust, reliable and widely-deployed technology, which actually to a large extent sanitises OpenGL support by closing off some gotchas, defining previously undefined behavior, and working around driver bugs where feasible. To illustrate the point, on my high-end dev machine if I go to chrome://gpu, Chrome tells me all the driver workarounds it's applied, which are the following:

Some drivers are unable to reset the D3D device in the GPU process sandbox
Applied Workarounds: exit_on_context_lost
TexSubImage is faster for full uploads on ANGLE
Applied Workarounds: texsubimage_faster_than_teximage
Clear uniforms before first program use on all platforms: 124764, 349137
Applied Workarounds: clear_uniforms_before_first_program_use
Always rewrite vec/mat constructors to be consistent: 398694
Applied Workarounds: scalarize_vec_and_mat_constructor_args
ANGLE crash on glReadPixels from incomplete cube map texture: 518889
Applied Workarounds: force_cube_complete
Framebuffer discarding can hurt performance on non-tilers: 570897
Applied Workarounds: disable_discard_framebuffer
Limited enabling of Chromium GL_INTEL_framebuffer_CMAA: 535198
Applied Workarounds: disable_framebuffer_cmaa
Zero-copy NV12 video displays incorrect colors on NVIDIA drivers.: 635319
Applied Workarounds: disable_dxgi_zero_copy_video
Disable KHR_blend_equation_advanced until cc shaders are updated: 661715
Decode and Encode before generateMipmap for srgb format textures on Windows: 634519
Applied Workarounds: decode_encode_srgb_for_generatemipmap
Native GpuMemoryBuffers have been disabled, either via about:flags or command line.
Disabled Features: native_gpu_memory_buffers


That's a list of bugs - and performance issues! - you will run into and need to work around yourself if you don't use WebGL, for just one modern and up-to-date system. Think about extending that across the entire ecosystem of different OSs and hardware. So actually WebGL is a more robust graphics technology. Browser vendors have done a huge amount of workarounds already to maximise reliability and performance.

Given that, on what basis do you believe this is an experimental technology?

That still runs on graphics card drivers!

I'm not sure what your point is, because literally every other GPU-accelerated technology also depends on drivers.

Personally, I'd rather pay $100/year for Scirra to make a commercially sold large 2D pixel-perfect retro platformer in their own engine and then release and troubleshoot it on Steam

We have effectively already done this with the Construct 2 editor itself, which has in the past faced severe graphics driver problems (e.g. crashing on startup). To this day there is still the odd crash-on-startup issue due to graphics drivers. It is incredibly frustrating and difficult and costs us sales. This is literally one of our motivations to use web technology. We've done the native side and seen how excruciating that can be. WebGL is better! I'm not sure what you expect us to do about it, either - do you think I can just phone up AMD and tell them to fix a wide range of bugs and then retroactively install that on millions of PCs worldwide? We could change technologies, but that could easily make it worse, as I've shown above. What are you expecting us to do in response to these issues?
Scirra Founder

Post » Tue May 16, 2017 11:20 am

ome6a1717 wrote:
@BackendFreak - Honestly I would LOVE for it to be my bad programming, but if I disable all of my code the fps doesn't really change much. Unless, however, disabling is different from actually deleting it.


If you still have performance issues after disabling all of your code, then I assume you have a lot of (probably high-res?) sprites with motion behaviors? I once saw a project with many high-res grass objects combining sine behaviors, effects, and 60-frame animations. That was killing the project, since both behaviors and 60fps animations take a lot of CPU.

There are really a lot of things that can affect performance. My advice is to disable/remove features and elements one by one and check what the issue is. Once you've tracked it down, think about how you can optimize it.

C2 is surely not a performance beast, which is why it's difficult to make a big project, and why it's important to code/build it carefully.

Post » Tue May 16, 2017 11:52 am

I've been following this topic for a while, and from what I understand and have read, a WebGL application on Windows can beat a native desktop OSX application, because the OSX OpenGL driver sucks.

On Windows a native desktop application can easily have 10x more draw call throughput than a WebGL app running on the same machine, BUT that's not because of slow JS performance; it's because of WebGL overhead.

So what people do to combat this is reduce the number of WebGL draw calls.

Next I tested drawing with ANGLE_instanced_arrays: object positions are computed on the CPU, written to a (double-buffered) dynamic vertex buffer, and then rendered with a single draw call. In Chrome on Windows with NVIDIA I can get 450k instances before the performance drops below 60fps (so 450k particle position updates per frame in JS, and no sweat!). Performance in a native app isn't better here; my suspicion is that the vertex buffer update is the limiter (500k instances means 8 MB of dynamic vertex data shuffled to the GPU each frame). On my OSX MBP I can go up to about 180k instances (again very likely vertex-throughput limited). However, in this case the way the dynamic vertex buffer works is also important: it looks like vertex buffer orphaning is useless in WebGL (see discussion here: https://groups.google.com/forum/#!topic ... MNXSNRAg8M), so I switched to double-buffering.
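The double-buffered dynamic instance buffer described in that quote can be sketched roughly like this. This is my own sketch, not the quoted author's code: the `InstanceStream` name and the 4-floats-per-instance layout are assumptions, and the actual GL calls are shown only as comments since they need a live WebGL context.

```javascript
// Two CPU-side arrays; in a real renderer each would map to its own GL buffer,
// so the GPU can still be reading frame N while the CPU writes frame N+1.
const FLOATS_PER_INSTANCE = 4; // e.g. x, y, rotation, scale (assumed layout)

class InstanceStream {
  constructor(maxInstances) {
    this.buffers = [
      new Float32Array(maxInstances * FLOATS_PER_INSTANCE),
      new Float32Array(maxInstances * FLOATS_PER_INSTANCE),
    ];
    this.current = 0;
  }

  // Write this frame's instance data into the current buffer, then flip.
  update(instances) {
    const data = this.buffers[this.current];
    for (let i = 0; i < instances.length; i++) {
      const base = i * FLOATS_PER_INSTANCE;
      data[base] = instances[i].x;
      data[base + 1] = instances[i].y;
      data[base + 2] = instances[i].rot || 0;
      data[base + 3] = instances[i].scale || 1;
    }
    // gl.bindBuffer(gl.ARRAY_BUFFER, glBuffers[this.current]);
    // gl.bufferSubData(gl.ARRAY_BUFFER, 0, data);
    // ext.drawArraysInstancedANGLE(gl.TRIANGLES, 0, 6, instances.length);
    this.current = 1 - this.current; // swap for next frame
    return data.byteLength;         // bytes uploaded per full-buffer frame
  }
}

// 500k instances * 4 floats * 4 bytes = 8 MB per frame, matching the
// bandwidth figure in the quoted post.
const stream = new InstanceStream(500000);
```

The key point is that one `drawArraysInstancedANGLE` call replaces hundreds of thousands of individual draws; the per-frame cost becomes the buffer upload, which is why the quote suspects vertex throughput as the limit.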


I don't know how well C2/C3 handles this, but it might just be the case that it's the draw call overhead, and that's why many people experience it as slow?

Edit: I'm pretty certain the only performance issue with C2/C3 is overhead, nothing else...
The C3 example "Quad issue performance test" should probably be able to do double the amount of sprites without breaking a sweat if draw calls were reduced.
Last edited by tunepunk on Tue May 16, 2017 12:10 pm, edited 1 time in total.
Follow my progress on Twitter
or in this thread Archer Devlog

Post » Tue May 16, 2017 11:57 am

@Ashley - Thanks. I've emailed you a dropbox link for my capx file. You're likely right that it's a fillrate issue; however, my layout size is 4500 x 2000 (resolution is 640 x 360). I'm using 4 layers that each have a tilemap on them (3 are the same file, 1 is a collision tilemap), and I have 1 layer set to force own texture with maybe 20-30 destination-out sprites on it. I'm also using maybe 10-20 sprites with the additive blend mode.

If those are truly what's causing the problem, then to ME that literally means using tilemaps or any blend modes at all should be prohibited for anything that isn't a flappy-bird or mario-type game.

Ironically enough, when I deleted all of my "light" with destination out sprites, it didn't actually help anything.

@BackendFreak - Unfortunately, no. It's pixel art with almost nothing having any behaviors or effects. The particular layout has 1 enemy with no events attached, and I've disabled the groups for all other enemies. I'm not using super heavy fx, or any large objects. Most sprites are either part of a tilemap or static.

Post » Tue May 16, 2017 12:43 pm

@Ashley

Is C2/C3 taking advantage of double buffering to reduce draw call overhead? Check my previous post; the only issue for my own game on mobile is draw calls, so maybe there's something to it?

The only reason WebGL might perform slower than native is due to overhead, which should be reduced to a minimum for WebGL.

Post » Tue May 16, 2017 12:55 pm

I think all modern OSs double buffer everything on-screen. Certainly all modern graphics applications do.
Scirra Founder

Post » Tue May 16, 2017 1:01 pm

Ashley wrote:I think all modern OSs double buffer everything on-screen. Certainly all modern graphics applications do.


I was under the impression that was something you have to set up manually?

http://stackoverflow.com/questions/29565026/how-to-implement-vbo-double-buffering-in-webgl

Anyway, from what I've read, the main issue with WebGL is draw call overhead, so reducing that should improve performance a lot.

Fewer, larger draw operations will improve performance. If you have 1000 sprites to paint, try to do it as a single drawArrays() or drawElements() call. You can draw degenerate (flat) triangles if you need to draw discontinuous objects as a single drawArrays() call.


https://developer.mozilla.org/en-US/docs/Web/API/WebGL_API/WebGL_best_practices
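To make the quoted MDN advice concrete, here is a minimal sketch of packing many sprite quads into one triangle strip joined by degenerate (zero-area) triangles, so a single `gl.drawArrays` call covers all of them. `buildBatch` is my own hypothetical helper, not anything in C2/C3, and the actual draw call is shown as a comment since it needs a GL context.

```javascript
// Pack each sprite as a 4-vertex quad in triangle-strip order, and bridge
// consecutive quads with two repeated vertices. The repeated vertices form
// zero-area triangles that the GPU rasterizes to nothing.
function buildBatch(sprites) {
  const verts = []; // array of [x, y] pairs
  sprites.forEach((s, i) => {
    const quad = [
      [s.x, s.y],             // top-left
      [s.x, s.y + s.h],       // bottom-left
      [s.x + s.w, s.y],       // top-right
      [s.x + s.w, s.y + s.h], // bottom-right
    ];
    if (i > 0) {
      // Degenerate bridge: repeat the previous quad's last vertex and
      // this quad's first vertex.
      verts.push(verts[verts.length - 1], quad[0]);
    }
    verts.push(...quad);
  });
  return new Float32Array(verts.flat());
}

// 1000 sprites -> one buffer upload, one draw call:
// const batch = buildBatch(sprites);
// gl.bufferData(gl.ARRAY_BUFFER, batch, gl.DYNAMIC_DRAW);
// gl.drawArrays(gl.TRIANGLE_STRIP, 0, batch.length / 2);
```

For n sprites this produces 4n + 2(n-1) vertices instead of n separate draw calls, which is exactly the trade the MDN page recommends: more vertices, far fewer calls.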

I'm not doubting the power of WebGL vs native; I just feel it's not really being used optimally in C2/C3.

Post » Tue May 16, 2017 1:11 pm

Elliott wrote:Please share the .capx with Ashley - I get second hand frustration in these threads because users will complain and then never share the final cause of issues; would love to see some closure.


I think at least that will start giving some real-world comparisons... which will help people develop a game from the start with the inherent issues in mind.

jobel wrote:
ome6a1717 wrote:Honestly I would LOVE for it to be my bad programming, but if I disable all of my code the fps doesn't really change much. Unless, however, disabling is different from actually deleting it.


Not only "bad coding" but also poor budgeting of graphics/effects/particles/objects/collisions/force-own-texture layers, or anything that has a behavior, etc.



As I understand it, behaviours are less resource-intensive than coding your own (at least the built-in ones).

Post » Tue May 16, 2017 2:36 pm

@Ashley

I don't know if any of this makes sense to you (I'm not a coder), but I just tried the WebGL inspector in Firefox. This looks like a lot of draw calls for one frame, where good practice (from what I've read) is to batch them and draw all sprites at once. Is there any way to optimize this further?

[screenshot: WebGL inspector trace of the draw calls issued in one frame]

And Gecko seems to be using most CPU time.

[screenshot: profiler showing Gecko using most of the CPU time]

As I've said, I'm not doubting WebGL performance. I'm just suspecting we could get more bang for the buck if C2/C3 were optimized to reduce draw calls to a minimum, since overhead is the major issue with WebGL. It's well known; that's why it's considered good practice to draw many things at once. Optimal is one draw per frame.

Any thoughts on that? Is there anything that can be done to reduce the number of draw calls, do you think?
