Since I don't know how the programmers actually solved this, some experiments should show what's going on.
Here is the simple "add timedelta" thingy:
:arrow: Text('t') adds timedelta every tick, so it counts seconds
:arrow: Text('t1000') adds timedelta*1000 every tick, so it counts milliseconds
:arrow: Timer retrieves the play time
All the error readouts show the difference between Timer and Text('t')*1000, or between Timer and Text('t1000'). The [msec] readout doesn't divide anything; the [s] readout divides the result by 1000.
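To make that concrete, here's a sketch in plain Python (not Construct events; the function names and the assumption that Timer reports milliseconds are mine) of what the two readouts compute:

```python
# Hypothetical sketch of the two error readouts described above,
# assuming Timer reports elapsed play time in milliseconds.
# All names here are illustrative, not Construct's actual API.

def msec_error(timer_ms, t1000_ms):
    # [msec] readout: difference left in milliseconds, no division
    return timer_ms - t1000_ms

def s_error(timer_ms, t_seconds):
    # [s] readout: the same kind of difference, divided by 1000
    # so it is expressed in seconds
    return (timer_ms - t_seconds * 1000.0) / 1000.0

print(msec_error(5000.0, 4999.2))  # drift expressed in milliseconds
print(s_error(5000.0, 4.9992))     # the same drift expressed in seconds
```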
:arrow: Timer is based on adding timedelta
:arrow: Multiplying timedelta by 1000 introduces numerical errors
:arrow: The numerical errors are random in sign (sometimes positive, sometimes negative), so they partially cancel and that keeps the accumulated error small
:arrow: The bigger the accumulated time gets, the bigger the numerical error produced on every tick (floating-point precision drops as the stored value grows)
:arrow: A 0 error for adding the unscaled timedelta doesn't necessarily mean that Timer matches the system time (then again, who'd care about something like a 0.001 ms error over the longest play session possible?)
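The accumulation effect is easy to reproduce outside Construct. Here's a standalone Python simulation (my own sketch, not Construct's actual code): it adds a fixed single-precision timedelta every tick for a simulated hour at 60 fps, once in seconds and once in milliseconds, and compares both counters against a double-precision reference:

```python
import struct

def f32(x):
    """Round to single precision, emulating a 32-bit float accumulator."""
    return struct.unpack('f', struct.pack('f', x))[0]

dt = f32(1.0 / 60.0)          # hypothetical fixed 60 fps timedelta
ticks = 60 * 60 * 60          # one simulated hour of ticks

t_s = 0.0                     # "Text('t')" counter:     t += timedelta
t_ms = 0.0                    # "Text('t1000')" counter: t += timedelta * 1000
for _ in range(ticks):
    t_s = f32(t_s + dt)
    t_ms = f32(t_ms + f32(dt * 1000.0))

true_ms = ticks * dt * 1000.0  # reference sum kept in double precision
print(f"seconds counter drift:     {true_ms / 1000.0 - t_s:+.4f} s")
print(f"millisecond counter drift: {true_ms - t_ms:+.1f} ms")
```

On my reasoning the drift of each counter depends on the counter's magnitude (the millisecond counter sits in the millions, so each ~16.67 add has far fewer fraction bits to work with), which matches the observation above that the per-tick error grows as the time grows. The exact drift values will depend on the precision of the engine's actual accumulator, so treat the numbers as illustrative.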
I'm leaving the program running now to see the results after some time...
Because the errors are random, maybe after a few hours the error will turn out positive ^^.
Maybe it would make more sense to check this incrementing against the Profiler's "getTickCount()"? I remember using getTickCount() before and its resolution wasn't really pleasing; is it the same with Construct's profiler?
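As for resolution, the same kind of experiment can probe any clock a runtime exposes. Here's a Python sketch (using time.monotonic() as a stand-in, since I can't call Construct's getTickCount() from here) that samples a clock in a tight loop and reports the smallest step it ever observes:

```python
import time

def clock_resolution(clock, samples=100000):
    """Probe a clock's effective resolution: sample it repeatedly and
    return the smallest nonzero increment it ever reports."""
    smallest = float('inf')
    prev = clock()
    for _ in range(samples):
        now = clock()
        if now != prev:
            step = now - prev
            if step < smallest:
                smallest = step
            prev = now
    return smallest

# A coarse clock (like the old ~15.6 ms GetTickCount on Windows) would
# show a large smallest step here; a high-resolution one shows microseconds.
print(f"smallest observed step: {clock_resolution(time.monotonic) * 1000:.4f} ms")
```

Note the result is an upper bound on the true resolution: the loop only sees a step when the clock actually ticks between two calls, so a very slow loop can overestimate it.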