Blitzmax - What determines game speed?

Started by PixelOutlaw, March 17, 2025, 00:46:46


PixelOutlaw

Hello all,

What determines how fast a BlitzMax game runs?
For example let's say you've got a game object and you move it 1 pixel in your main loop.

My games seem to run the same spanning various generations of computers.
I guess I've never really known what internal mechanism is "fixing" the speed.

Ubuntu MATE 20.04: i5-3570K CPU @ 3.40GHz, 8GB RAM, GeForce GTX 1060 3GB

One DEFUN to rule them all, One DEFUN to find them, One DEFUN to RETURN them all, and in the darkness MULTIPLE-VALUE-BIND them.

col

If you haven't implemented anything yourself, then the Flip(True) command will limit the maximum speed to the monitor refresh rate. This is the speed limiter. The CPU will wait for the Flip command to return before continuing, and Flip will wait for the vertical blank of the monitor. If the monitor is set to 60Hz then 60fps will be your maximum, no matter how fast or slow the CPU/GPU processes your code.

Try setting Flip(False) to see the CPU/GPU run as fast as possible. There are still limiting factors, such as the CPU waiting for the GPU and the GPU waiting for the CPU, but that's a different story.

It could also be that you have some kind of GPU driver app installed (one that came along with the driver) that limits the presentation of game frames to the monitor refresh rate regardless of what you do in your code. You'd have to check through your system to know if that is the case.
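To illustrate why waiting for the vertical blank caps the frame rate, here is a small simulation of the idea in Python (a sketch, not BlitzMax; it uses a fake millisecond clock, and 50Hz rather than 60Hz so the vblank interval is a whole 20ms). A loop whose "flip" stalls until the next vertical blank can never run faster than the refresh rate, however fast the per-frame work is:

```python
REFRESH_HZ = 50            # 50 Hz so the vblank interval is a whole 20 ms
VBLANK_MS = 1000 // REFRESH_HZ

def frames_in_one_second(work_ms):
    """Simulate one second of a loop whose Flip waits for the vertical blank.
    `work_ms` is how long one frame's cpu/gpu work takes, in milliseconds."""
    now = 0
    frames = 0
    while now < 1000:
        now += work_ms                          # the frame's cpu/gpu work
        # Flip(True): stall until the next vertical blank
        now = ((now // VBLANK_MS) + 1) * VBLANK_MS
        frames += 1
    return frames

print(frames_in_one_second(1))   # → 50: even near-instant frames are capped
print(frames_in_one_second(25))  # → 25: slow frames miss every other vblank
```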
https://github.com/davecamp

"When you observe the world through social media, you lose your faith in it."

PixelOutlaw

I see.
I wonder if it's just the case that my monitors have always been 60Hz.

I know there are a lot of articles that argue against using a fixed time step, but for the type of game I'm making I want deterministic updates. I'm making a 2D shooter, and if I separate the game logic from the rendering logic I get bullets that are inconsistently spaced when patterns need to be tight and regular. In other words, I really don't want to use real time to decide how often something should happen in the game, because that leads to non-deterministic gameplay. What I'd really like is to lock the logic to 60 updates per second, as if everyone had my graphics card. I'm willing to sacrifice consistent game speed, and accept screen tearing, for the sake of per-frame updates.

Midimaster

I see you want to stick with a deterministic system. So this could be a way for you:

If you use Flip 0 together with a time measurement of 16.67ms per loop, you get 60Hz speed on any computer (because 60 × 16.67 ≈ 1000ms).

All examples use a very simple symbolic model:

Code: BASIC
Global Time:Double = Millisecs()
Repeat
   If Time<Millisecs()
      Time = Time + 16.67
      Cls
         ' Do your drawing and calculation jobs here
      Flip 0
   Else
      Delay 1    
   Endif
Until AppTerminate()


You could achieve the same with a TTimer:

Code: BASIC
Global Time:TTimer = CreateTimer(60)
Repeat
   Cls
      ' Do your drawing and calculation jobs here
   Flip 0
   WaitTimer Time
Until AppTerminate()
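
A quick way to convince yourself that the Millisecs()-accumulator loop above is deterministic: simulate the clock in Python (a sketch of the same logic, not BlitzMax) and count the ticks. The tick count depends only on elapsed time, never on how fast the loop spins:

```python
def ticks_in(ms_total, step=16.67):
    """Count how many fixed-step updates the accumulator loop performs
    over `ms_total` simulated milliseconds."""
    deadline = 0.0          # the 'Time' variable from the BlitzMax snippet
    ticks = 0
    for now in range(ms_total):      # the clock advances 1 ms per spin
        if deadline < now:
            deadline += step         # schedule the next update...
            ticks += 1               # ...and run this one
    return ticks

print(ticks_in(1000))    # → 60 updates in one simulated second
print(ticks_in(10000))   # → 600 in ten seconds: the rate stays fixed
```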


Both have the disadvantage that fast-moving items like bullets need big Xadd values to move quickly across the screen: with an Xadd of 1px, crossing a 1920px screen takes 1920/60 = 32 seconds. So you need to increase Xadd to 10 to cross in about 3 seconds, or Xadd = 30 to do it in about 1 second.

This means a collision with an object may not be detected. Example: a bullet is at X=120 and the object is at X=125 with a width of 20 (so it spans 125 to 145). The bullet is checked at X=120 and next at X=150, which is already beyond the object, so no collision is detected.
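The missed collision is easy to demonstrate, and one common fix is a swept check: instead of testing only the bullet's position after the move, test the whole interval it travelled. Here is a Python sketch of both checks with the numbers from the example (1D only, bullet treated as a point; an illustration, not BlitzMax code):

```python
def point_hit(x, obj_x, obj_w):
    """Discrete check: does the bullet's current position overlap the object?"""
    return obj_x <= x <= obj_x + obj_w

def swept_hit(x_old, x_new, obj_x, obj_w):
    """Swept check: does the interval travelled this frame overlap the object?"""
    lo, hi = min(x_old, x_new), max(x_old, x_new)
    return lo <= obj_x + obj_w and obj_x <= hi

# Bullet at x=120 moving 30 px per frame; object at x=125, width 20.
print(point_hit(120, 125, 20))       # → False: not there yet
print(point_hit(150, 125, 20))       # → False: already past - tunnelled through!
print(swept_hit(120, 150, 125, 20))  # → True: the travelled span 120..150 covers 125..145
```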

Therefore we often use something like this:

Code: BASIC
Global DrawTime:Double = Millisecs()
Global JobTime:Double  = Millisecs()

Repeat
   If JobTime<Millisecs()
      JobTime = JobTime + 5
         ' Do your calculations here
   ElseIf DrawTime<Millisecs()
      DrawTime = DrawTime + 16.67
      Cls
         ' Do your Drawings here
      Flip 0
   Else
      Delay 1
   Endif
Until AppTerminate()

This calculates movements and collisions 200 times per second, but draws the screen at 60Hz.

In this model the "calculation" branch is preferred when performance gets tight. The "sleeping" branch only runs if neither of the other two branches needs attention.
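Simulating that loop with a fake millisecond clock (a simplified Python sketch of the same branch logic, spinning once per millisecond) confirms the two rates: 200 update ticks and 60 draw ticks per simulated second, with updates winning whenever both are due:

```python
def simulate(ms_total):
    """Count update and draw ticks of the dual-rate loop over `ms_total` ms."""
    job_time = 0.0
    draw_time = 0.0
    updates = 0
    draws = 0
    for now in range(ms_total):       # clock advances 1 ms per spin
        if job_time < now:            # calculations: every 5 ms, preferred
            job_time += 5
            updates += 1
        elif draw_time < now:         # drawing: every 16.67 ms
            draw_time += 16.67
            draws += 1
        # else: Delay 1 (nothing due this millisecond)
    return updates, draws

print(simulate(1000))  # → (200, 60)
```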

...on the way to China.

Derron

If you decouple rendering from updating, you can update as often as needed and simply render the current state.

As your bullets only move when you "update things" (ignoring "tweening" etc. for now), the rendering can happen at the refresh rate of your display without affecting the logic. You just need to ensure that updates can happen as often as you want per second. If your GPU is not fast enough you might not see every "frame" (bullets might move less smoothly), but your logic will work deterministically.

Tweening, by the way, would mean that your render() call knows how much time has passed since the last update() and knows the update() interval. It could then calculate that it is, say, currently 75% of the way to the next update(), and "could" already render elements at the position "x + velocityX * 0.75" and "y + velocityY * 0.75" (with the risk of rendering them inside a wall etc., or having to handle such visual cases in the renderer already).
This is useful if you update at a lower rate (because whatever you do there is CPU hungry) but can render far more often and want things to still move smoothly (just imagine you move your ship by 10px per update() but want to render the pixel-wise transition...).
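The 75% example as a Python sketch (the function name and the round 16ms interval are mine, chosen for illustration; the point is just the interpolation formula): the renderer draws entities at their last updated position plus their per-update velocity scaled by the fraction of the update interval that has elapsed.

```python
def render_position(x, y, vx, vy, since_update, update_interval):
    """Interpolate ('tween') a render position between fixed updates.
    vx, vy are per-update velocities; since_update / update_interval is
    how far we are (0..1) toward the next update."""
    alpha = since_update / update_interval
    return x + vx * alpha, y + vy * alpha

# Ship at (100, 50), moving 10 px right per update; updates every 16 ms,
# and 12 ms have passed since the last one -> 75% of the way there.
print(render_position(100, 50, 10, 0, 12.0, 16.0))  # → (107.5, 50.0)
```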


In my game TVTower (source on GitHub) I use a delta timer class which can limit both UpdatesPerSecond and RendersPerSecond. I run it with, for example, 30 updates per second but vsync-limited render frames per second. The sources are in the "source/Dig" folder, if you want to have a look.


bye
Ron 

PixelOutlaw

Thanks for the help!
I'll try these out this afternoon and take a peek at TVTower and see how they feel.

Steve Elliott

Quote: "I'll try these out this afternoon and take a peek at TVTower and see how they feel."

With all due respect, TVTower does run silky smooth, but it's a much slower-paced game. Scrolling games, I feel, are really unforgiving: "smooth enough" doesn't cut it there; it's a tricky balance. I've not looked at Ron's code. Maybe PixelOutlaw can use it and produce a 2D shooter with it, and then we can all steal Ron's code if it performs superbly with 2D shooters too lol.  :D

And the massive number of systems and monitor refresh rates out there doesn't help. Back in the day it was 50Hz for the UK and 60Hz for other countries, and syncing updates to the monitor/TV refresh rate gave silky smooth results. You can't assume that these days.

Personally I've not got on with tweening; I tend to use a "delta time" derived from a required ideal FPS (rather than tweening after a set update). You then adjust the delta based on the ideal vs. the real FPS. If you don't, and the frame rate varies massively, so does the delta time, and you can't allow that. Just multiply movement speeds by the delta. For BlitzMax, maybe use Doubles rather than Floats.

Anyway, it's a bit of a "magic sauce" at times, I feel lol; use whatever works for you. This is a code snippet (some AGK code) but you should be able to convert it.

Code: BASIC
Type Player

    sprite       As Integer    // a single sprite id (was declared Integer[1], but it's used as a scalar)
    frame_num    As Integer
    max_frame    As Integer
    anim_timer   As Float
    anim_speed   As Float
    x            As Float
    y            As Float

Endtype

Global player As Player

// init delta time

Type Delta

    prev_time   As Float
    time_frame  As Float
    update_time As Float
    req_fps     As Float
    dt          As Float
    prev_dt     As Float
    min         As Float
    max         As Float

EndType

Global delta As Delta
ResetTimer()

delta.req_fps     = 60.0
delta.update_time = 1000.0 / delta.req_fps
delta.min         = 0.005
delta.max         = 5.0
delta.prev_time   = Timer()
delta.time_frame  = 1.0 / delta.update_time
delta.dt          = 1.0 / delta.update_time
delta.prev_dt     = 1.0 / delta.update_time

// delta time function

Function get_delta()

    delta.time_frame = ( Timer() - delta.prev_time ) * 1000.0    // Timer() is in seconds
    delta.prev_time  = Timer()

    delta.prev_dt = delta.dt
    delta.dt      = delta.time_frame / delta.update_time

    // limit increase from frame to frame
    If( delta.dt > (delta.prev_dt * 2.0) )
        delta.dt = delta.prev_dt * 2.0
    ElseIf( delta.dt < (delta.prev_dt / 2.0) )
        delta.dt = delta.prev_dt / 2.0
    Endif

    // keep frameskip sensible and avoid delta becoming too small
    If( delta.dt > delta.max )
        delta.dt = delta.max
    ElseIf( delta.dt < delta.min )
        delta.dt = delta.min
    Endif

EndFunction delta.dt

Function make_player()

    // img_player1 is assumed to be an image id loaded elsewhere
    player.sprite       = CreateSprite( img_player1 )
    player.x            = 100.0
    player.y            = 500.0
    player.frame_num    = 0
    player.max_frame    = 7
    player.anim_timer   = 0.0
    player.anim_speed   = 14.0

    SetSpritePosition( player.sprite, player.x, player.y )

Endfunction

Function update_player( dt As Float )

    player.x = player.x + 2.0 * dt
    If( player.x > 1920 ) Then player.x = -128

    SetSpritePosition( player.sprite, player.x, player.y )
    DrawSprite( player.sprite )

Endfunction

// game loop

make_player()
ResetTimer()
Global game_mode = IN_PLAY    // IN_PLAY / QUIT are constants defined elsewhere
dt As Float

Repeat
    dt = get_delta()

    update_player( dt )
    If( GetRawKeyState( KEY_ESC ) ) Then game_mode = QUIT

    Swap()
Until game_mode = QUIT
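
The clamping in get_delta is the part that trips people up, so here is the same rule set as a standalone Python function (a sketch of the logic above, not AGK code): the delta may at most double or halve between frames, and is then clamped to hard min/max bounds.

```python
def clamp_delta(dt, prev_dt, dt_min=0.005, dt_max=5.0):
    """Delta-time rules from the snippet above: limit frame-to-frame change
    to 2x either way, then clamp to absolute bounds so frameskip stays sane."""
    if dt > prev_dt * 2.0:
        dt = prev_dt * 2.0       # a spike may at most double the delta
    elif dt < prev_dt / 2.0:
        dt = prev_dt / 2.0       # a sudden drop may at most halve it
    return max(dt_min, min(dt_max, dt))

print(clamp_delta(10.0, 1.0))   # → 2.0  (a huge spike only doubles the delta)
print(clamp_delta(0.1, 1.0))    # → 0.5  (a sudden drop only halves it)
print(clamp_delta(1.2, 1.0))    # → 1.2  (modest changes pass through)
```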

Win11 64Gb 12th Gen Intel i9 12900K 5.2Ghz Nvidia RTX 3070Ti 8Gb
Win11 16Gb 12th Gen Intel i5 12450H 4.4Ghz Nvidia RTX 2050 8Gb
Win10/Linux Mint 16Gb 4th Gen Intel i5 4570 3.6GHz Nvidia GeForce GTX 1050 2Gb
Linux Mint 8Gb Celeron 2.6Ghz UHD Graphics600
macOS 64Gb M4 Max 16C GPU 40C
Spectrum Next 2Mb

Baggey

I took the liberty of translating @Steve Elliott's code into BlitzMax NG for a bit of fun, and hope it helps someone.

Floats or Doubles don't seem to make a difference?

Runnable BlitzMax NG code

Code: BASIC

' Conversion of Steve's AGK code to BlitzMax NG by Baggey

SuperStrict

AppTitle=" SetDeltaTime"

Graphics 1200,600

Type TPlayer

  Field sprite:TPixmap
  Field frame_num:Int
  Field max_frame:Int
  Field anim_timer:Int
  Field anim_speed:Double
  Field x:Double    ' Double, so fractional per-frame movement isn't truncated
  Field y:Double

EndType

Global Player:Tplayer = New TPlayer

' init delta time

Type TDelta

  Field prev_time:Double
  Field time_frame:Double
  Field update_time:Double
  Field req_fps:Double
  Field dt:Double
  Field prev_dt:Double
  Field Minimum:Double
  Field Maximum:Double

EndType

Global Delta:TDelta = New TDelta

ResetTimer()


Function ResetTimer()

  delta.req_fps      = 60.0
  delta.update_time  = 1000.0 / delta.req_fps   ' the / was lost in translation: ' starts a comment in BlitzMax
  delta.Minimum      = 0.005
  delta.Maximum      = 5.0
  delta.prev_time    = MilliSecs()
  delta.time_frame   = 1.0 / delta.update_time
  delta.dt           = 1.0 / delta.update_time
  delta.prev_dt      = 1.0 / delta.update_time

End Function


' delta time Function

Function get_delta:Double()

  delta.time_frame = MilliSecs() - delta.prev_time   ' MilliSecs() is already in ms (AGK's Timer() was in seconds, hence the old * 1000.0)
  delta.prev_time  = MilliSecs()

  delta.prev_dt    = delta.dt
  delta.dt        = delta.time_frame / delta.update_time

  ' limit increase from frame To frame

  If( delta.dt > (delta.prev_dt * 2.0) ) Then

     delta.dt = delta.prev_dt * 2.0

  ElseIf( delta.dt < (delta.prev_dt / 2.0) )

    delta.dt = delta.prev_dt / 2.0

  EndIf

  ' keep frameskip sensible And avoid delta becoming too small

  If( delta.dt > delta.Maximum ) Then

    delta.dt = delta.Maximum

  ElseIf( delta.dt < delta.Minimum )

    delta.dt = delta.Minimum

  EndIf

  Return delta.dt

EndFunction


Function make_player()


  player.sprite = CreatePixmap(10,10,PF_RGBA8888)
  ClearPixels(player.sprite, $FFFF0000)
  'Local img_player1:TPixmap = LockImage(player.sprite)


  'player.sprite = Createtpixmap( img_player1 )
  player.x            = 100.0
  player.y            = 500.0
  player.frame_num    = 0
  player.max_frame    = 7
  player.anim_timer   = 0.0
  player.anim_speed   = 14.0
 
  'SetSpritePosition( player.sprite, player.x, player.y )

EndFunction


Function update_player( dt:Double )
 
  player.x = player.x + 2.0 * dt
  If( player.x > 1920 ) Then player.x = -128
 
  'SetSpritePosition( player.sprite, player.x, player.y )
  DrawPixmap( player.sprite, player.x, player.y)

EndFunction

' game loop

make_player()
ResetTimer()

Global IN_PLAY:Byte = True
Global QUIT:Byte    = False

Global game_mode:Byte = IN_PLAY
Global dt:Double


Repeat

    Cls()

    dt = get_delta()

    update_player( dt )
    If( KeyHit(KEY_ESCAPE) ) Then game_mode = QUIT

    Flip()

Until game_mode = QUIT Or AppTerminate() Or MouseHit(2)

Your Code has been Assimilated for our Blitzmaxers!  :))

Kind Regards Baggey
Running a PC that just Aint fast enough!? i7 4Ghz Quad core 32GB ram  2x1TB SSD and NVIDIA Quadro K1200 on 2 x HP Z24's . DID Technology stop! Or have we been assimulated!

Windows10, Parrot OS, Raspberry Pi Black Edition! , ZX Spectrum 48k, C64, Enterprise 128K, The SID chip. Im Misunderstood!

RemiD

The way I do it in Blitz3D is to use a "timer" to limit the number of loops/frames done per second, and also a "speed coefficient" to adjust the speed of movements/turns/animations depending on the milliseconds it takes to do a loop / render a frame (so that when the fps decreases, the apparent speed of the game stays constant).

It works well on my different computers...

Blitz3D code:
https://www.syntaxbomb.com/miscellaneous/bb-speed-of-the-turnsmovesanimations-automatically-adjusted-to-appear-constant-w/
post #1
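
RemiD's "speed coefficient" is the classic frame-time ratio. A Python sketch of the idea (function and constant names are mine): measure how long the last loop took, divide by the ideal frame time, and scale every movement by that ratio so apparent speed stays constant when the fps dips.

```python
TARGET_FRAME_MS = 1000.0 / 60.0   # ideal frame time at 60 fps

def speed_coeff(frame_ms):
    """Ratio of actual to ideal frame time: 1.0 at 60 fps, 2.0 at 30 fps."""
    return frame_ms / TARGET_FRAME_MS

# Move something 4 px per ideal frame, whatever the real frame rate:
x = 0.0
x += 4.0 * speed_coeff(1000.0 / 60.0)   # a 60 fps frame: +4 px
x += 4.0 * speed_coeff(1000.0 / 30.0)   # a 30 fps frame: +8 px, same px/second
print(x)  # → 12.0
```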

Derron

I did not try to claim "it works for everything" :-)

I just wanted to emphasize that decoupling logic updates from rendering WILL be the way to go. In the worst case every update() is simply followed by a render(). But once you decouple them, you can more easily adapt the update rate to your physics needs.

The basic issue is the classic "bullet through thin paper" problem. If a bullet has a velocity of 200px per second and you update 60 times a second, the bullet moves about 3.33px per update. If there is a wall 1px wide at x=100, and your bullet is also 1px wide and currently at x=98, it has not hit the wall yet (98 + 1 < 100), and on the next update it is at 101.33, which also doesn't hit the wall (101.33 > 101) (no, no "int(value)"-ing here :D).
It becomes more obvious if you think of "bullet hits bullet" (or if the wall were a moving object coming toward you).

Most often people then just increase the update rate (1000 times a second...) to avoid having to ask "where did it come from, did it cross the other object while moving to its new position?".


In TVTower I do something a bit similar. The ingame time (of the world it simulates) advances inside the update() calls. So I call update() e.g. 30 times per (real-time) second, calculate how many ingame time steps need to be advanced, and then step through all of them.

So in your shoot 'em up you could still call update() only "refresh rate" times per second, but inside it move all your units in "micro steps" (so that you ensure you hit each "grid point"), followed by your unit handling (collision checks, enemies hit -> points scored, ...).
Other things can then be updated at a slower pace.
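The "micro steps" idea can be sketched in a few lines of Python (a hypothetical helper, 1D only, point bullet vs. a thin wall): split each update's movement into sub-steps no larger than the thinnest obstacle, and check for a collision after every sub-step.

```python
import math

def move_with_substeps(x, dx, wall_x, wall_w, max_step=1.0):
    """Move `x` by `dx` in sub-steps of at most `max_step`, checking the
    wall after each one. Returns (final_x, hit)."""
    steps = max(1, math.ceil(abs(dx) / max_step))
    step = dx / steps
    for _ in range(steps):
        x += step
        if wall_x <= x <= wall_x + wall_w:
            return x, True          # stop at the point of impact
    return x, False

# Derron's bullet: at x=98, moving ~3.33 px per update, wall spans 100..101.
x, hit = move_with_substeps(98.0, 200.0 / 60.0, 100.0, 1.0)
print(hit)  # → True: the sub-steps catch the wall a single 3.33 px jump misses
```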

Of course there are plenty of alternatives to this approach:
- count update ticks and only do specific updates when "tick Mod 10 = 0" (so at a tenth of the "refresh rate", which would then not be fixed but depend on the display...)
- have various updateXYZ() functions, each with its own rules for when to be called (fixed, dynamic, depending on the previous render, ...)

The idea is not to run too much within update(), so you don't clog the CPU and leave no cycles for the rendering function.


Hope I was able to describe the benefits of decoupling updates/logic and rendering.


PS: "slow motion" or "fast forward" also become much easier to do this way (just change the update rate) without hogging the GPU (it simply renders whatever is currently there to render).

bye
Ron