Posts: 726 | Thanked: 345 times | Joined on Apr 2010 @ Sweden
#11
Originally Posted by baxyp View Post
Umm, vsync is not available on TV out only; the LCD display refreshes its picture at a certain rate too. When display buffers are not swapped at vsync time, visual tearing occurs. Also, when rendering is not restricted by the vsync rate, apps may paint too many frames, wasting CPU/GPU cycles and power.
So to make OpenGL work flawlessly with an LCD display, the rendering loop has to keep track of when to ask for a new frame AND compensate for not knowing how long it will take to render that frame?

This sounds like an impossible task unless you limit the time any program has to render one frame.
 
Posts: 10 | Thanked: 2 times | Joined on Jan 2010
#12
Originally Posted by Joorin View Post
How do you propose to let every process that is running know if it's ok to render or not? And, conversely, how would the OS know that all running processes are done rendering and ask the display hardware to update the screen?
This is done in the drivers.
Since OpenGL ES 2.0 relies on the rendering function to swap the buffers when it has rendered one frame, keeping OpenGL applications in sync with normally rendered applications comes across as a bit hard.
This is for games and I tested on fullscreen rendering of course.

You're not satisfied with how they will look?
To the point that it's pointless to try to do anything at all.

http://www.youtube.com/watch?v=cz9iI98WtK8

(Vsync issues for another platform, but you get the idea how it impairs the look of the graphics, just to illustrate what lack of vsync means.)
 
Posts: 10 | Thanked: 2 times | Joined on Jan 2010
#13
Originally Posted by Joorin View Post
This sounds like an impossible task unless you limit the time any program has to render one frame.
This is EXACTLY what should be done by the driver inside the swapbuffers call.
If your application is slower, its frame time becomes a whole multiple of the refresh interval (say 30Hz if your app is running heavy graphics), but you get a stable image and no tearing artefacts.

To test it, you render next to nothing (like the single triangle in the proof of concept) and see if you get a stable frame rate.
 
Posts: 10 | Thanked: 2 times | Joined on Jan 2010
#14
Originally Posted by javicq View Post
I was looking for that a while ago too but it seems it won't be fixed for maemo 5:

https://bugs.maemo.org/show_bug.cgi?id=5556
Well, maybe it's too much to ask for the desktop compositor to support this but at least fullscreen OpenGL surfaces should support vsync.
 

Posts: 94 | Thanked: 319 times | Joined on Mar 2010 @ Barcelona, Spain
#15
Originally Posted by jaw_vvd View Post
Well, maybe it's too much to ask for the desktop compositor to support this but at least fullscreen OpenGL surfaces should support vsync.
Actually I read somewhere else that this was a driver issue, not just the desktop. No API is getting vsync info.
 
Posts: 10 | Thanked: 2 times | Joined on Jan 2010
#16
Originally Posted by javicq View Post
Actually I read somewhere else that this was a driver issue, not just the desktop. No API is getting vsync info.
I read this too, so I'm wondering if maybe it's fixed in 1.2
If not: I'm jumping ship.
 
Posts: 726 | Thanked: 345 times | Joined on Apr 2010 @ Sweden
#17
Originally Posted by jaw_vvd View Post
This is EXACTLY what should be done by the driver inside the swapbuffers call.
If your application is slower, its frame time becomes a whole multiple of the refresh interval (say 30Hz if your app is running heavy graphics), but you get a stable image and no tearing artefacts.
Perhaps I'm too much of a n00b when it comes to 3D graphics (and games), since I have trouble understanding how the driver would be able to limit the rendering time by acting in the swap buffers call. To me, that seems too late, since all the rendering happens before the swap buffers call.

What would the driver do in the swap buffers call? Wait for the proper time and then move data to VRAM slowing down rendering on the device to match the speed of the slowest process? Or something else?

To test it, you render next to nothing (like the single triangle in the proof of concept) and see if you get a stable frame rate.
What would such a test show? I did just that last night while testing some other OpenGL ES 2.0 example with a rotating spiral in full screen. I got pretty much the same FPS, around 54, all the time. Would a device with proper vsync handling always give me 54.0000 FPS, if 54 Hz happens to be the update frequency of the screen, instead of varying from 54.1234 to 55.4321?
 
Posts: 94 | Thanked: 319 times | Joined on Mar 2010 @ Barcelona, Spain
#18
Originally Posted by jaw_vvd View Post
I read this too, so I'm wondering if maybe it's fixed in 1.2
If not: I'm jumping ship.
Unlikely.
It's better explained in this other bug report; look at comment #3 from Eero:

https://bugs.maemo.org/show_bug.cgi?id=7459
 
Posts: 10 | Thanked: 2 times | Joined on Jan 2010
#19
ok, Nokia multifails. Bye bye.
 
Posts: 10 | Thanked: 2 times | Joined on Jan 2010
#20
Originally Posted by Joorin View Post
What would the driver do in the swap buffers call? Wait for the proper time and then move data to VRAM slowing down rendering on the device to match the speed of the slowest process?
It stalls on the "swapbuffers" call until all the data currently in the buffer has been handled by the display.
 