Khertan
Posts: 1,012 | Thanked: 817 times | Joined on Jul 2007 @ France
#1
Hi,

I'm trying to build an Apple Cover Flow style display, so I'm trying to apply a perspective effect to an image.

I've found a way to do it, but it's really slow. I use pygame to load and display the image, and PIL to apply the perspective.

It takes at least one minute to compute 5 covers. I think most of the time is spent converting the pygame image to a PIL image:
Code:
    def perspectiv(self, z, w):
      # pygame Surface -> raw string -> PIL image
      pil_string_image = pygame.image.tostring(self.origineimage, "RGBA", False)
      pil_image = Image.fromstring("RGBA", (250, 250), pil_string_image)
      # warp the source quadrilateral to fake the perspective
      pil_image = pil_image.transform((250, 250), Image.QUAD,
                                      (0 - w, -w, 0 - w, 250 + w,
                                       250 + z, 250 + z, 250 + z, -z),
                                      Image.BICUBIC)
      # PIL image -> raw string -> pygame Surface
      mode = pil_image.mode
      size = pil_image.size
      data = pil_image.tostring()
      self.image = pygame.image.fromstring(data, size, mode)
Do you know another way to do a fast perspective image effect in Python?
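(For anyone puzzling over that 8-tuple: PIL's Image.QUAD expects the corners of the source quadrilateral in the order upper-left, lower-left, lower-right, upper-right, as x/y pairs. A tiny helper, purely hypothetical and just for readability, could build it from the same z/w tilt values used above:)

```python
def perspective_quad(size, z, w):
    """Source quadrilateral for Image.QUAD, matching the call above.

    Corner order expected by PIL: upper-left, lower-left,
    lower-right, upper-right (as x, y pairs).
    """
    return (0 - w, -w,           # upper-left
            0 - w, size + w,     # lower-left
            size + z, size + z,  # lower-right
            size + z, -z)        # upper-right

# With no tilt the quad is just the image rectangle:
print(perspective_quad(250, 0, 0))  # -> (0, 0, 0, 250, 250, 250, 250, 0)
```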
 
pipeline
Posts: 693 | Thanked: 502 times | Joined on Jul 2007
#2
For what it's worth, here's how I would try to optimize (I've never used these Python functions).

Just from looking at the documentation at:
http://www.pygame.org/docs/ref/image.html and
http://www.pythonware.com/library/pi...book/image.htm

I would probably keep the original image strings in a global array so I didn't need to convert before every frame. I would call pygame.image.frombuffer() during initial setup only; subsequent accesses would just write directly to the string buffer (I guess this is how it would work).


Additionally, I would try to do the same thing with the destination image strings: keep them in a global array and use a third, temporary string for the transformation. I'm not sure whether regular assignment to the buffers would keep them linked to the images using them, though.

So your render function might look like:
TmpImageString = OriginalImageStrings[x]
TmpImage = pil_image.transform(...)
#Try it uncommented first... if just setting the buffer string doesn't work, revert to the original way.
#self.image = pygame.image.fromstring()
DestinationImageStrings[x] = pil_image.tostring()


Also, I'm not too sure about the 2D transformation function, but what's the performance difference using NEAREST or BILINEAR instead of BICUBIC? BICUBIC is the nicest but slowest, and NEAREST is the fastest.
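To put rough numbers on that, here is a quick benchmark sketch (it assumes PIL/Pillow is installed; the getattr lookups are only there so it runs on both classic PIL and newer Pillow, where the constants moved into enums):

```python
import time

try:
    from PIL import Image  # Pillow
except ImportError:
    import Image           # classic PIL

# Image.QUAD moved to Image.Transform.QUAD in newer Pillow
QUAD = getattr(getattr(Image, "Transform", Image), "QUAD")

img = Image.new("RGB", (250, 250), (200, 100, 50))
quad = (0, 0, 0, 250, 260, 260, 260, -10)  # same kind of data as in the post above

for name in ("NEAREST", "BILINEAR", "BICUBIC"):
    resample = getattr(getattr(Image, "Resampling", Image), name)
    t0 = time.time()
    for _ in range(20):
        out = img.transform((250, 250), QUAD, quad, resample)
    print(name, round(time.time() - t0, 3), "s for 20 frames")
```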

Last edited by pipeline; 2007-09-19 at 11:57.
 
Khertan
Posts: 1,012 | Thanked: 817 times | Joined on Jul 2007 @ France
#3
Thanks.

Since my post I've run some tests: I removed the conversion through StringIO, optimized how the image is used, switched to smaller 125x125 covers instead of 250x250, and used NEAREST instead of BICUBIC.

But it's still too slow on the device to be useful.

In fact, the slowest part is the PIL transformation.

Last edited by Khertan; 2007-09-19 at 12:02.
 
zeez
Posts: 341 | Thanked: 68 times | Joined on Aug 2007
#4
Can you maybe post the whole source so I can take a look at it?
 
Khertan
Posts: 1,012 | Thanked: 817 times | Joined on Jul 2007 @ France
#5
Yes, of course... but at this time it's just a test of 5 covers moving on the screen. It's really badly coded!

Here's the source (sorry, I can't upload a complete zip as this forum only accepts files up to 96 KB):

Just take the cover I've uploaded and duplicate it as "cover1.jpg", "cover2.jpg", "cover3.jpg", "cover4.jpg" and "cover5.jpg" in a "covers" folder located in the same folder as the flow.py source file.

Code:
#!/usr/bin/env python

import os, pygame
from pygame.locals import *
import Image
import StringIO
import time

class Cover:
    def __init__(self, pos):
        self.image = pygame.Surface((125,125))
        self.cover_url = os.path.join('covers', 'cover' + str(pos) + '.jpg')
        self.pil_image = Image.open(self.cover_url)
        self.speed = 5
        # spread the five covers evenly across the 800-pixel screen
        self.pos = self.image.get_rect().move((800 / 6) * (pos - 1), 50)
        self.perspectiv(0, 0)
        
    def move(self):
        self.pos = self.pos.move(self.speed, 0)
        if (self.pos.right > 800):
            self.pos.left = 0

        # tilt toward the screen centre (x=400) depending on which side we are on
        if (self.pos.left + 77) < 400:
            leftangle = 0
            rightangle = (400 - (self.pos.left + 77)) / 2
        else:
            leftangle = ((self.pos.left + 77) - 400) / 2
            rightangle = 0
        self.perspectiv(rightangle, leftangle)

    def perspectiv(self, z, w):
        # warp the cached PIL image, then hand the result back to pygame
        perspectiv_pil_image = self.pil_image.transform(
            (125, 125), Image.QUAD,
            (0 - w, -w, 0 - w, 125 + w, 125 + z, 125 + z, 125 + z, -z))

        # round-trip through an in-memory BMP to build the pygame surface
        f = StringIO.StringIO()
        perspectiv_pil_image.save(f, "bmp")
        f.seek(0)
        self.image = pygame.image.load(f)
        f.close()
      
#quick function to load an image
def load_image(name):
    path = os.path.join('covers', name)
    return pygame.image.load(path).convert()


#here's the full code
def main():
    pygame.init()
    screen = pygame.display.set_mode((800, 480))

    #coverimg1 = load_image('cover1.jpg')
    #coverimg2 = load_image('cover2.jpg')
    #coverimg3 = load_image('cover3.jpg')
    #coverimg4 = load_image('cover4.jpg')
    #coverimg5 = load_image('cover5.jpg')
    
    #background = load_image('background.png')
    background = pygame.Surface((800,480))
    #Display Background
    screen.blit(background, (0, 0))

    #Define covers
    objects = [Cover(i) for i in range(1, 6)]
    
    while 1:
        for event in pygame.event.get():
            if event.type in (QUIT, KEYDOWN):
                return

        for o in objects:
            screen.blit(background, o.pos, o.pos)
        for o in objects:
            o.move()
            screen.blit(o.image, o.pos)

        pygame.display.update()



if __name__ == '__main__': main()
Attached Images
 

Last edited by Khertan; 2007-09-19 at 14:53.
 
zeez
Posts: 341 | Thanked: 68 times | Joined on Aug 2007
#6
Hmm, I'm afraid the transformation is just too slow. I don't think it's possible to optimize it *heavily* without OpenGL. You could of course precache the transformations, but for a common music collection on an 8 GB card I don't think that's reasonable.
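One middle ground, if precaching everything is out: cache transforms at runtime keyed by a quantized angle, so a cover sliding across the screen reuses frames instead of recomputing every step. A rough sketch (the names and the transform callback are hypothetical, not from the code above):

```python
def quantize(angle, step=5):
    """Snap an angle to the nearest multiple of `step` degrees."""
    return int(round(float(angle) / step)) * step

class TransformCache:
    """Memoize warped covers by (cover id, quantized angle)."""
    def __init__(self, transform, step=5):
        self.transform = transform  # callback: (cover_id, angle) -> image
        self.step = step
        self.cache = {}

    def get(self, cover_id, angle):
        key = (cover_id, quantize(angle, self.step))
        if key not in self.cache:
            self.cache[key] = self.transform(*key)
        return self.cache[key]

# Stub transform that records how often it is actually called:
calls = []
cache = TransformCache(lambda cid, a: calls.append((cid, a)) or (cid, a))
cache.get("cover1.jpg", 29)
cache.get("cover1.jpg", 31)   # same 5-degree bucket: served from the cache
print(len(calls))             # -> 1
```

With a 5-degree step, a cover that drifts a few pixels per frame mostly hits the cache, at the cost of slightly stepped motion.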
 
Khertan
Posts: 1,012 | Thanked: 817 times | Joined on Jul 2007 @ France
#7
I don't think so either.

I've tested a raytracing algorithm too... but it's really too slow.

Hmm, I'll think about doing a manual matrix transformation on the image.
 
zeez
Posts: 341 | Thanked: 68 times | Joined on Aug 2007
#8
I'm pretty sure that's what the PIL transform function does. And it is *a lot* of computation. We need OpenGL on the IT.
 
Posts: 178 | Thanked: 40 times | Joined on Aug 2007 @ UK
#9
I'm a novice at maemo development, but couldn't the DSP be used to do the transform?
 
Posts: 1,038 | Thanked: 737 times | Joined on Nov 2005 @ Helsinki
#10
Hey, don't use PIL for anything if at all possible. It's freakishly slow. If possible by any means, just use pygame / add your own transformer function.

Or consider this: take the image into pygame. Make a copy of it. Scale the copy down to 40x40 inside pygame. Move that to PIL. Do the transformation in PIL. Take the image back from PIL, transfer it to pygame, and scale it up to 250x250 there. Phew. Relax. It should take much less time that way.

Also, consider rendering fewer frames for the transformation so it finishes quicker. Since it's quick, the user might not notice the blurriness coming from the 40x40 base image.
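The downscale-transform-upscale idea above, sketched entirely in PIL so it stays self-contained (the poster suggests doing the scaling steps on the pygame side instead; the getattr lookups are just so this runs on both classic PIL and newer Pillow, where the constants moved into enums):

```python
try:
    from PIL import Image  # Pillow
except ImportError:
    import Image           # classic PIL

QUAD = getattr(getattr(Image, "Transform", Image), "QUAD")
NEAREST = getattr(getattr(Image, "Resampling", Image), "NEAREST")

def fast_perspective(img, z, w, work=40, out=250):
    """Warp at work x work resolution, then blow the result back up."""
    k = float(work) / img.size[0]
    small = img.resize((work, work), NEAREST)
    z, w = z * k, w * k  # scale the tilt down to the working resolution
    quad = (0 - w, -w, 0 - w, work + w,
            work + z, work + z, work + z, -z)
    warped = small.transform((work, work), QUAD, quad)
    return warped.resize((out, out), NEAREST)  # blurry but much cheaper

cover = Image.new("RGB", (250, 250), (180, 60, 60))
print(fast_perspective(cover, 20, 0).size)  # -> (250, 250)
```

The expensive transform now touches 40*40 pixels instead of 250*250, roughly a 39x reduction in work per frame.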
 