Posts: 61 | Thanked: 77 times | Joined on Dec 2009 @ Lancaster
#21
Originally Posted by klen View Post
I managed to link the needed libraries, using pkg-config instead of listing the flags manually (i.e. -L/usr/lib/gtk-2.0 -lgtk+-2.0).

I added one line to my QMAKE file, where I add a linker flag and define names of libraries that are needed.

QMAKE_LFLAGS += `pkg-config --libs gtk+-2.0 gstreamer-interfaces-0.10`
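If your qmake has pkg-config support built in, a cleaner variant is to let qmake invoke pkg-config itself (a sketch, assuming the link_pkgconfig feature is available in your Qt version):

Code:
CONFIG += link_pkgconfig
PKGCONFIG += gtk+-2.0 gstreamer-interfaces-0.10
This also pulls in the --cflags, so the include paths come along for free.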

One problem solved, one to go.

I still did not manage to link the pipeline sink to any Qt object. I am looking into Phonon now. Looks promising. Hope it gets me somewhere.

Klen
Phonon is not the way to go. MediaObjects cannot be created from a camera source stream, and unfortunately one can't create custom MediaSource objects either. Phonon is therefore only good for playing video and audio from files and URL streams.

Cheers,
Klen
 
Posts: 14 | Thanked: 15 times | Joined on Feb 2010 @ bay area, us
#22
Originally Posted by ptterb View Post
Ok, so I just spent the ENTIRE day working on this! I got a lot farther, but it's just not working quite right. I was able to get the video to play in a separate window, but when I try to put it inside a QWidget on my GUI, the screen goes black and starts flashing and I have to restart the phone. Here's the code I have:


class Vid:
    def __init__(self, windowId):
        self.player = gst.Pipeline("player")
        self.source = gst.element_factory_make("v4l2src", "vsource")
        self.sink = gst.element_factory_make("autovideosink", "outsink")
        self.source.set_property("device", "/dev/video0")
        self.scaler = gst.element_factory_make("videoscale", "vscale")
        self.window_id = None
        self.windowId = windowId

        self.player.add(self.source, self.scaler, self.sink)
        gst.element_link_many(self.source, self.scaler, self.sink)

        bus = self.player.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()
        bus.connect("message", self.on_message)
        bus.connect("sync-message::element", self.on_sync_message)

    def on_message(self, bus, message):
        t = message.type
        if t == gst.MESSAGE_EOS:
            self.player.set_state(gst.STATE_NULL)
        elif t == gst.MESSAGE_ERROR:
            err, debug = message.parse_error()
            print "Error: %s" % err, debug
            self.player.set_state(gst.STATE_NULL)

    def on_sync_message(self, bus, message):
        if message.structure is None:
            return
        message_name = message.structure.get_name()
        if message_name == "prepare-xwindow-id":
            win_id = self.windowId
            assert win_id
            imagesink = message.src
            imagesink.set_property("force-aspect-ratio", True)
            imagesink.set_xwindow_id(win_id)

    def startPrev(self):
        self.player.set_state(gst.STATE_PLAYING)
        print "should be playing"

vidStream = Vid(wId)
vidStream.startPrev()

where wId is the window id of the widget where I want the video displayed. Any ideas, anyone? Thanks!

In case anyone saw this and had a similar problem, the answer appears to be that I needed to force the resolution of the pipeline to match the widget into which I was pumping the video:

self.fvidscale_cap = gst.element_factory_make("capsfilter", "fvidscale_cap")
self.fvidscale_cap.set_property('caps', gst.caps_from_string('video/x-raw-yuv, width=256, height=192'))

those two lines, and then adding them to the pipeline, did the trick!
 

The Following User Says Thank You to ptterb For This Useful Post:
Posts: 58 | Thanked: 10 times | Joined on Dec 2009
#23
Ptterb, can you post your full code please?

I copied your code.
Added fvidscale_cap to pipeline, with:

self.player.add(self.source, self.scaler, self.fvidscale_cap, self.sink)
gst.element_link_many(self.source, self.scaler, self.fvidscale_cap, self.sink)

From the main program I create a new QWidget and pass its winId() to the Vid constructor.
The widget starts loading, but then it crashes.

The output says:
should be playing
Segmentation fault
 
Posts: 141 | Thanked: 267 times | Joined on May 2010 @ Germany
#24
Hi, I'm facing the same problem right now and also get a segmentation fault, so the full code would be very helpful.
Thanks!
 
Posts: 58 | Thanked: 10 times | Joined on Dec 2009
#25
Originally Posted by EmaNymton View Post
Hi, I'm facing the same problem right now and also get a segmentation fault, so the full code would be very helpful.
Thanks!
In the script which contains the Vid class, add this at the very beginning, with the imports:
Code:
import gobject
gobject.threads_init()
Enabling GObject threading fixes the segfault.
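For reference, the full import boilerplate for the 0.10 Python bindings then looks something like this (a sketch; the key point is that threads_init() runs before gst is imported):

Code:
import gobject
gobject.threads_init()  # must happen before the gst import

import pygst
pygst.require("0.10")
import gst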
 

The Following User Says Thank You to zolakt For This Useful Post:
Posts: 143 | Thanked: 99 times | Joined on Jun 2009 @ Houston
#26
Originally Posted by klen View Post
I found the solution to the second problem. You do not need the Phonon library; you can just pass the winId() of the QWidget in which you want to display the content of your gst pipeline sink.

The post above led me to the solution.

1.) Create your pipeline;

2.) Create the widget where you want to display the camera stream in;

3.) Set the application attribute Qt::AA_NativeWindows in your main function:
QCoreApplication::setAttribute(Qt::AA_NativeWindows, true);

4.) Call:
QApplication::syncX();

5.) Set the X overlay using the winId() of your widget:
gst_x_overlay_set_xwindow_id (GST_X_OVERLAY (GST_MESSAGE_SRC (message)), widget->winId());

6.) Start playing your pipeline:
gst_element_set_state (pipeline, GST_STATE_PLAYING);
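Spelled out, steps 3.) to 6.) amount to something like the following (a sketch using GStreamer 0.10's gst_bus_set_sync_handler(); the handler and widget names are illustrative, not from a tested program):

Code:
static GstBusSyncReply
bus_sync_handler (GstBus *bus, GstMessage *message, gpointer data)
{
  QWidget *widget = (QWidget *) data;

  /* Only react to the prepare-xwindow-id element message */
  if (GST_MESSAGE_TYPE (message) != GST_MESSAGE_ELEMENT ||
      !gst_structure_has_name (message->structure, "prepare-xwindow-id"))
    return GST_BUS_PASS;

  gst_x_overlay_set_xwindow_id (GST_X_OVERLAY (GST_MESSAGE_SRC (message)),
                                widget->winId ());
  gst_message_unref (message);
  return GST_BUS_DROP;
}

/* ... after building the pipeline, before setting it to PLAYING: */
gst_bus_set_sync_handler (gst_pipeline_get_bus (GST_PIPELINE (pipeline)),
                          bus_sync_handler, widget);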

Hi, can you be more specific about this? I have been trying to get video on a QWidget and I just can't make it work.
I think I'm doing everything like you are, including setting the dimensions of my widget to match my video source and setting the pipeline to yuv instead of rgb...

I'm borrowing some classes from qtcartoonizer (camera and videolabel) but the damn thing still won't play...

I suspect it's something in the way I build my gstreamer pipeline. Here's how I do it:

Code:
#define VIDEO_SRC "v4l2src"
#define VIDEO_SINK "xvimagesink"


	/* Initialize Gstreamer */
	gst_init( NULL, NULL);

	/* Create pipeline and attach a callback to it's
	 * message bus */
	m_pipeline = gst_pipeline_new("test-camera");

	bus = gst_pipeline_get_bus(GST_PIPELINE(m_pipeline));
	//gst_bus_add_watch(bus, (GstBusFunc)bus_callback, &m_appData);
	gst_object_unref(GST_OBJECT(bus));

	/* Create elements */
	/* Camera video stream comes from a Video4Linux driver */
        camera_src = gst_element_factory_make(VIDEO_SRC, "camera_src");
	/* Colorspace filter is needed to make sure that sinks understands
	 * the stream coming from the camera */
	csp_filter = gst_element_factory_make("ffmpegcolorspace", "csp_filter");
	/* Tee that copies the stream to multiple outputs */
	tee = gst_element_factory_make("tee", "tee");
	/* Queue creates new thread for the stream */
	screen_queue = gst_element_factory_make("queue", "screen_queue");
	/* Sink that shows the image on screen. Xephyr doesn't support XVideo
	 * extension, so it needs to use ximagesink, but the device uses
	 * xvimagesink */
	m_videoSink = gst_element_factory_make(VIDEO_SINK, "screen_sink");
	/* Creates separate thread for the stream from which the image
	 * is captured */
	image_queue = gst_element_factory_make("queue", "image_queue");
	/* Filter to convert stream to use format that the gdkpixbuf library
	 * can use */
	image_filter = gst_element_factory_make("ffmpegcolorspace", "image_filter");
	/* A dummy sink for the image stream. Goes to bitheaven */
	image_sink = gst_element_factory_make("fakesink", "image_sink");

	/* Check that elements are correctly initialized */
	if(!(m_pipeline && camera_src && csp_filter && tee && screen_queue
		&& m_videoSink && image_queue && image_filter && image_sink))
	{
		qDebug() << "Couldn't create pipeline elements";
		return FALSE;
	}

	/* Set image sink to emit handoff-signal before throwing away
	 * it's buffer */
	g_object_set(G_OBJECT(image_sink),
			"signal-handoffs", TRUE, NULL);

	/* Add elements to the pipeline. This has to be done prior to
	 * linking them */
	gst_bin_add_many(GST_BIN(m_pipeline), camera_src, csp_filter,
			tee, screen_queue, m_videoSink, image_queue,
			image_filter, image_sink, NULL);

	/* Specify what kind of video is wanted from the camera */
        caps = gst_caps_new_simple("video/x-raw-yuv",
                        "width", G_TYPE_INT, 640,
                        "height", G_TYPE_INT, 480,
			NULL);


	/* Link the camera source and colorspace filter using capabilities
	 * specified */
	if(!gst_element_link_filtered(camera_src, csp_filter, caps))
	{
		return FALSE;
	}
	gst_caps_unref(caps);

	/* Connect Colorspace Filter -> Tee -> Screen Queue -> Screen Sink
	 * This finalizes the initialization of the screen-part of the pipeline */
	if(!gst_element_link_many(csp_filter, tee, screen_queue, m_videoSink, NULL))
	{
        qDebug () << "gst video sink init fail";
		return FALSE;
	}

	/* gdkpixbuf requires 8 bits per sample which is 24 bits per
	 * pixel */
        caps = gst_caps_new_simple("video/x-raw-yuv",
                        "width", G_TYPE_INT, 640,
                        "height", G_TYPE_INT, 480,
			"bpp", G_TYPE_INT, 24,
			"depth", G_TYPE_INT, 24,
			"framerate", GST_TYPE_FRACTION, 15, 1,
			NULL);

	/* Link the image-branch of the pipeline. The pipeline is
	 * ready after this */
	if(!gst_element_link_many(tee, image_queue, image_filter, NULL))
    {
        qDebug () << "gst tee init fail";
        return FALSE;
    }
	if(!gst_element_link_filtered(image_filter, image_sink, caps))
    {

        qDebug () << "gst filterinit fail";
        return FALSE;
    }

    gst_caps_unref(caps);

    gst_element_set_state(m_pipeline, GST_STATE_NULL);
does this sound right to you?

Huge thanks to anyone that can help.

By the way, does anyone know how to select the front camera? How do you do that?
 
Posts: 8 | Thanked: 1 time | Joined on May 2010
#27
Hi,

I think it would be highly useful if someone who has a working Qt camera example published it (the best way for beginners to learn). I'd really appreciate it as well.

I tried building a Qt camera program based on an example, which seems to have been taken offline.
Code:
#include <QtGui/QApplication>
#include <gst/gst.h>
#include "mainwindow.h"

#ifdef HAVE_CONFIG_H
#include "config.h"
#endif

#include "main.h"

#include <QApplication>
#include <QTimer>

#include <gst/interfaces/xoverlay.h>

#include <stdlib.h>
#include "fast.h"

#define DEFAULT_VIDEOSINK "autovideosink"

#define IMGWIDTH 400
#define IMGHEIGHT 256

float *img;
byte *myimg;
int ret_num_corners, b=30;
xy* result;

GMainLoop *loop;

GstElement *pipeline, *camsource, *caps_yuv, *caps_rgb, *colorspace2, *colorspace, *xvsink;
GstElement *src, *sink;  /* test source and the sink actually placed in the pipeline */
gulong xwinid;
GstBus *bus;

int rgb=1;

static gboolean bus_call (GstBus *bus, GstMessage *msg, gpointer data)
{
  GMainLoop *loop = (GMainLoop *) data;

  switch (GST_MESSAGE_TYPE (msg)) {

    case GST_MESSAGE_EOS:
      g_print ("End of stream\n");
      g_main_loop_quit (loop);
      break;

    case GST_MESSAGE_ERROR: {
      gchar  *debug;
      GError *error;

      gst_message_parse_error (msg, &error, &debug);
      g_free (debug);

      g_printerr ("Error: %s\n", error->message);
      g_error_free (error);

      g_main_loop_quit (loop);
      break;
    }
    default:
      break;
  }

  return TRUE;
}


SinkPipeline::SinkPipeline(QGraphicsView *parent) : QObject(parent)
{
  GstStateChangeReturn sret;

  // Create gstreamer elements
  // pipeline = gst_pipeline_new("gst-test");
  camsource = gst_element_factory_make("v4l2camsrc", NULL);
  caps_yuv = gst_element_factory_make("capsfilter", NULL);
  caps_rgb = gst_element_factory_make("capsfilter", NULL);
  colorspace = gst_element_factory_make("ffmpegcolorspace", NULL);
  //colorspace2 = gst_element_factory_make("ffmpegcolorspace", NULL);
  xvsink = gst_element_factory_make("xvimagesink", NULL);

  // caps_yuv is configured below, so it has to be created here as well;
  // the pipeline itself is only created further down, so don't test it yet
  if (!(camsource && caps_yuv && caps_rgb && colorspace && xvsink)) {
    g_printerr ("One element could not be created. Exiting.\n");
  }

  //  Set up the pipeline

  // we set the input filename to the source element
  char yuvcapsstr[256], rgbcapsstr[256];
  sprintf(yuvcapsstr, "video/x-raw-yuv,width=%d,height=%d,bpp=24,depth=24,framerate=25/1", IMGWIDTH, IMGHEIGHT);
  sprintf(rgbcapsstr, "video/x-raw-rgb,width=%d,height=%d,bpp=32,depth=24,framerate=25/1", IMGWIDTH, IMGHEIGHT);

  g_object_set(G_OBJECT(caps_yuv), "caps", gst_caps_from_string(yuvcapsstr), NULL);
  g_object_set(G_OBJECT(caps_rgb), "caps", gst_caps_from_string(rgbcapsstr), NULL);

  // we add a message handler
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  gst_bus_add_watch (bus, bus_call, loop);
  gst_object_unref (bus);

  if(rgb){
    // The buffer probe for the RGB branch would be attached to this pad
    // (the probe callback is omitted in this excerpt)
    GstPad *pad = gst_element_get_pad(caps_rgb, "src");
    gst_object_unref(pad);
    g_print("RGB\n");
  }else{
    // Likewise for the YUV branch
    GstPad *pad = gst_element_get_pad(caps_yuv, "src");
    gst_object_unref(pad);
    g_print("YUV\n");
  }

  // Create pipeline & test source
  pipeline = gst_pipeline_new ("xvoverlay");
  src = gst_element_factory_make ("videotestsrc", NULL);

  if ((sink = gst_element_factory_make ("xvimagesink", NULL))) {
    sret = gst_element_set_state (sink, GST_STATE_READY);
    if (sret != GST_STATE_CHANGE_SUCCESS) {
      gst_element_set_state (sink, GST_STATE_NULL);
      gst_object_unref (sink);

      if ((sink = gst_element_factory_make ("ximagesink", NULL))) {
        sret = gst_element_set_state (sink, GST_STATE_READY);
        if (sret != GST_STATE_CHANGE_SUCCESS) {
          gst_element_set_state (sink, GST_STATE_NULL);
          gst_object_unref (sink);

          if (strcmp (DEFAULT_VIDEOSINK, "xvimagesink") != 0 &&
              strcmp (DEFAULT_VIDEOSINK, "ximagesink") != 0) {

            if ((sink = gst_element_factory_make (DEFAULT_VIDEOSINK, NULL))) {
              if (!GST_IS_BIN (sink)) {
                sret = gst_element_set_state (sink, GST_STATE_READY);
                if (sret != GST_STATE_CHANGE_SUCCESS) {
                  gst_element_set_state (sink, GST_STATE_NULL);
                  gst_object_unref (sink);
                  sink = NULL;
                }
              } else {
                gst_object_unref (sink);
                sink = NULL;
              }
            }
          }
        }
      }
    }
  }

  if (sink == NULL)
    g_error ("Couldn't find a working video sink.");

  gst_bin_add_many (GST_BIN (pipeline), src, sink, caps_rgb, caps_yuv, colorspace, NULL);
  gst_element_link_many (src, colorspace, caps_rgb, sink, NULL);

  xwinid = parent->winId();
}

SinkPipeline::~SinkPipeline()
{
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
}

// Essentially just sets the pipeline to the PLAYING state
void SinkPipeline::startPipeline()
{
  GstStateChangeReturn sret;

  /* we know what the video sink is in this case (xvimagesink), so we can
   * just set it directly here now (instead of waiting for a prepare-xwindow-id
   * element message in a sync bus handler and setting it there)*/

  gst_x_overlay_set_xwindow_id (GST_X_OVERLAY (sink), xwinid);

  sret = gst_element_set_state (pipeline, GST_STATE_PLAYING);
  if (sret == GST_STATE_CHANGE_FAILURE) {
    gst_element_set_state (pipeline, GST_STATE_NULL);
    gst_object_unref (pipeline);
    // Exit application
    QTimer::singleShot(0, QApplication::activeWindow(), SLOT(quit()));
  }

  // Render frames as soon as they arrive instead of syncing to the clock
  g_object_set(G_OBJECT(sink), "sync", FALSE, NULL);
}


int main(int argc, char *argv[])
{
    QApplication a(argc, argv);
    MainWindow w;

    QGraphicsScene scene;
    scene.setSceneRect(-100.0, -100.0, 200.0, 200.0);

    QGraphicsView graphicsView (&scene);
    graphicsView.resize(400,256);//800,480
    graphicsView.setWindowTitle("Fancy application");
    graphicsView.show();

    img = (float*)malloc(sizeof(float)*400*256);
    myimg = (byte*)malloc(sizeof(byte)*400*256);

    loop = g_main_loop_new (NULL, FALSE);

    // Initialisation
    gst_init (&argc, &argv); //init gstreamer
    SinkPipeline sinkpipe(&graphicsView);
    sinkpipe.startPipeline();

    // Iterate
    g_print("Running...\n");
    g_main_loop_run(loop);

    // Out of the main loop, clean up nicely
    g_print("Returned, stopping playback\n");
    gst_element_set_state(pipeline, GST_STATE_NULL);

    g_print("Deleting pipeline\n");
    gst_object_unref(GST_OBJECT(pipeline));
    free(img);
    free(myimg);
    a.quit();
}
Of course, one has to add
Code:
INCLUDEPATH += /usr/include/gstreamer-0.10 /usr/include/glib-2.0 /usr/lib/glib-2.0/include /usr/include/libxml2
LIBS          += -lgstreamer-0.10 -lgobject-2.0 -lgmodule-2.0 -lgthread-2.0 -lrt -lxml2 -lglib-2.0 -lgstinterfaces-0.10
to the .pro file to compile this source.

The funny thing is that it works fine with the test source, but only displays a white screen when used with the actual source. Any way to fix this?

Btw, tpaixao: /dev/video0 is the main cam, /dev/video1 the front cam. I haven't tried switching between them in gstreamer myself, but maybe it helps.
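Presumably it comes down to v4l2src's "device" property, something like this (an untested sketch):

Code:
g_object_set (G_OBJECT (camera_src), "device", "/dev/video1", NULL);
or device=/dev/video1 in a gst-launch line.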

Thanks in advance.
 

The Following User Says Thank You to Dorfmeister For This Useful Post:
Posts: 143 | Thanked: 99 times | Joined on Jun 2009 @ Houston
#28
Originally Posted by Dorfmeister View Post
Hi,

I think it would be highly useful if someone who has a working Qt camera example published it (the best way for beginners to learn). I'd really appreciate it as well.
Yes, I think that would be extremely helpful. I think klen managed to do it; maybe he could share some code?

Originally Posted by Dorfmeister View Post
The funny thing is that it works fine with the test source, but only displays a white screen when used with the actual source. Any way to fix this?
Exactly! I have the very same symptoms... I'm really lost here...
Help, anyone?
 
Posts: 124 | Thanked: 213 times | Joined on Dec 2009
#29
I use a quick'n'dirty method to create my pipelines... for now, anyway:

Code:
gst_init(NULL, NULL);

pipeline = gst_parse_launch("v4l2src device=/dev/video[0|1] ! xvimagesink", NULL);

GstIterator* iter = gst_bin_iterate_sinks((GstBin*)pipeline);

GstElement* thisElem;

// Find sink at end of pipeline - there should be only one!
if (gst_iterator_next(iter, (void**)&thisElem) == GST_ITERATOR_OK)
{
  sink = thisElem;
  QApplication::syncX();
  gst_element_set_state(pipeline, GST_STATE_READY);
  gst_x_overlay_set_xwindow_id(GST_X_OVERLAY(sink), Widget.winId());
  gst_element_set_state(pipeline, GST_STATE_PLAYING);
}
Basically, play with gst-launch on the cmd-line until you're happy with the results, then paste the arguments into the gst_parse_launch() call.
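For example, something like this on the device (assuming xvimagesink works there) gives a quick preview whose arguments transplant directly into gst_parse_launch():

Code:
gst-launch-0.10 v4l2src device=/dev/video0 ! xvimagesink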
 

The Following User Says Thank You to Dak For This Useful Post:
Posts: 14 | Thanked: 15 times | Joined on Feb 2010 @ bay area, us
#30
I didn't know anyone was still commenting on this thread! Sorry guys, never tried subscribing to the thread....

Anyway, I just posted this on Stack Overflow under my original question. I need to clean up the code before posting it in full, but I think the important bits are here. Let me know if this still isn't working for you all.



Code:
self.cameraWindow = QtGui.QWidget(self)
self.cameraWindow.setGeometry(QtCore.QRect(530, 20, 256, 192))
self.cameraWindow.setObjectName("cameraWindow")
self.cameraWindow.setAttribute(0, 1)  # AA_ImmediateWidgetCreation == 0
self.cameraWindow.setAttribute(3, 1)  # AA_NativeWindow == 3

global wId
wId = self.cameraWindow.winId()

self.camera = Vid(wId)

self.camera.startPrev()

class Vid:
    def __init__(self, windowId):
        self.player = gst.Pipeline("player")
        self.source = gst.element_factory_make("v4l2src", "vsource")
        self.sink = gst.element_factory_make("autovideosink", "outsink")
        self.source.set_property("device", "/dev/video0")
        #self.scaler = gst.element_factory_make("videoscale", "vscale")
        self.fvidscale = gst.element_factory_make("videoscale", "fvidscale")
        self.fvidscale_cap = gst.element_factory_make("capsfilter", "fvidscale_cap")
        self.fvidscale_cap.set_property('caps', gst.caps_from_string('video/x-raw-yuv, width=256, height=192'))
        self.window_id = None
        self.windowId = windowId
        print windowId

        self.player.add(self.source, self.fvidscale, self.fvidscale_cap, self.sink)
        gst.element_link_many(self.source, self.fvidscale, self.fvidscale_cap, self.sink)

        bus = self.player.get_bus()
        bus.add_signal_watch()
        bus.enable_sync_message_emission()
        bus.connect("message", self.on_message)
        bus.connect("sync-message::element", self.on_sync_message)

    def on_message(self, bus, message):
        t = message.type
        if t == gst.MESSAGE_EOS:
            self.player.set_state(gst.STATE_NULL)
        elif t == gst.MESSAGE_ERROR:
            err, debug = message.parse_error()
            print "Error: %s" % err, debug
            self.player.set_state(gst.STATE_NULL)

    def on_sync_message(self, bus, message):
        if message.structure is None:
            return
        message_name = message.structure.get_name()
        if message_name == "prepare-xwindow-id":
            win_id = self.windowId
            assert win_id
            imagesink = message.src
            imagesink.set_property("force-aspect-ratio", True)
            imagesink.set_xwindow_id(win_id)

    def startPrev(self):
        self.player.set_state(gst.STATE_PLAYING)

    def pausePrev(self):
        self.player.set_state(gst.STATE_NULL)
 