Introduction
In this video I look at video streaming from the Beaglebone Black using RTP, UDP unicasting, and UDP multicasting, which allows one-to-many streaming. In all of these examples I used the VLC media player to display the video data. The final part of the video describes how you can build your own software implementation that displays the data using LibVLC and the Qt framework. The advantage of doing this is that you can add your own data-processing and control functionality to the video display. You could even develop code for capturing multiple streams simultaneously and processing the data, for example for stereo imaging.
The Video
Please note that I use Camtasia to capture the video stream on the PC desktop for this video and it limits the framerate that I can capture. The actual framerate of the video being streamed appears to be around 30 frames per second, which is very fluid. The camera works best if it is stationary due to the compression algorithm used.
If you use this code or the content of the associated video in your research, please cite:
Molloy, D. [DerekMolloyDCU]. (2013, July 19). Beaglebone: Streaming Video from Embedded Linux [Video file]. Retrieved from http://youtu.be/-6DBR8PSejw
The Important Blog Posts (in Order)
The first important page is the first post on this topic: Beaglebone: Video Capture and Image Processing on Embedded Linux using OpenCV, which looks at how to get started with video capture and image processing on the Beaglebone. It is an introductory video that gives people who are new to this topic a starting point to work from.
Once you have that working, the following posts are the important ones on the topic of streaming video:
- Streaming Video using RTP on the Beaglebone Black
- UDP Unicast and Multicast Streaming Video using the Beaglebone Black
- Custom Video Streaming Player using LibVLC and Qt
These are the core posts that are discussed in the video.
Source Code
The code for this video is available at github.com/derekmolloy/boneCV/, but the important code is presented below.
The Execution Scripts are as follows:
streamVideoRTP
#!/bin/bash
echo "Video Streaming for the Beaglebone - derekmolloy.ie"
echo "Piping the output of capture to avconv"

# Next line not necessary if you are using my -F option on capture
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1

# Pipe the output of capture into avconv/ffmpeg
# capture "-F"  My H264 passthrough mode
#         "-o"  Output the video (to be passed to avconv via pipe)
#         "-c0" Capture 0 frames, which means infinite frames in my program
# avconv  "-re" Read input at the native frame rate
#         "-i -" Take the input from the pipe
#         "-vcodec copy" Do not transcode the video
#         "-f rtp rtp://192.168.1.4:1234/" Force RTP output to the address of my PC on port 1234
./capture -F -o -c0 | avconv -re -i - -vcodec copy -f rtp rtp://192.168.1.4:1234/
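To view the RTP stream, VLC needs a session description (SDP). When avconv starts streaming with -f rtp it prints an SDP to the console; copy that into a file on the PC (e.g. stream.sdp, a name chosen here for illustration) and open it with VLC. The printed description looks roughly like the sketch below (the addresses match the script above; the payload type number and tool line may differ on your build):

```
v=0
o=- 0 0 IN IP4 127.0.0.1
s=No Name
c=IN IP4 192.168.1.4
t=0 0
a=tool:libavformat
m=video 1234 RTP/AVP 96
a=rtpmap:96 H264/90000
```

Opening stream.sdp in VLC (Media → Open File) should then start playback of the RTP stream.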
streamVideoUDP
#!/bin/bash
echo "Video Streaming for the Beaglebone - derekmolloy.ie"
echo "Piping the output of capture to avconv"

# Next line not necessary if you are using my -F option on capture
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1

# Pipe the output of capture into avconv/ffmpeg
# capture "-F"  My H264 passthrough mode
#         "-o"  Output the video (to be passed to avconv via pipe)
#         "-c0" Capture 0 frames, which means infinite frames in my program
# avconv  "-re" Read input at the native frame rate
#         "-i -" Take the input from the pipe
#         "-vcodec copy" Do not transcode the video
#         "-f mpegts udp://192.168.1.4:1234" Send an MPEG-TS stream over UDP to my PC on port 1234
./capture -F -o -c0 | avconv -re -i - -vcodec copy -f mpegts udp://192.168.1.4:1234
streamVideoMulti
#!/bin/bash
echo "Video Streaming for the Beaglebone - derekmolloy.ie"
echo "Piping the output of capture to avconv"

# Next line not necessary if you are using my -F option on capture
v4l2-ctl --set-fmt-video=width=1920,height=1080,pixelformat=1

# Pipe the output of capture into avconv/ffmpeg
# capture "-F"  My H264 passthrough mode
#         "-o"  Output the video (to be passed to avconv via pipe)
#         "-c0" Capture 0 frames, which means infinite frames in my program
# avconv  "-re" Read input at the native frame rate
#         "-i -" Take the input from the pipe
#         "-vcodec copy" Do not transcode the video
#         "-f mpegts udp://226.0.0.1:1234" Send an MPEG-TS stream over UDP to the multicast group on port 1234
./capture -F -o -c0 | avconv -re -i - -vcodec copy -f mpegts udp://226.0.0.1:1234
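On the receiving PC, the two UDP variants can be opened in VLC with udp:// URLs; the '@' tells VLC to listen (or join a group) rather than connect out. A small sketch of the receive side, assuming the addresses used in the scripts above (adjust for your own network):

```shell
#!/bin/bash
# Receive-side helper (run on the PC, not the Beaglebone).
PORT=1234
MCAST_GROUP=226.0.0.1

# streamVideoUDP sends to this PC directly: listen on the local port.
UNICAST_URL="udp://@:${PORT}"

# streamVideoMulti sends to the group: join 226.0.0.1 on the same port.
MULTICAST_URL="udp://@${MCAST_GROUP}:${PORT}"

echo "Unicast:   vlc ${UNICAST_URL}"
echo "Multicast: vlc ${MULTICAST_URL}"
```

For example, running vlc udp://@226.0.0.1:1234 on several PCs on the same LAN lets them all join the multicast stream simultaneously, which is the one-to-many case mentioned in the introduction.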
I’ve been trying to run your code, but I keep getting this error:
pipe:: Invalid data found when processing input
Clearly avconv is receiving the wrong input type, but I do not know how to configure capture.c to give it the right input.
Any help?
What webcam are you using? I think avconv requires an input in H.264 format; the camera Derek uses outputs this natively, so check whether your webcam can output H.264. Also, on this page Derek goes over how to use the capture program: http://derekmolloy.ie/beaglebone/beaglebone-video-capture-and-image-processing-on-embedded-linux-using-opencv/
I can’t remember for sure, but I think Derek mentions what to do if your webcam can’t output H.264.
Hi,
I am having the same problem. Did you find any workarounds for that?
Got the same error message as Manjinder Singh
Same pipe error here too.
Thanks a lot for the source code – it does exactly what I wanted to do with my BBB, and it worked straight away.
In principle the following would have been much simpler and should also work, but for some reason it doesn’t…
avconv -f video4linux2 -i /dev/video0 -vcodec copy -f rtp rtp://192.168.12.102:8090/
Do you know why? I guess you also tried it at length.
I still don’t understand why streaming of video always has to be so difficult…
Hi Nicolas,
Yes, I did try that and I too am at a loss for why it is not working.
Derek.
Hi. I tested this blog post. It’s good. Thanks.
But the source is not sending audio.
Please tell me how to send the audio as well.
Thanks.
Really impressive blog,
I was wondering whether the latency (which is already low) could be lowered further with another webcam, like the newer C930e. I read that the C920 gives a latency of around 1 s (because of the encoding in the webcam), and some people have tested the C930e and got a latency of around 0.2 s with this new Logitech camera. Do you think that this could also be possible in combination with a BBB?
Keep going on with this good stuff.
Hi Derek,
I just wanted to send you a BIG thank you! All the work you’ve done and compiled together for everyone is amazing!
I’ve read all your posts and videos related to C920 + BBB. I managed to get all working and streaming with RTP, now I’m digging into the code and trying to understand every little piece, so then I can modify it in the future to suit my needs (I’m using C920 + BBB + Qt, so your tutorials were extremely helpful).
Hey Derek, thanks a lot. Your video and your blogs helped me a lot in learning image processing on the Beaglebone Black. I am making a project using the Beaglebone Black, and I have captured pictures and performed edge detection with it. But after capturing video, it does not play in the VLC media player: it shows that we took a 20-second video, but the image remains still. We also tried GStreamer; the results were better than with VLC, but still not exactly what we want.
So please guide me; please reply as soon as possible.
Hi, Thanks for all the great work. I really appreciate your work. Will you please guide me on how to transfer this video stream over wifi to another beaglebone board.
Hi, I’m not sure what the application is, but you could write C++ client socket code to send the MPEG-4 stream from the first board to a C++ server socket running on the second board. If you are planning to play the MPEG-4 stream on the second BBB, be aware that it will likely not work; you would have to use a Raspberry Pi to play the data, as it has built-in MPEG-4 hardware decoding. Interesting project! Derek.
Hello Sir,
I am trying to do the same project with a Beaglebone Black and a Logitech C170 instead of the HD C920.
The C170 only supports 640×480; can you help me modify your code (especially capture.c) to work with the C170?
Great fan of your work. Greetings and Thanks.
Hello there,
I have a Beaglebone Black board. My goal is to play videos endlessly on that board. I am new to this; while googling I came across this blog, so any help would be appreciated.
Great tutorial, I learned a lot from it. But I don’t understand why you put in all that effort and wrote the code for the H.264 format, when most of the people watching this won’t have a camera with that format, and in any case will be using YUV for robotics image processing etc., which is (if I’m not mistaken) what you are making this tutorial for.
Thanks Shagas, having H264 gives you full HD streaming, which would not be possible with MJPEG cameras and the BBB. I do use the camera for computer vision applications too, but on the BBB directly in the RGB colour space. Kind regards, Derek.
Hello, this info has been very helpful, but I have a question.
I am wondering why it is necessary to use ffmpeg or avconv to pass the H.264-compressed video stream from capture to MPEG-TS. If the flags “-vcodec copy -f” are being used, no transcoding is being done by ffmpeg, so why use it? What am I missing?
Where can I get ./capture? is it somewhere on my beaglebone black?
Thanks for the video. I tried to do this with a Logitech C270, which only supports MJPEG. I changed your code in capture.c, compiled it, captured video and viewed it after transferring it, but on streaming I get an error: “Format mjpeg detected only with low score of 25, misdetection possible!” The program carries on and tries to send the data, but I can’t see it with VLC. That’s the only error I get. Any ideas?
Thank you in advance
Loc 12938: In the example where you configure the camera to use a certain set of pixel dimensions, I am having a problem. Basically, even though the v4l2-ctl help reports that it recognizes the -d and --device= parameters, it doesn’t actually seem to. For instance, when I try, I get this…
****************************************
$ v4l2-ctl --set-format-video=width=640,height=480,pixelformat=1 -d 0
v4l2-ctl: unrecognized option '--set-format-video=width=640,height=480,pixelformat=1'
Unknown argument `-d'
Usage:
Common options:
--all display all information available
…
…
-d, --device=<dev> use device <dev> instead of /dev/video0
if <dev> is a single digit, then /dev/video<dev> is used
****************************************
So you can see that it is supposed to recognize the device parameter(s); however, it simply doesn’t. Therefore I cannot get a pixel dimension configured, and consequently I cannot actually use the capture program a little later in the chapter. Because I have the C910 (not the C920) camera, I cannot use the H.264 codec, because my camera doesn’t seem to support it. So I need to recompile the capture.c file to lengthen the select() timeout, which is easy enough. Basically though, this line sets ‘r’ to 0…
~~~~~~~~~~~~~~~~~~~~~~~~~~~
r = select(fd + 1, &fds, NULL, NULL, &tv);
~~~~~~~~~~~~~~~~~~~~~~~~~~~
…so I keep getting a select() error. I reviewed the man page for select and I think it’s just because select() is timing out (and thus returning 0). And I *think* it’s timing out because my camera is not set to the default 640×480 pixel format, which brings me back to that v4l2-ctl command. Whew! Anyway, I am unsure where to go from here, but I can’t see how you got that line to work (the “v4l2-ctl --set-fmt…” line), given the error I showed above. Any ideas?
My team and I are using your library to stream video from our robot to a pc. We were hoping to reduce the latency. I didn’t know if you had some insight as to what sort of settings to change. We are not looking for a HD stream, just one that has lower latency.
Thanks
Tim
Hey Derek,
I am doing onboard image processing. My board, which is of course not a BBB, will be inside an unmanned underwater robot. I want to see what the robot sees, which is not the direct stream from the camera but the OpenCV output. Any clue how I do this? I am not familiar with networking concepts, but I am willing to learn; I just lack the time.
Hi Dalton, it is going to be difficult given the amount of data that you would have to transact and the processing capability of the BeagleBone. However, if you can use very low frame rates then you could use Qt sockets and send encoded (e.g., JPG) image frames directly to the viewer application. If you look at the Valent(FX) FPGA board you will see that they have a solution for this, but it builds in a significant layer of complexity. Kind regards, Derek.
Hey Derek,
I was using some of this code and I was wondering: do you think there’s any way to run the capture video program from a PHP site? I looked and prodded around the internet for a way to do something similar, but nothing came up. Any ideas?
Thanks,
– Andrew
Hi Derek,
thanks for sharing your project. Do you have any clue what the bottleneck is that causes the video to lag?