This document describes scripts for preparing, playing back and network streaming of video on high-resolution LCD walls, using several methods with and without the SAGE2 software.
Currently, we support the following methods:
- Local playback of hi-res video (tested up to 8k) on an LCD wall (without SAGE2)
- Network streaming of video to SAGE2 using virtual web cameras and external streaming software
- Network streaming of video to SAGE2 using WebRTC
- Network streaming of video to SAGE2 using JPEG
Method 2 is the most powerful network streaming option. Methods 3 and 4 are alternatives with a simpler setup.
- python (version 3.4 or newer)
- libyuri
- virtual camera
- ffmpeg - tested with the original FFmpeg; it may work with libav, but that is not supported in any way
- ultragrid - for method 2
- nodejs - for method 3
Ebuilds for these scripts and their dependencies (mainly libyuri and the virtual web camera) are provided for Gentoo Linux in cave_overlay, and an overlay list XML is available at https://iim.cz/cave.xml.
If you have layman installed, you can run (as root):
cd /etc/layman/overlays
wget https://iim.cz/cave.xml
layman -a cave_overlay
emerge -atv sage_video virtual_camera
When installing without an ebuild, you should:
- Install the dependencies (python, ...)
- Install the scripts: put them anywhere in your PATH
- Create a configuration file in /etc/sage_video/config.json (based on the example python/config.json.sample)
- Do not forget to specify the path to the configs directory in config.json (see the sketch below)
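A minimal sketch of a manual installation, assuming the scripts and the sample configuration file live in the repository's python/ directory and that /usr/local/bin is in your PATH (both are assumptions; adjust the paths to your setup):
# copy the scripts to a directory in PATH (example location)
install -m 755 python/sagevideo python/sagewebcam python/sagewebrtc python/sagewebjpeg /usr/local/bin/
# create the configuration from the provided sample
mkdir -p /etc/sage_video
cp python/config.json.sample /etc/sage_video/config.json
# then edit /etc/sage_video/config.json and set the path to the configs directory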
The sample configuration file looks like this:
{
    "total_x": 9600,
    "total_y": 4320,
    "stripes_x": 5,
    "stripes_y": 1,
    "renderers": {
        "0_0": {
            "dir": "/usr/share/sage_video/configs",
            "address": "sage@xxxx",
            "webcam_root": "/XXXX/XXX/virtual_webcam",
            "alt_ip": "xxxxx"
        }
    },
    "server": {
        "dir": "/usr/share/sage_video/configs"
    }
}
- total_x, total_y - Resolution of the whole wall in pixels
- stripes_x, stripes_y - How the PC nodes (not the displays) are organized. For example, in our case we have 20 displays in 4 rows and 5 columns, driven by 5 PC nodes, each of them driving a single column of 4 displays. So there are 5 stripes along the X axis and only one stripe along the Y axis.
- server.dir - Directory with XML configuration files from the repository.
- renderers - The nodes.
- the name of each renderer is "X_Y", where X is the number of the stripe in the X axis and Y is the number of the stripe in the Y axis (with the 5x1 layout above, the renderers would be named 0_0 through 4_0)
- dir - directory with the XML files (like server.dir)
- address - user and address of the node, in the form user@host
- alt_ip - alternative IP address (if the nodes have multiple network interfaces and you want to use a non-default one, such as a local interconnect over private IP addresses)
- webcam_root - root directory of the virtual webcam installation.
Make sure that the user running the script on the server can access all the nodes over ssh without a password (key-based authentication).
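For example, key-based access can be set up as follows (the address sage@node-0-0 is only a placeholder, use your own node addresses):
# on the server, generate a key pair if you do not already have one
ssh-keygen -t ed25519
# copy the public key to each node (placeholder address)
ssh-copy-id sage@node-0-0
# verify that the login works without a password prompt
ssh sage@node-0-0 true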
The nodes should have the module vcmod (for virtual camera) loaded, preferably set to load automatically during boot.
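A sketch of loading the module by hand and enabling it at boot; the exact mechanism depends on your distribution (the path below assumes systemd's modules-load.d, on Gentoo with OpenRC add the module to /etc/conf.d/modules instead):
# load the virtual camera module now
modprobe vcmod
# load it automatically at boot on systemd-based systems
echo vcmod > /etc/modules-load.d/vcmod.conf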
This solution works as a distributed player. Video is divided into stripes. Each stripe is played out on a separate node and presented on a vertical column of LCD monitors.
- Server - master PC controlling the playback
- Nodes - PCs for playout of individual stripes
- Storage - common disk storage accessible from all nodes
The user starts a script on the server and the file is then played synchronously on all the nodes. The script also starts a webserver for controlling the playback (pause/play).
Internally, it can work in two modes:
- Direct playback - The server and all nodes open the video file in its full resolution; each node has to process the whole file and show only its own part. The server sends messages to the nodes to maintain synchronization. The server can also play sound.
- Prepared videos - The video file is first cut into stripes, one for each node. The server then plays the original video (or a scaled-down version) and each node plays just its own stripe. This mode is more efficient and allows playing higher-resolution videos (8K).
Assuming that all PCs (server and nodes) have a shared storage mounted on /storage:
sagevideo /storage/VIDEO.mp4
This plays a video file /storage/VIDEO.mp4 on all nodes, where each node has to process the full resolution video file.
sagevideo -s /storage/VIDEO.mp4
The same with sound (using JACK) enabled.
sagevideo -p /storage/VIDEO.mp4 -o /storage/VIDEO
This takes the input video file, calculates the required cropping/padding and creates video files for all the stripes, plus one downscaled video file for the server.
sagevideo /storage/VIDEO_small.mp4 -o /storage/VIDEO
This plays the small (downscaled) video file on the server and the pre-encoded stripes created in the previous step on the nodes.
In our setup with five nodes, each driving four 1080p displays, the direct method can be used for formats up to 4K30p. For higher resolutions or frame rates, the video must first be prepared into stripes.
The setup consists of a server that sends the video (either from a video file or from a capture card), receivers on nodes with virtual web cameras and the SAGE2 application displaying the video.
The actual network streaming is implemented using external streaming software, such as Ultragrid, with H.264 compression and streaming in RTP packets. The PC nodes decode the received video streams and put them into virtual web cameras. The SAGE2 application then displays the output of the web cameras inside web browsers.
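For illustration only, a bare UltraGrid sender streaming a DeckLink capture with H.264 compression to a single node might look roughly like the line below; the sagewebcam script builds the real command lines (and handles the receiving side, which decodes into the virtual web camera), and the exact option syntax may differ between UltraGrid versions:
# illustrative UltraGrid sender: DeckLink input, H.264 compression, RTP to one node (placeholder host name)
uv -t decklink -c libavcodec:codec=H.264 -m 1500 node-0-0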
- Load the vcmod module for the virtual web camera on all nodes
- Run the sagewebcam script
- Run the SAGE2 application local_webcam to show the web camera output
Running the sagewebcam script:
sagewebcam /storage/VIDEO.mp4
This script configures the virtual cameras on all nodes, starts receivers, opens a video source and streams it to all nodes.
There are several parameters that change the behaviour:
- -s/--streamer - Sets the external streaming software. The value can be 'uv' for Ultragrid or 'yuri' for an alternative using libyuri's internal streaming routines.
- -a/--alternative - Use alternative addresses (in case of networking problems on the main network interface)
- -d/--decklink - Use a DeckLink capture card. The filename should be set to a valid DeckLink format (e.g., 1080p25)
- -l/--log - Saves the output to log files (/tmp/sagewebcam*)
- -m/--mtu - Specifies the MTU
- -e/--encode - If the input video is stored in an H.264-encoded file, stream it as it is, without decoding and re-encoding it
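For example, the options might be combined as follows (the file name, the DeckLink format and the MTU value are examples only):
# stream an H.264 file via Ultragrid without re-encoding it, keeping log files
sagewebcam -s uv -e -l /storage/VIDEO.mp4
# stream live video from a DeckLink capture card in 1080p25
sagewebcam -d 1080p25
# use the alternative node addresses and a larger MTU
sagewebcam -a -m 9000 /storage/VIDEO.mp4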
The virtual web cameras are automatically created by the sagewebcam script exactly for the resolution of the input video. The same virtual web cameras can be used repeatedly for streaming multiple video sources, as long as the resolution does not change. If you change the resolution of the input video, it is necessary to stop all Chrome/Electron web browsers of SAGE2 before running the sagewebcam script and to start them again afterwards. Otherwise the web browsers keep the virtual web cameras open and they cannot be re-created by the sagewebcam script.
The sender side uses a virtual web camera and a web browser (Chrome), which takes video from the virtual web camera and streams it to all web browsers of SAGE2. The sender side PC must have the vcmod module loaded.
Set the address of the sender in the sage_webrtc.js script.
On the sender, in the webrtc directory, run:
npm install
On the sender, in the webrtc directory, run:
npm run start
On the sender, run the sagewebrtc script:
sagewebrtc /storage/video.mp4
On the sender, start a Chrome web browser and point it to the URL http://localhost:8800
Now you can start the SAGE2 application webrtc_test, which should receive the streamed video. You can see statistics on the web page opened by the web browser on the sender.
The sender opens a file, encodes frames to JPEG images and starts a webserver providing them over the network. The SAGE2 application then periodically requests images from the web server and displays them.
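For illustration, the polling model looks roughly like the loop below; the port and image path are hypothetical, the real endpoint is defined by the sagewebjpeg script and used internally by the yuri_image application:
# repeatedly fetch the current frame from the sender (hypothetical URL)
while true; do
  curl -s -o frame.jpg http://sender:8080/image.jpg
  sleep 0.04
done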
Set the address of the sender in the sage_jpeg.js script.
On the sender, start the sagewebjpeg script:
sagewebjpeg /storage/VIDEO.mp4
Now you can start the SAGE2 application yuri_image, which should receive the streamed video.