Developer Martin Eisenbarth is currently working on a “video hack” for UpStage, in conjunction with the project We Have a Situation. Helen spoke to Martin to find out more about just what a video hack is, and what it will mean for UpStage artists.
What is the goal of the video hack?
The purpose of the video hack is to enable audio and video streaming in UpStage. Until now the only way to include a video-like appearance is to pre-upload media files or make use of the MJPEG (motion jpeg) feature, which takes a live webcam feed and updates it every second or two.
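The snapshot-based MJPEG approach can be sketched roughly as follows (a minimal Python illustration; the `fetch_frame` callable and the timing values are assumptions for demonstration, not UpStage’s actual implementation):

```python
import time

def poll_snapshots(fetch_frame, interval=1.5, count=3, sleep=time.sleep):
    """Emulate the MJPEG feature: grab a fresh still frame every
    `interval` seconds instead of receiving a continuous video stream.

    `fetch_frame` is a callable returning one JPEG frame as bytes;
    in UpStage this would be the live webcam feed (hypothetical here).
    """
    frames = []
    for i in range(count):
        frames.append(fetch_frame())
        if i < count - 1:
            sleep(interval)
    return frames
```

The point of the hack is to replace this once-a-second snapshot loop with genuine continuous streaming.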
What is Red5, and how will it be used in the video hack?
Red5 is a powerful open source streaming solution for Flash-based applications. It was chosen because it is freely available at no cost. In practice it could be replaced by any other software providing Flash-based streaming services, such as Flash Media Server or Wowza Media Server.
Red5 will be used for streaming only; remoting functionality is not needed, as it is already covered by the UpStage text-protocol implementation. In theory, the Red5 install location is independent of the UpStage install location. For convenience it will be on the same server during development and early deployments.
What other technologies will the video hack use?
The video hack will integrate with the technologies UpStage already uses: ActionScript 2, Python with Twisted, and Linux. Beyond that, Red5 introduces some new technologies, namely RTMP (and, not to forget, the related media codecs) and a little bit of Java.
How will it function for the player – will we need any additional software ourselves or will it all work through the UpStage interface?
Basically the media stream will be represented in UpStage as an additional avatar type. The core functionality can be divided into two parts: stream publishing and avatar control (onstage). Publishing the stream, meaning starting and stopping it, is handled by Red5, whereas controlling the avatar (e.g. visibility, positioning, etc.) is done via the UpStage user interface.
There are various approaches to initiating a media stream. Usually it is done either via the Red5 web interface or via any RTMP-capable third-party software connected to Red5. Publishing a stream and controlling an avatar may be combined in the UpStage user interface at a later time, but for now keeping them separate is the simplest way to incorporate streaming functionality as a ‘hack’. Essentially, controlling a stream avatar will not differ from handling the already known avatar types.
So in the end there will be no need to install additional software, as it will be possible to manage everything via the web interface.
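The separation between Red5-side publishing and UpStage-side control described above could be sketched like this (a hypothetical illustration: the class, the RTMP URL scheme, and the method names are assumptions for clarity, not the actual UpStage code):

```python
class StreamAvatar:
    """Sketch of the new avatar type: the media stream is published
    to Red5 under a stream name, while visibility and position are
    controlled like any other avatar via the UpStage text protocol."""

    def __init__(self, name, red5_host="localhost"):
        # Publishing side: handled entirely by Red5, addressed
        # through an RTMP URL (scheme assumed for illustration).
        self.stream_url = "rtmp://%s/upstage/%s" % (red5_host, name)
        # Control side: ordinary onstage avatar state.
        self.visible = False
        self.x, self.y = 0, 0

    def show(self):
        self.visible = True

    def move(self, x, y):
        self.x, self.y = x, y
```

The design choice here is that Red5 never needs to know about avatar state, and UpStage never needs to touch the media data itself.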
Will there be a limit to how many webcams can be on one stage at a time?
That’s a really good question! There are certainly limits to how many streams can be on stage simultaneously. It depends on the power of the streaming server itself, and therefore on the total hardware resources available to it.
The theoretical formula can be broken down to:
number of streams = total bandwidth / bitrate of a single stream
For example: if the streaming server has a total bandwidth of 100 Mbit/s available and all streams equally consume 300 kbit/s, it is possible to handle approximately 333 streams in parallel. In this case, a single stream on the stage can be served to 333 clients (including players and audience). With two streams onstage the number of clients drops to approx. 166, three streams drop the number to 111, and so on…
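The worked example above can be reproduced with a short calculation (a sketch using integer division, which matches the approximate figures in the text):

```python
def max_parallel_streams(total_bandwidth_kbit, stream_bitrate_kbit):
    """number of streams = total bandwidth / bitrate of a single stream"""
    return total_bandwidth_kbit // stream_bitrate_kbit

def clients_per_stage(total_bandwidth_kbit, stream_bitrate_kbit, streams_onstage):
    """Every client receives all streams on the stage, so the client
    count drops as more streams are published simultaneously."""
    return total_bandwidth_kbit // (stream_bitrate_kbit * streams_onstage)
```

With 100 Mbit/s (100,000 kbit/s) of bandwidth and 300 kbit/s per stream, this yields 333 clients for one stream, 166 for two, and 111 for three, as above.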
Technologies like cloud computing and content distribution with reflectors can offer solutions to overcome hardware limitations, but will probably be far more expensive than operating on a lower level.
In future versions of UpStage, will the video hack be able to be incorporated as a stable feature within UpStage?
Yes, sure. All sources are publicly available and can be found on GitHub:
What specific tasks are available for anyone interested in helping?
In short: everything concerning development. Many parts of the UpStage code lack developer documentation, so the process is similar to reverse engineering, at least for me.
In terms of an iterative process, the development can be broken down into the following specific tasks:
- code analysis and debugging
- modifying the code base
- testing the modifications
These apply both on the server and the client side. The UpStage text protocol plays an especially important role, as all communication between client and server is based on it. Integrating a streaming player on the client side is another specific task.
A preliminary list of known bugs is published on GitHub (under ‘Issues’) and may be extended with specific tasks for cooperative work. Right now I am the only developer, and I handle it pragmatically by focusing on programming. Documentation of the code modifications can be found in the GitHub commit log. If you are interested in joining the development effort, feel free to subscribe to the UpStage developer list or join the IRC channel #upstage on Freenode for more detailed information.
The video hack overall has a tight timeframe and will be finalized by the beginning of April 2013, so there is not much time left. I am looking forward to giving any collaborators a warm welcome and answering any further questions that may arise.