Posted by: Badri | August 31, 2012

Kinect and Reactive Extensions (Rx)

Kinect and Reactive Extensions (Rx) are made for each other – one pumps out events and the other handles them. I was working on a WPF application using Kinect. One of the screens has to (a) track the user in front of the sensor and show him/her in motion, more like an infrared image, (b) show a ticking timer and (c) track the skeleton points, continuously compute angles and apply some business logic. Too much for one screen! Doing everything in the UI thread would be a disaster, so I needed more threads to handle these things, but ultimately the end result of all the processing is the UI getting updated. Doing this by creating my own threads would result in sub-optimal code. Rx to the rescue.

Reactive Extensions’ Observable.FromEvent() allows Kinect frame-ready events to be converted into an observable sequence. Once I have an observable sequence, I can leverage the features of Rx. For example, I need the time stamp along with every skeleton received; this helps me calculate the duration between the two skeletons (start and end) that I was interested in. I can simply use Timestamp() to let Rx record the time stamp for me.
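To illustrate what Timestamp() buys (this snippet is only a sketch of mine, not code from the actual screen; the start field and OnTimestampedSkeleton method are hypothetical), every element arrives wrapped in a Timestamped<T> carrying a DateTimeOffset, so the duration between a start skeleton and a later one is a plain subtraction:

private DateTimeOffset? start = null;

// Hypothetical handler for elements of a timestamped skeleton sequence.
private void OnTimestampedSkeleton(Timestamped<Skeleton> item)
{
    // Remember the time stamp of the first skeleton seen.
    if (start == null)
    {
        start = item.Timestamp;
        return;
    }

    // Duration between the start skeleton and this one.
    TimeSpan elapsed = item.Timestamp - start.Value;
    // Apply business logic based on elapsed...
}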

The thing I like the most is that Rx allows me to subscribe on the worker threads in the thread pool and observe on the dispatcher for updating the UI, with just one line of code. For drawing the infrared-like picture of the user, I can leverage the sampling feature: I can trade fidelity for CPU utilization by sampling. I might not need all 30 frames generated in a second; to write into my writeable bitmap and show the infrared image, I might need only one image every half second. It might get jittery, but it is gentle on the CPU. More importantly, I get all of this by writing one line of code. Building on this, the sampling frequency can be dynamically increased or decreased based on the number of CPU cores and the CPU usage at that point in time. It is just a great feature. Hail Erik Meijer!
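As a rough sketch of how the sampling rate could adapt to the machine (this is my illustration, not code from the post; AdaptiveSampling and ChooseInterval are hypothetical names), the interval can be picked from the core count and fed to the Sample overload that takes a sampler observable, so the rate can change while the subscription is alive:

using System;
using System.Reactive.Linq;

internal static class AdaptiveSampling
{
    // Pick a sampling interval from the core count. A live CPU-usage reading
    // could be substituted here to make the choice truly dynamic.
    public static TimeSpan ChooseInterval()
    {
        return Environment.ProcessorCount >= 4
            ? TimeSpan.FromMilliseconds(250)   // more cores, sample more often
            : TimeSpan.FromMilliseconds(500);  // fewer cores, stay gentle on the CPU
    }

    // Re-evaluate the interval once a second and switch to a ticker running
    // at the latest chosen rate.
    public static IObservable<long> CreateSampler()
    {
        return Observable
            .Interval(TimeSpan.FromSeconds(1))
            .Select(_ => Observable.Interval(ChooseInterval()))
            .Switch();
    }
}

In the Sensor class below, .Sample(interval) could then be replaced with .Sample(AdaptiveSampling.CreateSampler()).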

Here is the Sensor class that uses Rx to handle Kinect events. This class can be dependency-injected into a view model class in an MVVM scenario. Of course, an interface has to be extracted out of this class, and the interface must include the callbacks defined for the view model to hook into. Just so that I’m not too abstract here: I can define a property Action<Skeleton, DateTimeOffset> SkeletonCallback { get; set; } in ISensor and have the property implemented in Sensor. The view model can set this property to one of its own methods, and the Sensor class can call that method from HandleFrame() through the Action delegate property SkeletonCallback (a sketch of this interface follows the class below).

using System;
using System.Reactive;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using Microsoft.Kinect;

internal class Sensor : IDisposable
{
    private KinectSensor sensor = null;
    private IDisposable framesSubscription = null;
    private IDisposable imagesSubscription = null;

    public Sensor() { }

    // Start Kinect and use the first sensor.
    // Subscribe to the AllFramesReady and DepthFrameReady events.
    public void Start()
    {
        if (KinectSensor.KinectSensors.Count > 0)
        {
            sensor = KinectSensor.KinectSensors[0];
            sensor.DepthStream.Enable(DepthImageFormat
                                          .Resolution320x240Fps30);
            sensor.SkeletonStream.Enable();
            sensor.Start();

            // Each AllFramesReadyEventArgs will be timestamped.
            // Subscription is set to run on ThreadPool but observer
            // will be notified using UI thread dispatcher.
            // The conversion overload of FromEvent is used so the same
            // handler instance is attached and later detached.
            var frames = Observable
                .FromEvent<EventHandler<AllFramesReadyEventArgs>,
                           AllFramesReadyEventArgs>(
                    handler => (s, e) => handler(e),
                    handler => sensor.AllFramesReady += handler,
                    handler => sensor.AllFramesReady -= handler)
                .Timestamp()
                .ObserveOnDispatcher()
                .SubscribeOn(Scheduler.ThreadPool);

            // Subscription is set to run on ThreadPool but observer
            // will be notified using UI thread dispatcher.
            // Not all events are processed, only a sample.

            // If player animation gets jittery, decrease interval
            TimeSpan interval = TimeSpan.FromMilliseconds(500);

            var images = Observable
                .FromEvent<EventHandler<DepthImageFrameReadyEventArgs>,
                           DepthImageFrameReadyEventArgs>(
                    handler => (s, e) => handler(e),
                    handler => sensor.DepthFrameReady += handler,
                    handler => sensor.DepthFrameReady -= handler)
                .Sample(interval)
                .ObserveOnDispatcher()
                .SubscribeOn(Scheduler.ThreadPool);

            framesSubscription = frames.Subscribe(HandleFrame);
            imagesSubscription = images.Subscribe(HandleImage);
        }
    }

    private void HandleFrame(Timestamped<AllFramesReadyEventArgs> args)
    {
        // Timestamped<T> is a struct, so check the wrapped value instead.
        if (args.Value != null)
        {
            // Get your Skeleton from args.Value and the time stamp
            // from args.Timestamp and do your thing...
        }
    }

    private void HandleImage(DepthImageFrameReadyEventArgs args)
    {
        if (args != null)
        {
            // Get your DepthImageFrame from args and do your thing
        }
    }

    public void Dispose()
    {
        if (framesSubscription != null)
            framesSubscription.Dispose();

        if (imagesSubscription != null)
            imagesSubscription.Dispose();

        if (sensor != null)
        {
            sensor.Stop();
            sensor.AudioSource.Stop();
            sensor.Dispose();
        }
    }
}
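For completeness, here is a minimal sketch (mine, based on the description above rather than code from the post) of the ISensor interface mentioned earlier, together with how a hypothetical view model could hook into it:

internal interface ISensor : IDisposable
{
    // Set by the view model; invoked by Sensor from HandleFrame().
    Action<Skeleton, DateTimeOffset> SkeletonCallback { get; set; }

    void Start();
}

// Inside a hypothetical view model that receives an ISensor through
// dependency injection:
//
//     sensor.SkeletonCallback = OnSkeleton;
//     sensor.Start();
//
//     private void OnSkeleton(Skeleton skeleton, DateTimeOffset timestamp)
//     {
//         // Compute angles, apply business logic, update bound properties...
//     }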

Responses

  1. Thanks Badri. It was a good learning for me..

