Hands-on labs for building a sample application with Windows 8.1 and the Kinect 2 for Windows
Estimated Time to Complete: 40 minutes
This lab is part of a series of hands-on labs which teach you how to create a Windows 8.1 Store Application using almost every available feature of the Kinect 2. This is the ninth lab in the series; it teaches you how to use the Kinect 2 to get face points and use them to manipulate a camera in a 3D game.
This lab will explain the following:
To simplify this tutorial, an MSDN Windows 8.1 game sample has been extracted into a reusable portable library. This library is called DirectXSceneStore, and it is a panel which renders a 3D shooter game: a room with targets to hit. The game camera is exposed through a method that sets its pitch and yaw, which you will use later in this lab.
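For orientation, the two members of the panel you will call later in this lab are SetYawPitch() and Fire(). A minimal usage sketch, assuming the panel has been added to the page with the name DXScenePanel (as you will do below), looks like this; the values shown are placeholders, and the scaling actually used in this lab appears later:

    // Sketch only: DXScenePanel is the x:Name the panel is given in MainPage.xaml later in this lab.
    float yawValue = -0.2f;   // scaled yaw input
    float pitchValue = 0.1f;  // scaled pitch input
    this.DXScenePanel.SetYawPitch(yawValue, pitchValue); // aim the game camera
    this.DXScenePanel.Fire();                            // shoot a ball at the targets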
This lab, and all subsequent labs in this series, are built using C# and assume you have a fundamental knowledge of the C# language.
The screenshots here are from Visual Studio Community Edition.
Start by adding the DirectXSceneStore library to the project. It is a native (C++) library that uses DirectX to render the game, so some of its resources are external: they must be accessible from the library's location, but they don't have to be included in your project.
Add the DirectXSceneStore.winmd library as a reference by browsing to the Libraries directory.
Note your current configuration settings by selecting BUILD > Configuration Manager... and checking the Active Solution Platform: on the top right. Make sure it is set to x64.
Build and run the application and make sure there are no errors.
To get the game up and running, open the MainPage.xaml.cs file and add a new DisplayFrameType of FaceGame.
namespace Kinect2Sample
{
    public enum DisplayFrameType
    {
        Infrared,
        Color,
        Depth,
        BodyMask,
        BodyJoints,
        BackgroundRemoved,
        FaceOnColor,
        FaceOnInfrared,
        FaceGame
    }
}
Open the MainPage.xaml to add the panel which will host the game. The game is hosted in a user control (a panel) which can be any size within the application. You will position it in the center of the page along with all the other frame presenters from the previous labs. The game will load itself on app startup as it is added to the page, but it will only be visible when the CurrentDisplayFrameType is set to FaceGame.
Add the game panel, then a button to display it, then a handler for the button click event.
<Page
    x:Class="Kinect2Sample.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:local="using:Kinect2Sample"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    xmlns:dx="using:DirectXSceneStore"
    mc:Ignorable="d">
    <!-- ... -->
    <Viewbox Grid.Row="1" HorizontalAlignment="Center">
        <Grid x:Name="BodyJointsGrid" Background="Transparent" Width="512" Height="414"/>
    </Viewbox>
    <dx:ScenePanel x:Name="DXScenePanel"
                   Grid.Row="1"
                   Margin="20"
                   Visibility="{Binding CurrentDisplayFrameType,
                                Converter={StaticResource DisplayTypeToVisibilityConverter},
                                ConverterParameter=FaceGame}" />
    <Viewbox Grid.Row="1" HorizontalAlignment="Center">
        <Canvas x:Name="FacePointsCanvas"/>
    </Viewbox>
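The Visibility binding above uses the DisplayTypeToVisibilityConverter from the earlier labs in this series. If your project does not already contain it, a minimal sketch of such a converter (an assumption about its implementation, not the exact code from the earlier labs) could look like this:

    using System;
    using Windows.UI.Xaml;
    using Windows.UI.Xaml.Data;

    namespace Kinect2Sample
    {
        // Hypothetical sketch: shows the bound element only when the current
        // DisplayFrameType matches the ConverterParameter (e.g. FaceGame).
        public class DisplayTypeToVisibilityConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, string language)
            {
                bool isMatch = value != null && parameter != null &&
                    value.ToString() == parameter.ToString();
                return isMatch ? Visibility.Visible : Visibility.Collapsed;
            }

            public object ConvertBack(object value, Type targetType, object parameter, string language)
            {
                throw new NotImplementedException();
            }
        }
    }

The converter also needs to be registered as a resource (for example in the Page or App resources) under the key DisplayTypeToVisibilityConverter so the StaticResource reference above resolves.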
<Button Style="{StaticResource FrameSelectorButtonStyle}"
        Click="InfraredFaceButton_Click">
    <TextBlock Text="Infrared Face" TextWrapping="Wrap" />
</Button>
<Button Style="{StaticResource FrameSelectorButtonStyle}"
        Click="FaceGameButton_Click">
    <TextBlock Text="Face Game" TextWrapping="Wrap" />
</Button>
</StackPanel>
private void FaceGameButton_Click(object sender, RoutedEventArgs e)
{
    SetupCurrentDisplay(DisplayFrameType.FaceGame);
}
Build and run the application, then click the Face Game button; the game should present itself. It may still be loading the scene, so you may have to wait a moment before the level begins.
It's a boring game without any interaction, so let's update the game with input retrieved from face data.
In the MainPage.xaml.cs SetupCurrentDisplay() method, add a new case to the switch for the DisplayFrameType of FaceGame. Within this case, you will set up the FacePointsCanvas, which will display the face points for debugging.
private void SetupCurrentDisplay(DisplayFrameType newDisplayFrameType)
{
    //...
    switch (CurrentDisplayFrameType)
    {
        case DisplayFrameType.Infrared:
            //...
        case DisplayFrameType.Color:
            //...
        case DisplayFrameType.Depth:
            //...
        case DisplayFrameType.BodyMask:
            //...
        case DisplayFrameType.BodyJoints:
            //...
        case DisplayFrameType.BackgroundRemoved:
            //...
        case DisplayFrameType.FaceOnColor:
            //...
        case DisplayFrameType.FaceOnInfrared:
            //...
        case DisplayFrameType.FaceGame:
            colorFrameDescription =
                this.kinectSensor.ColorFrameSource.FrameDescription;
            this.CurrentFrameDescription = colorFrameDescription;
            this.FacePointsCanvas.Width = colorFrameDescription.Width;
            this.FacePointsCanvas.Height = colorFrameDescription.Height;
            break;
        default:
            break;
    }
}
In the Reader_MultiSourceFrameArrived() method, add a new case for FaceGame that calls a new method to update the look direction in the game from the face data.
private void Reader_MultiSourceFrameArrived(MultiSourceFrameReader sender, MultiSourceFrameArrivedEventArgs e)
{
    //...
    switch (CurrentDisplayFrameType)
    {
        case DisplayFrameType.Infrared:
            //...
        case DisplayFrameType.Color:
            //...
        case DisplayFrameType.Depth:
            //...
        case DisplayFrameType.BodyMask:
            //...
        case DisplayFrameType.BodyJoints:
            //...
        case DisplayFrameType.BackgroundRemoved:
            //...
        case DisplayFrameType.FaceOnColor:
            //...
        case DisplayFrameType.FaceOnInfrared:
            //...
        case DisplayFrameType.FaceGame:
            FaceGameLookUpdate();
            break;
        default:
            break;
    }
}
The FaceGameLookUpdate method gets the latest face results and uses one of the retrieved faces to affect the game. As a debug helper, all the face points are displayed as ellipses so you can see whose face is controlling the game. You also need to extract the rotational data (in degrees) from the face; this is done in a new method.
private void FaceGameLookUpdate()
{
    this.FacePointsCanvas.Children.Clear();
    FaceFrameResult[] results = faceManager.GetLatestFaceFrameResults();
    for (int i = 0; i < results.Count(); i++)
    {
        if (results[i] != null)
        {
            foreach (KeyValuePair<FacePointType, Point> facePointKVP in
                results[i].FacePointsInColorSpace)
            {
                if (facePointKVP.Value.X == 0.0
                    || facePointKVP.Value.Y == 0.0)
                {
                    break;
                }
                Size ellipseSize = new Size(10, 10);
                Ellipse ellipse = new Ellipse();
                ellipse.Width = ellipseSize.Width;
                ellipse.Height = ellipseSize.Height;
                ellipse.Fill = new SolidColorBrush(Colors.Red);
                Canvas.SetLeft(ellipse, facePointKVP.Value.X -
                    (ellipseSize.Width / 2));
                Canvas.SetTop(ellipse, facePointKVP.Value.Y -
                    (ellipseSize.Height / 2));
                this.FacePointsCanvas.Children.Add(ellipse);
            }

            double pitch, roll, yaw = 0;
            ExtractFaceRotationInDegrees(results[i].FaceRotationQuaternion,
                out pitch, out yaw, out roll);

            // TODO - use pitch and yaw to move camera in game
            break;
        }
    }
}
Add the ExtractFaceRotationInDegrees method at the bottom of the MainPage class. This method converts the face rotation quaternion into separate pitch, yaw, and roll values, in degrees.
private static void ExtractFaceRotationInDegrees(Vector4 rotQuaternion,
    out double pitch, out double yaw, out double roll)
{
    double x = rotQuaternion.X;
    double y = rotQuaternion.Y;
    double z = rotQuaternion.Z;
    double w = rotQuaternion.W;

    // convert face rotation quaternion to Euler angles in degrees
    pitch = Math.Atan2(2 * ((y * z) + (w * x)), (w * w) -
        (x * x) - (y * y) + (z * z)) / Math.PI * 180.0;
    yaw = Math.Asin(2 * ((w * y) - (x * z))) / Math.PI * 180.0;
    roll = Math.Atan2(2 * ((x * y) + (w * z)), (w * w) +
        (x * x) - (y * y) - (z * z)) / Math.PI * 180.0;
}
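As a quick sanity check on the conversion (not part of the lab code), a quaternion representing a pure 30 degree head turn about the vertical axis has X = 0, Z = 0, Y = sin(15 degrees), W = cos(15 degrees), and should come back as yaw of roughly 30 with pitch and roll near 0. A hypothetical check, assuming the Kinect Vector4 struct can be constructed directly (in the app it always comes from FaceFrameResult.FaceRotationQuaternion), might look like this:

    // Hypothetical sanity check only; place it anywhere inside MainPage for a quick test.
    Vector4 q = new Vector4
    {
        X = 0.0f,
        Y = (float)Math.Sin(15.0 * Math.PI / 180.0),
        Z = 0.0f,
        W = (float)Math.Cos(15.0 * Math.PI / 180.0)
    };
    double pitch, yaw, roll;
    ExtractFaceRotationInDegrees(q, out pitch, out yaw, out roll);
    // yaw is now approximately 30.0; pitch and roll are approximately 0.0.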
Finish the FaceGameLookUpdate() method. You could pass the pitch and yaw directly to the DXScenePanel.SetYawPitch() method, but as you will see, the face orientation values jitter rapidly, so to mitigate this the code uses an accuracy threshold and a sensitivity multiplier. The camera is only updated when the difference between the previous and current yaw or pitch is greater than the accuracy value.
To do this, declare some new constants and store the previous yaw and pitch as class variables.
Finally, detect whether the mouth is open and, if it is, call the Fire() method.
private const DisplayFrameType DEFAULT_DISPLAYFRAMETYPE = DisplayFrameType.Infrared;
private const double FACE_AIMING_ACCURACY = 1.0;
private const double FACE_AIMING_SENSITIVITY = 0.01;

//...

//Cat assets
private Image[] catEyeRightOpen, catEyeRightClosed, catEyeLeftOpen,
    catEyeLeftClosed, catNose;

//Face Orientation Shaping
private double prevPitch = 0.0f;
private double prevYaw = 0.0f;

public event PropertyChangedEventHandler PropertyChanged;

//...

private void FaceGameLookUpdate()
{
    this.FacePointsCanvas.Children.Clear();
    FaceFrameResult[] results = faceManager.GetLatestFaceFrameResults();
    for (int i = 0; i < results.Count(); i++)
    {
        if (results[i] != null)
        {
            foreach (KeyValuePair<FacePointType, Point> facePointKVP in
                results[i].FacePointsInColorSpace)
            {
                if (facePointKVP.Value.X == 0.0
                    || facePointKVP.Value.Y == 0.0)
                {
                    break;
                }
                Size ellipseSize = new Size(10, 10);
                Ellipse ellipse = new Ellipse();
                ellipse.Width = ellipseSize.Width;
                ellipse.Height = ellipseSize.Height;
                ellipse.Fill = new SolidColorBrush(Colors.Red);
                Canvas.SetLeft(ellipse, facePointKVP.Value.X -
                    (ellipseSize.Width / 2));
                Canvas.SetTop(ellipse, facePointKVP.Value.Y -
                    (ellipseSize.Height / 2));
                this.FacePointsCanvas.Children.Add(ellipse);
            }

            double pitch, roll, yaw = 0;
            ExtractFaceRotationInDegrees(results[i].FaceRotationQuaternion,
                out pitch, out yaw, out roll);

            double pitchDiff = Math.Abs(pitch - prevPitch);
            double yawDiff = Math.Abs(yaw - prevYaw);
            if (pitchDiff > FACE_AIMING_ACCURACY ||
                yawDiff > FACE_AIMING_ACCURACY)
            {
                this.DXScenePanel.SetYawPitch(
                    -(float)(yaw * FACE_AIMING_SENSITIVITY),
                    (float)(pitch * FACE_AIMING_SENSITIVITY));
                prevPitch = pitch;
                prevYaw = yaw;
            }

            if (results[i].FaceProperties[FaceProperty.MouthOpen]
                == DetectionResult.Yes)
            {
                this.DXScenePanel.Fire();
            }
            break;
        }
    }
}
Build and run the application. Click the Face Game button and tap to play the game.
Get your body within view of the Kinect camera (remember that the Kinect cannot see a face until it can register a whole body). Your face points will appear in red, and your face orientation will control the camera.
Open your mouth to fire balls!
This lab described how to use the DirectXSceneStore library to run a simple game and control it with your face. All the code for the complete game is available in this sample on MSDN: https://code.msdn.microsoft.com/windowsapps/Metro-style-DirectX-18f98448 The DirectXSceneStore library simplifies that game for you.
This lab is designed to teach you another use for face data. As you can tell, there are important considerations when using the face as an input, mainly sensitivity and movement smoothing. Using the face to manipulate a game can result in some great experiences, and with some polish it can be a huge factor in creating a sense of immersion.
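If you want to experiment beyond the simple threshold used in this lab, one common smoothing approach (a sketch only, not part of the lab code; the field and method names here are hypothetical) is to low-pass filter the raw angles with an exponential moving average before passing them to SetYawPitch(), trading a little latency for much steadier aiming:

    // Hypothetical smoothing fields and constant, added alongside prevPitch/prevYaw.
    private double smoothedPitch = 0.0;
    private double smoothedYaw = 0.0;
    private const double FACE_SMOOTHING_FACTOR = 0.2; // 0..1, lower = smoother but laggier

    private void ApplySmoothedLook(double rawPitch, double rawYaw)
    {
        // Exponential moving average: blend each new reading into the running value.
        smoothedPitch += FACE_SMOOTHING_FACTOR * (rawPitch - smoothedPitch);
        smoothedYaw += FACE_SMOOTHING_FACTOR * (rawYaw - smoothedYaw);

        this.DXScenePanel.SetYawPitch(
            -(float)(smoothedYaw * FACE_AIMING_SENSITIVITY),
            (float)(smoothedPitch * FACE_AIMING_SENSITIVITY));
    }

You would call ApplySmoothedLook(pitch, yaw) from FaceGameLookUpdate() in place of the threshold check, or combine the two techniques.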
The next lab will begin from the code completed in this lab.