Posted on November 14, 2016
This is the third part of a series (see https://peted.azurewebsites.net/hololens-end-to-end-in-unity/) covering the steps to create the demo project I used in my Future Decoded UK 2016 presentation.
To get set up with the HoloToolkit in Unity, see https://mtaulty.com/2016/11/10/hitchiking-the-holotoolkit-unity-leg-1/
Stacking the Bottles
We want the bottles created in 3D Models: From Blender to HoloLens to be stacked on top of the plinth and also to respect physical properties such as falling over. To achieve this we can add a Rigidbody component to each bottle, give each bottle some colliders and add a box collider to the plinth. The colliders I set up for the bottle consisted of two box colliders, one for the cap and one for the base, plus a capsule collider for the body. These were assembled by adding the components to the bottle prefab and then scaling and translating them until they were a close fit around the bottle's mesh. Once done, the bottles could be positioned into a stack. Adding the Rigidbody means that we will later be able to apply a force to the bottles.
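I did this setup in the editor, but for reference it is roughly equivalent to the following code (the sizes and offsets here are placeholders rather than the real values, which came from eyeballing the fit against the mesh):

```csharp
using UnityEngine;

// Sketch: the same Rigidbody/collider setup done in code rather than
// by dragging components onto the prefab in the editor.
public class BottlePhysicsSetup : MonoBehaviour
{
    void Awake()
    {
        // Rigidbody so the physics engine simulates the bottle and we can
        // apply forces to it later.
        gameObject.AddComponent<Rigidbody>();

        // Box collider around the cap (illustrative values).
        var cap = gameObject.AddComponent<BoxCollider>();
        cap.center = new Vector3(0f, 0.14f, 0f);
        cap.size = new Vector3(0.02f, 0.02f, 0.02f);

        // Box collider around the base.
        var bottleBase = gameObject.AddComponent<BoxCollider>();
        bottleBase.center = Vector3.zero;
        bottleBase.size = new Vector3(0.05f, 0.01f, 0.05f);

        // Capsule collider for the body of the bottle.
        var body = gameObject.AddComponent<CapsuleCollider>();
        body.center = new Vector3(0f, 0.07f, 0f);
        body.radius = 0.025f;
        body.height = 0.12f;
    }
}
```

The plinth just needs a single BoxCollider (and no Rigidbody, since it shouldn't move).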
Knocking the Stack Over
In the previous post about Gaze we covered how to identify the object in your environment that the user wants to interact with. Once an object has been identified, the HoloLens platform offers two ways to carry out the interaction: gestures or voice commands. I will cover voice commands in the next post. The HoloLens detects two gestures out of the box: the bloom gesture and the ‘air tap’ (see https://developer.microsoft.com/en-us/windows/holographic/Gestures.html). The bloom is a system-level gesture reserved for invoking the Start menu, but we can use the ‘air tap’ and associated gestures in our app. This is how I did that for my demo:
Find the GestureManager script from the Project panel (use the search bar at the top of that panel)
If you have followed the Gaze post then you will already have a Managers game object in your scene so you can drag the GestureManager script onto that. If not, create an empty game object and call it Managers and then drag the GestureManager script onto it.
The GestureManager script declares the GazeManager script as a required component, so if you don’t already have one it will be added for you automatically (GestureManager relies on GazeManager and will not work without it)
The GestureManager script will listen for gestures and propagate a message to the focused object in the scene if one exists. It uses Unity’s SendMessage function to pass the messages along.
Be aware that SendMessage uses reflection to match the function by its string name, which has a performance cost and gives you no compile-time checking against typos. Also, there is no concept of events propagating up/down the hierarchy as you might find in hierarchical user interface platforms.
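The call inside GestureManager is essentially of this shape (the focusedObject variable name here is illustrative; it stands for whatever GameObject the GazeManager reports as being hit):

```csharp
// Sketch of how a gesture gets handed to the gazed-at object.
if (focusedObject != null)
{
    // SendMessage looks up a method named "OnSelect" by string (via
    // reflection) on every script attached to focusedObject;
    // DontRequireReceiver means it silently does nothing if no such
    // method exists.
    focusedObject.SendMessage("OnSelect", SendMessageOptions.DontRequireReceiver);
}
```

So any script on the focused object that declares an OnSelect method will be called when the user air taps.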
So, if you want to call a function directly on the gazed-at object then this will work fine (I have another example in a later post where that is not the case). For my demo I want to knock over the bottles when they are tapped. So the cursor hovers over a bottle, the user air taps and a force is applied to the bottle at that location. The GestureManager will call a function called OnSelect, so we need to add one to a script attached to each bottle. Here is the OnSelect on my BottleScript.cs:
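In outline it looks like this (the GazeManager property names and the force multiplier are indicative rather than exact):

```csharp
using UnityEngine;
using HoloToolkit.Unity;

public class BottleScript : MonoBehaviour
{
    // Called by GestureManager (via SendMessage) when the user air taps
    // while gazing at this bottle.
    void OnSelect()
    {
        var rigidBody = GetComponent<Rigidbody>();
        if (rigidBody != null)
        {
            // Push the bottle away from the gaze: apply a force opposite
            // to the surface normal, at the point where the gaze ray hit.
            rigidBody.AddForceAtPosition(
                -GazeManager.Instance.Normal * 2.0f,  // 2.0f is a tuning value
                GazeManager.Instance.Position);
        }
    }
}
```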
When this is called, a force is applied in the opposite direction to the surface normal at the gaze intersection point. The Unity physics system takes care of the rest and the bottles fall over.