This is the first part of a series (see https://peted.azurewebsites.net/hololens-end-to-end-in-unity/).
See https://peted.azurewebsites.net/hololens-intro-resources-to-get-started/ for links to all of the tools you need to get started with HoloLens development. For this post I will also be using Blender for Windows (I used version 2.77), which is a free download from https://www.blender.org/download/
There are many ways to source 3D models for use in HoloLens projects; if you are a developer working with 3D artists then you just need to agree an interchange format, the scale of the scene and a process that ensures transformations are initialised correctly on export. If you are lucky, the company you are working with will already have a catalogue of 3D models to use as a starting point, or you can look through the Unity Asset Store or one of the many online 3D model catalogues, such as TurboSquid, for free or paid models.
Modelling
Now, 3D software can be particularly complex and Blender is no different in this respect, so this post is not meant as an in-depth tutorial but more to give a flavour of how some simple tasks can be achieved. I will explain how I created the 3D models I used in my presentation at Future Decoded 2016. The models shown here are the final result as they appear in my HoloLens Unity project.
So, this required creating and importing the bottle model and the textured plinth model; let's look at each in turn. For each model we need to ensure that the transformations are reset: the position at the origin, the rotations zeroed and the scale at one. If there are stray translations and rotations on the model when we import it, they will cause unexpected movement when we try to transform the models inside Unity.
I am using the FBX interchange format, originally created by Kaydara for their Filmbox product, which has subsequently become a commonly used format, with most 3D software supporting it for import/export of 3D models and associated data. It is, however, a proprietary format which the industry is looking to replace with more open standards such as glTF (see https://www.khronos.org/gltf).
This can be achieved by ‘Applying’ the transforms, which effectively resets them, and then when we export to FBX there is a setting, !EXPERIMENTAL! Apply Transform, that we can use to ensure that everything works correctly. I will show this in the screencasts for each model below:
Note: Key presses will appear as an overlay on the right hand side of the video
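If you prefer to script these steps, here is a minimal sketch using Blender’s Python API (bpy). The file path is illustrative, and the mapping of bake_space_transform to the ‘!EXPERIMENTAL! Apply Transform’ export option is as I understand it for Blender 2.7x:

```python
import bpy

# Assumes the model to export is the active, selected object.
obj = bpy.context.active_object

# 'Apply' the object's transforms so the location is at the origin,
# rotations are zeroed and scale is one -- the same as Object > Apply >
# All Transforms in the UI.
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)

# Export the selection to FBX. bake_space_transform corresponds (as far as
# I can tell) to the '!EXPERIMENTAL! Apply Transform' checkbox in the
# FBX export dialog.
bpy.ops.export_scene.fbx(
    filepath="C:/models/plinth.fbx",   # illustrative path
    use_selection=True,
    bake_space_transform=True,
)
```

Run from Blender’s Text Editor or Python console with the model selected, this should produce an FBX with clean transforms ready for Unity.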
Plinth
So we start by creating a cube and then using the ‘Loop Cut and Slide’ tool, which can be accessed by going into Edit Mode and choosing ‘Loop Cut and Slide’ from the Tools panel, or by pressing Ctrl-R. This lets us subdivide the cube so that it consists of more faces; we can then apply one material to the part of the cube where we want the Future Decoded texture and another material everywhere else. This short screencast should help to explain the process:
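Alongside the screencast, here is a rough bpy sketch of the material set-up; the material names are illustrative and the loop cuts themselves are easier to do interactively with Ctrl-R, so this only shows creating the cube and assigning a second material to selected faces:

```python
import bpy

# Create the base cube for the plinth.
bpy.ops.mesh.primitive_cube_add(location=(0, 0, 0))
plinth = bpy.context.active_object

# One material for the body, one for the textured face(s).
body_mat = bpy.data.materials.new(name="PlinthBody")           # illustrative name
logo_mat = bpy.data.materials.new(name="FutureDecodedLogo")    # illustrative name
plinth.data.materials.append(body_mat)   # material slot 0
plinth.data.materials.append(logo_mat)   # material slot 1

# Point the faces that should show the logo texture at slot 1.
# Here we just pick the top face (+Z normal) as an example.
for poly in plinth.data.polygons:
    if poly.normal.z > 0.9:
        poly.material_index = 1
```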
Bottle
The bottle was created from two scaled cubes and a cylinder, merging the meshes and then resetting the transformations as before.
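As a rough scripted equivalent, here is a bpy sketch; the sizes, positions and names are illustrative rather than the exact values used for the demo model, and the selection calls assume the Blender 2.7x API:

```python
import bpy

# Main body of the bottle: a cube scaled into a tall box.
bpy.ops.mesh.primitive_cube_add(location=(0, 0, 0))
body = bpy.context.active_object
body.scale = (0.5, 0.5, 1.0)

# Shoulder: a second, smaller scaled cube sitting on top.
bpy.ops.mesh.primitive_cube_add(location=(0, 0, 1.2))
shoulder = bpy.context.active_object
shoulder.scale = (0.35, 0.35, 0.2)

# Neck: a cylinder above the shoulder.
bpy.ops.mesh.primitive_cylinder_add(location=(0, 0, 1.6), radius=0.15, depth=0.5)
neck = bpy.context.active_object

# Select all three and join them into a single mesh (the active object,
# 'body', receives the result).
for obj in (neck, shoulder, body):
    obj.select = True
bpy.context.scene.objects.active = body
bpy.ops.object.join()

# Reset the transforms as before.
bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)
```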
Import to Unity
Once we have exported the FBX file it can simply be dragged and dropped onto the Project pane in Unity and it will be imported. Amongst other settings, you can set a scale factor at this stage if required:
Repurpose Existing Model
If you are not creating the models in-house but are instead using an online catalogue, the Unity Asset Store or simply reusing an existing 3D model, then you may be constrained by the amount of detail in the model. Remember that the HoloLens is a mobile device and does not have a desktop-class GPU like those required to power tethered headsets. The performance guidelines for HoloLens are here https://developer.microsoft.com/en-us/windows/holographic/performance_recommendations and there are further tips here http://www.wikiholo.net/index.php?title=Performance_Tips.

Profiling the graphics performance of your app would be the next step to determine the graphics pipeline bottlenecks; after all, it is not just the number of polygons that can affect performance. Non-optimised or excessive shader instructions, amongst other factors, can also cause issues. Graphics optimisations such as using a lower LOD (level of detail) for holograms that are farther away and culling polygons that are not visible can help to maintain a 60fps frame rate. Dropping frames and/or a consistently low frame rate will provide a particularly poor experience on a mixed reality device (since the mismatch between the real world and the holograms becomes more noticeable) and can lead to discomfort for the user.

If it turns out that the number of polygons is a bottleneck for your app then tools such as the Decimate modifier in Blender can be used to reduce the polygon count of a model whilst retaining its shape and features, as can other tools such as Simplygon (which has a Unity plugin with a free tier) and MeshLab. Detail lost from the model can be reintroduced using normal maps generated from the original high-polygon model. I will save a deeper discussion around performance for a later post, but hopefully this outlines the basics around the use of 3D models.
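As an example, a Decimate pass can also be scripted in Blender via bpy; the ratio here is illustrative and you would normally tune it while checking the result visually:

```python
import bpy

# Assumes the high-polygon model is the active, selected object.
obj = bpy.context.active_object

# Add a Decimate modifier and reduce the face count.
decimate = obj.modifiers.new(name="Decimate", type='DECIMATE')
decimate.ratio = 0.3   # keep roughly 30% of the faces (illustrative value)

# Apply the modifier so the exported mesh carries the reduced polygon count.
bpy.ops.object.modifier_apply(modifier="Decimate")

print("faces after decimation:", len(obj.data.polygons))
```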
All of the resources used are available on GitHub at https://github.com/peted70/fd-holodemo (there is a branch for each stage of the project).