The Azure integration provides key building blocks to developers, minimizing the work they need to do when building a new Mixed Reality application.
- Azure Active Directory provides secure identity and sign-in
- Billing works out of the box
- Audio/video transmission, including spatial audio, for realistic interactive experiences
The core solutions that make it much easier for developers to create mixed reality experiences using the Mesh SDK fall into four main categories:
- Immersive Presence – Mesh ships with out-of-the-box avatars you can customize, or you can import your own rigged avatars. Their movement is powered by AI motion models to accurately represent the user’s movement. You can also import volumetric captures for a more realistic likeness of users; however, because of their complexity, these do not have the same functionality as avatars. Microsoft hopes to solve this in the near future, but for now avatars are the better way to take advantage of the interactivity.
- Spatial Maps – Spatial maps allow you to create digital maps of your world that are much more accurate than GPS and are persistent globally. This enables content to be anchored, users to share the same point-of-view, and collaboration on the same 3D model.
- Holographic Rendering – Holographic rendering refers to Microsoft’s integration with Azure giving you the option to render locally using the power of the device you are on or rendering in the cloud.
- Multiuser Sync – multi-user synchronization ensures that every user sees the same thing at the same time. All movements within the mixed reality space are synchronized with 100 milliseconds of latency or less.
While Microsoft Mesh provides the backbone for creating shared mixed reality experiences, solving a lot of problems for developers, you still need 3D models or digital twins of products to interact with. This can be accomplished by 3D scanning real-world objects or by dropping in existing 3D designs. 3D scanning works well if all of the physical products are within a specific geography; otherwise, shipping costs and coordination can make it complicated. That is why being able to use existing 3D designs is preferable for larger, global companies.
Both 3D scans and 3D designs result in very large 3D models that still require optimization. This has traditionally been a very manual process, and the time and cost required to prepare 3D models have significantly limited the number of mixed reality applications, leading many companies to build one-off mixed reality experiences instead of incorporating them into their everyday workflow.
While the holographic rendering of Mesh gives you the option to render in the cloud, so you can technically import huge 3D models not previously possible on HoloLens, you will be limited by your internet bandwidth. Azure Remote Rendering requires 40 Mbps down and 5 Mbps up for one user, assuming there is no other traffic on your network. A Microsoft study done in 2019 showed most of the US had download speeds of 25 Mbps or less.
As many of us have learned while working from home over the past year, speeds are throttled when nodes are crowded, so our broadband does not always deliver the speed it should. 5G coupled with edge computing will help unlock this capability. While every carrier has touted its 5G rollout, in reality 5G is not available in most markets and will not be fully rolled out until 2025 or later. 5G provides an internet connection up to 10x faster than 4G, but edge computing is what will decrease latency. Both are required for true real-time holographic rendering.
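The bandwidth figures above can be turned into a quick feasibility check. The sketch below uses the per-user numbers cited for Azure Remote Rendering (40 Mbps down, 5 Mbps up); the assumption that requirements scale linearly with concurrent users is ours, not Microsoft’s.

```python
# Per-user Azure Remote Rendering requirements cited above.
REQUIRED_DOWN_MBPS = 40.0
REQUIRED_UP_MBPS = 5.0

def can_support(users: int, available_down_mbps: float, available_up_mbps: float) -> bool:
    """Return True if the link can serve `users` concurrent remote-rendering
    sessions, assuming (our assumption) linear per-user scaling and no other
    traffic on the network."""
    return (available_down_mbps >= users * REQUIRED_DOWN_MBPS
            and available_up_mbps >= users * REQUIRED_UP_MBPS)

# A typical 25 Mbps US connection from the 2019 study can't support even one user:
print(can_support(1, available_down_mbps=25.0, available_up_mbps=5.0))   # False
print(can_support(2, available_down_mbps=100.0, available_up_mbps=20.0))  # True
```

This is only a back-of-the-envelope gate; real deployments would also need to account for latency and network jitter, which raw throughput numbers don’t capture.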
If you plan on using the local rendering power of HoloLens 2, you are limited by the specifications below:
- Exporting – Assets must be delivered in the .glb file format (binary glTF)
- Modeling – Assets must be less than 10k triangles and have no more than 64 nodes and 32 submeshes per LOD. The up axis should be set to “Y”. The asset should face “forward” toward the positive Z axis. All assets should be built on the ground plane at the scene origin (0, 0, 0). Working units should be set to meters so that assets can be authored at world scale.
- Materials – Textures can’t be larger than 4096 × 4096, and the smallest mip map should be no larger than 4 on either dimension; 512 × 512 is recommended. Meshes don’t need to be combined, but combining them is recommended. All meshes should share one material, with only one texture set used for the whole asset. UVs must be laid out in a square arrangement in the 0–1 space. Avoid tiling textures, although they’re permitted. Multi-UVs aren’t supported. Double-sided materials aren’t supported.
- Animation – Animations can’t be longer than 20 minutes at 30 FPS (36,000 keyframes) and must contain no more than 8,192 morph target vertices.
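The limits above lend themselves to an automated pre-flight check before an asset is pushed to the device. The sketch below is illustrative only: `AssetStats` and its field names are our assumptions, and in practice these numbers would be extracted from the .glb with a glTF library rather than filled in by hand.

```python
from dataclasses import dataclass

@dataclass
class AssetStats:
    """Hypothetical summary of a .glb asset; fields mirror the limits above."""
    triangles: int
    nodes: int
    submeshes_per_lod: int
    max_texture_dim: int        # largest texture edge, in pixels
    animation_keyframes: int
    morph_target_vertices: int

# HoloLens 2 local-rendering limits as listed in this article.
LIMITS = {
    "triangles": 10_000,
    "nodes": 64,
    "submeshes_per_lod": 32,
    "max_texture_dim": 4096,
    "animation_keyframes": 36_000,   # 20 minutes at 30 FPS
    "morph_target_vertices": 8_192,
}

def validate(stats: AssetStats) -> list[str]:
    """Return human-readable violations; an empty list means the asset passes."""
    errors = []
    for field, limit in LIMITS.items():
        value = getattr(stats, field)
        if value > limit:
            errors.append(f"{field}: {value} exceeds limit of {limit}")
    return errors

# Example: a raw manufacturing design, far too heavy for local rendering.
stats = AssetStats(triangles=250_000, nodes=12, submeshes_per_lod=4,
                   max_texture_dim=4096, animation_keyframes=0,
                   morph_target_vertices=0)
for err in validate(stats):
    print(err)  # triangles: 250000 exceeds limit of 10000
```

A check like this is exactly the kind of step that automated optimization pipelines run before deciding how aggressively to decimate a model.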
Using the cloud will no doubt lift some of these constraints, but developers need to think about the limitations of their users’ locations and devices. One of the great benefits of Mesh is the ability to create applications that work across iOS, Android, HoloLens, HP Reverb G2, and Oculus Quest 2. Developers need to design for the lowest-powered mobile device if their end users will be running the application on their phones. Also, if you are creating an application that requires multiple 3D models, the more you optimize, the better your experience will be.
So, while it is very exciting that Mesh is set up for cloud rendering, 3D manufacturing designs still need to be prepared and optimized for the medium. Many players in the space have been working on this issue for industrial use cases built in tools like AutoCAD, but fashion, footwear, and consumer goods have been a wide-open space. These industries have their own unique 3D design programs, like Browzwear and Clo for apparel and Modo and Rhino for footwear, to name a few. These files have unique properties of fabric and soles that are very different from industrial manufacturing models, resulting in a slew of new rendering and optimization issues.
In addition to these unique software design programs and file types, these industries really have scale. While an auto company may launch 10 new styles in a year, fashion companies are launching tens of thousands per year.
This is where VNTANA has focused our work: helping unlock the millions of 3D designs in fashion, footwear, and consumer goods for mixed reality use cases. Companies like Otto International launch 80,000 products per year; there is no way to manually prepare and optimize that number of designs in a time- and cost-efficient way without automated software. Clients like adidas launch 25,000 products per year and were able to accomplish in 1 hour with VNTANA’s 3D automation what used to take them 6 weeks.
The fashion and footwear industry in particular needs to view multiple 3D models at once for line reviews and assortment planning. This is generally 10-25 designs, which puts further strain on the device running the experience. At VNTANA we’re excited to help these industries quickly and easily take advantage of platforms like Microsoft Mesh in a way that fits within their existing workflows and doesn’t require manual work.