Magic Leap Spatial Computing Integration Project. This is an experimental display gallery for understanding scale, positioning, and markup.
This project contains Magic Leap compatible content viewable with the Helio browser.
Other Experimental Projects:
Catapult Demo Assets (also works with Helio)
!! Please note: this project is currently known to crash the Helio browser due to the number and size of the objects listed under Assets.
For a better working example, you can visit:
Why We are Making this Project
This project is used to demonstrate and learn how to use spatial computing in everyday applications. It contains an evolving collection of objects that can be observed, extracted, and placed using the Magic Leap One. makeSEA is the premier content management and delivery platform for Magic Leap; to learn more and become a beta user, please Contact Us.
This project focuses on content designed for delivery via the Helio browser using Prismatic scripting and markup.
- ML Remote
- World Sensing
- World Mesh with Occlusion
- Headpose-based Object Placement
- Near Clip Plane Exploitation*
- Helio Browser
*Considered a spatial-computing no-no and a reality-breaker, but useful for certain technical applications.
https://www.script-tutorials.com/webgl-with-three-js-lesson-7/ (how to do inline previews for various types)
Requires (notes on the build fixes are fuzzy but should be correct):
Architecture workflow options:
Matterport OBJ Conversion:
Download the OBJ, cd to the root of the project, then run:
obj2gltf -i <file>.obj -o <file>.glb
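For converting several Matterport OBJ exports at once, the obj2gltf step above can be wrapped in a small batch helper. This is a sketch, assuming obj2gltf is already installed (e.g. via npm install -g obj2gltf) and that the OBJ files sit directly in one directory; the `run` hook is an assumption added so the loop can be dry-run.

```python
# Sketch: batch-build and run obj2gltf commands for every OBJ in a directory.
# Assumes obj2gltf is installed and on PATH (npm install -g obj2gltf).
import subprocess
from pathlib import Path

def obj2gltf_cmd(obj_path):
    """Build the obj2gltf command line for one OBJ file."""
    p = Path(obj_path)
    return ["obj2gltf", "-i", str(p), "-o", str(p.with_suffix(".glb"))]

def convert_all(root, run=subprocess.run):
    """Convert every .obj under root; `run` is injectable for dry runs."""
    cmds = [obj2gltf_cmd(p) for p in sorted(Path(root).glob("*.obj"))]
    for cmd in cmds:
        run(cmd, check=True)
    return cmds
```

Calling convert_all on the downloaded Matterport folder produces one .glb next to each .obj.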
SKP (SketchUp) -> GLB export (free plugin @Warehouse from Centaur)
Optional: export SKP -> textured FBX or DAE (Collada). Be sure to export two-sided faces and textures; this will produce a folder hierarchy. Then use the COLLADA2GLTF-bin command line to convert:
(assumes the texture child folders exist)
DAE (Collada) -> GLB
COLLADA2GLTF-v2-2/COLLADA2GLTF-bin in.dae out.glb -b --doubleSided
FBX -> GLTF (GLB)
Download and rename/set permissions so it runs as a binary.
cd to the file's location so that child assets are found, then:
FBX2glTF -b --pbr-metallic-roughness <if>
(--pbr-metallic-roughness may not be needed, but try it)
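Since the notes say --pbr-metallic-roughness may or may not be needed, one way to script the "try it" advice is to attempt the conversion with the flag and fall back to a plain run if it fails. This is a sketch, assuming FBX2glTF is on PATH; the injectable `run` hook is an assumption added for testability.

```python
# Sketch: run FBX2glTF with --pbr-metallic-roughness, retry without it on failure.
import subprocess

def fbx_to_glb(fbx_path, run=subprocess.run):
    """Return the command that succeeded, or None if both attempts failed."""
    with_flag = ["FBX2glTF", "-b", "--pbr-metallic-roughness", fbx_path]
    without_flag = ["FBX2glTF", "-b", fbx_path]
    for cmd in (with_flag, without_flag):
        if run(cmd).returncode == 0:
            return cmd
    return None
```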
How to convert FBX -> FBX with embedded textures
Import the current FBX into Autodesk Maya
Set all textures / materials according to asset
Export with the Game Exporter as FBX Version: FBX 2014/2015 and File Type: Binary
glTF is your friend for Web - look here for resources:
Use the Package Manager to get the latest preview release (currently 2.0.2). !! The version in the Asset Store is stale.
Try an OOTB build first; if there are errors:
In order for this to build on macOS with the Unity environment, scripts whose names include "Test" need to be moved under a directory named "Editor":
mv /Assets/Resources/*Test* /Assets/Editor
mv /Assets/Unity/Scripts/Tests/*Test* /Assets/Unity/Scripts/Tests/Editor (the directory should already exist)
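The two move steps above can also be scripted. A minimal sketch, assuming the asset layout in these notes; it creates the Editor directory if missing and moves anything whose name contains "Test", which is what keeps Unity from pulling the test scripts into a player build.

```python
# Sketch: move any script whose name contains "Test" into an Editor directory,
# so Unity excludes it from player builds.
from pathlib import Path
import shutil

def move_test_scripts(src_dir, editor_dir):
    """Move *Test* files from src_dir to editor_dir; return the moved names."""
    src, dst = Path(src_dir), Path(editor_dir)
    dst.mkdir(parents=True, exist_ok=True)
    moved = []
    for f in src.glob("*Test*"):
        if f.is_file():
            shutil.move(str(f), str(dst / f.name))
            moved.append(f.name)
    return sorted(moved)
```

Run it once for Assets/Resources -> Assets/Editor and once for the Scripts/Tests pair.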
- Install Blender and change the initial startup scene so it is empty:
- Delete the initial cube, then save the empty scene as the startup file.
- Save convert_stl_to_fbx.py from this project's assets to the Blender folder under Blender Foundation.
- In a command prompt, navigate to that same folder, then run:
blender --background --python convert_stl_to_fbx.py -- test.stl test.fbx
- Here test.stl is the location of the STL file you wish to convert, and test.fbx is the location of the converted file.
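The invocation above relies on Blender's "--" convention: everything after "--" is ignored by Blender itself and handed to the Python script through sys.argv, which is how convert_stl_to_fbx.py receives its input and output paths. A sketch of both sides of that handshake (the function names are illustrative, not from the script itself):

```python
# Sketch: build the headless Blender command and, on the script side,
# recover only the arguments that follow "--".
def blender_convert_cmd(script, stl_in, fbx_out, blender="blender"):
    """Build the headless Blender invocation shown above."""
    return [blender, "--background", "--python", script, "--", stl_in, fbx_out]

def script_args(argv):
    """Inside the Blender script: return only the args after '--'."""
    return argv[argv.index("--") + 1:] if "--" in argv else []
```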
Use Revit plus GLTF Converter for Autodesk® Revit®
Optional: in Unity, reduce the material metallic value to 0, set Normals to Calculate (on the prefab), add directional lighting, then save to GLB.
Alternative: modify the Standard shader (results in broken bends for some models).
- Works well with photos even if the photos are a little bit blurry.
- Once the sparse point cloud is created, the next best step is to change the bounding box to cover just your subject (in this case, a stove). That way the background noise can still help create the dense point cloud, but points outside the bounding box will not be included.
- 3DF Zephyr takes a long time to extract photos from the video.
- At a 1-frame-per-second capture rate, the program was able to obtain 87 photos from a video 01:23 long.
- Out of those 87 photos, only BLANK were usable.
- I was able to increase this number to 70 by changing the sparse cloud setting from default to deep.
- The sparse and dense point clouds created from taking individual photos are much more detailed, with fewer holes when creating the mesh.
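Since Zephyr's own frame extraction is slow, one alternative (an assumption, not part of the original workflow) is to pull stills out of the video with ffmpeg at the same 1 fps rate and feed those images to Zephyr directly. A sketch of the command builder; ffmpeg must be installed separately, and the file names are placeholders:

```python
# Sketch: build an ffmpeg command that extracts frames at a fixed rate
# using the fps video filter. Paths and rate are assumptions.
def ffmpeg_frames_cmd(video, out_pattern="frame_%04d.png", fps=1):
    """One output image per 1/fps seconds of video."""
    return ["ffmpeg", "-i", video, "-vf", f"fps={fps}", out_pattern]
```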