Magic Leap

Project Summary

Magic Leap Spatial Computing Integration Project. This is an experimental display gallery for understanding scale, positioning, and markup.

This project contains Magic Leap compatible content viewable with the Helio browser.

Project Details

Objects in this gallery:

  • stove
  • Clorox wipes
  • salamander
  • IRL microwave
  • IRL dishwasher
  • wind turbine

 

!! Please note, this project is currently known to crash the Helio browser due to the number and size of the objects listed under Assets.

For a better working example, you can visit:

Grid Test

 

Why We Are Making This Project

This project demonstrates and explores how to use spatial computing in everyday applications. It contains an evolving collection of objects that can be observed, extracted, and placed using the Magic Leap One. makeSEA is the premier content management and delivery platform for Magic Leap. To learn more and become a beta user, please Contact Us.

 

Other Experimental Projects:

Foley Studio

CA Modern House

Einscan Tests

Fluke Parts Project

Half Dome

Heating Infrastructure

Mammoth Mountain

MC Test

Aspen Gondola

Supercar

 

InstallationInstructions-W10631155-RevA.pdf

 

Topics Covered

This project focuses on content designed for delivery via the Helio browser using Prismatic scripting and markup.

Inputs

  • ML Remote
  • World Sensing
  • World Mesh with Occlusion

Design Practices

  • Headpose-based Object Placement
  • Near Clip Plane Exploitation*

Presentation

  • Helio Browser
  • Casting

*Considered a spatial computing no-no and a reality-breaker, but useful for certain technical applications.

 

Working Notes

Useful Links

FUSION

gltf-viewer.donmccurdy.com

KhronosGroup/glTF: glTF – Runtime 3D Asset Delivery

three.js/GLTFLoader.js at r97 · mrdoob/three.js

Translate a Source File into OBJ Format | Model Derivative API | Autodesk Forge

https://myminifactory.github.io/stl2gltf/

CAD Forum - overview of CAD formats - conversion assistant

Yet Another Conversion Chart

sbtron/makeglb: Convert glTF to glb

https://myminifactory.github.io/Fast-Quadric-Mesh-Simplification/

meshconv

glTF Viewer

glTF Model Converter c/o Cesiumjs.org uses:

collada2gltf

obj2gltf

gltf-pipeline

https://www.script-tutorials.com/webgl-with-three-js-lesson-7/ : how to do inline previews for various file types

requires (notes are fuzzy on build fixes but should be correct)

 

Architecture workflow options:

SKP (SketchUp) -> GLB export (free plugin @Warehouse from Centaur)

Optional: export SKP -> textured FBX or DAE (Collada). Be sure to export two-sided faces and textures; this will produce a folder hierarchy. Then use the COLLADA2GLTF-bin command line to convert:

(assumes texture child folders exist)

DAE -> GLB

COLLADA2GLTF-v2-2/COLLADA2GLTF-bin in.dae out.glb -b --doubleSided

source/ref:  https://github.com/KhronosGroup/COLLADA2GLTF

OR

FBX -> GLTF (GLB)

use FBX2glTF

source: https://github.com/facebookincubator/FBX2glTF

download, then rename it and set execute permission so it runs as a binary

help:

FBX2glTF -help

use:

cd to the file location so that child texture folders are found, then:
FBX2glTF -b --pbr-metallic-roughness <if>

(--pbr-metallic-roughness may not be needed, but try it)
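The FBX2glTF step above can be scripted for a whole folder of files. This is a minimal sketch, assuming the FBX2glTF binary is on your PATH; the helper functions here are ours, not part of the tool:

```python
import subprocess
from pathlib import Path

def fbx2gltf_cmd(fbx_name, binary="FBX2glTF"):
    """Build the FBX2glTF command line for one file (-b emits a binary .glb)."""
    return [binary, "-b", "--pbr-metallic-roughness", fbx_name]

def convert_folder(folder):
    """Run FBX2glTF on every .fbx in `folder`, with cwd set there so
    child texture folders are found (per the notes above)."""
    for fbx in Path(folder).glob("*.fbx"):
        subprocess.run(fbx2gltf_cmd(fbx.name), check=True, cwd=folder)
```

Running `convert_folder("models")` would mirror the manual cd-then-convert step for each FBX in that folder.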

 

How to Convert FBX -> FBX with embed textures

Import the current FBX into Autodesk Maya

Set all textures / materials according to asset

Export with the Game Exporter as FBX Version: FBX 2014/2015 and File Type: Binary

 

GLTF/GLB Options

glTF is your friend for the Web - look here for resources:

https://github.com/KhronosGroup/glTF#converters-importers-and-exporters

 

GLTF for Unity

Use the Package Manager to get the latest preview release (currently 2.0.2) !! the version in the Asset Store is stale

(old news:)

try the OOTB build first; if it errors:

In order for this to build on Mac with the Unity environment, the scripts with names including "Test" need to be moved under a directory named "Editor":

mv Assets/Resources/*Test* Assets/Editor

mv Assets/Unity/Scripts/Tests/*Test* Assets/Unity/Scripts/Tests/Editor (dir should already exist)

 
DAE textured (Collada) OR FBX textured -> GLB see Arch notes above
 
SKP (SketchUp) -> GLB see Arch notes above
 
now serving auto-transformed FBX using Blender; see:
STL -> FBX through Command Prompt
  • Install Blender and change the initial start-up scene to be empty
    • Delete the default cube, then save the empty scene as the start-up file
  • Save convert_stl_to_fbx.py from the Assets of this project to the Blender folder under Blender Foundation.
  • In the command prompt navigate to the same folder as above, then input the following:

blender --background --python convert_stl_to_fbx.py -- test.stl test.fbx

  • Here test.stl is the path of the STL file you wish to convert, and test.fbx is the path of the converted output file.
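The convert_stl_to_fbx.py script itself is not reproduced on this page; a minimal sketch of what such a script might look like follows. The argument parsing is ours; the bpy operator names are Blender's import/export operators (as of the Blender versions current when these notes were written):

```python
# Sketch of a convert_stl_to_fbx.py-style script, invoked as:
#   blender --background --python convert_stl_to_fbx.py -- test.stl test.fbx
import sys

def paths_after_double_dash(argv):
    """Return (stl_path, fbx_path) from the arguments following '--',
    which Blender passes through untouched to the script."""
    args = argv[argv.index("--") + 1:]
    return args[0], args[1]

def convert(stl_path, fbx_path):
    import bpy  # only available when run inside Blender
    bpy.ops.import_mesh.stl(filepath=stl_path)   # load the STL into the empty scene
    bpy.ops.export_scene.fbx(filepath=fbx_path)  # write the scene out as FBX

if __name__ == "__main__":
    convert(*paths_after_double_dash(sys.argv))
```

This is why the start-up scene must be emptied first: anything left in it (the default cube) would be exported into every FBX.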
(older, works but requires human):
STL -> OBJ (meshconv) -> Unity -> light/texture -> glTF -> glb (gltf-pipeline)
working (experimental):
use obj2gltf to go direct from obj -> glb with light/texture embedded (via the .mtl); we are having issues getting a good result on the ML One
alt attempt:  try getting to fbx instead using Blender and command line w/script, ref:
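The obj2gltf step in the experimental route above can be sketched as follows, assuming the npm obj2gltf CLI (which picks up the .mtl referenced by the .obj); the wrapper functions are ours:

```python
import subprocess

def obj2glb_cmd(obj_path, glb_path):
    """Build the obj2gltf command line; -i/-o are the tool's input/output flags.
    A .glb output path makes it emit binary glTF."""
    return ["obj2gltf", "-i", obj_path, "-o", glb_path]

def convert(obj_path, glb_path):
    # obj2gltf reads the .mtl referenced by the .obj, embedding the materials.
    subprocess.run(obj2glb_cmd(obj_path, glb_path), check=True)
```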
 
(older; may be better options; was used for CAD from mfg I think)
dwg (!! see above: is there a better way with newer arch methods?) -> Fusion 360 import -> fbx -> gltf
http://52.4.31.236/convertmodel.html
 
photogrammetry compare notes:
3DF Zephyr
Autodesk ReCap Pro
Agisoft PhotoScan
 
 
Stove Appliance Notes
  • Works well with photos, even if the photos are a little blurry.
  • Once the sparse point cloud is created, the next step is to shrink the bounding box to just your subject, in this case a stove. That way it can still use the background noise to help create the dense point cloud, but all of the points outside of the bounding box will not be included.
  • 3DF Zephyr takes a long time to extract photos from the video. 
    • At a 1 frame-per-second capture rate, the program was able to obtain 87 photos from a video that is 01:23 minutes long.
    • Out of those 87 photos, only BLANK could be used.
    • I was able to increase this number to 70 by changing the sparse cloud setting from default to deep.
  • The sparse and dense point clouds created from taking photos directly are a lot more detailed, with fewer holes, when creating the mesh.
 

 

Experience Overview

 

Walkthrough

 

Final Takeaways

 

Suggested Next Steps

 


Assets
Related Assets
Remote Assets