Interactive 3D Device Presentation with Threepipe

Threepipe is a new framework for creating 3D web applications using JavaScript or TypeScript. It offers a high-level API built on Three.js, providing a more intuitive and efficient way to develop 3D experiences for the web. Threepipe comes with a plugin system (and many built-in plugins), making it easy to extend functionality and integrate different features into your 3D projects.

In this tutorial we will create an interactive 3D device mockup showcase using Threepipe, featuring a MacBook and an iPhone mockup, where users can interact with the model by clicking and hovering over the objects, and drop images to display on the devices. Check out the final version.

Check out the Pen ThreePipe: Device Mockup Experiment (Codrops) by Palash Bansal (@repalash).

This can be further extended to create a full web experience to showcase websites, designs, mockups, etc. It is inspired by an old three.js experiment for rendering custom device mockups – carbonmockups.com – which requires a lot more work when building with three.js from scratch. This tutorial covers setting up the mockup, creating animations in a no-code editor, and using code with pre-built plugins to add user interactions.

Setting up the project

Codepen

You can prototype quickly in JavaScript on Codepen. Here’s a starter pen with the basic setup: https://codepen.io/repalash/pen/GRbEONZ?editors=0010

Fork the pen and start coding.

Local installation

To get started with Threepipe locally, you need to have Node.js installed on your machine. Vite projects require Node.js version 18+, so upgrade if your package manager warns you about it (you can check your version with node -v).

  1. Use the npm create command. Open your terminal and run the following:
npm create threepipe
  2. Follow the prompts:
    • Choose a project name (for example ‘device-mockup-showcase’)
    • Select “JavaScript” or “TypeScript” based on your preference
    • Choose “A basic scene” as the template
  3. This creates a basic project structure with a 3D scene using Threepipe and a bundler setup using Vite.
  4. Navigate to your project folder and run the project:
cd device-mockup-showcase
npm install
npm run dev
  5. Open http://localhost:5173/ in your browser and you should see a simple 3D scene.

Start code

After you have created a basic project, open the file src/main.ts.

This is a basic setup for a 3D scene using Threepipe which loads a sample 3D model of a helmet and an environment map (for lighting). The scene is rendered on a canvas element with the ID threepipe-canvas (which is added to the file index.html).

The ThreeViewer class is used to create a new 3D viewer instance. The viewer has several components, including a scene, a camera (with controls), the Renderer, RenderManager, AssetManager and some basic plugins. It is set up to provide a quick start for creating a three.js app with all the required components. In addition, plugins such as LoadingScreenPlugin, ProgressivePlugin, SSAAPlugin and ContactShadowGroundPlugin are added to extend the functionality of the viewer. We will add more plugins to the viewer for different use cases as we progress through the tutorial.

Check the comments in the code to understand what each part does.

import {
  ContactShadowGroundPlugin,
  IObject3D,
  LoadingScreenPlugin,
  ProgressivePlugin,
  SSAAPlugin,
  ThreeViewer
} from 'threepipe';
import {TweakpaneUiPlugin} from '@threepipe/plugin-tweakpane';

async function init() {

  const viewer = new ThreeViewer({
    // The canvas element where the scene will be rendered
    canvas: document.getElementById('threepipe-canvas') as HTMLCanvasElement,
    // Enable/Disable MSAA
    msaa: false,
    // Set the render scale automatically based on the device pixel ratio
    renderScale: "auto",
    // Enable/Disable tone mapping
    tonemap: true,
    // Add some plugins
    plugins: [
        // Show a loading screen while the model is downloading
        LoadingScreenPlugin,
        // Enable progressive rendering and SSAA
        ProgressivePlugin, SSAAPlugin,
        // Add a ground with contact shadows
        ContactShadowGroundPlugin
    ]
  });

  // Add a plugin with a debug UI for tweaking parameters
  const ui = viewer.addPluginSync(new TweakpaneUiPlugin(true));

  // Load an environment map
  await viewer.setEnvironmentMap('https://threejs.org/examples/textures/equirectangular/venice_sunset_1k.hdr', {
    // The environment map can also be used as the scene background
    setBackground: false,
  });

  // Load a 3D model with auto-center and auto-scale options
  const result = await viewer.load('https://threejs.org/examples/models/gltf/DamagedHelmet/glTF/DamagedHelmet.gltf', {
    autoCenter: true,
    autoScale: true,
  });

  // Add some debug UI elements for tweaking parameters
  ui.setupPlugins(SSAAPlugin)
  ui.appendChild(viewer.scene)
  ui.appendChild(viewer.scene.mainCamera.uiConfig)

  // Every object, material, etc has a UI config that can be added to the UI to configure it.
  const model = result?.getObjectByName('node_damagedHelmet_-6514');
  if (model) ui.appendChild(model.uiConfig, {expanded: false});

}

init();

Creating the 3D scene

For this showcase we are using 3D models of a MacBook and an iPhone. You can find free 3D models online or create your own with software like Blender.

These are two great models from Sketchfab that we’ll be using in this tutorial:

  • Apple iPhone 15 Pro Max (Black) by timblewee – https://sketchfab.com/3d-models/apple-iphone-15-pro-max-black-df17520841214c1792fb8a44c6783ee7
  • MacBook Pro 13 inch 2020 by polyman Studio – https://sketchfab.com/3d-models/macbook-pro-13-inch-2020-efab224280fd4c3993c808107f7c0b38

Using these models, we create a scene with a MacBook and an iPhone on a table. The user can interact with the scene by rotating and zooming in/out.

Threepipe provides an online editor that allows you to quickly create a scene and configure plugin and object properties, which you can then export as a glb file and use in your project.

When the model is exported from the editor, all settings, including the environment map, camera views, post-processing and other plugin settings, are included in the glb file. This makes it easy to load the model into the project and use it right away.

For this tutorial I created and configured a file called device-mockup.glb, which you can download here. Watch the video below to see how it is done in the Tweakpane editor – https://threepipe.org/examples/tweakpane-editor/

Adding the 3D models to the scene

To load the 3D model into the project, we can either load the file directly from the URL or download the file to the public folder in the project and load it from there.

Since this model contains all the settings, including the environment map, we can remove the code for loading the environment map from the starting code and load the file directly.

const viewer = new ThreeViewer({
  canvas: document.getElementById('threepipe-canvas') as HTMLCanvasElement,
  msaa: true,
  renderScale: "auto",
  plugins: [
    LoadingScreenPlugin, ProgressivePlugin, SSAAPlugin, ContactShadowGroundPlugin,
  ]
});

const ui = viewer.addPluginSync(new TweakpaneUiPlugin(true));

// Note - We don't need autoCenter and autoScale here since that was already done in the editor.
const devices = await viewer.load('https://asset-samples.threepipe.org/demos/tabletop_macbook_iphone.glb')!;
// or if the model is in the public directory
// const devices = await viewer.load('./models/tabletop_macbook_iphone.glb')!;

// Find the objects roots by name
const macbook = devices.getObjectByName('macbook')!
const iphone = devices.getObjectByName('iphone')!

const macbookScreen = macbook.getObjectByName('Bevels_2')! // the name of the object in the file
macbookScreen.name = 'Macbook Screen' // setting the name for easy identification in the UI.

console.log(macbook, iphone, macbookScreen);

// Add the object to the debug UI. The stored Transform objects can be seen and edited in the UI.
ui.appendChild(macbookScreen.uiConfig, {expanded: false})
ui.appendChild(iphone.uiConfig, {expanded: false})
// Add the Camera View UI to the debug UI. The stored Camera Views can be seen and edited in the UI.
ui.setupPluginUi(CameraViewPlugin, {expanded: false})
ui.appendChild(viewer.scene.mainCamera.uiConfig)

This code loads the 3D model into the scene and adds the objects to the debug UI so you can adjust the parameters.
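
Note that the CameraViewPlugin referenced above (along with the TransformAnimationPlugin used in the next section) must also be imported from the threepipe package:

import {CameraViewPlugin, TransformAnimationPlugin} from 'threepipe';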

Plugins and animations

The file is configured in the editor with different camera views (states) and object transform (position, rotation) states. This is done using the CameraViewPlugin and TransformAnimationPlugin plugins. To view and interact with the saved camera views and object transformations, we need to add them to the viewer and the debug UI.

First, add the plugins to the viewer constructor:

const viewer = new ThreeViewer({
   canvas: document.getElementById('threepipe-canvas') as HTMLCanvasElement,
   msaa: true,
   renderScale: "auto",
   plugins: [
      LoadingScreenPlugin, ProgressivePlugin, SSAAPlugin, ContactShadowGroundPlugin,
      CameraViewPlugin, TransformAnimationPlugin
   ]
});

Then add the CameraViewPlugin to the debug UI

ui.setupPluginUi(CameraViewPlugin)

We don’t need to add the TransformAnimationPlugin to the debug UI, because the animation states are attached to the objects and are visible in the UI when the object is added.

We can now use the debug UI to play the saved animations and move the camera to the different views.

Transformation states are added to two objects in the file: the MacBook screen and the iPhone.
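
For reference, the saved states used in this tutorial are ‘open’, ‘closed’ and ‘hover’ for the MacBook screen, and ‘floating’, ‘facedown’ and ‘tilted’ for the iPhone. Once the TransformAnimationPlugin is added, a saved state can also be played directly from code. A minimal sketch:

const transformAnim = viewer.getPlugin(TransformAnimationPlugin)!
// Animate the MacBook screen to its saved 'open' state over 500ms
await transformAnim.animateTransform(macbookScreen, 'open', 500)?.promise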

The camera views are stored in the plugin itself rather than with an object in the scene. We can preview and animate different camera views using the plugin UI. Here we have two sets of camera views, one for desktop and one for mobile (with different FoV/position).
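
As a preview of how these two sets are picked later: a small helper can choose the mobile variant of a view at runtime. This sketch assumes, as in the final code, that the mobile views are saved with a ‘2’ suffix in their names:

// Pick the mobile variant of a saved camera view on small screens
const isMobile = () => window.matchMedia('(max-width: 768px)').matches
const viewName = (key: string) => isMobile() ? key + '2' : key
// later: cameraView.animateToView(viewName('start'), 500)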

User interaction

Now that we have the scene set up with the models and animations, we can add user interaction to the scene. The idea is to tilt the model slightly when the user hovers over it and open it fully when clicked, along with animating the camera shots. Let’s do it step by step.

For the interaction we can use the PickingPlugin which provides events for handling hover and click interactions with 3D objects in the scene.

First, add the PickingPlugin to the viewer plugins:

plugins: [
   LoadingScreenPlugin, ProgressivePlugin, SSAAPlugin, ContactShadowGroundPlugin,
   CameraViewPlugin, TransformAnimationPlugin, PickingPlugin
]

This will now allow us to click on any object in the scene and it will be highlighted with a selection box.

Now we can configure the plugin to hide this box and subscribe to the events the plugin provides to handle the interactions.

// get the plugin instance from the viewer
const picking = viewer.getPlugin(PickingPlugin)!
const transformAnim = viewer.getPlugin(TransformAnimationPlugin)!

// disable the widget(3D bounding box) that is shown when an object is clicked
picking.widgetEnabled = false

// subscribe to the hitObject event. This is fired when the user clicks on the canvas.
picking.addEventListener('hitObject', async(e) => {
   const object = e.intersects.selectedObject as IObject3D
   // selectedObject is null when the user clicks the empty space
   if (!object) {
       // close the macbook screen and face down the iphone
      await transformAnim.animateTransform(macbookScreen, 'closed', 500)?.promise
      await transformAnim.animateTransform(iphone, 'facedown', 500)?.promise
      return
   }
   // get the device name from the object
   const device = deviceFromHitObject(object)
   // Change the selected object to the root of the device models. This is used by the widget or other plugins like TransformControlsPlugin to allow editing.
   e.intersects.selectedObject = device === 'macbook' ? macbook : iphone

   // Animate the transform state of the object based on the device name that is clicked
   if(device === 'macbook')
      await transformAnim.animateTransform(macbookScreen, 'open', 500)?.promise
   else if(device === 'iphone')
      await transformAnim.animateTransform(iphone, 'floating', 500)?.promise
})

Here the animateTransform function animates the object to a saved transformation state. It takes the object, the state name and the duration as arguments, and the returned promise can be used to wait for the animation to complete.
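
Since each call returns an animation object with a promise (or undefined when the state is not found, hence the ?.), animations can be run one after another by awaiting each promise, or together with Promise.all. A short sketch using the objects from above:

// Sequential: the iPhone starts moving only after the MacBook has closed
await transformAnim.animateTransform(macbookScreen, 'closed', 500)?.promise
await transformAnim.animateTransform(iphone, 'facedown', 500)?.promise

// Parallel: both animations run at the same time
await Promise.all([
  transformAnim.animateTransform(macbookScreen, 'closed', 500)?.promise,
  transformAnim.animateTransform(iphone, 'facedown', 500)?.promise,
])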

The deviceFromHitObject function is used to get the device name of the clicked object. This function iterates through the parents of the object to find the device model.

function deviceFromHitObject(object: IObject3D) {
   let device = ''
   object.traverseAncestors(o => {
      if (o === macbook) device = 'macbook'
      if (o === iphone) device = 'iphone'
   })
   return device
}

With this code, we can now interact with the scene by clicking the models to open and close the MacBook screen, and to float the iPhone or lay it face down.

Now we can also add camera animations, so that different camera angles are displayed as the user interacts with the scene.

Get the plugin instance:

const cameraView = viewer.getPlugin(CameraViewPlugin)!

Update the listener to animate the views using the animateToView function. The views are called ‘start’, ‘macbook’ and ‘iphone’ in the plugin.

const object = e.intersects.selectedObject as IObject3D
if (!object) {
   await Promise.all([
      transformAnim.animateTransform(macbookScreen, 'closed', 500)?.promise,
      transformAnim.animateTransform(iphone, 'facedown', 500)?.promise,
      cameraView.animateToView('start', 500),
   ])
   return
}
const device = deviceFromHitObject(object)
if(device === 'macbook') {
   await Promise.all([
     cameraView.animateToView('macbook', 500),
     transformAnim.animateTransform(macbookScreen, 'open', 500)?.promise
   ])
}else if(device === 'iphone') {
   await Promise.all([
     cameraView.animateToView('iphone', 500),
     transformAnim.animateTransform(iphone, 'floating', 500)?.promise
   ])
}

This will now also animate the camera to the respective views when the user clicks on the models.

Similarly, PickingPlugin provides an event hoverObjectChanged which can be used to handle hover interactions with the objects.

This is pretty much the same code, but we animate to different states (with different durations) when the user hovers over the objects. We don’t need to animate the camera here, because the user doesn’t click on the objects.

// We need to first enable hover events in the Picking Plugin (disabled by default)
picking.hoverEnabled = true

picking.addEventListener('hoverObjectChanged', async(e) => {
   const object = e.object as IObject3D
   if (!object) {
      await Promise.all([
         transformAnim.animateTransform(macbookScreen, 'closed', 250)?.promise,
         transformAnim.animateTransform(iphone, 'facedown', 250)?.promise,
      ])
      return
   }
   const device = deviceFromHitObject(object)
   if(device === 'macbook') {
      await transformAnim.animateTransform(macbookScreen, 'hover', 250)?.promise
   }else if(device === 'iphone') {
      await transformAnim.animateTransform(iphone, 'tilted', 250)?.promise
   }
})

With this in place, the MacBook screen opens slightly when you hover over it, and the iPhone tilts slightly.

Drop files

To allow users to drag images to display on the devices, we can use the DropzonePlugin provided by Threepipe. This plugin lets users drag and drop files onto the canvas and makes the files available to the code.

The plugin can be set up simply via the dropzone property in the ThreeViewer constructor. The plugin is then added and configured automatically.

Let’s set some options for processing the images dropped onto the canvas.

const viewer = new ThreeViewer({
  canvas: document.getElementById('threepipe-canvas') as HTMLCanvasElement,
  // ...,
  dropzone: {
    allowedExtensions: ['png', 'jpeg', 'jpg', 'webp', 'svg', 'hdr', 'exr'],
    autoImport: true,
    addOptions: {
      disposeSceneObjects: false,
      autoSetBackground: false,
      autoSetEnvironment: true, // when hdr, exr is dropped
    },
  },
  // ...,
});

We set autoSetEnvironment to true, which automatically sets the environment map of the scene when an HDR or EXR file is dropped onto the canvas. This way a user can drop their own environment map and it will be used for lighting.
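
If you would rather keep the lighting fixed, a sketch of the alternative: set autoSetEnvironment to false in the dropzone options and load an environment map yourself, with the same setEnvironmentMap call used in the starter code:

// Keep a fixed environment map for lighting, regardless of dropped files
await viewer.setEnvironmentMap('https://threejs.org/examples/textures/equirectangular/venice_sunset_1k.hdr', {
  setBackground: false, // use the environment only for lighting, not as the background
})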

Now, to set the dropped image on the devices, we can listen to the loadAsset event of the AssetManager and assign the image to the material of each device screen. This event fires because the DropzonePlugin automatically imports the dropped file (an image becomes a three.js Texture object) and loads it into the asset manager. For more control, you can also subscribe to the events of the DropzonePlugin directly and manage the files yourself.

// Listen to when a file is dropped
viewer.assetManager.addEventListener('loadAsset', (e)=>{
  if (!e.data?.isTexture) return
  const texture = e.data as ITexture
  texture.colorSpace = SRGBColorSpace
  // The file has different objects that have the material.
  const mbpScreen = viewer.scene.getObjectByName('Object_7')?.material as PhysicalMaterial
  const iPhoneScreen = viewer.scene.getObjectByName('xXDHkMplTIDAXLN')?.material as PhysicalMaterial
  console.log(mbpScreen, iPhoneScreen)
  if(!mbpScreen || !iPhoneScreen) return
  mbpScreen.color.set(0,0,0)
  mbpScreen.emissive.set(1,1,1)
  mbpScreen.roughness = 0.2
  mbpScreen.metalness = 0.8
  mbpScreen.map = null
  mbpScreen.emissiveMap = texture
  iPhoneScreen.emissiveMap = texture
  mbpScreen.setDirty()
  iPhoneScreen.setDirty()
})

This code listens to the loadAsset event and checks whether the loaded asset is a texture. If so, it sets the texture on the MacBook and iPhone screen materials. The texture is set as the emissive map of the material so the screen appears to glow, and the emissive color is set to white so the texture shows at full brightness. The other material changes only need to be made to the MacBook screen material and not to the iPhone, since the iPhone material was already configured in the editor.

Finishing touches

While interacting with the project, you may notice that the animations are not properly synchronized. This is because the animations are executed asynchronously and do not wait for the previous animation to finish.

To fix this, we need to maintain the state properly and wait for all animations to complete before changing the state.

Here is the final code, with correct state management and other improvements, in TypeScript. The JavaScript version can be found on Codepen.

import {
  CameraViewPlugin, CanvasSnapshotPlugin,
  ContactShadowGroundPlugin,
  IObject3D, ITexture,
  LoadingScreenPlugin, PhysicalMaterial,
  PickingPlugin,
  PopmotionPlugin, SRGBColorSpace,
  ThreeViewer,
  timeout,
  TransformAnimationPlugin,
  TransformControlsPlugin,
} from 'threepipe'
import {TweakpaneUiPlugin} from '@threepipe/plugin-tweakpane'

async function init() {

  const viewer = new ThreeViewer({
    canvas: document.getElementById('threepipe-canvas') as HTMLCanvasElement,
    msaa: true,
    renderScale: 'auto',
    dropzone: {
      allowedExtensions: ['png', 'jpeg', 'jpg', 'webp', 'svg', 'hdr', 'exr'],
      autoImport: true,
      addOptions: {
        disposeSceneObjects: false,
        autoSetBackground: false,
        autoSetEnvironment: true, // when hdr, exr is dropped
      },
    },
    plugins: [LoadingScreenPlugin, PickingPlugin, PopmotionPlugin,
      CameraViewPlugin, TransformAnimationPlugin,
      new TransformControlsPlugin(false),
      CanvasSnapshotPlugin,
      ContactShadowGroundPlugin],
  })

  const ui = viewer.addPluginSync(new TweakpaneUiPlugin(true))

  // Model configured in the threepipe editor with Camera Views and Transform Animations, check the tutorial to learn more.
  // Includes Models from Sketchfab by timblewee and polyman Studio and HDR from polyhaven/threejs.org
  // https://sketchfab.com/3d-models/apple-iphone-15-pro-max-black-df17520841214c1792fb8a44c6783ee7
  // https://sketchfab.com/3d-models/macbook-pro-13-inch-2020-efab224280fd4c3993c808107f7c0b38
  const devices = await viewer.load('./models/tabletop_macbook_iphone.glb')
  if (!devices) return

  const macbook = devices.getObjectByName('macbook')!
  const iphone = devices.getObjectByName('iphone')!

  const macbookScreen = macbook.getObjectByName('Bevels_2')!
  macbookScreen.name = 'Macbook Screen'

  // Canvas snapshot plugin can be used to download a snapshot of the canvas.
  ui.setupPluginUi(CanvasSnapshotPlugin, {expanded: false})
  // Add the object to the debug UI. The stored Transform objects can be seen and edited in the UI.
  ui.appendChild(macbookScreen.uiConfig, {expanded: false})
  ui.appendChild(iphone.uiConfig, {expanded: false})
  // Add the Camera View UI to the debug UI. The stored Camera Views can be seen and edited in the UI.
  ui.setupPluginUi(CameraViewPlugin, {expanded: false})
  ui.appendChild(viewer.scene.mainCamera.uiConfig)
  ui.setupPluginUi(TransformControlsPlugin, {expanded: true})

  // Listen to when an image is dropped and set it as the emissive map for the screens.
  viewer.assetManager.addEventListener('loadAsset', (e)=>{
    if (!e.data?.isTexture) return
    const texture = e.data as ITexture
    texture.colorSpace = SRGBColorSpace
    // The file has different objects that have the material.
    const mbpScreen = viewer.scene.getObjectByName('Object_7')?.material as PhysicalMaterial
    const iPhoneScreen = viewer.scene.getObjectByName('xXDHkMplTIDAXLN')?.material as PhysicalMaterial
    console.log(mbpScreen, iPhoneScreen)
    if(!mbpScreen || !iPhoneScreen) return
    mbpScreen.color.set(0,0,0)
    mbpScreen.emissive.set(1,1,1)
    mbpScreen.roughness = 0.2
    mbpScreen.metalness = 0.8
    mbpScreen.map = null
    mbpScreen.emissiveMap = texture
    iPhoneScreen.emissiveMap = texture
    mbpScreen.setDirty()
    iPhoneScreen.setDirty()
  })

  // Separate views are created in the file with different camera fields of view and positions to account for mobile screen.
  const isMobile = ()=>window.matchMedia('(max-width: 768px)').matches
  const viewName = (key: string) => isMobile() ? key + '2' : key

  const transformAnim = viewer.getPlugin(TransformAnimationPlugin)!
  const cameraView = viewer.getPlugin(CameraViewPlugin)!

  const picking = viewer.getPlugin(PickingPlugin)!
  // Disable widget(3D bounding box) in the Picking Plugin (enabled by default)
  picking.widgetEnabled = false
  // Enable hover events in the Picking Plugin (disabled by default)
  picking.hoverEnabled = true

  // Set initial state
  await transformAnim.animateTransform(macbookScreen, 'closed', 50)?.promise
  await transformAnim.animateTransform(iphone, 'facedown', 50)?.promise
  await cameraView.animateToView(viewName('start'), 50)

  // Track the current and the next state.
  const state = {
    focused: '',
    hover: '',
    animating: false,
  }
  const nextState = {
    focused: '',
    hover: '',
  }
  async function updateState() {
    if (state.animating) return
    const next = nextState
    if (next.focused === state.focused && next.hover === state.hover) return
    state.animating = true
    const isOpen = state.focused
    Object.assign(state, next)
    if (state.focused) {
      await Promise.all([
        transformAnim.animateTransform(macbookScreen, state.focused === 'macbook' ? 'open' : 'closed', 500)?.promise,
        transformAnim.animateTransform(iphone, state.focused === 'iphone' ? 'floating' : 'facedown', 500)?.promise,
        cameraView.animateToView(viewName(state.focused === 'macbook' ? 'macbook' : 'iphone'), 500),
      ])
    } else if (state.hover) {
      await Promise.all([
        transformAnim.animateTransform(macbookScreen, state.hover === 'macbook' ? 'hover' : 'closed', 250)?.promise,
        transformAnim.animateTransform(iphone, state.hover === 'iphone' ? 'tilted' : 'facedown', 250)?.promise,
      ])
    } else {
      const duration = isOpen ? 500 : 250
      await Promise.all([
        transformAnim.animateTransform(macbookScreen, 'closed', duration)?.promise,
        transformAnim.animateTransform(iphone, 'facedown', duration)?.promise,
        isOpen ? cameraView.animateToView(viewName('front'), duration) : null,
      ])
    }
    state.animating = false
  }
  async function setState(next: typeof nextState) {
    Object.assign(nextState, next)
    while (state.animating) await timeout(50)
    await updateState()
  }

  function deviceFromHitObject(object: IObject3D) {
    let device = ''
    object.traverseAncestors(o => {
      if (o === macbook) device = 'macbook'
      if (o === iphone) device = 'iphone'
    })
    return device
  }

  // Fired when the current hover object changes.
  picking.addEventListener('hoverObjectChanged', async(e) => {
    const object = e.object as IObject3D
    if (!object) {
      if (state.hover && !state.focused) await setState({hover: '', focused: ''})
      return
    }
    if (state.focused) return
    const device = deviceFromHitObject(object)
    await setState({hover: device, focused: ''})
  })

  // Fired when the user clicks on the canvas.
  picking.addEventListener('hitObject', async(e) => {
    const object = e.intersects.selectedObject as IObject3D
    if (!object) {
      if (state.focused) await setState({hover: '', focused: ''})
      return
    }
    const device = deviceFromHitObject(object)
    // change the selected object for transform controls.
    e.intersects.selectedObject = device === 'macbook' ? macbook : iphone
    await setState({focused: device, hover: ''})
  })

  // Close all devices when the user presses the Escape key.
  document.addEventListener('keydown', (ev)=>{
    if (ev.key === 'Escape' && state.focused) setState({hover: '', focused: ''})
  })

}

init()

Here we maintain the state of the scene and wait for the animations to finish before changing the state. This ensures that the animations are synchronized correctly and the user interactions are handled correctly. Since we are using a single nextState, only the last interaction is considered and the previous ones are ignored.

Also, the CanvasSnapshotPlugin and TransformControlsPlugin are added to the viewer so that users can take snapshots of the canvas and move/rotate the devices on the table. Check the debug UI for both plugins.

Check out the full project on Codepen or Github and experiment with the scene.

Codepen: https://codepen.io/repalash/pen/ExBXvby?editors=0010 (JS)

Github: https://github.com/repalash/threepipe-device-mockup-codrops (TS)

Next steps

This tutorial covers the basics of creating an interactive 3D device mockup showcase using Threepipe. You can further enhance the project by adding more models, animations, and interactions.

Extending the model can be done both in the editor and in the code. See the Threepipe website for more information.

Here are some ideas to expand the project:

  • Add some post-processing plugins like SSAO, SSR, etc. to enhance the visuals (see the sketch after this list).
  • Create a custom environment map or use another HDR image for the scene.
  • Add more 3D models and create a complete 3D environment.
  • Embed an iframe into the scene to display a website or video directly on the device screen.
  • Add video rendering to export 3D mockups of UI designs.
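
For the first idea, here is a minimal sketch of enabling ambient occlusion with the SSAOPlugin that ships with Threepipe (treat the exact setup as an assumption and check the Threepipe docs for the full list of post-processing plugins and their options):

import {SSAOPlugin} from 'threepipe';

// Add SSAO to the viewer; plugins can also be passed in the constructor's plugins array
viewer.addPluginSync(new SSAOPlugin());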