In this section, we'll explore how to make your 3D scenes come alive by responding to user interactions. We'll cover the fundamental mouse and touch events that allow users to click on objects, drag them around, and even perform more complex gestures. This interactivity is key to creating engaging and intuitive 3D web experiences.
Three.js doesn't directly handle input events like clicks or drags on 3D objects. Instead, it leverages the browser's standard DOM (Document Object Model) events. We'll typically attach event listeners to the HTML canvas element where our Three.js scene is rendered. This allows us to capture mouse and touch movements and then determine which 3D objects, if any, the user is interacting with.
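As a minimal sketch of that wiring (assuming `renderer` is the THREE.WebGLRenderer created during scene setup), attaching the listeners looks like this:

```javascript
// The renderer draws into an ordinary <canvas> element, so standard DOM
// event listeners can be attached to it directly.
const canvas = renderer.domElement;

// The handler functions referenced here are defined in the examples below.
canvas.addEventListener('mousemove', onMouseMove, false);
canvas.addEventListener('click', onClick, false);
```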
The core concept for interacting with 3D objects is 'raycasting'. Imagine a ray of light shooting out from the camera, through the mouse cursor's position on the screen, and into your 3D scene. If this ray intersects with any objects in the scene, we know the user is pointing at them. Three.js provides the Raycaster class to perform this operation.
```javascript
const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();

function onMouseMove(event) {
  // Calculate mouse position in normalized device coordinates (-1 to +1)
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;

  raycaster.setFromCamera(mouse, camera);
  const intersects = raycaster.intersectObjects(scene.children);

  if (intersects.length > 0) {
    // The first element of intersects is the object closest to the camera
    const intersectedObject = intersects[0].object;
    console.log('You are pointing at:', intersectedObject.name);
    // You can now change the object's color, scale, or perform other actions
  }
}
```

The raycaster.setFromCamera(mouse, camera) method is crucial: it takes your normalized mouse coordinates and the current camera, and configures the ray to originate at the camera and pass through the cursor's position in screen space. The raycaster.intersectObjects(scene.children) method then tests the array of objects you pass in (here, the scene's children) for intersections and returns the hits sorted from nearest to farthest, which is why intersects[0] is always the object closest to the camera.
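In practice you often don't want every object in the scene to respond to the pointer (helper grids, lights, a ground plane, and so on). A common pattern is to raycast against a dedicated array of interactive objects instead of scene.children. The snippet below is a small sketch of that idea; `interactiveObjects`, `cube`, and `sphere` are illustrative names from an assumed scene setup, not three.js API:

```javascript
// Keep an explicit list of the meshes that should react to the pointer.
// (cube and sphere are assumed to be meshes created earlier in the scene setup.)
const interactiveObjects = [cube, sphere];

// The optional second argument makes the test recursive, so descendants of the
// listed objects (e.g. meshes nested inside a THREE.Group) are checked as well.
const intersects = raycaster.intersectObjects(interactiveObjects, true);
```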
To handle actual clicks, we'll listen for the 'click' event on the canvas. Inside the click handler, we perform the same raycasting as above. If an object is intersected, we can trigger an action, such as changing its color, making it disappear, or displaying information related to it.
```javascript
function onClick(event) {
  // Calculate mouse position (same as in onMouseMove)
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;

  raycaster.setFromCamera(mouse, camera);
  const intersects = raycaster.intersectObjects(scene.children);

  if (intersects.length > 0) {
    const clickedObject = intersects[0].object;
    console.log('You clicked on:', clickedObject.name);

    // Example: change the object's color on click
    if (clickedObject.material && clickedObject.material.color) {
      clickedObject.material.color.setHex(Math.random() * 0xffffff);
    }
  }
}

canvas.addEventListener('click', onClick, false);
```

Dragging objects involves a bit more state management. We need to track when the user starts a drag (e.g., on 'mousedown'), which object they are trying to drag, and how their mouse movement translates into changes in the object's position. We also need to handle the end of the drag (e.g., on 'mouseup').
For touch devices, the principles are largely the same, but we listen for touch events like 'touchstart', 'touchmove', and 'touchend' instead. The event.touches property contains a list of the currently active touch points, which is what makes multi-touch gestures possible. For simplicity, we'll often focus on just the first touch point (event.touches[0]).
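As a brief sketch, a 'touchstart' handler can reuse the exact same raycasting logic by reading its coordinates from the first touch point; `onTouchStart` is just an illustrative name:

```javascript
function onTouchStart(event) {
  // Prevent the browser from scrolling or firing synthetic mouse events
  event.preventDefault();

  // Use the first active touch point for single-finger interaction
  const touch = event.touches[0];
  mouse.x = (touch.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(touch.clientY / window.innerHeight) * 2 + 1;

  raycaster.setFromCamera(mouse, camera);
  const intersects = raycaster.intersectObjects(scene.children);

  if (intersects.length > 0) {
    console.log('Touched:', intersects[0].object.name);
  }
}

// { passive: false } is needed so preventDefault() can actually block scrolling
canvas.addEventListener('touchstart', onTouchStart, { passive: false });
```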
The following diagram summarizes the flow from raw input event to action:

```mermaid
graph TD
    A["User Interacts (Mouse/Touch)"] --> B{Event Listener on Canvas}
    B --> C{Calculate Normalized Coordinates}
    C --> D[Create Raycaster Ray]
    D --> E{Ray Intersects Scene Objects?}
    E -- Yes --> F["Get Intersected Object(s)"]
    E -- No --> G[No Interaction]
    F --> H{"Action: Click? Drag Start? Drag Move? Drag End?"}
    H -- Click --> I[Trigger Click Action]
    H -- Drag Start --> J["Store Dragging Object & Initial Position"]
    H -- Drag Move --> K[Update Object Position Based on Mouse/Touch Movement]
    H -- Drag End --> L[Release Dragging Object]
```
Implementing drag functionality typically involves the following steps (a minimal sketch follows the list):
- On 'mousedown' or 'touchstart': Use raycasting to identify the object being clicked. If an object is found, store a reference to it and record the initial mouse/touch position.
- On 'mousemove' or 'touchmove': If an object is being dragged, calculate the difference in mouse/touch position from the start. Project this difference into 3D space and update the object's position accordingly.
- On 'mouseup' or 'touchend': Release the reference to the dragged object, signifying the end of the drag operation.
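One common way to turn 2D pointer movement into a 3D position is to raycast against an invisible helper plane that faces the camera and passes through the dragged object, which is the approach sketched below. Treat it as a minimal illustration rather than the only option: the handler names are ours, it assumes the dragged object is a direct child of the scene, and three.js also ships a ready-made DragControls addon that packages this logic up for you.

```javascript
// Minimal drag sketch: raycast against an invisible plane that faces the camera
// and passes through the selected object, then move the object to wherever the
// ray hits that plane on each mousemove.
const dragPlane = new THREE.Plane();
const planeIntersection = new THREE.Vector3();
let draggedObject = null;

function onDragStart(event) {
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
  raycaster.setFromCamera(mouse, camera);

  const intersects = raycaster.intersectObjects(scene.children);
  if (intersects.length > 0) {
    draggedObject = intersects[0].object;
    // Orient the helper plane to face the camera and pass through the object
    dragPlane.setFromNormalAndCoplanarPoint(
      camera.getWorldDirection(dragPlane.normal),
      draggedObject.position
    );
  }
}

function onDragMove(event) {
  if (!draggedObject) return;
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
  raycaster.setFromCamera(mouse, camera);

  // Move the object to the point where the ray crosses the helper plane
  // (assumes the object is a direct child of the scene, so local == world space)
  if (raycaster.ray.intersectPlane(dragPlane, planeIntersection)) {
    draggedObject.position.copy(planeIntersection);
  }
}

function onDragEnd() {
  draggedObject = null; // the drag is over
}

canvas.addEventListener('mousedown', onDragStart, false);
canvas.addEventListener('mousemove', onDragMove, false);
canvas.addEventListener('mouseup', onDragEnd, false);
```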
More advanced interactions like pinch-to-zoom or two-finger rotation can be achieved by analyzing the movement of multiple touch points. By tracking how the distance between two touch points changes (for pinch-to-zoom) or how the angle of the line between them changes (for rotation), you can translate these gestures into changes in the camera's zoom level or the scene's rotation.
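To close, here is a minimal pinch-to-zoom sketch. It reuses the canvas and camera from earlier in this section and maps the pinch to camera.position.z purely for simplicity; the handler names and the 0.01 sensitivity factor are arbitrary choices, and you could just as well drive a field-of-view change or an orbit-controls dolly instead.

```javascript
let previousPinchDistance = null;

// Distance in screen pixels between the first two active touch points
function getPinchDistance(event) {
  const dx = event.touches[0].clientX - event.touches[1].clientX;
  const dy = event.touches[0].clientY - event.touches[1].clientY;
  return Math.sqrt(dx * dx + dy * dy);
}

function onPinchMove(event) {
  if (event.touches.length !== 2) return;
  event.preventDefault();

  const distance = getPinchDistance(event);
  if (previousPinchDistance !== null) {
    // Fingers moving apart -> zoom in, fingers moving together -> zoom out
    const delta = distance - previousPinchDistance;
    camera.position.z -= delta * 0.01; // 0.01 is an arbitrary sensitivity factor
  }
  previousPinchDistance = distance;
}

function onPinchEnd(event) {
  // Reset once fewer than two fingers remain on the screen
  if (event.touches.length < 2) previousPinchDistance = null;
}

canvas.addEventListener('touchmove', onPinchMove, { passive: false });
canvas.addEventListener('touchend', onPinchEnd, false);
```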