Many web apps support uploading video files. Whether it’s a media-focused platform (such as a video sharing service) or simply a profile page where users can add vlogs, videos are a powerful mechanism for distributing ideas.
For services providing image upload functionality, it is relatively simple to build in processes that extract smaller versions of the files (e.g. thumbnails) to be used as image previews. This allows other users to see roughly what an image is about before opening a larger version. It also enables more interesting, responsive, and attractive interfaces, since the smaller images load more quickly.
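For example, a minimal client-side sketch of that idea might downscale an image file with a canvas before upload (the getImageThumbnail name, 200-pixel target width, and JPEG quality below are illustrative choices rather than any particular library's API):

const getImageThumbnail = (file, targetWidth = 200) => {
  return new Promise((resolve, reject) => {
    const img = new Image();
    // reject if the file cannot be decoded as an image
    img.addEventListener('error', reject);
    img.addEventListener('load', () => {
      // scale the canvas down to the target width, keeping the aspect ratio
      const scale = targetWidth / img.width;
      const canvas = document.createElement('canvas');
      canvas.width = targetWidth;
      canvas.height = Math.round(img.height * scale);
      const ctx = canvas.getContext('2d');
      ctx.drawImage(img, 0, 0, canvas.width, canvas.height);
      URL.revokeObjectURL(img.src);
      // resolve with the thumbnail encoded as a JPEG blob
      canvas.toBlob(resolve, 'image/jpeg', 0.75);
    });
    img.src = URL.createObjectURL(file);
  });
};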
For videos, however, the process is less obvious. When using the native video tag in particular, some browsers display only an empty black rectangle until the video begins playing. This makes for uglier interfaces in which users get no preview of the content before the video plays.
However, preview thumbnails can be extracted from your videos. These can then be stored alongside the video and displayed in place of it on a webpage, providing context before the video plays.
This can be done on the user’s device as part of the video selection flow, so no extra server-side processing is required. It essentially involves “playing” the video in the background in a hidden player for a second or two, and then extracting the current frame as an image to be stored.
The code below should help explain the process in more detail.
const getVideoPreview = (file, time) => {
  return new Promise((resolve, reject) => {
    // create an off-screen video player for the selected file
    const player = document.createElement('video');
    const url = URL.createObjectURL(file);
    // reject if the file cannot be loaded or decoded as a video
    player.addEventListener('error', () => {
      URL.revokeObjectURL(url);
      reject(new Error('Could not load the video file'));
    });
    // when the metadata loads, the dimensions are known and we can seek…
    player.addEventListener('loadedmetadata', () => {
      // when the video has seeked, the frame at `time` is ready to draw…
      player.addEventListener('seeked', () => {
        // create a canvas and draw the current frame onto it
        const canvas = document.createElement('canvas');
        canvas.width = player.videoWidth;
        canvas.height = player.videoHeight;
        const ctx = canvas.getContext('2d');
        ctx.drawImage(player, 0, 0, canvas.width, canvas.height);
        // the object URL is no longer needed once the frame has been drawn
        URL.revokeObjectURL(url);
        // resolve with the frame encoded as a JPEG blob
        canvas.toBlob(resolve, 'image/jpeg', 0.75);
      });
      player.currentTime = time;
    });
    player.setAttribute('src', url);
    player.load();
  });
};
Elsewhere in your code, you could now call const videoPreview = await getVideoPreview(file, 1); in order to get an image file representing the frame one second into the video. Then both file (the video selected by the user from the filesystem) and videoPreview (the preview image) can be separately uploaded.
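As a rough sketch of how this might be wired up (the file input selector and the upload endpoints below are placeholders for whatever your app actually uses):

// illustrative only: the input selector and upload endpoints are placeholders
const input = document.querySelector('input[type="file"]');
input.addEventListener('change', async () => {
  const file = input.files[0];
  if (!file) return;
  // grab a frame from one second into the video
  const videoPreview = await getVideoPreview(file, 1);
  // upload the original video and the generated preview separately
  await fetch('/upload/video', { method: 'POST', body: file });
  await fetch('/upload/preview', { method: 'POST', body: videoPreview });
});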
When you later want to show the video, you can load the preview image first and only begin playing the video itself once the user chooses to play it.
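One simple way to do this, assuming the video and its preview have been stored somewhere and their URLs are known (the URLs below are placeholders), is to set the preview image as the video element’s poster so it is shown until playback begins:

// placeholder URLs: point these at wherever the video and preview were stored
const video = document.createElement('video');
video.src = '/media/my-video.mp4';
video.poster = '/media/my-video-preview.jpg';
video.controls = true;
document.body.appendChild(video);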