fluent-ffmpeg / node-fluent-ffmpeg

A fluent API to FFMPEG (http://www.ffmpeg.org)

Output video is not seekable when saving it to S3 with writeToStream() or pipe().

MohitKumarDubey opened this issue · comments

Version information

  • fluent-ffmpeg version: ^2.1.2
  • ffmpeg version: 4.2.9
  • OS: Windows

Code to reproduce


const ffmpeg = require('fluent-ffmpeg');
const AWS = require('aws-sdk');
const stream = require('stream');
const fs = require('fs');
// configData and inputVideoURL are assumed to be defined elsewhere
// Configure AWS
AWS.config.update({
  accessKeyId: configData.accessKeyId,
  secretAccessKey: configData.secretAccessKey,
  region: configData.region
});
const bucketNameOfS3 = configData.bucketNameOfS3;
const s3 = new AWS.S3({
  apiVersion: '2006-03-01',
  signatureVersion: 'v4',
  accessKeyId: configData.accessKeyId,
  secretAccessKey: configData.secretAccessKey,
  region: configData.region
});


const command = ffmpeg(inputVideoURL)
  .videoCodec('libx264')
  .audioCodec('aac')
  .size('854x480') // set the desired resolution (480p)
  .outputFormat('mp4')
  .addOption('-movflags', 'frag_keyframe+empty_moov')
  .on('progress', progress => console.log(progress))
  .on('stderr', line => console.log(line))
  .on('start', commandLine => console.log('Spawned FFmpeg with command: ' + commandLine))
  .on('end', () => console.log('Transcoding finished.'))
  .on('error', err => console.error('Error:', err))


// Save the output into S3 using a write stream.
const outputPathInS3 = 'StreamCheck/output2.mp4'
command.writeToStream(uploadFolderFileWriteStream(outputPathInS3));


/*
* Stores the output file in the "StreamCheck" folder of the S3 bucket using a write stream.
*/
function uploadFolderFileWriteStream(fileName) {
  try {
    const pass = new stream.PassThrough();

    const params = {
      Bucket: bucketNameOfS3,
      Key: fileName,
      Body: pass,
      ACL: "public-read",
      ContentType: 'video/mp4',
    };
 
    const upload = s3.upload(params);

    upload.on('error', function(err) {
      console.log("Error uploading to S3:", err);
    });

    upload.send(function(err, data) {
      if(err) console.log(err);
      else console.log("Upload to S3 completed:", data);
    });

    return pass;
  } catch (err) {
    console.log("[S3 createWriteStream]", err);
  }
}


(note: if the problem only happens with some inputs, include a link to such an input file)

Expected results

I expect the output video to start playing in the browser without the whole file being downloaded first. For example, if the output video is 10 GB and I open its object URL in the browser, playback should start without loading the entire 10 GB.

Observed results

If the output video is 10 GB and I open its object URL in the browser, the video does not play until it has been downloaded completely: the whole 10 GB is loaded first, and only then does playback start. I think the problem is the 'moov' atom positioning.

If I save the output video to a local file without using a write stream or pipe, and then upload that file to S3, the video plays from its object URL without loading completely.

Checklist

  • I have read the FAQ
  • I tried the same with command line ffmpeg and it works correctly (hint: if the problem also happens this way, this is an ffmpeg problem and you're not reporting it to the right place)
  • I have included full stderr/stdout output from ffmpeg

Have you resolved the issue?

https://ffmpeg.org/ffmpeg-formats.html#Fragmentation
The ‘mov’, ‘mp4’, and ‘ismv’ muxers support fragmentation. Normally, a MOV/MP4 file has all the metadata about all packets stored in one location.

This data is usually written at the end of the file, but it can be moved to the start for better playback by adding +faststart to the -movflags, or by using the qt-faststart tool.

The +faststart step runs AFTER the conversion finishes (it rewrites the file to move moov to the front), so you can't pipe the output to S3 while it is still being converted.