Is it possible to upload stream on amazon s3 from browser?


Problem description

I want to capture a webcam video stream and stream it directly to S3 storage.

I've learned that you can upload via stream to S3: https://aws.amazon.com/blogs/aws/amazon-s3-multipart-upload/

I've learned that you can upload via browser: http://docs.aws.amazon.com/AmazonS3/latest/dev/HTTPPOSTExamples.html#HTTPPOSTExamplesFileUpload

But I'm still lost on how to actually do it.

I need an example of someone uploading a getUserMedia stream to S3 like that.

Buffer, binary data, multipart upload, stream... this is all beyond my knowledge. Stuff I wish I knew, but I don't even know where to learn it.

Answer

Currently, you cannot simply pass the media stream to any S3 method to do the multipart upload automatically.

But there is an event called dataavailable which produces chunks of video at a given time interval. So we can subscribe to dataavailable and do the S3 multipart upload manually.

This approach brings some complications: say chunks of video are generated every second, but we don't know how long it takes to upload a chunk to S3. E.g. the upload can take 3 times longer due to the connection speed, so we can get stuck trying to make multiple PUT requests at the same time.
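This "one chunk at a time" idea can also be sketched without Rx.js, using a plain promise chain. The helper below is a hypothetical sketch (the name createSerialUploader and the uploadChunk callback are illustrative, not part of any SDK):

```javascript
// Hypothetical sketch: serialize chunk uploads with a promise chain,
// so each upload starts only after the previous one has finished.
function createSerialUploader(uploadChunk) {
  let queue = Promise.resolve();
  return (chunk) => {
    // Chain the next upload onto the tail of the queue.
    queue = queue.then(() => uploadChunk(chunk));
    return queue;
  };
}

// Usage: call enqueue(event.data) from the 'dataavailable' handler, e.g.
// const enqueue = createSerialUploader(blob => s3.uploadPart({ ... }).promise());
```

The returned promise resolves when that particular chunk's upload finishes, so the caller can also track per-chunk completion.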

The potential solution is to upload the chunks one by one, not starting the upload of the next chunk until the previous one is uploaded. Here is a snippet showing how this can be handled using Rx.js and the AWS SDK. Please see my comments.

// Configure the AWS SDK. For simplicity this example uses a plain access key and secret;
// prefer temporary credentials (e.g. via Cognito) in production.
AWS.config.update({
  region: "us-east-1",
  credentials: {
    accessKeyId: "YOUR_ACCESS_KEY",
    secretAccessKey: "YOUR_SECRET_KEY"
  }
});

const s3 = new AWS.S3();
const BUCKET_NAME = "video-uploads-123";

let videoStream;
// We want to see what camera is recording so attach the stream to video element.
navigator.mediaDevices
  .getUserMedia({
    audio: true,
    video: { width: 1280, height: 720 }
  })
  .then(stream => {
    console.log("Successfully received user media.");

    const $mirrorVideo = document.querySelector("video#mirror");
    $mirrorVideo.srcObject = stream;

    // Saving the stream to create the MediaRecorder later.
    videoStream = stream;
  })
  .catch(error => console.error("navigator.getUserMedia error: ", error));

let mediaRecorder;

const $startButton = document.querySelector("button#start");
$startButton.onclick = () => {
  // Getting the MediaRecorder instance.
  // I took the snippet from here: https://github.com/webrtc/samples/blob/gh-pages/src/content/getusermedia/record/js/main.js
  let options = { mimeType: "video/webm;codecs=vp9" };
  if (!MediaRecorder.isTypeSupported(options.mimeType)) {
    console.log(options.mimeType + " is not Supported");
    options = { mimeType: "video/webm;codecs=vp8" };
    if (!MediaRecorder.isTypeSupported(options.mimeType)) {
      console.log(options.mimeType + " is not Supported");
      options = { mimeType: "video/webm" };
      if (!MediaRecorder.isTypeSupported(options.mimeType)) {
        console.log(options.mimeType + " is not Supported");
        options = { mimeType: "" };
      }
    }
  }

  try {
    mediaRecorder = new MediaRecorder(videoStream, options);
  } catch (e) {
    console.error("Exception while creating MediaRecorder: " + e);
    return;
  }

  // Generate the file name to upload. For simplicity we're going to use the current date.
  const s3Key = `video-file-${new Date().toISOString()}.webm`;
  const params = {
    Bucket: BUCKET_NAME,
    Key: s3Key
  };

  let uploadId;

  // We are going to handle everything as a chain of Observable operators.
  Rx.Observable
    // First create the multipart upload and wait until it's created.
    .fromPromise(s3.createMultipartUpload(params).promise())
    .switchMap(data => {
      // Save the uploadId as we'll need it to complete the multipart upload.
      uploadId = data.UploadId;
      mediaRecorder.start(15000);

      // Then track all 'dataavailable' events. Each event brings a blob (binary data) with a part of video.
      return Rx.Observable.fromEvent(mediaRecorder, "dataavailable");
    })
    // Track the dataavailable event until the 'stop' event is fired.
    // MediaRecorder emits "stop" once it has stopped AND has emitted all "dataavailable" events,
    // so we are not losing data. See the docs here: https://developer.mozilla.org/en-US/docs/Web/API/MediaRecorder/stop
    .takeUntil(Rx.Observable.fromEvent(mediaRecorder, "stop"))
    .map((event, index) => {
      // Show how much binary data we have recorded.
      const $bytesRecorded = document.querySelector("span#bytesRecorded");
      $bytesRecorded.textContent =
        parseInt($bytesRecorded.textContent) + event.data.size; // Use frameworks in prod. This is just an example.

      // Take the blob and its part number and pass them down.
      return { blob: event.data, partNumber: index + 1 };
    })
    // This operator means the following: when you receive a blob - start uploading it.
    // Don't accept any other uploads until you finish uploading: http://reactivex.io/rxjs/class/es6/Observable.js~Observable.html#instance-method-concatMap
    .concatMap(({ blob, partNumber }) => {
      return (
        s3
          .uploadPart({
            Body: blob,
            Bucket: BUCKET_NAME,
            Key: s3Key,
            PartNumber: partNumber,
            UploadId: uploadId,
            ContentLength: blob.size
          })
          .promise()
          // Save the ETag as we'll need it to complete the multipart upload
          .then(({ ETag }) => {
            // Show how many bytes we have uploaded.
            const $bytesUploaded = document.querySelector("span#bytesUploaded");
            $bytesUploaded.textContent =
              parseInt($bytesUploaded.textContent) + blob.size;

            return { ETag, PartNumber: partNumber };
          })
      );
    })
    // Wait until all uploads are completed, then convert the results into an array.
    .toArray()
    // Call the complete multipart upload and pass the part numbers and ETags to it.
    .switchMap(parts => {
      return s3
        .completeMultipartUpload({
          Bucket: BUCKET_NAME,
          Key: s3Key,
          UploadId: uploadId,
          MultipartUpload: {
            Parts: parts
          }
        })
        .promise();
    })
    .subscribe(
      ({ Location }) => {
        // completeMultipartUpload returns the location, so show it.
        const $location = document.querySelector("span#location");
        $location.textContent = Location;

        console.log("Uploaded successfully.");
      },
      err => {
        console.error(err);

        if (uploadId) {
          // Abort the multipart upload on any failure,
          // so we don't get charged for keeping it pending forever.
          s3
            .abortMultipartUpload({
              Bucket: BUCKET_NAME,
              UploadId: uploadId,
              Key: s3Key
            })
            .promise()
            .then(() => console.log("Multipart upload aborted"))
            .catch(e => console.error(e));
        }
      }
    );
};

const $stopButton = document.querySelector("button#stop");
$stopButton.onclick = () => {
  // After we call .stop(), MediaRecorder emits all the data it has via 'dataavailable',
  // and then finishes the stream by emitting the 'stop' event.
  mediaRecorder.stop();
};

button {
    margin: 0 3px 10px 0;
    padding-left: 2px;
    padding-right: 2px;
    width: 99px;
}

button:last-of-type {
    margin: 0;
}

p.borderBelow {
    margin: 0 0 20px 0;
    padding: 0 0 20px 0;
}

video {
    height: 232px;
    margin: 0 12px 20px 0;
    vertical-align: top;
    width: calc(20em - 10px);
}


video:last-of-type {
    margin: 0 0 20px 0;
}

<div id="container">
	<video id="mirror" autoplay muted></video>

	<div>
		<button id="start">Start Streaming</button>
		<button id="stop">Stop Streaming</button>
	</div>

	<div>
		<span>Recorded: <span id="bytesRecorded">0</span> bytes</span>;
		<span>Uploaded: <span id="bytesUploaded">0</span> bytes</span>
	</div>

	<div>
		<span id="location"></span>
	</div>
</div>

<!-- include adapter for srcObject shim -->
<script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/aws-sdk/2.175.0/aws-sdk.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.5.6/Rx.js"></script>

Caveats:

  • All multipart uploads need to be either completed or aborted. You will be charged if you leave one pending forever. See the "Note" here.
  • Each chunk that you upload (except the last one) must be at least 5 MB, or an error will be thrown. See the details here. So you need to adjust the timeframe/resolution accordingly.
  • When you instantiate the SDK, make sure the credentials carry a policy with the s3:PutObject permission.
  • You need to expose the ETag header in your bucket CORS configuration. Here is an example of a CORS configuration:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <ExposeHeader>ETag</ExposeHeader>
    <AllowedHeader>*</AllowedHeader>
</CORSRule>
</CORSConfiguration>
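On the 5 MB caveat above: instead of tuning the timeslice, another option is to buffer recorded chunks until they add up to the minimum part size before calling uploadPart. Below is a minimal hypothetical sketch (createPartBuffer is an illustrative name); it works on byte arrays for simplicity, while in the browser you would concatenate Blobs, e.g. with new Blob(pending):

```javascript
// Hypothetical sketch: accumulate chunks until they reach S3's minimum
// part size, and emit a combined part only once the threshold is met.
const MIN_PART_SIZE = 5 * 1024 * 1024; // 5 MB minimum for non-final parts

function createPartBuffer(minSize = MIN_PART_SIZE) {
  let pending = [];
  let pendingSize = 0;

  const combine = () => {
    // Concatenate the pending byte arrays into one part.
    const part = new Uint8Array(pendingSize);
    let offset = 0;
    for (const chunk of pending) {
      part.set(chunk, offset);
      offset += chunk.length;
    }
    pending = [];
    pendingSize = 0;
    return part;
  };

  return {
    // Add a chunk; returns a combined part once enough data accumulated, else null.
    push(chunk) {
      pending.push(chunk);
      pendingSize += chunk.length;
      return pendingSize >= minSize ? combine() : null;
    },
    // Flush the remainder as the final part (allowed to be under 5 MB).
    flush() {
      return pendingSize > 0 ? combine() : null;
    }
  };
}
```

With this, the 'dataavailable' handler pushes each chunk into the buffer and only uploads when push() returns a part; flush() is called once after the 'stop' event to upload the final, possibly smaller, part.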

Limitations:

  • Use the MediaRecorder API with caution, as it is still not universally supported. Make sure you check caniuse.com before using it in production.
