AWS Elemental MediaConvert CreateJob Example Using the AWS SDK for .NET


Question

I am trying to change the input clipping StartTimecode and EndTimecode of my file input, and save the clipped video to the file output destination in an S3 bucket.

Currently, I am able to perform the operation using the code below:

using System;
using System.Threading.Tasks;
 using Amazon.MediaConvert;
 using Amazon.MediaConvert.Model;

namespace MediaConvertNET
{

class Program
{

    static async Task MainAsync()
    {
        String mediaConvertRole = "Your AWS Elemental MediaConvert role ARN";
        String fileInput = "s3://yourinputfile";
        String fileOutput = "s3://youroutputdestination";
        String mediaConvertEndpoint = "";

        // If we do not have our customer-specific endpoint
        if (String.IsNullOrEmpty(mediaConvertEndpoint))
        {
            // Obtain the customer-specific MediaConvert endpoint
            AmazonMediaConvertClient client = new AmazonMediaConvertClient("AccessKey", "AccessSecret", Amazon.RegionEndpoint.USWest1);
            DescribeEndpointsRequest describeRequest = new DescribeEndpointsRequest();

            DescribeEndpointsResponse describeResponse = await client.DescribeEndpointsAsync(describeRequest);
            mediaConvertEndpoint = describeResponse.Endpoints[0].Url;
        }

        // Since we have a service url for MediaConvert, we do not
        // need to set RegionEndpoint. If we do, the ServiceURL will
        // be overwritten
        AmazonMediaConvertConfig mcConfig = new AmazonMediaConvertConfig
        {
            ServiceURL = mediaConvertEndpoint,
        };

        AmazonMediaConvertClient mcClient = new AmazonMediaConvertClient("AccessKey", "AccessSecret", mcConfig);
        CreateJobRequest createJobRequest = new CreateJobRequest();

        createJobRequest.Role = mediaConvertRole;
        createJobRequest.UserMetadata.Add("Customer", "Amazon");

        #region Create job settings
        JobSettings jobSettings = new JobSettings();
        jobSettings.AdAvailOffset = 0;
        jobSettings.TimecodeConfig = new TimecodeConfig();
        jobSettings.TimecodeConfig.Source = TimecodeSource.EMBEDDED;
        createJobRequest.Settings = jobSettings;

        #region OutputGroup
        OutputGroup ofg = new OutputGroup();
        ofg.Name = "File Group";
        ofg.OutputGroupSettings = new OutputGroupSettings();
        ofg.OutputGroupSettings.Type = OutputGroupType.FILE_GROUP_SETTINGS;
        ofg.OutputGroupSettings.FileGroupSettings = new FileGroupSettings();
        ofg.OutputGroupSettings.FileGroupSettings.Destination = fileOutput;

        Output output = new Output();
        output.NameModifier = "_1";

        #region VideoDescription
        VideoDescription vdes = new VideoDescription();
        output.VideoDescription = vdes;
        vdes.ScalingBehavior = ScalingBehavior.DEFAULT;
        vdes.TimecodeInsertion = VideoTimecodeInsertion.DISABLED;
        vdes.AntiAlias = AntiAlias.ENABLED;
        vdes.Sharpness = 50;
        vdes.AfdSignaling = AfdSignaling.NONE;
        vdes.DropFrameTimecode = DropFrameTimecode.ENABLED;
        vdes.RespondToAfd = RespondToAfd.NONE;
        vdes.ColorMetadata = ColorMetadata.INSERT;
        vdes.CodecSettings = new VideoCodecSettings();
        vdes.CodecSettings.Codec = VideoCodec.H_264;
        H264Settings h264 = new H264Settings();
        h264.InterlaceMode = H264InterlaceMode.PROGRESSIVE;
        h264.NumberReferenceFrames = 3;
        h264.Syntax = H264Syntax.DEFAULT;
        h264.Softness = 0;
        h264.GopClosedCadence = 1;
        h264.GopSize = 90;
        h264.Slices = 1;
        h264.GopBReference = H264GopBReference.DISABLED;
        h264.SlowPal = H264SlowPal.DISABLED;
        h264.SpatialAdaptiveQuantization = H264SpatialAdaptiveQuantization.ENABLED;
        h264.TemporalAdaptiveQuantization = H264TemporalAdaptiveQuantization.ENABLED;
        h264.FlickerAdaptiveQuantization = H264FlickerAdaptiveQuantization.DISABLED;
        h264.EntropyEncoding = H264EntropyEncoding.CABAC;
        h264.Bitrate = 2000000;
        h264.FramerateControl = H264FramerateControl.SPECIFIED;
        h264.RateControlMode = H264RateControlMode.CBR;
        h264.CodecProfile = H264CodecProfile.MAIN;
        h264.Telecine = H264Telecine.NONE;
        h264.MinIInterval = 0;
        h264.AdaptiveQuantization = H264AdaptiveQuantization.HIGH;
        h264.CodecLevel = H264CodecLevel.AUTO;
        h264.FieldEncoding = H264FieldEncoding.PAFF;
        h264.SceneChangeDetect = H264SceneChangeDetect.ENABLED;
        h264.QualityTuningLevel = H264QualityTuningLevel.SINGLE_PASS;
        h264.FramerateConversionAlgorithm = H264FramerateConversionAlgorithm.DUPLICATE_DROP;
        h264.UnregisteredSeiTimecode = H264UnregisteredSeiTimecode.DISABLED;
        h264.GopSizeUnits = H264GopSizeUnits.FRAMES;
        h264.ParControl = H264ParControl.SPECIFIED;
        h264.NumberBFramesBetweenReferenceFrames = 2;
        h264.RepeatPps = H264RepeatPps.DISABLED;
        h264.FramerateNumerator = 30;
        h264.FramerateDenominator = 1;
        h264.ParNumerator = 1;
        h264.ParDenominator = 1;
        output.VideoDescription.CodecSettings.H264Settings = h264;
        #endregion VideoDescription

        #region AudioDescription
        AudioDescription ades = new AudioDescription();
        ades.LanguageCodeControl = AudioLanguageCodeControl.FOLLOW_INPUT;
        // This name matches one specified in the Inputs below
        ades.AudioSourceName = "Audio Selector 1";
        ades.CodecSettings = new AudioCodecSettings();
        ades.CodecSettings.Codec = AudioCodec.AAC;
        AacSettings aac = new AacSettings();
        aac.AudioDescriptionBroadcasterMix = AacAudioDescriptionBroadcasterMix.NORMAL;
        aac.RateControlMode = AacRateControlMode.CBR;
        aac.CodecProfile = AacCodecProfile.LC;
        aac.CodingMode = AacCodingMode.CODING_MODE_2_0;
        aac.RawFormat = AacRawFormat.NONE;
        aac.SampleRate = 48000;
        aac.Specification = AacSpecification.MPEG4;
        aac.Bitrate = 64000;
        ades.CodecSettings.AacSettings = aac;
        output.AudioDescriptions.Add(ades);
        #endregion AudioDescription

        #region Mp4 Container
        output.ContainerSettings = new ContainerSettings();
        output.ContainerSettings.Container = ContainerType.MP4;
        Mp4Settings mp4 = new Mp4Settings();
        mp4.CslgAtom = Mp4CslgAtom.INCLUDE;
        mp4.FreeSpaceBox = Mp4FreeSpaceBox.EXCLUDE;
        mp4.MoovPlacement = Mp4MoovPlacement.PROGRESSIVE_DOWNLOAD;
        output.ContainerSettings.Mp4Settings = mp4;
        #endregion Mp4 Container

        ofg.Outputs.Add(output);
        createJobRequest.Settings.OutputGroups.Add(ofg);
        #endregion OutputGroup

        #region Input
        Input input = new Input();

        InputClipping ip = new InputClipping();
        ip.StartTimecode = "00:00:00:00";
        ip.EndTimecode = "00:00:05:00";

        input.FilterEnable = InputFilterEnable.AUTO;
        input.PsiControl = InputPsiControl.USE_PSI;
        input.FilterStrength = 0;
        input.DeblockFilter = InputDeblockFilter.DISABLED;
        input.DenoiseFilter = InputDenoiseFilter.DISABLED;
        input.TimecodeSource = InputTimecodeSource.ZEROBASED;
        input.InputClippings.Add(ip);
        input.FileInput = fileInput;

        AudioSelector audsel = new AudioSelector();
        audsel.Offset = 0;
        audsel.DefaultSelection = AudioDefaultSelection.NOT_DEFAULT;
        audsel.ProgramSelection = 1;
        audsel.SelectorType = AudioSelectorType.TRACK;
        audsel.Tracks.Add(1);
        input.AudioSelectors.Add("Audio Selector 1", audsel);

        input.VideoSelector = new VideoSelector();
        input.VideoSelector.ColorSpace = ColorSpace.FOLLOW;

        createJobRequest.Settings.Inputs.Add(input);
        #endregion Input
        #endregion Create job settings

        try
        {
            CreateJobResponse createJobResponse = await mcClient.CreateJobAsync(createJobRequest);
            Console.WriteLine("Job Id: {0}", createJobResponse.Job.Id);
        }
        catch (BadRequestException bre)
        {
            // If the endpoint was bad
            if (bre.Message.StartsWith("You must use the customer-"))
            {
                // The exception contains the correct endpoint; extract it
                mediaConvertEndpoint = bre.Message.Split('\'')[1];
                // Code to retry query
            }
        }

    }



    static void Main(string[] args)
    {
        MainAsync().GetAwaiter().GetResult();
    }
}

}

A few things I would like to know:

  1. Is it mandatory to create the VideoDescription and AudioDescription objects when I only want to perform a clipping operation?

 InputClipping ip = new InputClipping();
 ip.StartTimecode = "00:00:00:00";
 ip.EndTimecode = "00:00:05:00";

  2. After calling CreateJobAsync (CreateJobResponse createJobResponse = await mcClient.CreateJobAsync(createJobRequest);), how can I check whether my job has completed?

  3. If the job completes, how can I get the URL of the newly created output file in the S3 bucket? I want to save that URL to my database.

Answer

For question 1: depending on your workflow, your output object must contain one of the following description combinations:

  • VideoDescription and AudioDescription (video and audio muxed)
  • VideoDescription (video only)
  • AudioDescription (audio only)

This ensures your output has video and audio muxed, video only, or audio only.

MediaConvert will encode the input within the clipping region you define. The service does not pass video or audio through to the output untouched (sometimes called transmuxing in the video community). Think of the output of MediaConvert as a brand new file.

Question 2: I would advise using CloudWatch Events to monitor job progression. See the following documentation:
https://docs.aws.amazon.com/mediaconvert/latest/ug/how-mediaconvert-jobs-progress.html
https://docs.aws.amazon.com/mediaconvert/latest/ug/cloudwatch_events.html
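If you prefer to poll from your application rather than wire up CloudWatch Events, the SDK also exposes GetJob, which returns the job's current status. A minimal sketch, assuming the same customer-endpoint-configured mcClient and the job ID returned by CreateJob (the 10-second delay is an arbitrary choice, not a service recommendation):

```csharp
using System;
using System.Threading.Tasks;
using Amazon.MediaConvert;
using Amazon.MediaConvert.Model;

static class JobPoller
{
    // Polls GetJob until the job reaches a terminal state
    // (COMPLETE, ERROR, or CANCELED) and returns that status.
    public static async Task<JobStatus> WaitForJobAsync(
        AmazonMediaConvertClient mcClient, string jobId)
    {
        while (true)
        {
            GetJobResponse response =
                await mcClient.GetJobAsync(new GetJobRequest { Id = jobId });
            JobStatus status = response.Job.Status;

            if (status == JobStatus.COMPLETE ||
                status == JobStatus.ERROR ||
                status == JobStatus.CANCELED)
            {
                return status;
            }

            // Back off between polls; GetJob is rate-limited per account.
            await Task.Delay(TimeSpan.FromSeconds(10));
        }
    }
}
```

Note that CloudWatch Events remains the recommended approach for production workloads, since polling consumes API quota and adds latency.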

Question 3: See my post in "How to retrieve list of encoded files and paths after a done job in MediaConvert?"

You can get this information by collecting the COMPLETE CloudWatch event.
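For reference, the COMPLETE event that MediaConvert emits through CloudWatch Events carries the output paths under detail.outputGroupDetails. A trimmed example of the event payload (the jobId, path, and duration values here are illustrative):

```json
{
  "source": "aws.mediaconvert",
  "detail-type": "MediaConvert Job State Change",
  "detail": {
    "status": "COMPLETE",
    "jobId": "1234567890123-abc123",
    "outputGroupDetails": [
      {
        "outputDetails": [
          {
            "outputFilePaths": [
              "s3://youroutputdestination/yourinput_1.mp4"
            ],
            "durationInMs": 5000
          }
        ]
      }
    ]
  }
}
```

A Lambda function or EventBridge rule target can read detail.outputGroupDetails[0].outputDetails[0].outputFilePaths[0] and write that S3 path to your database.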

