How to serve HLS streams from S3 in a secure way (authorized & authenticated)
Problem:
I am storing number of HLS streams in S3 with given file structure:
Video1
├── hls3
│   ├── hlsv3-master.m3u8
│   ├── media-1
│   ├── media-2
│   ├── media-3
│   ├── media-4
│   └── media-5
└── hls4
    ├── hlsv4-master.m3u8
    ├── media-1
    ├── media-2
    ├── media-3
    ├── media-4
    └── media-5
In my user API I know exactly which user has access to which video content, but I also need to ensure that video links are not shareable and are only accessible by users with the right permissions.
Solutions:
1) Use signed / temporary S3 URLs for private S3 content. Whenever a client wants to play a specific video, it sends a request to my API. If the user has the right permissions, the API generates a signed URL and returns it to the client, which passes it to the player.
The problem I see here is that the real video content is stored in dozens of segment files in the media-* directories, and I do not really see how I can protect all of them - would I need to sign each of the segment file URLs separately?
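For context on what "signing" a single object URL involves, here is a minimal stdlib-only sketch of SigV4 query-string presigning for one S3 object. It is illustrative only (bucket, key, and credentials are made up); in practice you would use an SDK helper such as boto3's `generate_presigned_url` rather than hand-rolling the signature.

```python
# Sketch: SigV4 query-string presigning for a single S3 GET, stdlib only.
# All names and credentials below are placeholders, not real values.
import datetime
import hashlib
import hmac
import urllib.parse

def presign_get(bucket, key, region, access_key, secret_key,
                expires=3600, now=None):
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: sorted, fully URL-encoded key=value pairs.
    query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items()))
    # Canonical request: method, URI, query, headers, signed headers, payload.
    canonical = "\n".join([
        "GET", "/" + urllib.parse.quote(key), query,
        f"host:{host}\n", "host", "UNSIGNED-PAYLOAD"])
    to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical.encode()).hexdigest()])
    # Derive the signing key through the SigV4 HMAC chain.
    k = f"AWS4{secret_key}".encode()
    for part in (datestamp, region, "s3", "aws4_request"):
        k = hmac.new(k, part.encode(), hashlib.sha256).digest()
    sig = hmac.new(k, to_sign.encode(), hashlib.sha256).hexdigest()
    return (f"https://{host}/{urllib.parse.quote(key)}"
            f"?{query}&X-Amz-Signature={sig}")
```

Note that each such URL is valid for exactly one object, which is the crux of the problem with per-segment signing.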
2) S3 content is private. Video stream requests made by players go through my API or a separate reverse proxy. So whenever a client decides to play a specific video, the API / reverse proxy gets the request, performs authentication & authorization, and serves the right content (master playlist files & segments).
In this case I still need to make the S3 content private and accessible only by my API / reverse proxy. What is the recommended way here? S3 REST authentication via tokens?
3) Use encryption with a protected key. In this case all of the video segments are encrypted and publicly available. The key is also stored in S3 but is not publicly available. Every key request made by the player is authenticated & authorized by my API / reverse proxy.
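In HLS terms, option 3 maps to the standard `#EXT-X-KEY` mechanism: the media playlist points segments at a key URI, and the player fetches that key (from my API, which can gate it) before decrypting. A hypothetical playlist fragment, with a placeholder key endpoint, might look like:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-KEY:METHOD=AES-128,URI="https://api.example.com/keys/video1",IV=0x00000000000000000000000000000001
#EXTINF:9.009,
media-1/segment-00001.ts
```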
These are the 3 solutions I have in mind right now. I am not convinced by any of them. I am looking for something simple and bulletproof secure. Any recommendations / suggestions?
Used technology:
Solution:

"would I need to sign each of the segment file urls separately?"
If the player is requesting directly from S3, then yes. So that's probably not going to be the ideal approach.
One option is CloudFront in front of the bucket. CloudFront can be configured with an Origin Access Identity, which allows it to sign requests and send them to S3 so that it can fetch private S3 objects on behalf of an authorized user, and CloudFront supports both signed URLs (using a different algorithm than S3, with two important differences that I will explain below) and signed cookies. Signed requests and cookies in CloudFront work very similarly to each other, with the important difference being that a cookie can be set once, then automatically used by the browser for each subsequent request, avoiding the need to sign individual URLs. (Aha.)
For both signed URLs and signed cookies in CloudFront, if you use a custom policy you also get two additional features that are not easily done with S3:
- The policy associated with a CloudFront signature can allow a wildcard in the path, so you could authorize access to any file in, say, /media/Video1/* until the time the signature expires. S3 signed URLs do not support wildcards in any form -- an S3 URL can only be valid for a single object.

- As long as the CloudFront distribution is configured for IPv4 only, you can tie a signature to a specific client IP address, allowing access with that signature from only a single IP address (CloudFront now supports IPv6 as an optional feature, but it isn't currently compatible with this option). This is a bit aggressive and probably not desirable with a mobile user base, which will switch source addresses as users move between a provider network and Wi-Fi and back.
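A custom policy is just a small JSON document that gets base64-encoded with CloudFront's URL-safe character substitutions before being signed. The sketch below builds such a policy with a wildcard resource and an optional source-IP condition (the distribution domain and IP are placeholders); the RSA signing step itself, done with your CloudFront key pair, is omitted.

```python
# Sketch: build and encode a CloudFront custom policy.
# Domain and IP below are placeholders; signing is not shown.
import base64
import json
import time

def make_custom_policy(resource, expires_epoch, source_ip=None):
    # Wildcard Resource plus an expiry; optionally pin to a client IP range.
    condition = {"DateLessThan": {"AWS:EpochTime": expires_epoch}}
    if source_ip:
        condition["IpAddress"] = {"AWS:SourceIp": source_ip}
    return json.dumps(
        {"Statement": [{"Resource": resource, "Condition": condition}]},
        separators=(",", ":"))

def cloudfront_b64(data: bytes) -> str:
    # CloudFront's URL-safe base64: '+' -> '-', '=' -> '_', '/' -> '~'.
    return (base64.b64encode(data).decode()
            .replace("+", "-").replace("=", "_").replace("/", "~"))

policy = make_custom_policy("https://d111.cloudfront.net/media/Video1/*",
                            int(time.time()) + 3600,
                            source_ip="203.0.113.7/32")
encoded = cloudfront_b64(policy.encode())
```

The encoded policy travels alongside its signature, either as query parameters on a signed URL or as cookies (shown further below).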
Signed URLs must still be generated for all of the content links, but you can generate and sign a URL only once and then reuse the signature, just string-rewriting the URL for each file, making that option computationally less expensive... but still cumbersome. Signed cookies, on the other hand, should "just work" for any matching object.
Of course, adding CloudFront should also improve performance through caching and Internet path shortening, since the request hops onto the managed AWS network closer to the browser than it typically will for requests direct to S3. When using CloudFront, requests from the browser are sent to whichever of 60+ global "edge locations" is assumed to be nearest the browser making the request. CloudFront can serve the same cached object to different users with different URLs or cookies, as long as the sigs or cookies are valid, of course.
To use CloudFront signed cookies, at least part of your application -- the part that sets the cookie -- needs to be "behind" the same CloudFront distribution that points to the bucket. This is done by declaring your application as an additional Origin for the distribution, and creating a Cache Behavior for a specific path pattern which, when requested, is forwarded by CloudFront to your application, which can then respond with the appropriate Set-Cookie: headers.

I am not affiliated with AWS, so don't mistake the following for a "pitch" -- just anticipating your next question: CloudFront + S3 is priced such that the cost difference compared to using S3 alone is usually negligible -- S3 doesn't charge you for bandwidth when objects are requested through CloudFront, and CloudFront's bandwidth charges are in some cases slightly lower than the charge for using S3 directly. While this seems counterintuitive, it makes sense that AWS would structure pricing in such a way as to encourage distribution of requests across its network rather than focusing them all against a single S3 region.
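For a custom policy, the application sets three cookies: the encoded policy, its signature, and the key pair ID. A minimal sketch of formatting those Set-Cookie header values follows; the signature is assumed to be produced elsewhere by RSA-signing the policy with your CloudFront private key (e.g. via a crypto library), and all values shown are placeholders.

```python
# Sketch: the three Set-Cookie headers CloudFront expects for
# custom-policy signed cookies. Inputs are assumed to be already
# policy-encoded / signed elsewhere; values here are placeholders.
def signed_cookie_headers(encoded_policy, encoded_signature, key_pair_id,
                          domain, path="/"):
    attrs = f"Domain={domain}; Path={path}; Secure; HttpOnly"
    return [
        f"CloudFront-Policy={encoded_policy}; {attrs}",
        f"CloudFront-Signature={encoded_signature}; {attrs}",
        f"CloudFront-Key-Pair-Id={key_pair_id}; {attrs}",
    ]
```

Once the browser holds these cookies, every subsequent playlist and segment request under the matching path carries them automatically, which is what makes this approach a good fit for HLS.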
Note that no mechanism, either the one above or the one below, is completely immune to unauthorized "sharing," since the authentication information is necessarily available to the browser, and thus to the user, depending on the user's expertise... but both approaches seem more than sufficient to keep honest users honest, which is all you can ever hope to do. Since signatures on signed URLs and cookies have expiration times, the duration of the share-ability is limited, and you can identify such patterns through CloudFront log analysis and react accordingly. No matter what approach you take, don't forget the importance of staying on top of your logs.
The reverse proxy is also a good idea, probably easily implemented, and should perform quite acceptably with no additional data transport charges or throughput issues, if the EC2 machines running the proxy are in the same AWS region as the bucket, and the proxy is based on solid, efficient code like that found in Nginx or HAProxy.
You don't need to sign anything in this environment, because you can configure the bucket to allow the reverse proxy to access the private objects because it has a fixed IP address.
In the bucket policy, you do this by granting "anonymous" users the s3:GetObject privilege, but only if their source IPv4 address matches the IP address of one of the proxies. The proxy requests objects anonymously (no signing needed) from S3 on behalf of authorized users. This requires that you not be using an S3 VPC endpoint; instead, give the proxy an Elastic IP address or put it behind a NAT Gateway or NAT instance and have S3 trust the source IP of the NAT device. If you do use an S3 VPC endpoint, it should be possible to allow S3 to trust the request simply because it traversed the S3 VPC Endpoint, though I have not tested this. (S3 VPC Endpoints are optional; if you didn't explicitly configure one, then you don't have one, and probably don't need one.)
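As an illustration, the bucket policy described above might be built like the following sketch (the bucket name and proxy IP are placeholders); in practice you would attach the resulting JSON via the console or an API call such as `put_bucket_policy`.

```python
# Sketch: bucket policy allowing anonymous s3:GetObject only from the
# proxies' source IPs. Bucket name and IPs are placeholders.
import json

def proxy_bucket_policy(bucket, proxy_ips):
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowGetFromProxyOnly",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            # Requests from any other source IP are simply denied access.
            "Condition": {"IpAddress": {"aws:SourceIp": proxy_ips}},
        }],
    }, indent=2)

print(proxy_bucket_policy("my-hls-bucket", ["203.0.113.7/32"]))
```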
Your third option seems the weakest, if I understand it correctly. An authorized but malicious user gets the key and can share it all day long.