Uploading files to s3 using s3cmd in parallel
Question
I've got a whole heap of files on a server, and I want to upload them to S3. The files are stored with a .data extension, but really they're just a bunch of JPEGs, PNGs, ZIPs, or PDFs.
I've already written a short script which finds the MIME type and uploads each file to S3. It works, but it's slow. Is there any way to make the script below run using GNU parallel?
```shell
#!/bin/bash
# -print0 plus read -d '' keeps filenames containing spaces intact.
find . -name "*.data" -print0 | while IFS= read -r -d '' n
do
    # Second field of file(1) output, lower-cased, e.g. "PNG image data" -> png
    extension=$(file "$n" | cut -d ' ' -f2 | awk '{print tolower($0)}')
    mimetype=$(file --mime-type "$n" | cut -d ' ' -f2)
    fullpath=$(readlink -f "$n")
    # Swap the .data suffix for the detected extension...
    changed="${fullpath/.data/.$extension}"
    # ...and keep only the part of the path after "internal_data".
    filePathWithExtensionChanged=${changed#*internal_data}
    s3cmd put -m "$mimetype" --acl-public "$fullpath" "s3://tff-xenforo-data$filePathWithExtensionChanged"
done
```
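The two parameter expansions that build the S3 key are worth unpacking. A minimal sketch with a hypothetical path (the directory names are illustrative; only `internal_data` comes from the script above):

```shell
# Hypothetical example path; "jpeg" stands in for the detected extension.
fullpath="/srv/internal_data/photos/cat.data"
extension="jpeg"

# ${fullpath/.data/.$extension}: replace the first ".data" with ".jpeg".
changed="${fullpath/.data/.$extension}"
echo "$changed"                   # /srv/internal_data/photos/cat.jpeg

# ${changed#*internal_data}: strip the shortest prefix ending in
# "internal_data", leaving the bucket-relative key.
echo "${changed#*internal_data}"  # /photos/cat.jpeg
```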
Also, I'm sure this code could be greatly improved in general :) Feedback and tips would be greatly appreciated.
Answer
You are clearly skilled in writing shell, and extremely close to a solution:
```shell
s3upload_single() {
    n=$1
    extension=$(file "$n" | cut -d ' ' -f2 | awk '{print tolower($0)}')
    mimetype=$(file --mime-type "$n" | cut -d ' ' -f2)
    fullpath=$(readlink -f "$n")
    changed="${fullpath/.data/.$extension}"
    filePathWithExtensionChanged=${changed#*internal_data}
    s3cmd put -m "$mimetype" --acl-public "$fullpath" "s3://tff-xenforo-data$filePathWithExtensionChanged"
}
# export -f is a bash-ism: it makes the function visible to the shells parallel spawns.
export -f s3upload_single
find . -name "*.data" | parallel s3upload_single
```
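If GNU parallel isn't installed, `xargs -P` gives a similar fan-out. A sketch using a stub in place of the real upload function so the plumbing is visible (the stub body and the job count of 4 are illustrative, not from the answer):

```shell
#!/bin/bash
# Stub standing in for s3upload_single; it just reports its argument.
s3upload_single() {
    printf 'uploaded %s\n' "$1"
}
export -f s3upload_single   # export -f requires bash

# Null-delimited names survive spaces and newlines; -P 4 caps concurrent jobs.
printf '%s\0' a.data b.data c.data |
    xargs -0 -P 4 -I {} bash -c 's3upload_single "$1"' _ {}
```

In the real pipeline, `printf` would be replaced by `find . -name "*.data" -print0`. Note the uploads finish in no particular order, so any per-file output may interleave.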