Why Doesn't My Cron Job Work Properly?


Problem description



I have a cron job on an Ubuntu Hardy VPS that only half works and I can't work out why. The job is a Ruby script that uses mysqldump to back up a MySQL database used by a Rails application, which is then gzipped and uploaded to a remote server using SFTP.

The gzip file is created and copied successfully but it's always zero bytes. Yet if I run the cron command directly from the command line it works perfectly.

This is the cron job:

PATH=/usr/bin
10 3 * * * ruby /home/deploy/bin/datadump.rb

This is datadump.rb:

#!/usr/bin/ruby
require 'yaml'
require 'logger'
require 'rubygems'
require 'net/ssh'
require 'net/sftp'

APP        = '/home/deploy/apps/myapp/current'
LOGFILE    = '/home/deploy/log/data.log'
TIMESTAMP  = '%Y%m%d-%H%M'
TABLES     = 'table1 table2'

log        = Logger.new(LOGFILE, 5, 10 * 1024)
dump       = "myapp-#{Time.now.strftime(TIMESTAMP)}.sql.gz"
ftpconfig  = YAML::load(open('/home/deploy/apps/myapp/shared/config/sftp.yml'))
config     = YAML::load(open(APP + '/config/database.yml'))['production']
cmd        = "mysqldump -u #{config['username']} -p#{config['password']} -h #{config['host']} --add-drop-table --add-locks --extended-insert --lock-tables #{config['database']} #{TABLES} | gzip -cf9 > #{dump}"

log.info 'Getting ready to create a backup'
`#{cmd}`

# Strongspace
log.info 'Backup created, starting the transfer to Strongspace'
Net::SSH.start(ftpconfig['strongspace']['host'], ftpconfig['strongspace']['username'], ftpconfig['strongspace']['password']) do |ssh|
  ssh.sftp.connect do |sftp|
    sftp.open_handle("#{ftpconfig['strongspace']['dir']}/#{dump}", 'w') do |handle|
      sftp.write(handle, open("#{dump}").read)
    end
  end
end
log.info 'Finished transferring backup to Strongspace'

log.info 'Removing local file'
cmd       = "rm -f #{dump}" 
log.debug "Executing: #{cmd}"
`#{cmd}`
log.info 'Local file removed'

I've checked and double-checked all the paths and they're correct. Both sftp.yml (SFTP credentials) and database.yml (MySQL credentials) are owned by the executing user (deploy) with read-only permissions for that user (chmod 400). I'm using the 1.1.x versions of net-ssh and net-sftp. I know they're not the latest, but they're what I'm familiar with at the moment.

What could be causing the cron job to fail?

Solution

Are you sure the temporary file is being created correctly when running as a cron job? The working directory for your script will either be specified in the HOME environment variable, or the /etc/passwd entry for the user that installed the cron job. If deploy does not have write permissions for the directory in which it is executing, then you could specify an absolute path for the dump file to fix the problem.
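A minimal sketch of that fix, assuming the dump is written to a directory the deploy user can write to (the `DUMP_DIR` path below is illustrative, not from the original script):

```ruby
# Use an absolute path for the dump file so the script no longer depends
# on whatever working directory cron happens to start it in.
# DUMP_DIR is an assumed location -- any directory writable by `deploy` works.
DUMP_DIR  = '/home/deploy/backups'
TIMESTAMP = '%Y%m%d-%H%M'

dump = File.join(DUMP_DIR, "myapp-#{Time.now.strftime(TIMESTAMP)}.sql.gz")
```

With `dump` absolute, the mysqldump redirection, the SFTP upload's `open` call, and the final `rm -f` all refer to the same known location, whatever HOME or working directory cron gives the job.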
