No logs appear on CloudWatch log group for Elastic Beanstalk environment


Problem Description


I have an Elastic Beanstalk environment running a Docker container with a Node.js API. On the AWS Console, if I select my environment and go to Configuration/Software, I have the following (an equivalent .ebextensions sketch follows the list):

  • Log groups: /aws/elasticbeanstalk/my-environment
  • Log streaming: Enabled
  • Retention: 3 days
  • Lifecycle: Keep after termination.
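For reference, these console settings live in the aws:elasticbeanstalk:cloudwatch:logs option namespace, so they can also be kept in source control. A minimal sketch of an equivalent .ebextensions file (the file name log-streaming.config is just an example):

# .ebextensions/log-streaming.config (hypothetical file name)
option_settings:
  aws:elasticbeanstalk:cloudwatch:logs:
    StreamLogs: true          # "Log streaming: Enabled"
    DeleteOnTerminate: false  # "Lifecycle: Keep after termination"
    RetentionInDays: 3        # "Retention: 3 days"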

However, if I click on that log group in the CloudWatch console, I see a Last Event Time from some weeks ago (which I believe corresponds to when the environment was created) and no content in the logs.

Since this is a dockerized application, the logs for the server itself should be at /aws/elasticbeanstalk/my-environment/var/log/eb-docker/containers/eb-current-app/stdouterr.log. If I instead get the logs directly from the instances by going once again to my EB environment, clicking "Logs" and then "Request last 100 Lines", the logging is happening correctly. I just can't see a thing when using CloudWatch.
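The same tail logs can also be requested from the command line instead of the console; a minimal sketch, assuming the AWS CLI is configured and the environment is called my-environment:

# Ask Elastic Beanstalk to collect the last lines of the instance logs...
aws elasticbeanstalk request-environment-info --environment-name my-environment --info-type tail

# ...and then retrieve the URLs where the collected logs can be downloaded.
aws elasticbeanstalk retrieve-environment-info --environment-name my-environment --info-type tail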

Any help is greatly appreciated.

Solution

I was able to get around this problem. CloudWatch (via the awslogs agent) makes a hash based on the first line of your log file and the log stream key, and the problem was that the first line of my stdouterr.log file was actually an empty line!
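A quick way to confirm this once connected to the instance is to look at the first line of the file; a minimal sketch (cat -A makes non-printing characters visible, so an empty first line shows up as a lone $):

head -n 1 /var/log/eb-docker/containers/eb-current-app/stdouterr.log | cat -A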

After a couple of days of playing around, and with help from the good AWS support team, I first connected via SSH to the EC2 instance associated with the EB environment. You need to add the following line to the /etc/awslogs/config/beanstalklogs.conf file, right after the "file=/var/log/eb-docker/containers/eb-current-app/stdouterr.log" line:

file_fingerprint_lines=1-20

With this, you tell the awslogs agent that it should calculate the hash using lines 1 through 20 of the log file. You can change 20 to a larger or smaller number depending on your logging content; however, I don't know if there is an upper limit for the value.
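For clarity, after the edit the relevant stanza of /etc/awslogs/config/beanstalklogs.conf should look roughly like this (group and stream names will match your own environment; this mirrors the .ebextensions content shown further down):

[/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
log_group_name=/aws/elasticbeanstalk/my-environment/var/log/eb-docker/containers/eb-current-app/stdouterr.log
log_stream_name={instance_id}
file=/var/log/eb-docker/containers/eb-current-app/*stdouterr.log
file_fingerprint_lines=1-20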

After doing so, you need to restart the awslogs service on the instance.

For this you would execute:

  • sudo service awslogs stop
  • sudo service awslogs start

or, more simply:

sudo service awslogs restart
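After restarting, it is worth checking that the agent is running and actually shipping events; a minimal sketch, assuming Amazon Linux, where the agent writes its own log to /var/log/awslogs.log:

# Check that the awslogs agent is running.
sudo service awslogs status

# Watch the agent's own log for errors about stdouterr.log.
sudo tail -f /var/log/awslogs.log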

After these steps I started using my environment, and the logging was now being properly streamed to the CloudWatch console! However, this would not work if a new deployment is made, if the EC2 instance gets replaced, or if the Auto Scaling group spawns another instance.

To fix this, it is possible to add the log configuration via the .ebextensions directory at the root of your application before deploying.
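As a sketch, the application root would then look something like this (logs.config is simply the file name I chose; any *.config file under .ebextensions is picked up):

my-app/
  .ebextensions/
    logs.config        <- the awslogs configuration shown below
  Dockerfile           <- or Dockerrun.aws.json, depending on how the app is packaged
  ...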

I added a file called logs.config to the newly created .ebextensions directory and placed the following content in it:

files:
  "/etc/awslogs/config/beanstalklogs.conf":
    mode: "000644"
    user: root
    group: root
    content: |
      [/var/log/eb-docker/containers/eb-current-app/stdouterr.log]
      log_group_name=/aws/elasticbeanstalk/EB-ENV-NAME/var/log/eb-docker/containers/eb-current-app/stdouterr.log
      log_stream_name={instance_id}
      file=/var/log/eb-docker/containers/eb-current-app/*stdouterr.log
      file_fingerprint_lines=1-20

commands:
  01_remove_eb_stream_config:
    command: 'rm /etc/awslogs/config/beanstalklogs.conf.bak'
  02_restart_log_agent:
    command: 'service awslogs restart'

Of course, change EB-ENV-NAME to the name of your environment on EB.
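From there, a normal redeploy applies the configuration to every new or replaced instance; a minimal sketch, assuming the EB CLI is installed and the project has been initialized with eb init:

# Deploy the application bundle, including the .ebextensions directory,
# so new instances come up with the awslogs configuration in place.
eb deploy my-environment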

Hope it can help someone else!
