Using an id of a table for sql_last_value in logstash?
Question
I have the following MySQL statement in the jdbc plugin of my logstash input:
statement => "SELECT * from TEST where id > :sql_last_value"
My table doesn't have any date or datetime field. So I'm trying to keep the index up to date by checking minute by minute, via a scheduler, whether any new rows have been added to the table.
I only want to pick up new records, not value changes in existing records. To do this I'm using this kind of logstash input:
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://myhostmachine:3306/mydb"
    jdbc_user => "root"
    jdbc_password => "root"
    jdbc_validate_connection => true
    jdbc_driver_library => "/mypath/mysql-connector-java-5.1.39-bin.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_paging_enabled => "true"
    jdbc_page_size => "50000"
    schedule => "* * * * *"
    statement => "SELECT * from mytable where id > :sql_last_value"
    use_column_value => true
    tracking_column => id
    last_run_metadata_path => "/path/.logstash_jdbc_last_run"
    clean_run => true
  }
}
Whenever I create an index and run this logstash file to upload the docs, nothing gets uploaded at all; the document count shows as zero. I made sure to delete .logstash_jdbc_last_run before running the logstash conf file.
Part of the logstash console output:
[2016-11-02T16:33:00,294][INFO ][logstash.inputs.jdbc ] (0.002000s) SELECT count(*) AS `count` FROM (SELECT * from TEST where id > '2016-11-02 11:02:00') AS `t1` LIMIT 1
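Note the value being compared: id > '2016-11-02 11:02:00'. Because the tracking column type was never declared, logstash persisted sql_last_value as a timestamp rather than a number. A stale metadata file from a previous run might look roughly like this (illustrative contents; the exact value depends on when the last run happened):

```yaml
# /path/.logstash_jdbc_last_run -- a one-line YAML document holding sql_last_value
--- 2016-11-02 11:02:00.000000000 +00:00
```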
This keeps repeating, checking minute by minute, which is correct, but it never fetches any records. How does this work? Am I missing something? Any help would be appreciated.
Answer
You need to modify your logstash configuration like this:
jdbc {
  jdbc_connection_string => "jdbc:mysql://myhostmachine:3306/mydb"
  jdbc_user => "root"
  jdbc_password => "root"
  jdbc_validate_connection => true
  jdbc_driver_library => "/mypath/mysql-connector-java-5.1.39-bin.jar"
  jdbc_driver_class => "com.mysql.jdbc.Driver"
  jdbc_paging_enabled => "true"
  jdbc_page_size => "50000"
  schedule => "* * * * *"
  statement => "SELECT * from TEST where id > :sql_last_value"
  use_column_value => true
  tracking_column => "id"
  tracking_column_type => "numeric"
  clean_run => true
  last_run_metadata_path => "/mypath/.logstash_jdbc_last_run"
}
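With use_column_value => true, tracking_column => "id", and tracking_column_type => "numeric", logstash tracks the highest id it has seen and substitutes that number into the next scheduled query. The resulting behavior looks roughly like this (illustrative values, not actual output):

```sql
-- metadata file /mypath/.logstash_jdbc_last_run after a run would hold
-- a number, e.g.:  --- 42
-- so the query generated on the next scheduled run becomes:
SELECT * from TEST where id > 42
```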
The last five settings are important in your case. Also make sure to delete the .logstash_jdbc_last_run file yourself, even though clean_run => true is supposed to reset it: a leftover file from your earlier runs may still contain a datetime value instead of an id.
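A minimal way to reset the tracked state before re-running, assuming the metadata path from the config above (stop logstash first so the file isn't rewritten while you delete it):

```shell
# remove the stale sql_last_value metadata; -f makes this a no-op if the file is already gone
rm -f /mypath/.logstash_jdbc_last_run
```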
This concludes the article "Using an id of a table for sql_last_value in logstash?". We hope the recommended answer helps, and thank you for supporting IT屋!