ELK -- MySQL slow.log
小鱼儿 2023-10-10 10:19

Approach: Beats -> Logstash -> Elasticsearch

Filebeat configuration (filebeat.yml):

filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/dblogs/mysql3306/slowlogs/mysql_slow.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that
  # match any regular expression from the list.
  exclude_lines: ['^# Time']

  # Include lines. A list of regular expressions to match. It exports the lines that
  # match any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # match any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering.
  fields:
    type: mysql-slow-log
    # level: debug
    # review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java stack traces or C-line continuation.

  # The regexp pattern that has to be matched. This pattern matches all lines starting with "# User@Host:".
  multiline.pattern: "^# User@Host:"

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash.
  multiline.match: after

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]
  hosts: ["192.168.31.6:5044"]

  # Optional SSL. By default it is off.
  # List of root certificates for HTTPS server verification
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
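For reference, a single event in mysql_slow.log looks roughly like the excerpt below (the values are invented for illustration; this sample is not from the original post). Each event begins with a "# User@Host:" line, which is why that is the multiline.pattern: with negate: true and match: after, every line that does not start with "# User@Host:" is appended to the current event, so the header, the SET timestamp statement and the query body arrive in Logstash as one message. The "# Time:" header lines are what exclude_lines is meant to drop.

# Time: 2023-10-10T02:19:00.123456Z
# User@Host: appuser[appuser] @ app01 [192.168.31.7]  Id:   123
# Query_time: 2.345678  Lock_time: 0.000123 Rows_sent: 1  Rows_examined: 500000
SET timestamp=1696904340;
SELECT COUNT(*) FROM orders WHERE status = 'pending';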
Logstash pipeline configuration:

input {
  beats {
    port => 5044
  }
}

filter {
  if [fields][type] == "mysql-slow-log" {
    mutate {
      gsub => ["message", "\\n", ""]
    }
    grok {
      match => [ "message", "^#\s+User@Host:\s+%{USER:user}\[[^\]]+\]\s+@\s+(?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id:\s+%{NUMBER:id}\s*# Query_time: %{NUMBER:query_time}\s+Lock_time: %{NUMBER:lock_time}\s+Rows_sent: %{NUMBER:rows_sent}\s+Rows_examined: %{NUMBER:rows_examined}\s*SET\s+timestamp=%{NUMBER:timestamp_mysql};\s*(?<query>[\s\S]*);" ]
    }
    ruby {
      code => "event.set('timestamp', event.get('@timestamp').time.localtime + 8*60*60)"
    }
    ruby {
      code => "event.set('@timestamp', event.get('timestamp'))"
    }
    mutate {
      remove_field => ["timestamp"]
    }
    mutate {
      remove_field => ["ecs", "input", "flags", "message", "host", "tags", "timestamp_mysql", "@version"]
    }
  }
}

output {
  # stdout { codec => rubydebug }
  if [fields][type] == "mysql-slow-log" {
    elasticsearch {
      hosts => ["192.168.0.1:9200"]
      index => "mysql-slow-log-%{+YYYY-MM}"
    }
  }
}

Reposted from: https://www.cnblogs.com/monkeybron/p/11565583.html
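Not part of the original post, but when adapting this setup it helps to sanity-check each piece before pointing Filebeat at production logs. The commands below assume a package install (filebeat.yml under /etc/filebeat, the pipeline saved as /etc/logstash/conf.d/mysql-slow.conf); adjust the paths and hosts to your environment.

# Check the Filebeat config syntax and its connection to the Logstash output
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml

# Syntax-check the Logstash pipeline without starting it
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/mysql-slow.conf --config.test_and_exit

# After some slow queries have been logged, confirm the monthly index exists
curl -s 'http://192.168.0.1:9200/_cat/indices/mysql-slow-log-*?v'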