docker - Notes on handling an oversized container <container_id>-json.log
Table of Contents
- Symptom
- 1. Finding the Source
- 2. Analysis
- Summary
Symptom
A colleague reported that the xxl-job service, running in Docker inside a virtual machine, had stopped working and asked me to fix it. I logged into the VM and found the root filesystem was full. Drilling down directory by directory, I traced it to an enormous <container_id>-json.log. After truncating the file and restarting the service, everything worked again:
/var/lib/docker/containers/<container_id>/<container_id>-json.log
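The truncation step above can be sketched as follows; the path here is a throwaway stand-in for the real container log. Since Docker keeps the log file open, emptying it in place is the safe move, whereas deleting it would leave the disk space unreclaimed until the daemon lets go of the handle:

```shell
# Stand-in file for /var/lib/docker/containers/<id>/<id>-json.log
LOG=/tmp/demo-json.log
printf 'line1\nline2\n' > "$LOG"

# Empty the file in place; the open file handle Docker holds stays valid,
# unlike `rm`, which would only hide the file while the space stays used.
truncate -s 0 "$LOG"

wc -c < "$LOG"   # prints 0
```

The same effect can be had with `: > "$LOG"` on systems without the `truncate` utility.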
1. Finding the Source
[root@xxl-job docker]# du -sh *
88K buildkit
16G containers
4.0K engine-id
444K image
48K network
470M overlay2
0 plugins
0 runtimes
0 swarm
0 tmp
0 tmp-old
24K volumes
[root@xxl-job docker]# cd containers/
[root@xxl-job containers]# ls
98d41152c3fccd36f21445751249927ade16d8c56b2adedaa2733b3c210a7c68
[root@xxl-job containers]# cd 98d41152c3fccd36f21445751249927ade16d8c56b2adedaa2733b3c210a7c68/
[root@xxl-job 98d41152c3fccd36f21445751249927ade16d8c56b2adedaa2733b3c210a7c68]# du -sh *
16G 98d41152c3fccd36f21445751249927ade16d8c56b2adedaa2733b3c210a7c68-json.log
0 checkpoints
4.0K config.v2.json
4.0K hostconfig.json
4.0K hostname
4.0K hosts
0 mounts
4.0K resolv.conf
4.0K resolv.conf.hash
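The layer-by-layer drill-down above can also be compressed into a single command; a sketch, assuming Docker's default data root (`/var/lib/docker`):

```shell
# Rank every container's json.log by size, largest last.
# Adjust the path if the Docker data-root has been changed.
du -h /var/lib/docker/containers/*/*-json.log 2>/dev/null | sort -h
```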
2. Analysis
Running tail -f on the file, it looks like the service writes its logs to stdout rather than to its own log directory, so they are never picked up by the scheduled log-cleanup mechanism and the json.log just keeps growing.
[root@xxl-job 98d41152c3fccd36f21445751249927ade16d8c56b2adedaa2733b3c210a7c68]# tail -f 98d41152c3fccd36f21445751249927ade16d8c56b2adedaa2733b3c210a7c68-json.log
{"log":"16:45:10.202 logback [main] INFO o.s.b.a.w.s.WelcomePageHandlerMapping - Adding welcome page template: index\n","stream":"stdout","time":"2025-05-28T08:45:10.202987974Z"}
{"log":"16:45:10.249 logback [xxl-job, admin JobRegistryMonitorHelper-registryMonitorThread] INFO com.zaxxer.hikari.HikariDataSource - HikariCP - Start completed.\n","stream":"stdout","time":"2025-05-28T08:45:10.249966349Z"}
{"log":"16:45:10.799 logback [main] INFO o.s.b.a.e.web.EndpointLinksResolver - Exposing 1 endpoint(s) beneath base path '/actuator'\n","stream":"stdout","time":"2025-05-28T08:45:10.800971863Z"}
{"log":"16:45:10.826 logback [main] INFO o.a.coyote.http11.Http11NioProtocol - Starting ProtocolHandler [\"http-nio-9100\"]\n","stream":"stdout","time":"2025-05-28T08:45:10.8269339Z"}
{"log":"16:45:10.854 logback [main] INFO o.a.c.c.C.[.[.[/xxl-job-admin] - Initializing Spring DispatcherServlet 'dispatcherServlet'\n","stream":"stdout","time":"2025-05-28T08:45:10.854915167Z"}
{"log":"16:45:10.854 logback [main] INFO o.s.web.servlet.DispatcherServlet - Initializing Servlet 'dispatcherServlet'\n","stream":"stdout","time":"2025-05-28T08:45:10.854980125Z"}
{"log":"16:45:10.855 logback [main] INFO o.s.web.servlet.DispatcherServlet - Completed initialization in 0 ms\n","stream":"stdout","time":"2025-05-28T08:45:10.856876273Z"}
{"log":"16:45:10.857 logback [main] INFO o.s.b.w.e.tomcat.TomcatWebServer - Tomcat started on port(s): 9100 (http) with context path '/xxl-job-admin'\n","stream":"stdout","time":"2025-05-28T08:45:10.858898162Z"}
{"log":"16:45:10.887 logback [main] INFO c.x.job.admin.XxlJobAdminApplication - Started XxlJobAdminApplication in 5.392 seconds (JVM running for 6.573)\n","stream":"stdout","time":"2025-05-28T08:45:10.89199265Z"}
{"log":"16:45:14.001 logback [xxl-job, admin JobScheduleHelper#scheduleThread] INFO c.x.j.a.c.thread.JobScheduleHelper - \u003e\u003e\u003e\u003e\u003e\u003e\u003e\u003e\u003e init xxl-job admin scheduler success.\n","stream":"stdout","time":"2025-05-28T08:45:14.003073573Z"}
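Beyond one-off truncation, Docker's json-file logging driver can rotate these logs itself. A minimal sketch for `/etc/docker/daemon.json` (the `100m`/`3` values are illustrative; after restarting dockerd, existing containers must be recreated for the options to take effect):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "3"
  }
}
```

The same cap can also be set per container, e.g. `docker run --log-opt max-size=100m --log-opt max-file=3 ...`.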
Summary
This was the first time I ran into this problem, so I am writing it down for future reference.