Deploying Filebeat and Logstash on Kubernetes and Shipping Logs to a Java Program

Requirement:

        Collect the container logs of the Kubernetes cluster and store them centrally.

Solutions:

        1. DaemonSet

                Run Filebeat as a daemon on every node. Filebeat ships the collected logs through Logstash to the Java program, which processes them and stores them centrally.

        2. Sidecar

                Add an extra Filebeat container to each Pod. Filebeat reads the relevant log files through a shared volume and sends them through Logstash to the Java program.

The two approaches can coexist without conflict: the DaemonSet approach collects the containers' standard output, and the Sidecar approach can be added on top when special requirements call for customized log collection.

The rest of this article covers collecting container logs with the DaemonSet approach.

First, the Kubernetes deployment YAML file:

# Create the service account
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: itsm-node-manager
  name: itsm-node-manager
  namespace: kube-system
---
# Create the cluster role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: itsm-node-manager
  name: itsm-node-manager-role
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  - events
  - pods
  verbs:
  - get
  - list
  - watch
---
# Bind the account to the role
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: itsm-node-manager-role-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: itsm-node-manager-role
subjects:
- kind: ServiceAccount
  name: itsm-node-manager
  namespace: kube-system
---
# Create the logstash configuration file
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    k8s-app: itsm-node-manager
  name: logstash-config
  namespace: kube-system
data:
  logstash.yml: 'config.reload.automatic: true'
  pipeline.conf: |-
    input {
      beats {
        port => 5044
        codec => json
      }
    }
    filter {
    }
    output {
      http {
        http_method => "post"
        format => "json"
        # Configure the URL of the receiving program here; the Java code is shown further down.
        # If the target program runs inside the cluster, a service DNS name can be used, just as for filebeat.
        url => "http://192.168.0.195:8080/containerLog/insert"
        content_type => "application/json"
      }
    }
---
# Create logstash
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: kube-system
  labels:
    server: logstash-7.10.1
spec:
  selector:
    matchLabels:
      k8s-app: logstash
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: logstash
      name: logstash
    spec:
      containers:
      - image: elastic/logstash:7.10.1
        imagePullPolicy: IfNotPresent
        name: logstash
        securityContext:
          procMount: Default
          runAsUser: 0
        volumeMounts:
        - mountPath: /usr/share/logstash/config/logstash.yml
          name: logstash-config
          readOnly: true
          subPath: logstash.yml
        - mountPath: /usr/share/logstash/pipeline/logstash.conf
          name: logstash-config
          readOnly: true
          subPath: pipeline.conf
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 120
      imagePullSecrets:
      - name: dockerpull
      volumes:
      - configMap:
          defaultMode: 420
          name: logstash-config
        name: logstash-config
---
# Create the logstash service
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: logstash
  name: logstash
  namespace: kube-system
spec:
  type: ClusterIP
  selector:
    k8s-app: logstash
  ports:
  - port: 5044
    protocol: TCP
    targetPort: 5044
---
# Create the filebeat configuration file
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    k8s-app: itsm-node-manager
  name: filebeat-config
  namespace: kube-system
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
      - type: kubernetes
        host: ${NODE_NAME}
        hints.enabled: true
        hints.default_config:
          type: container
          paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log
    processors:
    - add_cloud_metadata:
    - add_host_metadata:
    output.logstash:
      hosts: ["logstash.kube-system.svc.cluster.local:5044"]   # kubectl -n kube-system get svc
      enabled: true
---
# Create the filebeat DaemonSet
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    server: filebeat-7.10.1
spec:
  selector:
    matchLabels:
      name: filebeat
      kubernetes.io/cluster-service: "true"
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: filebeat
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - args:
        - -c
        - /etc/filebeat.yml
        - -e
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: spec.nodeName
        image: elastic/filebeat:7.10.1
        imagePullPolicy: IfNotPresent
        name: filebeat
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        securityContext:
          procMount: Default
          runAsUser: 0
        volumeMounts:
        - mountPath: /etc/filebeat.yml
          name: config
          readOnly: true
          subPath: filebeat.yml
        - mountPath: /usr/share/filebeat/data
          name: data
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /var/log
          name: varlog
          readOnly: true
      restartPolicy: Always
      serviceAccount: itsm-node-manager
      serviceAccountName: itsm-node-manager
      volumes:
      - configMap:
          defaultMode: 384
          name: filebeat-config
        name: config
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /opt/filebeat/data
          type: DirectoryOrCreate
        name: data

All of these resources are kept in a single YAML file, separated by "---", so they can be applied together with a single kubectl apply -f.

Here is the Java code snippet:

@Api(tags = "Service log controller")
@Slf4j
@RestController
@RequestMapping("/containerLog")
public class ContainerLogController {

    @Autowired
    private ContainerLogService containerLogService;

    @ApiOperation(value = "Container log ingestion endpoint", produces = "application/json", response = String.class)
    @PostMapping("insert")
    public Result insert(HttpServletRequest httpServletRequest) {
        // Read the raw JSON body posted by the logstash http output.
        StringBuilder sb = new StringBuilder();
        try (BufferedReader br = httpServletRequest.getReader()) {
            String str;
            while ((str = br.readLine()) != null) {
                sb.append(str);
            }
            containerLogService.insert(sb.toString());
        } catch (IOException e) {
            log.error("Failed to read container log payload", e);
        }
        return Result.newSuccess();
    }
}

At this point the Java program receives the log entries sent by Logstash; the container logs all arrive in JSON format.
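
Since each POST body is a single JSON event, the code receiving it can deserialize the payload before storing anything. The following is a minimal, standalone sketch rather than part of the original project: it assumes Jackson is on the classpath, and the field names (message plus the kubernetes.* metadata that Filebeat autodiscover adds) and the ContainerLogParser class name are assumptions made for illustration.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Standalone sketch: parse one Logstash-posted event and pull out the fields
// that are typically present (field names assumed from Filebeat's Kubernetes metadata).
public class ContainerLogParser {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    public static void main(String[] args) throws Exception {
        // Example payload shape only; real events carry many more fields.
        String rawJson = "{\"message\":\"hello from the app\","
                + "\"kubernetes\":{\"namespace\":\"default\","
                + "\"pod\":{\"name\":\"demo-pod-abc123\"},"
                + "\"container\":{\"name\":\"demo\"}}}";

        JsonNode event = MAPPER.readTree(rawJson);

        String message   = event.path("message").asText();
        String namespace = event.path("kubernetes").path("namespace").asText();
        String pod       = event.path("kubernetes").path("pod").path("name").asText();
        String container = event.path("kubernetes").path("container").path("name").asText();

        // A real ContainerLogService.insert() could turn these fields into a
        // record and write it to the central store.
        System.out.printf("[%s/%s/%s] %s%n", namespace, pod, container, message);
    }
}

In a real ContainerLogService implementation, the extracted fields would be mapped onto whatever schema the central store uses.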

Three places can be extended to fit your own requirements:

1. The Filebeat collection rules

2. The Logstash filter rules

3. The processing logic in the Java program (a small sketch follows below)
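
As an illustration of point 3 only (none of this is in the original project), the Java side could apply a keep-or-drop policy before persisting events. The hypothetical ContainerLogFilter below keeps error lines from application namespaces and drops platform noise; the policy itself is just an example.

// Illustrative only: a simple server-side filter that could sit inside
// ContainerLogService.insert() before an event is persisted.
public final class ContainerLogFilter {

    private ContainerLogFilter() {
    }

    // Example policy: keep application errors, drop logs from platform components.
    public static boolean shouldStore(String namespace, String message) {
        if ("kube-system".equals(namespace)) {
            return false; // skip platform namespaces
        }
        return message != null && message.contains("ERROR");
    }

    public static void main(String[] args) {
        System.out.println(shouldStore("default", "2024-02-14 ERROR something broke")); // true
        System.out.println(shouldStore("kube-system", "ERROR etcd leader change"));     // false
    }
}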
