
Kubernetes: Replication Mechanisms

Date: 2020-03-21 06:40:27


Preface

The previous article, Kubernetes Pod Operations, introduced the Pod, the core Kubernetes building block. This article continues with Kubernetes' replication mechanisms, which keep your deployments running automatically and healthy, without any manual intervention.

Probes

Kubernetes can check whether a container is still running by means of a liveness probe. A liveness probe can be specified individually for each container in a pod; Kubernetes executes the probe periodically and restarts the container if the probe fails.
Kubernetes has three mechanisms for probing a container:

  • An HTTP GET probe performs an HTTP GET request against the container's IP address;
  • A TCP socket probe tries to open a TCP connection to the specified port of the container;
  • An Exec probe executes an arbitrary command inside the container and checks the command's exit status code.
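The latter two probe types are declared in a container spec much like the HTTP probe used later in this article; as a sketch (the port number and command here are illustrative, not from the article):

```yaml
# TCP socket probe: the kubelet tries to open a connection to the given port
livenessProbe:
  tcpSocket:
    port: 8080

# Exec probe: the kubelet runs the command and treats exit code 0 as healthy
livenessProbe:
  exec:
    command:
    - cat
    - /tmp/healthy
```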

1. Preparing the Image

1.1 Preparing app.js

To test the effect of the probe, a new image is needed. Make a small change to the previous service so that after the fifth request it returns HTTP status code 500 (Internal Server Error) to every request. Modify app.js as follows:

const http = require('http');
const os = require('os');

console.log("kubia server is starting...");

var requestCount = 0;
var handler = function(request, response) {
    console.log("Received request from " + request.connection.remoteAddress);
    requestCount++;
    if (requestCount > 5) {
      response.writeHead(500);
      response.end("I'm not well. Please restart me!");
      return;
    }
    response.writeHead(200);
    response.end("You've hit " + os.hostname() + "\n");
};

var www = http.createServer(handler);
www.listen(8080);

requestCount records the number of requests received; once it exceeds 5, the server returns a 500 status code directly, so the probe can catch the status code and have the container restarted.
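The counting behavior can be sketched outside of HTTP; a minimal Node.js sketch (not part of the original service) showing that the first five requests succeed and every later one fails:

```javascript
// Minimal sketch of the handler's counting logic:
// the first five requests return 200, every later one returns 500.
let requestCount = 0;

function statusForNextRequest() {
  requestCount++;
  return requestCount > 5 ? 500 : 200;
}

// Simulate seven consecutive requests
const codes = Array.from({ length: 7 }, statusForNextRequest);
console.log(codes.join(','));  // 200,200,200,200,200,500,500
```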

1.2 Building the Image

[[email protected] unhealthy]# docker build -t kubia-unhealthy .
Sending build context to Docker daemon  3.584kB
Step 1/3 : FROM node:7
 ---> d9aed20b68a4
Step 2/3 : ADD app.js /app.js
 ---> e9e1b44f8f54
Step 3/3 : ENTRYPOINT ["node","app.js"]
 ---> Running in f58d6ff6bea3
Removing intermediate container f58d6ff6bea3
 ---> d36c6390ec66
Successfully built d36c6390ec66
Successfully tagged kubia-unhealthy:latest

Build the kubia-unhealthy image with docker build.

1.3 Pushing the Image

[[email protected] unhealthy]# docker tag kubia-unhealthy ksfzhaohui/kubia-unhealthy
[[email protected] unhealthy]# docker login
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[[email protected] unhealthy]# docker push ksfzhaohui/kubia-unhealthy
The push refers to repository [docker.io/ksfzhaohui/kubia-unhealthy]
40d9e222a827: Pushed
......
latest: digest: sha256:5fb3ebeda7f98818bc07b2b1e3245d6a21014a41153108c4dcf52f2947a4dfd4 size: 2213

First tag the image, then log in to Docker Hub, and finally push the image to Docker Hub.

2. Probes in Practice

2.1 HTTP Probe YAML File

Create a YAML descriptor that specifies an HTTP GET liveness probe, telling Kubernetes to periodically perform an HTTP GET request on the given path and port to determine whether the container is healthy:

apiVersion: v1
kind: Pod
metadata:
  name: kubia-liveness
spec:
  containers:
  - image: ksfzhaohui/kubia-unhealthy
    name: kubia
    livenessProbe:
      httpGet:
        path: /
        port: 8080
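Beyond path and port, the probe's timing behavior can also be tuned. A sketch with illustrative values (these fields belong to the standard pod API, but are not part of the original manifest):

```yaml
livenessProbe:
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 15   # wait 15s after container start before the first probe
  timeoutSeconds: 1         # each probe must respond within 1s
  periodSeconds: 10         # probe every 10s
  failureThreshold: 3       # restart after 3 consecutive failures
```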

2.2 Creating the Pod

[d:\k8s]$ kubectl create -f kubia-liveness-probe.yaml
pod/kubia-liveness created
[d:\k8s]$ kubectl get pods
NAME             READY   STATUS              RESTARTS   AGE
kubia-liveness   0/1     ContainerCreating   0          3s

This creates a pod named kubia-liveness, whose RESTARTS count is 0. Check again after a while:

[d:\k8s]$ kubectl get pods
NAME             READY   STATUS    RESTARTS   AGE
kubia-liveness   1/1     Running   2          4m

Now RESTARTS=2, meaning the container has been restarted twice: each probe sends an HTTP request, the service starts returning status code 500 after the fifth request it receives, and once the probe detects this, Kubernetes restarts the container.

2.3 Describing the Pod's Probe

[d:\k8s]$ kubectl describe po kubia-liveness
Name:         kubia-liveness
......
    State:          Running
      Started:      Mon, 23 Dec 2019 15:42:45 +0800
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Mon, 23 Dec 2019 15:41:15 +0800
      Finished:     Mon, 23 Dec 2019 15:42:42 +0800
    Ready:          True
    Restart Count:  2
    Liveness:       http-get http://:8080/ delay=0s timeout=1s period=10s #success=1 #failure=3
......
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
......
  Warning  Unhealthy  85s (x9 over 5m5s)     kubelet, minikube  Liveness probe failed: HTTP probe failed with statuscode: 500
  Normal   Killing    85s (x3 over 4m45s)    kubelet, minikube  Container kubia failed liveness probe, will be restarted
......

State: the current state is Running;
Last State: the last state was Terminated because of an error. The exit code 137 has a special meaning: it indicates the process was terminated by an external signal. 137 is the sum 128 + x, where x is the number of the signal that killed the process. Here x = 9, the signal number of SIGKILL, meaning the process was forcibly killed;
Restart Count: the number of times the container was restarted;
Liveness: additional information about the liveness probe: delay, timeout, and period. Roughly: probing starts with a delay of 0 seconds, each probe times out after 1 second, a check runs every 10 seconds, and the container is restarted after three consecutive failed probes. These parameters can be customized when defining the probe, e.g. initialDelaySeconds sets the initial delay;
Events: the events that occurred, such as a failed probe, the process being killed, and the container being restarted.

3. Probe Summary

First, pods running in production should always have a probe configured. Second, a probe must check the application's internals and must not be influenced by external factors such as external services or databases. Finally, probes should be lightweight.
For pods created this way, Kubernetes restarts the service when a probe finds it unavailable. This task is performed by the Kubelet on the node hosting the pod; the Kubernetes Control Plane components running on the master are not involved. But if the node itself crashes, the Kubelet, which itself runs on that node, can do nothing once the node terminates abnormally, and a ReplicationController or similar mechanism is needed to manage the pods.

ReplicationController

A ReplicationController is a Kubernetes resource that ensures its pods are always kept running. If a pod disappears for any reason (including a node crash), the ReplicationController creates a replacement pod.
A ReplicationController constantly monitors the list of running pods and makes sure their number always matches its label selector. A ReplicationController has three essential parts:

  • A label selector, which determines which pods are in the ReplicationController's scope;
  • A replica count, which specifies the desired number of pods that should be running;
  • A pod template, which is used when creating new pod replicas.

All three can be modified at any time, but only changes to the replica count affect existing pods; for example, if the replica count is decreased, some existing pods may be deleted. A ReplicationController provides these benefits:

  • It makes sure a pod (or multiple pod replicas) keeps running, starting a new pod when one fails;
  • When a cluster node fails, it creates replacements for all the pods that were running on the failed node;
  • It enables easy horizontal scaling of pods.

1. Creating a ReplicationController

apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: ksfzhaohui/kubia
        ports:
        - containerPort: 8080

The kind is ReplicationController and the name is kubia; replicas sets the replica count to 3, selector is the label selector, and template is the template used to create new pods. With all three elements specified, run the create command:

[d:\k8s]$ kubectl create -f kubia-rc.yaml
replicationcontroller/kubia created
[d:\k8s]$ kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
kubia-dssvz   1/1     Running   0          73s
kubia-krlcr   1/1     Running   0          73s
kubia-tg29c   1/1     Running   0          73s

A moment after creation, listing the pods shows that three containers were created. Delete one of them and observe again:

[d:\k8s]$ kubectl delete pod kubia-dssvz
pod "kubia-dssvz" deleted
[d:\k8s]$ kubectl get pods
NAME          READY   STATUS        RESTARTS   AGE
kubia-dssvz   1/1     Terminating   0          2m2s
kubia-krlcr   1/1     Running       0          2m2s
kubia-mgz64   1/1     Running       0          11s
kubia-tg29c   1/1     Running       0          2m2s

The deleted pod is terminating and a new pod has already started. Get information about the ReplicationController:

[d:\k8s]$ kubectl get rc
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         3       4m20s

Three replicas are desired, three are current, and three are ready. For more detail, use the describe command:

[d:\k8s]$ kubectl describe rc kubia
Name:         kubia
Namespace:    default
Selector:     app=kubia
Labels:       app=kubia
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
......
Events:
  Type    Reason            Age    From                    Message
  ----    ------            ----   ----                    -------
  Normal  SuccessfulCreate  5m20s  replication-controller  Created pod: kubia-dssvz
  Normal  SuccessfulCreate  5m20s  replication-controller  Created pod: kubia-tg29c
  Normal  SuccessfulCreate  5m20s  replication-controller  Created pod: kubia-krlcr
  Normal  SuccessfulCreate  3m29s  replication-controller  Created pod: kubia-mgz64
  Normal  SuccessfulCreate  75s    replication-controller  Created pod: kubia-vwnmf

Replicas shows the desired and current replica counts, Pods Status shows the number of replicas in each state, and Events lists what happened. Two pods were deleted during this test, so a total of five pods were created.

Note: since Minikube is used here, there is only one node acting as both master and worker node, so a node failure cannot be simulated.

2. Modifying Labels

By changing a pod's labels, it can be added to or removed from the scope of a ReplicationController:

[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
kubia-mgz64   1/1     Running   0          27m   app=kubia
kubia-tg29c   1/1     Running   0          28m   app=kubia
kubia-vwnmf   1/1     Running   0          24m   app=kubia
[d:\k8s]$ kubectl label pod kubia-mgz64 app=foo --overwrite
pod/kubia-mgz64 labeled
[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS              RESTARTS   AGE   LABELS
kubia-4dzw8   0/1     ContainerCreating   0          2s    app=kubia
kubia-mgz64   1/1     Running             0          27m   app=foo
kubia-tg29c   1/1     Running             0          29m   app=kubia
kubia-vwnmf   1/1     Running             0          25m   app=kubia

The three pods initially created all have the label app=kubia. Once the label of kubia-mgz64 is changed to app=foo, the pod leaves the control of the ReplicationController, whose managed replica count drops to 2, so the controller creates a new pod. The pod that escaped its control keeps running as usual unless we delete it manually:

[d:\k8s]$ kubectl delete pod kubia-mgz64
pod "kubia-mgz64" deleted
[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
kubia-4dzw8   1/1     Running   0          20h   app=kubia
kubia-tg29c   1/1     Running   0          21h   app=kubia
kubia-vwnmf   1/1     Running   0          21h   app=kubia

3. Modifying the Pod Template

The pod template of a ReplicationController can be modified at any time:

[d:\k8s]$ kubectl edit rc kubia
......
replicationcontroller/kubia edited

The command above opens a text editor; modify the pod template's labels as shown below:

  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kubia
        type: special

Add a new label type: special, then save and exit. Modifying the pod template does not affect existing pods; it only affects pods created afterwards:

[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
kubia-4dzw8   1/1     Running   0          21h   app=kubia
kubia-tg29c   1/1     Running   0          21h   app=kubia
kubia-vwnmf   1/1     Running   0          21h   app=kubia
[d:\k8s]$ kubectl delete pod kubia-4dzw8
pod "kubia-4dzw8" deleted
[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE     LABELS
kubia-6qrxj   1/1     Running   0          2m12s   app=kubia,type=special
kubia-tg29c   1/1     Running   0          21h     app=kubia
kubia-vwnmf   1/1     Running   0          21h     app=kubia

After deleting a pod, the newly created replacement carries the new label.

4. Horizontally Scaling Pods

Change the replica count in the text editor by setting spec.replicas to 5:

[d:\k8s]$ kubectl edit rc kubia
replicationcontroller/kubia edited
[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS              RESTARTS   AGE     LABELS
kubia-6qrxj   1/1     Running             0          9m49s   app=kubia,type=special
kubia-9crmf   0/1     ContainerCreating   0          4s      app=kubia,type=special
kubia-qpwbl   0/1     ContainerCreating   0          4s      app=kubia,type=special
kubia-tg29c   1/1     Running             0          21h     app=kubia
kubia-vwnmf   1/1     Running             0          21h     app=kubia

Two pods are automatically created to reach the replica count of 5. Scale back down to 3 with kubectl scale:

[d:\k8s]$ kubectl scale rc kubia --replicas=3
replicationcontroller/kubia scaled
[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE   LABELS
kubia-6qrxj   1/1     Running   0          15m   app=kubia,type=special
kubia-tg29c   1/1     Running   0          22h   app=kubia
kubia-vwnmf   1/1     Running   0          21h   app=kubia

5. Deleting a ReplicationController

Deleting a ReplicationController with kubectl delete deletes its pods by default, but you can choose to keep them:

[d:\k8s]$ kubectl delete rc kubia --cascade=false
replicationcontroller "kubia" deleted
[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE    LABELS
kubia-6qrxj   1/1     Running   0          103m   app=kubia,type=special
kubia-tg29c   1/1     Running   0          23h    app=kubia
kubia-vwnmf   1/1     Running   0          23h    app=kubia
[d:\k8s]$ kubectl get rc kubia
Error from server (NotFound): replicationcontrollers "kubia" not found

--cascade=false (--cascade=orphan in newer kubectl versions) deletes only the ReplicationController and leaves its pods running.

ReplicaSet

ReplicaSet is the new generation of ReplicationController and will eventually replace it completely. A ReplicaSet behaves exactly like a ReplicationController, but it has a more expressive pod selector.

1. Creating a ReplicaSet

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: ksfzhaohui/kubia

apiVersion is set to apps/v1: apps is the API group and v1 is the actual API version. Resources in the core API group need no group prefix, which is why the earlier ReplicationController only specified v1.
The rest of the definition is much like a ReplicationController's, except that the matchLabels selector is used under selector.

[d:\k8s]$ kubectl create -f kubia-replicaset.yaml
replicaset.apps/kubia created
[d:\k8s]$ kubectl get pods --show-labels
NAME          READY   STATUS    RESTARTS   AGE    LABELS
kubia-6qrxj   1/1     Running   0          150m   app=kubia,type=special
kubia-tg29c   1/1     Running   0          24h    app=kubia
kubia-vwnmf   1/1     Running   0          24h    app=kubia
[d:\k8s]$ kubectl get rs
NAME    DESIRED   CURRENT   READY   AGE
kubia   3         3         3       49s

After the ReplicaSet is created, it takes over the original three pods. For more detail, use the describe command:

[d:\k8s]$ kubectl describe rs
Name:         kubia
Namespace:    default
Selector:     app=kubia
Labels:       <none>
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=kubia
  Containers:
   kubia:
    Image:        ksfzhaohui/kubia
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:           <none>

The Events list is empty: all three current pods were taken over from the pods created earlier.

2. ReplicaSet Label Selectors

The main improvement of a ReplicaSet over a ReplicationController is its more expressive label selector:

   selector:
     matchExpressions:
     - key: app
       operator: In
       values:
       - kubia

Besides matchLabels, a ReplicaSet can use the more powerful matchExpressions. Each expression must contain a key, an operator, and possibly a list of values. Valid operators are:

  • In: the label's value must match one of the specified values;
  • NotIn: the label's value must not match any of the specified values;
  • Exists: the pod must include a label with the specified key; the values field should not be specified with this operator;
  • DoesNotExist: the pod must not include a label with the specified key; the values field should not be specified.
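For instance, an Exists expression looks like this (an illustrative sketch; note the absence of a values list):

```yaml
selector:
  matchExpressions:
  - key: app          # matches any pod that carries the "app" label,
    operator: Exists  # regardless of the label's value
```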

3. Deleting a ReplicaSet

[d:\k8s]$ kubectl delete rs kubia
replicaset.apps "kubia" deleted
[d:\k8s]$ kubectl get pods --show-labels
No resources found in default namespace.

Deleting a ReplicaSet also deletes the pods it manages.

DaemonSet

ReplicationControllers and ReplicaSets both run a specific number of pods deployed somewhere in the Kubernetes cluster. A DaemonSet instead runs exactly one pod on every cluster node, which is useful when you want, say, a log collector or a resource monitor on each node. A node selector can also be used to restrict which nodes run the pod.

1. Creating a DaemonSet

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ssd-monitor
spec:
  selector:
    matchLabels:
      app: ssd-monitor
  template:
    metadata:
      labels:
        app: ssd-monitor
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: main
        image: ksfzhaohui/kubia

Prepare the DaemonSet YAML file above. Its properties are much like a ReplicaSet's, except for nodeSelector, the node selector, which selects nodes carrying the label disk=ssd.

[d:\k8s]$ kubectl create -f ssd-monitor-daemonset.yaml
daemonset.apps/ssd-monitor created
[d:\k8s]$ kubectl get ds
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
ssd-monitor   0         0         0       0            0           disk=ssd        24s
[d:\k8s]$ kubectl get pods --show-labels
No resources found in default namespace.

After creation, no pod is created on the current node, because the node does not carry the disk=ssd label:

[d:\k8s]$ kubectl get node
NAME       STATUS   ROLES    AGE   VERSION
minikube   Ready    master   8d    v1.17.0
[d:\k8s]$ kubectl label node minikube disk=ssd
node/minikube labeled
[d:\k8s]$ kubectl get node --show-labels
NAME       STATUS   ROLES    AGE   VERSION   LABELS
minikube   Ready    master   8d    v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disk=ssd,gpu=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=minikube,kubernetes.io/os=linux,node-role.kubernetes.io/master=
[d:\k8s]$ kubectl get pods --show-labels
NAME                READY   STATUS    RESTARTS   AGE   LABELS
ssd-monitor-84hxd   1/1     Running   0          31s   app=ssd-monitor,controller-revision-hash=5dc77f567d,pod-template-generation=1

First get the current node's name, minikube, then set the label disk=ssd on it; a pod is then automatically created on the node. Since Minikube has only one node, behavior across multiple nodes cannot be simulated here.

2. Deleting the Pod and the DaemonSet

[d:\k8s]$ kubectl label node minikube disk=hdd --overwrite
node/minikube labeled
[d:\k8s]$ kubectl get pods --show-labels
No resources found in default namespace.

After changing the minikube node's label, the pod on the node is automatically deleted because it no longer satisfies the node selector:

[d:\k8s]$ kubectl delete ds ssd-monitor
daemonset.apps "ssd-monitor" deleted
[d:\k8s]$ kubectl get ds
No resources found in default namespace.

Deleting the DaemonSet deletes its pods along with it.

Job

ReplicationControllers, ReplicaSets, and DaemonSets run continuous tasks that never reach a completed state; the processes in their pods are restarted when they exit. With the Job resource, Kubernetes lets you run a pod whose container is not restarted when the process inside finishes successfully; once the task completes, the pod is considered finished.
In the event of a node failure, pods on that node that are managed by a Job are rescheduled onto other nodes. If the process itself exits abnormally, the Job can be configured to restart the container.

1. Creating a Job

Before creating the Job, prepare an image built on busybox whose container invokes the sleep command for two minutes:

FROM busybox
ENTRYPOINT echo "$(date) Batch job starting"; sleep 120; echo "$(date) Finished succesfully"

This image has already been pushed to Docker Hub. The Job manifest:

apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  template:
    metadata:
      labels:
        app: batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: ksfzhaohui/batch-job

Job belongs to the batch API group. The important property here is restartPolicy: its default, Always, means running indefinitely, so a Job must set it explicitly to OnFailure or Never, which restart the process only on failure or not at all.

[d:\k8s]$ kubectl create -f exporter.yaml
job.batch/batch-job created
[d:\k8s]$ kubectl get job
NAME        COMPLETIONS   DURATION   AGE
batch-job   0/1           7s         8s
[d:\k8s]$ kubectl get pod
NAME              READY   STATUS    RESTARTS   AGE
batch-job-7sw68   1/1     Running   0          25s

Creating the Job automatically creates a pod; the process in the pod finishes after running for two minutes:

[d:\k8s]$ kubectl get pod
NAME              READY   STATUS      RESTARTS   AGE
batch-job-7sw68   0/1     Completed   0          3m1s
[d:\k8s]$ kubectl get job
NAME        COMPLETIONS   DURATION   AGE
batch-job   1/1           2m11s      3m12s

The pod's status is Completed, and the Job's COMPLETIONS column likewise shows it finished.

2. Running Multiple Pod Instances in a Job

A Job can be configured to create multiple pod instances and run them in parallel or sequentially, by setting the completions and parallelism properties.

2.1 Running Job Pods Sequentially

apiVersion: batch/v1
kind: Job
metadata:
  name: multi-completion-batch-job
spec:
  completions: 3
  template:
    metadata:
      labels:
        app: multi-completion-batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: ksfzhaohui/batch-job

completions is set to 3, so three pods run one after another; the Job is finished once all of them complete.

[d:\k8s]$ kubectl get pod
NAME                               READY   STATUS      RESTARTS   AGE
multi-completion-batch-job-h75j8   0/1     Completed   0          2m19s
multi-completion-batch-job-wdhnj   1/1     Running     0          15s
[d:\k8s]$ kubectl get job
NAME                         COMPLETIONS   DURATION   AGE
multi-completion-batch-job   1/3           2m28s      2m28s

A second pod starts after the first one completes. Once all of them have run, the result looks like this:

[d:\k8s]$ kubectl get pod
NAME                               READY   STATUS      RESTARTS   AGE
multi-completion-batch-job-4vjff   0/1     Completed   0          2m7s
multi-completion-batch-job-h75j8   0/1     Completed   0          6m16s
multi-completion-batch-job-wdhnj   0/1     Completed   0          4m12s
[d:\k8s]$ kubectl get job
NAME                         COMPLETIONS   DURATION   AGE
multi-completion-batch-job   3/3           6m13s      6m18s

2.2 Running Job Pods in Parallel

apiVersion: batch/v1
kind: Job
metadata:
  name: multi-completion-parallel-batch-job
spec:
  completions: 3
  parallelism: 2
  template:
    metadata:
      labels:
        app: multi-completion-parallel-batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: ksfzhaohui/batch-job

With both completions and parallelism set, the Job runs two pods at the same time; as soon as either of them finishes, the third pod is started:

[d:\k8s]$ kubectl create -f multi-completion-parallel-batch-job.yaml
job.batch/multi-completion-parallel-batch-job created
[d:\k8s]$ kubectl get pod
NAME                                        READY   STATUS              RESTARTS   AGE
multi-completion-parallel-batch-job-f7wn8   0/1     ContainerCreating   0          3s
multi-completion-parallel-batch-job-h9s29   0/1     ContainerCreating   0          3s

2.3 Limiting the Time a Job Pod Can Take

Setting the activeDeadlineSeconds property in the spec limits the time a pod is allowed to run. If the pod runs longer, the system tries to terminate it and marks the Job as failed:

apiVersion: batch/v1
kind: Job
metadata:
  name: time-limited-batch-job
spec:
  activeDeadlineSeconds: 30
  template:
    metadata:
      labels:
        app: time-limited-batch-job
    spec:
      restartPolicy: OnFailure
      containers:
      - name: main
        image: ksfzhaohui/batch-job

activeDeadlineSeconds is set to 30 seconds; after 30 seconds the Job automatically fails.

[d:\k8s]$ kubectl create -f time-limited-batch-job.yaml
job.batch/time-limited-batch-job created
[d:\k8s]$ kubectl get job
NAME                     COMPLETIONS   DURATION   AGE
time-limited-batch-job   0/1           3s         3s
[d:\k8s]$ kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
time-limited-batch-job-jgmm6   1/1     Running   0          29s
[d:\k8s]$ kubectl get pod
NAME                           READY   STATUS        RESTARTS   AGE
time-limited-batch-job-jgmm6   1/1     Terminating   0          30s
[d:\k8s]$ kubectl get pod
No resources found in default namespace.
[d:\k8s]$ kubectl get job
NAME                     COMPLETIONS   DURATION   AGE
time-limited-batch-job   0/1           101s       101s

The time under the AGE column shows how long the pod has been running; after 30 seconds its status changes to Terminating.

2.4 Running Jobs on a Schedule

Jobs can also run periodically, a bit like quartz, with support for similar cron-style expressions:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: corn-batch-job
spec:
  schedule: "0-59 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: corn-batch-job
        spec:
          restartPolicy: OnFailure
          containers:
          - name: main
            image: ksfzhaohui/batch-job

The schedule expression's fields are: minute, hour, day of month, month, and day of week. The configuration above runs a job every minute.
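A few illustrative schedule values in the same standard cron syntax (these are examples, not from the original article):

```yaml
# minute hour day-of-month month day-of-week
schedule: "0,15,30,45 * * * *"   # every 15 minutes
schedule: "0 3 * * 0"            # 3:00 AM every Sunday
schedule: "30 8 1 * *"           # 8:30 AM on the first day of every month
```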

[d:\k8s]$ kubectl create -f cronjob.yaml
cronjob.batch/corn-batch-job created
[d:\k8s]$ kubectl get pod
NAME                              READY   STATUS              RESTARTS   AGE
corn-batch-job-1577263560-w2fq2   0/1     Completed           0          3m3s
corn-batch-job-1577263620-92pc7   1/1     Running             0          2m2s
corn-batch-job-1577263680-tmr8p   1/1     Running             0          62s
corn-batch-job-1577263740-jmzqk   0/1     ContainerCreating   0          2s
[d:\k8s]$ kubectl get job
NAME                        COMPLETIONS   DURATION   AGE
corn-batch-job-1577263560   1/1           2m5s       3m48s
corn-batch-job-1577263620   1/1           2m4s       2m47s
corn-batch-job-1577263680   0/1           107s       107s
corn-batch-job-1577263740   0/1           47s        47s

A job runs every minute. The CronJob can be deleted with:

[d:\k8s]$ kubectl delete CronJob corn-batch-job
cronjob.batch "corn-batch-job" deleted

Summary

This article continues my hands-on notes from reading Kubernetes in Action, covering the replication-related topics: probes, ReplicationController, ReplicaSet, DaemonSet, and Job.

References

Kubernetes in Action

Blog

https://github.com/ksfzhaohui/blog
