- Update at 2016-08-01: Re-tested petset redeployment; a previously created PVC can be re-claimed by a re-deployed petset in the default namespace.
Abstract
petset, introduced in Kubernetes 1.3, provides a strong capability to scale a persistent service out and in through the persistent volume claim abstraction and namespace management. This makes petset well suited to deploying persistent services such as databases.
In this article, I address extended usage of petset with both shared storage and isolated storage, and I expose a couple of limitations, along with solutions, for database deployment using petset (alpha, Kubernetes v1.3.2):
- namespace of PersistentVolumeClaim does not seem to be well supported in v1.3.2. When the user deploys a petset to a namespace other than default, a "PersistentVolumeClaim is not bound" failure leads to failure of the overall deployment.
- Zombie PV resource
Deploy Petset with shared storage
A petset deployment leverages volumeClaimTemplates to claim storage dynamically as it scales out and in. Here I attach an additional shared volume to every pod of the petset, for cases where a database deployment needs shared metadata across the cluster, as in the sketch below.
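A minimal sketch of such a spec, assuming the shared metadata sits on a pre-created PVC named shared-head-claim (as used later in this article); the headless service, image, labels, mount paths, and sizes below are illustrative assumptions rather than the exact manifest used for these tests:
# Headless service that gives the pets stable network identities
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None
  selector:
    app: mysql
  ports:
  - port: 3306
---
# PetSet mounting one per-pod claim (datadir) plus one shared claim in every pod
apiVersion: apps/v1alpha1
kind: PetSet
metadata:
  name: mysql
spec:
  serviceName: "mysql"
  replicas: 2
  template:
    metadata:
      labels:
        app: mysql
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ALLOW_EMPTY_PASSWORD
          value: "1"
        volumeMounts:
        - name: datadir                  # dedicated volume, one claim per pet
          mountPath: /var/lib/mysql
        - name: shared-head              # shared volume, same claim in every pet
          mountPath: /shared
      volumes:
      - name: shared-head
        persistentVolumeClaim:
          claimName: shared-head-claim   # pre-created shared PVC
  volumeClaimTemplates:
  - metadata:
      name: datadir                      # yields datadir-mysql-0, datadir-mysql-1, ...
    spec:
      accessModes: ["ReadWriteMany"]     # assumption: matches the RWX NFS-backed PVs (spv2x)
      resources:
        requests:
          storage: 1Gi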
Advantage: Scale out / scale in
Patch the petset to scale the mysql deployment out or in with kubectl patch petset mysql -p '{"spec": {"replicas": <number>}}'; a newly created mysql pod becomes available in about 35 seconds.
kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-0 1/1 Running 0 1h
mysql-1 1/1 Running 0 1h
mysql-2 1/1 Running 0 19m
mysql-3 1/1 Running 0 18m
root@# kubectl patch petset mysql -p '{"spec": {"replicas": 2}}'
"mysql" patched
root@# kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-0 1/1 Running 0 1h
mysql-1 1/1 Running 0 1h
root@# kubectl patch petset mysql -p '{"spec": {"replicas": 4}}'
"mysql" patched
kubectl get pod
NAME READY STATUS RESTARTS AGE
mysql-0 1/1 Running 0 2h
mysql-1 1/1 Running 0 2h
mysql-2 1/1 Running 0 38s
mysql-3 0/1 Running 0 7s
Advantage: Database Cluster Failover with petset
Recover a database cluster using its existing data volumes after the petset is deleted from Kubernetes. Once a petset is re-created (delete, then create), all PVCs are kept, and the previously claimed PVCs are re-claimed by the re-created petset. So petset gives a lot of flexibility, but at the same time users lose full control over the recovery operation; with a plain pod or replication controller, for example, a database can easily be recovered onto a chosen existing data PVC, PV, or volume. A sketch of the recovery sequence follows the re-claim policy note below.
Petset re-claim policy:
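A rough sketch of this recovery sequence, assuming the petset manifest is saved in a file such as mysql-petset.yaml and the pods carry an app=mysql label (both illustrative):
# Delete the petset; the v1.3 alpha does not cascade-delete its pods or PVCs
kubectl delete petset mysql
kubectl delete pod -l app=mysql    # clean up the leftover pets by label
kubectl get pvc                    # datadir-mysql-0..N remain Bound with the old data
# Re-create the petset; the new pets re-claim the same PVCs and start on the old volumes
kubectl create -f mysql-petset.yaml
kubectl get pod                    # mysql-0, mysql-1, ... come back Running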
Limitation 1: namespace of PersistentVolumeClaim
The namespace of a PersistentVolumeClaim does not seem to be well supported in v1.3.2. The scheduler refuses to treat the claims as bound even though the PVCs are actually Bound. This looks like a bug in the current petset implementation.
6m 6m 1 {default-scheduler } Warning FailedScheduling [PersistentVolumeClaim is not bound: "datadir-mysql-0", PersistentVolumeClaim is not bound: "datadir-mysql-0", PersistentVolumeClaim is not bound: "datadir-mysql-0", PersistentVolumeClaim is not bound: "datadir-mysql-0"]
kubectl get pvc --namespace=petset-sharedfs
NAME STATUS VOLUME CAPACITY ACCESSMODES AGE
datadir-mysql-0 Bound spv21 0 33m
datadir-mysql-1 Bound spv22 0 33m
datadir-mysql-2 Bound spv23 0 33m
shared-head-claim Bound spv20 0 2h
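For reference, a deployment along the following lines reproduces the problem (the manifest file name is illustrative): the PVCs bind as shown above, yet the pods never get scheduled.
kubectl create namespace petset-sharedfs
kubectl create -f mysql-petset.yaml --namespace=petset-sharedfs
kubectl get pvc --namespace=petset-sharedfs                 # claims show Bound
kubectl describe pod mysql-0 --namespace=petset-sharedfs    # events still report "PersistentVolumeClaim is not bound"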
Solution:
- Follow up on the upstream issue fix and watch for source changes.
Limitation 2: Zombie PV resource
Once a user deletes the PVCs of a petset deployment, the PVs move to 'Released' status. These PV resources become zombies, since the Kubernetes controller never allocates them again.
- Pending claim when the PV is in Released status
- The controller fails to re-bind a PV in Released status
E0728 02:18:02.307295 10626 factory.go:517] Error scheduling default mysql-0: [PersistentVolumeClaim is not bound: “datadir-mysql-0”, PersistentVolumeClaim is not bound: “datadir-mysql-0”, PersistentVolumeClaim is not bound: “datadir-mysql-0”, PersistentVolumeClaim is not bound: “datadir-mysql-0”]; retrying
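A quick way to observe both symptoms, with illustrative PV and claim names following the spv2x naming above:
kubectl get pv spv21               # STATUS shows Released, CLAIM still references the deleted claim
kubectl get pvc datadir-mysql-0    # the new claim stays Pending and never binds to the Released PV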
Solution to recycle/recover zombie PV resource
Set the PV status from 'Released' to 'Available' through etcdctl. Then the pods of the petset can be recovered using the existing data.
- Retrieve the metadata of the PV in Released status
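In the v1.3 registry the PV object is stored under /registry/persistentvolumes/<name>, so it can be dumped with etcdctl (v2 API); the PV name and the pv.json file are illustrative:
etcdctl get /registry/persistentvolumes/spv20 > pv.json   # stored PV JSON, including claimRef and the Released phase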
- Update the PV metadata by removing its claimRef, e.g.:
"claimRef":{"kind":"PersistentVolumeClaim","namespace":"petset-sharedfs","name":"shared-head-claim","uid":"0ca05fed-5494-11e6-a022-0cc47a662568","apiVersion":"v1","resourceVersion":"46240"}"
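The edited object can then be written back with etcdctl set, as sketched below; pv.json is the hypothetical file from the previous step, edited by hand to drop the claimRef block and to change status.phase from Released to Available:
# Overwrite the stored PV with the edited JSON; the key must match the one read above
etcdctl set /registry/persistentvolumes/spv20 "$(cat pv.json)"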
- Verify the PV becomes Available.
root@:/nfs/petset# etcdctl get /registry/persistentvolumes/spv14
kubectl get pv
spv14 1Gi RWX Available 1h