Replication Controller:
Controllers are the brain behind Kubernetes. They are the processes that monitor Kubernetes objects and respond accordingly. Let us talk about one in particular: the replication controller.
The replication controller helps us run multiple instances of a pod, thus providing high availability. It ensures that the specified number of pods is running at all times.
Replication controllers also help us scale and load balance, and a replication controller can span multiple nodes in a cluster.
How do we create a replication controller?
We start by defining a YAML file. Like any Kubernetes definition file, it has four mandatory top-level fields: apiVersion, kind, metadata, and spec.
Up until the metadata section, it is pretty much similar to what we have already seen when creating a pod, except that the kind here is ReplicationController.
The difference comes primarily under the spec section. Under spec, we define a template of the pod we are trying to create.
Inside this template, we have metadata and spec sections again, but these describe the pod and its containers.
The replicas field specifies the number of pods that must be deployed. replicas is a child of the top-level spec and a sibling of template, so it must be indented to the same level as template.
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-rc
  labels:
    app: mytestapp
    type: front-end
spec:
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
        type: front-end
    spec:
      containers:
        - name: nginx-container
          image: nginx
  replicas: 3
Once the file is ready, we can run kubectl create -f {file-name}.yml to create the replication controller and its pods.
We can use the kubectl get replicationcontroller command to see its status.
Replica Set:
A replica set is similar to a replication controller; it is just the newer, recommended way of setting up replication. The key differences are in the apiVersion (apps/v1), the kind (ReplicaSet), and a new required property called selector. Using the selector's matchLabels, we define which pods the replica set will manage. This is needed because a replica set can also manage pods that were not created by it.
Labels and selectors therefore become important: every pod whose labels match the selector counts towards the desired number of replicas, even if that pod was not created by the replica set.
The template is still required even if the matching pods already exist because, should one of those pods fail in the future, the replica set needs the template to create a replacement.
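For example, a standalone pod defined like the sketch below (the name my-standalone-pod is illustrative) would be adopted and counted by a replica set whose selector matches the label type: front-end:

```yaml
# Hypothetical standalone pod; because its labels match the
# replica set's selector, the replica set counts it as one of
# its replicas even though it did not create this pod.
apiVersion: v1
kind: Pod
metadata:
  name: my-standalone-pod
  labels:
    type: front-end   # matches the replica set's matchLabels
spec:
  containers:
    - name: nginx-container
      image: nginx
```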
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset
  labels:
    app: mytestapp
    type: front-end
spec:
  template:
    metadata:
      name: my-app-test
      labels:
        app: nginx
        type: front-end
    spec:
      containers:
        - name: nginx-container
          image: nginx
  replicas: 3
  selector:
    matchLabels:
      type: front-end
We can use the command kubectl get replicaset to get information about the replica set.
Scale:
Let’s say we are running 3 pods and we are planning to scale to 6. How do we do it?
One way is to update replicas in the YAML file to 6 and then run kubectl replace -f (unknown).yml.
Another way is to run kubectl scale --replicas=6 -f (unknown).yml. We can also scale by naming the resource directly, e.g. kubectl scale --replicas=6 replicaset my-replicaset. Note that scaling this way does not update the replicas count stored in the file itself.
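For the first approach, only the replicas field in the manifest changes; a minimal sketch of the updated spec section, assuming the same replica set as above:

```yaml
# Only the replicas count changes from 3 to 6;
# the rest of the manifest stays exactly as before.
spec:
  template:
    # ... pod template as defined earlier ...
  replicas: 6
  selector:
    matchLabels:
      type: front-end
```

Running kubectl replace -f with this updated file makes the replica set converge to 6 pods.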
[Screenshot: the YAML file and the commands used to create the replica set]

After scaling:

[Screenshot: pod status after scaling]