In this lab we will cover the following service types:
- ClusterIP
- NodePort
- Service without selectors (Endpoints)
- Headless services
- LoadBalancer
Lab 1: ClusterIP type service
- `apiVersion`: The version of the Kubernetes API being used. In this case it's `v1`, the core API version.
- `kind`: The type of Kubernetes resource being defined. Here, it's a `Service`.
- `metadata`: Metadata about the Service resource, including its `name`.
- `spec`: The specification for the Service, which includes:
  - `selector`: A set of labels used to identify the pods to which this Service should route traffic. In this case, it selects pods with the label `app.kubernetes.io/name: proxy`.
  - `ports`: The ports that the Service should expose. One port is defined here:
    - `name`: A name for the port.
    - `protocol`: The protocol used for the port, in this case `TCP`.
    - `port`: The port number exposed by the Service.
    - `targetPort`: The port on the pods to which traffic will be forwarded. `http-web-svc` is a named port; this field can be either a name or a number representing the port used by the pods.
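The manifest itself is not reproduced above; a minimal sketch of what `svc.yaml` might contain, based on the fields just described (the Service name `nginx-service` and the named target port `http-web-svc` come from the text; the port number 80 is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app.kubernetes.io/name: proxy   # routes to pods carrying this label
  ports:
    - name: http
      protocol: TCP
      port: 80                      # port exposed by the Service
      targetPort: http-web-svc      # named container port on the pods
```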
The `labels` in the pod's metadata match the `selector` in the previously defined Service. This means that the Service defined earlier (`nginx-service`) will route traffic to this pod based on the label `app.kubernetes.io/name: proxy`.
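A matching pod definition might look like the following sketch (the pod name `nginx` and image are placeholders; the actual `dep.yaml` in the lab repo may differ). The key points are the label that matches the Service selector and the container port named `http-web-svc`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy   # must match the Service's selector
spec:
  containers:
    - name: nginx
      image: nginx:stable
      ports:
        - containerPort: 80
          name: http-web-svc        # referenced by the Service's targetPort
```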
git clone https://github.com/amitopenwriteup/k8s.git
kubectl create -f svc.yaml
kubectl get svc
kubectl get ep
kubectl create -f dep.yaml
kubectl get pods
kubectl get ep
##Delete the service
kubectl delete -f svc.yaml
kubectl delete -f dep.yaml
Lab 2: NodePort
- `selector`: A set of labels used to identify the pods to which this Service should route traffic. In this case, it selects pods with the label `app.kubernetes.io/name: proxy`.
- `ports`: The ports that the Service should expose. One port is defined here:
  - `port`: The port number exposed by the Service.
  - `targetPort`: The port on the pods to which traffic will be forwarded. This is set to `80` to match the port the pods are listening on.
  - `nodePort`: An optional field that lets you specify a particular port number on the nodes. If not specified, Kubernetes allocates a port from the default range (typically `30000` to `32767`). The Service is then reachable on any node at that `nodePort` (e.g., `http://<Node_IP>:30007`).
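Putting those fields together, `Nodeport.yaml` might look like this sketch (the Service name `my-service` comes from the delete command below; the selector is assumed to be the same as in Lab 1):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: proxy
  ports:
    - port: 80          # port exposed inside the cluster
      targetPort: 80    # port the pods listen on
      nodePort: 30007   # optional; fixed port on every node
```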
kubectl create -f dep.yaml
kubectl create -f Nodeport.yaml
kubectl get svc
kubectl get ep
#Delete the nodeport service
kubectl delete svc my-service
Lab 3: Service without selectors
This Service definition does not include a `selector`, which means it doesn't specify which pods the Service should route traffic to. Without a selector, the Service will not be associated with any pods and will not route traffic anywhere on its own. We therefore need to create an `Endpoints` object for this Service.
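A sketch of what `svcwithoutsel.yaml` might contain (the Service name `my-svc` comes from the text; the protocol is assumed to be TCP). Note the absence of a `selector` block:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-svc
spec:
  ports:
    - protocol: TCP
      port: 80        # port exposed by the Service
      targetPort: 80  # port on the manually specified endpoints
```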
This `Endpoints` definition manually specifies that the Service named `my-svc` should route traffic to the IP address `192.168.40.131` on port `80`. This can be useful if you need to route traffic to specific IP addresses and ports that are not automatically managed by Kubernetes Service discovery.
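A sketch of the matching `ep.yaml`, using the name, IP address, and port given in the text:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: my-svc          # must match the Service name exactly
subsets:
  - addresses:
      - ip: 192.168.40.131
    ports:
      - port: 80
```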
kubectl create -f svcwithoutsel.yaml
kubectl get svc
kubectl get ep
kubectl create -f ep.yaml
kubectl get ep
Lab 4: Headless service
`clusterIP: None` specifies that no cluster IP should be assigned, making this a headless Service. When you create a headless Service, DNS records are automatically created for each pod that matches the selector. These DNS records can be used to directly resolve the IP addresses of the individual pods behind the Service. Use case: StatefulSets.
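A sketch of what `headless.yaml` might contain (the Service name is a placeholder; the selector is assumed to be the same as in the earlier labs):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-svc   # hypothetical name
spec:
  clusterIP: None          # no cluster IP: this makes the Service headless
  selector:
    app.kubernetes.io/name: proxy
  ports:
    - port: 80
```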
kubectl create -f headless.yaml
kubectl get svc
#Delete the service
kubectl delete -f headless.yaml
Lab 5: LoadBalancer type service
Set up MetalLB, a load-balancer implementation for bare-metal Kubernetes clusters.
Install MetalLB:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.14.3/config/manifests/metallb-native.yaml
Set up an IP address range for the load balancer:
kubectl get pods -n metallb-system
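The manifest for the IP range is not shown above; a sketch using MetalLB's `IPAddressPool` and `L2Advertisement` resources (the pool name and the address range are placeholders; pick a range of free addresses on your node network):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lab-pool              # hypothetical name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.40.200-192.168.40.210   # adjust to your network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lab-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lab-pool                # advertise addresses from the pool above
```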
- `type`: The Service type. In this case it's `LoadBalancer`, which requests a load balancer from the cloud provider (if available) and directs external traffic to the Service. Note that the availability of load balancers depends on your Kubernetes environment and cloud provider; in some cases you need to configure the cloud-provider integration to enable automatic load-balancer provisioning.
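A sketch of what `lb.yaml` might contain (the Service name is a placeholder; the selector and ports are assumed to match the earlier labs). With MetalLB installed, the Service should receive an external IP from the configured pool:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service   # hypothetical name
spec:
  type: LoadBalancer     # MetalLB assigns an external IP from its pool
  selector:
    app.kubernetes.io/name: proxy
  ports:
    - port: 80
      targetPort: 80
```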
kubectl create -f lb.yaml
kubectl get svc