Where does the default kubernetes service get its list of endpoints?

I have a cluster with 3 control planes. Like any cluster, mine has a default kubernetes service, and like any service it has a list of endpoints:
apiVersion: v1
items:
- apiVersion: v1
  kind: Endpoints
  metadata:
    creationTimestamp: 2017-12-12T17:08:34Z
    name: kubernetes
    namespace: default
    resourceVersion: "6242123"
    selfLink: /api/v1/namespaces/default/endpoints/kubernetes
    uid: 161edaa7-df5f-11e7-a311-d09466092927
  subsets:
  - addresses:
    - ip: 10.9.22.25
    - ip: 10.9.22.26
    - ip: 10.9.22.27
    ports:
    - name: https
      port: 8443
      protocol: TCP
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Everything is fine, but I completely fail to understand where these endpoints come from. It would be logical to assume they come from the Service's label selector, but there are no label selectors:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-12-12T17:08:34Z
  labels:
    component: apiserver
    provider: kubernetes
  name: kubernetes
  namespace: default
  resourceVersion: "6"
  selfLink: /api/v1/namespaces/default/services/kubernetes
  uid: 161e4f00-df5f-11e7-a311-d09466092927
spec:
  clusterIP: 10.100.0.1
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
status:
  loadBalancer: {}
So, could anybody explain how k8s services and endpoints work in the case of the built-in default kubernetes service?

It's not clear how you created your multi-node cluster, but here is some research for you:
Set up High-Availability Kubernetes Masters describes HA cluster creation and has a note about the default kubernetes service:
Instead of trying to keep an up-to-date list of Kubernetes apiserver in the Kubernetes service, the system directs all traffic to the external IP:
in one master cluster the IP points to the single master,
in multi-master cluster the IP points to the load balancer in-front of the masters.
Similarly, the external IP will be used by kubelets to communicate with master.
So I would rather expect a load-balancer IP there instead of the three master addresses.
Service creation: https://github.com/kubernetes/kubernetes/blob/master/pkg/master/controller.go#L46-L83
const kubernetesServiceName = "kubernetes"

// Controller is the controller manager for the core bootstrap Kubernetes
// controller loops, which manage creating the "kubernetes" service, the
// "default", "kube-system" and "kube-public" namespaces, and provide the IP
// repair check on service IPs
type Controller struct {
    ServiceClient   corev1client.ServicesGetter
    NamespaceClient corev1client.NamespacesGetter
    EventClient     corev1client.EventsGetter
    healthClient    rest.Interface

    ServiceClusterIPRegistry rangeallocation.RangeRegistry
    ServiceClusterIPInterval time.Duration
    ServiceClusterIPRange    net.IPNet

    ServiceNodePortRegistry rangeallocation.RangeRegistry
    ServiceNodePortInterval time.Duration
    ServiceNodePortRange    utilnet.PortRange

    EndpointReconciler reconcilers.EndpointReconciler
    EndpointInterval   time.Duration

    SystemNamespaces         []string
    SystemNamespacesInterval time.Duration

    PublicIP net.IP

    // ServiceIP indicates where the kubernetes service will live. It may not be nil.
    ServiceIP                 net.IP
    ServicePort               int
    ExtraServicePorts         []corev1.ServicePort
    ExtraEndpointPorts        []corev1.EndpointPort
    PublicServicePort         int
    KubernetesServiceNodePort int

    runner *async.Runner
}
The service is updated periodically: https://github.com/kubernetes/kubernetes/blob/master/pkg/master/controller.go#L204-L242
// RunKubernetesService periodically updates the kubernetes service
func (c *Controller) RunKubernetesService(ch chan struct{}) {
    // wait until process is ready
    wait.PollImmediateUntil(100*time.Millisecond, func() (bool, error) {
        var code int
        c.healthClient.Get().AbsPath("/healthz").Do().StatusCode(&code)
        return code == http.StatusOK, nil
    }, ch)

    wait.NonSlidingUntil(func() {
        // Service definition is not reconciled after first
        // run, ports and type will be corrected only during
        // start.
        if err := c.UpdateKubernetesService(false); err != nil {
            runtime.HandleError(fmt.Errorf("unable to sync kubernetes service: %v", err))
        }
    }, c.EndpointInterval, ch)
}

// UpdateKubernetesService attempts to update the default Kube service.
func (c *Controller) UpdateKubernetesService(reconcile bool) error {
    // Update service & endpoint records.
    // TODO: when it becomes possible to change this stuff,
    // stop polling and start watching.
    // TODO: add endpoints of all replicas, not just the elected master.
    if err := createNamespaceIfNeeded(c.NamespaceClient, metav1.NamespaceDefault); err != nil {
        return err
    }

    servicePorts, serviceType := createPortAndServiceSpec(c.ServicePort, c.PublicServicePort, c.KubernetesServiceNodePort, "https", c.ExtraServicePorts)
    if err := c.CreateOrUpdateMasterServiceIfNeeded(kubernetesServiceName, c.ServiceIP, servicePorts, serviceType, reconcile); err != nil {
        return err
    }

    endpointPorts := createEndpointPortSpec(c.PublicServicePort, "https", c.ExtraEndpointPorts)
    if err := c.EndpointReconciler.ReconcileEndpoints(kubernetesServiceName, c.PublicIP, endpointPorts, reconcile); err != nil {
        return err
    }
    return nil
}
The endpoints are reconciled here: https://github.com/kubernetes/kubernetes/blob/72f69546142a84590550e37d70260639f8fa3e88/pkg/master/reconcilers/lease.go#L163
Endpoints can also be created manually; see Services without selectors for more info.
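For illustration, this is roughly what the "service without a selector" pattern looks like: a Service with no selector plus a hand-managed Endpoints object of the same name (the name and IP below are made up for the sketch):
apiVersion: v1
kind: Service
metadata:
  name: my-external-service    # hypothetical name
spec:
  ports:
  - protocol: TCP
    port: 443
    targetPort: 8443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: my-external-service    # must match the Service name
subsets:
- addresses:
  - ip: 10.9.22.25             # example backend IP
  ports:
  - port: 8443
The built-in kubernetes service works the same way: it has no selector, and the apiserver's bootstrap controller writes the Endpoints object directly.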

Related

Is TLS a "must" requirement in Kubernetes Admission Webhook?

I am trying to set up a custom admission webhook. I do not want the TLS protocol for it. I do understand that the client (which is the kube-apiserver) would make an "https" request to the webhook server, and hence we require a TLS server in the webhook.
The webhook server code is below. I define a dummy valid cert & key as constants. The server below works fine in the webhook service:
package main

import (
    "crypto/tls"
    "fmt"
    "html"
    "log"
    "net/http"
)

func handleRoot(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "hello test.. %q", html.EscapeString(r.URL.Path))
}

type Config struct {
    CertFile string
    KeyFile  string
}

func main() {
    log.Print("Starting server ...2.2")
    http.HandleFunc("/test", handleRoot)

    config := Config{
        CertFile: cert,
        KeyFile:  key,
    }

    server := &http.Server{
        Addr:      ":9543",
        TLSConfig: configTLS(config),
    }

    err := server.ListenAndServeTLS("", "")
    if err != nil {
        panic(err)
    }
}

func configTLS(config Config) *tls.Config {
    sCert, err := tls.X509KeyPair([]byte(config.CertFile), []byte(config.KeyFile))
    if err != nil {
        log.Fatal(err)
    }
    return &tls.Config{
        Certificates:       []tls.Certificate{sCert},
        ClientAuth:         tls.NoClientCert,
        InsecureSkipVerify: true,
    }
}
Also the MutatingWebhookConfiguration yaml looks like below:
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  creationTimestamp: null
  labels:
    test-webhook-service.io/system: "true"
  name: test-webhook-service-mutating-webhook-configuration
webhooks:
- admissionReviewVersions:
  - v1
  - v1beta1
  clientConfig:
    service:
      name: test-webhook-service-service
      namespace: test-webhook-service-system
      path: /test
  failurePolicy: Ignore
  matchPolicy: Equivalent
  name: mutation.test-webhook-service.io
  rules:
  - apiGroups:
    - ""
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - pods
  sideEffects: None
Now, in order to test whether the admission controller works, I created a new Pod. The admission controller gives this error:
2022/09/30 05:42:56 Starting server ...2.2
2022/09/30 05:43:23 http: TLS handshake error from 10.100.0.1:37976: remote error: tls: bad certificate
What does this mean? Does it mean I have to put a valid caBundle in the MutatingWebhookConfiguration, and thus TLS is required? If that is the case, I am not sure what the following from the official k8s documentation means (source):
The example admission webhook server leaves the ClientAuth field empty, which defaults to NoClientCert. This means that the webhook server does not authenticate the identity of the clients, supposedly API servers.
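For context, the caBundle mentioned above is the clientConfig.caBundle field of the webhook configuration, which carries the base64-encoded CA certificate the API server uses to verify the webhook's serving certificate; a minimal sketch with a placeholder value:
webhooks:
- name: mutation.test-webhook-service.io
  clientConfig:
    caBundle: <base64-encoded-PEM-CA-bundle>   # placeholder, not a real value
    service:
      name: test-webhook-service-service
      namespace: test-webhook-service-system
      path: /test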

GCP GKE - Healthcheck for native GKE ingress for the same application with 2 exposed ports

I have an application that is supposed to expose 2 ports, and the application does not have the default healthcheck endpoint of / that returns 200, so at the moment I supply a custom healthcheck endpoint for just 1 port. I haven't exposed the other port yet, as I don't know how to provide another custom healthcheck endpoint for the same application.
This is what my Terraform configuration looks like:
resource "kubernetes_deployment" "core" {
metadata {
name = "core"
labels = {
app = "core"
}
}
spec {
replicas = 1
selector {
match_labels = {
app = "core"
}
}
template {
metadata {
labels = {
app = "core"
}
}
spec {
container {
name = "core"
image = "asia.gcr.io/admin/core:${var.app_version}"
port {
container_port = 8069
}
readiness_probe {
http_get {
path = "/web/database/selector"
port = "8069"
}
initial_delay_seconds = 15
period_seconds = 30
}
image_pull_policy = "IfNotPresent"
}
}
}
}
}
resource "kubernetes_service" "core_service" {
metadata {
name = "core-service"
}
spec {
type = "NodePort"
selector = {
app = "core"
}
port {
port = 8080
protocol = "TCP"
target_port = "8069"
}
}
}
How do I tell GKE to expose the other port (8072) and use a custom healthcheck endpoint for both ports?
GKE Ingress has FrontendConfig and BackendConfig custom resource definitions (CRDs) that let you further customize the load balancer. You can use a unique BackendConfig per Service port to specify a custom BackendConfig for a specific port or ports of a Service or MultiClusterService, using a key that matches the port's name or number. The Ingress controller uses that specific BackendConfig when it creates the load balancer backend service for the referenced Service port.
When using a BackendConfig to provide a custom load balancer health check, the port number you use for the load balancer's health check can differ from the Service's spec.ports[].port number. Here is an example of the Service and the custom health check:
Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {
      "service-reference-a":"backendconfig-reference-a",
      "service-reference-b":"backendconfig-reference-b"
    }}'
spec:
  ports:
  - name: port-name-1
    port: port-number-1
    protocol: TCP
    targetPort: 50000
  - name: port-name-2
    port: port-number-2
    protocol: TCP
    targetPort: 8080
Custom Health Check:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  healthCheck:
    checkIntervalSec: interval
    timeoutSec: timeout
    healthyThreshold: health-threshold
    unhealthyThreshold: unhealthy-threshold
    type: protocol
    requestPath: path
    port: port
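Applied to the Service from the question, a sketch could look like the following; the second port 8072, the Service port numbers 8080/8082, the BackendConfig names, and the health-check path on the second port are assumptions to adapt to your application:
apiVersion: v1
kind: Service
metadata:
  name: core-service
  annotations:
    cloud.google.com/backend-config: '{"ports": {"8080": "core-hc-8069", "8082": "core-hc-8072"}}'
spec:
  type: NodePort
  selector:
    app: core
  ports:
  - name: web
    port: 8080
    protocol: TCP
    targetPort: 8069
  - name: extra
    port: 8082
    protocol: TCP
    targetPort: 8072
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: core-hc-8069
spec:
  healthCheck:
    type: HTTP
    requestPath: /web/database/selector   # existing healthcheck path from the question
    port: 8069
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: core-hc-8072
spec:
  healthCheck:
    type: HTTP
    requestPath: /health                  # hypothetical path served on the second port
    port: 8072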

How to create ingress on K8S Cluster machine using Terraform?

I am new to K8S and Terraform. I installed ingress_nginx on a K8S cluster running on bare metal.
[root@control02 ~]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-hello-world-svc NodePort 10.xx.xx.121 <none> 8086:30333/TCP 13d
ingress-nginx-controller NodePort 10.xx.xx.124 <none> 80:31545/TCP,443:30198/TCP 13d
ingress-nginx-controller-admission ClusterIP 10.xx.xx.85 <none> 443/TCP 13d
I created a Deployment, Service and Ingress and am able to access docker-hello-world-svc from the browser successfully. Ingress.yaml is given below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
  annotations:
    kubernetes.io/ingress.class: "nginx"
  namespace: ingress-nginx
spec:
  #ingressClassName: nginx
  rules:
  - host: foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: docker-hello-world-svc
            port:
              number: 8086
My requirement is to containerize our PHP-based applications on the K8S cluster.
Is creating an ingress via Terraform's resource "kubernetes_ingress" "web" the same as setting kubernetes.io/ingress.class in ingress.yaml, or are they different?
How can I create an ingress on the K8S cluster machine using Terraform?
For example, when I trigger a job from GitLab, Terraform should create a new "resource kubernetes_ingress" on the K8S cluster or control-plane machine. Is this possible?
Kindly clarify the queries mentioned above and let me know if my understanding is wrong.
The ingress.class is needed to let the nginx ingress controller understand that it needs to handle this resource.
To create an ingress with Terraform you can use the following:
resource "kubernetes_ingress" "ingress" {
metadata {
name = "ingress-name"
namespace = "ingress-namespace"
labels = {
app = "some-label-app"
}
annotations = {
"kubernetes.io/ingress.class" : "nginx"
}
}
spec {
rule {
host = "foo.com"
http {
path {
backend {
service_name = "svc"
service_port = "http"
}
}
}
}
}
}
I was able to create the service on the existing K8S cluster (bare metal) using the following code.
The K8S cluster was set up on 192.168.xxx.xxx, on which I created an example service. We need to set the 'host' parameter inside the 'kubernetes' provider block:
provider "kubernetes" {
**host = "https://192.168.xxx.xxx:6443"**
cluster_ca_certificate = "${base64decode(var.cluster_ca_certificate)}"
client_certificate = "${base64decode(var.client_certificate)}"
client_key = "${base64decode(var.client_key)}"
}
resource "kubernetes_service" "example" {
metadata {
name = "example"
}
spec {
port {
port = 8585
target_port = 80
}
type = "ClusterIP"
}
}
For resource "kubernetes_ingress", this:
metadata {
  annotations = {
    "kubernetes.io/ingress.class" : "nginx"
  }
}
should now be:
spec {
  ingress_class_name = "nginx"
}
See:
https://kubernetes.io/docs/concepts/services-networking/ingress/#deprecated-annotation
https://github.com/hashicorp/terraform-provider-kubernetes/commit/647bf733b333bc0ccbabdbd937a6f759800a253a
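In plain YAML terms, the same migration for the Ingress from the question means dropping the annotation in favor of spec.ingressClassName; a sketch:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ing
  namespace: ingress-nginx
spec:
  ingressClassName: nginx
  rules:
  - host: foo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: docker-hello-world-svc
            port:
              number: 8086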

How to configure EKS ALB with Terraform

I'm having a hard time getting EKS to expose an IP address to the public internet. Do I need to set up the ALB myself or do you get that for free as part of the EKS cluster? If I have to do it myself, do I need to define it in the terraform template file or in the kubernetes object yaml?
Here's my EKS cluster defined in Terraform along with what I think are the required permissions.
// eks.tf
resource "aws_iam_role" "eks_cluster_role" {
name = "${local.env_name}-eks-cluster-role"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Principal = {
Service = "eks.amazonaws.com"
},
Action = "sts:AssumeRole"
}
]
})
}
resource "aws_iam_role_policy_attachment" "eks-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = aws_iam_role.eks_cluster_role.name
}
resource "aws_iam_role_policy_attachment" "eks-AmazonEKSVPCResourceController" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
role = aws_iam_role.eks_cluster_role.name
}
resource "aws_kms_key" "eks_key" {
description = "EKS KMS Key"
deletion_window_in_days = 7
enable_key_rotation = true
tags = {
Environment = local.env_name
Service = "EKS"
}
}
resource "aws_kms_alias" "eks_key_alias" {
target_key_id = aws_kms_key.eks_key.id
name = "alias/eks-kms-key-${local.env_name}"
}
resource "aws_eks_cluster" "eks_cluster" {
name = "${local.env_name}-eks-cluster"
role_arn = aws_iam_role.eks_cluster_role.arn
enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
vpc_config {
subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
}
encryption_config {
resources = ["secrets"]
provider {
key_arn = aws_kms_key.eks_key.arn
}
}
tags = {
Environment = local.env_name
}
}
resource "aws_iam_role" "eks_node_group_role" {
name = "${local.env_name}-eks-node-group"
assume_role_policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Effect = "Allow",
Principal = {
Service = "ec2.amazonaws.com"
},
Action = "sts:AssumeRole"
}
]
})
}
resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKSWorkerNodePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
role = aws_iam_role.eks_node_group_role.name
}
resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKS_CNI_Policy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
role = aws_iam_role.eks_node_group_role.name
}
resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEC2ContainerRegistryReadOnly" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
role = aws_iam_role.eks_node_group_role.name
}
resource "aws_eks_node_group" "eks_node_group" {
instance_types = var.node_group_instance_types
node_group_name = "${local.env_name}-eks-node-group"
node_role_arn = aws_iam_role.eks_node_group_role.arn
cluster_name = aws_eks_cluster.eks_cluster.name
subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
scaling_config {
desired_size = 1
max_size = 1
min_size = 1
}
// Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
// Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
depends_on = [
aws_iam_role_policy_attachment.eks-node-group-AmazonEC2ContainerRegistryReadOnly,
aws_iam_role_policy_attachment.eks-node-group-AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.eks-node-group-AmazonEKSWorkerNodePolicy,
]
And here's my kubernetes object yaml:
# hello-kubernetes.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
      - name: hello-kubernetes
        image: paulbouwer/hello-kubernetes:1.9
        ports:
        - containerPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  backend:
    serviceName: hello-kubernetes
    servicePort: 80
I've run terraform apply and the cluster is up and running. I've installed eksctl and kubectl and run kubectl apply -f hello-kubernetes.yaml. The pods, service, and ingress appear to be running fine.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-kubernetes-6cb7cd595b-25bd9 1/1 Running 0 6h13m
hello-kubernetes-6cb7cd595b-lccdj 1/1 Running 0 6h13m
hello-kubernetes-6cb7cd595b-snwvr 1/1 Running 0 6h13m
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-kubernetes LoadBalancer 172.20.102.37 <pending> 80:32086/TCP 6h15m
$ kubectl get ingresses
NAME CLASS HOSTS ADDRESS PORTS AGE
hello-ingress <none> * 80 3h45m
What am I missing and which file does it belong in?
You need to install the AWS Load Balancer Controller by following the installation instructions: first you create the IAM role and permissions, which can be done with Terraform; then you apply the Kubernetes YAML that installs the controller into your cluster, which can be done with Helm or kubectl.
You also need to be aware of the subnet tagging that is needed for creating, e.g., a public- or private-facing load balancer.
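Once the controller is installed, the Ingress from the question also needs to reference it; a minimal sketch on the current networking.k8s.io/v1 API, assuming the controller's default "alb" IngressClass and an internet-facing load balancer:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-kubernetes
            port:
              number: 80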
Usually the way to go is to put an ALB in front and redirect traffic to the EKS cluster, managing it with the ALB Ingress Controller. This ingress controller acts as the bridge between the cluster and your ALB; here is the AWS documentation, which is pretty straightforward:
EKS w/ALB
Another solution could be using an NGINX ingress controller with an NLB if the ALB doesn't suit your application's needs, as described in the following article:
NGINX w/NLB
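For the NLB route, the common pattern is a Service of type LoadBalancer annotated to request an NLB (normally this would front the NGINX ingress controller itself; it is shown here against the question's service only for brevity); a sketch:
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: hello-kubernetes
  ports:
  - port: 80
    targetPort: 8080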
This happened to me too: after all the setup, I was not able to see the ingress address. The best way to debug this issue is to check the logs of the ingress controller. You can do this as follows:
Get the ingress controller pod name using: kubectl get po -n kube-system
Check the logs of that pod using: kubectl logs <po_name> -n kube-system
This will point you to the exact reason why you are not seeing the address.
If you do not find any pod running with "ingress" in its name, then you will have to create the ingress controller first.

Bad latency in GKE between Pods

We are seeing very strange behavior with unacceptably high latency for communication within a Kubernetes cluster (GKE).
The latency jumps between 600 ms and 1 s for an endpoint that performs a Memorystore get/store action and a CloudSQL query. The same setup running locally in the dev environment (although without k8s) does not show this kind of latency.
About our architecture:
We are running a k8s cluster on GKE using terraform and service / deployment (yaml) files for the creation (I added those below).
We're running two node APIs (koa.js 2.5). One API is exposed with an ingress to the public and connects via a nodeport to the API pod.
The other API pod is privately reachable through an internal load balancer from Google. This API is connected to all the resources we need (CloudSQL, Cloud Storage).
Both APIs are also connected to a Memorystore (Redis).
The communication between those pods is secured with self-signed server/client certificates (which isn't the problem, we already removed it temporarily to test).
We checked the logs and saw that the request from the public API to the private one takes about 200 ms just to reach it.
Also, the response from the private API back to the public one took about 600 ms (measured from the point when the whole business logic of the private API had run through until we received that response back at the public API).
We're really out of things to try... We already connected all the Google Cloud resources to our local environment which didn't show that kind of bad latency.
In a complete local setup the latency is only about 1/5 to 1/10 of what we see in the cloud setup.
We also tried to ping the private POD from the public one which was in the 0.100ms area.
Do you have any ideas where we can investigate further?
Here is the Terraform script for our Google Cloud setup:
// Configure the Google Cloud provider
provider "google" {
project = "${var.project}"
region = "${var.region}"
}
data "google_compute_zones" "available" {}
# Ensuring relevant service APIs are enabled in your project. Alternatively visit and enable the needed services
resource "google_project_service" "serviceapi" {
service = "serviceusage.googleapis.com"
disable_on_destroy = false
}
resource "google_project_service" "sqlapi" {
service = "sqladmin.googleapis.com"
disable_on_destroy = false
depends_on = ["google_project_service.serviceapi"]
}
resource "google_project_service" "redisapi" {
service = "redis.googleapis.com"
disable_on_destroy = false
depends_on = ["google_project_service.serviceapi"]
}
# Create a VPC and a subnetwork in our region
resource "google_compute_network" "appnetwork" {
name = "${var.environment}-vpn"
auto_create_subnetworks = "false"
}
resource "google_compute_subnetwork" "network-with-private-secondary-ip-ranges" {
name = "${var.environment}-vpn-subnet"
ip_cidr_range = "10.2.0.0/16"
region = "europe-west1"
network = "${google_compute_network.appnetwork.self_link}"
secondary_ip_range {
range_name = "kubernetes-secondary-range-pods"
ip_cidr_range = "10.60.0.0/16"
}
secondary_ip_range {
range_name = "kubernetes-secondary-range-services"
ip_cidr_range = "10.70.0.0/16"
}
}
# GKE cluster setup
resource "google_container_cluster" "primary" {
name = "${var.environment}-cluster"
zone = "${data.google_compute_zones.available.names[1]}"
initial_node_count = 1
description = "Kubernetes Cluster"
network = "${google_compute_network.appnetwork.self_link}"
subnetwork = "${google_compute_subnetwork.network-with-private-secondary-ip-ranges.self_link}"
depends_on = ["google_project_service.serviceapi"]
additional_zones = [
"${data.google_compute_zones.available.names[0]}",
"${data.google_compute_zones.available.names[2]}",
]
master_auth {
username = "xxxxxxx"
password = "xxxxxxx"
}
ip_allocation_policy {
cluster_secondary_range_name = "kubernetes-secondary-range-pods"
services_secondary_range_name = "kubernetes-secondary-range-services"
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/trace.append"
]
tags = ["kubernetes", "${var.environment}"]
}
}
##################
# MySQL DATABASES
##################
resource "google_sql_database_instance" "core" {
name = "${var.environment}-sql-core"
database_version = "MYSQL_5_7"
region = "${var.region}"
depends_on = ["google_project_service.sqlapi"]
settings {
# Second-generation instance tiers are based on the machine
# type. See argument reference below.
tier = "db-n1-standard-1"
}
}
resource "google_sql_database_instance" "tenant1" {
name = "${var.environment}-sql-tenant1"
database_version = "MYSQL_5_7"
region = "${var.region}"
depends_on = ["google_project_service.sqlapi"]
settings {
# Second-generation instance tiers are based on the machine
# type. See argument reference below.
tier = "db-n1-standard-1"
}
}
resource "google_sql_database_instance" "tenant2" {
name = "${var.environment}-sql-tenant2"
database_version = "MYSQL_5_7"
region = "${var.region}"
depends_on = ["google_project_service.sqlapi"]
settings {
# Second-generation instance tiers are based on the machine
# type. See argument reference below.
tier = "db-n1-standard-1"
}
}
resource "google_sql_database" "core" {
name = "project_core"
instance = "${google_sql_database_instance.core.name}"
}
resource "google_sql_database" "tenant1" {
name = "project_tenant_1"
instance = "${google_sql_database_instance.tenant1.name}"
}
resource "google_sql_database" "tenant2" {
name = "project_tenant_2"
instance = "${google_sql_database_instance.tenant2.name}"
}
##################
# MySQL USERS
##################
resource "google_sql_user" "core-user" {
name = "${var.sqluser}"
instance = "${google_sql_database_instance.core.name}"
host = "cloudsqlproxy~%"
password = "${var.sqlpassword}"
}
resource "google_sql_user" "tenant1-user" {
name = "${var.sqluser}"
instance = "${google_sql_database_instance.tenant1.name}"
host = "cloudsqlproxy~%"
password = "${var.sqlpassword}"
}
resource "google_sql_user" "tenant2-user" {
name = "${var.sqluser}"
instance = "${google_sql_database_instance.tenant2.name}"
host = "cloudsqlproxy~%"
password = "${var.sqlpassword}"
}
##################
# REDIS
##################
resource "google_redis_instance" "redis" {
name = "${var.environment}-redis"
tier = "BASIC"
memory_size_gb = 1
depends_on = ["google_project_service.redisapi"]
authorized_network = "${google_compute_network.appnetwork.self_link}"
region = "${var.region}"
location_id = "${data.google_compute_zones.available.names[0]}"
redis_version = "REDIS_3_2"
display_name = "Redis Instance"
}
# The following outputs allow authentication and connectivity to the GKE Cluster.
output "client_certificate" {
value = "${google_container_cluster.primary.master_auth.0.client_certificate}"
}
output "client_key" {
value = "${google_container_cluster.primary.master_auth.0.client_key}"
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.primary.master_auth.0.cluster_ca_certificate}"
}
The service and deployment of the private API
# START CRUD POD
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: crud-pod
  labels:
    app: crud
spec:
  template:
    metadata:
      labels:
        app: crud
    spec:
      containers:
      - name: crud
        image: eu.gcr.io/dev-xxxxx/crud:latest-unstable
        ports:
        - containerPort: 3333
        env:
        - name: NODE_ENV
          value: develop
        volumeMounts:
        - [..MountedConfigFiles..]
      # [START proxy_container]
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=dev-xxxx:europe-west1:dev-sql-core=tcp:3306,dev-xxxx:europe-west1:dev-sql-tenant1=tcp:3307,dev-xxxx:europe-west1:dev-sql-tenant2=tcp:3308",
                  "-credential_file=xxxx"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      # [END proxy_container]
      # [START volumes]
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      - [..ConfigFilesVolumes..]
      # [END volumes]
# END CRUD POD
-------
# START CRUD SERVICE
apiVersion: v1
kind: Service
metadata:
  name: crud
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 10.60.0.0/16
  ports:
  - name: crud-port
    port: 3333
    protocol: TCP # default; can also specify UDP
  selector:
    app: crud # label selector for Pods to target
# END CRUD SERVICE
And the public one (including ingress)
# START SAPI POD
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sapi-pod
  labels:
    app: sapi
spec:
  template:
    metadata:
      labels:
        app: sapi
    spec:
      containers:
      - name: sapi
        image: eu.gcr.io/dev-xxx/sapi:latest-unstable
        ports:
        - containerPort: 8080
        env:
        - name: NODE_ENV
          value: develop
        volumeMounts:
        - [..MountedConfigFiles..]
      volumes:
      - [..ConfigFilesVolumes..]
# END SAPI POD
-------------
# START SAPI SERVICE
kind: Service
apiVersion: v1
metadata:
  name: sapi # Service name
spec:
  selector:
    app: sapi
  ports:
  - port: 8080
    targetPort: 8080
  type: NodePort
# END SAPI SERVICE
--------------
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-dev-static-ip
  labels:
    app: sapi-ingress
spec:
  backend:
    serviceName: sapi
    servicePort: 8080
  tls:
  - hosts:
    - xxxxx
    secretName: xxxxx
We fixed the issue by removing @google-cloud/logging-winston from our logTransport.
For some reason it blocked our traffic, which caused the bad latency.