How to convert yaml configmap file to terraform - kubernetes

I am trying to integrate Kubewatch in a kubernetes cluster. The cluster was built using Terraform's kubernetes provider. How do I convert the data section of this configmap yaml file to terraform?
YAML
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubewatch
data:
  .kubewatch.yaml: |
    namespace: "default"
    handler:
      slack:
        token: xoxb-OUR-BOT-TOKEN
        channel: kubernetes-events
    resource:
      deployment: true
      replicationcontroller: false
      replicaset: false
      daemonset: false
      services: true
      pod: true
      secret: true
      configmap: false

While I haven't done very complex config maps, this should get you pretty close. Note that the data argument of kubernetes_config_map is a map of string keys to string values, so the nested kubewatch settings have to go in as a single string under the .kubewatch.yaml key (a heredoc works well for that):
resource "kubernetes_config_map" "example" {
  metadata {
    name = "kubewatch"
  }

  data = {
    ".kubewatch.yaml" = <<-EOT
      namespace: "default"
      handler:
        slack:
          token: xoxb-OUR-BOT-TOKEN
          channel: kubernetes-events
      resource:
        deployment: true
        replicationcontroller: false
        replicaset: false
        daemonset: false
        services: true
        pod: true
        secret: true
        configmap: false
    EOT
  }
}
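If you'd rather not embed raw YAML as a string, here is a minimal alternative sketch (not part of the original answer) that builds the same .kubewatch.yaml value with Terraform's built-in yamlencode() function, available in Terraform 0.12 and later:
resource "kubernetes_config_map" "kubewatch" {
  metadata {
    name = "kubewatch"
  }

  data = {
    # yamlencode() renders the HCL object below as a YAML document,
    # so the kubewatch settings can live as native Terraform values.
    ".kubewatch.yaml" = yamlencode({
      namespace = "default"
      handler = {
        slack = {
          token   = "xoxb-OUR-BOT-TOKEN"
          channel = "kubernetes-events"
        }
      }
      resource = {
        deployment            = true
        replicationcontroller = false
        replicaset            = false
        daemonset             = false
        services              = true
        pod                   = true
        secret                = true
        configmap             = false
      }
    })
  }
}
Keeping the settings as HCL values makes it easier to wire in variables (for example, the Slack token) instead of hard-coding them in the YAML string.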

Related

Vault Helm Chart not using Config from values.yaml

I'm trying to install HashiCorp Vault with the official Helm chart from HashiCorp, installing it through the Argo CD UI. I have a git repo with a values.yaml file that specifies some non-default config (for example, HA mode and AWS KMS unseal). When I set up the chart via the Argo CD web UI, I can point it to the values.yaml file and see the values I set in the parameters section of the app. However, when I deploy the chart, the config doesn't get applied. I checked the ConfigMap created by the chart, and it seems to follow the defaults despite my overrides. I'm thinking perhaps I'm using Argo CD wrong, as I'm fairly new to it, although it very clearly shows the overrides from my values.yaml in the app's parameters.
Here is the relevant section of my values.yaml
server:
  extraSecretEnvironmentVars:
    - envName: AWS_SECRET_ACCESS_KEY
      secretName: vault
      secretKey: AWS_SECRET_ACCESS_KEY
    - envName: AWS_ACCESS_KEY_ID
      secretName: vault
      secretKey: AWS_ACCESS_KEY_ID
    - envName: AWS_KMS_KEY_ID
      secretName: vault
      secretKey: AWS_KMS_KEY_ID
  ha:
    enabled: true
    replicas: 3
    apiAddr: https://myvault.com:8200
    clusterAddr: https://myvault.com:8201
    raft:
      enabled: true
      setNodeId: false
      config: |
        ui = true
        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        storage "raft" {
          path = "/vault/data"
        }
        service_registration "kubernetes" {}
        seal "awskms" {
          region = "us-west-2"
          kms_key_id = "$VAULT_KMS_KEY_ID"
        }
However, the deployed config looks like this
disable_mlock = true
ui = true
listener "tcp" {
  tls_disable = 1
  address = "[::]:8200"
  cluster_address = "[::]:8201"
  # Enable unauthenticated metrics access (necessary for Prometheus Operator)
  #telemetry {
  #  unauthenticated_metrics_access = "true"
  #}
}
storage "file" {
  path = "/vault/data"
}
# Example configuration for using auto-unseal, using Google Cloud KMS. The
# GKMS keys must already exist, and the cluster must have a service account
# that is authorized to access GCP KMS.
#seal "gcpckms" {
#  project    = "vault-helm-dev"
#  region     = "global"
#  key_ring   = "vault-helm-unseal-kr"
#  crypto_key = "vault-helm-unseal-key"
#}
# Example configuration for enabling Prometheus metrics in your config.
#telemetry {
#  prometheus_retention_time = "30s",
#  disable_hostname = true
#}
I've tried several changes to this config, such as setting the AWS_KMS_UNSEAL environment variable, which doesn't seem to get applied. I've also exec'd into the containers, and none of my environment variables seem to be set when I run printenv. I can't figure out why it's deploying the pods with the default config.
With the help of murtiko I figured this out. My indentation of the config block was off. It needs to be nested below the ha block. My working config looks like this:
global:
  enabled: true
server:
  extraSecretEnvironmentVars:
    - envName: AWS_REGION
      secretName: vault
      secretKey: AWS_REGION
    - envName: AWS_ACCESS_KEY_ID
      secretName: vault
      secretKey: AWS_ACCESS_KEY_ID
    - envName: AWS_SECRET_ACCESS_KEY
      secretName: vault
      secretKey: AWS_SECRET_ACCESS_KEY
    - envName: VAULT_AWSKMS_SEAL_KEY_ID
      secretName: vault
      secretKey: VAULT_AWSKMS_SEAL_KEY_ID
  ha:
    enabled: true
    config: |
      ui = true
      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }
      seal "awskms" {
      }
      storage "raft" {
        path = "/vault/data"
      }
    raft:
      enabled: true
      setNodeId: true
      config: |
        ui = true
        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }
        seal "awskms" {
        }
        storage "raft" {
          path = "/vault/data"
        }

Mount Kubernetes ConfigMap to Helm Chart values.yaml inside volume[ ] settings

Hello, I am trying to insert a Kubernetes ConfigMap into the cert-manager Helm chart. The Helm chart is configured with a values.yaml.
The needed ConfigMap is already defined with the corresponding data inside the same namespace as my Helm Chart.
resource "helm_release" "certmanager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
namespace = kubernetes_namespace.certmanager.metadata.0.name
version = local.helm_cert_manager_version
values = [
file("./config/cert-manager-values.yml")
]
}
# !! ConfigMap is defined with Terraform !! #
resource "kubernetes_config_map" "example" {
metadata {
name = "test-config"
namespace = kubernetes_namespace.certmanager.metadata.0.name
}
data = {
"test_ca" = "${data.google_secret_manager_secret_version.test_crt.secret_data}"
}
}
The data of the ConfigMap should be mounted to the path /etc/ssl/certs inside my Helm Chart.
I think down below is the right spot to mount the data?
...
volumes: []
volumeMounts: []
...
Do you have any idea how to mount that ConfigMap over /etc/ssl/certs within the cert-manager Chart?
Based on your question, there could be two things you could do:
Pre-populate the ./config/cert-manager-values.yml file with the values you want.
Use the templatefile [1] built-in function and pass the values dynamically.
In the first case, the changes to the file would probably have to be as follows:
...
volumes:
  - name: config-map-volume
    configMap:
      name: test-config
volumeMounts:
  - name: config-map-volume
    mountPath: /etc/ssl/certs
...
Make sure the indentation is correct since this is YAML. In the second case, you could do something like this in the helm_release resource:
resource "helm_release" "certmanager" {
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
namespace = kubernetes_namespace.certmanager.metadata.0.name
version = local.helm_cert_manager_version
values = [templatefile("./config/cert-manager-values.yml", {
config_map_name = kubernetes_config_map.example.metadata[0].name
volume_mount_path = "/etc/ssl/certs"
})]
}
In this case, you would also have to use template variables as placeholders inside of the cert-manager-values.yml file:
...
volumes:
  - name: config-map-volume
    configMap:
      name: ${config_map_name}
volumeMounts:
  - name: config-map-volume
    mountPath: ${volume_mount_path}
...
Note that the first option might not work as expected due to Terraform's parallelism, which tries to create as many resources at the same time as possible. If the ConfigMap is not created before the chart is applied, it might fail.
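If you hit that ordering problem with the first option, a minimal sketch of forcing the order with an explicit depends_on (using the resource names from the snippets above) would be:
resource "helm_release" "certmanager" {
  name       = "cert-manager"
  repository = "https://charts.jetstack.io"
  chart      = "cert-manager"
  namespace  = kubernetes_namespace.certmanager.metadata.0.name
  version    = local.helm_cert_manager_version

  values = [
    file("./config/cert-manager-values.yml")
  ]

  # Make sure the ConfigMap exists before Helm renders the chart,
  # even though the static values file does not reference it directly.
  depends_on = [kubernetes_config_map.example]
}
With the templatefile variant this is not needed, because referencing kubernetes_config_map.example.metadata[0].name already creates the dependency implicitly.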
[1] https://www.terraform.io/language/functions/templatefile

How to configure EKS ALB with Terraform

I'm having a hard time getting EKS to expose an IP address to the public internet. Do I need to set up the ALB myself or do you get that for free as part of the EKS cluster? If I have to do it myself, do I need to define it in the terraform template file or in the kubernetes object yaml?
Here's my EKS cluster defined in Terraform along with what I think are the required permissions.
// eks.tf
resource "aws_iam_role" "eks_cluster_role" {
  name = "${local.env_name}-eks-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "eks.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster_role.name
}

resource "aws_iam_role_policy_attachment" "eks-AmazonEKSVPCResourceController" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"
  role       = aws_iam_role.eks_cluster_role.name
}

resource "aws_kms_key" "eks_key" {
  description             = "EKS KMS Key"
  deletion_window_in_days = 7
  enable_key_rotation     = true

  tags = {
    Environment = local.env_name
    Service     = "EKS"
  }
}

resource "aws_kms_alias" "eks_key_alias" {
  target_key_id = aws_kms_key.eks_key.id
  name          = "alias/eks-kms-key-${local.env_name}"
}

resource "aws_eks_cluster" "eks_cluster" {
  name                      = "${local.env_name}-eks-cluster"
  role_arn                  = aws_iam_role.eks_cluster_role.arn
  enabled_cluster_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]

  vpc_config {
    subnet_ids = [aws_subnet.private_a.id, aws_subnet.private_b.id]
  }

  encryption_config {
    resources = ["secrets"]
    provider {
      key_arn = aws_kms_key.eks_key.arn
    }
  }

  tags = {
    Environment = local.env_name
  }
}

resource "aws_iam_role" "eks_node_group_role" {
  name = "${local.env_name}-eks-node-group"

  assume_role_policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Principal = {
          Service = "ec2.amazonaws.com"
        },
        Action = "sts:AssumeRole"
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_iam_role_policy_attachment" "eks-node-group-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks_node_group_role.name
}

resource "aws_eks_node_group" "eks_node_group" {
  instance_types  = var.node_group_instance_types
  node_group_name = "${local.env_name}-eks-node-group"
  node_role_arn   = aws_iam_role.eks_node_group_role.arn
  cluster_name    = aws_eks_cluster.eks_cluster.name
  subnet_ids      = [aws_subnet.private_a.id, aws_subnet.private_b.id]

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  // Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.
  // Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.
  depends_on = [
    aws_iam_role_policy_attachment.eks-node-group-AmazonEC2ContainerRegistryReadOnly,
    aws_iam_role_policy_attachment.eks-node-group-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.eks-node-group-AmazonEKSWorkerNodePolicy,
  ]
}
And here's my kubernetes object yaml:
# hello-kubernetes.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: paulbouwer/hello-kubernetes:1.9
          ports:
            - containerPort: 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  backend:
    serviceName: hello-kubernetes
    servicePort: 80
I've run terraform apply and the cluster is up and running. I've installed eksctl and kubectl and run kubectl apply -f hello-kubernetes.yaml. The pods, service, and ingress appear to be running fine.
$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
hello-kubernetes-6cb7cd595b-25bd9   1/1     Running   0          6h13m
hello-kubernetes-6cb7cd595b-lccdj   1/1     Running   0          6h13m
hello-kubernetes-6cb7cd595b-snwvr   1/1     Running   0          6h13m
$ kubectl get services
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
hello-kubernetes   LoadBalancer   172.20.102.37   <pending>     80:32086/TCP   6h15m
$ kubectl get ingresses
NAME            CLASS    HOSTS   ADDRESS   PORTS   AGE
hello-ingress   <none>   *                 80      3h45m
What am I missing and which file does it belong in?
You need to install the AWS Load Balancer Controller by following its installation instructions: first create the IAM role and permissions, which can be done with Terraform; then apply the Kubernetes YAML that installs the controller into your cluster, which can be done with Helm or kubectl.
You also need to be aware of the subnet tagging that is needed for, e.g., creating a public-facing or internal load balancer.
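As a rough sketch only (not from the original answer): the tag keys and chart coordinates below are the standard ones for the AWS Load Balancer Controller, the subnet and cluster references are taken from the question's Terraform, and the IAM/IRSA role required by the controller's install guide is assumed to already exist.
# Tag the cluster subnets so the controller knows where it may place load balancers.
# Private subnets get the internal-elb role tag; public subnets (not shown in the
# question) would get the kubernetes.io/role/elb tag instead.
resource "aws_ec2_tag" "private_a_internal_elb" {
  resource_id = aws_subnet.private_a.id
  key         = "kubernetes.io/role/internal-elb"
  value       = "1"
}

# Associate the subnet with the cluster (repeat both tags for aws_subnet.private_b).
resource "aws_ec2_tag" "private_a_cluster" {
  resource_id = aws_subnet.private_a.id
  key         = "kubernetes.io/cluster/${aws_eks_cluster.eks_cluster.name}"
  value       = "shared"
}

# Install the controller chart via the Terraform helm provider.
resource "helm_release" "aws_load_balancer_controller" {
  name       = "aws-load-balancer-controller"
  repository = "https://aws.github.io/eks-charts"
  chart      = "aws-load-balancer-controller"
  namespace  = "kube-system"

  set {
    name  = "clusterName"
    value = aws_eks_cluster.eks_cluster.name
  }
}
Note that the Ingress still has to be associated with the controller (via its ingress class), which the install guide covers.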
Usually the way to go is to put an ALB in front and redirect traffic to the EKS cluster, managing it with the ALB Ingress Controller. This ingress controller acts as the bridge between the cluster and your ALB; here is the AWS documentation, which is pretty straightforward:
EKS w/ALB
Another solution could be an NGINX ingress controller with an NLB, if the ALB doesn't suit your application's needs, as described in the following article:
NGINX w/NLB
This happened to me too: after all the setup, I was not able to see the ingress address. The best way to debug this issue is to check the logs of the ingress controller. You can do this by:
Getting the ingress controller pod name with: kubectl get po -n kube-system
Checking the logs of that pod with: kubectl logs <po_name> -n kube-system
This will point you to the exact reason why you are not seeing the address.
If you do not find any pod running with an ingress name, then you will have to create the ingress controller first.

Import dashboard with Helm using Sidecar for dashboards

I've exported a Grafana Dashboard (output is a json file) and now I would like to import it when I install Grafana (all automatic, with Helm and Kubernetes)
I just read this post about how to add a datasource, which uses the sidecar setup. In short, you need to create a values.yaml with
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  datasources:
    enabled: true
    label: grafana_datasource
And a ConfigMap which matches that label
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-datasource
  labels:
    grafana_datasource: '1'
data:
  datasource.yaml: |-
    apiVersion: 1
    datasources:
      - name: Prometheus
        type: prometheus
        access: proxy
        orgId: 1
        url: http://source-prometheus-server
OK, this works, so I tried to do something similar for dashboards and updated the values.yaml
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: false
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource
And the ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-grafana-dashboards
  labels:
    grafana_dashboard: '1'
data:
  custom-dashboards.json: |-
    {
      "annotations": {
        "list": [
          {
            ...
However, when I install Grafana this time and log in, there are no dashboards.
Any suggestions on what I'm doing wrong here?
sidecar:
  image: xuxinkun/k8s-sidecar:0.0.7
  imagePullPolicy: IfNotPresent
  dashboards:
    enabled: false
    # label that the configmaps with dashboards are marked with
    label: grafana_dashboard
  datasources:
    enabled: true
    label: grafana_datasource
In the above code, sidecar.dashboards.enabled should be set to true to get the dashboard sidecar enabled.
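Not part of the original answer, but if you happen to install Grafana through Terraform's helm provider (as in the other threads on this page), a minimal sketch of setting that same flag could look like this; the release name and chart repository here are assumptions:
resource "helm_release" "grafana" {
  name       = "grafana"
  repository = "https://grafana.github.io/helm-charts"
  chart      = "grafana"

  # Equivalent of sidecar.dashboards.enabled: true in values.yaml,
  # so the sidecar watches for ConfigMaps carrying the dashboard label.
  set {
    name  = "sidecar.dashboards.enabled"
    value = "true"
  }

  set {
    name  = "sidecar.dashboards.label"
    value = "grafana_dashboard"
  }
}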

Bad latency in GKE between Pods

We are seeing very strange behavior, with unacceptably high latency, for communication within a Kubernetes cluster (GKE).
The latency jumps between 600 ms and 1 s for an endpoint that performs a Memorystore get/store action and a CloudSQL query. The same setup running locally in our dev environment (although without k8s) does not show this kind of latency.
About our architecture:
We are running a k8s cluster on GKE, created with Terraform and service/deployment (YAML) files (I added those below).
We're running two Node APIs (Koa.js 2.5). One API is exposed to the public with an ingress and connects via a NodePort to the API pod.
The other API pod is privately reachable through an internal load balancer from Google. This API is connected to all the resources we need (CloudSQL, Cloud Storage).
Both APIs are also connected to a Memorystore (Redis).
The communication between those pods is secured with self-signed server/client certificates (which isn't the problem; we already removed them temporarily to test).
We checked the logs and saw that the request from the public API takes about 200 ms just to reach the private one.
Also, the response from the private API back to the public one took about 600 ms (measured from the point when the whole business logic of the private API had run through until we received that response back at the public API).
We're really out of things to try. We already connected all the Google Cloud resources to our local environment, which didn't show that kind of bad latency.
In a completely local setup the latency is only about 1/5 to 1/10 of what we see in the cloud setup.
We also tried to ping the private pod from the public one, which came in around 0.100 ms.
Do you have any ideas where we can investigate further?
Here is the Terraform script for our Google Cloud setup:
// Configure the Google Cloud provider
provider "google" {
  project = "${var.project}"
  region  = "${var.region}"
}

data "google_compute_zones" "available" {}

# Ensuring relevant service APIs are enabled in your project. Alternatively visit and enable the needed services
resource "google_project_service" "serviceapi" {
  service            = "serviceusage.googleapis.com"
  disable_on_destroy = false
}

resource "google_project_service" "sqlapi" {
  service            = "sqladmin.googleapis.com"
  disable_on_destroy = false
  depends_on         = ["google_project_service.serviceapi"]
}

resource "google_project_service" "redisapi" {
  service            = "redis.googleapis.com"
  disable_on_destroy = false
  depends_on         = ["google_project_service.serviceapi"]
}

# Create a VPC and a subnetwork in our region
resource "google_compute_network" "appnetwork" {
  name                    = "${var.environment}-vpn"
  auto_create_subnetworks = "false"
}

resource "google_compute_subnetwork" "network-with-private-secondary-ip-ranges" {
  name          = "${var.environment}-vpn-subnet"
  ip_cidr_range = "10.2.0.0/16"
  region        = "europe-west1"
  network       = "${google_compute_network.appnetwork.self_link}"

  secondary_ip_range {
    range_name    = "kubernetes-secondary-range-pods"
    ip_cidr_range = "10.60.0.0/16"
  }

  secondary_ip_range {
    range_name    = "kubernetes-secondary-range-services"
    ip_cidr_range = "10.70.0.0/16"
  }
}

# GKE cluster setup
resource "google_container_cluster" "primary" {
  name               = "${var.environment}-cluster"
  zone               = "${data.google_compute_zones.available.names[1]}"
  initial_node_count = 1
  description        = "Kubernetes Cluster"
  network            = "${google_compute_network.appnetwork.self_link}"
  subnetwork         = "${google_compute_subnetwork.network-with-private-secondary-ip-ranges.self_link}"
  depends_on         = ["google_project_service.serviceapi"]

  additional_zones = [
    "${data.google_compute_zones.available.names[0]}",
    "${data.google_compute_zones.available.names[2]}",
  ]

  master_auth {
    username = "xxxxxxx"
    password = "xxxxxxx"
  }

  ip_allocation_policy {
    cluster_secondary_range_name  = "kubernetes-secondary-range-pods"
    services_secondary_range_name = "kubernetes-secondary-range-services"
  }

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/trace.append"
    ]

    tags = ["kubernetes", "${var.environment}"]
  }
}

##################
# MySQL DATABASES
##################
resource "google_sql_database_instance" "core" {
  name             = "${var.environment}-sql-core"
  database_version = "MYSQL_5_7"
  region           = "${var.region}"
  depends_on       = ["google_project_service.sqlapi"]

  settings {
    # Second-generation instance tiers are based on the machine
    # type. See argument reference below.
    tier = "db-n1-standard-1"
  }
}

resource "google_sql_database_instance" "tenant1" {
  name             = "${var.environment}-sql-tenant1"
  database_version = "MYSQL_5_7"
  region           = "${var.region}"
  depends_on       = ["google_project_service.sqlapi"]

  settings {
    # Second-generation instance tiers are based on the machine
    # type. See argument reference below.
    tier = "db-n1-standard-1"
  }
}

resource "google_sql_database_instance" "tenant2" {
  name             = "${var.environment}-sql-tenant2"
  database_version = "MYSQL_5_7"
  region           = "${var.region}"
  depends_on       = ["google_project_service.sqlapi"]

  settings {
    # Second-generation instance tiers are based on the machine
    # type. See argument reference below.
    tier = "db-n1-standard-1"
  }
}

resource "google_sql_database" "core" {
  name     = "project_core"
  instance = "${google_sql_database_instance.core.name}"
}

resource "google_sql_database" "tenant1" {
  name     = "project_tenant_1"
  instance = "${google_sql_database_instance.tenant1.name}"
}

resource "google_sql_database" "tenant2" {
  name     = "project_tenant_2"
  instance = "${google_sql_database_instance.tenant2.name}"
}

##################
# MySQL USERS
##################
resource "google_sql_user" "core-user" {
  name     = "${var.sqluser}"
  instance = "${google_sql_database_instance.core.name}"
  host     = "cloudsqlproxy~%"
  password = "${var.sqlpassword}"
}

resource "google_sql_user" "tenant1-user" {
  name     = "${var.sqluser}"
  instance = "${google_sql_database_instance.tenant1.name}"
  host     = "cloudsqlproxy~%"
  password = "${var.sqlpassword}"
}

resource "google_sql_user" "tenant2-user" {
  name     = "${var.sqluser}"
  instance = "${google_sql_database_instance.tenant2.name}"
  host     = "cloudsqlproxy~%"
  password = "${var.sqlpassword}"
}

##################
# REDIS
##################
resource "google_redis_instance" "redis" {
  name               = "${var.environment}-redis"
  tier               = "BASIC"
  memory_size_gb     = 1
  depends_on         = ["google_project_service.redisapi"]
  authorized_network = "${google_compute_network.appnetwork.self_link}"
  region             = "${var.region}"
  location_id        = "${data.google_compute_zones.available.names[0]}"
  redis_version      = "REDIS_3_2"
  display_name       = "Redis Instance"
}

# The following outputs allow authentication and connectivity to the GKE Cluster.
output "client_certificate" {
  value = "${google_container_cluster.primary.master_auth.0.client_certificate}"
}

output "client_key" {
  value = "${google_container_cluster.primary.master_auth.0.client_key}"
}

output "cluster_ca_certificate" {
  value = "${google_container_cluster.primary.master_auth.0.cluster_ca_certificate}"
}
The service and deployment of the private API
# START CRUD POD
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: crud-pod
  labels:
    app: crud
spec:
  template:
    metadata:
      labels:
        app: crud
    spec:
      containers:
        - name: crud
          image: eu.gcr.io/dev-xxxxx/crud:latest-unstable
          ports:
            - containerPort: 3333
          env:
            - name: NODE_ENV
              value: develop
          volumeMounts:
            - [..MountedConfigFiles..]
        # [START proxy_container]
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=dev-xxxx:europe-west1:dev-sql-core=tcp:3306,dev-xxxx:europe-west1:dev-sql-tenant1=tcp:3307,dev-xxxx:europe-west1:dev-sql-tenant2=tcp:3308",
                    "-credential_file=xxxx"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - [..ConfigFilesVolumes..]
      # [END volumes]
# END CRUD POD
-------
# START CRUD SERVICE
apiVersion: v1
kind: Service
metadata:
  name: crud
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 10.60.0.0/16
  ports:
    - name: crud-port
      port: 3333
      protocol: TCP # default; can also specify UDP
  selector:
    app: crud # label selector for Pods to target
# END CRUD SERVICE
And the public one (including ingress)
# START SAPI POD
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sapi-pod
  labels:
    app: sapi
spec:
  template:
    metadata:
      labels:
        app: sapi
    spec:
      containers:
        - name: sapi
          image: eu.gcr.io/dev-xxx/sapi:latest-unstable
          ports:
            - containerPort: 8080
          env:
            - name: NODE_ENV
              value: develop
          volumeMounts:
            - [..MountedConfigFiles..]
      volumes:
        - [..ConfigFilesVolumes..]
# END SAPI POD
-------------
# START SAPI SERVICE
kind: Service
apiVersion: v1
metadata:
  name: sapi # Service name
spec:
  selector:
    app: sapi
  ports:
    - port: 8080
      targetPort: 8080
  type: NodePort
# END SAPI SERVICE
--------------
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-dev-static-ip
  labels:
    app: sapi-ingress
spec:
  backend:
    serviceName: sapi
    servicePort: 8080
  tls:
    - hosts:
        - xxxxx
      secretName: xxxxx
We fixed the issue by removing @google-cloud/logging-winston from our log transport.
For some reason it blocked our traffic, which caused the bad latency.