Universal Splunk forwarder as sidecar not showing internal Splunk logs - Kubernetes

I have implemented a sidecar container to forward my main application's logs to Splunk, using the Splunk universal forwarder image.
After deployment, both the main application and the forwarder appear to be up and running, but no logs arrive in the Splunk index I specified.
To troubleshoot, I looked for splunkd.log (or any other Splunk internal log), but nothing is present under the /var/log path.
Can someone please help with how to enable these Splunk internal logs?
Piece of deployment.yaml:
- name: universalforwarder
  image: <docker-registry>/splunk/universalforwarder:latest
  imagePullPolicy: Always
  env:
    - name: SPLUNK_START_ARGS
      value: "--accept-license --answer-yes"
    - name: SPLUNK_USER
      value: splunk
    - name: SPLUNK_PASSWORD
      value: ****
    - name: SPLUNK_CMD
      value: add monitor /var/log
  resources:
    limits:
      memory: "312Mi"
      cpu: "300m"
    requests:
      memory: "80Mi"
      cpu: "80m"
  volumeMounts:
    - name: shared-logs
      mountPath: /var/log
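As a hedged first check (the pod name below is a placeholder; the container name comes from the snippet above): the forwarder writes its startup output to the sidecar's stdout, which is visible even though the mounted /var/log only holds the shared application logs.
kubectl logs <main-app-pod> -c universalforwarder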
Piece of configmap.yml:
outputs.conf: |-
  [tcpout]
  defaultGroup = idxm4d-bigdata

  [tcpout:idxm4d-bigdata]
  server = <servers>
  clientCert = /opt/splunkforwarder/etc/auth/ca.pem
  sslPassword = password
  sslVerifyServerCert = false
inputs.conf: |-
  [monitor:/bin/streaming/adapters/logs/output.log]

  [default]
  host = localhost
  index = krp_idx

  [monitor:/bin/streaming/adapters/logs/output.log]
  disabled = false
  sourcetype = log4j
  recursive = True
deploymentclients.conf: |-
  targetUri = <target-uri>
props.conf: |-
  [default]
  TRANSFORMS-routing=duplicate_data

  [telegraf]
  category = Metrics
  description = Telegraf Metrics
  pulldown_type = 1
  DATETIME_CONFIG =
  NO_BINARY_CHECK = true
  SHOULD_LINEMERGE = true
  disabled = false
  INDEXED_EXTRACTIONS = json
  KV_MODE = none
  TIMESTAMP_FIELDS = time
  TRANSFORMS-routing=duplicate_data
kind: ConfigMap
I am not able to view the splunkd logs to troubleshoot whether Splunk is picking up the files, or what the issue might be.
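A hedged way to reach the forwarder's internal logs (pod name and credentials are placeholders): they live under SPLUNK_HOME, which is /opt/splunkforwarder in the universal forwarder image, not under the mounted /var/log.
kubectl exec -it <main-app-pod> -c universalforwarder -- tail -n 100 /opt/splunkforwarder/var/log/splunk/splunkd.log
# Optionally confirm what is being monitored and where events are forwarded to:
kubectl exec -it <main-app-pod> -c universalforwarder -- /opt/splunkforwarder/bin/splunk list monitor -auth <admin-user>:<password>
kubectl exec -it <main-app-pod> -c universalforwarder -- /opt/splunkforwarder/bin/splunk list forward-server -auth <admin-user>:<password>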
Thanks

Related

Using k8s to deploy Jenkins with slave Pods performing tasks, how to put files or installation packages generated in the pod into the Jenkins PVC

I deployed Jenkins and run my builds in slave pods. I used the local PV mode of OpenEBS and deployed Jenkins and the volume on the same node, so data generated by the pod is shared into the Jenkins volume on the host. My task is to download the code and add some installation packages to it, but downloading the packages takes a long time. I would like the slave pod not to download them every time it is deployed.
#!/usr/bin/env groovy
// Groovy global variables
def PROJECT = "CI-code"
def WORKDIR_PATH = "/opt/status"
def DOWNLOAD_KUBE_DOWNLOAD_URL = "xxx/kube-1.19.0-v2.2.0-amd64.tar.gz"
def PVC_PATH = "/var/openebs/local/pvc-8e8f9830-9bdc-494d-ac45-19310cbda035/cloudybase"
pipeline {
  agent {
    kubernetes {
      yaml """
apiVersion: v1
kind: Pod
metadata:
  name: jenkins-slave
  namespace: devops-tools
spec:
  containers:
  - name: jnlp
    image: "xxx/google_containers/jenkins-slave-jdk11-wget:latest"
    imagePullPolicy: Always
    securityContext:
      privileged: true
      runAsUser: 0
    volumeMounts:
    - name: docker-cmd
      mountPath: /usr/bin/docker
    - name: docker-sock
      mountPath: /var/run/docker.sock
    - name: code
      mountPath: /home/jenkins/agent/workspace/${PROJECT}
  volumes:
  - name: docker-cmd
    hostPath:
      path: /usr/bin/docker
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
  - name: code
    hostPath:
      path: ${PVC_PATH}
"""
    }
  }
  stages {
    stage('Pull code') {
      steps {
        git branch: 'release-2.2.0', credentialsId: 'e17ba069-aa8b-4bfd-9c9e-3f3956914f09', url: 'xxx/deployworker.git'
      }
    }
    stage('Download dependency packages to the component package directory') {
      steps {
        sh """
        logging() {
          echo -e "\033[32m $(/bin/date)\033[0m" - $#
        }
        main () {
          logging Check ${DOWNLOAD_KUBE_DOWNLOAD_URL} installation package if download ...
          if [ ! -f "${PVC_PATH}/kube_status_code" ];then
            download_kube
          fi
        }
        download_kube () {
          DOWNLOAD_KUBE_NAME=$(echo ${DOWNLOAD_KUBE_DOWNLOAD_URL} | /bin/sed 's|.*/||')
          cd / && { /bin/curl -O ${DOWNLOAD_KUBE_DOWNLOAD_URL} ; cd -; }
          if [ "$?" -ne "0" ]; then
            echo "Failed"
            exit 1
          fi
          echo 'true' > ${W_PATH}/kube_status_code
          mkdir -p /home/jenkins/agent/workspace/deploywork/Middleware-choreography/kubeQ/kubeQ/
          tar xf /\${DOWNLOAD_KUBE_NAME} -C /home/jenkins/agent/workspace/deploywork/Middleware-choreography/kubeQ/kubeQ/
          if [ "$?" -ne "0" ]; then
            echo "Failed"
            exit 1
          fi
        }
        """
      }
    }
Hi zccharts, from the explanation above I understand that you are building your container from scratch every time, and that is what consumes the time. This can be solved by creating a base container image with all the packages pre-installed and using that image in your pipeline to deploy your application.
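A minimal sketch of that approach, assuming docker is available and reusing the placeholder image and download URL from the question (the :prebaked tag is just an example name):
cat > Dockerfile <<'EOF'
FROM xxx/google_containers/jenkins-slave-jdk11-wget:latest
# Download the large archive once, at image build time, instead of in every slave pod
RUN curl -fSL -o /opt/kube-1.19.0-v2.2.0-amd64.tar.gz xxx/kube-1.19.0-v2.2.0-amd64.tar.gz
EOF
docker build -t xxx/google_containers/jenkins-slave-jdk11-wget:prebaked .
docker push xxx/google_containers/jenkins-slave-jdk11-wget:prebaked
Then point the jnlp container in the pod template at the :prebaked image.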

Kubernetes Executor with Proxy Settings

I am using the Helm chart for GitLab Runner in a Kubernetes cluster and need to pass environment variables to my Kubernetes runner so that it can, for example, download content from the S3 cache. Unfortunately it does not work. Does anyone have a solution for me?
my values.yaml:
gitlabUrl: https://example.com
image: default-docker/gitlab-runner:alpine-v14.0.1
runnerRegistrationToken: XXXXXXXXXXXXX
imagePullPolicy: IfNotPresent
imagePullSecrets:
  - name: "k8runner-secret"
rbac:
  create: true
runners:
  config: |
    [[runners]]
      environment = ["http_proxy: http://webproxy.comp.db.de:8080", "https_proxy: http://webproxy:comp:db:de:8080", "no_proxy: \"localhost\""]
      [runners.kubernetes]
        image = "default-docker/ubuntu:16.04"
        cpu_request = "500m"
        memory_request = "1Gi"
        namespace = "gitlab"
      [runners.cache]
        Type = "s3"
        Path = "cachepath"
        Shared = true
        [runners.cache.s3]
          ServerAddress = "s3.amazonaws.com"
          BucketName = "exampleBucket"
          BucketLocation = "eu-west-1"
          Insecure = false
  tags: "test"
  locked: true
  name: "k8s-runner"
resources:
  limits:
    memory: 1Gi
    cpu: 500m
  requests:
    memory: 250m
    cpu: 50m
ENVIRONMENT:
  http_proxy: http://webproxy.comp.db.de:8080
  https_proxy: http://webproxy:comp:db:de:8080
  no_proxy: "localhost"
config.template.toml located on the pod:
[[runners]]
[runners.kubernetes]
image = "default-docker/ubuntu:16.04"
cpu_request = "500m"
memory_request = "1Gi"
namespace = "gitlab"
[runners.cache]
Type = "s3"
Path = "cachepath"
Shared = true
[runners.cache.s3]
ServerAddress = "s3.amazonaws.com"
BucketName = "exampleBucket"
BucketLocation = "eu-west-1"
Insecure = false
config.toml located on the pod:
concurrent = 10
check_interval = 30
log_level = "info"
listen_address = ':9252'
It looks to me like the environment variables are not being added. If I run the env command inside the pod, I also can't find them.
I am thankful for every helping hand
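A hedged check (assuming job pods run in the gitlab namespace configured above; the pod name is a placeholder): confirm whether the proxy variables actually reach a job pod. Note that in config.toml the environment list is normally written as KEY=VALUE strings, e.g. environment = ["http_proxy=http://webproxy.comp.db.de:8080", "no_proxy=localhost"].
kubectl -n gitlab get pods
kubectl -n gitlab exec <job-pod> -- env | grep -i _proxy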

How do I view the logs from a task in Argo?

I am using Argo and have a question about the workflow of workflows example. (https://github.com/argoproj/argo-workflows/blob/master/examples/workflow-of-workflows.yaml)
UPDATED YET AGAIN
As pointed out below, it is a task that I need to view. So my question is now - How do I view the logs from a task?
My workflow completes without error, but does not produce the expected output. I would like to look at the logs of one of the containers within one of the workflows within the overall workflow, but I cannot get the syntax right. I am using the following convention to get the logs from the relevant pod:
argo logs -n argo wf-name pod-name
and getting:
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.360917900Z time="2021-04-05T17:55:43.360Z" level=info msg="Starting Workflow Executor" executorType= version="{untagged 2021-04-05T17:09:35Z 79eb50b42e948466f82865b8a79756b57f9b66d9 untagged clean go1.15.7 gc linux/amd64}"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.362737800Z time="2021-04-05T17:55:43.362Z" level=info msg="Creating a docker executor"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.362815200Z time="2021-04-05T17:55:43.362Z" level=info msg="Executor (version: untagged, build_date: 2021-04-05T17:09:35Z) initialized (pod: argo/workflow-of-workflows-k8fm5-3824346685) with template:\n{\"name\":\"run\",\"inputs\":{\"parameters\":[{\"name\":\"runTemplate\",\"value\":\"demo1run.yaml\"}]},\"outputs\":{},\"metadata\":{},\"resource\":{\"action\":\"create\",\"manifest\":\"# Example of using a hard-wired artifact location from a HTTP URL.\\napiVersion: argoproj.io/v1alpha1\\nkind: Workflow\\nmetadata:\\n generateName: message-passing-1-\\n namespace: argo\\nspec:\\n serviceAccountName: argo\\n entrypoint: entrypoint\\n\\n templates:\\n\\n - name: echo\\n container:\\n image: weilidma/curl:0.4\\n command:\\n - \\\"/bin/bash\\\"\\n - \\\"-c\\\"\\n args:\\n - \\\"cat /mnt/raw/raw1.json \\u0026\\u0026 exit\\\"\\n volumeMounts:\\n - name: raw-p1-vol\\n mountPath: /mnt/raw\\n - name: log-p1-vol\\n mountPath: /mnt/logs/\\n\\n - name: process1\\n container:\\n image: weilidma/curl:0.4\\n command: \\n - \\\"/bin/bash\\\"\\n - \\\"-c\\\"\\n args: \\n - \\\"jq \\\\u0027[.data[].Platform |= test(\\\\u0022Healy\\\\u0022) | .[][] | select(.Platform == true) | {survey: .Survey, url: .\\\\u0022Data Access\\\\u0022}]\\\\u0027 /mnt/raw/raw1.json \\u003e /mnt/processed/filtered1.json \\u0026\\u0026 exit\\\"\\n volumeMounts:\\n - name: raw-p1-vol\\n mountPath: /mnt/raw/\\n - name: processed-p1-vol\\n mountPath: /mnt/processed/\\n - name: log-p1-vol\\n mountPath: /mnt/logs/\\n\\n - name: process2\\n container:\\n image: weilidma/curl:0.4\\n command: \\n - \\\"/bin/bash\\\"\\n - \\\"-c\\\"\\n args: \\n - \\\"jq \\\\u0027[.data[].Platform |= test(\\\\u0022Healy\\\\u0022) | .[][] | select(.Platform == true) | {survey: .Survey, url: .\\\\u0022Data Access\\\\u0022}]\\\\u0027 /mnt/raw/raw2.json \\u003e /mnt/processed/filtered2.json \\u0026\\u0026 exit\\\"\\n volumeMounts:\\n - name: raw-p2-vol\\n mountPath: /mnt/raw/\\n - name: processed-p2-vol\\n mountPath: /mnt/processed/\\n - name: log-p2-vol\\n mountPath: /mnt/logs/\\n\\n - name: join\\n container:\\n image: weilidma/curl:0.4\\n command: \\n - \\\"/bin/bash\\\"\\n - \\\"-c\\\"\\n args: \\n - \\\"jq -n --slurpfile f1 /mnt/processed1/filtered1.json --slurpfile f2 /mnt/processed2/filtered2.json -f .jq/join.jq --arg field \\\\u0022survey\\\\u0022 \\u003e /mnt/processed1/output.json \\u0026\\u0026 exit\\\"\\n volumeMounts:\\n - name: processed-p1-vol\\n mountPath: /mnt/processed1/\\n - name: processed-p2-vol\\n mountPath: /mnt/processed2/\\n - name: log-p1-vol\\n mountPath: /mnt/logs1/\\n - name: log-p2-vol\\n mountPath: /mnt/logs2/\\n\\n - name: egress\\n inputs:\\n parameters:\\n - name: ipaddr\\n container:\\n image: weilidma/curl:0.4\\n command: \\n - \\\"/bin/bash\\\"\\n - \\\"-c\\\"\\n args: \\n - \\\"cat /mnt/processed/output.json \\u0026\\u0026 exit\\\"\\n volumeMounts:\\n - name: processed-p1-vol\\n mountPath: /mnt/processed/\\n - name: log-p1-vol\\n mountPath: /mnt/logs/\\n\\n - dag:\\n tasks:\\n - name: echo\\n template: echo\\n dependencies:\\n - name: p1\\n template: process1\\n dependencies:\\n\\n - name: p2\\n template: process2\\n dependencies:\\n\\n - name: j\\n template: join\\n dependencies:\\n - p1\\n - p2\\n\\n - name: e\\n template: egress\\n arguments:\\n parameters: \\n - name: ipaddr \\n value: 'https://192.241.129.100'\\n dependencies:\\n - j\\n\\n name: entrypoint\\n\"}}"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.362847900Z time="2021-04-05T17:55:43.362Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.362942100Z time="2021-04-05T17:55:43.362Z" level=info msg="kubectl create -f /tmp/manifest.yaml -o json"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.837625500Z time="2021-04-05T17:55:43.837Z" level=info msg="Resource: argo/Workflow.argoproj.io/message-passing-1-t8749. SelfLink: /apis/argoproj.io/v1alpha1/namespaces/argo/workflows/message-passing-1-t8749"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.837636900Z time="2021-04-05T17:55:43.837Z" level=info msg="Starting SIGUSR2 signal monitor"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.837696900Z time="2021-04-05T17:55:43.837Z" level=info msg="No output parameters"
Based on this output, the container name seems to be argo/Workflow.argoproj.io/message-passing-1-t8749 but when I add that to the end I get an error. Here are the commands I have tried:
argo logs -n argo workflow-of-workflows-k8fm5 workflow-of-workflows-k8fm5-3824346685 -c argo/Workflow.argoproj.io/message-passing-1-t8749
or
argo logs -n argo workflow-of-workflows-k8fm5 workflow-of-workflows-k8fm5-3824346685 -c message-passing-1-t8749
Thanks to Alex of ArgoProj!
Here is a command I did not know:
kubectl get workflow
will list (surprise) workflows! From there, I could see the individual workflows embedded into the larger workflow.
The default container names on an Argo Workflows pod are init, main, and wait.
I'm not sure what message-passing-1-t8749 refers to, but it might be the "step/task name."
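For reference, a hedged sketch of the commands this leads to (the child workflow name is the one from the output above; <pod-name> is a placeholder):
kubectl -n argo get workflows
argo -n argo logs message-passing-1-t8749
argo -n argo logs message-passing-1-t8749 <pod-name> -c main
kubectl -n argo logs <pod-name> -c main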

airflow kubernetes not reading pod_template_file

I am running Airflow with k8s executor.
I have everything set up under the [kubernetes] section and things are working fine. However, I would prefer to use a pod file for the worker.
So I generated a pod.yaml from one of the worker containers that spins up.
I have placed this file in a location accessible to the scheduler pod, something like
/opt/airflow/yamls/workerpod.yaml
But when I try to specify this file in the pod_template_file parameter, I get these errors:
[2020-03-02 22:12:24,115] {pod_launcher.py:84} ERROR - Exception when attempting to create Namespaced Pod.
Traceback (most recent call last):
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/airflow/contrib/kubernetes/pod_launcher.py", line 81, in run_pod_async
resp = self._client.create_namespaced_pod(body=req, namespace=pod.namespace, **kwargs)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 6115, in create_namespaced_pod
(data) = self.create_namespaced_pod_with_http_info(namespace, body, **kwargs)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/apis/core_v1_api.py", line 6206, in create_namespaced_pod_with_http_info
collection_formats=collection_formats)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 334, in call_api
_return_http_data_only, collection_formats, _preload_content, _request_timeout)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 168, in __call_api
_request_timeout=_request_timeout)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 377, in request
body=body)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/rest.py", line 266, in POST
body=body)
File "/opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/kubernetes/client/rest.py", line 222, in request
raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'ab2bc6dc-96f9-4014-8a08-7dae6e008aad', 'Cache-Control': 'no-store', 'Content-Type': 'application/json', 'Date': 'Mon, 02 Mar 2020 22:12:24 GMT', 'Content-Length': '660'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"examplebashoperatorrunme0-c9ca5d619bc54bf2a456e133ad79dd00\" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{0}: 0 is not an allowed group spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000040000, 1000049999] spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: running with the root UID is forbidden]","reason":"Forbidden","details":{"name":"examplebashoperatorrunme0-c9ca5d619bc54bf2a456e133ad79dd00","kind":"pods"},"code":403}
[2020-03-02 22:12:24,141] {kubernetes_executor.py:863} WARNING - ApiException when attempting to run task, re-queueing. Message: pods "examplebashoperatorrunme0-c9ca5d619bc54bf2a456e133ad79dd00" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{0}: 0 is not an allowed group spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: must be in the ranges: [1000040000, 1000049999] spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 0: running with the root UID is forbidden]
Just to clarify, the pod.yaml file was generated from the same running worker container that is created from the [kubernetes] section of airflow.cfg, which works just fine. The run-as user is correct and the service account is correct, but I am still getting this error.
I am unsure whether I should place this file relative to where I run kubectl apply.
Since it is referenced in airflow.cfg, I didn't think that would be the case; rather, it should be accessible from within the scheduler container.
One strange thing I noticed: even though I have specified and seem to be using the KubernetesExecutor, when the individual worker pods come up they say LocalExecutor. That is something I changed to KubernetesExecutor in the workerpod.yaml file.
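Regarding the 403 above, a hedged OpenShift-specific check (namespace name is a placeholder) to see which UID range the project actually allows and what the nonroot SCC permits:
oc get namespace <airflow-namespace> -o yaml | grep sa.scc.uid-range
oc get scc nonroot -o yaml | grep -A 3 runAsUser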
Here is the pod YAML file:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: nonroot
  labels:
    app: airflow-worker
    kubernetes_executor: "True"
  name: airflow-worker
  # namespace: airflow
spec:
  affinity: {}
  containers:
    - env:
        - name: AIRFLOW_HOME
          value: /opt/airflow
        - name: AIRFLOW__CORE__EXECUTOR
          value: KubernetesExecutor
          #value: LocalExecutor
        - name: AIRFLOW__CORE__DAGS_FOLDER
          value: /opt/airflow/dags
        - name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
          valueFrom:
            secretKeyRef:
              key: MYSQL_CONN_STRING
              name: db-secret
      image: ourrepo.example.com/airflow-lab:latest
      imagePullPolicy: IfNotPresent
      name: base
      # resources:
      #   limits:
      #     cpu: "1"
      #     memory: 1Gi
      #   requests:
      #     cpu: 400m
      #     memory: 1Gi
      securityContext:
        capabilities:
          drop:
            - KILL
            - MKNOD
            - SETGID
            - SETUID
      volumeMounts:
        - mountPath: /opt/airflow/dags
          name: airflow-dags
          readOnly: true
          subPath: airflow/dags
        - mountPath: /opt/airflow/logs
          name: airflow-logs
        - mountPath: /opt/airflow/airflow.cfg
          name: airflow-config
          readOnly: true
          subPath: airflow.cfg
        # - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        #   name: airflow-cluster-access-token-5228g
        #   readOnly: true
  dnsPolicy: ClusterFirst
  # imagePullSecrets:
  #   - name: airflow-cluster-access-dockercfg-85twh
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    # fsGroup: 0
    runAsUser: 1001
    seLinuxOptions:
      level: s0:c38,c12
  serviceAccount: airflow-cluster-access
  serviceAccountName: airflow-cluster-access
  # tolerations:
  #   - effect: NoSchedule
  #     key: node.kubernetes.io/memory-pressure
  #     operator: Exists
  volumes:
    - name: airflow-dags
      persistentVolumeClaim:
        claimName: ucdagent
    - emptyDir: {}
      name: airflow-logs
    - configMap:
        defaultMode: 420
        name: airflow-config
      name: airflow-config
    # - name: airflow-cluster-access-token-5228g
    #   secret:
    #     defaultMode: 420
    #     secretName: airflow-cluster-access-token-5228g
Here is the working kubernetes config from airflow.cfg
[kubernetes]
#pod_template_file = /opt/airflow/yamls/workerpod.yaml
dags_in_image = False
worker_container_repository = ${AIRFLOW_IMAGE_NAME}
worker_container_tag = ${AIRFLOW_IMAGE_TAG}
worker_container_image_pull_policy = IfNotPresent
delete_worker_pods = False
in_cluster = true
namespace = ${AIRFLOW_NAMESPACE}
airflow_configmap = airflow-config
run_as_user = 1001
dags_volume_subpath = airflow/dags
dags_volume_claim = ucdagent
worker_service_account_name = airflow-cluster-access
[kubernetes_secrets]
AIRFLOW__CORE__SQL_ALCHEMY_CONN = db-secret=MYSQL_CONN_STRING
UPDATE: My Airflow version is 1.10.7. I am guessing this is a newer parameter. I am trying to find out whether this is currently an empty config reference or whether it has been implemented in the latest release, which is currently 1.10.9.
UPDATE: This parameter has not been implemented as of 1.10.9.
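A hedged way to confirm this from inside the cluster (the scheduler pod name is a placeholder; the site-packages path is the one from the traceback above): check the installed version and whether its code references the option at all.
kubectl exec -it <scheduler-pod> -- airflow version
kubectl exec -it <scheduler-pod> -- grep -rl pod_template_file /opt/rh/rh-python36/root/usr/lib/python3.6/site-packages/airflow/ || echo "pod_template_file is not referenced by this Airflow install"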

Bad latency in GKE between Pods

We are seeing very strange behavior with unacceptably high latency for communication within a Kubernetes cluster (GKE).
The latency jumps between 600 ms and 1 s for an endpoint that performs a Memorystore get/store action and a CloudSQL query. The same setup running locally in the dev environment (although without k8s) does not show this kind of latency.
About our architecture:
We are running a k8s cluster on GKE using terraform and service / deployment (yaml) files for the creation (I added those below).
We're running two Node APIs (Koa.js 2.5). One API is exposed to the public with an Ingress and connects via a NodePort to the API pod.
The other API pod is private and reachable through an internal load balancer from Google. This API is connected to all the resources we need (CloudSQL, Cloud Storage).
Both APIs are also connected to a Memorystore (Redis).
The communication between those pods is secured with self-signed server/client certificates (which isn't the problem, we already removed it temporarily to test).
We checked the logs and saw that the request from the public API to the private one takes about 200 ms just to reach it.
Also, the response from the private API back to the public one took about 600 ms (measured from the point when the whole business logic of the private API finished until we received the response back at the public API).
We're really out of things to try... We already connected all the Google Cloud resources to our local environment which didn't show that kind of bad latency.
In a complete local setup the latency is only about 1/5 to 1/10 of what we see in the cloud setup.
We also tried to ping the private pod from the public one, which was in the 0.100 ms range.
Do you have any ideas where we can investigate further?
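One hedged diagnostic sketch (pod name and endpoint path are placeholders; the crud service name and port 3333 are taken from the manifests below): compare curl's timing breakdown from inside the public API pod against the raw ping already measured, to see whether the time is spent in connection setup or inside the private API itself.
kubectl exec -it <public-api-pod> -- curl -s -o /dev/null -w "dns=%{time_namelookup} connect=%{time_connect} ttfb=%{time_starttransfer} total=%{time_total}\n" http://crud:3333/<some-endpoint>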
Here is the Terraform script for our Google Cloud setup:
// Configure the Google Cloud provider
provider "google" {
project = "${var.project}"
region = "${var.region}"
}
data "google_compute_zones" "available" {}
# Ensuring relevant service APIs are enabled in your project. Alternatively visit and enable the needed services
resource "google_project_service" "serviceapi" {
service = "serviceusage.googleapis.com"
disable_on_destroy = false
}
resource "google_project_service" "sqlapi" {
service = "sqladmin.googleapis.com"
disable_on_destroy = false
depends_on = ["google_project_service.serviceapi"]
}
resource "google_project_service" "redisapi" {
service = "redis.googleapis.com"
disable_on_destroy = false
depends_on = ["google_project_service.serviceapi"]
}
# Create a VPC and a subnetwork in our region
resource "google_compute_network" "appnetwork" {
name = "${var.environment}-vpn"
auto_create_subnetworks = "false"
}
resource "google_compute_subnetwork" "network-with-private-secondary-ip-ranges" {
name = "${var.environment}-vpn-subnet"
ip_cidr_range = "10.2.0.0/16"
region = "europe-west1"
network = "${google_compute_network.appnetwork.self_link}"
secondary_ip_range {
range_name = "kubernetes-secondary-range-pods"
ip_cidr_range = "10.60.0.0/16"
}
secondary_ip_range {
range_name = "kubernetes-secondary-range-services"
ip_cidr_range = "10.70.0.0/16"
}
}
# GKE cluster setup
resource "google_container_cluster" "primary" {
name = "${var.environment}-cluster"
zone = "${data.google_compute_zones.available.names[1]}"
initial_node_count = 1
description = "Kubernetes Cluster"
network = "${google_compute_network.appnetwork.self_link}"
subnetwork = "${google_compute_subnetwork.network-with-private-secondary-ip-ranges.self_link}"
depends_on = ["google_project_service.serviceapi"]
additional_zones = [
"${data.google_compute_zones.available.names[0]}",
"${data.google_compute_zones.available.names[2]}",
]
master_auth {
username = "xxxxxxx"
password = "xxxxxxx"
}
ip_allocation_policy {
cluster_secondary_range_name = "kubernetes-secondary-range-pods"
services_secondary_range_name = "kubernetes-secondary-range-services"
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/trace.append"
]
tags = ["kubernetes", "${var.environment}"]
}
}
##################
# MySQL DATABASES
##################
resource "google_sql_database_instance" "core" {
name = "${var.environment}-sql-core"
database_version = "MYSQL_5_7"
region = "${var.region}"
depends_on = ["google_project_service.sqlapi"]
settings {
# Second-generation instance tiers are based on the machine
# type. See argument reference below.
tier = "db-n1-standard-1"
}
}
resource "google_sql_database_instance" "tenant1" {
name = "${var.environment}-sql-tenant1"
database_version = "MYSQL_5_7"
region = "${var.region}"
depends_on = ["google_project_service.sqlapi"]
settings {
# Second-generation instance tiers are based on the machine
# type. See argument reference below.
tier = "db-n1-standard-1"
}
}
resource "google_sql_database_instance" "tenant2" {
name = "${var.environment}-sql-tenant2"
database_version = "MYSQL_5_7"
region = "${var.region}"
depends_on = ["google_project_service.sqlapi"]
settings {
# Second-generation instance tiers are based on the machine
# type. See argument reference below.
tier = "db-n1-standard-1"
}
}
resource "google_sql_database" "core" {
name = "project_core"
instance = "${google_sql_database_instance.core.name}"
}
resource "google_sql_database" "tenant1" {
name = "project_tenant_1"
instance = "${google_sql_database_instance.tenant1.name}"
}
resource "google_sql_database" "tenant2" {
name = "project_tenant_2"
instance = "${google_sql_database_instance.tenant2.name}"
}
##################
# MySQL USERS
##################
resource "google_sql_user" "core-user" {
name = "${var.sqluser}"
instance = "${google_sql_database_instance.core.name}"
host = "cloudsqlproxy~%"
password = "${var.sqlpassword}"
}
resource "google_sql_user" "tenant1-user" {
name = "${var.sqluser}"
instance = "${google_sql_database_instance.tenant1.name}"
host = "cloudsqlproxy~%"
password = "${var.sqlpassword}"
}
resource "google_sql_user" "tenant2-user" {
name = "${var.sqluser}"
instance = "${google_sql_database_instance.tenant2.name}"
host = "cloudsqlproxy~%"
password = "${var.sqlpassword}"
}
##################
# REDIS
##################
resource "google_redis_instance" "redis" {
name = "${var.environment}-redis"
tier = "BASIC"
memory_size_gb = 1
depends_on = ["google_project_service.redisapi"]
authorized_network = "${google_compute_network.appnetwork.self_link}"
region = "${var.region}"
location_id = "${data.google_compute_zones.available.names[0]}"
redis_version = "REDIS_3_2"
display_name = "Redis Instance"
}
# The following outputs allow authentication and connectivity to the GKE Cluster.
output "client_certificate" {
value = "${google_container_cluster.primary.master_auth.0.client_certificate}"
}
output "client_key" {
value = "${google_container_cluster.primary.master_auth.0.client_key}"
}
output "cluster_ca_certificate" {
value = "${google_container_cluster.primary.master_auth.0.cluster_ca_certificate}"
}
The service and deployment of the private API
# START CRUD POD
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: crud-pod
  labels:
    app: crud
spec:
  template:
    metadata:
      labels:
        app: crud
    spec:
      containers:
        - name: crud
          image: eu.gcr.io/dev-xxxxx/crud:latest-unstable
          ports:
            - containerPort: 3333
          env:
            - name: NODE_ENV
              value: develop
          volumeMounts:
            - [..MountedConfigFiles..]
        # [START proxy_container]
        - name: cloudsql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.11
          command: ["/cloud_sql_proxy",
                    "-instances=dev-xxxx:europe-west1:dev-sql-core=tcp:3306,dev-xxxx:europe-west1:dev-sql-tenant1=tcp:3307,dev-xxxx:europe-west1:dev-sql-tenant2=tcp:3308",
                    "-credential_file=xxxx"]
          volumeMounts:
            - name: cloudsql-instance-credentials
              mountPath: /secrets/cloudsql
              readOnly: true
        # [END proxy_container]
      # [START volumes]
      volumes:
        - name: cloudsql-instance-credentials
          secret:
            secretName: cloudsql-instance-credentials
        - [..ConfigFilesVolumes..]
      # [END volumes]
# END CRUD POD
-------
# START CRUD SERVICE
apiVersion: v1
kind: Service
metadata:
  name: crud
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 10.60.0.0/16
  ports:
    - name: crud-port
      port: 3333
      protocol: TCP # default; can also specify UDP
  selector:
    app: crud # label selector for Pods to target
# END CRUD SERVICE
And the public one (including ingress)
# START SAPI POD
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sapi-pod
  labels:
    app: sapi
spec:
  template:
    metadata:
      labels:
        app: sapi
    spec:
      containers:
        - name: sapi
          image: eu.gcr.io/dev-xxx/sapi:latest-unstable
          ports:
            - containerPort: 8080
          env:
            - name: NODE_ENV
              value: develop
          volumeMounts:
            - [..MountedConfigFiles..]
      volumes:
        - [..ConfigFilesVolumes..]
# END SAPI POD
-------------
# START SAPI SERVICE
kind: Service
apiVersion: v1
metadata:
  name: sapi # Service name
spec:
  selector:
    app: sapi
  ports:
    - port: 8080
      targetPort: 8080
  type: NodePort
# END SAPI SERVICE
--------------
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: api-dev-static-ip
  labels:
    app: sapi-ingress
spec:
  backend:
    serviceName: sapi
    servicePort: 8080
  tls:
    - hosts:
        - xxxxx
      secretName: xxxxx
We fixed the issue by removing @google-cloud/logging-winston from our log transport.
For some reason it blocked our traffic, which caused the bad latency.
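For anyone hitting the same symptom, a minimal sketch of testing that hypothesis (assuming the transport was pulled in via npm):
npm uninstall @google-cloud/logging-winston
# rebuild and redeploy the affected API image, then re-measure the endpoint latency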