I configured GitLab Runner (version 11.4.2) to use the Kubernetes executor.
Here is my non-interactive register command:
gitlab-runner register \
--non-interactive \
--registration-token **** \
--url https://mygitlab.net/ \
--tls-ca-file /etc/gitlab-runner/certs/ca.crt \
--executor "kubernetes" \
--kubernetes-image-pull-secrets pull-internal \
--kubernetes-image-pull-secrets pull-external \
--name "kube-docker-runner" \
--tag-list "docker" \
--config "/etc/gitlab-runner/config.toml" \
--kubernetes-image "docker:latest" \
--kubernetes-helper-image "gitlab/gitlab-runner-helper:x86_64-latest" \
--output-limit 32768
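For reference, the registration above writes a config.toml along these lines (a sketch only: the token is masked, the TLS CA setting is left out, and the values simply mirror the flags):
concurrent = 1

[[runners]]
  name = "kube-docker-runner"
  url = "https://mygitlab.net/"
  token = "****"
  executor = "kubernetes"
  output_limit = 32768
  [runners.kubernetes]
    image = "docker:latest"
    helper_image = "gitlab/gitlab-runner-helper:x86_64-latest"
    image_pull_secrets = ["pull-internal", "pull-external"]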
It works fine and I can see the execution log in the GitLab UI.
In Kubernetes, I see the runner pod composed of two containers: helper and build. I expected to see the job execution logs by watching the build container's logs, but that is not the case. I would like to centralize these job execution logs with a tool like Fluent Bit by reading the container's stdout.
If I start docker:latest on its own (without a runner execution) in a pod deployed in the same Kubernetes cluster, I can see the logs on stdout. Any idea how to configure the stdout of the build container properly?
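For context, the centralization I have in mind is the usual node-level tailing (e.g. a Fluent Bit DaemonSet reading the kubelet's container log files), so the job output has to actually land on the build container's stdout. The checks below are a sketch; the pod name is a placeholder and the log path assumes the default kubelet layout:
kubectl logs -f <runner-pod-name> -c build     # what I expected to show the job log
kubectl logs -f <runner-pod-name> -c helper    # helper container, for comparison
# on the node itself: the files a log collector like Fluent Bit would tail
ls /var/log/containers/ | grep runner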
I pulled the latest ceph/daemon image from Docker Hub. I run the container like this:
docker run -d --net=host \
-v ~/ceph-container1/etc/ceph:/etc/ceph \
-v ~/ceph-container1/var/lib/ceph/:/var/lib/ceph/ \
-e MON_IP=192.168.0.20 \
-e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
ceph/daemon mon
The container exits immediately after it is created. I cannot use ceph -v or ceph -s to check whether the deployment is right or not. The same thing happens with OSD and MDS as well. Only the MGR container keeps running after creation.
My system is Arch Linux. Did I miss anything else needed to keep it running? Thanks.
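For what it's worth, the stopped container itself can still be inspected even though the daemon is not running (the container ID below is a placeholder taken from docker ps -a):
docker ps -a                                       # find the exited ceph/daemon container
docker logs <container-id>                         # mon usually prints why it gave up before exiting
docker inspect --format '{{.State.ExitCode}}' <container-id>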
I have created a minikube cluster. I have to run my automation script (pytest test cases) in minikube, and I need to pass a service account. How do I get it? Can anyone please help?
When starting minikube, add extra flags:
minikube start \
--extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key \
--extra-config=apiserver.service-account-key-file=/var/lib/minikube/certs/sa.pub \
--extra-config=apiserver.service-account-issuer=api \
--extra-config=apiserver.service-account-api-audiences=api,spire-server,nats \
--extra-config=apiserver.authorization-mode=Node,RBAC \
--extra-config=kubelet.authentication-token-webhook=true
Take a look: minikube-sa, kubernetes-psat.
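With the API server configured as above, a service account and a token for the test script can be obtained with plain kubectl. This is only a sketch: the account name, namespace, and role binding below are placeholders, and the token-from-secret path applies to older Kubernetes versions.
kubectl create serviceaccount pytest-sa
kubectl create clusterrolebinding pytest-sa-view \
  --clusterrole=view --serviceaccount=default:pytest-sa
# Newer clusters: request a short-lived token directly
kubectl create token pytest-sa
# Older clusters: read the token from the auto-created secret
kubectl get secret $(kubectl get sa pytest-sa -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d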
We have an app running on a Google Cloud Kubernetes cluster, and things are running fine in my testing scenario. We went to set up autoscaling for these pods: we'll probably never need to go to 0, but we want it to scale up to (for now) 20 pods, and back down, obviously. We are deploying using faas-cli. First, we tried:
faas-cli deploy --replace --update=false -f ./process-listing-image.yml \
--gateway=https://openfaas.ihouseprd.com \
--label "com.openfaas.scale.min=1" \
--label "com.openfaas.scale.max=20" \
--label "com.openfaas.scale.factor=5"
But that gave us 1 pod, and it never moved. Then it was suggested to use:
faas-cli deploy --replace --update=false -f ./process-listing-image.yml \
--gateway=https://openfaas.ihouseprd.com \
--label "com.openfaas.scale.min=0" \
--label "com.openfaas.scale.max=20" \
--label "com.openfaas.scale.factor=5"
But that still gave us only one pod. I most recently tried:
faas-cli deploy --replace --update=false -f ./process-listing-image.yml \
--gateway=https://openfaas.ihouseprd.com \
--label "com.openfaas.scale.min=5" \
--label "com.openfaas.scale.max=20" \
--label "com.openfaas.scale.factor=5"
Which produced 5 pods, but it hasn't scaled past that, despite there being thousands of requests waiting. Looking at the Cloud Console "Deployment Details" screen, I see the five pods, but I can't tell whether all 5 are working.
Any idea why these things aren't scaling?
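In case it helps diagnose, the equivalent checks from kubectl would be something like the following; they assume a stock faas-netes install (gateway and Prometheus/Alertmanager in the openfaas namespace, functions in openfaas-fn), which may not match our GKE setup exactly:
kubectl get deploy -n openfaas-fn                                    # current replica count per function
kubectl get pods -n openfaas                                         # gateway, prometheus, alertmanager all running?
kubectl logs -n openfaas deploy/gateway -c gateway | grep -i scale   # scaling decisions received by the gateway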
I need to run a Dataproc cluster with both the BigQuery and Cloud Storage connectors installed.
I use a variant of this script (because I have no access to the bucket used in the general one). Everything works fine, but when I run a job once the cluster is up and running, it always results in a "Task was not acquired" error.
I can fix this by simply restarting the Dataproc agent on every node, but I really need this to work properly so that I can run a job right after my cluster is created. It seems that this part of the script is not working properly:
# Restarts Dataproc Agent after successful initialization
# WARNING: this function relies on undocumented and not officially supported Dataproc Agent
# "sentinel" files to determine successful Agent initialization and is not guaranteed
# to work in the future. Use at your own risk!
restart_dataproc_agent() {
  # Because the Dataproc Agent should be restarted after initialization, we need to wait
  # until it creates a sentinel file that signals initialization completion (success or failure)
  while [[ ! -f /var/lib/google/dataproc/has_run_before ]]; do
    sleep 1
  done
  # If the Dataproc Agent didn't create a sentinel file that signals initialization
  # failure, then initialization succeeded and the agent should be restarted
  if [[ ! -f /var/lib/google/dataproc/has_failed_before ]]; then
    service google-dataproc-agent restart
  fi
}
export -f restart_dataproc_agent

# Schedule an asynchronous Dataproc Agent restart so it will use the updated connectors.
# It cannot be restarted synchronously because the Dataproc Agent should be restarted
# after its initialization, including init actions execution, has completed.
bash -c restart_dataproc_agent & disown
My questions here are:
How do I know that the initialization actions are done?
Do I have to restart the Dataproc agent on my newly created cluster's nodes, and if so, how do I do it properly?
EDIT:
Here is the command I use to create a cluster (using the 1.3 image version):
gcloud dataproc --region europe-west1 \
clusters create my-cluster \
--bucket my-bucket \
--subnet default \
--zone europe-west1-b \
--master-machine-type n1-standard-1 \
--master-boot-disk-size 50 \
--num-workers 2 \
--worker-machine-type n1-standard-2 \
--worker-boot-disk-size 100 \
--image-version 1.3 \
--scopes 'https://www.googleapis.com/auth/cloud-platform' \
--project my-project \
--initialization-actions gs://dataproc-initialization-actions/connectors/connectors.sh \
--metadata 'gcs-connector-version=1.9.6' \
--metadata 'bigquery-connector-version=0.13.6'
Also, please note that the connectors initialization script has since been fixed and works fine now, so I am using it, but I still have to restart the Dataproc agent manually to be able to run a job.
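For completeness, this is roughly how the manual restart looks today (a sketch: the node names follow the usual <cluster>-m / <cluster>-w-N convention, so verify them with gcloud compute instances list first):
for NODE in my-cluster-m my-cluster-w-0 my-cluster-w-1; do
  gcloud compute ssh "$NODE" --zone europe-west1-b -- sudo service google-dataproc-agent restart
done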
The Dataproc agent logs a "Custom initialization actions finished." message in the /var/log/google-dataproc-agent.0.log file after the initialization actions succeed.
No, you don't need to restart the Dataproc agent manually.
This issue is caused by the Dataproc agent service restart in the connectors initialization action and should be resolved by this PR.
As for knowing when the initialization actions are finished: you can check the cluster's status.state. If it is CREATING, the initialization actions are still being executed; if it is RUNNING, they are done!
Check here
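For example, a quick way to read that field (cluster name and region taken from the create command in the question):
gcloud dataproc clusters describe my-cluster \
  --region europe-west1 \
  --format='value(status.state)'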
I have created a Google Dataproc cluster, but I now need to install Presto as a new requirement. Presto is provided as an initialization action on Dataproc here; how can I run this initialization action after the cluster has been created?
Most init actions would probably run even after the cluster is created (though I haven't tried the Presto init action).
I like to run clusters describe to get the instance names, then run something like gcloud compute ssh <NODE> -- -T sudo bash -s < presto.sh for each node. Reference: How to use SSH to run a shell script on a remote machine?.
Notes:
Everything after the -- is passed as arguments to the normal ssh command.
The -T means don't try to create an interactive session (otherwise you'll get a warning like "Pseudo-terminal will not be allocated because stdin is not a terminal.").
I use "sudo bash" because init action scripts assume they're being run as root.
presto.sh must be a copy of the script on your local machine. You could alternatively SSH in and run gsutil cp gs://dataproc-initialization-actions/presto/presto.sh . && sudo bash presto.sh.
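Putting the above together, a minimal sketch of the per-node run (the zone and node names below are placeholders; take the real ones from clusters describe or gcloud compute instances list):
ZONE="<your-cluster-zone>"                                        # placeholder
for NODE in my-cluster-m my-cluster-w-0 my-cluster-w-1; do        # placeholder node names
  gcloud compute ssh "$NODE" --zone "$ZONE" -- -T sudo bash -s < presto.sh
done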
But @Kanji Hara is correct in general. Spinning up a new cluster is pretty fast/painless, so we advocate using initialization actions when creating a cluster.
You could use the initialization-actions parameter.
Ex:
gcloud dataproc clusters create $CLUSTERNAME \
--project $PROJECT \
--num-workers $WORKERS \
--bucket $BUCKET \
--master-machine-type $VMMASTER \
--worker-machine-type $VMWORKER \
--initialization-actions \
gs://dataproc-initialization-actions/presto/presto.sh \
--scopes cloud-platform
Maybe this script can help you: https://github.com/kanjih-ciandt/script-dataproc-datalab