I have set up Datalab to run on a Dataproc master node using the Datalab initialisation action:
gcloud dataproc clusters create <CLUSTER_NAME> \
--initialization-actions gs://<GCS_BUCKET>/datalab/datalab.sh \
--scopes cloud-platform
This has historically worked OK. However, as of 30.5 I can no longer get any code to run, however simple. I just get the "Running" progress bar: no timeouts, no error messages. How can I debug this?
I just created a cluster and it seemed to work for me.
Just seeing "Running" usually means that there is not enough room in the cluster to schedule a Spark Application. Datalab loads PySpark when Python loads and that creates a YARN application. Any code will block until the YARN application is scheduled.
On the default 2-node n1-standard-4 worker cluster with the default configs, there can only be one Spark application. You should be able to fit two notebooks by setting --properties spark.yarn.am.memory=1g or by using a larger cluster, but you will still eventually hit a limit on running notebooks per cluster.
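For example, a sketch of the same cluster creation with the smaller AM memory (the property value is only a starting point and may need tuning for your machine types):
gcloud dataproc clusters create <CLUSTER_NAME> \
    --initialization-actions gs://<GCS_BUCKET>/datalab/datalab.sh \
    --properties spark.yarn.am.memory=1g \
    --scopes cloud-platform
To confirm the diagnosis, you can also SSH to the master node and run yarn application -list; a notebook whose YARN application is stuck in the ACCEPTED state is waiting for cluster resources.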
Related
I have a few Dataproc clusters with the same properties enabled. On cluster "A" I am able to get the jobs from the "dataproc.googleapis.com" API, but on cluster "B" I am not able to get the Dataproc jobs. The only difference I see between the clusters is the initialization. We have to get the Dataproc jobs and populate them into a table. Any thoughts on other reasons for this? Thanks.
Cluster A Initialization actions:
Pip install,
Configure Pip,
Purge yarn locals,
cloud sql proxy,
Conda-install-python,
copy-vennv-cron-multiversion
Cluster B Initialization actions:
Pip install,
Configure Pip,
Purge yarn locals,
cloud sql proxy
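One sanity check worth running is to query the same jobs endpoint for both clusters via gcloud and compare the responses (the cluster names and region below are placeholders):
gcloud dataproc jobs list --cluster=cluster-a --region=us-central1
gcloud dataproc jobs list --cluster=cluster-b --region=us-central1
If cluster B returns an error rather than an empty list, the error message should help narrow down the cause.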
Once we increase the load using a JMeter client, my deployed service is interrupted and the GCP/GKE console says:
Upgrading cluster master
The values shown below are going to change soon.
And my kubectl client throws this error during the upgrade:
Unable to connect to the server: dial tcp 35.236.238.66:443: connectex: No connection could be made because the target machine actively refused it.
How can I stop this upgrade or prevent my service from being interrupted? If the service is interrupted then there is no benefit to this autoscaling. I am new to GKE; please let me know if I am missing any configuration or parameter here.
I am using this command to create my cluster:
gcloud container clusters create ajeet-gke --zone us-east4-b --node-locations us-east4-b --machine-type n1-standard-8 --num-nodes 1 --enable-autoscaling --min-nodes 4 --max-nodes 16
It is not upgrading the k8s version, because it works fine with a smaller load; but as I increase the load, the cluster starts upgrading the master. So it looks like the master is resizing itself for more nodes. After the upgrade I can see more nodes on the GCP console. https://github.com/terraform-providers/terraform-provider-google/issues/3385
The command below says autoscaling is not enabled on the instance group:
> gcloud compute instance-groups managed list
NAME                                   AUTOSCALED  LOCATION    SCOPE  ---
ajeet-gke-cluster-default-pool-4***0   no          us-east4-b  zone   ---
Workaround
Sorry, I forgot to update it here. I found a workaround to fix it: after splitting the cluster creation command into two steps, the cluster autoscales without restarting the master node:
gcloud container clusters create ajeet-ggs --zone us-east4-b --node-locations us-east4-b --machine-type n1-standard-8 --num-nodes 1
gcloud container clusters update ajeet-ggs --enable-autoscaling --min-nodes 1 --max-nodes 10 --zone us-east4-b --node-pool default-pool
To prevent this you should always create your cluster with the cluster version hardcoded to the latest available version.
See the documentation: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#master
This means that Google manages the master: if your master is not up to date, it will be updated to the latest version, which allows Google to limit the number of versions it currently has to manage. https://cloud.google.com/kubernetes-engine/docs/concepts/regional-clusters
Now, why do you have an interruption of service during the update? Because you are in zonal mode with only one master. To prevent this you should use regional cluster mode with more than one master, which allows for a clean rolling update.
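A sketch combining both suggestions, a pinned version and a regional cluster (the version string, region, and sizes below are placeholders, not recommendations):
gcloud container clusters create ajeet-gke \
    --region us-east4 \
    --cluster-version 1.13.7-gke.8 \
    --machine-type n1-standard-8 \
    --num-nodes 1
You can list the versions currently offered in your region with gcloud container get-server-config --region us-east4 and pass the latest one to --cluster-version.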
The master won't resize the nodes unless the autoscaling feature is enabled on them.
As mentioned in the above answer, this is a feature at the node-pool level. Looking at the description of the issue, it does seem like autoscaling is enabled on your node pool, and GKE's cluster autoscaler automatically resizes clusters based on the demands of the workloads you want to run (i.e. when there are pods that cannot be scheduled due to resource shortages such as CPU).
Additionally, Kubernetes cluster autoscaling does not use the Managed Instance Group autoscaler. It runs a cluster-autoscaler controller on the Kubernetes master that uses Kubernetes-specific signals to scale your nodes.
It is therefore highly recommended not to use Compute Engine's autoscaling feature (or to rely on the autoscaling status shown by the MIG) on instance groups created by Kubernetes Engine.
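If you want to see what the GKE cluster autoscaler itself is doing instead, one option (assuming your GKE version exposes the autoscaler status ConfigMap, as recent versions do) is:
kubectl get configmap cluster-autoscaler-status -n kube-system -o yaml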
When I am trying to delete a Dataproc cluster in Google Cloud Platform, I get the error below:
Failed to stop job b021d29d-acc9-409d-8fca-52363076a63c Cluster not found
Could anyone help?
I'm guessing you are trying to delete the cluster via the Dataproc Clusters UI. In that case the problem could be a bug with the UI itself which always sets the cluster region argument to 'global'. If your cluster region is not set to 'global' you'll get the 'Cluster not found' error.
The solution is to use the gcloud command-line tool:
gcloud dataproc clusters delete NAME [--async] [--region=REGION] [GCLOUD_WIDE_FLAG …]
ref: https://cloud.google.com/sdk/gcloud/reference/dataproc/clusters/delete
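For example, for a cluster in a specific region (the cluster name and region here are illustrative):
gcloud dataproc clusters delete my-cluster --region=us-east1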
I'm wondering about a graceful way to reduce the number of nodes in a Kubernetes cluster on GKE.
I have some nodes, each of which has some pods watching a shared job queue and executing jobs. I also have a script which monitors the length of the job queue and increases the number of instances when the length exceeds a threshold by executing the gcloud compute instance-groups managed resize command, and it works OK.
But I don't know a graceful way to reduce the number of instances when the length falls below the threshold.
Is there any good way to stop the pods working on the terminating instance before the instance gets terminated? Or any other good practice?
Note
Each job can take between 30m and 1h
It is acceptable if a job gets executed more than once (in the worst case...)
I think the best approach is, instead of using a pod to run your tasks, to use the Kubernetes Job object. That way, when the task is completed, the Job terminates the container. You would only need a small pod that could initiate Kubernetes Jobs based on the queue.
The more kube Jobs that get created, the more resources will be consumed, and the cluster autoscaler will see that it needs to add more nodes. A kube Job needs to run to completion; even if it gets terminated, it will get rescheduled to complete.
There is no direct information in the GKE docs about whether a downsize will happen while a Job is running on a node, but the stipulation seems to be that if a pod can easily be moved to another node and the resources are under-utilized, it will drain the node.
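As a rough sketch, the queue-watcher pod could submit something like the following Job for each queue item (the image, command, and names are hypothetical placeholders):
cat <<EOF | kubectl create -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: process-item-42
spec:
  template:
    spec:
      containers:
      - name: worker
        # hypothetical worker image and task command
        image: gcr.io/my-project/queue-worker
        command: ["./run-task.sh", "--item", "42"]
      restartPolicy: OnFailure
EOF
Because the Job is tracked by the control plane, a task killed during a node drain will be retried elsewhere, which matches your tolerance for a job occasionally running more than once.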
References
https://cloud.google.com/container-engine/docs/cluster-autoscaler
http://kubernetes.io/docs/user-guide/kubectl/kubectl_drain/
http://kubernetes.io/docs/user-guide/jobs/
Before resizing the cluster, set the project context in Cloud Shell by running the commands below:
gcloud config set project [PROJECT_ID]
gcloud config set compute/zone [COMPUTE_ZONE]
gcloud config set compute/region [COMPUTE_REGION]
gcloud components update
Note: You can also set the project, compute zone, and region as flags in the command below using the --project, --zone, and --region flags.
gcloud container clusters resize [CLUSTER_NAME] --node-pool [POOL_NAME] --num-nodes [NUM_NODES]
Run the above command for each node pool. You can omit the --node-pool flag if you have only one node pool.
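For example (the cluster name, pool, node count, and zone below are placeholders), shrinking a single pool to 3 nodes with explicit flags would look like:
gcloud container clusters resize my-cluster --node-pool default-pool --num-nodes 3 --zone us-east4-b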
Reference: https://cloud.google.com/kubernetes-engine/docs/how-to/resizing-a-cluster
I have an instance group of container VMs running my app in a Docker container.
I am trying to find a good strategy to manage the application logs for docker + MEAN + Google Cloud Compute Machines.
I can see the logs on individual containers by running docker logs [container_id].
However, if I stop and start the VM I lose those logs. I also have VMs dynamically added by the autoscaler and would like a convenient way to access their logs.
Stack is MEAN and Logging tool is bunyan.
Is it possible to centralize or combine the logs from all VMs in one persistent location?
Any suggestions?
UPDATES:
I installed the fluentd agent and now I can see logs when I manually run things in the shell: logger "some message for testing"
However, the logs from the Docker container on my container VM never show up in the logs.
I still don't know how to get those Docker logs to appear in Google Cloud Logging. They are supposed to be collected automatically.
cheers
Leo
Here are a yaml, a Dockerfile, and a conf for a fluentd pod inside Kubernetes.
Adjust the yaml to mount a disk:
https://github.com/GoogleCloudPlatform/kubernetes/tree/master/contrib/logging/fluentd-sidecar-gcp
Then adjust the config to log to the disk.
Build the container with the new configuration.
Deploy the new container.
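A rough sketch of those last two steps, assuming a hypothetical image name and that the yaml filename from the linked repo has been adapted to your project:
docker build -t gcr.io/my-project/fluentd-sidecar-gcp:v1 .
gcloud docker -- push gcr.io/my-project/fluentd-sidecar-gcp:v1
kubectl create -f fluentd-sidecar-gcp.yaml
(You can also push with plain docker push if your registry credentials are already configured.)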