I have a few Dataproc clusters with the same properties enabled. On cluster "A" I am able to get the jobs from the "dataproc.googleapis.com" API, but on cluster "B" I am not able to get the Dataproc jobs. The only difference I see between the clusters is the initialization actions. We need to get the Dataproc jobs and populate them into a table. Any thoughts on other possible reasons for this? Thanks.
Cluster A initialization actions:
- Pip install
- Configure Pip
- Purge yarn locals
- cloud sql proxy
- Conda-install-python
- copy-vennv-cron-multiversion
Cluster B initialization actions:
- Pip install
- Configure Pip
- Purge yarn locals
- cloud sql proxy
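For context, the call involved is roughly the following sketch (project ID, region, and cluster name are placeholders for our actual values):
# Listing jobs through dataproc.googleapis.com (the jobs.list method)
PROJECT_ID=my-project
REGION=us-central1
CLUSTER_NAME=cluster-b
curl -s \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://dataproc.googleapis.com/v1/projects/${PROJECT_ID}/regions/${REGION}/jobs?clusterName=${CLUSTER_NAME}"
# Equivalent gcloud command, handy for comparing cluster A against cluster B:
gcloud dataproc jobs list --cluster="${CLUSTER_NAME}" --region="${REGION}" --format=json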
Recently, I tried to set up Jenkins X on a Kubernetes cluster, but I ran into some problems during installation.
There are several options in jx create cluster, such as aks (create with AKS), aws (create with AWS), minikube (create with Minikube), and so on.
However, there is no option for using a local Kubernetes cluster. I want to set up Jenkins X with my own cluster.
Can I get some advice?
Thanks.
When you have your cluster set up such that you can run kubectl commands against it, you can run jx boot to set up your jx installation. You don't need jx create cluster, as your cluster already exists.
To install Jenkins X on an already existing cluster, you can use the command below:
jx install --provider=kubernetes --on-premise
The above command will install jx on your cluster.
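As a minimal sketch combining the two answers above (the context and node output will depend on your cluster, and which of the two jx commands applies depends on the jx version you have installed):
# Confirm kubectl is pointed at the existing cluster first.
kubectl config current-context
kubectl get nodes
# Then install Jenkins X into that cluster:
jx boot
# ...or, depending on your jx version:
jx install --provider=kubernetes --on-premise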
Say we have a couple of clusters on Amazon EKS. We have a new user or new machine whose .kube/config needs to be populated with the latest cluster info.
Is there an easy way to get the context info from our clusters on EKS and put it into the .kube/config file? Something like:
eksctl init "cluster-1-ARN" "cluster-2-ARN"
So after some web-sleuthing, I heard about:
aws eks update-kubeconfig
I tried that, and I get this:
$ aws eks update-kubeconfig
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

aws: error: argument --name is required
I would have thought it would just update the config for all clusters, but it doesn't. So I passed the cluster names/ARNs, like so:
aws eks update-kubeconfig --name arn:aws:eks:us-west-2:913xxx371:cluster/eks-cluster-1
aws eks update-kubeconfig --name arn:aws:eks:us-west-2:913xxx371:cluster/ignitecluster
but then I get:
kbc stderr: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: arn:aws:eks:us-west-2:913xxx371:cluster/eks-cluster-1.
kbc stderr: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: arn:aws:eks:us-west-2:913xxx371:cluster/ignitecluster.
Hmmm, this is kinda dumb 😒 those cluster names exist... so what 🤷 do I do now?
So yeah, clusters with those names don't actually exist: --name expects the bare cluster name, not the ARN. I discovered that via:
aws eks list-clusters
Ultimately, however, I still feel strongly that someone needs to make a tool that can just update your config with all the clusters that exist, instead of making you name them.
So to do this programmatically, it would be:
# jq -r prints raw cluster names (no surrounding JSON quotes)
aws eks list-clusters | jq -r '.clusters[]' | while read -r c; do
  aws eks update-kubeconfig --name "$c"
done
In my case, I was working with two AWS environments. My ~/.aws/credentials were pointing to one and had to be changed to point to the correct account. Once you change the account details, you can verify the change by running the following command:
eksctl get clusters
Then set up the kubeconfig with the command below, after verifying the region:
aws eks --region your_aws_region update-kubeconfig --name your_eks_cluster
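As a quick sanity check afterwards (a sketch; the exact context names depend on your clusters), you can confirm the account and the new kubeconfig entries:
# Confirm which AWS account/identity the CLI is using
aws sts get-caller-identity
# List the contexts now present in ~/.kube/config and show the active one
kubectl config get-contexts
kubectl config current-context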
Working in GCP with several kubernetes clusters, I would like to automatically get cluster credentials when switching gcloud configurations.
I have created several configurations for gcloud with gcloud config configurations create [config-name] and I have set what I need, specifically gcloud config set container/cluster [cluster-name].
When I switch configurations with gcloud config configurations activate [config-name], everything goes OK, except that I do not get the credentials for the cluster configured in that configuration. Instead, I have to run gcloud container clusters get-credentials [cluster-name].
Is there any way to automatically get credentials for a cluster when activating a gcloud configuration?
I think not.
gcloud and kubectl are distinct tools and each maintains its own configuration.
gcloud container clusters get-credentials is a bridging helper that configures kubectl's configuration (conventionally located in the ~/.kube/config file) with a gcloud auth helper to facilitate accessing Kubernetes Engine clusters. Otherwise, the two tools are unrelated.
Have a look at this post I wrote that covers using different configurations with kubectl. It's not exactly what you want but I hope it will be useful:
https://medium.com/google-cloud/context-light-gcloud-and-kubectl-89185d38ce82
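That said, if you want to approximate this behaviour, you could wrap the two commands in a small shell function. This is only a sketch, and it assumes each configuration has container/cluster and compute/zone set (the function name is made up):
# Hypothetical helper: activate a gcloud configuration, then fetch
# credentials for the cluster that configuration points at.
activate_config() {
  gcloud config configurations activate "$1" || return 1
  local cluster zone
  cluster="$(gcloud config get-value container/cluster 2>/dev/null)"
  zone="$(gcloud config get-value compute/zone 2>/dev/null)"
  gcloud container clusters get-credentials "$cluster" --zone "$zone"
}
# Usage: activate_config my-config-name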
When I am trying to delete a Dataproc cluster in Google Cloud Platform, I get the error below:
Failed to stop job b021d29d-acc9-409d-8fca-52363076a63c Cluster not found
Could anyone help?
I'm guessing you are trying to delete the cluster via the Dataproc Clusters UI. In that case, the problem could be a bug in the UI itself, which always sets the cluster region argument to 'global'. If your cluster's region is not 'global', you'll get the 'Cluster not found' error.
The solution is to use the gcloud api:
gcloud dataproc clusters delete NAME [--async] [--region=REGION] [GCLOUD_WIDE_FLAG …]
ref: https://cloud.google.com/sdk/gcloud/reference/dataproc/clusters/delete
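For example (the cluster name and region here are placeholders; pass the region the cluster was actually created in):
# Delete a cluster in its actual region instead of the default 'global'
gcloud dataproc clusters delete my-cluster --region=us-central1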
I have set-up datalab to run on a dataproc master node using the datalab initialisation action:
gcloud dataproc clusters create <CLUSTER_NAME> \
--initialization-actions gs://<GCS_BUCKET>/datalab/datalab.sh \
--scopes cloud-platform
This has historically worked fine. However, as of 30.5 I can no longer get any code to run, however simple. I just get the "Running" progress bar: no timeouts, no error messages. How can I debug this?
I just created a cluster and it seemed to work for me.
Just seeing "Running" usually means that there is not enough room in the cluster to schedule a Spark Application. Datalab loads PySpark when Python loads and that creates a YARN application. Any code will block until the YARN application is scheduled.
On the default 2 node n1-standard-4 worker cluster, with the default configs. There can only be 1 spark application. You should be able to fit two notebooks by setting --properties spark.yarn.am.memory=1g or using a larger cluster, but you will still eventually hit a limit on running notebooks per cluster.
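For instance, here is a sketch of creating the cluster with that property pre-set (the cluster name is a placeholder and <GCS_BUCKET> is your bucket; note that the clusters create --properties flag takes a spark: file prefix for Spark settings):
# Placeholders: my-datalab-cluster and <GCS_BUCKET> are not real names.
# The spark: prefix maps the key into spark-defaults.conf on the cluster.
gcloud dataproc clusters create my-datalab-cluster \
  --initialization-actions gs://<GCS_BUCKET>/datalab/datalab.sh \
  --scopes cloud-platform \
  --properties spark:spark.yarn.am.memory=1g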