How can we copy an Amazon WorkSpace image from one AWS account to another? - amazon-workspaces

I heard we need to create an AWS support ticket for that, and that the option is not available on the 'basic' support plan. Kindly share any details if you know. Appreciate any help. Thanks

AWS recently added support for this to the AWS CLI tool. There's still no way to do it through the web dashboard. To do it via CLI, follow the steps below.
Using credentials for the source account:
aws workspaces describe-workspace-images --region <source-region>
aws workspaces update-workspace-image-permission --image-id <source-image-id> --region <source-region> --shared-account-id <target-account-id> --allow-copy-image
aws workspaces describe-workspace-image-permissions --image-id <source-image-id> --region <source-region>
And then with destination account credentials:
aws workspaces describe-workspace-images --image-ids <source-image-id> --region <source-region> --image-type SHARED
aws workspaces copy-workspace-image --source-image-id <source-image-id> --source-region <source-region> --name <new-image-name> --region <target-region>
Where <source-region> is e.g. us-east-1, and <target-region> is the region you want the copy created in (for a pure cross-account copy it can be the same as the source region).
The copy will take some time, during which the image state will be PENDING. I just did this today and it took maybe 15 minutes.
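Since the copy runs asynchronously, you can poll until the image leaves the PENDING state. A minimal sketch; the real AWS CLI call is shown in the comment, and a stand-in is used below it so the loop itself runs anywhere:

```shell
#!/bin/sh
# wait_for_image repeatedly runs a command that prints the image state
# and stops once the state is no longer PENDING. With the real AWS CLI
# the command would be something like:
#   aws workspaces describe-workspace-images --image-ids <new-image-id> \
#       --region <target-region> --query 'Images[0].State' --output text
wait_for_image() {
  while true; do
    state=$("$@")
    echo "image state: $state"
    [ "$state" != "PENDING" ] && break
    sleep 1   # use a longer interval (e.g. 60s) against the real API
  done
}

# Stand-in for the AWS CLI call: reports PENDING on the first call and
# AVAILABLE on the next, so the sketch runs without AWS credentials.
scratch=$(mktemp -d)
fake_describe() {
  if [ -e "$scratch/called" ]; then
    echo AVAILABLE
  else
    touch "$scratch/called"
    echo PENDING
  fi
}

wait_for_image fake_describe
```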

You need to reach out to AWS Support for that.

Related

Jenkins cron job to run selenium & k8s

I am working on a project in which I have created a k8s cluster to run a Selenium grid locally. I want to schedule the tests to run, and so far I have tried to create a Jenkins cron job to do so. For that I am using the Kubernetes plugin in Jenkins.
However I am not sure about the steps to follow. Where should I be uploading the kube config file? There are a few options here:
Build Environment in Jenkins
Any ideas or suggestions?
Thanks
Typically you can choose either option, depending on how you want to manage the system:
The secret text or file option lets you copy/paste a secret (with a token) into Jenkins, which will then be used to access the k8s cluster. Token-based access works by adding an HTTP header to your requests to the k8s API server, as follows: Authorization: Bearer $YOUR_TOKEN. This authenticates you to the server and is the programmatic way to access the k8s API.
The configure kubectl option lets you specify the config file within the Jenkins UI, where you can set the kubeconfig. This is the file-based way of configuring access to the k8s API. The kubeconfig itself contains a set of key-pair-based credentials that are issued to a username and signed by the API server's CA.
Any way would work fine! Hope this helps!
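To illustrate the token-based approach, this is roughly what a raw request to the k8s API looks like. The API server address and token below are placeholder values, and the curl call is commented out because it needs a reachable cluster:

```shell
#!/bin/sh
# Placeholder values; in Jenkins the token would come from the
# 'secret text' credential.
APISERVER="https://kubernetes.example.com:6443"
TOKEN="placeholder-token"

# The header that authenticates the request to the k8s API server:
AUTH_HEADER="Authorization: Bearer $TOKEN"
echo "$AUTH_HEADER"

# A real call would then look like (needs a reachable cluster and CA cert):
# curl --cacert ca.crt -H "$AUTH_HEADER" "$APISERVER/api/v1/namespaces"
```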
If Jenkins is running in Kubernetes as well, I'd create a service account, create the Role and RoleBinding needed to create CronJobs (and nothing more), and attach the service account to your Jenkins Deployment or StatefulSet. You can then use the service account's token (mounted by default at /var/run/secrets/kubernetes.io/serviceaccount/token) and query your API endpoint to create your CronJobs.
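As a sketch of that setup (names and namespace are placeholders, and the verbs are a minimal set for creating and inspecting CronJobs):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: ci
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cronjob-creator
  namespace: ci
rules:
- apiGroups: ["batch"]
  resources: ["cronjobs"]
  verbs: ["create", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jenkins-cronjob-creator
  namespace: ci
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: ci
roleRef:
  kind: Role
  name: cronjob-creator
  apiGroup: rbac.authorization.k8s.io
```

The service account named here would then be referenced via serviceAccountName in the Jenkins Deployment or StatefulSet spec.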
However, if Jenkins is running outside of your Kubernetes cluster, I'd authenticate against your cloud provider in Jenkins using one of the plugins available, using:
Service account (GCP)
Service principal (Azure)
AWS access and secret key or with an instance profile (AWS).
and then would run any of the CLI commands to generate a kubeconfig file:
gcloud container clusters get-credentials
az aks get-credentials
aws eks update-kubeconfig

Remove gcloud VPC-SC security perimeter when no organisation is set up

A Cloud Run project that worked two months ago suddenly started complaining about the default log bucket being outside the VPC-SC perimeter. However, this project is not in an organisation, so I don't understand how I can remove the perimeter.
gcloud builds submit --tag [tag]
Errors with:
ERROR: (gcloud.builds.submit)
The build is running, and logs are being written to the default logs bucket.
Unfortunately, the default logs bucket is always outside any VPC-SC security
perimeter, so this tool cannot stream the logs for you.
While changing the controls is not possible from this tool:
If you still have the issue: I reviewed the documentation for your main question of how to remove a gcloud VPC-SC security perimeter. If you activated VPC accessible services and then decide that the VPC networks in your perimeter no longer need access to the Cloud Storage service, you can remove services from your service perimeter's VPC accessible services using the following command:
gcloud access-context-manager perimeters update example_perimeter \
--remove-vpc-allowed-services=example.storage.googleapis.com \
--policy=example.11271009391
If the issue persists, you can leave a comment so that we can continue helping you, or here is a link that helps with troubleshooting any issues related to VPC Service Controls.
Update your gcloud tools.

best way to seed new machine with k8s/eks info

Say we have a couple of clusters on Amazon EKS. We have a new user or new machine that needs .kube/config to be populated with the latest cluster info.
Is there some easy way to get the context info from our clusters on EKS and put it in the .kube/config file? Something like:
eksctl init "cluster-1-ARN" "cluster-2-ARN"
so after some web-sleuthing, I heard about:
aws eks update-kubeconfig
I tried that, and I get this:
$ aws eks update-kubeconfig
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:

  aws help
  aws <command> help
  aws <command> <subcommand> help

aws: error: argument --name is required
I would think it would just update the config for all clusters, but it doesn't. So I passed the cluster names/ARNs, like so:
aws eks update-kubeconfig --name arn:aws:eks:us-west-2:913xxx371:cluster/eks-cluster-1
aws eks update-kubeconfig --name arn:aws:eks:us-west-2:913xxx371:cluster/ignitecluster
but then I get:
kbc stderr: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: arn:aws:eks:us-west-2:913xxx371:cluster/eks-cluster-1.
kbc stderr: An error occurred (ResourceNotFoundException) when calling the DescribeCluster operation: No cluster found for name: arn:aws:eks:us-west-2:913xxx371:cluster/ignitecluster.
hmmm this is kinda dumb 😒 those cluster names exist..so what 🤷 do I do now
So yeah those clusters I named don't actually exist. I discovered that via:
aws eks list-clusters
Ultimately, though, I still feel strongly that someone should make a tool that just updates your config with all the clusters that exist, instead of making you name them.
So to do this programmatically, it would be:
aws eks list-clusters | jq -r '.clusters[]' | while read -r c; do
  aws eks update-kubeconfig --name "$c"
done
(Note the -r flag on jq: without it the cluster names are emitted with surrounding quotes, and update-kubeconfig will not find them.)
In my case, I was working with two AWS environments. My ~/.aws/credentials were pointing to one and had to be changed to point to the correct account. Once you change the account details, you can verify the change by running the following commands:
eksctl get clusters
and then set the kubeconfig using the command below, after verifying the region:
aws eks --region your_aws_region update-kubeconfig --name your_eks_cluster

Spring GCP service not connecting to Cloud SQL database

I have a Spring GCP service which when run locally connects fine to my Google Cloud SQL instance.
However, when I deploy and launch on my Google Cloud Kubernetes cluster, it is failing to connect with Insufficient Permissions errors.
I followed the steps at https://cloud.google.com/sql/docs/mysql/connect-kubernetes-engine, but I still have the same connection issue.
My source code is https://github.com/christianblake/spring-boot-gcp
deployment.yml is in the root dir.
Appreciate if somebody has any pointers as I'm obviously missing a point.
Thank you.
Assuming credentials.json is installed correctly, the service account defined in credentials.json needs to have the Cloud SQL Client role. There are several ways to do this, as documented here.
From the cli, you would do something like this:
gcloud projects add-iam-policy-binding $PROJECT_NAME \
  --member serviceAccount:$GOOGLE_SERVICE_ACCOUNT@$PROJECT_NAME.iam.gserviceaccount.com \
  --role roles/cloudsql.client
#Mangu, I found the following error in the error logs.
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
Which led to the following similar question
Cloud SQL Proxy and Insufficient Permission
I re-created the cluster, this time including the SQL scopes, with the following:
gcloud container clusters create cloudcluster --num-nodes 2 --machine-type n1-standard-1 --zone us-central1-c --scopes https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/sqlservice.admin
And that resolved the issue.
Thank you both for the feedback, and apologies for missing the google error code in the original question.

How to access a Google Kubernetes cluster without the Google Cloud SDK?

I'm having trouble figuring out how I can set my kubectl context to connect to a Google Cloud cluster without using the gcloud SDK (to run in a controlled CI environment).
I created a service account in googlecloud
Generated a secret from that (json format)
From there, how do I configure kubectl context to be able to interact with the cluster ?
Right in the Cloud Console you can find the connect command:
gcloud container clusters get-credentials "your-cluster-name" --zone "desired-zone" --project "your_project"
But before this you should configure the gcloud tool.
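Alternatively, without the SDK, a kubeconfig can be written by hand. A sketch, assuming you have the cluster endpoint and CA certificate from the Cloud Console and a bearer token for your service account (all values below are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: my-gke-cluster
  cluster:
    server: https://203.0.113.20           # cluster endpoint (placeholder)
    certificate-authority-data: <base64-encoded-CA-cert>
users:
- name: ci-user
  user:
    token: <service-account-bearer-token>  # from the generated secret
contexts:
- name: my-gke
  context:
    cluster: my-gke-cluster
    user: ci-user
current-context: my-gke
```

Point kubectl at it with the KUBECONFIG environment variable (or save it as ~/.kube/config), and kubectl will use that context without any gcloud involvement.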