Switching VPC on an EKS instance - kubernetes

Is it possible to change the VPC of an already created EKS cluster? Or do I have to create a new cluster and select the new VPC there?

You should be able to change the VPC configuration for the EKS cluster. However, according to the documentation I found, if the VPC config is updated, the update type is Replacement, i.e. a new cluster will be created with the updated config.
Please see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-eks-cluster.html#cfn-eks-cluster-resourcesvpcconfig for more information
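For concreteness, here is a minimal CloudFormation sketch (the cluster name, role ARN, subnet and security group IDs are placeholders). Per the linked page, updating this VPC config has update type Replacement, so CloudFormation would delete this cluster and create a new one:

    MyEksCluster:
      Type: AWS::EKS::Cluster
      Properties:
        Name: my-cluster
        RoleArn: arn:aws:iam::111122223333:role/eks-cluster-role
        ResourcesVpcConfig:
          # Pointing these at subnets in a different VPC triggers replacement,
          # i.e. a brand-new cluster rather than an in-place update.
          SubnetIds:
            - subnet-aaaa1111
            - subnet-bbbb2222
          SecurityGroupIds:
            - sg-cccc3333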
Hope this helps.

The correct answer for most situations is "You can't change the VPC for an EKS cluster." The other answer from krisnik refers to a CloudFormation-managed stack, where changing the VPC deletes your EKS cluster and creates a new one that is a lot like the old one, but in a new VPC. That new cluster also won't contain any of your Kubernetes applications or anything else, unless you were managing those in CloudFormation as well. So it's really only helpful if you happen to be in that unusual situation of managing your Kubernetes resources through CloudFormation.
To confirm this, see the API reference for UpdateClusterConfig, which confusingly lets you pass in the resourcesVpcConfig parameter, but also carries a warning note saying "You can't update the subnets or security group IDs for an existing cluster." That pretty much settles it.
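To illustrate (the cluster name is a placeholder): update-cluster-config does accept a resourcesVpcConfig argument, but in practice only the endpoint access settings in it can be changed on an existing cluster; per the warning above, there is no combination of flags here that moves the cluster to different subnets or a different VPC.

    # Hedged sketch: this kind of update is allowed on an existing cluster...
    aws eks update-cluster-config \
      --name my-cluster \
      --resources-vpc-config endpointPublicAccess=true,endpointPrivateAccess=true

    # ...but subnet or security group changes are not, so moving to a new VPC
    # means creating a new cluster and migrating the workloads to it.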

Related

Edit extraPortMappings in kind cluster

I've scanned through all the resources and still cannot find a way to change extraPortMappings in a kind cluster without deleting it and creating it again.
Is it possible and how?
It's not said explicitly in the official docs, but I found some references that confirm your thoughts are correct: changing extraPortMappings (as well as other cluster settings) is only possible by recreating the kind cluster. A sketch of the recreate workflow follows the quotes below.
if you use extraPortMappings in your config, they are “fixed” and cannot be modified, unless you recreate the cluster.
Source - Issues I Came Across
Note that the cluster configuration cannot be changed. The only workaround is to delete the cluster (see below) and create another one with the new configuration.
Source - Kind Installation
However, there are obvious disadvantages in the configuration and update of the cluster, and the cluster can only be configured and updated by recreating the cluster. So you need to consider configuration options when you initialize the cluster.
Source - Restrictions
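As a sketch of that workflow (the cluster name and port numbers are placeholders), the desired mappings go in a config file:

    # kind-config.yaml
    kind: Cluster
    apiVersion: kind.x-k8s.io/v1alpha4
    nodes:
      - role: control-plane
        extraPortMappings:
          - containerPort: 80   # port inside the node container
            hostPort: 8080      # port exposed on the host machine
            protocol: TCP

and the cluster is then rebuilt from it, losing any in-cluster state:

    kind delete cluster --name dev
    kind create cluster --name dev --config kind-config.yaml

Since the config is the only way to change these settings, it's worth keeping it in version control alongside whatever manifests you use to rebuild the cluster contents.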

Authenticate to K8s as specific user

I recently built an EKS cluster with Terraform. In doing that, I created 3 different roles on the cluster, all mapped to the same AWS IAM role (the one used to create the cluster).
Now, when I try to manage the cluster, RBAC seems to use the least privileged of these (which I made a view role) that only has read-only access.
Is there any way to tell the config to use the admin role instead of view?
I'm afraid I've hosed this cluster and may need to rebuild it.
Some intro
You don't need to create a mapping in K8s for the IAM entity that created the EKS cluster, because by default it is automatically mapped to the "system:masters" K8s group. So, if you want to give additional permissions in a K8s cluster, just map other IAM roles/users.
In EKS, IAM entities are used for authentication, and K8s RBAC is used for authorization. The mapping between them is defined in the aws-auth ConfigMap in the kube-system namespace.
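For illustration, a minimal sketch of such a mapping (the account ID, role name and username are placeholders; system:masters is the built-in cluster-admin group):

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: aws-auth
      namespace: kube-system
    data:
      mapRoles: |
        # Map one IAM role to a K8s username and the cluster-admin group
        - rolearn: arn:aws:iam::111122223333:role/eks-admins
          username: eks-admin
          groups:
            - system:masters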
Back to the question
I'm not sure why K8s mapped that IAM user to the least-privileged K8s user - it may be the default behaviour (a bug?) or it may be because the mapping record (for view perms) appears later in the ConfigMap, so it simply overwrote the previous mapping.
Either way, there is no way to specify which K8s user to use with such a mapping.
Also, if you used eksctl to spin up the cluster, you may try creating a new mapping as per the docs, but I'm not sure if that will work.
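A hedged sketch of that (the cluster name, role ARN and username are placeholders), adding a mapping via eksctl instead of editing aws-auth by hand:

    # Map an IAM role to the cluster-admin group through eksctl
    eksctl create iamidentitymapping \
      --cluster my-cluster \
      --arn arn:aws:iam::111122223333:role/eks-admins \
      --username eks-admin \
      --group system:masters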
Some reading reference: #1, #2

Migration K8S cluster

We have several clusters. Right now, we want to upgrade a K8s cluster by replacing it with a new one.
We handle the deployments with CI/CD, so when the new cluster is ready, we will start moving apps to it by running the pipelines.
We're facing a problem with DNS.
All the apps in the Kubernetes cluster are resolved via a wildcard DNS record.
Also, we need to do the migration in multiple steps, so we can't simply point the wildcard at the new cluster, because the old cluster will keep hosting some apps for a while and the apps need to keep interacting with each other.
Any good solution or alternative to get the migration done smoothly?
And what would be a best practice about DNS to avoid this situation in the future?
Thank you in advance.
You can put in specific DNS records for each hostname as they need to migrate.
Say your wildcard is for *.mycompany.com...
app1.mycompany.com is getting migrated
app2.mycompany.com is staying put until the next batch
Add a record for app2.mycompany.com pointing to the old cluster, and switch the wildcard record to point to the new cluster.
Now app1.mycompany.com will resolve to the new cluster, but the more specific record for app2.mycompany.com will trump the wildcard and keep pointing to the old cluster.
When it's time for app2's DNS cutover, delete the record and the wildcard will take over.
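As a sketch, using placeholder hostnames and documentation IP ranges, the zone would look something like this during the overlap; the exact-match record takes precedence because a DNS wildcard only applies to names that don't otherwise exist:

    *.mycompany.com.      300  IN  A  203.0.113.20    ; new cluster's ingress/load balancer
    app2.mycompany.com.   300  IN  A  198.51.100.10   ; old cluster; exact match wins over the wildcard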

Google Kubernetes Engine: restore service account

One of our Google Kubernetes Engine clusters has lost access to Google Cloud Platform via its main service account. It was not using the 'default' service account but a custom one, which is now gone. Is there a way to restore or change the service account for a GKE cluster after it has been created? Or are we just out of luck and do we have to re-create the cluster?
Good news! We found a way to solve the issue without having to re-create the entire cluster.
1. Create a new node pool and make sure it has the default permissions to Google Cloud Platform (this is the case if you create the pool via the Console UI).
2. 'Force' all workloads onto the new node pool (e.g. by using node labels).
3. Re-deploy the workloads.
4. Remove the old (broken) node pool.
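A sketch of those steps with gcloud/kubectl (the cluster, zone, node pool and service account names are placeholders; here steps 2-3 are done by cordoning and draining the old pool rather than with node labels):

    # 1. New pool bound to a working service account
    gcloud container node-pools create fixed-pool \
      --cluster=my-cluster --zone=europe-west1-b \
      --service-account=node-sa@my-project.iam.gserviceaccount.com

    # 2./3. Push workloads off the broken pool so they reschedule onto the new one
    kubectl cordon -l cloud.google.com/gke-nodepool=broken-pool
    kubectl drain -l cloud.google.com/gke-nodepool=broken-pool \
      --ignore-daemonsets --delete-emptydir-data

    # 4. Remove the broken pool
    gcloud container node-pools delete broken-pool \
      --cluster=my-cluster --zone=europe-west1-b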
Hope this helps anyone with the same issue in the future.
Looks like you are out of luck. According to the documentation, the gcloud container clusters update command does not let you change the service account.
It's not possible to do either of those things: restore the service account or point the cluster at a new one. You can edit Compute Engine instances, but since the cluster's nodes are managed as a group, you can't edit them individually; and even if you could, with the autoscaler or node auto-repair enabled, new nodes wouldn't get the new service account.
So it seems you're out of luck and will have to recreate the cluster.

Correct way to define k8s-user-startup-script

This is a follow-up to: Recommended way to persistently change kube-env variables
I was playing around with the possibility of defining a k8s-user-startup-script for GKE instances (I want to install additional software on each node).
Adding k8s-user-startup-script to an Instance Group Template's "Custom Metadata" works, but it is overwritten by gcloud container clusters upgrade, which creates a new Instance Template without "inheriting" the additional k8s-user-startup-script metadata from the current template.
I've also tried adding a k8s-user-startup-script to the project metadata (I thought that would be inherited by all instances of my project, as described here), but that is not taken into account.
What is the correct way to define a k8s-user-startup-script that persists cluster upgrades?
Or, more generally, what is the desired way to customize GKE nodes?
Google Container Engine doesn't support custom startup scripts for nodes.
As I mentioned in Recommended way to persistently change kube-env variables you can use a DaemonSet to customize your nodes. A DaemonSet running in privileged mode can do pretty much anything that you could do with a startup script, with the caveat that it is done slightly later in the node bring-up lifecycle. Since a DaemonSet will run on all nodes in your cluster, it will be automatically applied to any new nodes that join (via cluster resize) and because it is a Kubernetes API object, it will be persisted across OS upgrades.
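As an illustration of that pattern, here is a hypothetical sketch of a privileged DaemonSet that runs a one-off setup command on every node; the name, namespace, image and the setup command itself are placeholders for whatever software you need to install:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-setup
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          app: node-setup
      template:
        metadata:
          labels:
            app: node-setup
        spec:
          hostPID: true
          containers:
            - name: node-setup
              image: busybox:1.36
              securityContext:
                privileged: true
              volumeMounts:
                - name: host-root
                  mountPath: /host
              command: ["sh", "-c"]
              args:
                # chroot into the node's root filesystem so the command behaves
                # like a startup script, then idle so the pod isn't restarted
                - |
                  chroot /host /bin/sh -c 'echo "custom node setup goes here"'
                  while true; do sleep 3600; done
          volumes:
            - name: host-root
              hostPath:
                path: /

Because it's a DaemonSet, the same setup is applied automatically to nodes added later by resizes or upgrades, which is exactly the persistence the startup-script metadata approach lacks.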