I created an ECS account name linked to an AWS account name and enabled ECS. Now, when creating a new server group and selecting ECS, the ECS account name appears under the account section, but it won't let me select it.
If you are using Spinnaker < 1.19.x, the AWS ECS provider depends on the AWS EC2 provider and the AWS IAM structure.
Please read the AWS Providers Overview to understand the AWS IAM structure that is required (an AWS managing account and AWS managed accounts through the AssumeRole action).
Then you can set up the AWS EC2 provider following this getting-started guide by Armory.
Finally, set up the AWS ECS provider with the legacy instructions found at spinnaker.io.
If you are using Spinnaker > 1.19.x, you must use AWS ECS service-linked roles.
One very important step is tagging the AWS VPC subnets so that Spinnaker can discover and use them.
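If you want to script that tagging step, here is a minimal boto3 sketch. The subnet ID and the tag key/value are placeholders; the exact tag your installation expects depends on your clouddriver / ECS provider configuration, so check the Spinnaker ECS docs for the key to use.

```python
# Minimal sketch: tag VPC subnets so Spinnaker's clouddriver can discover them.
# The subnet ID, tag key, and tag value below are placeholders -- use the tag
# that your Spinnaker ECS provider configuration actually expects.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_tags(
    Resources=["subnet-0123456789abcdef0"],            # placeholder subnet ID
    Tags=[{"Key": "spinnaker-ecs", "Value": "true"}],  # placeholder tag
)
```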
I have some pods running in an AKS cluster that are trying to access AWS S3 buckets (using Azure Blob Storage is not an option because of the current architecture). I have read about IAM roles for Kubernetes service accounts, but the documentation only mentions EKS clusters. Is there any way out here? Can we create a service account in AKS with an IAM role to access an S3 bucket in AWS (probably in a different location)?
Sounds like workload identity with identity federation could be a fit for your scenario.
The idea would be to enable the OIDC issuer feature flag on your AKS cluster and then create a federated identity trust between an AKS Kubernetes service account and AWS.
Maybe this and this article will guide you.
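Once the AKS OIDC issuer is registered in AWS as an IAM identity provider and an IAM role trusts your Kubernetes service account, a pod can trade its projected service account token for temporary AWS credentials. Here is a minimal boto3 sketch; the token path, role ARN, and bucket name are assumptions, so adjust them to your setup.

```python
# Minimal sketch: exchange an AKS-projected service account token (OIDC) for
# temporary AWS credentials via STS AssumeRoleWithWebIdentity, then read S3.
# The token path, role ARN, region, and bucket name are assumptions.
import boto3

TOKEN_PATH = "/var/run/secrets/tokens/aws-token"           # projected token volume (assumed)
ROLE_ARN = "arn:aws:iam::123456789012:role/aks-s3-reader"  # placeholder role

with open(TOKEN_PATH) as f:
    web_identity_token = f.read()

sts = boto3.client("sts", region_name="us-east-1")
creds = sts.assume_role_with_web_identity(
    RoleArn=ROLE_ARN,
    RoleSessionName="aks-pod",
    WebIdentityToken=web_identity_token,
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(s3.list_objects_v2(Bucket="my-bucket", MaxKeys=5))   # placeholder bucket
```

boto3 can also pick up the token automatically if you set AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE as environment variables in the pod spec, in which case the explicit STS call is not needed.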
I set up an EKS cluster and integrated AWS Secrets Manager with it following the steps mentioned in https://github.com/aws/secrets-store-csi-driver-provider-aws, and it worked as expected.
Now we have a requirement to integrate AWS Secrets Manager with an on-premises k8s cluster, and I am unable to follow the same steps, as they seem to be explicitly for AWS EKS-based clusters.
I googled around a bit and found you can call Secrets Manager programmatically using one of the approaches in https://docs.aws.amazon.com/secretsmanager/latest/userguide/asm_access.html, but this approach won't work for us.
Is there a k8s way to directly connect to AWS Secrets Manager without setting up the AWS CLI and the OIDC cluster ID on the on-premises cluster?
Any help would be highly appreciated.
You can set up external OIDC providers with AWS and also set up Kubernetes with OIDC, but that is a lot of work.
AWS recently announced IAM Roles Anywhere, which lets you authenticate with host-based certificates, but you will still have to call the Secrets Manager APIs.
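Whichever way you obtain AWS credentials (IAM Roles Anywhere, an external OIDC provider, or plain access keys), the retrieval itself is a single Secrets Manager API call. A minimal boto3 sketch, with the region and secret name as placeholders:

```python
# Minimal sketch: fetch a secret from AWS Secrets Manager with boto3.
# Credentials come from whatever mechanism you chose (IAM Roles Anywhere,
# environment variables, a shared credentials file, ...).
# The region and secret name are placeholders.
import json
import boto3

client = boto3.client("secretsmanager", region_name="us-east-1")
resp = client.get_secret_value(SecretId="my-app/database")  # placeholder secret name
secret = json.loads(resp["SecretString"])
print(list(secret))
```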
If you are willing to retrieve secrets through etcd (which may store the secrets base64-encoded on the cluster), you can look at using the open-source External Secrets solution.
We are trying to deploy an application to the provisioned private AKS cluster using Terraform in Azure DevOps; when we try to deploy a Helm chart or access the cluster, we get an error.
As you did not provide much information, I will do my best to help you:
It seems that the user or service principal running the pipeline has permissions at the subscription level to create the AKS cluster, but not enough permissions to create anything inside Kubernetes.
You can leverage RBAC, Azure AD, and Azure RBAC with your Kubernetes cluster. With Terraform you can specify admin_group_object_ids inside the azure_active_directory_role_based_access_control block. Just assign the group there and add the pipeline user / service principal to that group.
Alternatively, you can use Azure built-in roles like Azure Kubernetes Service Cluster Admin Role and add your user / service principal there.
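If you prefer to script that built-in role assignment, here is a rough sketch using the azure-identity and azure-mgmt-authorization packages. The subscription ID, resource IDs, role definition GUID, and service principal object ID are placeholders, and model/method names can differ between SDK versions, so treat this as a starting point rather than a definitive implementation (az role assignment create does the same from the CLI).

```python
# Rough sketch: assign an Azure built-in role (e.g. Azure Kubernetes Service
# Cluster Admin Role) to the pipeline service principal at the AKS scope.
# All IDs below are placeholders; model names may vary between SDK versions.
import uuid

from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"            # placeholder
AKS_SCOPE = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/my-rg"
    "/providers/Microsoft.ContainerService/managedClusters/my-aks"  # placeholder
)
ROLE_DEFINITION_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/providers/Microsoft.Authorization"
    "/roleDefinitions/<built-in-role-guid>"                         # placeholder GUID
)
PIPELINE_SP_OBJECT_ID = "11111111-1111-1111-1111-111111111111"      # placeholder

client = AuthorizationManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
client.role_assignments.create(
    scope=AKS_SCOPE,
    role_assignment_name=str(uuid.uuid4()),
    parameters=RoleAssignmentCreateParameters(
        role_definition_id=ROLE_DEFINITION_ID,
        principal_id=PIPELINE_SP_OBJECT_ID,
        principal_type="ServicePrincipal",
    ),
)
```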
I'm new to Istio, and I read the Istio docs (https://istio.io/docs/concepts/security/#istio-identity):
Istio service identities on different platforms:
Kubernetes: Kubernetes service account
GKE/GCE: may use GCP service account
GCP: GCP service account
AWS: AWS IAM user/role account
On-premises (non-Kubernetes): user account, custom service account, service name, Istio service account, or GCP service account. The custom service account refers to the existing service account just like the identities that the customer’s Identity Directory manages.
I can't quite figure out what on-premises means. Can anyone give me some more detailed information about on-premises, and how it compares to Kubernetes?
Thanks.
"On Premises" simply means locally at your organization in contrast to remote / in the cloud. See https://en.wikipedia.org/wiki/On-premises_software
I created a Kubernetes cluster under my user account on IBM Bluemix and added another user to my organization, but he can't see my cluster. Is there any other configuration needed?
To manage cluster access, see this link from the IBM Bluemix Container Service documentation. Summarised here:
Managing cluster access

You can grant access to your cluster to other users, so that they can access the cluster, manage the cluster, and deploy apps to the cluster.

Every user that works with IBM Bluemix Container Service must be assigned a service-specific user role in Identity and Access Management that determines what actions this user can perform. Identity and Access Management differentiates between the following access permissions.

IBM Bluemix Container Service access policies

Access policies determine the cluster management actions that you can perform on a cluster, such as creating or removing clusters, and adding or removing extra worker nodes.

Cloud Foundry roles

Every user must be assigned a Cloud Foundry user role. This role determines the actions that the user can perform on the Bluemix account, such as inviting other users, or viewing the quota usage. To review the permissions of each role, see Cloud Foundry roles.

RBAC roles

Every user who is assigned an IBM Bluemix Container Service access policy is automatically assigned an RBAC role. RBAC roles determine the actions that you can perform on Kubernetes resources inside the cluster. RBAC roles are set up for the default namespace only. The cluster administrator can add RBAC roles for other namespaces in the cluster. See Using RBAC Authorization in the Kubernetes documentation for more information.
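As the last paragraph notes, RBAC roles are created for the default namespace only. If you need to give the other user access in an additional namespace, here is a minimal sketch using the official Kubernetes Python client; the namespace and user name are placeholders, and the user must be referenced exactly the way the cluster sees them.

```python
# Minimal sketch: bind the built-in "edit" ClusterRole to another user in a
# non-default namespace, using the official Kubernetes Python client.
# The namespace and user name below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
rbac = client.RbacAuthorizationV1Api()

role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "teammate-edit", "namespace": "dev"},
    "roleRef": {
        "apiGroup": "rbac.authorization.k8s.io",
        "kind": "ClusterRole",
        "name": "edit",
    },
    "subjects": [
        {
            "apiGroup": "rbac.authorization.k8s.io",
            "kind": "User",
            "name": "teammate@example.com",  # placeholder user identity
        }
    ],
}

rbac.create_namespaced_role_binding(namespace="dev", body=role_binding)
```

The equivalent one-liner is kubectl create rolebinding teammate-edit --clusterrole=edit --user=teammate@example.com -n dev.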