Find out deployment type of existing ECS service - amazon-ecs

The AWS doc describes three potential deployment types for a service: "ECS", "CODE_DEPLOY", and "EXTERNAL". When creating a new service you can choose between "ECS" and "CODE_DEPLOY".
I have an existing service. I checked the following places to find out its deployment type:
Its definition in Terraform
Its page in the AWS web console
Its entry in aws ecs describe-services --cluster=my-cluster --services=my-service
Not one of them mentions anything about a deployment type, nor any of the three enum values above. I'm guessing my service has the default deployment type, and that the default deployment type is "ECS", but I haven't found anything in the docs validating this.
How can I figure out my service's deployment type?

I understand it is a bit confusing. The deployment type is called the deployment controller: you can pass it to the AWS CLI as deploymentController, and if you do not pass anything it defaults to "ECS". You can find more details for the AWS CLI in the link below, and the same parameter can be used in Terraform as well; I have included a Terraform example and a link to its documentation.
https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_CreateService.html#API_CreateService_RequestSyntax
resource "aws_ecs_service" "example" {
name = "example"
cluster = aws_ecs_cluster.example.id
deployment_controller {
type = "EXTERNAL"
}
}
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/ecs_service
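For comparison, here is a minimal sketch of the default case (the resource names are illustrative): setting the deployment controller to "ECS" explicitly is equivalent to omitting the block entirely, so a service that has no deployment controller configured anywhere is using "ECS".
resource "aws_ecs_service" "example" {
  name    = "example"
  cluster = aws_ecs_cluster.example.id

  # Equivalent to omitting the block: "ECS" is the default controller type.
  deployment_controller {
    type = "ECS"
  }
}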

Related

How can I see which kubernetes user is creating the deployment and what type of authentication is used?

I am trying to see which kubernetes user is creating the deployment and what type of authentication is used (basic auth, token, etc).
I try to do it using this:
kubectl describe deployment/my-workermole
but I am not finding that type of information in there.
The cluster is not managed by me, and I am not able to find this in the deployment Jenkinsfile. Where and how can I find that type of information for my Kubernetes deployment, but after it has been deployed?

How to fix kubernetes_config_map resource error on a newly provisioned EKS cluster via terraform?

I'm using Terraform to provision an EKS cluster (mostly following the example here). At the end of the tutorial, there's a method of outputting the configmap through the terraform output command, and then applying it to the cluster via kubectl apply -f <file>. I'm attempting to wrap this kubectl command into the Terraform file using the kubernetes_config_map resource, however when running Terraform for the first time, I receive the following error:
Error: Error applying plan:
1 error(s) occurred:
* kubernetes_config_map.config_map_aws_auth: 1 error(s) occurred:
* kubernetes_config_map.config_map_aws_auth: the server could not find the requested resource (post configmaps)
The strange thing is, every subsequent terraform apply works and applies the configmap to the EKS cluster. This leads me to believe it is perhaps a timing issue? I tried to perform a bunch of actions in between the provisioning of the cluster and applying the configmap, but that didn't work. I also put an explicit depends_on argument to ensure that the cluster has been fully provisioned first before attempting to apply the configmap.
provider "kubernetes" {
config_path = "kube_config.yaml"
}
locals {
map_roles = <<ROLES
- rolearn: ${aws_iam_role.eks_worker_iam_role.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
ROLES
}
resource "kubernetes_config_map" "config_map_aws_auth" {
metadata {
name = "aws-auth"
namespace = "kube-system"
}
data {
mapRoles = "${local.map_roles}"
}
depends_on = ["aws_eks_cluster.eks_cluster"]
}
I expect for this to run correctly the first time, however it only runs after applying the same file with no changes a second time.
I attempted to get more information by enabling the TRACE debug flag for terraform, however the only output I got was the exact same error as above.
Well, I don't know if this is still current, but I was dealing with the same trouble and found this:
https://github.com/terraform-aws-modules/terraform-aws-eks/issues/699#issuecomment-601136543
In other words, I changed the cluster's name in the aws_eks_cluster_auth block to a static name, and it worked. Perhaps this is a bug in Terraform.
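For reference, a minimal sketch of that workaround, under the assumption that the cluster is named "my-cluster" (all names here are illustrative): the data sources use a static cluster name instead of a reference to the aws_eks_cluster resource, and the kubernetes provider authenticates with the resulting token rather than a kubeconfig file.
data "aws_eks_cluster" "cluster" {
  # Static name instead of referencing aws_eks_cluster.eks_cluster.name
  name = "my-cluster"
}

data "aws_eks_cluster_auth" "cluster" {
  # Same static name; this is the change described in the linked issue
  name = "my-cluster"
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}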
This seems like a timing issue while bootstrapping your cluster. Your kube-apiserver initially doesn't think there's a configmaps resource.
It's likely that the Role and RoleBinding that it uses to create the ConfigMap have not been fully configured in the cluster to allow it to create a ConfigMap (possibly within the EKS infrastructure), which uses the iam-authenticator and the following policies:
resource "aws_iam_role_policy_attachment" "demo-cluster-AmazonEKSClusterPolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
role = "${aws_iam_role.demo-cluster.name}"
}
resource "aws_iam_role_policy_attachment" "demo-cluster-AmazonEKSServicePolicy" {
policy_arn = "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"
role = "${aws_iam_role.demo-cluster.name}"
}
The depends_on Terraform clause will not do much, since the timing issue seems to happen within the EKS service itself.
I suggest you try the terraform-aws-eks module, which uses the same resources described in the docs. You can also browse through its code if you'd like to figure out how it solves the problem you are seeing.
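If you go that route, usage looks roughly like the sketch below. This is only a sketch: the cluster name, VPC ID, and subnet IDs are placeholders, and the exact input names (for example subnets vs. subnet_ids) differ between major versions of the module, so check the registry docs for the version you pin.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 17.0" # pin a version; inputs change between major versions

  cluster_name = "my-cluster"
  vpc_id       = "vpc-0123456789abcdef0"                # placeholder
  subnets      = ["subnet-0aaa0aaa", "subnet-0bbb0bbb"] # placeholder; called subnet_ids in newer versions
}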

Need advice how to make Spinnaker work with aws ecr?

I'm setting up Spinnaker in K8s with aws-ecr. My setup and steps are:
On the AWS side:
Added policies ecr-pull, ecr-push, and ecr-generate-token
Attached the policy to a role
Spinnaker setup:
Modified values.yaml with the settings below:
```
accounts:
- name: my-ecr
  address: https://123456xxx.dkr.ecr.my-region.amazonaws.com
  repositories:
  - 123456xxx.dkr.ecr..amazonaws.com/spinnaker-test-project
```
Annotated the clouddriver deployment (clouddriver.yaml) to use the created role (using the IAM role in a pod by referencing the role name in an annotation on the pod specification)
But it doesn't work, and the error on the clouddriver side is:
.d.r.p.a.DockerRegistryImageCachingAgent : Could not load tags for 1234xxxxx.dkr.ecr.<my_region>.amazonaws.com/spinnaker-test-project in https://1234xxxxx.dkr.ecr.<my_region>.amazonaws.com
I would like to get some help or advice on what I'm missing, thank you.
Got the answer from the official Spinnaker Slack channel: adding an IAM policy to the clouddriver pod unfortunately won't work, since it uses the Docker client instead of the AWS client. The workaround to make it work can be found here.
Note: ECR support is currently broken in Halyard. This might get fixed in the future after Halyard migrates from the Kubernetes provider v1 to v2, or earlier, so please verify with the community or the docs.

Is it possible to create an exposed kubernetes service based on a new deployment in one command?

I feel this must be possible but haven't managed to find docs on how to do this.
I would ideally like to add service exposure details in a deployment yaml file and the service would come up with a host name and port upon the issuing of a create command with the deployment yaml.
You can write your Deployment manifest (like here), then write your Service manifest (like here), and then put them in the same file with a --- between them (like here). That is part of the YAML spec rather than a Kubernetes-specific thing.
For an example of how you can write the Service and Deployment so that they target the same set of Pods, you can look at the definition of the default backend of the nginx ingress controller for inspiration here.

Can you define Kubernetes Services / Pods using YAML in Terraform?

I am using the Kubernetes Provider to describe services/pods in Terraform.
It can get confusing using the HashiCorp Configuration Language to define kubernetes_pod or kubernetes_service resources, because the Kubernetes documentation describes everything in YAML, which means you need to translate it into HCL.
Is it possible to define pods as YAML and use them with kubernetes_pod and kubernetes_service resources as templates?
While Terraform normally uses HCL, HCL is a superset of JSON (much like YAML itself), so Terraform can also read JSON.
One possible option would be to take the YAML examples you already have and convert them into JSON and then use Terraform on those.
Unfortunately, that's unlikely to work as-is, because the keys will differ from what Terraform expects, so you'd need to write something to do some basic translation of the input YAML into Terraform resource JSON. At that point, it'd probably be worth adding HCL output to the conversion so the resulting Terraform config is more readable, if you intend to keep the Terraform config around instead of just converting and applying it once.
The benefit of doing things this way is that you have a reusable Kubernetes config that could be run using kubectl or other tools, while gaining the power of Terraform's lifecycle management: being able to plan changes and to integrate with non-Kubernetes parts of your infrastructure (such as setting up the instances that run the Kubernetes cluster).
I've not used it much, but I believe Kops will allow you to keep pod/service config in typical Kubernetes YAML files while using Terraform to manage the cluster configuration, and it even allows you to output the Terraform configuration so you can run it outside of Kops itself.
The hashicorp/kubernetes provider does not support raw YAML/JSON, and they have no intention of implementing it.
The possible solutions are:
K2tf, a tool for converting Kubernetes RAW YAML manifests into Terraform HCL for the Kubernetes provider.
Use an alternative community Kubernetes provider, such as gavinbunney/kubectl, which does support raw YAML and can track each resource and its attributes in Terraform state, unlike the kubernetes-alpha provider.
Another solution is to use the hashicorp/kubernetes-alpha provider: you can pass in either a Terraform object or a raw YAML manifest converted into a TF object for use in the provider's resource. The downside is that the attributes are not tracked as individual objects, and thus a change will cause the entire resource to be tainted.
Using the kubectl provider.
The core of this provider is the kubectl_manifest resource, allowing free-form YAML to be processed and applied against Kubernetes. This YAML object is then tracked and handles creation, updates, and deletion seamlessly, including drift detection. This provider is ideal if you want to track the manifest in Terraform:
resource "kubectl_manifest" "test" {
yaml_body = file("path/to/manifest.yaml")
}
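If you have a whole directory of manifests rather than a single file, the same provider also ships path/document data sources that can feed kubectl_manifest. The sketch below assumes the kubectl_path_documents data source with a documents attribute, so double-check the provider docs for the exact names before relying on it.
data "kubectl_path_documents" "manifests" {
  # Glob of YAML files to split into individual documents
  pattern = "./manifests/*.yaml"
}

resource "kubectl_manifest" "example" {
  count     = length(data.kubectl_path_documents.manifests.documents)
  yaml_body = element(data.kubectl_path_documents.manifests.documents, count.index)
}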
Using the kubernetes-alpha provider
The kubernetes_manifest resource represents one Kubernetes resource, as described in the manifest attribute. The manifest value is the HCL transcription of a regular Kubernetes YAML manifest. To transcribe an existing manifest from YAML to HCL, use the Terraform built-in function yamldecode(), or use the tfk8s tool to convert YAML into manifest attributes for the kubernetes-alpha provider's manifest resource.
Example using yamldecode:
resource "kubernetes_manifest" "service" {
provider = kubernetes-alpha
manifest = yamldecode(file("path/to/manifest.yaml"))
}
Why doesn't the kubernetes provider support RAW YAML?
Supporting YAML/JSON in hashicorp/kubernetes was considered before (the very first proposal of the K8s provider was exactly that) and during the initial implementation of this provider, and we decided not to do it.
The reason is that you can't accurately track resources created from RAW YAML as Terraform objects.
From Terraform's developer perspective it is very tricky to get around the way the K8s API works, where you send an array [a, b, c] to the Create API and then Get back [a, b, c, d]. This happens for example with pods that get some secret volumes attached automatically, but it happens with most other resources I had the chance to play with. The whitelisting/blacklisting is the tricky part.
You may also be interested in the following project, which allows you to convert YAML files to Terraform's HCL.
https://github.com/sl1pm4t/k2tf
Description:
A tool for converting Kubernetes API Objects (in YAML format) into HashiCorp's Terraform configuration language.
The converted .tf files are suitable for use with the Terraform Kubernetes Provider