Shared AWS API Gateway: multiple deployments on a single stage (e.g. dev) with AWS CDK - aws-api-gateway

I've created a shared AWS API Gateway.
For API-1: import the gateway, attach a Lambda integration to that REST API as a resource, and add the REST API to a deployment stage, e.g. dev.
For API-2: import the gateway, attach a Lambda integration to that REST API as a resource, and add the REST API to the same deployment stage, e.g. dev (error: says the stage already exists).
Is there any way I can update the stage with the latest resources?
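For context, a minimal CDK (TypeScript) sketch of the pattern described above; the REST API id, root resource id, Lambda asset path and resource names are illustrative assumptions, not taken from the actual stacks:

import * as cdk from 'aws-cdk-lib';
import * as apigateway from 'aws-cdk-lib/aws-apigateway';
import * as lambda from 'aws-cdk-lib/aws-lambda';

// Stack for API-1; API-2 follows the same pattern from a second stack,
// which is where the "stage already exists" error shows up.
export class Api1Stack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Import the shared REST API by id and root resource id (placeholder values)
    const sharedApi = apigateway.RestApi.fromRestApiAttributes(this, 'SharedApi', {
      restApiId: 'abc123',
      rootResourceId: 'xyz789',
    });

    // Attach a Lambda integration as a new resource on the imported API
    const handler = new lambda.Function(this, 'Api1Handler', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('lambda/api1'),
    });
    sharedApi.root.addResource('api1')
      .addMethod('GET', new apigateway.LambdaIntegration(handler));

    // A new Deployment is needed so the added resources show up on the stage;
    // declaring a Stage named 'dev' again from the second stack is what
    // CloudFormation rejects with "stage already exists".
    const deployment = new apigateway.Deployment(this, 'Api1Deployment', { api: sharedApi });
    new apigateway.Stage(this, 'DevStage', { deployment, stageName: 'dev' });
  }
}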

Related

AWS ECS Blue Green Deployments - CloudFormation Error

I'm trying to execute a blue/green deployment of an ECS task within AWS using the CloudFormation approach (as documented here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/blue-green.html), and the deployment fails.
The initial stack deployment works fine and the ECS task is deployed and running as expected with the correct load balancer, target group, etc. However, when updating the task definition to trigger a blue/green deployment, it fails with the message:
Imports and exports are currently not supported on templates using hooks
The deployment is created in CodeDeploy, so it's obviously triggered as expected, but the deployment screen in the AWS console shows the following error:
The deployment failed because the stack update that triggered this CodeDeploy deployment failed in CloudFormation. In the AWS CloudFormation console, go to the Events tab to view status and error messages.
But the puzzling thing is the CloudFormation template does not appear to contain any imports or exports. I have even tried copying the yml from the documented example and it doesn't work.
I'm executing the CloudFormation updates using the Serverless Framework, but I don't think that's the issue; the error is logged in the CloudFormation stack's Events tab.
Probably not unreasonable to expect the example in the AWS documentation to work?
We did find the cause of this issue, and in fact the problem was caused by running the CloudFormation template via the Serverless Framework.
The serverless approach works for all our other AWS deployments, but the CodeDeploy transform explicitly requires that there be no outputs from the CF template. Serverless, however, adds the name of the S3 bucket it uses as an output, which breaks this particular use case.
Therefore the solution was to invoke the CF template directly from the AWS CLI, and it works perfectly.
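For reference, a direct invocation from the AWS CLI looks roughly like this (stack and file names are illustrative):

aws cloudformation deploy \
  --template-file template.yml \
  --stack-name ecs-blue-green-stack \
  --capabilities CAPABILITY_IAM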

Spring Batch worker pods are unable to pick up a custom service account for Spring Cloud Deployer Kubernetes

I am trying to run a Spring Batch job with remote partitioning on a K8s cluster using spring-cloud-deployer-kubernetes. Even though I have configured a service account and referenced it in my application properties as below,
spring.cloud.deployer.kubernetes.deployment-service-account-name=scdf-sa
the master task is still unable to spawn worker pods; it seems the property is not picked up when launching the task from the Spring Cloud Data Flow UI, and this error is thrown in the master pod:
io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://<IP>/api/v1/namespaces/test/pods/batchsampleappworker-aeghj644g. Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. pods "batchsampleappworker-j3ljqq3de9" is forbidden: User "system:serviceaccount:test:default" cannot get resource "pods" in API group "" in the namespace "test".
PS: I am using spring-cloud-deployer-kubernetes version 2.5.0.
Could someone give me some hints on how to correctly configure the service account?
Thanks in advance!
As per the official Spring Cloud Data Flow documentation here, adding the below to the SCDF server config map solved the issue for me.
data:
  application.yaml: |-
    spring:
      cloud:
        dataflow:
          task:
            platform:
              kubernetes:
                accounts:
                  default:
                    deploymentServiceAccountName: myserviceaccountname

How to run a script which starts a Kubernetes cluster on Azure DevOps

I am trying to start a Kubernetes cluster and then run tests and publish the results. Do you have any idea how this can be done?
I created a pipeline, but I do not know which YAML to use,
or which task to add first - Kubernetes deploy or something else.
We have a Kubernetes deployment.yml file: it takes the container image (exampleacr.io/sampleapp) that we are going to publish on AKS.
App version: app/v1
service.yml just exposes the application. App version: v1
Both YAML files need to be added. Please refer to WAY 2 for modifying them manually.
WAY 1:
Quick way: Deploy to Azure Kubernetes Service will do everything that's needed, because if you use the Deploy to Azure Kubernetes Services template, these variables get defined for you.
Steps:
Create an AKS cluster and an ACR (container registry) in Azure.
In Azure DevOps:
Create a pipeline > choose any source, e.g. select an application hosted in GitHub.
Then select Deploy to Azure Kubernetes Service > select your AKS subscription > select the existing cluster > then select the container registry that you want to put the Docker image into. Keep the remaining settings as default.
Click on Validate and configure;
Azure Pipelines will generate a YAML file.
In the review of the pipeline YAML (azure-pipelines.yml) you have two stages: Build and Deploy.
Click Save and run: this saves the YAML file in the master branch and creates manifest files (deployment.yml and service.yml) for the Kubernetes deployment.
Clicking Save and run will also trigger a build.
Reference
WAY 2: Using a Docker image
To make modifications in the azure-pipelines.yml file: in the 3rd step from above, select Docker image instead of Deploy to Azure Kubernetes Service.
Under Configure pipeline, if the Dockerfile is in Build.SourcesDirectory in our application, it will appear as, say, $(Build.SourcesDirectory)/app/Dockerfile.
That is the Dockerfile the pipeline builds.
In the review of the pipeline YAML (azure-pipelines.yml) a few things can be modified:
You can change the variable tag to the repo name, and then deployment.yml and service.yml can be added to the pipeline YAML with a few modifications.
The Build stage is automatic and there is no need to modify it.
You have to add push and deploy stages in the YAML file as shown in the article,
and get the source code here.

How can I, using Pulumi, get continuous deployment between ACR and an App Service container?

I want to create a Pulumi script that automatically creates an instance of an App Service and ties it to a newly created Azure Container Registry. The goal is to get an automatic update of my App Service when I push a new image to the registry.
So what I think I need is a way to get the App Service's Container Settings -> Continuous Deployment -> Webhook URL after I create the App Service using Pulumi.
From that URL I can then (I hope) create a containerservice RegistryWebhook between the registry and the App Service.
Or is there any simpler way to achieve this? (Getting an auto-update of an App Service after a docker push?)
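A rough TypeScript sketch of that approach using the classic @pulumi/azure provider's containerservice.RegistryWebhook; the webhook URL format, the sampleapp image name, and sourcing the publishing password from Pulumi config are assumptions rather than a verified recipe:

import * as pulumi from '@pulumi/pulumi';
import * as azure from '@pulumi/azure';

const config = new pulumi.Config();
// Assumption: the App Service's publishing (SCM) password, supplied out of band
const publishPassword = config.requireSecret('appServicePublishPassword');

const rg = new azure.core.ResourceGroup('rg');

const registry = new azure.containerservice.Registry('acr', {
  resourceGroupName: rg.name,
  location: rg.location,
  sku: 'Standard',
  adminEnabled: true,
});

const plan = new azure.appservice.Plan('plan', {
  resourceGroupName: rg.name,
  location: rg.location,
  kind: 'Linux',
  reserved: true,
  sku: { tier: 'Basic', size: 'B1' },
});

const app = new azure.appservice.AppService('app', {
  resourceGroupName: rg.name,
  location: rg.location,
  appServicePlanId: plan.id,
  appSettings: {
    DOCKER_ENABLE_CI: 'true', // turns on the container continuous-deployment webhook
    DOCKER_REGISTRY_SERVER_URL: pulumi.interpolate`https://${registry.loginServer}`,
    DOCKER_REGISTRY_SERVER_USERNAME: registry.adminUsername,
    DOCKER_REGISTRY_SERVER_PASSWORD: registry.adminPassword,
  },
  siteConfig: {
    linuxFxVersion: pulumi.interpolate`DOCKER|${registry.loginServer}/sampleapp:latest`,
  },
});

// Assumption: the CI webhook URL follows the usual SCM pattern
// https://$<app-name>:<publish-password>@<app-name>.scm.azurewebsites.net/docker/hook
const webhookUrl = pulumi.interpolate`https://$${app.name}:${publishPassword}@${app.name}.scm.azurewebsites.net/docker/hook`;

// Registry webhook: on docker push of the repository, call the App Service CI hook
new azure.containerservice.RegistryWebhook('ciWebhook', {
  resourceGroupName: rg.name,
  registryName: registry.name,
  location: rg.location,
  serviceUri: webhookUrl,
  actions: ['push'],
  scope: 'sampleapp:latest',
  status: 'enabled',
});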

How to connect to an on-premises Kubernetes cluster using a Jenkinsfile

I am trying to deploy an application on a Kubernetes cluster using a Jenkins multibranch pipeline and a Jenkinsfile, but I am unable to make the connection between Jenkins and Kubernetes. On the code side I can't share more details here.
I just want to know if there is any way to make this connection (Jenkins and Kubernetes) using the Jenkinsfile so that I can use it to deploy the application on Kubernetes.
The following is the technology stack, which might clarify my issue:
The Jenkinsfile is kept at the root of the project in GitHub.
A separate Jenkins server where the pipeline is created to deploy the application on Kubernetes.
An on-premises Kubernetes cluster.
You need credentials to talk to Kubernetes. When you have automation like Jenkins running jobs, it's best to create a service account for Jenkins; look here for some documentation. Once you create the Jenkins service account, you can extract an authentication token for that account, which you put into Jenkins. What I would recommend, since your Jenkins is not a pod inside your Kubernetes cluster, is to upload a working kubectl config as a secret file in the Jenkins credentials manager.
Then, in your Jenkins job configuration, you can use that secret. Jenkins can put the file somewhere for your job to access, and then in your Jenkinsfile you can run commands with "kubectl --kubeconfig= ...".
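A minimal declarative-pipeline sketch of that idea, assuming a secret-file credential with ID 'kubeconfig' and a manifest at k8s/deployment.yml (both names are illustrative):

pipeline {
  agent any
  stages {
    stage('Deploy to Kubernetes') {
      steps {
        // Bind the uploaded kubeconfig secret file to a temporary path for this step
        withCredentials([file(credentialsId: 'kubeconfig', variable: 'KUBECONFIG_FILE')]) {
          sh 'kubectl --kubeconfig="$KUBECONFIG_FILE" apply -f k8s/deployment.yml'
        }
      }
    }
  }
}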