How to set Serverless Framework to deploy artifacts to a specific S3 bucket and not trigger CloudFormation? - aws-cloudformation

When you run sls deploy, the Serverless Framework uploads the artifacts to a specific S3 bucket and triggers CloudFormation to use the generated cloudformation-template-create-stack.json and build the stack.
I have a bucket that contains stack versions like:
- bucket_stack_1
  - 1.0.1
    - cloudformation-template-create-stack.json
  - 1.0.2
    - cloudformation-template-create-stack.json
- bucket_stack_2
  - 13.0.1
    - cloudformation-template-create-stack.json
  - 14.0.1
    - cloudformation-template-create-stack.json
Then I have an HTTPS endpoint (a Lambda) that triggers CloudFormation to use a specific deployment version of cloudformation-template-create-stack.json. This is an example of the payload:
{
  "stackName": "bucket_stack_1",
  "version": "1.0.2"
}
So I would like to know whether it is possible to make the Serverless Framework only save (upload) cloudformation-template-create-stack.json to a custom S3 bucket, without triggering CloudFormation to build the stack.
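For reference, one way to get close to this is to combine provider.deploymentBucket with serverless package, which generates the CloudFormation templates and zipped code locally without calling CloudFormation. The bucket name, paths and version prefix below are only placeholders, so treat this as a sketch rather than a confirmed recipe:

# serverless.yml (sketch -- bucket name is a placeholder)
service: my-service
provider:
  name: aws
  deploymentBucket:
    name: bucket_stack_1

# generate the templates locally; no CloudFormation call is made:
sls package --package ./artifacts/1.0.2
# upload only the template into the versioned layout described above
# (check the exact generated file names in the output directory):
aws s3 cp ./artifacts/1.0.2/cloudformation-template-create-stack.json s3://bucket_stack_1/1.0.2/cloudformation-template-create-stack.json

CloudFormation would then only be invoked later, when the Lambda endpoint picks one of the stored templates.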

Related

How to maintain a Helm repository in GitLab

I have a helm chart and I want to add it to my gitlab repository. But when I run:
helm repo add repo_name url
I am getting the following error:
Error: looks like "https://gitlab.<domain>.com/group/infra/repo/helm/charts/" is not a valid chart repository or cannot be reached: error converting YAML to JSON: yaml: line 3: mapping values are not allowed in this context
The linter shows it is a valid chart.
Here is index.yaml:
apiVersion: v1
entries:
helloworld:
- apiVersion: v2
appVersion: 1.0.0
created: "2021-06-28T14:05:53.974207+01:00"
description: This Helm chart will be used to create hello world
digest: f290432f0280fe3f66b126c28a0bb21263d64fd8f73a16808ac2070b874619e7
name: helloworld
type: application
urls:
- https://gitlab.<domain>.com/group/infra/repo/helm/charts/helloworld-0.1.0.tgz
version: 0.1.0
generated: "2021-06-28T14:05:53.973549+01:00"
Not sure what is missing here.
It looks like you want to use a Helm chart that is hosted on GitLab. Unfortunately, it won't work the way you expect. As Lei Yang rightly mentioned in the comments:
helm repo and git repo are different things.
In the official documentation of Helm, you can find The Chart Repository Guide.
There you can also find a guide on how to create a chart repository:
A chart repository is an HTTP server that houses an index.yaml file and optionally some packaged charts. When you're ready to share your charts, the preferred way to do so is by uploading them to a chart repository.
It also has a section on how to properly host chart repos. There are several ways to do this: for example, you can use a Google Cloud Storage (GCS) bucket, an Amazon S3 bucket, GitHub Pages, or even your own web server.
You can also use the ChartMuseum server to host a chart repository from a local file system.
ChartMuseum is an open-source Helm Chart Repository server written in Go (Golang), with support for cloud storage backends, including Google Cloud Storage, Amazon S3, Microsoft Azure Blob Storage, Alibaba Cloud OSS Storage, Openstack Object Storage, Oracle Cloud Infrastructure Object Storage, Baidu Cloud BOS Storage, Tencent Cloud Object Storage, DigitalOcean Spaces, Minio, and etcd.
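As a rough idea of how small that setup can be, a local-storage ChartMuseum run looks roughly like this (the port and storage directory are arbitrary choices here; check the ChartMuseum docs for the exact flags):

chartmuseum --port=8080 --storage="local" --storage-local-rootdir="./chartstorage"
helm repo add my-charts http://localhost:8080

Charts placed in that storage directory are then served from the repo URL you added.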
Alternatively, it is also possible to host Helm charts in JFrog.
You can host your own public Helm repository on Git. I have done it on GitHub and the process is very easy and straightforward.
You can follow this link
https://medium.com/@mattiaperi/create-a-public-helm-chart-repository-with-github-pages-49b180dbb417
You will have to package the chart and create an index.yaml file. You will also have to host your repository branch as GitHub Pages.
I am not sure if GitLab also supports this, but it is worth a shot.
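Whichever hosting option you go with, the packaging side is the same couple of commands; as a minimal sketch (the chart directory and URL are placeholders):

helm package ./helloworld
helm repo index . --url https://example.github.io/charts

That produces the chart .tgz and an index.yaml whose entries point at the given URL; upload or commit both to the host, and the repo can then be consumed with helm repo add.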

Google cloud functions deploy to different environments with cloud build

I want to set up a CI/CD pipeline for my GCP project based on the Cloud Functions service. Right now I am able to deploy to the GCP project from Cloud Build with a trigger when pushing code to the staging branch.
My cloudbuild.yml file looks like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['functions', 'deploy', 'profile', '--region', 'europe-west1', '--trigger-http', '--runtime', 'nodejs8', '--entry-point', 'profile']
  dir: functions
After that, I created a second GCP project as a production environment, to isolate the staging and production environments. I tried to follow the same process, this time with the master branch. I created a trigger to deploy the Cloud Function when pushing to master, just like in the staging case, but it fails to trigger the build. Any ideas?
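As an aside on the multi-environment part: one common way to reuse the same build config across both projects is Cloud Build's user-defined substitutions, which each trigger (staging or production) can override. A sketch, with _REGION as a made-up substitution name:

steps:
- name: 'gcr.io/cloud-builders/gcloud'
  args: ['functions', 'deploy', 'profile', '--region', '${_REGION}', '--trigger-http', '--runtime', 'nodejs8', '--entry-point', 'profile']
  dir: functions
substitutions:
  _REGION: europe-west1

The trigger in the production project can then override _REGION with its own value; whether the trigger fires at all still depends on the repository being connected to that project and the trigger's branch filter matching master.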

Azure devops deploy .net core to AWS elastic beanstalk

I am using Azure DevOps for CI/CD of a .NET Core application.
I am doing a NuGet restore, build solution, tests, etc., followed by a "Publish Artifact" step like below. All fairly standard.
I want to deploy this to AWS Elastic Beanstalk, and have the following task set up in Azure DevOps.
I have tried a number of things but am not sure how to get the .NET Core application deployed to AWS Elastic Beanstalk. The problem, I believe, is with the location of the zip file. What should this be? Is there anything else that needs to be done? It just errors while trying to create the deployment bundle in the "Deploy to Elastic Beanstalk" task (by the way, the AWS connection etc. is working fine).
It depends on the output of the previous dotnet publish.
You can get that info from the Deployment Bundle Type option:
ASP.NET Core (Source: dotnet publish)
As you can see, the source is the dotnet publish task.
Then the next option, Published Application Path, should be based on where the previous build steps placed the deployment artifacts. It should be the path and filename of the .zip file containing the artifacts.
You can check the details in the documentation:
AWS Elastic Beanstalk Deploy Application Task
Hope this helps.
Here is a full tutorial on how to do an Elastic Beanstalk deployment using Azure DevOps.
You can also use this PowerShell script to automate the end-to-end deployment instead of using the Azure DevOps plugins. It gives you more flexibility in what you can do with your deployments.

Azure App Service Deploy Release (Azure DevOps) overwrites the Multi-Container Docker Compose (Preview) settings in Azure Portal

I have a multi-container app running with App Service - Web App for Containers. It all works fine as long as the Docker Compose (Preview) configuration is provided under the Container Settings tab.
Currently, I am using Azure DevOps to create builds for specific containers, and then use the Continuous Deployment option (in the Azure Portal) under Container Settings to pull the latest deployed container image from ACR. This also works fine. I can run builds for individual containers and deploy only a specific container without affecting the web app. (Each container is a separate project and only has a Dockerfile, without requiring docker-compose.)
However, when I create a Release from Azure DevOps using Azure App Service Deploy (version 4.*), the Docker Compose (Preview) configuration in the Azure Portal is completely wiped out, the setting defaults to Single Container, and the application breaks. The Docker Compose configuration is needed, as it makes the main container aware of the other containers.
I am using version 4.* of Azure App Service Deploy. I would like to use the Release feature of Azure DevOps as it provides more control.
Is there a way to specify the docker-compose multi-container configuration from Azure App Service Deploy version 4, so that the App Service is aware of the multi-container configuration and the Docker Compose (Preview) config is not wiped out?
Thanks,
Karan
Replace the Azure App Service deploy task in your Release pipeline with an Azure Web App for Containers task. There are parameters for multiple images and a configuration file (Docker-Compose.yml).
As Dave mentioned, this is possible using the AzureWebAppContainer task; however, the documentation does not mention the options for multi-container deployment.
I had to dig into the source code of that task to discover the task parameters.
https://github.com/microsoft/azure-pipelines-tasks/blob/master/Tasks/AzureWebAppContainerV1/taskparameters.ts
I'll summarise my setup to give you an idea of how it can be used. I have a multi-stage pipeline defined in YAML. There are two stages: the first stage builds and publishes the Docker images, and the second stage updates the Web App for Containers app service.
The first stage also produces an artifact, namely the docker-compose.yml file that is used to configure the Web App for Containers app service. In my repository I have a template for this file. During the pipeline execution, the tags of the Docker images are replaced within this template (e.g. using envsubst or sed). Then the resulting docker-compose.yml file is published as an artifact.
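For illustration, a substitution step of that kind can be a plain script task along these lines (the template path and the IMAGE_TAG variable are just illustrative placeholders, assuming the template contains ${IMAGE_TAG}); the publish task below then picks up the rendered file:

- script: |
    export IMAGE_TAG=$(Build.BuildId)
    envsubst < pipelines/assets/docker-compose.template.yml > pipelines/assets/docker-compose.yml
  displayName: 'Render docker-compose.yml'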
- task: PublishBuildArtifacts@1
  displayName: "Publish artifact"
  inputs:
    pathToPublish: $(Build.SourcesDirectory)/pipelines/assets/docker-compose.yml
    artifactName: yml
In the second stage of the pipeline, the artifact is downloaded and used to configure the Web App for Containers. In the example below, the AzureWebAppContainer task is a step of a deployment job.
- task: AzureWebAppContainer@1
  displayName: 'Azure Web App for Containers'
  inputs:
    azureSubscription: '<YOUR_SUBSCRIPTION>'
    appName: '<YOUR_WEB_APP_FOR_CONTAINERS>'
    multicontainerConfigFile: $(System.ArtifactsDirectory)/yml/docker-compose.yml
The generated docker-compose.yml is stored as an artifact and you can always consult it later.

pass artifacts between Concourse jobs without S3 or similar external resource

I am using Concourse and building binaries that I would like to send off to integration tests. However, they are lightweight, and using an S3 bucket for permanent storage seems like overkill. Additionally, I am versioning with the semver resource, which also seems to require S3 or something similar to back it.
Is there a way to configure a local on-worker or similar blobstore? Can I use the Concourse Postgres DB to store my semver? It's small enough that it should fit in a DB table.
Short answer: no.
Concourse is designed so that the Concourse deployment itself is stateless, explicitly not providing artifact persistence and striving to be largely free of configuration.
This forces pipelines to be self-contained, which makes your CI reproducible. If your Concourse server burns down, you haven't lost anything special. You can just spin up another one and send up the original pipeline. Everything will then continue from where it left off: your versions will continue counting from where they were, rather than restarting from 0.0.0, and all of your artifacts are still wherever they are.
All that being said, you're free to deploy your own S3-compatible blob store. The s3 resource should talk to it just fine.
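For example, the s3 resource accepts a custom endpoint in its source, so a self-hosted store such as Minio can back it; a rough sketch with placeholder bucket, endpoint and credentials:

- name: binaries
  type: s3
  source:
    bucket: my-binaries
    regexp: app-(.*).tar.gz
    endpoint: http://minio.internal:9000
    access_key_id: ((minio-access-key))
    secret_access_key: ((minio-secret-key))

The semver resource can point at the same store via its s3 driver, or at a git repository as the next answer shows.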
We use the semver resource with a gist. Just get the clone URL from the gist page, then set your resource:
- name: version
  type: semver
  source:
    driver: git
    branch: master
    uri: {{version-url}}
    file: Version
    private_key: {{github-private-key}}