I am following the Flux Kustomize + Helm example. Running the original tutorial command:
flux bootstrap github --context=staging --owner=${GITHUB_USER} --repository=${GITHUB_REPO} --branch=main --personal --path=clusters/staging
I got:
✗ context "staging" does not exist
Here is the output of flux get sources all:
NAME REVISION SUSPENDED READY MESSAGE
gitrepository/flux-system main/3fabbc2 False True stored artifact for revision 'main/3fabbc21c473f479389790de8d1daa20d207ebd6'
gitrepository/podinfo master/132f4e7 False True stored artifact for revision 'master/132f4e719209eb10b9485302f8593fc0e680f4fc'
How do I create this context?
You need to set --context to your Kubernetes cluster context, e.g. --context=${K8S_CLUSTER_CONTEXT}. I am using kind, so for me it is --context=kind-kind. For example, you can run:
export K8S_CLUSTER_CONTEXT=kind-kind
flux bootstrap github \
--context=${K8S_CLUSTER_CONTEXT} \
--owner=${GITHUB_USER} \
--repository=${GITHUB_REPO} \
--branch=main \
--personal \
--path=clusters/staging
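If you're not sure what your cluster's context is called, you can list the contexts in your kubeconfig first with standard kubectl commands:
kubectl config get-contexts      # list all contexts known to your kubeconfig
kubectl config current-context   # show the context currently in use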
In my GitHub Actions workflow, I have a job where I install eksctl:
install-cluster:
  name: "staging - create cluster"
  runs-on: ubuntu-latest
  steps:
    - name: Setup eksctl
      run: |
        curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
        sudo mv /tmp/eksctl /usr/local/bin
I then have another job (within the same workflow) where I deploy some applications. How can I reuse the eksctl installation from my first job in my second job? Currently, I add a step to the second job that installs eksctl again. Is the cache the only way of doing this, i.e. install the tool, keep its binary in the cache, and restore it in the second job?
Each GitHub Actions job runs on its own runner, which means you can't reuse a binary in your second job that was installed in your first job. If you want to make it available without having to reinstall it, you'll likely have to use an action to upload [1] the binary as an artifact in the first job and download [2] it in your second job.
Alternatively, if you'd like to cache the binary and reuse it across jobs or workflow runs, you can set up https://github.com/actions/cache
References:
[1] https://github.com/actions/upload-artifact
[2] https://github.com/actions/download-artifact
Whenever I run my basic deploy command, everything is redeployed in my environment. Is there any way to tell Helm to only apply things if there were changes made or is this just the way it works?
I'm running:
helm upgrade --atomic MyInstall . -f CustomEnvironmentData.yaml
I didn't see anything in the Helm Upgrade documentation that seemed to indicate this capability.
I don't want to bounce my whole environment unless I have to.
There's no way to tell Helm to do this, but also no need. If you submit an object to the Kubernetes API server that exactly matches something that's already there, generally nothing will happen.
For example, say you have a Deployment object that specifies image: my/image:{{ .Values.tag }} and replicas: 3, and you submit this once with tag: 20200904.01. Now you run the helm upgrade command you show, with that tag value unchanged in the CustomEnvironmentData.yaml file. This does resubmit the object and trigger the deployment controller inside Kubernetes, which sees that it wants 3 pods to exist with the image my/image:20200904.01. Those 3 pods already exist, so it does nothing.
(This is essentially the same as the "don't use the latest tag" advice: if you try to set image: my/image:latest, and redeploy your Deployment with this tag, since the Deployment spec is unchanged Kubernetes won't do anything, even if the version of the image in the registry has changed.)
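If you want to confirm that nothing actually got bounced, one way (assuming the chart renders a Deployment named my-deployment; the name is illustrative) is to check that the upgrade did not create a new rollout revision:
kubectl rollout history deployment/my-deployment   # no new revision appears if the pod template is unchanged
kubectl get pods                                   # the pods' AGE shows they were not restarted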
You should probably use helm diff upgrade
https://github.com/databus23/helm-diff
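If you don't have the plugin yet, it installs through Helm's plugin mechanism (this is the install command from the plugin's README):
helm plugin install https://github.com/databus23/helm-diff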
$ helm diff upgrade -h
Show a diff explaining what a helm upgrade would change.

This fetches the currently deployed version of a release
and compares it to a chart plus values.
This can be used to visualize what changes a helm upgrade will
perform.

Usage:
  diff upgrade [flags] [RELEASE] [CHART]

Examples:
  helm diff upgrade my-release stable/postgresql --values values.yaml

Flags:
  -h, --help                   help for upgrade
      --detailed-exitcode      return a non-zero exit code when there are changes
      --post-renderer string   the path to an executable to be used for post rendering. If it exists in $PATH, the binary will be used, otherwise it will try to look for the executable at the given path
      --reset-values           reset the values to the ones built into the chart and merge in any new values
      --reuse-values           reuse the last release's values and merge in any new values
      --set stringArray        set values on the command line (can specify multiple or separate values with commas: key1=val1,key2=val2)
      --suppress stringArray   allows suppression of the values listed in the diff output
  -q, --suppress-secrets       suppress secrets in the output
  -f, --values valueFiles      specify values in a YAML file (can specify multiple) (default [])
      --version string         specify the exact chart version to use. If this is not specified, the latest version is used

Global Flags:
      --no-color   remove colors from the output
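Applied to the release from your question, a check before upgrading could look like this (release name and values file taken from your helm upgrade command):
helm diff upgrade MyInstall . -f CustomEnvironmentData.yaml
helm upgrade --atomic MyInstall . -f CustomEnvironmentData.yaml   # run the upgrade only if the diff shows changes you want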
I've deployed a function to Google Cloud Functions using the following gcloud command:
gcloud functions deploy my_new_function --runtime python37 \
--trigger-event providers/cloud.firestore/eventTypes/document.create \
--trigger-resource projects/my_project_name/databases/default/documents/experiences/{eventId}
This worked successfully, and my function was deployed. Here is what I expected to happen as a result:
Any time a new document was created within the experiences Firestore collection, the function my_new_function would be invoked.
What is actually happening:
my_new_function is never being invoked as a result of a new document being created within experiences
The --source parameter is for deploying from source control, which is not what you are trying to do. You will want to deploy from your local machine instead: run gcloud from the directory containing the function code you want to deploy.
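As a sketch (the directory name is hypothetical), that means changing into the folder that contains main.py and requirements.txt and running the same deploy command from there:
cd my_new_function_dir   # hypothetical directory holding main.py and requirements.txt
gcloud functions deploy my_new_function --runtime python37 \
  --trigger-event providers/cloud.firestore/eventTypes/document.create \
  --trigger-resource projects/my_project_name/databases/default/documents/experiences/{eventId}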
We're creating a micro-services project to be deployed in multiple environments (dev, qa, stg, prd), and we plan on using CloudFormation templates with nested stacks for the resources shared between multiple services.
The thing is that when using nested stacks you need to specify the TemplateURL of the nested resource, and this is a static URL pointing to an S3 bucket that changes every time you update the template (upload a new template with some changes).
So the question is: what is the best way to use a version control tool like Git to keep track of the changes made to a nested template, given that uploading it to S3 gives you a new URL each time?
The cloudformation package command in the AWS Command Line Interface will upload local artifacts (including local templates referenced by the TemplateURL property of AWS::CloudFormation::Stack resources) to an S3 bucket, and output a transformed CloudFormation template referencing the uploaded artifacts.
Using this command, the best way to track changes would be to commit both the base template and nested-stack templates to Git, then use cloudformation package as an intermediate processing step in your deploy script, e.g., with cloudformation deploy:
S3_BUCKET=my_s3_bucket
STACK=my_stack_name   # stack name used by the deploy step below
TEMPLATE=base_template.yml
OUTPUT_TEMPLATE=$(mktemp)
aws cloudformation package \
--template-file $TEMPLATE \
--s3-bucket $S3_BUCKET \
--output-template-file $OUTPUT_TEMPLATE
aws cloudformation deploy \
--template-file $OUTPUT_TEMPLATE \
--stack-name $STACK
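For reference, this is roughly what the nested-stack reference could look like in base_template.yml before packaging (resource and file names are illustrative); aws cloudformation package replaces the local path with the S3 URL of the uploaded copy:
Resources:
  NetworkStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: ./network.yml   # local file path, rewritten to an S3 URL in the output template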
I have created a simple Node.js/Express/MongoDB app which has 3 API endpoints to perform basic CRUD operations.
If I were to deploy this to Heroku as a service and use a Bitbucket pipeline to perform CI/CD, that would do the job for me. On top of this, I can have Heroku pipelines with multiple stages of environments, like dev and production.
After doing all of the above I would be done with my pipeline and happy about it.
Now coming back to serverless: I have deployed my API endpoints to AWS as Lambda functions, and that is the only environment (let's say DEV) present at the moment.
Now how can I achieve a pipeline similar to the one mentioned earlier in a serverless architecture?
All the solutions out there (maybe I missed some) do not suggest promoting the actual code which was tried and tested in the dev environment to production, but rather deploying a new set of code. Is this a limitation?
Option 1
Presuming that you are developing a Node serverless application, deploying a new set of code with the same git commit ID and package-lock.json/yarn.lock should result in the same environment. This can be achieved by executing multiple deploy commands for different stages, e.g.:
sls deploy -s dev
sls deploy -s prod
There are various factors that may cause the deployed environments to be different, but the risk of that should be very low. This is the simplest CI/CD solution you can implement.
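As a minimal sketch of promoting the exact code that was tested on dev (the commit SHA is a placeholder), the prod deploy step could be:
git checkout <tested-commit-sha>   # the commit that was deployed and verified on dev
npm ci                             # install exactly what package-lock.json pins
sls deploy -s prod                 # deploy the same code to the prod stage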
Option 2
If you'd like to avoid the risk from Option 1 at all costs, you can split the package and deployment phases in your pipeline. Create the packages from the codebase you have checked out before you deploy:
sls package -s dev --package build/dev
sls package -s prod --package build/prod
Archive as necessary, then to deploy:
sls deploy -s dev --package build/dev
sls deploy -s prod --package build/prod
Option 3
This is an improved version of Option 2. I have not tried this solution, but it should theoretically be possible. The problem with Option 2 is that you have to execute the package command multiple times, which might not be desirable (YMMV). To avoid packaging more than once, first create a single package:
sls package -s dev --package build
Then to deploy:
# Execute a script to modify build/cloudformation-template-update-stack.json to match dev environment
sls deploy -s dev --package build
# Execute a script to modify build/cloudformation-template-update-stack.json to match prod environment
sls deploy -s prod --package build
If you have the following resource in build/cloudformation-template-update-stack.json for example:
"MyBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "myapp-dev-bucket"
}
},
The script you execute before sls deploy should then modify the CloudFormation resource to:
"MyBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "myapp-prod-bucket"
}
},
This option of course implies that you can't have any hardcoded resource names in your app; every resource name must be injected from serverless.yml into your Lambdas, as sketched below.
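A minimal sketch of that kind of injection, reusing the bucket from the example above (all names are illustrative):
# serverless.yml
provider:
  name: aws
  environment:
    BUCKET_NAME: myapp-${opt:stage, 'dev'}-bucket   # resolved from the stage at deploy/package time
# inside a Lambda handler, read the name from the environment instead of hardcoding it:
#   const bucket = process.env.BUCKET_NAME;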