Automatically increment Datadog version tag - deployment

I want to use Datadog's Deployment Tracking, but to do so I need to set the version tag dynamically. Right now my backend tag is set in the Dockerfile ENV and LABEL, and on the frontend it is hardcoded in datadogRum.init().
I usually do one deployment per day. I would like these version tags to be auto-incremented, but I haven't found a good way to do it.
I tried using an environment variable from my build tools to update them, but could not find a reliable way to do that. I use Codefresh for backend builds and AWS Amplify for the frontend.
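One approach that avoids a counter entirely is to derive the version from the date and commit at build time. A sketch, assuming Codefresh's CF_SHORT_REVISION variable on the backend and Datadog's unified-service-tagging label name; the DD_VERSION variable and the registry details are illustrative, so verify them against your own setup:

```shell
# Derive a version tag instead of hardcoding it. CF_SHORT_REVISION is set by
# Codefresh; the git fallback lets the same script run locally or in other CIs.
SHA="${CF_SHORT_REVISION:-$(git rev-parse --short HEAD 2>/dev/null || echo dev)}"
VERSION="$(date -u +%Y.%m.%d)-${SHA}"
echo "$VERSION"

# Backend: feed it into the image so ENV and LABEL stay in sync:
#   docker build --build-arg DD_VERSION="$VERSION" .
# Dockerfile:
#   ARG DD_VERSION
#   ENV DD_VERSION=$DD_VERSION
#   LABEL com.datadoghq.tags.version="$DD_VERSION"
# Frontend: export the same value as a build-time env var in Amplify and
# read it in datadogRum.init({ version: ... }).
```

Since both builds derive the value the same way from the same commit, the backend and frontend version tags line up without manual bumping.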

Can someone explain some use cases of Helm to me?

I'm currently using Kubernetes and I came across Helm.
Let's say I don't like the idea of "infecting" my Kubernetes cluster with a process that is not related to my applications, but I would gladly accept it if it were beneficial.
So I did some research, but I still can't find anything I can't easily do with my YAML descriptors and kubectl, so for now I can't find a use for it except, maybe, for handling different environments.
For example (taking these from guides I read):
you can easily install an application, e.g. helm install nginx -> I add an nginx image to my deployment descriptor, done
repositories -> I have Docker ones (where I pull my images from)
you can easily helm rollback in case of a release failure -> I just change the image version to the previous one in my Kubernetes descriptor, easy
What bothers me is that, at the level of commands, it takes pretty much the same effort (helm update -> kubectl apply).
In exchange, I get a lot of boilerplate from keeping the directory structure Helm wants, and I feel like I'm missing the control I have with plain deployment descriptors... what am I missing?
Your question is totally understandable. For small and simple deploys the benefit is not actually that great, but when the deployment of something is very complex, Helm helps a lot.
Say you have a couple of squads that develop microservices for some company. If you can make a chart that works for most of them, the deployment of each microservice would differ only by the image and the resources required. This way you get a standardized deployment that is easier for all developers.
Another use case is deploying applications that require a lot of moving parts. For example, if you want to deploy a Grafana server on Kubernetes, you're probably going to need at least a Deployment and a ConfigMap, then you would need a Service that matches this Deployment. And if you want to expose it to the internet, you need an Ingress too.
One relatively simple application would require 4 different YAMLs that you would have to configure manually and verify; instead, you could do a simple helm install and reuse the configuration that someone has already made, sometimes even the company that created the application.
There are a lot of other use cases, but these two are the ones I would say are the most common.
Here are three suggestions of ways Helm can be useful:
Your continuous deployment system somewhat routinely produces new builds and wants to send them to the Kubernetes cluster. You can use templating to specify the image name and tag in a Deployment, so that helm upgrade ... --set tag=201907211931 requests a specific tag.
You might have various service-specific controls like the log level or external database hostnames. The Helm values mechanism gives a uniform way to specify them, without having to know the details of the Kubernetes YAML files.
There is a repository of pre-packaged application charts, so if you want replicated PostgreSQL with in-cluster persistent storage, that's already built for you and you can just depend on it, rather than figuring out the right combination of StatefulSets and PersistentVolumeClaims yourself.
You can combine these in interesting (and potentially complex) ways: use an in-cluster database for developer testing but use a cloud-hosted and backed-up database for production, for example, and compute the database host name based on what combination of settings are provided.
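A rough sketch of the templating from the first suggestion (the chart layout, registry name, and value names here are illustrative, not from any particular chart):

```yaml
# templates/deployment.yaml (excerpt)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          # the CD system picks the tag at deploy time:
          #   helm upgrade myapp ./chart --set tag=201907211931
          image: "registry.example.com/myapp:{{ .Values.tag }}"
```

With plain kubectl you would have to edit or patch the manifest for every build; here the manifest stays fixed and only the value changes per release.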
There are, of course, alternative ways to do all of these things. Kustomize in particular can change the image value fairly straightforwardly, and is notable for having been included in the kubectl tool since Kubernetes 1.14 (see also Declarative Management of Kubernetes Objects Using Kustomize in the Kubernetes documentation). The "operator" pattern gives an alternate path to install software in your cluster, but even more so than Helm you're trusting an arbitrary program with API access.

Packaging a kubernetes based application

We have multiple (20+) services running inside Docker containers which are being managed using Kubernetes. These services include databases, streaming pipelines, and custom applications. We want to make this product available as an on-premises solution so that it can be easily installed, like a one-click installation sort of thing, hiding all the complexity of the infrastructure.
What would be the best way of doing this? Currently we have scripts managing this but as we move into production there will be frequent upgrades and it will become more and more complex to manage all the dependencies.
I am currently looking into helm and am wondering if I am exploring in the right direction. Any guidance will be really helpful to me. Thanks.
Helm seems like the way to go, but what you need to think about, in my opinion, is more about how you will deliver updates to your software. For example, will you provide a single 'version' of your whole stack, which translates into a particular composition of infra setup and microservice versions, or will you allow your customers to upgrade individual microservices as they are released? You can have one huge Helm chart for everything, or you can use, as I do in most cases, an "umbrella" chart. It contains subcharts for all the microservices etc.
My usual setup contains a subchart for every service; service names are then correctly namespaced, so they can be referenced as .Release.Name-subchart[-optional]. Also, when I need to upgrade, I just upgrade the whole chart with something like --reuse-values --set subchart.image.tag=v1.x.x, which gives granular control over each service's version. I also gate each subchart's resources with if .Values.enabled so I can individually enable/disable each subchart's resources.
The ugly side of this is that if you do want to release a single-service upgrade, you still need to run the whole umbrella chart, leaving more surface for some kind of error. On the other hand, it gives you the capability to deploy the whole solution in one command (the default tags are :latest, so a clean install will always install the latest published versions, which then get updated with tagged releases).
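A trimmed sketch of that umbrella layout (the chart and service names are made up; this uses the Helm 3 style where dependencies live in Chart.yaml, while Helm 2 puts them in requirements.yaml):

```yaml
# umbrella/Chart.yaml
apiVersion: v2
name: my-solution
version: 1.0.0
dependencies:
  - name: orders
    version: ">=0.0.0"
    repository: "file://../orders"
    condition: orders.enabled    # Helm's built-in hook for the .Values.enabled gating
  - name: billing
    version: ">=0.0.0"
    repository: "file://../billing"
    condition: billing.enabled
---
# umbrella/values.yaml
orders:
  enabled: true
  image:
    tag: latest    # clean install picks up the latest published version
billing:
  enabled: true
  image:
    tag: latest
```

A single-service upgrade then looks like helm upgrade my-solution ./umbrella --reuse-values --set orders.image.tag=v1.2.3.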

Deploy specific version of Kubernetes to Azure Container Service

Is there any way to deploy a particular version of Kubernetes to ACS in Azure, using Azure Resource Manager or the az command?
The template format for Container Service doesn't seem to expose this information.
You can specify the version in ACS in selected regions. See this template example:
https://github.com/weinong/azure-quickstart-templates/tree/master/101-acs-kubernetes-with-version
We will be updating the Azure CLI with this feature soon.
I suggest you use acs-engine in this case. It is a tool that lets you specify custom definitions such as "orchestratorversion" for your case, and it can then generate an ARM template for deploying the k8s cluster.
You can download the acs-engine tool here: https://github.com/Azure/acs-engine/releases (choose the release based on the k8s version you need).
To achieve your goal, you have to provide a JSON file; you can find a template here: https://github.com/Azure/acs-engine/blob/master/examples/kubernetes-releases/kubernetes1.7.json. You can alter the attribute "orchestratorversion" to 1.5, 1.6, or 1.7 to suit your needs (or maybe 1.8 for the latest version).
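A minimal sketch of that cluster definition, heavily trimmed: a real file also needs masterProfile, agentPoolProfiles, and service principal details, and the exact version attribute name has varied between acs-engine releases, so treat the linked example as authoritative:

```json
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorVersion": "1.7.0"
    }
  }
}
```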
When the JSON file is ready, you can turn it into ARM template files with the following command:
.\acs-engine.exe generate kubernetes.json
This creates a new directory called "_output", where you will find the azuredeploy.json and azuredeploy.parameters.json files.
For more information about the attributes in the JSON file, take a look at https://github.com/Azure/acs-engine/blob/master/docs/clusterdefinition.md.
Another option you could try is deployment via the Azure CLI, as described here: https://github.com/Azure/ACS/tree/master/docs

How can I modify attributes in Record Types after container is deployed to production?

I need to remove/add attributes in one of my record types, but the container is already deployed to production. Any chance?
As you can see, I hovered my mouse over the X button - it is not possible to remove attributes.
You should be able to just make the modifications in the development container and then deploy that to production again. See the documentation: https://developer.apple.com/library/ios/documentation/General/Conceptual/iCloudDesignGuide/DesigningforCloudKit/DesigningforCloudKit.html
It says: Prior to deploying your app, you migrate your schema and data to the production environment using CloudKit Dashboard. When running against the production environment, the server prevents your app from changing the schema programmatically. You can still make changes with CloudKit Dashboard but attempts to add fields to a record in the production environment result in errors.
Then see the section 'Future Proofing Your Records'.

how can I set up a continuous deployment with TFSBuild for MVC app?

I have some questions about the best mechanism for deploying MVC web applications to different environments. Previously I used setup projects (.msi's), but as these have been discontinued in VS2012, I am looking for an alternative.
Let me explain my current setup. I currently have a CI setup using TFSBuild 2010 with Team Foundation Server for source control.
A number of developers work on their local machines and check in to the TFS server. We regularly deploy to a single-server dev environment and a load-balanced QA environment with 2 servers. Our current process includes installing an MSI which carries out some of the following custom actions:
brings the current app offline with the app_offline.htm file
runs database scripts (from the database project in the solution)
modifies web.config (different for each QA web server)
labels the code
warms up each deployed file via an HTTP request
etc
This is the current process. Now I would like to make some changes. Firstly, I need an alternative to MSIs. From some research, I believe that Web Deploy via IIS, using MSDeploy, is the best alternative. I can use web.config transforms for the web.config modifications. Is this correct, and if so, could I get an outline of what I need to do?
Secondly, I want to set up continuous delivery via TFSBuild. I have no idea how this may be achieved; would it be possible to get an outline of how it can be integrated into my current setup? Rather than being check-in driven, I would like it to be user driven following a check-in. Also, would it be possible for this to also run database scripts from a database project in the solution?
Finally, there is also a production environment, but I would like to deploy to it manually - can my process also produce an artifact that I can install manually?
Vishal Joshi has some reasonably good information on his blog: http://vishaljoshi.blogspot.com/2010/11/team-build-web-deployment-web-deploy-vs.html. It does have the downside that your deployment password is included in the properties you pass to MSBuild.
Sayed Hashimi has also posted some information on this in another question, Team Build: Publish locally using MSDeploy.
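For the web.config transforms mentioned in the question, a per-environment file such as Web.QA.config can replace the MSI's config-editing step. A sketch; the connection string name and server values are placeholders:

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- overwrites the matching entry from the base web.config at package/publish time -->
    <add name="DefaultConnection"
         connectionString="Server=QA-DB;Database=MyApp;Integrated Security=True"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```

Building the matching configuration with packaging enabled (for example msbuild MyApp.csproj /p:DeployOnBuild=true /p:Configuration=QA /p:WebPublishMethod=Package) applies the transform and produces a Web Deploy package, which also serves as the manually installable artifact asked about for production.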