It's not clear to me whether the GCP Folder resource is alpha or generally available. The API around it, gcloud alpha resource-manager folders, certainly seems alpha. Can I go ahead and design the structure using folders, or not yet?
While it may not be a definitive answer (documentation occasionally lags behind), one way to check whether a particular gcloud command (use the left sidebar for navigation) is alpha, beta, or generally available is to look for a NOTES section at the bottom of the respective command's documentation page.
In particular for gcloud alpha resource-manager folders you see:
NOTES
This command is currently in ALPHA and may change without notice.
By comparison, the gcloud alpha app update command shows:
NOTES
This command is currently in ALPHA and may change without notice.
These variants are also available:
$ gcloud app update
$ gcloud beta app update
Since gcloud app update is available, the feature is generally available and thus covered by an SLA, so it's safe to base solutions on it.
But gcloud alpha resource-manager folders doesn't list a gcloud beta resource-manager folders or a gcloud resource-manager folders variant, so it's indeed only an alpha release.
Neither alpha nor beta features are covered by SLAs and may change at any time. That's not to say you can't start working on a solution using them, as long as you're prepared to revise it or switch to some other solution if/when things change (and, of course, you don't expect a service SLA for them). It's really up to you.
As for the Cloud Folders feature itself, it reached General Availability. From the July 24, 2017 release note:
Folders General Availability
Cloud folders are nodes in the Cloud Platform Resource Hierarchy.
A folder can contain projects, other folders, or a combination of
both. You can use folders to group projects under an organization in a
hierarchy. For example, your organization might contain multiple
departments, each with its own set of Cloud Platform resources.
Folders allow you to group these resources on a per-department basis.
Folders are used to group resources that share common IAM policies.
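Since Folders are GA, the non-alpha command group can be used; a minimal sketch (the organization ID and display name below are placeholders):
$ gcloud resource-manager folders create --display-name="Engineering" --organization=123456789
$ gcloud resource-manager folders list --organization=123456789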
For a microservice-architecture based application, I'm trying to understand a standard process for how to logically group and manage correct version compatibility among independently deployable microservices. Let me elaborate with a practical scenario:
Say I am building a software application composed of 10 microservices. All the microservices have their own independent repositories (branching workflow, etc.) and their own separate CI/CD pipelines.
The CI/CD pipeline gets triggered whenever any change is pushed to the 'master' branch of the respective microservice.
Considering a Helm chart and Kubernetes based deployment, all the microservices will get deployed with version 1.0 for the very first deployment and our system would work. For subsequent releases, we might have only a couple of services that get deployed. So after a couple of production releases, each microservice will be at a different version, and together they constitute the application at that point in time.
My question is:
How do I logically group independently deployable microservices in order to deploy or roll back to an earlier release, i.e. how do I determine what version each microservice was at in earlier releases?
Is there any existing tool or standard practice to track the version of each microservice for a given release, so that I can seamlessly roll back to the expected release?
If there is no automated solution, what would be the right approach to address such a requirement?
Appreciate your thoughts and suggestion on this.
Considering Kubernetes:
1. Helm is a nice tool to deploy and track releases.
2. Native k8s Deployments also work well; you need to use them properly, in particular look at the --record flag in kubectl commands (e.g. check this link); see the sketch just below.
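A minimal sketch of the --record / rollout workflow (the manifest and Deployment names are placeholders):
$ kubectl apply -f deployment.yaml --record
$ kubectl rollout history deployment/my-service
$ kubectl rollout undo deployment/my-service --to-revision=2
The history command lists the recorded revisions (including the commands that produced them), and undo rolls back to a known-good revision.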
With AWS ECS clusters:
1. They have task definitions and tasks. I think that works for you.
I don't have pointers for docker-compose, Swarm, and other tools, but you can always use the power of git and some scripting.
The idea is to make a file that lists all versions of the services/containers/code, and commit that file to git along with the code. Make a tag out of it for simplicity. Your script should compare this state file with the current state and apply only the specific changes. Also look at git submodules: a submodule setup is nothing but a group of many git projects, and it tracks the status of each project with the help of each project's commit id. This helped us in the situation you mention.
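A minimal sketch of that idea, assuming hypothetical service names, a hypothetical registry, and Kubernetes Deployments/containers named after the services:
# versions.txt, committed and tagged for every release
auth-service 1.4.2
billing-service 2.0.1
frontend 1.9.0
$ git add versions.txt && git commit -m "release 42" && git tag release-42
# later, roll the cluster back to the versions recorded for that tag
$ git show release-42:versions.txt | while read svc ver; do
    kubectl set image deployment/"$svc" "$svc"=myregistry/"$svc":"$ver"
  done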
This is a fairly new problem; we just launched a new tool, Reliza Hub, to solve it. Also, here is my post on the subject: Microservices – Combinatorial Explosion of Versions. Currently we are at the MVP stage and a lot of work is going on - see this video tutorial if our direction makes sense for you: https://www.youtube.com/watch?v=yDlf5fMBGuI
If you decide to implement and have any questions or need help with integration, just tag me on SO and I'd be very much willing to make it work for you.
To sum up a few things that we are doing - we denote developer-facing projects (those that map to source code) as Projects and customer-facing projects (the bundles a customer sees) as Products.
And we say that Products are essentially compositions of Projects, and we provide tooling for compiling different versions of Projects into what's called a Product bundle. You can then integrate this into any CI or CD tool out there, or start manually if you haven't configured CI/CD yet.
Other than that, yes - I highly recommend Helm and Kubernetes - this is what we use on newer projects (and I can also add ArgoCD and Spinnaker to the existing tooling). But these alone are not enough to track permutations of different versions of microservices and to establish which configurations are good and which are not across different environments.
Say I have 5 APIs that I want to deploy in a Kubernetes cluster. My question is simply: what is the best practice for storing the yaml files related to Kubernetes?
In projects I've seen online, the Kubernetes yaml files are just added to the API project itself. I wonder if it makes sense to decouple all files related to Kubernetes into an entirely separate "project", managed in VCS as a completely separate entity from the API projects themselves.
This question arises because I'm currently reading a book about Kubernetes and, on the topic of namespaces, considered that it might be a good idea to have separate namespaces per environment (DEV / UAT / PROD), in which case it may make sense to keep these files in a centralized "Kubernetes" project (unless it might be better to have a separate cluster per environment?).
Whether to put the yaml in the same repo as the app is a question that projects answer in different ways. You might want to put them together if you find that you often change both at the same time, or if you just find it clearer to see everything in one place. You might separate them if you mostly work on the yaml separately, if you find it less cluttered, or if you want different visibility for it (e.g. different teams looking at it). If things get more sophisticated, you'll actually want to generate the yaml from templates and inject environment-specific configuration into it at deploy time (whether those environments are namespaces or clusters) - see Best practices for storing kubernetes configuration in source control for more discussion on this.
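As a hedged illustration of that template-plus-injected-configuration approach, a common layout keeps one chart with a values file per environment and lets Helm inject the right one at deploy time (the chart name, paths and namespaces below are placeholders):
# charts/my-api/templates/ holds the templated Kubernetes yaml
# charts/my-api/values-dev.yaml, values-uat.yaml, values-prod.yaml hold per-environment config
$ helm upgrade --install my-api ./charts/my-api -f ./charts/my-api/values-dev.yaml --namespace dev
$ helm upgrade --install my-api ./charts/my-api -f ./charts/my-api/values-prod.yaml --namespace prod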
From production k8s experience for CI/CD:
One cluster per environment such as dev, stage, prod (optionally per data centre)
One namespace per project
One git deployment repo per project
One branch in the git deployment repo per environment
Use ConfigMaps for configuration aspects
Use a secret-management solution to store and use secrets (see the sketch below)
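A minimal sketch of the last two points, with a placeholder project name, config path and secret value:
$ kubectl create namespace my-project
$ kubectl -n my-project create configmap app-config --from-file=config/dev/
$ kubectl -n my-project create secret generic app-secrets --from-literal=DB_PASSWORD='changeme'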
I have been asked to set up CI/CD for a new app using VSTS and Kubernetes.
It was suggested to me that we could use Helm (but it was made clear it was not mandatory).
The value I am seeing for this tool in our project is to define different values for different environments e.g. database connection string.
But for that we can also use the Replace Tokens VSTS task which is a lot simpler.
A definition explains that Helm is a chart manager and that it sort of ties together all the resources of a system that are deployed to Kubernetes.
Our system is just 1 web API (could grow later) so I feel deploying using Helm would be over-engineering the deployment process. Plus, we need this for yesterday.
Question
According to the current context, should I go with Replace Tokens VSTS task or Helm?
It depends on your requirements: for example, which is easier to deploy, which is easier to manage, which you are more familiar with, or which handles requirement changes more easily.
You can also create a custom build task to achieve it.
I would go for Helm because it gives you more flexibility and is more cross-platform; moreover, when adding more APIs/components or microservices it will be easier to control configuration (a single or multiple values.yaml files, using git submodules for Helm charts, and so on).
Surely it requires a slightly bigger time investment than simple value substitution in your CI/CD tool, but it has a potential payback that far outweighs the effort (again, based on my experience and the limited information about your environment).
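To make the comparison concrete, a hedged sketch of overriding a single environment-specific value with Helm instead of token replacement (the chart path, key and connection string are hypothetical):
$ helm upgrade --install my-api ./chart --namespace dev --set db.connectionString="Server=dev-sql;Database=app"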
I'm curious, what did you end up using?
I have a project with N git repos, each representing a static website (N varies). For every git repo there exists a build definition that creates an nginx docker image on Azure Container Registry. These N build definitions are linked to N release definitions that deploy each image to k8s (also on Azure). Overall, CI/CD works fine, and after the releases have succeeded for the first time, I see a list of environments, each representing a website that is now online.
What I cannot do with VSTS CI/CD, though, is declare how these environments are torn down. In GitLab CI (which I used before), there exists a concept of stopping an environment, and although this is just a stage in .gitlab-ci.yml, running it literally removes an environment from the list of deployed ones.
Stopping an environment can be useful when deleting auto-deployable feature branches (aka Review Apps). In my case, I'd like to do this when an already shared static website needs to be removed.
VSTS does not seem to have a concept of unreleasing something that has already been released and I'm wondering what the best workaround could be. I tried these two options so far:
Create N new release definition pipelines, each calling kubectl delete ... for the corresponding static website. That does not make things clear at all, because an environment called k8s prod (website-42) in one pipeline is not the same one as in another (otherwise, I could see whether web → cloud or web × cloud was called last):
Define a new environment called production (delete) in the same release definition and trigger it manually.
In this case 'deploy' sits a bit closer to 'undeploy', but it's hard to figure out what happened last (in the example above, you can kind of guess that re-releasing my k8s resources happened after I deleted them – you need to look at the time on the cards, which is a pretty poor indication).
What else could work for deleting / undeploying released applications?
VSTS does not have a "stop environment" feature (automatically deleting what was deployed to an environment) in Release Management. But you can achieve the same thing with a VSTS YAML build.
So besides the two workarounds you shared, you can also stop the environment via a VSTS YAML build (similar to the mechanism in GitLab).
For a YAML CI build, you just need to commit a file ending with .vsts-ci.yml. In that .vsts-ci.yml file, you can specify the tasks that delete the deployed app; the cleanup step could boil down to something like the sketch below.
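For example (the namespace and manifest path are placeholders; this shows the command such a task would run, not a specific VSTS task):
$ kubectl delete -f k8s/ --namespace website-42
# or drop the whole namespace backing the environment
$ kubectl delete namespace website-42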
I'm looking to steer my team into this century and use source control. The developers are very capable of handling source control software - be it command-line based or GUI based, Windows or *nix.
The reason they've been handling their code locally and individually (which deeply frightens me) is that our CM group is not as technically savvy nor as comfortable with the whole check-in/check-out process.
Is there a source control software out there that is geared towards the CM group? I'm thinking of one that would allow them to select a version of a file out of all that have been checked in and mark it for the build they are trying to create.
If you consider the CM (Configuration Management) group as being in charge of a release management process, then you could isolate them from the "technical details" of any (D)VCS tool you might choose by establishing a good publication process.
The publication consists of making visible somewhere (a shared directory, an artifact repository like Nexus dedicated to releases, ...):
a deliverable (a set of binaries and their dependencies) necessary to run your program
a clear list of versions for those binaries (SVN revision number or tag, git tag, Nexus Group-Artifact-Version, ...), allowing the developers to find the exact set of code whenever the CM group gets back to them with a list of defects to fix
a document explaining the deployment (see the sketch after this list)
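A hedged sketch of such a publication step, with hypothetical paths and artifact names:
$ VERSION=$(git describe --tags)
$ mkdir -p /shared/releases/myapp-$VERSION
$ cp target/myapp.jar /shared/releases/myapp-$VERSION/
$ git rev-parse HEAD > /shared/releases/myapp-$VERSION/VERSIONS.txt
$ cp docs/deployment.md /shared/releases/myapp-$VERSION/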
The CM group takes that set of deliveries and manages the release process and the promotion between the different deployment environments (Integration, UAT, pre-prod, prod, ...), without having to deal with the VCS tool.
That also enforces a strong separation between dev and prod (both in terms of environment and process), which allows the devs to adopt whatever development workflow they want, without affecting the way the CM group works.