I've read about partial upgrades, but they always require changing some part of the application package. I'd like to know whether there's a way to redeploy a package without a version change, similar to what VS does when deploying to the dev cluster.
On your local dev cluster, VS simply deletes the application before it starts the redeployment. You could do the same in your production cluster, but that results in downtime, since the application is not accessible during that time.
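If this is Azure Service Fabric (the mention of monitored upgrades suggests it is), a rough sketch of that delete-and-recreate flow using the sfctl CLI could look like the following; the cluster endpoint, application name, type name, version and package path are all hypothetical placeholders:

# point sfctl at the cluster (endpoint is an example)
sfctl cluster select --endpoint http://mycluster.example.com:19080

# remove the running application and its provisioned type version
sfctl application delete --application-id MyApp
sfctl application unprovision --application-type-name MyAppType --application-type-version 1.0.0

# upload, provision and recreate with the same version number
sfctl application upload --path ./MyAppPkg
sfctl application provision --application-type-build-path MyAppPkg
sfctl application create --app-name fabric:/MyApp --app-type MyAppType --app-version 1.0.0

Note that the application is unavailable between the delete and the create, which is exactly the downtime described above.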
Why wouldn't you want to use the regular monitored upgrade? It has many advantages, such as automatic rollback.
I have been working with Kubernetes in a staging environment for a couple of months and want to switch to production. I came across a tool called Rancher almost two weeks ago and have been going through its documentation since then.
The developers and the community recommend not running Rancher inside the production Kubernetes cluster itself; instead, you should preferably create a separate cluster for Rancher and register your main production cluster with it via an agent.
However, in the latest stable version there is actually an option you can tick to use Rancher only for the local cluster, which raises this question:
Has the latest stable version of Rancher been changed so that it can be deployed on the production cluster itself rather than needing a dedicated cluster? And could any security or restart issues occur that would wipe out the configuration of the other components on the cluster?
Note: in another staging environment I installed an instance of WordPress and Ghost on the local cluster, and both worked fine.
I still think the best option for you would be a fully accessible cluster of your own, so you are not dependent on Rancher's cloud solutions. I am not saying Rancher is bad. But if we are talking about a PRODUCTION environment, my personal opinion is that the cluster should be your own. Sure, it's an arguable topic.
What I can also mention here: you can use any of the Useful Interactive Terminal and Graphical UI Tools for Kubernetes, for example Octant.
Octant is a browser-based UI aimed at application developers, giving them visibility into how their application is running. I also think this tool can really benefit anyone using K8s, especially if you forget the various options to kubectl to inspect your K8s cluster and/or workloads. Octant is also a VMware open source project; it is supported on Windows, Mac and Linux (including ARM) and runs locally on a system that has access to a K8s cluster. After installing Octant, just type octant and it will start listening on localhost:7777, and you just launch your web browser to access the UI.
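As a minimal sketch, assuming macOS or Linux with Homebrew (the formula name is assumed; other platforms can grab a release binary from the Octant project page), getting it running against whatever cluster your current kubeconfig points at looks roughly like this:

# install Octant (Homebrew formula name assumed; release binaries also work)
brew install octant

# uses the current kubectl context; the UI comes up on localhost:7777
octant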
I work with team members to develop a microservices architecture, but I have a problem with the way we work. I have too many microservices, and running them all during development consumes too much memory, even on a good workstation. So I use Docker Compose to build and run my MSA, but it takes a long time. One often hears about how to technically build an MSA, but rarely about how to work efficiently while building it. What do you do in this case? How do you work? Do you use tools or anything else to improve and facilitate your development? I've heard about Skaffold, but I don't see how it differs from Docker Compose or from a simple CI/CD pipeline in a cluster environment, for example. Feel free to share tips and your opinion. Thanks
I've had a fair amount of experience with microservices and local development, and here are some approaches I've seen:
Run all the things locally on Docker or k8s. If using k8s, then a tool like Skaffold can make it easier to run and debug a service locally in the IDE while putting it into your local k8s so that it can communicate with other k8s services (see the sketch after this list). It works OK, but running more than 4 or 5 full services locally in k8s or Docker requires dedicating a substantial amount of CPU and memory.
Build mock versions of all your services. Use those locally and for integration tests. The mock services are intentionally much simpler, and therefore it's easier to run lots of them locally. The obvious downside is that you have to build a mock version of every service, and you can easily miss bugs caused by a mock service not behaving like the real one. Record/replay tools like Hoverfly can help in building mock services.
Give every developer their own Cloud environment. Run most services in the cloud but use a tool like Telepresence to swap locally running services in and out of the cloud cluster. This eliminates the problem of running too many services on a single machine but can be spendy to maintain separate cloud sandboxes for each developer. You also need a DevOps resource to help developers when their cloud sandbox gets out of whack.
Eliminate unnecessary microservice complexity and consolidate all your services into 1 or 2 monoliths. Enjoy being able to run everything locally as a single service. Accept the fact that a microservice architecture is overkill for most companies. Too many people choose a microservice architecture upfront before their needs demand it. Or they do it out of fear that they will need it in the future. Inevitably this leads to guessing how they should decompose the system into many microservices, and getting the boundaries and contracts wrong, which makes it just as hard or harder to fix in the future compared to a monolith. And they incur the costs of microservices years before they need to. Microservices make everything more costly and painful, from local development to deployment. For companies like Netflix and Amazon, it's necessary. For most of us, it's not.
I prefer option 4 if at all possible. Otherwise option 2 or 3 in that order. Option 1 should be avoided in my opinion but it is probably the option everyone tries first.
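For option 1, a hedged sketch of the Skaffold-based inner loop, assuming a repo that already contains Dockerfiles and Kubernetes manifests (check the flags against your Skaffold version):

# generate a skaffold.yaml from the Dockerfiles/manifests found in the repo
skaffold init

# build, deploy to the current kube context, stream logs, and redeploy on file changes
skaffold dev --port-forward

For option 3, Telepresence gives you a similar "swap one locally running service into the cluster" workflow, though the exact commands differ between Telepresence versions.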
In GKE, assuming you have a private cluster, you can use port forwarding while connected to the GKE environment through the CLI. Create a script that forwards your local ports to the GKE environment. I believe the Services tab in your cluster is where you will find the "port-forwarding" button that gives you the CMD command. This way you can work on one microservice with all of its traffic being routed to the actual DEV cluster, which prevents you from having to run multiple projects at the same time.
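As a rough sketch of what such a script might contain (the cluster, zone, project, namespace, service and port values are all placeholders):

# authenticate kubectl against the dev cluster
gcloud container clusters get-credentials dev-cluster --zone us-central1-a --project my-project

# forward local port 8080 to the in-cluster service you are working against
kubectl port-forward svc/orders-service 8080:80 -n dev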
I would say create a staging environment that has all services running, curated specifically for development. E.g. if it's deployed using k8s, then you expose some ports using a NodePort service if you need them for your specific microservice, and have a DevOps pipeline to always keep this environment up to date with the code.
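For instance, exposing one service via NodePort could look like this (the deployment name and port are just examples):

# expose an existing deployment on a node port of the staging cluster
kubectl expose deployment billing-service --type=NodePort --port=8080 --name=billing-service-np

# note the node port (30000-32767 range) that gets assigned to it
kubectl get svc billing-service-np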
This environment should always be built from the master branch. Whether you have a single repo for the app or a repo per service, it's a fair assumption that it will always have the most recent code when you create your dev/feature branch.
Then when you want to develop a feature or fix a bug, you check out your microservice. And if you are following the microservice pattern appropriately, that single microservice should be an executable, have its own Dockerfile, and be debuggable from your local IDE. Many enterprises follow this pattern and enforce at the organization level that the master branch is always production ready and high quality.
Let's say you discover a bug in some other microservice running in the k8s cluster. You will very likely be tempted to find a way to debug that remote microservice. However, that should be written up as a bug for the team that owns the microservice. If your team owns it, then you fix it and then start working on your feature. If you really think you need to debug multiple microservices at once, then I think you either have really tight coupling between the services or you don't really need a microservice architecture.
Google Cloud Platform
I've got an Angular (2) app and a Node.js middleware (LoopBack) running as services on App Engine in a project.
For the database, we have a Compute Engine running PostgreSQL in that same project.
What we want
The testing has gone well, and we now want to have a test version (for ongoing upgrade testing/demo/etc) and a release deployment that is more stable for our initial internal clients.
We are going to use a different database in psql for the release version, but could use the same server for our test and deployed apps.
Should we....?
create another GCP project and another gcloud setup on my local box to deploy to that new project for our release deployment,
or is it better to deploy multiple versions of the services to the single project with different prefixes - and how do I do that?
Cost is a big concern for our little nonprofit. :)
My recommendation is the following:
Create two projects, one for each database instance. You can mess around all you want in the test project, and don't have to worry about messing up your prod deployment. You would need to store your database credentials securely somewhere. A possible solution is to use Google Cloud Project Metadata, so your code can stay the same between projects.
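As a hedged sketch of the project-metadata approach (the key name and value are invented for illustration):

# store a setting as project-level metadata (key/value are examples)
gcloud compute project-info add-metadata --metadata=DB_HOST=10.0.0.5

# read it back from inside a Compute Engine / App Engine flexible instance
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/project/attributes/DB_HOST"

Because each project carries its own metadata, the same code reads the test values in the test project and the production values in the production project.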
When you are ready to deploy to production, I would recommend deploying a new version of your App Engine app in the production project, but not promoting it to the default.
gcloud app deploy --no-promote
This means customers will still go to the old version, but the new version will be deployed so you can make sure everything is working. After that, you can slowly (or quickly) move traffic over to the new version.
At about 8:45 into this video, traffic splitting is demoed:
https://vimeo.com/180426390
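If you prefer the command line over the console shown in the video, traffic splitting can also be done with gcloud (the service name and version IDs here are examples):

# send 10% of traffic to the new version, 90% to the old one
gcloud app services set-traffic default --splits v1=0.9,v2=0.1 --split-by=random

# once you are confident, route everything to the new version
gcloud app services set-traffic default --splits v2=1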
Also, I would recommend aggressively shutting down unused App Engine Flexible deployments to save costs. You can read more here.
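A small sketch of that cleanup, assuming the default service and example version IDs:

# see which versions are still running
gcloud app versions list --service=default

# stop a version that no longer serves traffic (its Flexible VMs are shut down)
gcloud app versions stop v1 --service=default

# or remove it entirely once you are sure you won't roll back to it
gcloud app versions delete v1 --service=default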
I'm super new to SCCM and trying out some stuff.
At the moment I'm creating a lot of Applications to deploy to around 50 clients.
Before I deploy them to all clients, I test them on a test client.
The problem now is that if I change something in the Deployment Type, like the installation command, I have to delete the deployment every time afterwards and deploy it again, or the change won't reach the client the next time I install the Application.
There is probably a much easier method that I can't figure out at the moment.
So how do I push the changes I made after the Application is already deployed?
Greetings,
Paxz.
The application deployment command line will only be executed if the application is not detected, i.e. the Application Detection criteria evaluate to false. With this premise, it is possible to change the Application Detection criteria so they evaluate to false... perhaps add an additional rule such as "file1.txt exists"? This should work, but it is ugly and I would not recommend it.
A better approach
I prefer to test my application deployments on VMs in the first instance: prepare the destination machine, snapshot it, then deploy.
If you need to tweak your deployment you can then make the required changes, redistribute the content (if required), then restore the VM's snapshot for a fresh deployment.
I managed to get an answer from Microsoft's TechNet forum.
For deployments to pick up the updated command line, I just have to trigger the next policy polling cycle.
This will only be effective for clients that haven't executed the deployment type yet.
Other than that, there seems to be no other way than deleting the deployment and re-deploying it for the changes to take effect.
I have an autoscaled environment in production, which is currently chaos whenever we push a build update to it, so we thought we'd better move to OpsWorks at AWS to make the process easier for us.
We can't afford downtime, not now, not ever; a second's worth of loss while updating a build and maybe restarting Apache costs a fortune.
We can't possibly afford to just let a machine be terminated by the autoscaling policy when a new update comes in on a new AMI-based EC2 machine. When autoscaling terminates a machine, under any circumstances, it doesn't care about the requests still running on that machine; it just shuts it down. What it should do instead is a graceful shutdown, something like a drain-stop on Apache, so it can at least finish the work in hand first.
Now that OpsWorks is here and we are planning to use it to update our builds more automagically, will a new update push run the recipes again? In fact, the paragraph I just read (quoted below) worries me even more: does it mean that it won't update the build automatically on new instances?
After you have modified the app settings, you must deploy the app. When you first deploy an app, the Deploy recipes download the code and related files to the app server instances, which then run the local copy. If you modify the app in the repository, you must ensure that the updated code and related files are installed on your app server instances. AWS OpsWorks automatically deploys the current app version to new instances when they are started. For existing instances, however, the situation is different:
You must manually deploy the updated app to online instances.
You do not have to deploy the updated app to offline instance store-backed instances, including load-based and time-based instances; AWS OpsWorks automatically deploys the latest app version when they are restarted.
You must restart offline EBS-backed 24/7 instances and manually deploy the app; AWS OpsWorks does not run the Deploy recipes on these instances when they are restarted.
You cannot restart offline EBS-backed load-based and time-based instances, so the simplest approach is to delete the offline instances and add new instances to replace them. Because they are now new instances, AWS OpsWorks will automatically deploy the current app version when they are started.
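The manual deployment to online instances mentioned in that quote can be triggered from the console or, as a rough sketch, from the AWS CLI (the stack and app IDs are placeholders, and the region of your OpsWorks stack may differ):

# run the Deploy recipes for the app on the stack's online instances
aws opsworks create-deployment \
  --region us-east-1 \
  --stack-id 11111111-2222-3333-4444-555555555555 \
  --app-id aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee \
  --command '{"Name": "deploy"}'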
First of all, let me state that I only started looking into OpsWorks about two weeks ago, so I'm far from being a pro. But here's my understanding of how it works:
We need to differentiate between instances that are instance store backed, and instances that are EBS backed:
The instance store disappears together with the instance once it's shut down. Therefore, bringing it up again starts from zero: it has to download the latest app again and will deploy that.
For EBS-backed instances the deployed code remains intact, persisting beyond the lifetime of the instance to which the volume is attached. Therefore, bringing an EBS-backed instance back to life will not update your app automatically; the old version remains deployed.
So your first decision needs to be what instance type to use. It is generally a good idea to have the same version of your app on all instances. Therefore I would suggest going with EBS-backed instances which will not automatically deploy new versions when booting up. In this case, deploying a new version would mean to bring up brand new instances that will be running the new code automatically (as they are new), and then destroying the old instances. You will have a very short time during which old and new code will run side by side.
However, if you prefer to have always the very latest version deployed and can afford risking discrepancies between the individual instances for an extended period of time (e.g. having different app versions deployed depending on when an instance was originally started), then instance store backed might be your choice. Every time a new instance spins up, the latest and greatest code will be deployed. If you want to update existing ones, just bring up new instances instead and kill the existing ones.
Both strategies should give you the desired effect of zero downtime. The difference is in when and how the latest code is deployed. Combine this with HAProxy to have better control over which servers are used; you can gradually move traffic from old instances to new instances, for example.
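A hedged sketch of the "bring up new instances, then kill the existing ones" step from the AWS CLI (all IDs are placeholders; create-instance prints the ID of the new instance):

# add a replacement instance to the app server layer and boot it
aws opsworks create-instance --region us-east-1 \
  --stack-id 11111111-2222-3333-4444-555555555555 \
  --layer-ids aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee \
  --instance-type t2.medium
aws opsworks start-instance --region us-east-1 --instance-id <new-instance-id>

# once the new instance is online and HAProxy has shifted traffic to it,
# drain and retire the old instance
aws opsworks stop-instance --region us-east-1 --instance-id <old-instance-id>
aws opsworks delete-instance --region us-east-1 --instance-id <old-instance-id>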