IBM Bluemix Zero-downtime deployment capability for cloud applications - ibm-cloud

We have a NodeJS Cloud Foundry application with a DevOps Delivery Pipeline enabled.
Initially we set up the pipeline to use the Active Deploy extension to the Delivery Pipeline in Bluemix to deploy app updates without any downtime. This approach is also called rolling deployments, blue-green deployments, or red-black deployments.
https://www.ibm.com/developerworks/cloud/library/cl-bluemix-rollingpipeline/cl-bluemix-rollingpipeline-pdf.pdf
Unfortunately, the Active Deploy service was abruptly retired as of June 23, 2017, so we are now incurring downtime upon deployment.
https://www.ibm.com/blogs/bluemix/2017/05/retirement-ibm-active-deploy-beta-service/
How do we go back to the process by which a new version of an application is deployed into an environment with no disruption in service for the consumer? UrbanCode? Other options?

A good way of doing this is to use the IBM Cloud Garage's blue-green-deploy cf plugin. In your deploy script, add:
cf add-plugin-repo CF-Community https://plugins.cloudfoundry.org
cf install-plugin blue-green-deploy -f -r CF-Community
Then, instead of doing cf push <app_name>, do:
cf blue-green-deploy <app_name>
You can also specify a manifest, or specify a smoke test (if the smoke test fails, the build will be marked as failed and the original version will continue running).
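For example, a rough sketch (flag names depend on the plugin version; manifest.yml, smoke-test.sh, and the /health endpoint are placeholders, not values from the question):
```
# write a tiny smoke test; the plugin passes the route of the newly pushed version
# as the first argument, and a non-zero exit keeps the original version live
cat > smoke-test.sh <<'EOF'
#!/bin/sh
curl -sSf "https://$1/health"   # '/health' is an assumed endpoint in your app
EOF
chmod +x smoke-test.sh

# push the new version alongside the old one, run the smoke test against it,
# and only remap the route if the test passes
cf blue-green-deploy <app_name> -f manifest.yml --smoke-test ./smoke-test.sh
```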

Related

Container deployment with self-managed kubernetes in AWS

I am relatively new to AWS and kubernetes. I have created a self-managed kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I am manually deploying the created image in the cluster by running the following commands:
kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml
How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline?
Regarding my deployment YAML files, where should I store them? (Currently I store them locally on the master node.)
I am missing some best practices for this process.
Your YAML manifests should never be stored on your master node; they should be kept in a version control system (such as GitHub, GitLab, Bitbucket, etc.).
To automate the deployment of your Docker image whenever there is a new artifact version in ECR, you can use a great tool named FluxCD. It is very simple to install (https://fluxcd.io/docs/get-started/), and you can easily configure it to automatically deploy your images to your cluster each time there is a new image in your ECR registry.
This way your CodePipeline will build the code, run the tests, build the image, tag it, and push it to ECR, and FluxCD will deploy it to Kubernetes. It can also natively be configured to sync your cluster every X minutes (based on your configuration), so even a small change to your manifests will be deployed automatically. A rough sketch of the image automation setup follows.
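A minimal sketch, assuming Flux v2 with the image automation controllers installed; the ECR URL, names, and intervals below are placeholders:
```
# tell Flux to scan the ECR repository for new tags
flux create image repository my-service \
  --image=<account>.dkr.ecr.<region>.amazonaws.com/my-service \
  --interval=1m

# choose which tags count as "newer" (here: semver ordering)
flux create image policy my-service \
  --image-ref=my-service \
  --select-semver='>=1.0.0'

# an ImageUpdateAutomation (flux create image update ...) then commits the new tag
# back to the Git repository that holds your manifests, and Flux applies it
```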
You can also make use of Argo CD; it is very easy to install and use compared to AWS CodePipeline.
Argo CD was specifically designed for Kubernetes and thus offers a much better way to deploy to K8s.
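For instance, a minimal sketch with the argocd CLI (the repository URL, path, and names are placeholders; Argo CD keeps the manifests in that Git path in sync with the cluster):
```
# register the Git repo that holds the deployment manifests as an Argo CD application
argocd app create my-service \
  --repo https://github.com/<your-org>/<manifests-repo>.git \
  --path k8s \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace default

# sync it (or enable auto-sync so every commit is deployed automatically)
argocd app sync my-service
```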

How to connect on premise kubernetes cluster using Jenkins File

I am trying to deploy an application on a Kubernetes cluster by using a Jenkins multibranch pipeline and a Jenkinsfile, but I am unable to make a connection between Jenkins and Kubernetes. I can't share more details from the code side here.
I just want to know if there is any way to make this connection (Jenkins to Kubernetes) using the Jenkinsfile, so that I can use it to deploy the application on Kubernetes.
The following is the technology stack, which might clarify my setup:
The Jenkinsfile is kept at the root of the project in GitHub.
A separate Jenkins server where the pipeline is created to deploy the application on Kubernetes.
An on-premises Kubernetes cluster.
You need credentials to talk to Kubernetes. When you have automation like Jenkins running jobs, it's best to create a service account for Jenkins; look here for some documentation. Once you create the Jenkins service account, you can extract an authentication token for that account, which you put into Jenkins. What I would recommend doing, since your Jenkins is not a pod inside your Kubernetes cluster, is to upload a working kubectl config as a secret file in the Jenkins credential manager.
Then, in your Jenkins job configuration, you can use that secret. Jenkins can put the file somewhere for your job to access; then, in your Jenkinsfile, you can run commands with "kubectl --kubeconfig= ...".
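For example, a minimal sketch of what the shell step in the Jenkinsfile could run, assuming the kubeconfig was stored as a "Secret file" credential and exposed to the job as the environment variable KUBECONFIG_FILE (the variable name and manifest path are hypothetical):
```
# sanity-check that Jenkins can reach the on-premise cluster
kubectl --kubeconfig="$KUBECONFIG_FILE" get nodes

# deploy the application using the manifests kept in the repository
kubectl --kubeconfig="$KUBECONFIG_FILE" apply -f k8s/deployment.yaml
```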

Service Fabric doesn't run a docker pull on deployment

I've set up VSTS to deploy a Service Fabric app with a Docker guest container. All goes well, but Service Fabric doesn't download the latest version of my image; a docker pull doesn't seem to be performed.
I've added the 'Service Fabric PowerShell script' with a 'docker pull' command, but this is then only run on one of the nodes.
Is there a way to run a powershell script/command during deployment, either in VSTS or Service Fabric, to run a command across all the nodes to do a docker pull?
Please use an explicit version tag; don't rely on 'latest'. An easy way to do this in VSTS: in the 'Push Services' task, add $(Build.BuildId) in the 'Additional Image Tags' field to tag your image.
Next, you can use a tokenizer to replace the ServiceManifest.xml image tag value in your release pipeline. One of my favorites is this one.
To deploy Docker containers to Service Fabric, you have to either provide a Docker Compose file or a Service Fabric application package with manifests.
For containers, the Service Fabric hosting system controls the Docker host on the nodes in order to run them.
For VSTS deployments, there's a Service Fabric Deploy task and a Service Fabric Compose Deploy task for both paths.
Container quick starts for Service Fabric:
See here for Windows: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-quickstart-containers
Here for Linux: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-quickstart-containers-linux

Setting up Spring Cloud Data Flow on Kubernetes

Do I need to install an instance of Spring Cloud Data Flow on the master server myself, or is this getting installed "automatically" as part of the deployment?
This isn't quite clear from the description at
http://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/current-SNAPSHOT/reference/htmlsingle/#_deploying_streams_on_kubernetes
I've followed the guide, though I removed every config for MySQL. Maybe this is required. I'm somewhat stuck, since it's just not assigning an external IP, and I do not see why, how to debug it, or whether I missed installing some required component.
Edit:
To clarify, I see a scdf service entry when I run
kubectl get svc
But this service never gets an external IP.
Do I need to install an instance of Spring Cloud Data Flow on the master server myself, or is this getting installed "automatically" as part of the deployment?
The Spring Cloud Data Flow server needs to be set up either outside the cluster (configured so that it knows how to connect to the Kubernetes environment), or you can run the Spring Cloud Data Flow server Docker image inside Kubernetes; the latter approach is better.
Step 6 in the link you posted above runs the SCDF docker image inside the kubernetes cluster:
```
Deploy the Spring Cloud Data Flow Server for Kubernetes using the Docker image and the configuration settings you just modified.
$ kubectl create -f src/etc/kubernetes/scdf-config-kafka.yml
$ kubectl create -f src/etc/kubernetes/scdf-secrets.yml
$ kubectl create -f src/etc/kubernetes/scdf-service.yml
$ kubectl create -f src/etc/kubernetes/scdf-controller.yml
```
MySQL is required; that's why it's in the steps.
Spring Cloud Data Flow uses an RDBMS instead of Redis for stream/task definitions, application registration, and job repositories.
You can also use any of the other supported RDBMSes.
You can install it using Helm Charts.
https://dataflow.spring.io/docs/installation/kubernetes/helm/
First, install Helm. Then install Spring Cloud Data Flow:
helm install --name my-release stable/spring-cloud-data-flow
This will install and configure the relevant pods, such as spring-cloud-dataflow-server, mysql, skipper, rabbitmq, etc.
You can also customize versions and configurations, as sketched below.
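For example, a sketch using standard Helm 2 flags; the chart version and my-values.yaml are placeholders:
```
# dump the chart's configurable values, edit the copy, then install a pinned chart version
helm inspect values stable/spring-cloud-data-flow > my-values.yaml
helm install --name my-release --version <chart-version> -f my-values.yaml stable/spring-cloud-data-flow
```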

Google Cloud - Deploy App to Specific VM Instance

I am using Google Cloud / Google Compute to host my application. I was on Google App Engine and I am migrating my code to Google Compute in order to use a customized VM Instance.
I am using the tutorial here, and I am deploying my app using:
$ gcloud preview app deploy
I set up a custom VM instance using the "Create Instance" option at the top of my Google Cloud console.
However, when I use the standard deploy gcloud command, my app is deployed to Managed VMs (managed by Google), and I have no control over those servers. I need to run the app on my custom VM because it has some custom OS-level software.
Any ideas on how to deploy the app to my custom VM Instance only? Even when I delete all the Managed VMs and try to deploy, the VMs are just re-created by Google.
The gcloud app deploy command can only be used to deploy the app to the classic App Engine sandboxed environment or to the Managed VMs. It cannot deploy your application to an instance running on GCE.
You will need to incorporate your own deployment method/script depending on the programming language you're using; one possible approach is sketched below. Of course, since GCE is just an infrastructure-as-a-service environment (versus App Engine being a platform-as-a-service), you will also need to take care of high availability (what happens when your instance becomes unavailable?), scalability (what happens when one instance is not enough to sustain the load of your application?), load balancing, and many other topics.
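As one possible minimal approach (a sketch only, not a managed solution; the instance name, zone, paths, and restart script are placeholders):
```
# copy the application to the custom instance and restart it over SSH
gcloud compute scp --recurse ./my-app my-custom-vm:~/my-app --zone us-central1-b
gcloud compute ssh my-custom-vm --zone us-central1-b --command "cd ~/my-app && ./restart.sh"
```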
Finally, if you need to install packages on your application servers, you may consider taking the Managed VMs route. It manages all the infrastructure-related matters for you (scalability, elasticity, monitoring, etc.) and still allows you to have your own custom runtime. It's still beta, though...
How to create a simple static website and deploy it on a Google Cloud VM instance
Recommended: Docker and Google Cloud SDK should be installed
Step:1
Create a Folder “personal-website” with index.html and frontend files on your local computer
Step:2
Inside the “personal-website” folder, create a Dockerfile with these two lines:
FROM httpd
COPY . /usr/local/apache2/htdocs/personal-website
Step:3
Build the image with Docker and push it to Google Container Registry
You should have the Google Cloud SDK installed, your project selected, and Docker authorized
Select the project using these commands:
gcloud config set project [PROJECT_ID]
gcloud config set compute/zone us-central1-b
After that, run these commands:
1. export PROJECT_ID="$(gcloud config get-value project -q)"
2. docker build -t gcr.io/${PROJECT_ID}/personal-website:v1 .
3. gcloud auth configure-docker
4. docker push gcr.io/${PROJECT_ID}/personal-website:v1
Step:4
Create a VM instance with the container running in it
Run this command:
1. gcloud compute instances create-with-container apache-vm2 --container-image gcr.io/${PROJECT_ID}/personal-website:v1
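Optionally, to verify the deployment (assuming HTTP traffic is allowed to the instance; the IP below is a placeholder, and the URL path matches the folder name used in the Dockerfile):
```
# find the instance's external IP, then request the copied site
gcloud compute instances list --filter="name=apache-vm2"
curl http://<EXTERNAL_IP>/personal-website/
```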