Service Fabric doesn't run a docker pull on deployment - powershell

I've set up VSTS to deploy a Service Fabric app with a Docker guest container. All goes well, but Service Fabric doesn't download the latest version of my image; a docker pull doesn't seem to be performed.
I've added the 'Service Fabric PowerShell script' task with a 'docker pull' command, but this is then only run on one of the nodes.
Is there a way to run a PowerShell script/command during deployment, either in VSTS or Service Fabric, that performs a docker pull across all the nodes?

Please use an explicit version tag; don't rely on 'latest'. An easy way to do this in VSTS: in the 'Push Services' task, add $(Build.BuildId) in the 'Additional Image Tags' field to tag your image.
Next, you can use a tokenizer to replace the ServiceManifest.xml image tag value in your release pipeline. One of my favorites is this one.
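For illustration, here is a minimal PowerShell sketch of what such a token-replacement step could do, assuming a single container code package; the manifest path and variable names are placeholders, and $env:BUILD_BUILDID is the environment-variable form of $(Build.BuildId):

# Rewrite the <ImageName> value in ServiceManifest.xml so the image
# reference uses the build ID instead of 'latest'.
$manifestPath = "MyAppPkg\MyServicePkg\ServiceManifest.xml"   # assumed package layout
$tag = $env:BUILD_BUILDID                                     # $(Build.BuildId) in VSTS
$xml = [xml](Get-Content $manifestPath)
# Assumes one container code package; adjust for your manifest's shape.
$container = $xml.ServiceManifest.CodePackage.EntryPoint.ContainerHost
$repository = ($container.ImageName -split ':')[0]            # drop any existing tag
$container.ImageName = "{0}:{1}" -f $repository, $tag
$xml.Save((Resolve-Path $manifestPath))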

To deploy Docker containers to Service Fabric, you have to provide either a Docker Compose file or a Service Fabric application package with manifests.
For containers, the Service Fabric hosting system controls the Docker host on the nodes to run containers.
For VSTS deployments, there's a Service Fabric Deploy task and a Service Fabric Compose Deploy task, covering both paths.
Container quick starts for Service Fabric:
Windows: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-quickstart-containers
Linux: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-quickstart-containers-linux
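For the Compose path outside VSTS, the PowerShell looks roughly like this, assuming the Service Fabric SDK module is installed; the endpoint and names are example values:

# Connect to the cluster (endpoint is an example value).
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.westus.cloudapp.azure.com:19000"
# Deploy a Docker Compose file as a Service Fabric compose deployment.
New-ServiceFabricComposeDeployment -DeploymentName "mydeployment" -Compose "docker-compose.yml"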

Related

Container deployment with self-managed Kubernetes in AWS

I am relatively new to AWS and Kubernetes. I have created a self-managed Kubernetes cluster running in AWS (not using EKS). I have successfully created a pipeline in AWS CodePipeline that builds my container and uploads it to ECR. Currently I am manually deploying the created image to the cluster by running the following commands:
kubectl delete deployment my-service
kubectl apply -f my-service_deployment.yaml
How can I automate this manual step in AWS CodePipeline? How can I run the above commands as part of the pipeline?
Regarding my deployment YAML files, where should I store them? (Currently I store them locally on the master node.)
I am missing some best practices for this process.
Your YAML manifests shouldn't be on your master node (never); they should be stored in a version control system (such as GitHub/GitLab/Bitbucket, etc.).
To automate the deployment of your Docker image based on a new artifact version in ECR, you can use a great tool named FluxCD. It is actually very simple to install (https://fluxcd.io/docs/get-started/), and you can easily configure it to automatically deploy your images to your cluster each time there is a new image in your ECR registry.
This way your CodePipeline will build the code, run the tests, build the image, tag it, and push it to ECR, and FluxCD will deploy it to Kubernetes. (It can also be configured natively to sync to your cluster every X minutes, based on your configuration, so even a small change to your manifests will be deployed automatically!)
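As a rough sketch of the bootstrap step from the Flux get-started guide, with the image automation controllers enabled (owner, repository, and path are placeholders):

# Install Flux into the cluster, pointing it at the Git repository that
# holds the manifests; the extra components enable image scanning/automation.
flux bootstrap github `
  --owner=my-org `
  --repository=my-cluster-config `
  --branch=main `
  --path=./clusters/production `
  --components-extra=image-reflector-controller,image-automation-controller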
You can also make use of Argo CD; it's very easy to install and use compared to AWS CodePipeline.
Argo CD was designed specifically for Kubernetes, and thus offers a much better way to deploy to K8s.
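For example, registering an application with the Argo CD CLI could look like this; the repository URL, path, and namespace are placeholders:

# Register the manifests repository as an Argo CD application and let it
# sync automatically whenever the manifests change.
argocd app create my-service `
  --repo https://github.com/my-org/my-cluster-config.git `
  --path k8s/my-service `
  --dest-server https://kubernetes.default.svc `
  --dest-namespace default `
  --sync-policy automated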

How to edit the Caddyfile with ECR

I am working with two containers, one for Caddy and one for my application, both hosted on ECS Fargate; my application is pulled from ECR, while the Caddy image is pulled from the official Docker repository. Both containers are running fine, but I am not sure how to access the Caddyfile that Caddy needs to serve SSL certificates for my application. I am able to get the standard Caddy webpage from the container, but I need to somehow edit the Caddyfile for my use case. Could someone help me out?
According to the documentation for the Caddy image you are using, you should be mounting a /data folder and a /config folder. To do that with ECS on Fargate you need to create an Amazon Elastic File System, and then configure those mount points in your ECS task definition to use the EFS.
If you just want to specify the domain name, the documentation says you can simply pass a --domain parameter in the caddy command line. You would do that by editing the command in your ECS task definition.
I think you will have to configure more than that in order to get it to proxy requests to your other container though.
If you want to bundle your config file into the docker image that is being deployed, the documentation I linked also describes how you can create your own version of the image. You would do that locally, push the image to ECR, and then configure your ECS task definition to use that image instead of the standard Docker Hub image.
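A sketch of that local workflow, with the account ID, region, and repository name as placeholders (the Dockerfile, per the Caddy image documentation, just copies your Caddyfile over the default one):

# Dockerfile contents (two lines):
#   FROM caddy
#   COPY Caddyfile /etc/caddy/Caddyfile
$registry = "123456789012.dkr.ecr.us-east-1.amazonaws.com"
# Authenticate Docker against your private ECR registry.
aws ecr get-login-password --region us-east-1 |
    docker login --username AWS --password-stdin $registry
# Build the custom image and push it for your ECS task definition to use.
docker build -t "$registry/my-caddy:1.0" .
docker push "$registry/my-caddy:1.0"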

How to connect to an on-premise Kubernetes cluster using a Jenkinsfile

I am trying to deploy an application to a Kubernetes cluster using a Jenkins multibranch pipeline and a Jenkinsfile, but I am unable to make a connection between Jenkins and Kubernetes. I can't share more details from the code side here.
I just want to know if there is any way to make this connection (Jenkins and Kubernetes) using a Jenkinsfile, so that I can use it to deploy the application to Kubernetes.
Following is the technology stack, which might clarify my setup:
The Jenkinsfile is kept at the root of the project in GitHub.
A separate Jenkins server, where the pipeline that deploys the application to Kubernetes is created.
An on-premise Kubernetes cluster.
You need credentials to talk to Kubernetes. When you have automation like Jenkins running jobs, it's best to create a service account for Jenkins; look here for some documentation. Once you create the Jenkins service account, you can extract an authentication token for that account, which you put into Jenkins. Since your Jenkins is not a pod inside your Kubernetes cluster, what I would recommend is uploading a working kubectl config as a secret file in the Jenkins credential manager.
Then, in your Jenkins job configuration, you can use that secret. Jenkins can put the file somewhere for your job to access, and in your Jenkinsfile you can then run commands with "kubectl --kubeconfig= ...".
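As a minimal sketch, assuming the kubeconfig was stored as a "secret file" credential and the binding exposes its path in an environment variable (KUBECONFIG_FILE is a name you would choose yourself; use $KUBECONFIG_FILE instead of $env:KUBECONFIG_FILE in an sh step):

# Apply the manifest against the on-premise cluster using the bound kubeconfig.
kubectl --kubeconfig $env:KUBECONFIG_FILE apply -f my-service_deployment.yaml
# Optionally wait for the rollout so the Jenkins build fails on a bad deploy.
kubectl --kubeconfig $env:KUBECONFIG_FILE rollout status deployment/my-service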

Can a Service Fabric Container project pull from Docker Hub?

I have created a new Service Fabric Container project in Visual Studio that I am trying to test by publishing to the local cluster. I have created a Windows Container image that I have run locally in Docker. I pushed the image to a private registry in Docker Hub.
When I publish the project to the local cluster, it deploys, but then I get an error:
Error event: SourceId='System.Hosting', Property='Download:1.0:1.0'.
There was an error during download.Failed to download container image docker.io/(username)/(repository)
All the examples show pulling an image from Azure Container Registry. Does Service Fabric only work with ACR, or do I have to add additional configuration to my service manifest to use a private Docker Hub registry?
Edit: it also seems unable to find the container locally. I tried using the tagged local name of the image from the local repository (I checked using "docker images" and it is there), with the same result. Service Fabric should be able to find it:
Service Fabric will pull down the image (if it's not already in the local registry) and launch a container based on the arguments you provide.
from MSDN blog on Service Fabric
It looks like the problem is that Service Fabric does not support container deployment on Windows 10 (and my dev machine is Win10, so local development/testing is out). There are notes to this effect in the Azure documentation, but I guess I didn't notice them or glossed over them...

How can I force the latest container image in ACR to deploy to Service Fabric?

When I deploy to my Windows Service Fabric cluster from Azure Container Registry, the latest image is not pulled from ACR - instead the latest image available on the cluster node is just started.
I tried
deploying as a Service Fabric application
deploying with Compose
both over VSTS and manually from the PowerShell command line.
With both options I explicitly referred to the :latest image.
Please use explicit image tags, not 'latest'. This is a best practice.
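For example (registry and version number are placeholders), tagging and pushing an explicit version means each release produces a new image reference, which forces the nodes to pull it:

# Tag the image with an explicit version and push it; the deployment then
# references myapp:20180515.3 instead of myapp:latest.
docker tag myapp:latest myregistry.azurecr.io/myapp:20180515.3
docker push myregistry.azurecr.io/myapp:20180515.3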