Can a Service Fabric Container project pull from Docker Hub? - azure-service-fabric

I have created a new Service Fabric Container project in Visual Studio that I am trying to test by publishing to the local cluster. I have created a Windows Container image that I have run locally in Docker. I pushed the image to a private registry in Docker Hub.
When I publish the project to the local cluster, it deploys, but then I get an error:
Error event: SourceId='System.Hosting', Property='Download:1.0:1.0'.
There was an error during download.Failed to download container image docker.io/(username)/(repository)
All the examples show pulling an image from Azure Container Registry. Does Service Fabric only work with ACR, or do I have to add additional configuration to my service manifest to use a private Docker Hub registry?
Edit: it also seems unable to find the container locally. I tried using the tagged local name of the image from the local repository (I checked using "docker images" and it is there). Same result. Service Fabric should be able to find it:
Service Fabric will pull down the image (if it's not already in the local registry) and launch a container based on the arguments you provide.
from MSDN blog on Service Fabric

It looks like the problem is that Service Fabric does not support container deployment on Windows 10 (and my dev machine is Win10, so local development/testing is out). There are notes to this effect in the Azure documentation, but I guess I didn't notice them or glossed over them...
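For reference, pulling from a private Docker Hub repository does not require ACR. Registry credentials can be declared in ApplicationManifest.xml under ContainerHostPolicies; a sketch (the service package name, account name, and password are placeholders):

```xml
<ServiceManifestImport>
  <ServiceManifestRef ServiceManifestName="MyContainerServicePkg" ServiceManifestVersion="1.0.0" />
  <Policies>
    <ContainerHostPolicies CodePackageRef="Code">
      <!-- Docker Hub credentials; PasswordEncrypted="false" means plain text -->
      <RepositoryCredentials AccountName="(username)" Password="(password)" PasswordEncrypted="false" />
    </ContainerHostPolicies>
  </Policies>
</ServiceManifestImport>
```

The image name in ServiceManifest.xml can then keep the docker.io/(username)/(repository) form.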

Related

How to get flink streaming jar to kubernetes

With Maven I am building a fat jar for my streaming app, which I have to deploy to a Kubernetes cluster. The enterprise doesn't have an internal Docker Hub, so my option is to build the image as part of Jenkins and use it in the Kubernetes job manager config. I would appreciate any example demonstrating the project layout and the steps to deploy.
I used the build.sh script from https://github.com/apache/flink/blob/release-1.7/flink-container/docker/README.md and was able to build a Docker image, and using Docker Compose I am able to get the app running. But when trying Kubernetes as specified in https://github.com/apache/flink/blob/release-1.7/flink-container/kubernetes/README.md#deploy-flink-job-cluster I am seeing "image not found".
Kubernetes does not manage images, it relies on Docker for that. You can check the Docker documentation About images, containers, and storage drivers.
In Kubernetes you can use the following registries: Google Container Registry, AWS EC2 Container Registry, Azure Container Registry, IBM Cloud Container Registry, or your own private registry.
You can read the Kubernetes documentation on how to Pull an Image from a Private Registry
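As a sketch, the usual flow is to create a docker-registry secret with kubectl and reference it from the pod spec via imagePullSecrets (the secret name "regcred" and the image coordinates are placeholders):

```yaml
# Created beforehand with:
#   kubectl create secret docker-registry regcred \
#     --docker-server=https://index.docker.io/v1/ \
#     --docker-username=<user> --docker-password=<password>
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: docker.io/<username>/<repository>:<tag>
  imagePullSecrets:
  - name: regcred
```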
You can find many projects helping with the setup of your own private registry.
One of the easiest ones is the project k8s-local-docker-registry by SeldonIO.
Start/Stop private registry in cluster
start private registry
./start-docker-private-registry
stop private registry
./stop-docker-private-registry
Check that the registry catalog can be accessed and that an image can be pushed:
(set -x && curl -X GET http://127.0.0.1:5000/v2/_catalog && docker pull busybox && docker tag busybox 127.0.0.1:5000/busybox && docker push 127.0.0.1:5000/busybox)

Service Fabric doesn't run a docker pull on deployment

I've set up VSTS to deploy a Service Fabric app with a Docker guest container. All goes well, but Service Fabric doesn't download the latest version of my image; a docker pull doesn't seem to be performed.
I've added the 'Service Fabric PowerShell script' with a 'docker pull' command but this is then only run on one of the nodes.
Is there a way to run a powershell script/command during deployment, either in VSTS or Service Fabric, to run a command across all the nodes to do a docker pull?
Please use an explicit version tag. Don't rely on 'latest'. An easy way to do this in VSTS, in the task 'Push Services' add $(Build.BuildId) in the field Additional Image Tags to tag your image.
Next, you can use a tokenizer to replace the ServiceManifest.xml image tag value in your release pipeline. One of my favorites is this one.
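A minimal sketch of what such a tokenizer does (the #{Tag}# token, the file contents, and the build id are assumptions; in a real pipeline ServiceManifest.xml comes from your build artifacts and VSTS supplies the build id):

```shell
# Fake inputs so the sketch is runnable outside VSTS
BUILD_BUILDID=1234
printf '<ContainerImageName>myrepo/app:#{Tag}#</ContainerImageName>\n' > ServiceManifest.xml

# Replace the token with the explicit build id before packaging/deploying
sed -i "s/#{Tag}#/${BUILD_BUILDID}/g" ServiceManifest.xml
cat ServiceManifest.xml   # -> <ContainerImageName>myrepo/app:1234</ContainerImageName>
```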
To deploy Docker containers to Service Fabric, you have to provide either a Docker Compose file or a Service Fabric application package with manifests.
For containers the Service Fabric hosting system controls the docker host on the nodes to run containers.
For VSTS deployments, there's a Service Fabric Deploy task and a Service Fabric Compose Deploy task for both paths.
Container quick starts for Service Fabric:
See here for Windows: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-quickstart-containers
And here for Linux: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-quickstart-containers-linux

How can I force the latest container image in ACR to deploy to Service Fabric?

When I deploy to my Windows Service Fabric cluster from Azure Container Registry, the latest image is not pulled from ACR - instead the latest image available on the cluster node is just started.
I tried
deploying as a Service Fabric application
deploying with Compose
both over VSTS and manually from the PowerShell command line.
With both options I explicitly referred to the :latest image.
Please use explicit image tags, not 'latest'. This is a best practice: if a node already has an image tagged 'latest', Service Fabric will start that cached image instead of pulling a newer one, so an explicit, unique tag is the reliable way to force the new version to deploy.

Standard way to put container images on a Kubernetes instance?

I've already read through Kubernetes tutorials. The problem is the lack of a straight answer on how to get a Kubernetes image for TeamCity into a plain Kubernetes instance.
My install doesn't use Google Cloud engine, Amazon EC2, or Azure, which means I can't use their built-in container registries.
This site appears to recommend installing Docker and using it to pull the container image:
https://hub.docker.com/r/jetbrains/teamcity-server/
This GitHub page appears to imply that a specific plugin is required for kubernetes:
https://github.com/JetBrains/teamcity-kubernetes-plugin
The Rancher Web UI has a JavaScript/HTML form to install containerized apps: "Enter the URL of a public image on any registry, or a private image hosted on Docker Hub or Google Container Registry."
-> I found teamcity-server on Docker Hub, although I have no idea if I can just give it the page (https://hub.docker.com/r/jetbrains/teamcity-server/) or if there's a special subpath that I have to give it.
For Docker Hub, the "Enter the URL" instructions are wrong and actually fail with an error. To use Docker Hub, you just type the repository name.
For example, to use teamcity-server:
https://hub.docker.com/r/jetbrains/teamcity-server/
you would type (as the app URL):
jetbrains/teamcity-server

Google Cloud - Deploy App to Specific VM Instance

I am using Google Cloud / Google Compute to host my application. I was on Google App Engine and I am migrating my code to Google Compute in order to use a customized VM Instance.
I am using the tutorial here, and I am deploying my app using:
$ gcloud preview app deploy
I setup a custom VM Instance using the "Create Instance" option at the top of my Google Cloud Console:
However, when I use the standard deploy gcloud command, my app is deployed to Managed VMs (managed by Google), and I have no control over those servers. I need to run the app on my custom VM because it has some custom OS-level software.
Any ideas on how to deploy the app to my custom VM Instance only? Even when I delete all the Managed VMs and try to deploy, the VMs are just re-created by Google.
The gcloud app deploy command can only be used to deploy the app to classic AppEngine sandboxed environment or to the Managed VMs. It cannot deploy your application to an instance running on GCE.
You will need to incorporate your own deployment method/script depending on the programming language you're using. Of course, since GCE is just an infrastructure-as-a-service environment (versus App Engine being a platform-as-a-service), you will also need to take care of high availability (what happens when your instance becomes unavailable?), scalability (what happens when one instance is not enough to sustain the load of your application?), load balancing, and many more topics.
Finally, if you need to install packages on your application servers, you may consider taking the Managed VMs route. It manages all the infrastructure-related matters for you (scalability, elasticity, monitoring, etc.) and still allows you to have your own custom runtime. It's still beta though...
How to create a simple static Website and deploy it on Google cloud VM instance
Recommended: Docker and Google Cloud SDK should be installed
Step:1
Create a folder "personal-website" with index.html and your frontend files on your local computer
Step:2
Inside the "personal-website" folder, create a Dockerfile with these two lines:
FROM httpd
COPY . /usr/local/apache2/htdocs/personal-website
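Steps 1 and 2 can be scripted; a runnable sketch (folder and Dockerfile contents exactly as above). Note the COPY destination: with httpd's default config the site will be served under the /personal-website/ URL path rather than at the web root:

```shell
# Create the project folder and write the two-line Dockerfile from Step 2
mkdir -p personal-website
cat > personal-website/Dockerfile <<'EOF'
FROM httpd
COPY . /usr/local/apache2/htdocs/personal-website
EOF
cat personal-website/Dockerfile
```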
Step:3
Build the image with Docker and push it to Google Container Registry
You should have the Google Cloud SDK installed, a project selected, and Docker authorized
Select Project using these commands:
gcloud config set project [PROJECT_ID]
gcloud config set compute/zone us-central1-b
After that, run these commands:
1. export PROJECT_ID="$(gcloud config get-value project -q)"
2. docker build -t gcr.io/${PROJECT_ID}/personal-website:v1 .
3. gcloud auth configure-docker
4. docker push gcr.io/${PROJECT_ID}/personal-website:v1
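To sanity-check the name the image will be pushed under, the composition can be sketched like this (PROJECT_ID is hardcoded here as a placeholder; in the real flow it comes from "gcloud config get-value project -q"):

```shell
PROJECT_ID="test-project-220705"                 # placeholder project id
IMAGE="gcr.io/${PROJECT_ID}/personal-website:v1" # same name used by build and push
echo "$IMAGE"   # -> gcr.io/test-project-220705/personal-website:v1
```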
Step:4
Create a VM instance with the container running in it:
gcloud compute instances create-with-container apache-vm2 --container-image gcr.io/${PROJECT_ID}/personal-website:v1