Publish a Service Fabric Container project - docker-compose

I can't manage to publish a container. In my case I want to put an MVC4 web role into the container. ...but actually, what's inside the container does not matter.
Their primary tutorial for using a container to lift-and-shift old apps uses Continuous Delivery, which the average user does not always need.
Instead of Continuous Delivery, one may use Service Fabric's support for Docker Compose:
Connect-ServiceFabricCluster <mycluster> and then New-ServiceFabricComposeApplication -ApplicationName <mytestapp> -Compose docker-compose.yml
But following their tutorial exactly still leads to errors. The application appears in the cluster but immediately outputs an error event:
"SourceId='System.Hosting', Property='Download:1.0:1.0'. Error during
download. Failed to download container image fabrikamfiber.web"
Am I missing a whole step that they expect to be obvious? Even pushing the image to my Docker Hub registry myself did not help. Or does it need to be Azure Container Registry?

Docker Hub should work fine; ACR is not required.
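A common cause of that download error is an image reference in docker-compose.yml that the cluster cannot resolve. A minimal sketch, assuming the image was pushed to Docker Hub beforehand (the account name mydockeruser is a placeholder):

```
version: '3'
services:
  web:
    # fully qualified image name, pushed beforehand with:
    #   docker push mydockeruser/fabrikamfiber.web:1.0
    image: mydockeruser/fabrikamfiber.web:1.0
    ports:
      - "80:80"
```

If the image name has no account or registry prefix, every node will try to pull it from the default registry namespace and will likely fail with exactly this Download error.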
These blog posts may help:
about running containers
about docker compose on Service Fabric

Related

Running a .NET Core API and MongoDB in the same container

I have a requirement to create a .NET Core API running in a Docker container. Typically, I would link it to another container running MongoDB and run both together via the Docker CLI, but as separate containers.
However, for this project I need to embed MongoDb inside the actual API project container and access it from the API as a local implementation (to that container).
I've been able to create the container for the API, and part of that Dockerfile installs MongoDB - but I can't work out what my connection string for MongoDB needs to be, given that it's running inside the container too. Could someone give me some pointers?
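Since MongoDB runs in the same container as the API, it is reachable over localhost. A minimal sketch, assuming the default MongoDB port and a hypothetical API assembly name:

```
# Dockerfile excerpt (hypothetical): start mongod in the background, then the API.
# Because both processes share the container's network namespace, the API's
# connection string is simply: mongodb://localhost:27017/mydatabase
CMD mongod --fork --logpath /var/log/mongod.log && dotnet MyApi.dll
```

Running multiple processes in one container is generally discouraged, but given the stated requirement this is roughly the shape it takes.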

Using a Docker Container as a REST API

I was working on a piece of software using Docker, wherein I have my container uploaded to Docker Hub. Is there any way that I could convert the container into a REST API and simply make calls to it from my software?
A little remark first: You don't upload containers, you upload images.
Beyond that, of course you can run an API inside a container. In order to call it from another application, you would have to configure the container's networking properly.
Here you can find a small example of a Python Flask API running inside a Docker container, which I built as a coding challenge. It's far from perfect, but you should get the idea from that.
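To illustrate the networking piece, a minimal sketch (image and endpoint names hypothetical): publish the API's port so other applications can reach it over HTTP:

```
# run the API container, mapping container port 5000 to host port 8080
docker run -d --name my-api -p 8080:5000 myuser/flask-api:latest

# any other application can then call it like a normal REST service:
curl http://localhost:8080/health
```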

Can I stop Azure Container Service for Linux from issuing Docker Pull commands?

I am using an Azure App Service (Linux containers) to host a container application. Unfortunately for me, the App Service periodically issues a new Docker Pull command like this:
2018-11-08 18:39:32.512 INFO - Issuing docker pull: imagename =library/ghost:2.2.4-alpine
I don't know why it is issuing this command, and I can't find out how to stop it doing so.
I want to stop it because although the volume on which my container stores data can survive restarts of the container, it doesn't seem to survive rebuilding the container. I suspect that this might be because I'm using the Docker Compose (preview), and the docker compose configuration sets a volume name and associates it with the container.
I currently have 'continuous deployment' toggled 'OFF' in the Azure console, and I can't find any setting which controls whether or not the underlying App Service issues the docker pull command.
Unfortunately, I can't use the single-container Docker option, as the pre-built Ghost images don't appear to be set up to store data in a volume outside the container.
I have had no luck in searching the App Service FAQs for information about this behaviour. I'm hoping that I've made a foolish mistake which is easy to fix, and that someone here will have seen this and fixed it themselves.
To understand this behaviour, it helps to know how Azure Web App for Containers works.
Each time the Web App starts, whether you restarted it or it restarted itself after a timeout, it checks whether the image should be updated. When you use a public Docker Hub image, that update depends on Docker Hub, not on anything you control.
So the best approach is to store the image in a private container registry, such as a private Docker Hub repository or Azure Container Registry, and give the image a specific tag. That way, as long as you do not update the image yourself, the check the Web App performs at startup will find nothing new to pull.
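A minimal sketch of pinning the Web App to a specific tag in a private registry (registry and resource names hypothetical):

```
# tag and push the image to a private Azure Container Registry
docker tag ghost:2.2.4-alpine myregistry.azurecr.io/ghost:2.2.4-alpine
docker push myregistry.azurecr.io/ghost:2.2.4-alpine

# point the Web App at the pinned, private image
# (registry credentials must also be configured for a private registry)
az webapp config container set --name my-webapp --resource-group my-rg \
    --docker-custom-image-name myregistry.azurecr.io/ghost:2.2.4-alpine
```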

Build multiple images with Docker Compose?

I have a repository which builds three different images:
powerpy-base
powerpy-web
powerpy-worker
Both powerpy-web and powerpy-worker inherit from powerpy-base using the FROM keyword in their Dockerfile.
I'm using Docker Compose in the project to run a Redis and RabbitMQ container. Is there a way for me to tell Docker Compose that I'd like to build the base image first and then the web and worker images?
You can use depends_on to enforce an order; however, that order will also be applied at "runtime" (docker-compose up), which may not be correct.
If you're only using Compose to build images, it should be fine.
You could also split it into two Compose files: a docker-compose.build.yml which has depends_on for the build, and a separate one for running the images as services (see the sketch below).
There is a related issue: https://github.com/docker/compose/issues/295
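A minimal sketch of that build-ordering idea, assuming the three Dockerfiles live in sibling directories (paths hypothetical):

```
# docker-compose.build.yml: build the base image first, then its dependents
version: '2'
services:
  base:
    build: ./base
    image: powerpy-base
  web:
    build: ./web
    image: powerpy-web     # its Dockerfile starts with: FROM powerpy-base
    depends_on:
      - base
  worker:
    build: ./worker
    image: powerpy-worker  # its Dockerfile starts with: FROM powerpy-base
    depends_on:
      - base
```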
About running containers:
It was a bug before, but it has been fixed since Docker 1.10.
https://blog.docker.com/2016/02/docker-1-10/
Start linked containers in correct order when restarting daemon: This is a little thing, but if you’ve run into it you’ll know what a headache it is. If you restarted a daemon with linked containers, they sometimes failed to start up if the linked containers weren’t running yet. Engine will now attempt to start up containers in the correct order.
About building:
You need to build the base image first (see the sketch below).
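Concretely, a sketch (directory layout hypothetical):

```
# build the base image first so that FROM powerpy-base resolves locally
docker build -t powerpy-base ./base

# then let Compose build the dependent images
docker-compose build
```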

Cyclic backups of a docker postgresql container

I would like to deploy an application using Docker and would like to use a PostgreSQL container to hold my data.
However, I am worried about losing data, so I need backups.
I know I could run a cron job on the host to dump the data out of the container; however, that approach is not containerized, and when I deploy to a new location I have to remember to add the cron job.
What is a good, preferably containerized, approach to implementing rotating data backups from a PostgreSQL Docker container?
Why not deploy a second container, linked to the PostgreSQL one, that does the backups?
It can contain a crontab within, together with instructions on how to upload the backup to Amazon S3, or some other secure storage in the cloud that will not fail even in case of an atomic war :)
Here's some basic information on linking containers: https://docs.docker.com/userguide/dockerlinks/
You can also use Docker Compose in order to deploy a fleet of containers (at least 2, in your case). If your "backup container" uploads stuff to the cloud, make sure you don't put your secrets (such as AWS keys) into the image at build time. Put them into the container at run-time. Here's more information on managing secrets using Docker.
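A minimal sketch of that two-container idea with Docker Compose (the backup image and database names are hypothetical); the backup container reaches PostgreSQL over the link and dumps on a schedule:

```
version: '2'
services:
  db:
    image: postgres:9.4
    environment:
      POSTGRES_PASSWORD: example   # pass real secrets at run time, not build time
    volumes:
      - pgdata:/var/lib/postgresql/data
  backup:
    image: mybackup-image          # hypothetical image with pg_dump, cron, and an S3 client
    links:
      - db
    environment:
      PGHOST: db
      PGPASSWORD: example
    # the crontab inside this container runs something like:
    #   pg_dump -U postgres -h db mydb | gzip > /backups/$(date +%F).sql.gz
    # followed by an upload of /backups to S3 and removal of old dumps
volumes:
  pgdata:
```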