Using Docker Container as a REST API

I was working on some software using Docker, and I have my container uploaded to Docker. Is there any way I could turn the container into a REST API and simply make calls to it from my software?

A little remark first: You don't upload containers, you upload images.
Beyond that, of course you can run an API inside a container. To call it from another application, you have to configure the container's networking properly, i.e. publish the port the API listens on.
Here you can find a small example of a Python Flask API running inside a Docker container, which I built as a coding challenge. It's far from perfect, but you should get the idea from it.
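For illustration, a minimal sketch of the same idea (not the linked project; the file and image names are placeholders), assuming a Flask app in app.py that listens on 0.0.0.0:5000:

```
# Dockerfile -- packages a hypothetical Flask API (app.py) into an image
FROM python:3.11-slim
WORKDIR /app
RUN pip install flask
COPY app.py .
# The port the API listens on inside the container
EXPOSE 5000
CMD ["python", "app.py"]
```

Build it with `docker build -t my-api .` and run it with `docker run -d -p 5000:5000 my-api`; the `-p` flag publishes the container's port to the host, so other software can reach the API at `http://<docker-host>:5000/`.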

Related

Running a .NET Core API and MongoDB in the same container

I have a requirement to create a .NET Core API running in a Docker container. Typically, I would link that to another container running MongoDB and run the two together as separate containers.
However, for this project I need to embed MongoDB inside the actual API project container and access it from the API as an implementation local to that container.
I've been able to create the container for the API, and part of that Dockerfile installs MongoDB - but I can't work out what my connection string for MongoDB needs to be, given that it's running inside the container too. Could someone give me some pointers?
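Since mongod runs inside the same container as the API, it is reachable over the container's loopback interface, so the connection string is simply mongodb://localhost:27017. A rough sketch of how the tail of such a Dockerfile might look (the config key and DLL name are placeholders):

```
# mongod lives in the same container, so the API connects via localhost
ENV ConnectionStrings__MongoDb="mongodb://localhost:27017"
# Start mongod in the background (assumes /data/db, its default dbpath,
# exists in the image), then run the API in the foreground
CMD mongod --fork --logpath /var/log/mongod.log && dotnet MyApi.dll
```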

Can I stop Azure Container Service for Linux from issuing Docker Pull commands?

I am using an Azure App Service (Linux containers) to host a container application. Unfortunately for me, the App Service periodically issues a new Docker Pull command like this:
2018-11-08 18:39:32.512 INFO - Issuing docker pull: imagename =library/ghost:2.2.4-alpine
I don't know why it is issuing this command, and I can't find out how to stop it doing so.
I want to stop it because, although the volume on which my container stores data can survive restarts of the container, it doesn't seem to survive rebuilding the container. I suspect this might be because I'm using Docker Compose (preview), and the Docker Compose configuration sets a volume name and associates it with the container.
I currently have 'continuous deployment' toggled 'OFF' in the Azure console, and I can't find any setting which controls whether or not the underlying App Service issues the docker pull command.
Unfortunately I can't use the single-container option, as the pre-built ghost images don't appear to be set up to store data in a volume outside the container.
I have had no luck in searching the App Service FAQs for information about this behaviour. I'm hoping that I've made a foolish mistake which is easy to fix, and that someone here will have seen this and fixed it themselves.
To understand this behaviour, it helps to know how Azure Web App for Containers works.
Each time the Web App starts - whether you restart it or it restarts itself, for example after a timeout - it checks whether the image should be updated. When you use a public Docker Hub image, whether an update happens depends on Docker Hub, not on you.
So the best approach is to store the image in a private container registry, such as your own registry or Azure Container Registry, and give the image a specific tag. That way, when the Web App does its check at startup, nothing changes unless you have updated the image yourself.
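A sketch of what that looks like in practice, assuming an Azure Container Registry named myregistry (the app and resource group names are placeholders; check `az webapp config container set -h` for the exact parameter names in your CLI version):

```
# Pin a specific tag and push it to your own registry
docker pull ghost:2.2.4-alpine
docker tag ghost:2.2.4-alpine myregistry.azurecr.io/ghost:2.2.4-alpine
az acr login --name myregistry
docker push myregistry.azurecr.io/ghost:2.2.4-alpine

# Point the Web App at the pinned image in your own registry
az webapp config container set --name mywebapp --resource-group mygroup \
  --docker-custom-image-name myregistry.azurecr.io/ghost:2.2.4-alpine
```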

Publish a Service Fabric Container project

I can't manage to publish a container. In my case I want to put an MVC4 web role into the container, but what's actually inside the container doesn't matter.
Their primary tutorial for using a container to lift and shift old apps uses Continuous Delivery, which the average user does not always need.
Instead of Continuous Delivery, one may use Visual Studio's support for Docker Compose:
Connect-ServiceFabricCluster <mycluster> and then New-ServiceFabricComposeApplication -ApplicationName <mytestapp> -Compose docker-compose.yml
But following their tutorial exactly still leads to errors. The application appears in the cluster but immediately outputs an error event:
"SourceId='System.Hosting', Property='Download:1.0:1.0'. Error during
download. Failed to download container image fabrikamfiber.web"
Am I missing a whole step that they expect to be obvious? Even placing the image in my own Docker Hub registry did not help. Or does the image need to be in Azure Container Registry?
Docker Hub should work fine; ACR is not required.
These blog posts may help:
about running containers
about docker compose on Service Fabric
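A common cause of that Download error is a compose file whose image: entry refers to a tag the cluster nodes cannot actually pull. A sketch of a compose file with a fully qualified, pushed image (the Docker ID and tag are placeholders):

```
version: '3'
services:
  web:
    # Must be a repository:tag that every cluster node can pull,
    # e.g. an image pushed to Docker Hub under your own ID
    image: mydockerid/fabrikamfiber.web:1.0
    ports:
      - "80:80"
```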

Dockerized MongoDB on Heroku?

I'm not sure if this is the right Stack Exchange site to be asking this on, but I'm in the process of setting up a MEAN stack application and I want to do it right from the get-go.
I really would like to use Docker and Heroku (due to their new pipelining groups and ease of deployment as the sole developer), but I can't find any guides on how to run MongoDB as a Docker image on Heroku.
Is this even possible? I also don't really understand how you can put a database into a binary (Docker) image anyway, yet every guide I've read says to separate the microservices.
Has anyone else done this?
Thanks.
EDIT: Or is it just a better idea to leave Mongo undockerized and use MongoLabs and have two separate instances for Dev/Prod databases?
There is an official MongoDB Docker image which you can use; you just need to make sure Docker is available on Heroku.
If you are concerned about data persistence, you can mount host directories into your container so you have direct access to your data on disk. If you are worried about accessibility, you can expose ports from your container to the host so everything can connect to it.
Having your database in a container means you only have to worry about the database configuration rather than the whole stack, so when something goes down you always know where to look.
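On a Docker host you control, that looks roughly like this (the host path and image tag are examples):

```
# Run the official mongo image with a host directory mounted at /data/db
# (where mongod stores its data) and the default port published
docker run -d --name mongo \
  -p 27017:27017 \
  -v /srv/mongo-data:/data/db \
  mongo:4.4
```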

Docker: Running multiple applications VS running multiple containers

I am trying to run WildFly, Jenkins and PostgreSQL in Docker containers.
As far as I can understand from the articles I've read, the Docker way is to have each application run in a different container.
Is my assumption correct, or is it better to have a single container containing all three applications?
AFAIK the basic philosophy behind Docker is to run one service per container. You can run a whole application inside a single container, but I don't think that goes well with the way Docker works. Running different services in different containers gives you more flexibility and better modularity for your app.
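Following that one-service-per-container philosophy, a minimal Docker Compose sketch for this stack (image tags and the password are examples):

```
version: '3'
services:
  wildfly:
    image: jboss/wildfly:latest
    ports:
      - "8080:8080"
  jenkins:
    image: jenkins/jenkins:lts
    ports:
      - "8081:8080"   # Jenkins also listens on 8080 inside its container
  db:
    image: postgres:13
    environment:
      POSTGRES_PASSWORD: example   # required by the postgres image
    volumes:
      - pgdata:/var/lib/postgresql/data   # keep data across rebuilds
volumes:
  pgdata:
```

`docker-compose up -d` starts all three; each service can reach the others by service name on the default network, so the database host for an app deployed on WildFly would simply be db.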