I cannot understand the concept of Docker. I am trying to install this component (Graphite rendering graphs from InfluxDB):
https://github.com/vimeo/graphite-api-influxdb-docker
This is my first time working with Docker, and I need to deploy graphite+influxdb from that link by tonight.
My question is: do I need to find the GitHub repositories for Graphite and InfluxDB, install them, and then make them work under Docker?
What is Docker for, and how can I deploy this project quickly?
As I understand it, these are the steps I need to follow from the GitHub link:
#cd /root
#yum install docker
#docker pull vimeo/graphite-api-influxdb
#git clone https://github.com/vimeo/graphite-api-influxdb-docker.git
#cd graphite-api-influxdb-docker
#ls
Dockerfile graphite-api.sh graphite-api.yaml LICENSE NOTICE README.md
#vi graphite-api.yaml (change <host> to localhost)
#docker build .
#docker run -p 8000:8000 <image-id> (for <image-id> I used vimeo/graphite-api-influxdb; is that correct?)
I feel that I am thinking in the wrong direction, and a few words about what you think would help me a lot.
First, you need to clone the GitHub repository:
git clone https://github.com/vimeo/graphite-api-influxdb-docker.git
Second, you can add your own graphite-api.yaml (if you want).
Build it:
docker build .
If you need more information about how to build a Docker image from a Dockerfile, read the "Building an image from a Dockerfile" section at that link.
You can add a name with the -t option (and use it as the ID in the next step).
And, finally, run the container:
docker run -p 8000:8000 [ID]
[ID] is shown to you when you build the Docker image (it is explained in the link).
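For example, a minimal sketch using a tag instead of the raw image ID (the tag name graphite-api is just an illustration):

docker build -t graphite-api .
docker run -p 8000:8000 graphite-api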
I hope my answer will help you.
Is it possible with Docker to combine two images into one?
Like this here:
genericA --
           \
            ---> specificAB
           /
genericB --
For example there's an image for Java and an image for MySQL.
I'd like to have an image with Java and MySQL.
No, you can only inherit from one image.
You probably don't want Java and MySQL in the same image, as it is more idiomatic to have a single component per container: create a separate MySQL container and link it to the Java container rather than putting both into the same container.
However, if you really must have them in the same image, write a Dockerfile with Java as the base image (FROM statement) and install MySQL in the Dockerfile. You should be able to largely copy the statements from the official MySQL Dockerfile.
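As a rough sketch of that idea, assuming a Debian-based Java image where a MySQL server package is available (the image tag and package name may need adjusting, and the official MySQL Dockerfile does considerably more setup):

FROM openjdk:8-jdk
# Install a MySQL server on top of the Java base image
RUN apt-get update && \
    apt-get install -y mysql-server && \
    rm -rf /var/lib/apt/lists/*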
Docker doesn't directly support this, but you can use DockerMake (full disclosure: I wrote it) to manage this sort of "inheritance". It uses a YAML file to set up the individual pieces of the image, then drives the build by generating the appropriate Dockerfiles.
Here's how you would build this slightly more complicated example:
                              --> genericA --
                             /               \
debian:jessie --> customBase                  ---> specificAB
                             \               /
                              --> genericB --
You would use this DockerMake.yml file:
specificAB:
  requires:
    - genericA
    - genericB

genericA:
  requires:
    - customBase
  build_directory: [some local directory]
  build: |
    # Dockerfile commands go here, such as
    ADD installA.sh .
    RUN ./installA.sh

genericB:
  requires:
    - customBase
  build: |
    # Here are some other commands you could run
    RUN apt-get install -y genericB
    ENV PATH=$PATH:something

customBase:
  FROM: debian:jessie
  build: |
    RUN apt-get update && apt-get install -y build-essential
After installing the docker-make CLI tool (pip install dockermake), you can then build the specificAB image just by running
docker-make specificAB
If you use docker commit, it is not easy to see which commands were used to build your image; you have to run docker history <image>.
If you have a Dockerfile, you can just look at it to see how the image was built and what it contains.
docker commit is done 'by hand' and is therefore prone to errors; docker build using a working Dockerfile is much better.
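As a quick illustration (the image name is just an example):

docker build -t myapp:1.0 .   # each Dockerfile instruction becomes a documented layer
docker history myapp:1.0      # lists the command behind every layer of the image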
You can put multiple FROM commands in a single Dockerfile.
https://docs.docker.com/reference/builder/#from
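Note that in current Docker versions, multiple FROM lines define a multi-stage build: only the final stage produces the resulting image, and earlier stages can only contribute files via COPY --from, so this does not merge two images wholesale. A minimal sketch (image tags and paths are illustrative):

# build stage
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /app/server .

# final stage: only this one becomes the resulting image
FROM debian:bookworm-slim
COPY --from=build /app/server /usr/local/bin/server
CMD ["server"]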
I am trying to run the postgres container and get error as bellow.
"Unable to find image 'name:latest' locally
docker: Error response from daemon: pull access denied for name, repository does not exist or may require 'docker login': denied: requested access to the resource is denied."
I have been working on this problem for a couple of days and I do not know what the cause is.
This is my command:
The issue is with your command:
docker run -- name
--name must be written without any spaces, but you have a space between -- and name.
Run your command again with the correct syntax.
To clarify more:
When you run docker run -- name, Docker assumes that you are trying to pull and run an image called name, and since name does not include any tag, it says it cannot find an image called name:latest.
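A corrected command might look like this (the container name and password are placeholders):

docker run --name my-postgres -e POSTGRES_PASSWORD=secret -d postgres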
Just in case anyone gets this error for the same reason I did. I had built an image locally and Docker was complaining that the image could not be found. It seems the error was happening because I built the image locally but specified a different platform for docker run (I had copied the command from somewhere else). Example:
docker build -t my-image .
docker run ... --platform=linux/amd64 my-image
linux/amd64 is not my current platform, so I removed this argument and it worked.
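Conversely, if you really do need an image for a different platform, build it for that platform explicitly instead of only passing the flag at run time (this requires BuildKit; the tag is an example):

docker build --platform=linux/amd64 -t my-image .
docker run --platform=linux/amd64 my-image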
You can't use that image because you didn't log in to your Docker Hub account.
After creating an account, find the image you want to use and then pull it.
You can simply use docker pull [OPTIONS] NAME[:TAG|@DIGEST] to pull an image from Docker Hub and then use it as a container.
According to the Docker reference:
Most of your images will be created on top of a base image from the Docker Hub registry.
Docker Hub contains many pre-built images that you can pull and try without needing to define and configure your own.
To download a particular image, or set of images (i.e., a repository), use docker pull.
P.S.: Thank you for contributing to the Stack Overflow community, but for your next question, please make sure you ask it properly by reading the Code of Conduct.
Before you pull the image from Docker Hub, use docker login and then enter your username and password.
If you have not yet registered on Docker Hub, register here first.
Then you can use this command to pull your image:
docker pull imageName
Note that the image you want to pull must already exist on Docker Hub.
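Putting it together, the whole flow is just (postgres is used as an example image name):

docker login          # prompts for your Docker Hub username and password
docker pull postgres  # the image must exist on Docker Hub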
Using the Remote - Containers extension in VSCode, I have opened a repository in a dev container. I am using VSCode with Docker on Windows/WSL2 to develop in Linux containers. In order to improve disk performance, I chose the option to clone the repository into a Docker volume instead of bind mounting it from Windows. This works great; I did some editing, but before my change was finished I closed my VSCode window to perform some other tasks. Now I have some uncommitted state in a git repository that is cloned into a Docker volume, and I can inspect the volume using the Docker extension.
My question is, how do I reconnect to that project?
One way, if the container is still running, is to reconnect to it, then do File > Open Folder and navigate to the volume mounted inside the container. But what if the container itself has been deleted? If the files were on my Windows filesystem, I could "open folder" on the Windows directory and then run "Remote-Containers: Reopen in Container" or the like, but I can't open a folder that lives in a volume, can I?
If I understood correctly, you cloned a repository directly into a container volume using the "Clone Repository in Container Volume..." option.
Assuming the volume is still there, you should be able to recover the uncommitted changes you saved inside it.
First, make sure the volume is still there: unless you gave it a particular name, it is usually named <your-repository-name>-<hex-unique-id>. Use this docker command to list the volumes and their labels:
docker volume ls --format "{{.Name}}:\t{{.Labels}}"
Notice I included the Labels property; this should help you locate the right volume, which should have a label that looks like vsch.local.repository=<your-repository-clone-url>. You can even use the filter mode of the previous command if you remember the exact URL used for cloning in the first place, like this:
docker volume ls --filter label=vsch.local.repository=<your-repository-clone-url>
If you still struggle to locate the exact volume, you can find more about the docker volume ls command in the Official docker documentation and also use docker volume inspect to obtain detailed information about volumes.
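For example, to print a volume's mount point and labels in one go (the volume name is a placeholder):

docker volume inspect <your-volume-name> --format '{{.Mountpoint}} {{.Labels}}'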
Once you know the name of the volume, open an empty folder on your local machine and create the necessary devcontainer file .devcontainer/devcontainer.json. Choose the image most suitable to your development environment; in order to recover your work by performing a simple git commit, any image with git installed should do (even those that are not specifically designed to be devcontainers; here I am using Alpine because it occupies relatively little space).
Then set the workspaceFolder and workspaceMount variables to mount your volume in your new devcontainer, like this:
{
  "image": "mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13",
  "workspaceFolder": "/workspaces/<your-repository-name>",
  "workspaceMount": "type=volume,source=<your-volume-name>,target=/workspaces"
}
If you need something more specific there you can find exhaustive documentation about the possible devcontainer configuration in the devcontainer.json reference page.
You can now use VSCode git tools and even continue the work from where you left the last time you "persisted" your file contents.
This is, by the way, the only way I know to work with VSCode devcontainers if you are using Docker through TCP or SSH with a non-local context (a.k.a. the docker VM is not running on your local machine), since your local file system is not directly available to the docker machine.
If you look at the container log produced when you ask VSCode to spin up a devcontainer for you, you will find that the actual docker run command executed by the IDE is something along these lines:
docker run ... type=bind,source=<your-local-folder>,target=/workspaces/<your-local-folder-or-workspace-basename>,consistency=cached ...
meaning that if you omit the workspaceMount variable in devcontainer.json, VSCode will actually fill it in for you, as if you had written this:
// file: .devcontainer/devcontainer.json
{
  // ...
  "workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
  "workspaceMount": "type=bind,source=${localWorkspaceFolder},target=/workspaces/${localWorkspaceFolderBasename},consistency=cached"
  // ...
}
Where ${localWorkspaceFolder} and ${localWorkspaceFolderBasename} are dynamic variables available in the VSCode devcontainer.json context.
Alternatively
If you just want to commit the changes and throw away the volume afterwards, you can simply spin up a docker container with git installed (even the tiny Alpine Linux one should do):
docker run --rm -it --name repo-recovery --mount type=volume,source=<your-volume-name>,target=/workspaces --workdir /workspaces/<your-repository-name> mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13 /bin/bash
Then, if you are familiar with the git command line tool, you can git add and git commit all your modifications. Alternatively you can run the git commands directly instead of manually using a shell inside the container:
docker run --rm -t --name repo-recovery --mount type=volume,source=<your-volume-name>,target=/workspaces --workdir /workspaces/<your-repository-name> mcr.microsoft.com/vscode/devcontainers/base:0-alpine-3.13 /bin/bash -c "git add . && git commit -m 'Recovered from devcontainer'"
You can find a full list of devcontainers provided by Microsoft in the VSCode devcontainers repository.
Devcontainers are an amazing tool for keeping your environment clean and flexible. I hope this answer helped you solve your problem and expand your knowledge of this instrument a bit.
Cheers
You can also use Remote-Containers: Clone Repository in Container Volume... again; the volume and your changes will still be there.
I would like to set up my JHipster project on a remote server utilising docker-compose, as per here.
Am I right in thinking (for the simplest approach) that these are the steps I might follow:
Install docker on remote system.
Install docker-compose on remote system.
On laptop (with app src code) run ./mvnw package -Pprod docker:build to produce a docker image of the application.
Copy the image produced by this to remote server like this.
Install this image on remote system.
On laptop copy relevant yml files from src/main/docker to a directory (e.g. dir/on/remote) on the remote server.
Run docker-compose -f dir/on/remote/app.yml up on the remote server.
Thanks for your help.
Also any suggestions on how this process may be improved would be appreciated.
Assuming that your server is Ubuntu,
SSH to your server,
install docker and docker-compose, install Java and set JAVA_HOME.
There are two approaches:
create a Docker image and push it to Docker Hub, if you have a Docker Hub account
create the Docker image on the server itself
The second approach is better, as it reduces confusion.
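For reference, the first approach would look roughly like this (the image and account names are placeholders); the rest of this answer follows the second approach:

# on your laptop
docker tag myapp yourhubuser/myapp:latest
docker push yourhubuser/myapp:latest
# on the server
docker pull yourhubuser/myapp:latest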
Clone your repo to server
cd <APPLICATION_FOLDER>
Do
./mvnw package -Pprod docker:build -DskipTests
List the images created
docker images
You can omit -DskipTests if you have written test code and want the tests to run.
Do
docker-compose -f src/main/docker/app.yml up -d
List the running containers:
docker ps -a
Check the logs of the container:
docker logs <CONTAINER_ID>
I have the following line in my Dockerfile:
RUN git clone https://github.com/assafg/youtube-remote.git ./youtube-remote
When executing sudo docker build -t 'yremote' .
I get the following error:
Cloning into './youtube-remote'...
fatal: unable to access 'https://github.com/assafg/youtube-remote.git/': Could not resolve host: github.com
The command '/bin/sh -c git clone https://github.com/assafg/youtube-remote.git ./youtube-remote' returned a non-zero code: 128
Running clone command from command line works fine.
This can happen if your container can't connect to the internet, possibly because it was started with an unusual networking option. Run this command to check default internet connectivity:
docker run ubuntu /bin/sh -c "apt-get update && apt-get install -y git && \
  git clone https://github.com/assafg/youtube-remote.git ./youtube-remote"
If that container successfully pulls down the repo, it probably means the first container has a networking problem. Try restarting Docker, or change the networking settings.
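If the failure is specifically DNS resolution ("Could not resolve host"), one common fix is to point the Docker daemon at a public DNS server; 8.8.8.8 is used here purely as an example:

# /etc/docker/daemon.json
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}

# then restart the daemon, e.g.
sudo systemctl restart docker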
Docker Network just became a first-class citizen in the Docker ecosystem. It's a really fast-moving project; this advice applies to v1.8.
This is not a very scientific answer, but sometimes restarting Docker helps, especially in cases connected with Docker networking.