I am using docker for continuous integration of a Scala project. Inside the container I am building the project and creating a distribution with "sbt dist".
This takes ages pulling down all the dependencies and I would like to use a docker data volume as mentioned here: http://docs.docker.io/en/latest/use/working_with_volumes/
However, I don't understand how I could get SBT to put the jar files in the volume, or how SBT would know how to read them from that volume.
SBT uses Ivy to resolve project dependencies. Ivy caches downloaded artifacts locally, and every time it is asked to pull something it first checks that cache, only downloading from the remote repository if nothing is found. By default the cache lives in ~/.ivy2, but the location is configurable. So just mount a volume, point Ivy at it (or mount it so it ends up at the default location), and enjoy the cached artifacts.
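For example, a minimal sketch (my-build-image and /srv/ivy-cache are placeholders; this assumes the build runs as root inside the container, so the default cache is /root/.ivy2):

docker run -v /srv/ivy-cache:/root/.ivy2 my-build-image sbt dist

Or, relocating the cache explicitly via the sbt.ivy.home property instead of relying on the default location:

docker run -v /srv/ivy-cache:/ivy my-build-image sbt -Dsbt.ivy.home=/ivy dist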
Not sure if this makes sense on an integration server, but when developing on localhost, I'm mapping my host's .ivy2/ and .sbt/ directories to volumes in the container, like so:
docker run ... -v ~/.ivy2:/root/.ivy2 -v ~/.sbt:/root/.sbt ...
(Apparently, inside the container, .ivy2/ and .sbt/ are placed in /root/, since we're logging in to the container as the root user.)
I'm attempting to deploy a python server to Google App Engine.
I'm trying to use the gcloud sdk to do so.
It appears the command I need to use is gcloud app deploy.
I get the following error:
me@mymachine:~/development/some-app/backend$ gcloud app deploy
ERROR: (gcloud.app.deploy) Error Response: [3] The directory [~/.config/google-chrome/Default/Cache] has too many files (greater than 1000).
I had to add ~/.config to my .gcloudignore to get past this error.
Why was it looking there at all?
The full repo of my project is public but I believe I've included the relevant portion.
I looked at your linked repo and there aren't any yaml files. As far as I know, a GAE project needs an app.yaml file because that file tells GAE what your runtime is so that GAE knows how to deploy/run your code. In fact, according to the gcloud app deploy documentation, if you don't specify any yaml files to be deployed, it will default to app.yaml in the current directory. If it can't find any in the current directory, it will try to build one.
Your repo also shows you have a Dockerfile. GAE documentation for custom runtimes says ...Custom runtimes let you build apps that run in an environment defined by a Dockerfile... In the app.yaml file for custom runtimes, you will have the following entry
runtime: custom
env: flex
Since you don't have an app.yaml file and you have a Dockerfile in which you are downloading and installing Chrome, it seems to me that gcloud app deploy is trying to infer your runtime, and this has led to it executing some or all of the contents of the Dockerfile before it attempts to push it to production. This is what is making it take a peek at the config file on your local machine until you explicitly tell it to ignore it. To be clear, I'm not 100% sure of this; I'm just trying to draw a logical conclusion.
My suggestion would be to create an app.yaml file and specify a custom runtime, or just use the Python runtime with env: flex.
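For illustration, a minimal app.yaml for either option might look like this (a sketch; the gunicorn entrypoint and the main:app module name are assumptions about your project):

runtime: custom
env: flex

or, with the Python flexible runtime:

runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app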
I'm new to docker-compose, and after reading the docs there are still some things that are unclear to me.
So far when I used docker I kept the builds in the following directory tree:
builds:
    Service A:
        Dockerfile
        ServiceA.jar
    Service B:
        Dockerfile
        ServiceB.jar
So when I wanted to run everything I used a shell script, until I read about docker-compose.
I saw that there are 2 ways of creating and running a service (in terms of how files get into the container):
1. Specifying build: path/to/my/build/directory and linking it with volumes: so it can see live code and refresh the service.
2. Specifying image: (for example java:8) and then using volumes: as above.
What I want to understand, before I dive into it, is the best practice with docker-compose: should I specify an image: for each service (replacing the FROM inside the Dockerfile), or should I specify paths to build folders plus volumes to keep live code changes? And how do volumes work, and what are they used for, when the image: tag is used?
Thanks!
In Docker you can simply run services as containers and you can put the state of each service in a volume. This means for you:
The service is run as a runtime container which will be started from an image.
The service's binaries are inside an image, the service itself writes data to volumes.
The image can either be pulled from an image repository or built on the target environment.
Images can be built with the docker build command or via the docker-compose build section.
In your example this means:
Keep the directory structure.
Use the docker-compose build section to build your images according to your Dockerfiles (see the sketch after this list).
Configure your Dockerfile to put the binaries inside the image.
Simply start the whole stack including the build with docker-compose up -d
Your binaries changed? Simply replace the whole stack with docker-compose up --build --force-recreate -d. This command will rebuild all images and replace containers.
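To make the build-section approach concrete, here is a minimal sketch for the directory tree above (the service names and the java:8 base image are assumptions):

version: "2"
services:
  service-a:
    build: "./builds/Service A"
  service-b:
    build: "./builds/Service B"

and each Dockerfile bakes the binary into the image, e.g.:

FROM java:8
COPY ServiceA.jar /app/ServiceA.jar
CMD ["java", "-jar", "/app/ServiceA.jar"]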
Why you should not place the binaries inside a volume:
You lose the advantage of image versioning.
If you replace the binary files you cannot simply fall back to an older image version.
You can retag images before you deploy your new version and fall back if an error occurs.
You can tag and save running containers for fallback and error investigation.
I'm currently experimenting with ACI construction for rkt-containers. During my experiments I've built some containers especially for the use as a dependency. I now want to use these .aci images as a dependency for other images. As these files are fetched by name (for example "quay.io/alpine-sh"), I wonder if there is a way to refer to actual local .aci files.
Is there a way to import these .aci files from the local filesystem or do I have to set up a local webserver to serve as a repository?
Dependencies in acbuild (at least up to version 0.3) can be defined only as HTTP links,
so you need to make your ACI available over HTTP to use it as a dependency in acbuild.
It's not so hard to publish your ACI and make it available over HTTP. The image archive can even be hosted on GitHub or Bitbucket.
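A typical sequence looks like this (a sketch; the image names are made up, and acbuild resolves them to HTTP URLs via meta discovery):

acbuild begin
acbuild set-name example.com/my-app
acbuild dependency add example.com/my-base
acbuild write my-app.aci
acbuild end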
Recent versions of acbuild seem to support this,
since the related issue (cache dependencies across acbuild invocations #144) is closed.
Cached ACIs are stored in the directories depstore-tar and depstore-expanded inside $CONTEXT_ROOT/.acbuild. If we somehow preserve the contents of those directories between acbuild invocations,
ACIs won't be downloaded over and over again.
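A rough sketch of that idea (untested; /var/lib/acbuild-cache is an arbitrary location of my own choosing):

CACHE=/var/lib/acbuild-cache
acbuild begin
# restore a previously saved depstore, if any
cp -a "$CACHE/depstore-tar" "$CACHE/depstore-expanded" .acbuild/ 2>/dev/null || true
# ... acbuild dependency add, copy, run, write ...
# save the depstore for the next build
mkdir -p "$CACHE"
cp -a .acbuild/depstore-tar .acbuild/depstore-expanded "$CACHE/" 2>/dev/null || true
acbuild end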
When I played with acbuild I was annoyed that it re-downloads dependencies on every build,
so I've written a script, https://bitbucket.org/legeyda/anyorigin/src/tip/acbuild-plus,
which configures symbolic links inside $CONTEXT_ROOT/.acbuild to point to
persistent directories under /var/lib/acbuild/hack. The usage is simple:
acbuild begin
acbuild-plus init target
After that all dependencies will be cached by acbuild.
You can also manually install an ACI file so that it is available to acbuild.
This is as simple as:
acbuild-plus install <your-image.aci>
I've tested the script with acbuild v0.3.0.
You can find an example of its use in the Makefile next to acbuild-plus in the repository.
My application is built on Play Framework (2.5.3) using Scala. In order to deploy the app, I create a docker image using the sbt docker:publishLocal command. I am trying to figure out where the base docker image is defined in the Play Framework folder structure. I do see a Dockerfile in the target/docker folder. I don't know how Play Framework creates this Dockerfile, or where and how it tells Docker to layer the application on the base image. I am a scala/play/docker n00b. Please help.
Here are some definitions, in order to get an idea of what I'm talking about.
Docker containers are running instances of applications or services that have been configured so that Docker can run them on any machine Docker is installed on.
Docker images are saved states of Docker containers, which are used to run new containers.
Dockerfiles are files that contain the build instructions for how a Docker image should be built. This offloads some of the many additional parameters one would otherwise have to pass in order to run a container just how they want to.
@michaJlS I am looking for details on the base docker image that Play Framework uses. Regarding your question on why I need it: 1. I'd like to know where it is. 2. I'd like to know how I can add another layer to it if need be.
1.) This depends on what you are looking for. If you want the ready-to-use image, simply run docker images and find the image you are using. If you are looking for the raw data, you can find that in /var/lib/docker/graph. However, this is pointless, as you need the image ID, which can only be obtained via the docker images command I gave above. If the image was not built correctly, it will not appear.
2.) If you wish to modify the image or attach an additional layer (by adding layers, I mean adding new files and modules to your application), you need to run the docker image (via the docker run command). Again, this is moot if the image cannot be located by the docker daemon.
While this should answer your question, I do not believe it was what you were asking. People who are concerned with docker are people who want to put applications into containers that can be run on any platform, avoiding dependency issues while maintaining relative convenience. If you are trying to use docker to package your newly created application, you are going to want to build from a Dockerfile.
In short, Dockerfiles are build instructions for things you would normally have to specify either before running a container or when inside one. If you are saying that the sbt docker command created a Dockerfile, you're going to need to look for that file and use the docker build command in order to create an instance of your image (info should be provided in that second link). Your best course of action to get your application running in a container is either to build it inside a container that has the environment of the running app, or simply to build it from a Dockerfile.
The first option would simply be docker run -ti imagename, with the image providing the environment of the machine that runs your app. You can find some images on the Docker Hub, and one that may be of interest to you is this play-framework image someone else created. That command will put you into an interactive session with the container so you can work as if you were trying to create the app within your own environment. Just don't forget to use docker commit when you are done building!
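For instance (the image and container names here are placeholders):

docker run -ti --name play-build some/play-image /bin/bash
# ... build and test the app inside the container, then exit ...
docker commit play-build myuser/my-play-app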
The faster method of building your image is to do it from a Dockerfile. If you know all the commands and have all the dependencies needed to create and run your application, simply put them into a file named "Dockerfile" (following the instructions and guidance of link 2) and run docker build -t newimagename /path/to/build-context, where the build context is the directory containing your Dockerfile. This will create an image which you can use to create containers from, or distribute however you see fit.
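A bare-bones Dockerfile along those lines might look like this (a sketch; it assumes you have run sbt stage so that target/universal/stage contains the packaged app, and that the start script is bin/myapp):

FROM java:8
COPY target/universal/stage /opt/app
WORKDIR /opt/app
CMD ["bin/myapp"]

Then docker build -t my-play-app . from the project root produces the image.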
There is no static Dockerfile; it's generated by the plugin with predefined defaults that you can override:
http://www.scala-sbt.org/sbt-native-packager/formats/docker.html
You can take a look at docker.DockerPlugin to get a better idea of the defaults used.
You can also look at DockerKeys for additional tasks, like
docker-generate-config
The documentation here, http://www.scala-sbt.org/sbt-native-packager/formats/docker.html, indicates that the base image is dockerfile/java, which doesn't seem to be on Docker Hub, but details for the image are on GitHub: https://github.com/dockerfile/java
The documentation also indicates that you can specify your own base image using the dockerBaseImage sbt setting or by creating a custom Dockerfile: http://www.scala-sbt.org/sbt-native-packager/formats/docker.html#custom-dockerfile
It's also indicated what the requirements are when using your own base image:
The image to use as a base for running the application. It should include binaries on the path for chown, mkdir, have a discoverable java binary, and include the user configured by daemonUser (daemon, by default).
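For example, overriding the base image from build.sbt looks like this (a sketch; Play's sbt plugin already pulls in sbt-native-packager's Docker support, and java:8 here stands in for whatever base image meets the requirements above):

// in build.sbt
dockerBaseImage := "java:8"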
I am attempting to install the piechart plugin in my Grafana v2.5 environment, and no matter what I do the panel does not show as an option in the UI. I cloned the repository into /var/lib/grafana/plugins as documented and restarted the grafana-server service, but that did not work. I also tried putting the plugin in a separate directory and referencing it as:
[plugin.piechart]
path = /home/usr/share/grafana/panel-plugin-piechart
I made sure that the grafana service has ownership of the plugin directory, and checked the grafana logs but it did not have useful information.
https://github.com/grafana/panel-plugin-piechart
You will need Grafana master, judging by the release date of the plugin.
Confirmed here: https://groups.io/g/grafana/message/1181
You definitely need to upgrade your Grafana. This is a very seamless operation: just install the new package on top of the old one. For safety, you can back up by copying /var/lib/grafana/grafana.db before doing that.
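Something along these lines (assuming a systemd-managed package install; the backup filename is arbitrary):

sudo systemctl stop grafana-server
sudo cp /var/lib/grafana/grafana.db /var/lib/grafana/grafana.db.bak
# upgrade the grafana package, then:
sudo systemctl start grafana-server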
Check the permissions of the files in the plugins directory.
All of a plugin's files should live in its own directory, i.e. every plugin should be contained in its own directory.
If the plugins directory has a package.json or webpack.config.js sitting outside a plugin's directory, your plugins will also fail to load.
The above-mentioned files are part of every panel plugin and should only exist in their respective directories.
Execute chown and set the owner (user:group) to grafana:grafana.
(By default, root is the owner of the files and directories.)
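For example (assuming the default plugin path and that Grafana runs as the grafana user):

sudo chown -R grafana:grafana /var/lib/grafana/plugins
sudo systemctl restart grafana-server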
Are you running Grafana as a standalone service or in a docker container?
If running as a service directly, you can visit the Grafana community page and find the plugin installation instructions there:
https://grafana.com/grafana/plugins/grafana-piechart-panel
(Verified on Grafana version 6.x.x & 7)
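On a standalone install, the instructions there boil down to installing via grafana-cli and restarting the service:

grafana-cli plugins install grafana-piechart-panel
sudo systemctl restart grafana-server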
If running as a dockerized service, you need to copy the plugin into your workspace and tell Grafana which directory inside the docker image to load plugins from. You can do this with an environment variable, mentioned in your docker-compose file:
GF_PATHS_PLUGINS=/var/lib/grafana/plugins
https://grafana.com/docs/grafana/latest/installation/configure-docker/
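A minimal docker-compose sketch (the ./plugins host path is an assumption; adjust it to wherever you copied the plugin):

services:
  grafana:
    image: grafana/grafana:7.0.0
    ports:
      - "3000:3000"
    volumes:
      - ./plugins:/var/lib/grafana/plugins
    environment:
      - GF_PATHS_PLUGINS=/var/lib/grafana/plugins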
I have been able to work with both of these options.