Running Docker 1.3.2 on CentOS 6.6.
How can I recover from this error state?
Restarting Docker did not help
docker run -p 8888:6543 zopyx/pp.server
Unable to find image 'zopyx/pp.server' locally
Pulling repository zopyx/pp.server
e64a47ccffa6: Error pulling image (latest) from zopyx/pp.server, Driver devicemapper failed to create image rootfs 39339148edaf62e7572fc761b22a06a1b6320117360de99169150300f798e68f: device 39339148edaf62e7572fc761b22a06a1b6320117360de99169150300f798e68f already exists
fe95bf7d5f50: Download complete
9a4594fe74ea: Download complete
8c4b1edcceea: Download complete
ed5a78b7b42b: Download complete
f05fd44c10df: Download complete
4a52e4389d94: Download complete
6a6f3cabfcc0: Download complete
4c7a3dc214a2: Download complete
c444afe7e4a7: Download complete
071ab5784dd2: Download complete
6f723dfb9672: Download complete
eef4e9a4e524: Download complete
cab477dc84b8: Download complete
435c43b2ac8c: Download complete
3759d2f133f4: Download complete
bf8ebe5cdfab: Download complete
503797f1ffc0: Download complete
165b1bc94202: Download complete
39339148edaf: Error downloading dependent layers
2015/01/01 16:15:28 Error pulling image (latest) from zopyx/pp.server, Driver devicemapper failed to create image rootfs 39339148edaf62e7572fc761b22a06a1b6320117360de99169150300f798e68f: device 39339148edaf62e7572fc761b22a06a1b6320117360de99169150300f798e68f already exists
I had this issue after a docker build ran into a disk space problem. It seems that various garbage is left behind, and a reboot doesn't fix it. A sledgehammer fix is to just delete everything under /var/lib/docker (save anything you need first); everything is reinstated on the next build run.
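A minimal sketch of that sledgehammer fix on CentOS 6 (this wipes all local images, containers and volumes, so back up anything you need first):
sudo service docker stop
sudo rm -rf /var/lib/docker   # removes all local Docker state
sudo service docker start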
This issue has been seen by other people too; see Issue 3721.
The two common causes are low disk space and a slow network. The slow-network cause was marked as fixed in a Docker version earlier than the one you listed, so I suspect it is a disk space issue.
If it is not a disk space issue, you can try deleting the image from your local drive and then try again:
rm -rf /var/lib/docker/aufs/diff/<id>
rm -rf /var/lib/docker/graph/<id>
Where id is 39339148edaf62e7572fc761b22a06a1b6320117360de99169150300f798e68f
Command
gcloud run deploy api --region=$REGION --image=$IMAGE
Logs
Deploying container to Cloud Run service [api] in project [[MASKED]] region [[MASKED]]
Deploying...
Creating Revision...........interrupted
Deployment failed
ERROR: (gcloud.run.deploy) Revision [[MASKED]] is not ready and cannot serve traffic.
I've tried searching the Google Cloud documentation, but it does not mention such a problem.
How do I solve the "Revision is not ready and cannot serve traffic." error?
Try to wait a few minutes and then just re-launch the procedure. The good old "let's retry without changing anything" worked for me! :)
EDIT: I talked with a Cloud Architect who works with me, and he told me that this is the actual solution: if you retry the deploy too quickly, GCP may still have some pending operations from the previous one!
I faced the same error in Cloud Run after getting the container working correctly locally. In my case the revisions weren't showing as failing; they had a grey checkmark,
and when hovering over it I got the message
The revision is healthy but not currently serving traffic.
I just needed to click Manage Traffic and set 100% of the traffic to a new revision.
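If you prefer the command line, something along these lines should do the same thing (the service name "api" and $REGION are taken from the question and may differ for you):
# route 100% of traffic to the most recently created revision
gcloud run services update-traffic api --to-latest --region=$REGION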
I faced this problem as well. In my case I checked the "Cloud Run" section from the hamburger menu of the Google Cloud console. The "Logs" section there should give you more of an idea about what went wrong. I was missing a Python library, and adding the correct Python dependency to my requirements.txt solved the issue for me. Somehow my local testing went fine without hitting this. I hope this helps. :)
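If you'd rather pull those logs from the command line, a query along these lines should work (the service name "api" is a placeholder taken from the question):
# show recent log entries for the Cloud Run service
gcloud logging read 'resource.type="cloud_run_revision" AND resource.labels.service_name="api"' --limit=50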
I faced this problem as well; in my case my Docker image was missing a required dependency package at build time, because my Dockerfile missed some steps needed to copy the files used to install that package.
If the Cloud Build logs don't make sense to you, I think you should do the following to find your problem:
From the Google Cloud console, go to the "Container Registry" service > Images
Select your repository name
From the image version (maybe latest) that you want to check > more actions > show pull command > then copy that command, e.g. docker pull gcr.io/..
From the Google Cloud console header > select Activate Cloud Shell
In the Cloud Shell terminal, pull the Docker image of your latest build by running the "pull command" you copied before.
Start a container from this image to see what exactly happens with your failing revision (sketched below).
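Roughly, assuming the pull command you copied points at gcr.io/my-project/my-service (a placeholder), and knowing that Cloud Run injects a PORT variable, conventionally 8080:
docker pull gcr.io/my-project/my-service:latest
# run it the way Cloud Run would, so startup errors show up in your terminal
docker run --rm -e PORT=8080 -p 8080:8080 gcr.io/my-project/my-service:latest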
Long story short, I was able to build a Bitbucket .NET/MVC/Angular project successfully on a Windows 2019 Azure-hosted agent, but I am unable to make it build successfully on an Ubuntu agent.
The reason I want to build it on Ubuntu is that I noticed the build time is way faster than on the Windows agent, which makes sense considering the platforms.
I am facing this error:
Copying file from "/home/vsts/work/1/s/Bobby.ProjectA/obj/Debug/Bobby.ProjectA.pdb" to "/home/vsts/work/1/s/Bobby.ProjectA/bin/Bobby.ProjectA.pdb".
CopyRoslynCompilerFilesToOutputDirectory:
Creating directory "/bin/roslyn".
Creating directory "/bin/roslyn".
Creating directory "/bin/roslyn".
Creating directory "/bin/roslyn".
/home/vsts/work/1/s/packages/Microsoft.CodeDom.Providers.DotNetCompilerPlatform.1.0.8/build/net45/Microsoft.CodeDom.Providers.DotNetCompilerPlatform.props(17,5):
warning MSB3021: Unable to copy file "/home/vsts/work/1/s/packages/Microsoft.Net.Compilers.2.4.0/build/../tools/csc.exe" to "/bin/roslyn/csc.exe". Access to the path '/bin/roslyn' is denied. [/home/vsts/work/1/s/Bobby.ProjectA/Bobby.ProjectA.csproj]
According to this post, the issue is that VBCSCompiler is locking the src.
So I have exhausted all of the solutions here to kill VBCSCompiler, but none of them worked.
I also can't restart the Ubuntu agent during a build due to a CI limitation, and a killall VBCSCompiler bash script before the MSBuild task resulted in this error: VBCSCompiler: no process found
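A guarded version of that step avoids the hard failure when nothing matches, though it obviously doesn't address the underlying permission error:
killall VBCSCompiler || true   # don't fail the step when no compiler server process exists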
So now I am stumped: if the VBCSCompiler process is not even running, why is access to the path '/bin/roslyn' denied?
I've also tried updating/installing the Microsoft.CodeDom.Providers.DotNetCompilerPlatform package per this post here, but that didn't do/help anything.
This is how my build tasks look, if it helps:
I'm using Pop OS 20.04 and installed Unity Hub from its store (I think it just downloads the AppImage from the Unity website and installs it, but I don't know).
I set up a personal license and want to install version 2020.1.15f1. As you can see on the next tab, there is enough space to install it.
Unfortunately I get this error message when trying to install it
at the bottom of the hub this toast message appears
Does someone know what's wrong here? I clearly have enough space on my disk.
I get the same error for other versions too.
OK, here is the solution that worked for me:
Instead of installing it from the Pop OS store, I uninstalled that Unity Hub version, restarted my machine, and installed the .AppImage file from the official website. I copied that file into the root of my home directory. When running this version there was no error for me.
The reason it is showing this error is that there is not enough space in your root directory. Typically this happens when there is only a single partition (at least in my case). A workaround is to create a folder called tmp in your home directory and run the UnityHub.AppImage file pointing to that tmp folder, so the application gets an ample amount of space to install. It looks something like this:
TEMP=~/tmp ./UnityHub.AppImage
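Spelled out, assuming UnityHub.AppImage sits in your current directory:
mkdir -p ~/tmp                  # make sure the temp folder exists first
TEMP=~/tmp ./UnityHub.AppImage  # run the Hub with its temporary files redirected there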
You can go to this link for further details.
I downloaded the latest Unreal Engine repo from GitHub and ran the Setup.bat file as per the instructions on the Official Page. The command line opened up and started downloading dependencies, and everything looked fine until it reached 30% progress, after which it threw an error:
Checking dependencies...
Updating dependencies: 30% (12936/41030), 1702.0/9123.9 MiB | 5.04 MiB/s...
Failed to download 'http://cdn.unrealengine.com/dependencies/3215544-fbbce13ceb6f4aaea59e20af2d659d08/36799aacce147eba397f6eacc7a0f6c9dac946d8': Can't read from pack stream (CorruptPackFileException)
Initially I thought that it was a connection issue, even though I was on a 40-Mbps connection, but it seemed to get stuck at 30% with the same error every time. So I started searching for solutions on the internet but could not find any, except for the clue that it was possibly a connection issue.
Furthermore, I also tried to clone the repo again and again, with the same result (stuck at 30%)!
Solution:
Go to
dir-containing-the-repo \ UnrealEngine \ Engine \ Build
Open the file (which should be in the Build folder) with your favourite editor (e.g. VS Code, Notepad):
Commit.gitdeps.xml
Search for this text:
BaseUrl="http://cdn.unrealengine.com/dependencies"
Just replace the http:// with https:// and leave the rest as it is:
Your BaseUrl should now be:
BaseUrl="https://cdn.unrealengine.com/dependencies"
Save the file.
Run Setup.bat from the folder: dir-containing-the-repo \ UnrealEngine
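If you prefer to script the edit instead of doing it by hand (e.g. from Git Bash), a one-liner like this should perform the same replacement; it writes a Commit.gitdeps.xml.bak backup as a precaution:
# run from dir-containing-the-repo/UnrealEngine
sed -i.bak 's|http://cdn.unrealengine.com/dependencies|https://cdn.unrealengine.com/dependencies|g' Engine/Build/Commit.gitdeps.xml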
This happened to me as well, but it turned out I had run out of disk space on my Linux machine in AWS. I was using 8 GB.
Having a bigger volume (200 GB) fixed the error for me on the next run.
As a Vagrant user trying out Docker, I noticed one significant difference between the development workflow with Vagrant and with Docker: with Docker I need to rebuild my image from scratch every time, even if I made only minor changes to the code.
This is a major problem for me, because rebuilding an image is often very redundant and time-consuming.
Perhaps some smart Docker workflows have already been invented; if so, what are they?
I filed a feature request for the vagrant-cachier plugin for saving Docker build data and attached a bash workaround for that process. If you're okay with hacking around it yourself, you can implement the scripts in Vagrant:
caching docker build data with vagrant
Note that this procedure needs the vagrant-cachier plugin to be installed and has to save and load 300+ MB files from disk if they are new to the machine. Thus it's really slow if you have Dockerfiles with just 1-5 lines of code, but it's fast if you have Dockerfiles with a lot of LOC or images that have to be downloaded from the net.
Also note that this approach saves every intermediate build step. So if you are building an image, change a line in the middle of the Dockerfile, and build again, the docker build process will reuse all cached intermediate containers up to the changed line.
Using base images is still the preferred way, but you can combine both procedures.
Feel free to post improvements and subscribe so fgrehm will maybe implement this in his plugin natively.
As Mark O'Connor suggested, one tip is to build a base image for your container(s). This image should contain the dependencies, package installation, downloads... or any other time-consuming activity, and it should need rebuilding much less frequently than the other image(s).
In a similar way, if the final state after each step of your Dockerfile doesn't change, Docker doesn't build that layer again. So try to execute the commands whose result changes almost every run (e.g. apt-get update) as late as you can, so Docker doesn't have to rebuild the preceding steps, and likewise try to make your edits in the later steps of the Dockerfile rather than the first ones.
Another option, if you compile/download something inside the container, is to have it downloaded or compiled in a host folder and attach it to the container using the -v or --volume option of docker run.
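For example, something along these lines (the paths and image name are made up):
# reuse artifacts downloaded/compiled on the host instead of re-fetching them in the image
docker run --rm -v "$(pwd)/vendor:/data/vendor" myorg/app:dev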
Finally, there are other approaches to this issue, such as the one used by Chef with knife container. In that approach you build the container using Chef cookbooks, and each time you rebuild it (because you have edited your cookbooks...) the changes are applied as a new Docker (AUFS) layer, so you don't have to repeat the whole process. I wouldn't recommend this solution unless you have experience with Chef and already have cookbooks to manage your software. You have to work harder to get it set up, and if you only want Chef to manage Docker containers I don't think it's worth it (although Chef is a great option for managing infrastructure).
To automate the build process when you have several images that depend on each other, you can use a bash script that helps you with that task (credits to smola#github):
#!/bin/bash
# Build a list of images in order, tagging the "latest" alias where appropriate
IMAGES="${IMAGES:-stratio/base:test stratio/mesos:test stratio/spark-mesos:test stratio/ingestion:test}"
LATEST_TAG="${LATEST_TAG:-test}"
for image in $IMAGES ; do
  USER=${image/\/*/}    # part before the first "/" (namespace)
  aux=${image/*\//}     # part after the "/" (name:tag)
  NAME=${aux/:*/}       # image name
  TAG=${aux/*:/}        # image tag
  DIR=${NAME}/${TAG}    # each image lives in <name>/<tag>/ with its own Dockerfile
  pushd "$DIR"
  docker build --tag="${USER}/${NAME}:${TAG}" .
  if [[ $TAG = $LATEST_TAG ]] ; then
    docker tag "${USER}/${NAME}:${TAG}" "${USER}/${NAME}:latest"
  fi
  popd
done
There are a couple of tricks that might improve your workflow (very web-focused).
Docker caching
Always make sure you add your source to your Docker image at the very end of the Dockerfile.
Example:
COPY data/package.json /data/
RUN cd /data && npm install
COPY data/ /data
This will make sure you get optimal caching when building the image, and that Docker doesn't have to rebuild the npm packages when you are changing your source.
Also, make sure you don't have a base image that adds folders/files that are often changed (like base images doing COPY . /data/).
fig mount
Use fig (or another tool), and mount your source directory when developing. This way, you can develop with instant changes and still use the current version of your code when building the image.
development server
You can start your development web server when you are developing, and nginx when not (if you are developing a www app, but the same idea applies to other apps).
For example, in your startup script, do something like:
if [[ $DEBUG ]]; then
  /usr/bin/supervisorctl start gulp
else
  /usr/bin/supervisorctl start nginx
fi
And have autostart=false in your supervisord.conf files.
auto-refresh app
If you are developing a web app, use tools like gulp and e.g. gulp-connect; if you are developing a Python/Django app, use the runserver utility. Both reload the server when they detect changes in the files.
If you are using the if [[ $DEBUG ]] ... trick, make them listen on the same port as your normal instance (nginx). That way, you can have one configuration for your reverse proxy, i.e. just send the traffic to, for example, www:8080; it will hit your web page both in production and while you are developing.
Create a base image that holds the bulk of your application's dependencies. This will significantly reduce your docker build times.
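A minimal sketch of that idea, with hypothetical image names and the base Dockerfile kept in its own directory:
# build the heavy, rarely-changing base image once (its Dockerfile installs the dependencies)
docker build -t myorg/app-base:latest base/
# the application Dockerfile starts with "FROM myorg/app-base:latest" and only adds your code
docker build -t myorg/app:latest .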