Podman images not showing with podman image ls - centos

I am trying to set up a build server in a Red Hat Enterprise Linux 8 (CentOS 8) virtual machine.
I installed podman by running sudo dnf install -y @container-tools
I then ran sudo podman pull mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim to pull a container image from docker:
Trying to pull mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim...
Getting image source signatures
Copying blob e936bd534ffb done
Copying blob caf64655bcbb done
Copying blob 4156e490f05f done
Copying blob 68ced04f60ab done
Copying blob 7064c3d93b4a done
Copying config e2cd20adb1 done
Writing manifest to image destination
Storing signatures
e2cd20adb1292ef24ca70de7abaddaadd57a5c932d3852b972e43b6f05a03dea
This looks successful to me, and if I run it again, I'm told that each layer "already exists". But then I run:
podman image ls
and I get an empty list back:
REPOSITORY TAG IMAGE ID CREATED SIZE
I also tried the following commands to get a list:
podman image ls -a
podman image list
podman image list -a
podman images
podman images ls
podman images ls -a
podman images list
podman images list -a
They all give an empty list.
How can I see the container image that I pulled down?
Update: I ran sudo podman run --rm --name=linuxconfig-test -p 80:80 httpd and (on another machine) browsed to the IP address of my Linux machine and got It Works! shown. So podman is working, at least in part.

Unlike Docker, Podman stores images per user. For a rootless user the default path is ~/.local/share/containers/storage, while for root it is /var/lib/containers/storage; the configured path can be verified by running podman info. Since you executed podman pull as root, the pulled image went into root's storage. This is why no images are listed when you run podman image ls without sudo.
The main idea behind Podman is that it can run entirely in user mode, without connecting to a privileged daemon. Ideally, all podman commands should be run without sudo.
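For instance, you can compare the two stores directly (the --format template works on recent Podman versions; the paths shown are the usual defaults):
podman info --format '{{.Store.GraphRoot}}'        # rootless store, e.g. /home/<user>/.local/share/containers/storage
sudo podman info --format '{{.Store.GraphRoot}}'   # root's store, e.g. /var/lib/containers/storage
sudo podman image ls                               # the image you pulled shows up here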

Turns out you have to run it with sudo. I ran:
sudo podman image ls
and it returned the list of container images.

You can use the --root option to give the path to where the images are stored, if you really do need to run as root.
An important part of using Podman, though, is that you do not need to run as root or via sudo.
Note: the moment you run this, Podman will change the owner of certain folders and files in the overlay location, and later, when you run without sudo, you will need to chown them back. So this is not recommended:
sudo podman images --root /home/xxx/.local/share/containers/storage
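If you have already run it and ended up with mixed ownership, restoring your user's storage would look something like this (same placeholder user as above):
sudo chown -R xxx:xxx /home/xxx/.local/share/containers/storage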

Related

List directory /var/lib/apt/lists/partial is missing. - Acquire (20: Not a directory)

When I execute sudo apt update, I get
Reading package lists... Done
E: List directory /var/lib/apt/lists/partial is missing. - Acquire (20: Not a directory)
Also, I was getting a status error, which I solved using
sudo cp /var/lib/dpkg/status-old /var/lib/dpkg/status
I tried sudo mkdir /var/lib/apt/lists/partial as suggested in a few other threads, but got
mkdir: cannot create directory ‘/var/lib/apt/lists/partial’: Not a directory
I even tried sudo mkdir /var/lib/apt/lists/.
Any other solution?
This answer may not fit the original setup exactly, but since others may land here the way I did: if you're using Docker and you face the same issue inside a build, you can do the following in your Dockerfile.
USER root
# RUN commands
USER 1001
Reference: Link
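In context, a complete Dockerfile using that pattern might look roughly like this; the base image and package here are purely illustrative, not from the original answer:
# Hypothetical base image whose default user is UID 1001
FROM example/nonroot-image:latest
# Temporarily switch to root for steps that need privileges
USER root
RUN apt-get update && apt-get install -y nano
# Drop back to the original unprivileged user
USER 1001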
You can try adding -u 0 to the command:
sudo docker exec -u 0 -it ContainerID bin/bash
According to Docker, the -u flag defines the username or UID the container runs as; setting -u 0 means you run the container as root, so use it with caution! Reference here
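Once inside, you can confirm which user the shell is running as:
whoami   # prints root when -u 0 was used
id -u    # prints 0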
The same happened to me. I followed this answer as a guide: The package lists or status file could not be parsed or opened
I assumed my lists were corrupted. I went to /var/lib/apt/ and saw a file (lists#) instead of a directory. I deleted it (sudo rm lists) and re-created the path (sudo mkdir -p /var/lib/apt/lists/partial). Double-check that the path gets created.
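Put together, the repair sequence is something like the following; it discards the cached package lists, which apt simply re-downloads on the next update:
sudo rm /var/lib/apt/lists                 # remove the stray file occupying the directory's name
sudo mkdir -p /var/lib/apt/lists/partial   # recreate the expected directory tree
sudo apt update                            # rebuild the lists from scratch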
I ran into the same issue while building a new container, after experimenting with a Dockerfile for a while.
What finally saved me was simply deleting all the containers I had created during this process, using docker rm.
I had this same issue when trying to install Typora on Ubuntu 20.04.
I was running into the error whenever I ran the command below:
# add Typora's repository
sudo add-apt-repository 'deb https://typora.io/linux ./'
Here's how I solved it:
I disconnected and reconnected my network connection, and when I ran the command again, it worked fine.
I think it was an issue with my network connectivity. That's all; I hope this helps.
I had a similar error when using the bitnami spark image, and the docker exec command with the -u argument didn't work for me. I found my answer in the image documentation here.
If you are using a Docker image, it may be a non-root container image. Read the documentation from the image provider to find out how to use it as a root container image.
This is how to get root access in a Docker container's bash and install your apps.
Get the container ID by name:
sudo docker ps -aqf "name=es01"
Access bash as root:
sudo docker exec -u 0 -it 3d42134dfd59 bash
Example install:
apt-get update
apt-get install nano
You first need superuser privileges: type sudo -i and then enter your password.

Where does docker-compose up keep its images?

I'm aware that if I change my Dockerfile or build directory, I'm supposed to run docker-compose build. This surely implies that docker-compose has some cache somewhere of its already-built images.
Where is it? How do I purge it?
I'd like to get back to a state where docker-compose up is forced to do the initial build steps, without me needing to remember to run docker-compose build.
I've run docker stop $(docker ps -aq) and docker X prune (for X in container, image, volume, network), but docker-compose up still refuses to run the build steps in my Dockerfile.
Or am I completely misunderstanding how docker-compose works?
You can pass the additional argument --no-cache to skip the cache during the build process (see the example after the help text below).
docker@default:~$ docker-compose build --help
Build or rebuild services.
Services are built once and then tagged as `project_service`,
e.g. `composetest_db`. If you change a service's `Dockerfile` or the
contents of its build directory, you can run `docker-compose build` to rebuild it.
Usage: build [options] [--build-arg key=val...] [SERVICE...]
Options:
    --compress           Compress the build context using gzip.
    --force-rm           Always remove intermediate containers.
    --no-cache           Do not use cache when building the image.
    --pull               Always attempt to pull a newer version of the image.
    -m, --memory MEM     Sets memory limit for the build container.
    --build-arg key=val  Set build-time variables for services.
docker@default:~$
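So forcing the build steps to run again looks like:
docker-compose build --no-cache
docker-compose up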
docker-compose uses images, which you can see with docker images:
$ docker images
REPOSITORY      TAG     IMAGE ID      CREATED        SIZE
docker_ubuntu   latest  a7dc4f9bbdfb  19 hours ago   158MB
ubuntu          16.04   0b1edfbffd27  3 weeks ago    113MB
hello-world     latest  f2a91732366c  6 months ago   264MB
The docker-compose images are prefixed with (usually) the name of the directory you're running docker-compose in. So, for me, the docker_ubuntu image.
docker image prune thinks that the images are in use, so it doesn't prune them.
To get rid of the docker-compose image, you need to delete it explicitly:
docker image rm docker_ubuntu
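Alternatively, docker-compose can itself remove the images it built for a project:
docker-compose down --rmi local   # remove images built by compose that have no custom tag
docker-compose down --rmi all     # remove every image used by the project's services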

JHipster - Using docker-compose on remote server

I would like to set up my JHipster project on a remote server, utilising docker-compose as per here.
Am I right in thinking (for the simplest approach), these are the steps I might follow:
Install docker on remote system.
Install docker-compose on remote system.
On laptop (with app src code) run ./mvnw package -Pprod docker:build to produce a docker image of the application.
Copy the image produced by this to remote server like this.
Install this image on remote system.
On laptop copy relevant yml files from src/main/docker to a directory (e.g. dir/on/remote) on the remote server.
Run docker-compose -f dir/on/remote/app.yml up on the remote server.
Thanks for your help.
Also any suggestions on how this process may be improved would be appreciated.
Assuming that your server is Ubuntu:
SSH to your server.
Install docker and docker-compose, install Java, and set JAVA_HOME.
There are two approaches:
create the docker image locally and push it to Docker Hub, if you have a Docker Hub account (a sketch of this appears at the end of this answer);
create the docker image on the server itself.
The second approach is better for reducing confusion:
Clone your repo to server
cd <APPLICATION_FOLDER>
Do
./mvnw package -Pprod docker:build -DskipTests
List the images created
docker images
You can drop -DskipTests if you are writing test code.
Do
docker-compose -f src/main/docker/app.yml up -d
List containers running
docker ps -a
Logs of the container
docker logs <CONTAINER_ID>
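For the first approach (build locally, push to a registry), the flow would be roughly the following; the image and account names are placeholders:
# on your laptop, after building the image
docker tag <image> <your-dockerhub-user>/<image>:latest
docker push <your-dockerhub-user>/<image>:latest
# on the server
docker pull <your-dockerhub-user>/<image>:latest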

Docker Madlib Postgres

I was trying to install Apache MADlib on Postgres. Having difficulty with the YUM approach, I moved to the Docker approach suggested by this website: https://pgxn.org/dist/madlib/
I was able to pull the docker image as suggested in para 1. Now, at para 2, I am stuck on the comment "Path to incubator-madlib directory". I am not able to understand whether it should be the URL of the MADlib incubator, such as "https://github.com/apache/incubator-madlib", or whether it should refer to an area on the local disk. It would be great if you could give an example of how to run this command.
2) Launch a container corresponding to the MADlib image, mounting the
source code folder to the container:
docker run -d -it --name madlib \
    -v (path to incubator-madlib directory):/incubator-madlib/ \
    madlib/postgres_9.6
The (path to incubator-madlib directory) refers to wherever you have git-cloned the MADlib code base on your machine. Say, for example, your home directory is /home/xyz/ and you have cloned the MADlib code base there; you should then have a directory called /home/xyz/incubator-madlib. You can now run the docker command documented in the MADlib repo as follows:
docker run -d -it --name madlib -v /home/xyz/incubator-madlib/:/incubator-madlib/ madlib/postgres_9.6
You were probably getting the Permission denied docker:... error after trying Robert's suggestion because $(pwd) was not referring to your incubator-madlib source code folder, but to /var/lib/docker/devicemapper/devicemapper/data, which should not be the case. In any case, it might be a better idea to provide the incubator-madlib directory's absolute path in the docker command, as specified above.
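Before running the container, it is worth sanity-checking that the path you are about to mount exists and holds the source tree:
ls /home/xyz/incubator-madlib   # should list the cloned MADlib sources, not fail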
As is documented, that is the directory on your computer where the source code resides:
where incubator-madlib is the directory where the MADlib source code resides.
So, supposing that you have downloaded the source code into ./incubator-madlib, run it like this:
docker run -d -it --name madlib -v $(pwd)/incubator-madlib:/incubator-madlib/ madlib/postgres_9.6
Then watch the container's logs:
docker logs -f madlib
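Once it is up, you can also get a shell inside the container, for instance to build and install MADlib against the bundled Postgres (the container name matches the one given above):
docker exec -it madlib bash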

Postgres image from docker can't find initdb. What am I missing?

I'm on Windows 10 with docker version 1.9.1, using Docker Toolbox.
I wanted to put up a quick postgres container, something I've done before with a Dockerfile I had lying around.
FROM postgres
ADD create-db.sql /tmp/
ADD drop_create_table.sql /tmp/
ADD db.sql /tmp/
ADD create-db.sh /docker-entrypoint-initdb.d/
It's pretty simple, and when I run the resulting image, it starts fine.
However, at the end it says:
...
server started
ALTER ROLE
/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/create-db.sh
/docker-entrypoint-initdb.d/create-db.sh: No such file or directory
If I try docker run -it <imagename> //bin/bash, I can see that the file is indeed there:
root@xxxx:/docker-entrypoint-initdb.d# ls
create-db.sh
but whenever I run the image, it tells me the file is not there.
The container promptly stops when it doesn't find the file, so I can't get a shell into the running container to investigate.