I created a container using the evaluation image for Mobile First (I built an image and pushed it), then deleted the container. Even though I deleted it, it still shows up in my Dashboard with state "Unknown". What is worse is that it is taking 1 GB of memory out of my 2 GB quota, so I can neither create a new container with memory >= 1 GB nor delete the "Unknown" one. I tried logging out and using a different browser, with no luck.
The result of "ice ps" is zero rows.
The command
ice ps
returns only running containers.
Please try running the following command:
ice ps -a
This one lists all containers in your space.
If you see your container listed, use the following command to remove it:
ice rm <container id> --force
If this does not work for you, I suggest opening a ticket with IBM Bluemix Support, and someone there can help you delete the container. Here is the link to create a support ticket:
https://developer.ibm.com/bluemix/support/#support
I am following this tutorial:
https://docs.confluent.io/platform/current/platform-quickstart.html
At step 3, when I click on "Connect", I see no option to add a connector.
How do I add a connector?
For reference, I am using an M1 MacBook Air and Docker v4.12.0.
You'll only be able to add a connector if you are running a Kafka Connect server and have properly configured Control Center to use it.
On Mac: the quickstart needs at least 6 GB of Docker memory, but Docker Desktop for Mac allocates only 2 GB by default. Change the allocation to 6 GB in the Docker Desktop app by navigating to Preferences > Resources > Advanced.
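You can verify the current allocation from the command line; docker info reports the total memory available to the Docker VM (in bytes):
docker info --format '{{.MemTotal}}'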
Assuming you have already done that, look at the output of docker-compose ps and docker-compose logs connect to determine whether the Connect container is healthy and running.
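For example, a minimal check (the service name connect matches the Confluent quickstart compose file):
docker-compose ps                                      # all services should show state "Up"
docker-compose logs connect | grep -iE 'error|fail'    # look for startup failures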
Personally, I don't use Control Center, since I prefer to manage connectors as config files rather than copy/pasting or clicking through UI fields. In other words, if the Connect container is healthy, try using its HTTP endpoints directly with curl, Postman, etc.
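For reference, a rough sketch of that workflow (assuming Connect's REST API is exposed on its default port 8083; connector.json is an illustrative config file name):
# list connectors that are currently running
curl -s http://localhost:8083/connectors
# create a connector from a JSON config file
curl -s -X POST -H "Content-Type: application/json" --data @connector.json http://localhost:8083/connectors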
I had exactly the same issue: there was no way to add a connector.
Updating the container version from my old 6.2.1 to 7.3.0 solved it.
I am using an Azure App Service (Linux containers) to host a container application. Unfortunately for me, the App Service periodically issues a new docker pull command like this:
2018-11-08 18:39:32.512 INFO - Issuing docker pull: imagename =library/ghost:2.2.4-alpine
I don't know why it is issuing this command, and I can't find out how to stop it doing so.
I want to stop it because, although the volume on which my container stores data survives restarts of the container, it doesn't seem to survive rebuilding the container. I suspect this might be because I'm using Docker Compose (preview), and the docker compose configuration sets a volume name and associates it with the container (see the sketch below).
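For reference, the volume wiring looks roughly like this (a simplified sketch, not my exact file; the content path matches the official Ghost image):
version: '3'
services:
  ghost:
    image: ghost:2.2.4-alpine
    volumes:
      - ghostdata:/var/lib/ghost/content
volumes:
  ghostdata: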
I currently have 'continuous deployment' toggled 'OFF' in the Azure console, and I can't find any setting that controls whether the underlying App Service issues the docker pull command.
Unfortunately I can't use the single-container option, as the pre-built Ghost images don't appear to be set up to store data in a volume outside the container.
I have had no luck searching the App Service FAQs for information about this behaviour. I'm hoping that I've made a foolish mistake that is easy to fix, and that someone here has seen this and fixed it themselves.
To understand your issue, it helps to know how Azure Web App for Containers works.
Each time the Web App starts, whether you restarted it or it restarted itself after a timeout, it checks whether the image should be updated. When you use a public Docker Hub image, that update depends on Docker Hub, not on anything you control.
So the best approach is to store the image in a private container registry, such as your own registry or Azure Container Registry, and give the image a specific tag. That way, as long as you do not update the image yourself, the check the Web App performs on startup will find nothing new to pull.
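As a sketch, pinning your own copy of the image in ACR might look like this (the registry name myregistry is illustrative):
az acr login --name myregistry
docker pull ghost:2.2.4-alpine
docker tag ghost:2.2.4-alpine myregistry.azurecr.io/ghost:2.2.4-alpine
docker push myregistry.azurecr.io/ghost:2.2.4-alpine
Then point the Web App at myregistry.azurecr.io/ghost:2.2.4-alpine.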
I can't manage to publish a container. In my case I want to put an MVC4 web role into the container, but actually, what's inside the container does not matter.
Their primary tutorial for using a container to lift-and-shift old apps uses Continuous Delivery, which the average user does not always need.
Instead of Continuous Delivery, one may use Visual Studio's support for Docker Compose:
Connect-ServiceFabricCluster <mycluster>
New-ServiceFabricComposeApplication -ApplicationName <mytestapp> -Compose docker-compose.yml
But following their tutorial exactly still leads to errors. The application appears in the cluster but immediately outputs an error event:
"SourceId='System.Hosting', Property='Download:1.0:1.0'. Error during download. Failed to download container image fabrikamfiber.web"
Am I missing a whole step that they expect to be obvious? Even placing the image in my own Docker Hub registry did not help. Or does it need to be in Azure Container Registry?
Docker Hub should work fine; ACR is not required.
These blog posts may help:
about running containers
about docker compose on Service Fabric
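One thing worth double-checking: the error reports the image as just fabrikamfiber.web, so make sure the compose file references the fully qualified Docker Hub repository. A sketch (the account name is illustrative):
version: '3'
services:
  web:
    image: mydockeraccount/fabrikamfiber.web:latest
    ports:
      - "80:80"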
I created a project on Google Cloud a long time ago, and I am currently having some problems with it: the only response I seem to get is Internal Server Error.
I tried connecting to the compute instance through SSH, but it does not help much because:
as far as I remember, I used to be able to see all the code on the compute instance. It's no longer there; the home folder only has some hidden files, and I am not sure where to look for the actual project files.
the only error I managed to get from a log file was: Error syncing pod 9c8e56bc-4298-11e6-ab50, skipping: failed to "StartContainer" for "postgres" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=postgres pod=postgres_default(9c8e56bc-4298-11e6-ab50)". This makes me think there are some issues with Postgres, which has a persistent disk of its own, but there seems to be no easy way to find out how much of that disk is occupied.
even though I am an admin on that project and should receive detailed emails (with stack traces) every time there is an error, I am not receiving anything at all.
This behaviour started today, all of a sudden, and I haven't touched the project in almost 2 years, so I am completely lost.
Thanks.
How can I check the remaining size of a persistent disk on Google Cloud?
For this part, I finally found a way to do it today. I'll describe the steps here so that it is easy for anyone to follow.
First, go to the Google Console Disks page: https://console.cloud.google.com/compute/disks
Identify the persistent disk you are interested in; in my case, this was called pg-data-disk. Then click on the VM instance shown in the "In use by" column.
This will open an SSH connection to the VM instance to which your persistent disk is attached. In the SSH window, run the following command: sudo lsblk
This will reveal the disk's device name (in my case, sdb), so you can now run: sudo df -h /dev/<your device name> (for me, sudo df -h /dev/sdb). This command gives you the exact disk usage.
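If you prefer to skip the console entirely, the same check can be done in one line with gcloud (the instance name is illustrative; adjust the device path to whatever lsblk reported):
gcloud compute ssh my-instance --command 'sudo df -h /dev/sdb'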
As for the other part of the question: I was actually using Docker containers orchestrated by Kubernetes, and I had totally forgotten about it.
Will upgrade my RAM and get back to work.
Thank you all.
I am creating an app in Origin 3.1 using my Docker image.
Whenever I deploy the image, a new pod gets created, but it restarts again and again and finally ends up with status "CrashLoopBackOff".
I analysed the logs for the pod, but they show no error; all log data is as expected for a successfully running app. Hence I am not able to determine the cause.
I came across the link below today, which says "running an application inside of a container as root still has risks, OpenShift doesn't allow you to do that by default and will instead run as an arbitrary assigned user ID."
What is CrashLoopBackOff status for openshift pods?
My image runs as the root user only; what do I need to do to make this work? The logs show no error, but the pod keeps restarting.
Could anyone please help me with this?
You are seeing this because whatever process your image starts isn't a long-running process: it finds no TTY, so the container simply exits and gets restarted repeatedly, which is a "crash loop" as far as OpenShift is concerned.
Your Dockerfile contains the following:
ENTRYPOINT ["container-entrypoint"]
You need to check what this "container-entrypoint" script is actually doing.
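One way to check is to inspect the image and open a shell in it (the image name is a placeholder):
# print the image's configured entrypoint
docker inspect --format '{{.Config.Entrypoint}}' <your-image>
# open a shell inside the image to read the script itself
docker run --rm -it --entrypoint /bin/sh <your-image>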
Did you use the -p or --previous flag to oc logs to see if the logs from the previous attempt to start the pod show anything?
Red Hat's recommendation is to make files group-owned by GID 0, since the user in the container is always in the root group. You won't be able to chown, but you can selectively choose which files are writable.
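In Dockerfile terms, the usual pattern is something like this (a sketch, assuming the application lives under /app):
# group-own /app by GID 0 and mirror owner permissions to the group,
# so the arbitrary UID OpenShift assigns can still read and write
RUN chgrp -R 0 /app && chmod -R g=u /app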
A second option:
In order to allow images that use either named users or the root (0) user to build in OpenShift, you can add the project's builder service account (system:serviceaccount:<project>:builder) to the privileged security context constraint (SCC). Alternatively, you can allow all images to run as any user.
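For reference, the corresponding commands would be along these lines (replace <project> with your project name; note that relaxing SCCs has security implications):
# let the project's builder service account use the privileged SCC
oc adm policy add-scc-to-user privileged system:serviceaccount:<project>:builder
# or allow all images in the project to run as any user
oc adm policy add-scc-to-user anyuid -z default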
Can you see the logs using the following?
kubectl logs <podname> -p
This should show you the errors explaining why the pod failed.
I was able to resolve this by creating a script "run.sh" with the following content:
while :; do
    sleep 300    # keep a foreground process alive so the container does not exit
done
and in Dockerfile:
ADD run.sh /run.sh
RUN chmod +x /*.sh
CMD ["/run.sh"]
This way it works. Thanks, everybody, for pointing out the reason, which helped me find the resolution. One doubt I still have: why does the process exit in OpenShift only in this case? I have tried running a Tomcat server in the same way, and it works fine without the sleep in the script.
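On that last doubt: Tomcat works without the sleep because its startup script keeps a foreground process running, so the container never exits, whereas an image whose entrypoint finishes (or daemonizes) is restarted by OpenShift. The sleep loop is just a stand-in for a real long-running foreground process; a Tomcat-based image typically ends with something like this instead (a sketch, not your exact Dockerfile):
# catalina.sh run keeps Tomcat in the foreground, so no sleep loop is needed
CMD ["catalina.sh", "run"]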