docker build -t "us.gcr.io/ek-airflow-stage/array_data:sree" .
Status: Downloaded newer image for python:3.7
---> 869a8debb0fd
Successfully built 869a8debb0fd
Successfully tagged us.gcr.io/ek-airflow-stage/array_data:sree
docker push "us.gcr.io/ek-airflow-stage/array_data:sree"
The push refers to repository [us.gcr.io/ek-airflow-stage/array_data]
a36ba9e322f7: Layer already exists
sree: digest: sha256:… size: 2218
gcloud run deploy "ek-airflow-stage" \
--quiet \
--image "us.gcr.io/ek-airflow-stage/array_data:sree" \
--region "us-central1" \
--platform "managed"
Deploying container to Cloud Run service [ek-airflow-stage] in project ["project"] region [us-central1]
/ Deploying... Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
Deployment failed
ERROR: (gcloud.run.deploy) Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information.
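For context: Cloud Run routes requests to the port named in the PORT environment variable (8080 by default), and a container that never binds to 0.0.0.0 on that port fails with exactly this error. A minimal sketch of a compliant server for a python:3.7-based image like the one built above (app.py and its contents are illustrative, not the actual project code):
# app.py -- minimal HTTP server bound to the port Cloud Run provides.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok\n")

if __name__ == "__main__":
    # Cloud Run injects PORT; fall back to 8080 for local runs.
    port = int(os.environ.get("PORT", "8080"))
    # Bind to 0.0.0.0, not 127.0.0.1, so Cloud Run can reach the server.
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()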
I have just installed Rancher for testing purposes. First I installed Docker, then kubectl and Helm, and then I installed Rancher. When I try to create a new Kubernetes cluster, I get this error. I searched for it and I think it is related to a certificate error.
Failed to create fleet-default/aefis-test cluster.x-k8s.io/v1beta1, Kind=Cluster for rke-cluster fleet-default/aefis-test: Internal error occurred: failed calling webhook "default.cluster.cluster.x-k8s.io": failed to call webhook: Post "https://webhook-service.cattle-system.svc:443/mutate-cluster-x-k8s-io-v1beta1-cluster?timeout=10s": service "webhook-service" not found"
I used this command to install Rancher:
sudo docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:latest --no-cacerts
I hope somebody has a good idea and a solution for this error. Thanks.
If I try to delete the webhook secret to trigger the creation of a new one, it throws this error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
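That "localhost:8080" message means the host's kubectl has no kubeconfig for Rancher's embedded cluster, so it falls back to its default address. For the single-node Docker install, one workaround is to run kubectl inside the Rancher container, which is already wired up to the embedded k3s cluster. A sketch, with the container and secret names as placeholders:
sudo docker exec -it <rancher-container> kubectl -n cattle-system get secrets
sudo docker exec -it <rancher-container> kubectl -n cattle-system delete secret <webhook-secret-name>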
I'm trying to deploy a project created in Business Central. Currently I'm using Docker's jboss/drools-workbench container, and whenever I click on the deploy button I get an error message (see below).
I have looked at the server log and there was no error when attempting to deploy the project. I have also tried deploying the Drools WAR file through standalone WildFly's management console, which failed with an error about a missing/unavailable module (slf4j), and in another instance I attempted to link a jboss/kie-server container with my drools-workbench container, after which the application became unresponsive.
My rule is a simple "hello" application:
rule "hello"
when
$name: String( )
then
System.out.println( "Hello " + $name + "!" );
end
I have also tried linking the jboss/kie-server container to the drools-workbench container:
$ docker run -p 8080:8080 -p 8001:8001 -d --name drools-workbench jboss/drools-workbench
$ docker run -p 8180:8080 -d --name kie-server --link drools-workbench:kie-wb jboss/kie-server
The server logs are at https://pastebin.com/A97exiJu. The error I get from the UI is "Deployment was skipped, couldn't find any server running in 'development' mode." I have tried changing the project to production mode, but I still get the same error, except that instead of 'development' it says 'production'.
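That message usually means no KIE Server has registered with the workbench's controller, so there is nothing to deploy to. A sketch of starting kie-server with the standard controller system properties, assuming the image passes JAVA_OPTS through to WildFly and that admin/admin is a valid workbench user (the server id, URLs, and credentials are placeholders to adapt):
$ docker run -p 8180:8080 -d --name kie-server --link drools-workbench:kie-wb \
    -e JAVA_OPTS="-Dorg.kie.server.id=sample-kie-server \
    -Dorg.kie.server.location=http://kie-server:8080/kie-server/services/rest/server \
    -Dorg.kie.server.controller=http://kie-wb:8080/drools-wb/rest/controller \
    -Dorg.kie.server.controller.user=admin \
    -Dorg.kie.server.controller.pwd=admin" \
    jboss/kie-server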
I'm using Minikube on a Windows-based machine. On the same machine, I also have docker-machine set up.
I've pointed the Docker client towards Minikube's Docker environment, so I can see the Docker environment inside Kubernetes.
I can build Docker images and run Docker containers from the Minikube VM without issues. However, when I try to start any Docker container via kubectl (from PowerShell), it fails to start, apparently because kubectl can't find the Docker image, with the following error:
Failed to pull image "image name": rpc error: code = Unknown desc =
Error response from daemon: repository "image-repo-name" not found:
does not exist or no pull access Error syncing pod
I don't know what's missing. If "docker run" can access the image, why can't "kubectl"?
Here is my Dockerfile:
FROM node:4.4
EXPOSE 9002
COPY server.js .
CMD node server.js
Make sure the image path in your YAML is correct. The image should exist on your local machine, and it should be tagged with a version number, not "latest".
Have this in your deployment YAML:
image: redis:1.0.48
run "> docker images" to see the list of images on your machine.
I run the following commands, and when I check whether the pods are running I get the following errors:
Failed to pull image "tomcat": rpc error: code = Unknown desc = no
matching manifest for linux/amd64 in the manifest list entries
kubectl run tomcat --image=tomcat --port 8080
and
Failed to pull image "ngnix": rpc error: code = Unknown desc
= Error response from daemon: pull access denied for ngnix, repository does not exist or may require 'docker login'
kubectl run nginx3 --image ngnix --port 80
I've seen a GitHub post about how to resolve this when private repos cause the issue, but not public ones. Has anyone run into this before?
First Problem
From a GitHub issue:
Sometimes, we'll have non-amd64 image build jobs finish before their amd64 counterparts, and due to the way we push the manifest list objects to the library namespace on the Docker Hub, that results in amd64-using folks (our primary target users) getting errors of the form "no supported platform found in manifest list" or "no matching manifest for XXX in the manifest list entries"
The Docker Hub manifest list is not up-to-date with the amd64 build of tomcat:latest.
Try another tag:
kubectl run tomcat --image=tomcat:9.0 --port 8080
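To check which platforms a tag actually provides, you can inspect its manifest list (on older Docker versions "docker manifest" requires experimental CLI features to be enabled):
docker manifest inspect tomcat:latest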
Second Problem
Use nginx, not ngnix. It's a typo.
$ kubectl run nginx3 --image nginx --port 80
I want to deploy an application on OpenShift Origin Online (Next Gen). There will be at least 4 pods communicating via services.
In the 1st pod I have to run ZooKeeper, so I created a pod that runs my ZooKeeper Docker image, but the pod's status is CrashLoopBackOff.
I created a new project:
oc new-project my-project
I created a new app to deploy my ZooKeeper image from Docker Hub:
oc new-app mciz/zookeeper-docker-infispector --name zookeeper
And the output message was:
--> Found Docker image 51220f2 (11 minutes old) from Docker Hub for "mciz/zookeeper-docker-infispector"
* An image stream will be created as "zookeeper:latest" that will track this image
* This image will be deployed in deployment config "zookeeper"
* Ports 2181/tcp, 2888/tcp, 3888/tcp will be load balanced by service "zookeeper"
* Other containers can access this service through the hostname "zookeeper"
* This image declares volumes and will default to use non-persistent, host-local storage.
You can add persistent volumes later by running 'volume dc/zookeeper --add ...'
* WARNING: Image "mciz/zookeeper-docker-infispector" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources with label app=zookeeper ...
imagestream "zookeeper" created
deploymentconfig "zookeeper" created
service "zookeeper" created
--> Success
Run 'oc status' to view your app.
Then I listed the pods:
oc get pods
with output:
NAME                READY     STATUS             RESTARTS   AGE
zookeeper-1-mrgn1   0/1       CrashLoopBackOff   5          5m
Then I checked the logs:
oc logs -p zookeeper-1-mrgn1
with output:
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
grep: /opt/zookeeper/bin/../conf/zoo.cfg: No such file or directory
mkdir: can't create directory '': No such file or directory
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.server.quorum.QuorumPeerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Invalid config, exiting abnormally
My Dockerfile:
FROM openjdk:8-jre-alpine
MAINTAINER mciz
ARG MIRROR=http://apache.mirrors.pair.com
ARG VERSION=3.4.6
LABEL name="zookeeper" version=$VERSION
RUN apk add --no-cache wget bash \
&& mkdir /opt \
&& wget -q -O - $MIRROR/zookeeper/zookeeper-$VERSION/zookeeper-$VERSION.tar.gz | tar -xzf - -C /opt \
&& mv /opt/zookeeper-$VERSION /opt/zookeeper \
&& cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
EXPOSE 2181 2888 3888
WORKDIR /opt/zookeeper
VOLUME ["/opt/zookeeper/conf"]
ENTRYPOINT ["/opt/zookeeper/bin/zkServer.sh"]
CMD ["start-foreground"]
There is a warning in the new-app command output:
WARNING: Image "mciz/zookeeper-docker-infispector" runs as the 'root' user which may not be permitted by your cluster administrator
You should fix the Docker image so that it does not run as root (or tell OpenShift to allow containers in this project to run as root).
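If your cluster administrator is fine with the second option, a minimal sketch is to grant the anyuid SCC to the project's default service account (run as a cluster admin; my-project is the project created above) and then redeploy:
oc adm policy add-scc-to-user anyuid -z default -n my-project
oc rollout latest dc/zookeeper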
There is a specific example of a ZooKeeper image and template that works in OpenShift:
https://github.com/openshift/origin/tree/master/examples/zookeeper
Notice the Dockerfile changes that make the container run as a non-root user.
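A sketch of the kind of change that example makes, adapted to the Alpine-based Dockerfile above (UID 1001 is a convention, not a requirement; OpenShift runs containers with an arbitrary UID in group 0, hence the group-0 permissions):
RUN adduser -D -u 1001 -G root zookeeper \
 && chown -R 1001:0 /opt/zookeeper \
 && chmod -R g+rwX /opt/zookeeper
USER 1001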