Keycloak container kcadmin error: Connect to localhost:8080 [localhost/127.0.0.1] failed - keycloak

I run a Keycloak Docker container. It starts properly according to the shell output, and after it starts I can access it from a web browser outside the host.
However, when I go into the Keycloak container and try to run a command through kcadm.sh:
/opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user $KEYCLOAK_USER --password $KEYCLOAK_PASSWORD
I get this error:
Failed to send request - Connect to localhost:8080 [localhost/127.0.0.1] failed: Connection refused (Connection refused)
It looks like JBoss refuses the request, but as I said, I can access Keycloak from an external browser.
In this case, what logs or settings should I check?
Thanks in advance.

Finally, I figured out it was my own fault: my Dockerfile, which uses the Keycloak image as its base, declared the same entry point that the Keycloak image already defines.
After I removed the entry point from my Dockerfile and rebuilt the image, it worked properly.
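For anyone hitting the same thing, the problematic pattern looked roughly like this. This is only a sketch: the entrypoint path shown is the one used by the jboss/keycloak image and may differ between versions, and the copied file name is illustrative.

```dockerfile
# The base image already defines its own ENTRYPOINT, which starts Keycloak.
FROM jboss/keycloak

# Customizations go here (illustrative file name)
COPY my-realm.json /opt/jboss/keycloak/

# This line duplicated the base image's entry point and caused the problem;
# removing it lets the base image's own ENTRYPOINT start Keycloak normally.
# ENTRYPOINT ["/opt/jboss/tools/docker-entrypoint.sh"]
```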

Related

Openshift: oc login failed

There is an OpenShift cluster in our organization, and I always get an error when using "oc login" on my computer. However, I can log in successfully from other people's computers. The error is the following:
oc login
error: dial tcp: i/o timeout - verify you have provided the correct host and port and that the server is currently running
Thanks
Please make sure you have installed kubectl on macOS first.
Try this:
oc login <web interface url>
for example
oc login https://192.168.1.100:8443
OR
oc login https://myopenshift.com:8443
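If oc login still times out after that, it is usually a plain network-reachability problem rather than an oc issue. A quick check from your machine might look like this (the host and port are examples; substitute your cluster's):

```shell
# Can we open a TCP connection to the OpenShift API endpoint at all?
# -z: just scan, -v: verbose, -w 5: five-second timeout
nc -zv -w 5 myopenshift.com 8443
```

If this fails from your computer but succeeds from others', look at VPN, proxy, or local firewall settings on your machine.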

Docker : java.net.ConnectException: Connection refused - Application running at port 8083 is not able to access other application on port 3000

I have to consume an external REST API (using restTemplate.exchange) with Spring Boot. My REST API is running on port 8083 with URL http://localhost:8083/myrest (Docker command: docker run -p 8083:8083 myrest-app).
The external API is available as a public Docker image, and after running the commands below I am able to pull and run it locally:
docker pull dockerExternalId/external-rest-api
docker run -d -p 3000:3000 dockerExternalId/external-rest-api
a) If I enter the external REST API URL, for example http://localhost:3000/externalrestapi/testresource, directly in Chrome, then I get valid JSON data.
b) If I invoke it from my myrest application in Eclipse (Spring Boot application), I still get a valid JSON response. (I am using Windows to test this.)
c) But if I run it on Docker and execute the myrest service (say http://localhost:8083/myrest), then I am facing java.net.ConnectException: Connection refused.
More details :
org.springframework.web.client.ResourceAccessException: I/O error on GET request for "http://localhost:3000/externalrestapi/testresource": Connection refused (Connection refused); nested exception is java.net.ConnectException: Connection refused (Connection refused)
P.S - I am using Docker on Windows.
# The problem
You run with:
docker run -p 8083:8083 myrest-app
But you need to run like:
docker run --network "host" --name "app" myrest-app
Passing the flag --network with value host will allow your container to access your computer's network.
Please ignore my first approach; instead use a better alternative that does not expose the container to the entire host network. It is possible to make it work that way, but it is not a best practice.
A Better Alternative
Create a network to be used by both containers:
docker network create external-api
Then run both containers with the flag --network external-api.
docker run --network "external-api" --name "app" -p 8083:8083 myrest-app
and
docker run -d --network "external-api" --name "api" -p 3000:3000 dockerExternalId/external-rest-api
The -p flag to publish the ports of the api container is only necessary if you want to access it from your computer's browser; otherwise just leave it out, because it isn't needed for the two containers to communicate on the external-api network.
TIP: docker pull is not necessary, since docker run will pull the image if it is not found on your computer.
Let me know how it went...
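An optional sanity check, assuming the container names from the commands above, to confirm both containers actually joined the network:

```shell
# Lists the names of the containers attached to the external-api network
docker network inspect external-api \
  --format '{{range .Containers}}{{.Name}} {{end}}'
```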
Call the External API
In both solutions I have added the --name flag so that we can reach the other container on the network.
So to reach the external API from your rest app, you need to use the URL http://api:3000/externalrestapi/testresource.
Notice how I have replaced localhost with api, which matches the value of the --name flag in the docker run command for your external API.
If you try to access http://localhost:3000/externalrestapi/testresource from your myrest-app container, it will try to reach port 3000 of the myrest-app container itself.
That is because each container runs in its own isolated environment, with its own network interface, filesystem view, and so on.
Docker is all about isolation.
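A quick way to confirm the container-name URL works, assuming curl is installed in the app image (otherwise wget or another tool available inside the container will do):

```shell
# Call the external API by its container name from inside the "app" container
docker exec app curl -sS http://api:3000/externalrestapi/testresource
```

If this prints the same JSON you see in the browser, the Spring Boot application only needs its base URL changed from localhost to api.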
There are 3 ways by which you can access an API from another container:
1. Instead of localhost, provide the IP address of the external host machine (i.e. the IP address of the machine on which Docker is running).
2. Create a Docker network and attach both containers to it. Then you can provide the container_name instead of localhost.
3. Use --link while starting the container (deprecated).

AWS EC2 403 Forbidden error

I have a development version of my application deployed on ec2 and I'm getting a 403 Forbidden Error on navigating to the given public IPV4 address.
I can start rails c after ssh-ing into the instance and manipulate the data from there.
Since the 403 Forbidden is from nginx, I checked its error logs and found the following:
*191 directory index of "/home/ubuntu/<app-name>/client/" is forbidden, client: <client-ip>, server: _, request: "GET / HTTP/1.1", host: "<host-ip>"
Which is clearly the error I'm getting. Checking my psql logs shows me the following:
So the error is in how my credentials are set up.
I tried to go to my pg_hba.conf; my navigation route was cd /var/lib/postgresql/9.5/main, but I can't cd into main after that since it says permission denied.
I tried to view the pg_hba.conf by running:
sudo vim /var/lib/postgresql/9.5/main/pg_hba.conf, but it tries to create a new file, so clearly the file doesn't exist at that path.
I already ensured that my credentials are correct by doing sudo -u postgres psql
Also, the request from my front end is made on port 80, and I have checked that port 80 is allowed in the security configuration for my EC2 server.
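For what it's worth, that nginx message usually means the request resolved to a directory that contains no index file (with autoindex off), and it is produced before your app or Postgres is ever involved. A minimal sketch of the relevant server block, using the path from the error message (the directive values are assumptions about this setup):

```nginx
server {
    listen 80;
    server_name _;

    # nginx returns "403 ... directory index of ... is forbidden" when the
    # request maps to this directory and none of the index files below exist.
    root /home/ubuntu/<app-name>/client;
    index index.html index.htm;
}
```

Worth checking: whether the client build actually produced an index.html in that directory, or whether root should instead point at the build output directory (e.g. client/build).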

Error on last step of Hyperledger Fabric installation of local runtime

Following the tutorial and tool setup as outlined here;
https://hyperledger.github.io/composer/installing/development-tools.html
On the very last step, I executed the script to download and install local Fabric runtime:
cd ~/fabric-tools
./downloadFabric.sh
The resulting log in the console contained this error at the very end:
# Pull and tag the latest Hyperledger Fabric base image.
docker pull hyperledger/fabric-peer:$ARCH-1.0.4
Warning: failed to get default registry endpoint from daemon (Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.35/info: dial unix /var/run/docker.sock: connect: permission denied). Using system default: https://index.docker.io/v1/
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.35/images/create?fromImage=hyperledger%2Ffabric-peer&tag=x86_64-1.0.4: dial unix /var/run/docker.sock: connect: permission denied
What should I do about this warning?
So your issue is a Docker issue - not a Hyperledger Composer issue FYI. I think this may help you https://techoverflow.net/2017/03/01/solving-docker-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket/
Possibly a docker install issue - didn't install correctly? See here https://superuser.com/questions/835696/how-solve-permission-problems-for-docker-in-ubuntu where it talks about being in the docker group. Or else you can find an answer on Google.
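On Ubuntu the usual fix for that "permission denied ... docker.sock" error is adding your user to the docker group (assuming Docker was installed from the official packages, which create that group):

```shell
# Allow the current user to talk to the Docker daemon socket
sudo usermod -aG docker "$USER"
# Group membership is read at login, so log out and back in,
# or start a new group session in the current shell:
newgrp docker
# This should now work without sudo
docker info
```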
I think this answer might be the reason behind it: the shell keeps your session's group membership cached, so in order for the group update to take effect you have to close the shell and start it again. That's why it worked after the restart.
Please correct me if I'm wrong!

exposing api via secure gateway

I want to expose one Blue Zone API to external customers via Secure Gateway. I am using Docker as the client, but I always get the errors below (the API server is in the DST environment). Can anyone help me with this? I have added the host name and port to the ACL file, and I also tried adding --allow when I run Docker, which disables 'deny all'.
[INFO] (Client ID d83dty5MIJA_rVI) Connection #2 is being established to ralbz001234.cloud.dst.ibm.com:8888
[2017-09-06 20:59:19.210] [ERROR] (Client ID d83dty5MIJA_rVI) Connection #1 to destination ralbz001234.cloud.dst.ibm.com:8888 had error: EHOSTUNREACH
When I added the secure gateway, in the "Resource Located" field I chose On-Premises; is this correct?
EHOSTUNREACH is an issue with the underlying system not being able to find a route to the host you've provided. From the machine hosting the docker client, are you able to access the resource located at ralbz001234.cloud.dst.ibm.com:8888? If the host is able to connect, then you could try adding --net=host to the docker run command:
docker run --net=host -it ibmcom/secure-gateway-client <gatewayID> -t <security_token> --allow
If the host is unable to connect as well, then this post may shed more light on routing.
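To narrow it down, a reachability check from the machine running the client container might look like this (nc may need to be installed first; the host and port are taken from the log above):

```shell
# Is there a route to the destination host at all?
ping -c 3 ralbz001234.cloud.dst.ibm.com
# Can we open a TCP connection to the destination port?
# -z: just scan, -v: verbose, -w 5: five-second timeout
nc -zv -w 5 ralbz001234.cloud.dst.ibm.com 8888
```

If both fail from the host as well, the problem is routing between that machine and the DST environment, not the secure gateway client itself.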