How to run two Keycloak instances in parallel on Windows?

I want to run two Keycloak admin consoles in parallel on different ports for SAML: one panel acting as the IdP and the other as the SP. How can I run two instances in parallel?

You can use the -Djboss.socket.binding.port-offset parameter to run the second instance on a different set of ports.
For example, standalone.bat -Djboss.socket.binding.port-offset=1000 starts an instance whose HTTPS port is 9443 (the default 8443 plus the offset).
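Before launching the offset instance, it can help to confirm that the shifted ports are actually free. A minimal Python sketch, assuming a subset of the WildFly/Keycloak default socket bindings (the port list here is an assumption, not exhaustive):

```python
import socket

# Assumed subset of WildFly/Keycloak default socket bindings.
DEFAULT_PORTS = {"http": 8080, "https": 8443, "management-http": 9990}
OFFSET = 1000  # matches -Djboss.socket.binding.port-offset=1000

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is currently listening on host:port."""
    with socket.socket() as s:
        return s.connect_ex((host, port)) != 0  # connect fails -> port free

for name, base in DEFAULT_PORTS.items():
    shifted = base + OFFSET
    print(f"{name}: {base} -> {shifted} (free: {port_is_free(shifted)})")
```

The offset is added uniformly to every socket binding, so checking each shifted port before startup avoids a second round of bind errors.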

Related

Is it possible to mock two ports using Java version of Wiremock-standalone?

I have an app which fetches some data from a URL on port 8085, processes the data, then sends it to another URL on port 8080 for further processing, then processes the response from port 8080 again. Is it possible to have either WireMock or WireMock standalone listen on both of these ports? I can't find a solution for this in the docs, but it seems to me it should be possible somehow. I have created a JSON file to handle the two URLs, but I can't figure out a way to handle two ports. Any solution would be highly appreciated.
My understanding is that this is possible by running two separate WireMock servers, one on each port. This can be done either through the Java WireMock API or through standalone WireMock. You'll simply need to specify the port for each.
java -jar wiremock-jre8-standalone-2.26.3.jar --port 8085 --root-dir /path/to/dir1
java -jar wiremock-jre8-standalone-2.26.3.jar --port 8080 --root-dir /path/to/dir2
The main thing to watch out for is that you'll need the servers pointing at separate data folders. I've specified this on the CLI via the --root-dir flag; in Java it can be set with .usingFilesUnderDirectory().
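The shape of the setup — two independent mock servers, each bound to its own port with its own canned responses — can be illustrated without WireMock itself. A plain-Python sketch (ephemeral ports stand in for 8085 and 8080, and the response bodies are arbitrary):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_handler(body):
    """Build a handler class that answers every GET with a fixed body."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            data = body.encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

        def log_message(self, *args):
            pass  # keep the demo quiet

    return Handler

# One server per port, each with its own canned response -- the same
# shape as running two WireMock instances side by side.
servers = [HTTPServer(("127.0.0.1", 0), make_handler(b))
           for b in ("mock-8085", "mock-8080")]
for srv in servers:
    threading.Thread(target=srv.serve_forever, daemon=True).start()

responses = []
for srv in servers:
    url = f"http://127.0.0.1:{srv.server_address[1]}/"
    responses.append(urllib.request.urlopen(url).read().decode())
print(responses)
```

Each server owns its port and its data independently, which is exactly why the two WireMock instances need separate root directories.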
The part I'm curious about is why you have to send the data to a separate port. Could you not accomplish the same workflow by processing the data, sending it to a different endpoint, and then re-processing it?

How to expose services between containers with docker-compose

On CircleCI, when I declare multiple Docker images for a job:
dockers:
app: company/image
selenium: selenium/image
app exposes port 4000 and selenium exposes port 4444.
Then from app container, I can access selenium service via localhost:4444, and on selenium container, I can access app webserver via localhost:4000.
docker-compose, however, behaves differently: it only allows me to access selenium:4444 from app, and app:4000 from selenium.
I want docker-compose to behave similar to circleci, in which it allows me to use localhost:port to access other services. How can I do that?
The way to achieve this is via network_mode:
I need to tell docker-compose to run selenium with network_mode: "service:app", so that every port selenium listens on can be reached from app via localhost:PORT (and vice versa).
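A minimal compose file sketch of this setup (the image names are placeholders; selenium joins app's network namespace, so only app may publish ports):

```yaml
version: "3"
services:
  app:
    image: company/image
    ports:
      - "4000:4000"
  selenium:
    image: selenium/standalone-chrome
    # Share app's network namespace: selenium's listening ports appear
    # on app's localhost, and vice versa.
    network_mode: "service:app"
```

With this in place, code inside app can reach Selenium at localhost:4444, matching the CircleCI behavior described above.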
This is explained here: Can docker-compose share an ip between services with discrete ports?
Also the reason for it to work is explained in the docker networking model here: https://codability.in/docker-networking-explained/

Consul.io - how to run multiple servers on same machine

This is probably a very basic question, but I'm just getting into Consul, and for testing purposes I want to run multiple servers on my PC. For example, I run the first server with
consul agent -server -bootstrap-expect=1 -dc=dev -data-dir=/tmp/consul -ui-dir="c:/consul 0.5.2/dist"
and then I try to run the second server with
consul agent -server -data-dir=/tmp/consul2 -dc=dc2
but it returns
==> Error starting agent: Failed to start Consul server: Failed to start RPC layer: listen tcp 0.0.0.0:8300: bind: Only one usage of each socket address (protocol/network address/port) is normally permitted.
What am I missing from my command?
You are launching two Consul servers using mostly default values. The problem is that both use the default ports.
The error message shows that your second Consul server tries to bind to port 8300. Your first server is already using that port, so the second fails at startup. (Note: Consul binds to a variety of ports, each with its own purpose and default setting; take a look at the documentation.)
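The underlying failure is an ordinary TCP bind conflict, which a few lines of Python can reproduce (an ephemeral port stands in for Consul's 8300):

```python
import socket

# Two "servers" trying to bind the same TCP address/port.
s1 = socket.socket()
s1.bind(("127.0.0.1", 0))         # port 0: let the OS pick a free port
port = s1.getsockname()[1]        # stand-in for Consul's default 8300
s1.listen()

s2 = socket.socket()
try:
    s2.bind(("127.0.0.1", port))  # same port -> "address already in use"
    conflict = None
except OSError as e:
    conflict = e
print("second bind failed:", conflict is not None)
```

This is exactly what happens to the second Consul agent: the OS refuses the second bind, regardless of which program is asking.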
As suggested by LenW, you can use Vagrant to set up your environment; you could follow the Consul tutorial.
If you do not want to use Vagrant or set up virtual machines yourself, you can change the default ports of the second server.
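One way to override the defaults is a per-instance configuration file. A sketch (the chosen port numbers are arbitrary; the available keys are documented under Consul's ports options):

```json
{
  "ports": {
    "server": 8310,
    "serf_lan": 8311,
    "serf_wan": 8312,
    "http": 8510,
    "dns": 8610
  }
}
```

The second server could then be started with something like consul agent -server -data-dir=/tmp/consul2 -dc=dc2 -config-file=server2.json (the file name is a placeholder).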
If you are trying to simulate a production topology on your dev machine, I would look at using Vagrant in combination with VirtualBox to simulate a couple of machines for testing.

supervisord with haproxy, paster, and node js

I have to run paster serve for my app and Node.js for my real-time requirements; both sit behind haproxy. The catch is that I need to run haproxy as root so it can bind port 80, while the other processes run as a normal user. How can I do that? I tried different ways with no success, including this command:
command=sudo haproxy
I don't think this is the right way to do it. Any ideas?
You'll need to run supervisord as root, and configure it to run your various services under non-privileged users instead.
[program:paster]
# other configuration
user = wwwdaemon
In order for this to work, you cannot set the user option in the [supervisord] section (otherwise the daemon cannot restart your haproxy server). You therefore want to make sure your supervisord configuration is writeable only by root, so no new programs can be added to a running supervisord daemon, and that the XML-RPC server options are well protected.
The latter means you need to review any [unix_http_server], [inet_http_server] and [rpcinterface:x] sections you have configured to be properly locked down. For example, use the chown and chmod options for the [unix_http_server] section to limit access to the socket file to privileged users only.
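Putting the pieces together, a configuration along these lines is one way to do it (all paths, file names, and the wwwdaemon user are placeholders for your setup):

```ini
; Sketch: supervisord itself runs as root; each program drops privileges.
[unix_http_server]
file = /var/run/supervisor.sock
chmod = 0700          ; socket accessible by its owner only
chown = root:root

[program:haproxy]
; -db keeps haproxy in the foreground so supervisord can manage it
command = /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -db
; no "user" option: haproxy stays root so it can bind port 80

[program:paster]
command = /usr/local/bin/paster serve /srv/app/production.ini
user = wwwdaemon      ; run unprivileged

[program:node]
command = /usr/bin/node /srv/app/realtime.js
user = wwwdaemon
```

Only haproxy keeps root; the application processes run under the unprivileged account, and the control socket is locked down to root.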
Alternatively, you can run a lightweight front-end server with minimal configuration to proxy port 80 to a non-privileged port, and keep this minimal server out of your supervisord setup. nginx is an excellent server for this, installed via your server's native packaging system (e.g. apt-get on Debian or Ubuntu).

Powershell: Is it possible to have a service depend on remote services

I'm using the Win32_Service WMI object, and its Change method can be used to set dependencies. Is it possible to make a service depend on services running on a different machine? Currently all the services run on the same machine, but it's possible to run each of them on a separate machine.
Nothing like that exists today, as far as I know. It's a good feature request. Check this Microsoft Connect item: http://connect.microsoft.com/WindowsServerFeedback/feedback/details/293384/remote-machine-service-dependency
That said, you can create a script or another service that polls the remote machine until the dependent service is up, and then starts the local service.
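The polling step can be sketched as a small helper that waits for a TCP port on the remote machine to accept connections. This is illustrative Python rather than PowerShell, and the host, port, and service name in the usage comment are hypothetical:

```python
import socket
import time

def wait_for_service(host, port, timeout=60.0, interval=1.0):
    """Poll until a TCP connection to host:port succeeds, or give up."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True   # remote service is accepting connections
        except OSError:
            time.sleep(interval)
    return False

# Hypothetical usage: wait for the remote dependency, then start the
# local service, e.g. via `sc start MyService` on Windows:
# if wait_for_service("db-host.example.com", 1433):
#     subprocess.run(["sc", "start", "MyService"], check=True)
```

Checking a TCP port only proves something is listening, not that the service is healthy; if the remote service exposes a health endpoint, polling that instead is more reliable.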