Unable to access High Availability Management for clustering via web in RHEL 6 after starting luci service

I am trying to setup clustering in RHEL 6.6. Though the luci service has been started, the High Availability Management web page doesn't load.
For example:
[root@red1 Desktop]# service ricci start
Starting ricci: [ OK ]
[root@red1 Desktop]# passwd ricci
Changing password for user ricci.
New password:
Retype new password:
passwd: all authentication tokens updated successfully.
[root@red1 Desktop]# service luci start
Start luci... [ OK ]
Point your web browser to https://red1.com:8084 (or equivalent) to access luci
I open the above URL, but the browser fails with the following error message:
An error occurred during a connection to red1.com:8084. Issuer certificate is invalid. (Error code: sec_error_ca_cert_invalid)
Please advise.

I had missed starting rgmanager, which I guess was the problem.
service rgmanager start
After running this along with the rest of the services in the order below, things went smoothly.
service ricci start
service luci start
service rgmanager start
service cman start
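To have these come up automatically after a reboot on RHEL 6, the same services can also be enabled at boot with chkconfig (a minimal sketch, assuming the stock SysV init scripts):
# enable the cluster services at boot (RHEL 6)
chkconfig ricci on
chkconfig luci on
chkconfig rgmanager on
chkconfig cman on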

Grpc server not listening to port 5001 when run as a Windows service

I created the GrpcGreeter and GrpcGreeterClient projects in Visual Studio 2019 from the following page:
https://learn.microsoft.com/en-us/aspnet/core/tutorials/grpc/grpc-start?view=aspnetcore-5.0&tabs=visual-studio
The only change I made to these examples was that, in order for the GrpcGreeter app to run as a Windows service, I added .UseWindowsService() to IHostBuilder CreateHostBuilder. I published both to local folders from VS, and selected Self-Contained for the Deployment Mode.
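For reference, the hosting change amounts to something like this in Program.cs (a minimal sketch based on the default gRPC template, assuming the Microsoft.Extensions.Hosting.WindowsServices package is referenced; exact namespaces may differ in your project):
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Hosting;

public class Program
{
    public static void Main(string[] args) =>
        CreateHostBuilder(args).Build().Run();

    // UseWindowsService() lets the generic host run under the Windows Service Control Manager
    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseWindowsService()
            .ConfigureWebHostDefaults(webBuilder =>
            {
                webBuilder.UseStartup<Startup>();
            });
}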
Server and client work fine using https://localhost:5001 when run from either the VS environment or when running the published GrpcGreeter.exe and GrpcGreeterClient.exe directly.
I then used "Sc create" to successfully create a Windows service with GrpcGreeter.exe. Then on the Services window I started the service.
The problem is that when run as a Windows service, GrpcGreeter.exe does not listen on port 5001, as shown with netstat -anb (it does listen on port 5354, apparently). And of course, when I then run GrpcGreeterClient.exe, it does not connect. When GrpcGreeter.exe is not run as a Windows service, netstat shows that it is listening on 5001, and GrpcGreeterClient.exe talks to it just fine.
A look at Event Viewer shows 3 errors happening immediately whenever I start the service on the Services window. I'm abbreviating them below.
1st:
Faulting application name: GrpcGreeter.exe, version: 1.0.0.0, time stamp: 0x5f6b3846
Faulting module name: ntdll.dll, version: 10.0.19041.546, time stamp: 0xd49544eb
Exception code: 0xc0000374
Fault offset: 0x000e6763
...
2nd:
Fault bucket , type 0
Event Name: FaultTolerantHeap
Response: Not available
Cab Id: 0
Problem signature:
P1: GrpcGreeter.exe
...
3rd:
Fault bucket 2242750238749681031, type 1
Event Name: APPCRASH
Response: Not available
Cab Id: 0
Problem signature:
P1: GrpcGreeter.exe
...
Please help. Thank you.
This is a very old post, but I also came across this issue when deploying a Windows service with gRPC. I'm not sure whether it will solve your problem, but my issue was that when you deploy as a Windows service, the server needs to have a certificate configured. This is stated in the Microsoft documentation under the "Set HTTPS certificates by using configuration" part.
So I created a self-signed certificate using openssl, then added the .pfx file to the Kestrel configuration as shown in the Microsoft documentation (see the sketch after the service commands below), built it, and published it as a Windows service. After that, just proceed with the normal service creation procedure using
sc create
// and then
sc start
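For reference, the Kestrel certificate configuration mentioned above can go in appsettings.json along roughly these lines (the file name and password below are placeholders, not values from the original post):
{
  "Kestrel": {
    "Certificates": {
      "Default": {
        "Path": "grpcgreeter.pfx",
        "Password": "<your pfx password>"
      }
    }
  }
}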
The Windows service should now be running the gRPC server without any issue (in my case at least). One thing to note: because this is a self-signed certificate, which is not exactly trusted, the frontend will get an error about the cert when it attempts to communicate with the server. You just need to trust it and it will be fine.
In the browser, just go to the address that is hosting the gRPC server, for example https://localhost:5001, click Advanced, and trust it.
In my case I was using Electron + Angular, so I just needed to add the code snippet below. Now my frontend can communicate with the gRPC server in the Windows service normally.
// ignore self signed certificate in dev mode
if (process.env.NODE_ENV === 'development') {
  // SSL/TLS: this is the self signed certificate support
  app.on('certificate-error', (event, webContents, url, error, certificate, callback) => {
    // On certificate error we disable the default behaviour (stop loading the page)
    // and we then say "it is all fine - true" to the callback
    event.preventDefault();
    callback(true);
  });
}

How to fix "Your JWT secret key is not set up, you will not be able to log into the JHipster" during the startup of jhipster-registry container

I am trying to launch a microservice application with JHipster. Each of my services runs in a Docker container. When jhipster-registry is starting up, I receive this error:
2019-06-18 18:58:39.066 INFO 1 --- [ main] i.g.j.r.security.jwt.TokenProvider : The JWT key used is not Base64-encoded. We recommend using the `jhipster.security.authentication.jwt.base64-secret` key for optimum security.
2019-06-18 18:58:39.067 ERROR 1 --- [ main] i.g.j.r.security.jwt.TokenProvider :
----------------------------------------------------------
Your JWT secret key is not set up, you will not be able to log into the JHipster.
Please read the documentation at https://www.jhipster.tech/jhipster-registry/
This causes the jhipster-registry service to exit with a code of 1.
However, my application.yml file currently contains a base64-encoded JWT secret key:
jhipster:
  security:
    authentication:
      jwt:
        base64-secret: MjNiZjdiNDk5MGM4MjE4ODI4YzRiNjZkOTRhNTU3YmNkMWRmMWYxMzkzYjAzMzI5OWI0MzNjNzVmZjg0ZDRkNDkwOTNkNjlmNjU4Zjc0NmEyYTQ3NzViMWIzZTliYjNkNjI5ZQ==
I am currently using the docker image jhipster/jhipster-registry:v5.0.1. I have tried using v5.0.2 and the error persists. I have also tried changing my application.yml to include an empty secret parameter like so, but this didn't result in any change.
secret:
base64-secret: MjNiZjdiNDk5MGM4MjE4ODI4YzRiNjZkOTRhNTU3YmNkMWRmMWYxMzkzYjAzMzI5OWI0MzNjNzVmZjg0ZDRkNDkwOTNkNjlmNjU4Zjc0NmEyYTQ3NzViMWIzZTliYjNkNjI5ZQ==
I also tried the solution suggested in "How to fix Invalid JWT with JHipster Registry [Docker]?"
and it did not work for me. My docker-compose.yml and application.yml are exactly the same as those of the other people on my team, and the registry service launches fine for them. How do I resolve this error?
EDIT: This started happening after I changed my Windows password.
Probably your Docker doesn't have access to the filesystem where the config lies.
In my case the firewall was blocking the access.
Check your Docker Desktop installation:
Docker Desktop -> Settings -> Shared Drives -> Reset credentials -> re-enter your new credentials.
Go to your Docker Desktop settings and under Shared Drives see if you've selected the drives you want to share with Docker.
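To confirm from the command line that the config is actually visible inside the container, something like this can help (the container name is a placeholder, and the mount path depends on your docker-compose file):
# list the volumes/binds the container actually received
docker inspect --format '{{ json .Mounts }}' jhipster-registry
# look inside the running container for the mounted config
docker exec -it jhipster-registry ls -l /central-config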

Unable to fetch data from T24(TAFJ R18) when working with design studio

I faced the below error when importing T24 applications in Design Studio. The T24 server (TAFJ R18) that I am trying to connect to is up (JBoss is running), but I still face this issue:
Unable to fetch data from T24. Check your connection details and if T24 is up and running.
Subroutine:
Return Code: FAILURE
Response size: 1
Response 1 ->Response Code: EB-SECURITY.VIOLATION,Response Type: NON_FATAL_ERROR,Response Text: Please check your Login Credential and/or access rights,Response Info: 98748ebf-f73d-4e86-8506-950b2fd0b5d2,
It looks like the username and password you have provided in t24-server/config/server.properties are not correct. Make sure you can log in to T24 (Browser or Classic) with the T24 user provided in these settings:
#T24 User name used for introspection and deployment (TAFJ)
username=INPUTT
#T24 Encrypted password used for introspection and deployment (TAFJ)
password={encoded}gXhuXZkbBuL09T8WFlRR+w==
Other important settings in this file:
#T24 host name to connect to (IP address or Domain name)
host=localhost
#T24 Web service (TAFJ) port number to connect
ws.port=8080
#Protocol: ftp, sftp or local (TAFC & TAFJ: used for *.b and *.d file transfer)
protocol=ws
#context for web-service
context=axis2
Also check the connectivity, and whether anyone is restarting JBoss while you are importing.
Check that the server status is "active" in Design Studio, or reset the server connection.
And if you are using a VPN to connect to the database, make sure it is still active. A quick reachability check from the Design Studio machine is sketched below.
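For example (host and port taken from the server.properties above; adjust them to your environment):
# is the T24 host reachable at all?
ping -c 3 localhost
# is the TAFJ web-service port answering?
curl -v http://localhost:8080/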

Issue connecting composer to Blockchain on Bluemix - identity or token does not match

I have Fabric Composer 0.7.2 installed on my Mac, and I was able to follow this thread to get it connected to my Blockchain (v0.6.1 of Fabric) on Bluemix.
fabric-composer-integration-with-bluemix-blockchain-service
Now I am trying to build an ubuntu (16.04) docker container and run composer-rest-server there. When I try to connect to my blockchain service from my docker container (using the same id, WebAppAdmin, that I used on my mac) I get an error:
Discovering types from business network definition ...
Connection fails: Error: Identity or token does not match.
It will be retried for the next request.
{ Error: Identity or token does not match.
at /home/composer/.nvm/versions/node/v6.10.3/lib/node_modules/composer-rest-server/node_modules/grpc/src/node/src/client.js:417:17 code: 2, metadata: Metadata { _internal_repr: {} } }
I tried copying the cert from my mac to my docker container:
/home/composer/.composer-credentials/member.WebAppAdmin
but when I did that I got a different error that says "signature does not verify". I did some additional testing, and I discovered that if I used an id that I had not previously used with composer (i.e. user_type1_0) then I could connect, and I could see a new cert in my .composer-credentials directory.
I tried deleting that container and building a new one (I had messed something else up), and I could not use that same user id again.
Does anybody know how security and these certs are supposed to work? It would seem as though something to do with certificate generation/validation is tied to the client (i.e. hardware address), such that if I try to re-use an id on a different machine, the certs or keys or something don't match. I have a way to make things work, but it doesn't seem like it's the right way if I can't use the same id from different machines.
Thanks!
Hi, I tried to recreate this by having blockchain running on a Unix machine; then I copied my connection profile and certificate to my Mac, and edited the connection profile to update the IP address and key store. I then did a composer network ping and it worked fine.
I am using composer v0.7.4, so you could try that.
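The ping itself looks roughly like this (flag names as in the 0.7.x composer-cli; the profile, network name, and enrollment id/secret below are placeholders):
composer network ping -p bluemixProfile -n my-network -i WebAppAdmin -s <enrollSecret>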
I have also faced this issue, and concluded that there is inconsistent behavior when deploying a network using Composer on a cloud environment, including Bluemix. The problem is not with Composer, but with Fabric 0.6.
I am assuming that this issue is also indirectly related to the following known bugs in Fabric 0.6, which will not be fixed in Fabric 0.6.
ERROR:
"
throw er; // Unhandled 'error' event
^
Error
at ClientDuplexStream._emitStatusIfDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:189:19)
at ClientDuplexStream._readsDone (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:158:8)
at readCallback (/home/ubuntu/.nvm/versions/node/v6.9.5/lib/node_modules/composer-cli/node_modules/grpc/src/node/src/client.js:217:12)
"
So far, we have understood that the following three JIRA issues are the root cause: essentially, the cloud networking layer ends up killing the idle event hub connection after a period of inactivity, and the Fabric SDK cannot handle this.
https://jira.hyperledger.org/browse/FAB-4002
https://jira.hyperledger.org/browse/FAB-3310
https://jira.hyperledger.org/browse/FAB-2787
Conclusion:
There is no alternative way of fixing this issue on Bluemix or any other cloud environment with Fabric 0.6.
You may not experience this issue with Fabric 1.0, but it is still possible, as all of the above-mentioned defects are not fixed yet.

ejabberd on openshift - Failed RPC connection to the node

I am trying to configure ejabberd on DIY cartridge on openshift, following the guide here:
Erlang and Ejabberd on OpenShift
I followed everything successfully up to here:
Next you can start ejabberd running the following 2 commands, which you’ll want to put in your .openshift/action_hooks/start script
There is no error and ejabberd seems to have started, but the next command:
$OPENSHIFT_DATA_DIR/erl_home/sbin/ejabberdctl register admin localhost password1234
failed with this error:
Failed RPC connection to the node 'ejabberd@127.7.131.1': {'EXIT', {badarg, [{ets,lookup, [local_config, ejabberdctl_access_commands], []}, {ejabberd_config, get_local_option, 1, [{file, "ejabberd_config.erl"}, {line,590}]}, {ejabberd_ctl, get_accesscommands, 0, [{file, "ejabberd_ctl.erl"}, {line,236}]}, {ejabberd_ctl, process,1, [{file, "ejabberd_ctl.erl"}, {line,199}]}, {rpc, '-handle_call_call/6-fun-0-', 5, [{file, "rpc.erl"}, {line,205}]}]}}
Commands to start an ejabberd node:
  start   Start an ejabberd node in server mode
  debug   Attach an interactive Erlang shell to a running ejabberd node
  live    Start an ejabberd node in live (interactive) mode
Optional parameters when starting an ejabberd node:
  --config-dir dir   Config ejabberd: /var/lib/openshift/52c9674d5973ca7734000180/app-root/data//erl_home/etc/ejabberd
  --config file      Config ejabberd: /var/lib/openshift/52c9674d5973ca7734000180/app-root/data//erl_home/etc/ejabberd/ejabberd.cfg
  --ctl-config file  Config ejabberdctl: /var/lib/openshift/52c9674d5973ca7734000180/app-root/data//erl_home/etc/ejabberd/ejabberdctl.cfg
  --logs dir         Directory for logs: /var/lib/openshift/52c9674d5973ca7734000180/app-root/data//erl_home/var/log/ejabberd
  --spool dir        Database spool dir: /var/lib/openshift/52c9674d5973ca7734000180/app-root/data//erl_home/var/lib/ejabberd
  --node nodename    ejabberd node name: ejabberd@127.7.131.1
I am not sure what causes the error... it seems like it is trying to connect to localhost (due to the node name: ejabberd@127.7.131.1). However, I have sed-ed every trace of localhost out of the previous commands from the blog.
Has anybody ever encountered this before? Any clue on how to debug is also highly appreciated, as I am not very familiar with OpenShift or ejabberd, or with Linux for that matter... Thank you very much in advance!
I wrote the blog post, and saw this error on a new DIY Application I created. It was due to a bug with erlang and the openssl package in the openshift box. I put in a patch in the erlang source so that new compilations will work.
The bug that was causing the issue was here: https://bugzilla.redhat.com/show_bug.cgi?id=1023017
You'll need to run using your host instead of localhost.
$OPENSHIFT_DATA_DIR/erl_home/sbin/ejabberdctl register admin <replacewithyourhost> password1234
For that I ran:
$OPENSHIFT_DATA_DIR/erl_home/sbin/ejabberdctl register admin $OPENSHIFT_DIY_IP password1234
However, it returns an error as well:
{error_logger,{{2014,1,13},{21,11,24}},"Protocol: ~tp: register/listen error: ~tp~n",["inet_tcp",econnrefused]} {error
You need to kill all the ejabberd processes, including epmd; for example:
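Roughly like this (the paths assume the erl_home layout from the blog post; epmd may also simply be on your PATH):
# stop ejabberd if it still responds, then kill any leftover Erlang VMs
$OPENSHIFT_DATA_DIR/erl_home/sbin/ejabberdctl stop
pkill -u $(whoami) -f beam
# stop the Erlang port mapper daemon as well
$OPENSHIFT_DATA_DIR/erl_home/bin/epmd -kill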
I had this problem one time. My solution was to edit the /etc/hosts file (on Windows: C:\Windows\System32\drivers\etc\hosts) and make sure that there was a hostname entry for my public IP address and the domain I wanted ejabberd to respond to.
0.0.0.0 hostname.domain.com hostname
1.1.1.1(your ip) your-hostname.your-domain your-hostname