Wildfly add-user.sh Instance to Management Instance

I'm running Wildfly within Docker. I need to add users and set the configuration before the java process is started.
I can run the add-user.sh script to create a simple user:
/wildfly/bin/add-user.sh admin admin --silent
However, I need to create a management user that will be used to authenticate one instance against the management instance. In the interactive add-user.sh prompt the relevant question is:
"Is this new user going to be used for one AS process to connect to another AS process?", to which I would answer yes.
Is there a way to set this option via a parameter in the script?

add-user.sh just adds the user to mgmt-users.properties, so you can generate this entry yourself pretty easily.
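If you want to script this, the entries in mgmt-users.properties have the form username=HEX( MD5( "username:realm:password" ) ), with ManagementRealm as the default realm. A minimal sketch, using a hypothetical slaveuser/slavepassword pair:
echo -n 'slaveuser:ManagementRealm:slavepassword' | md5sum
# append the resulting hex digest to mgmt-users.properties as a line of the form:
# slaveuser=<hex digest>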

This will give you access to the container:
docker exec -it wildfly-instance /bin/bash

Cannot login to keycloak admin console when running in domain cluster mode

Following the documentation guide, I have booted up a master and a slave, and I can see them connected via the logs:
Boot up master
$ domain.sh --host-config=host-master.xml
Boot up slave
$ domain.sh --host-config=host-slave.xml
I've also followed the steps to set up the admin user via the add-user.sh script. Further research indicated that I should use the add-user-keycloak.sh script to add an initial admin user:
./add-user-keycloak.sh -u john
Press ctrl-d (Unix) or ctrl-z (Windows) to exit
Password:
Added 'john' to '../standalone/configuration/keycloak-add-user.json', restart server to load user
I reran the master and slave, but I still cannot log in to the admin console.
However, what's interesting is that when I booted up in standalone mode I was able to log in to the admin console as john:
./standalone.sh
Is this a bug or am I missing something (most likely) that's not in the documentation?
Thanks in advance...
Figured it out, hope this helps somebody.
Before you start in domain cluster mode:
./domain.sh --host-config=host-master.xml
./domain.sh --host-config=host-slave.xml
you must first create the admin user with the --sc flag so you can log in to the admin console; otherwise add-user-keycloak.sh only adds the admin user for standalone mode. To do that:
./add-user-keycloak.sh --sc ../domain/servers/server-one/configuration -u john -p password
If the configuration directory does not exist, create it first.
The ./add-user-keycloak.sh script seems to be a little outdated. Currently (as of Keycloak 12.0.2) it creates the keycloak-add-user.json file in the ./domain/configuration/ directory - that is wrong!
The file should be in ./domain/servers/server-one/configuration.
Now you just have to move the file to that directory, restart the server, and it should work properly.
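A minimal sketch of that workaround, assuming the default domain layout (paths may differ in your installation):
mv ./domain/configuration/keycloak-add-user.json ./domain/servers/server-one/configuration/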
I found this solution in this two-year-old mailing list thread:
https://lists.jboss.org/pipermail/keycloak-user/2018-January/012642.html

Spring XD - Unable to undeploy & destroy stream through --cmdfile option

Today I started automating certain Spring XD tasks, like stream creation, deployment, and undeployment.
For this, all my undeploy and destroy commands sit in one file, but when I run the following:
$xd-shell --cmdfile auto_cleanup_14032016_235706.txt
I'm getting the following output:
WARNING: Command 'stream destroy --name ingestion_14032016_235706_<>' was found but is not currently available (type 'help' then ENTER to learn about this command)
But when I run the same command inside the interactive shell (xd-shell), it seems to work fine. :(
You will see this warning when xd-shell fails to connect to the Spring XD admin server. Unless specified otherwise, xd-shell assumes the admin server is on localhost.
Add the statement below to the top of your cmdfile:
admin config server http://spring-xd-admin-server:9393
Also provide the credentials for the Spring XD admin if required.
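For example, a cleanup cmdfile might start like this (the admin host and stream name are placeholders to adapt):
admin config server http://spring-xd-admin-server:9393
stream undeploy --name ingestion_example
stream destroy --name ingestion_example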

How to get MONGO_URL from command line Meteor Up deployment?

I am currently deploying to Digital Ocean using Meteor Up. If I don't specify a MONGO_URL in the mup.json, can I get the value from the command line while the website is running, i.e. without shutting down the site?
If I go to the app directory and run meteor mongo --url, I get the following error:
mongo: Meteor isn't running a local MongoDB server.
This command only works while Meteor is running your application
locally. Start your application first. (This error will also occur if
you asked Meteor to use a different MongoDB server with $MONGO_URL when
you ran your application.)
If you're trying to connect to the database of an app you deployed
with 'meteor deploy', specify your site's name with this command.
Even if I run the app from the app directory, it will only give the localhost MONGO_URL. I need the MONGO_URL for the deployed app.
I have also taken a look at a similar question as suggested by some of the answers. I disagree that it is "impossible" to get the MONGO_URL without some other program running on the server. It's not as if we are defying the laws of physics here, folks. Fundamentally, there should be a way to access it. Just because no one has yet figured it out doesn't mean it is impossible.
meteor mongo --url should return the URL.
Try opening another shell in the app directory and running that command.
Meteor Up packages your app in production mode with meteor build so that it runs via node rather than the meteor command line interface. Among other things, this means meteor foo won't work on the remote server (at least not by default). So what you're really looking for is a way to access mongo itself remotely.
I recently set up mongo on an AWS EC2 instance and listed some lessons learned here: https://stackoverflow.com/a/28846703/2669596. Some details of how you do it are going to be different on Digital Ocean, but these are the main things you have to take care of once mongo itself is installed:
Public IP/DNS Address: This is probably fine already since you can deploy to the server.
Port Security Rules: You need to make sure port 27017 is open for TCP access, at least from your IP address. MongoDB also has an HTTP interface you can set up; if you want to use that you'll need to open 28017 as well.
/etc/mongod.conf (file location may differ depending on Linux flavor):
Uncomment port=27017 to make sure you have the default port (I don't think this is actually necessary, but it made me feel better and it's good to know where to change the default port...).
Comment out bind_ip=127.0.0.1 so that mongod listens on external interfaces (i.e. accepts remote connections).
Uncomment httpinterface=true if you want to use the HTTP interface.
You may have to restart mongod via sudo service mongod restart. That's a problem if you can't have downtime, but I don't know of a way around that if you change the config file.
Create User: You need to create an admin and/or user to access the database remotely (a sketch of the config edits and user creation follows this list).
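Putting the config edits and the user creation together, a rough sketch (old-style mongod.conf syntax as described above; the user name, password and role are placeholders to adapt):
In /etc/mongod.conf after the edits:
port = 27017
# bind_ip = 127.0.0.1   (left commented out so mongod listens on external interfaces)
httpinterface = true
Creating a database user from the mongo shell (MongoDB 2.6+ syntax):
db = db.getSiblingDB("mup-app-name")
db.createUser({ user: "username", pwd: "secretpassword", roles: [ "readWrite" ] })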
Once you've done all of that, you should be able to access the database from your local machine (assuming you have the mongo client installed locally) by running
mongo server.url.com:27017/mup-app-name -u username -p
where server.url.com is the URL or IP address of your remote server, mup-app-name is the appName parameter from your mup.json file, username is the user you created to access the database, and you'll be prompted for that user's password after you run the command (or you could put it after -p on the same line, depending on the password).
There may also be a way to do this by setting up nginx to reverse-proxy 127.0.0.1:27017 on your remote server, but I've never done it and that's just me speculating.

Console access to Dokku's PostgreSQL plugin?

Is there a way to get console access to Dokku's PostgreSQL plugin? On Heroku I'd do heroku pg:psql. Is this possible in a Dokku environment and if so how?
There is in fact a way to do this directly with the dokku-pg-plugin.
The command postgresql:restore <db> < dump_file.sql connects to the specified database and restores it with the provided dump file. If you simply omit the dump file part (< dump_file.sql), a psql console session opens.
Since postgresql:restore <db> is semantically not the best way to open a console session, I have opened a pull request to add the command postgresql:console <db>.
So, until my PR is merged, the options for opening a psql console for a database are:
doing it manually with psql -h 172.17.42.1 -p <port> -U root <db>, with the <port> and password taken from the output of dokku postgresql:info <db>,
using the semantically incorrect command dokku postgresql:restore <db>, or
using my forked and patched version of the plugin, which adds the command postgresql:console <db>.
Edit:
The owner of the dokku-pg-plugin has merged my pull request. If you're using this plugin and are looking for a way to access your PostgreSQL console with it, you might want to update it to the latest version. Once you have done that, you can use the command postgresql:console <db> to open a psql session to the specified database.
This worked for me for my Rails app that I'm running on Dokku:
dokku run <app-name> rails db
That brought up the console for the PostgreSQL container I created (via dokku postgresql:create <db>). I couldn't figure out another way to get at the PostgreSQL instance in that container, short of connecting directly to the DB using the connection info/credentials listed when you run:
dokku postgresql:info <db>
I haven't tried that, though I suspect it would work.
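For reference, that direct connection would presumably look like the manual psql command from the answer above, with the port and password taken from dokku postgresql:info <db>:
psql -h 172.17.42.1 -p <port> -U root <db>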

Logon failure in running a windows service

I am running a service called prunner on Windows Server 2012. I used the sc command to change the username and password of the service:
sc.exe config myService obj= "sqa265\hero" password= "hero1"
The output of the command says that it succeeded, but when I go to Task Manager to start the service I get: logon failure!
I tried to run the sc command as the user hero and as the administrator, but I still get the same error. The very strange thing is that if I do the same thing manually via Task Manager and the Services control panel, it succeeds and the service goes to the running state. But I need to automate this, so any help would be appreciated.
You need to give the account "sqa265\hero" the SeServiceLogonRight ("Log on as a service") right. As you have noticed, setting the credentials up through the control panel works, because the control panel grants that right for you; what you might not have noticed is that the command line would also have worked if you had tried it after using the control panel.
You can test this by setting the service back to the Local System account in the control panel, and then running your command-line again.
To fix this from a script, you can use the NTRights utility outlined in this MS knowledgebase article:
http://support.microsoft.com/kb/315276
After you install NTRights, you can run it like this:
NTRights.exe +r SeServiceLogonRight -u "sqa265\hero"
Combined with the sc config commandline you already have, the service should run with those credentials.
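Put together, an automated sequence might look something like this (service name, account and password taken from the question):
NTRights.exe +r SeServiceLogonRight -u "sqa265\hero"
sc.exe config myService obj= "sqa265\hero" password= "hero1"
sc.exe start myService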
Further reading:
http://www.techrepublic.com/article/set-user-rights-using-the-ntrights-utility/5032903