JBoss WildFly 17.0.1 - What is the correct location of the silent authentication challenge file?

Hi there, after reading many articles on the internet, I understand that JBoss WildFly writes a challenge file into a tmp directory for silent authentication (the JBOSS-LOCAL-USER mechanism).
My understanding is that the challenge file should be located at, e.g., $JBOSS_HOME/standalone/tmp/auth/local2639861357474361285.challenge. If the management CLI client can read the file on the same machine, silent authentication succeeds.
Let's say:
App server hostname: appserver
JBoss WildFly version: 17.0.1
WildFly is running under a service account - wildfly:wildfly
My work account: mike
If I ssh to appserver as mike and then run $JBOSS_HOME/bin/jboss-cli.sh -c, I would expect silent authentication to fail, but it actually succeeds. So I tried:
tcpdump listening on port 9990 to look for anything useful, and found:
0x0000: 4500 0060 abf6 4000 4006 9721 0a3f 7181 E..`..@.@..!.?q.
0x0010: 0a3f 7181 276a 932e 7a86 87cf 5604 7121 .?q.'j..z...V.q!
0x0020: 8018 0058 f7d2 0000 0101 080a 0fcc 2e80 ...X............
0x0030: 0fcc 2e6a 0000 0028 032f 746d 702f 6c6f ...j...(./tmp/lo
0x0040: 6361 6c35 3336 3339 3030 3330 3638 3933 cal5363900306893
0x0050: 3137 3238 3532 2e63 6861 6c6c 656e 6765 172852.challenge
auditctl on various paths to look for anything useful, and found that the challenge file is created under /tmp/
Is there a system property to set the location of the challenge file? Any help is appreciated!
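In case it helps, these are roughly the commands I used (a sketch; the interface and audit key here are illustrative):
# Watch management traffic on the loopback interface for the challenge file path
sudo tcpdump -i lo -A 'tcp port 9990'
# Audit writes under /tmp, then review the hits
sudo auditctl -w /tmp -p wa -k wildfly-challenge
sudo ausearch -k wildfly-challenge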

It seems likely that WildFly is using the java.io.tmpdir system property. One way to override this is to pass it to standalone.sh on the command line. If I run $WILDFLY_HOME/bin/standalone.sh -Djava.io.tmpdir=/var/tmp, for example, I can see in the startup logs:
...
java.io.tmpdir = /var/tmp
...
Without it I get:
...
java.io.tmpdir = /tmp
...
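If you prefer not to pass the flag on the command line every time, the same override can be placed in bin/standalone.conf, which standalone.sh sources at startup (a minimal sketch, appended at the end of the stock file):
# bin/standalone.conf
JAVA_OPTS="$JAVA_OPTS -Djava.io.tmpdir=/var/tmp"    # challenge files then land under /var/tmp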

It turns out that once I configure the http-interface according to https://docs.jboss.org/author/display/WFLY/Elytron+Subsystem#ElytronSubsystem-usedefaultelytronmgmtauth, my http-interface settings become:
<management-interfaces>
<http-interface http-authentication-factory="management-http-authentication" ssl-context="httpsSSC">
<http-upgrade enabled="true" sasl-authentication-factory="management-sasl-authentication"/>
<socket-binding http="management-http" https="management-https"/>
</http-interface>
</management-interfaces>
With that configuration, the challenge file is created at /tmp/localxxxxxxxx.challenge.
In the end, I decided to stick with the default settings instead of using Elytron for management authentication.
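For anyone comparing setups, the active management-interface configuration can be read back with jboss-cli (a hedged sketch; it only reads the model and changes nothing):
# Show which authentication factories the http-interface currently uses
$JBOSS_HOME/bin/jboss-cli.sh -c \
  --command='/core-service=management/management-interface=http-interface:read-resource(recursive=true)'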

Related

Wildfly not starting properly

So I have a rather strange issue with WildFly not starting...
If I clean standalone/deployments of everything but one .war file, WildFly starts perfectly. I can then add all the other .war files (6 in total) and WildFly deploys them without issues.
However, if all the war files are in there when I start WildFly, it completely fails. Everything stays in the .isdeploying state for maybe 5 minutes until everything gets set to failed.
These are the logs I get from service wildfly status:
Feb 09 08:49:12 wildfly[2079]: /etc/init.d/wildfly: 3: /etc/default/wildfly: default: not found
Feb 09 08:49:12 wildfly[2079]: * Starting WildFly Application Server wildfly
Feb 09 08:49:43 wildfly[2079]: ...done.
Feb 09 08:49:43 wildfly[2079]: * WildFly Application Server hasn't started within the timeout allowed
Feb 09 08:49:43 wildfly[2079]: * please review file "/var/log/wildfly/console.log" to see the status of the service
Has anyone seen anything like this before?
After looking around I found this just before it undeployed everything:
ERROR [org.jboss.as.controller.management-operation] (Controller Boot Thread) WFLYCTL0348: Timeout after [300] seconds waiting for service container stability. Operation will roll back. Step that first updated the service container was 'add' at address '[
("core-service" => "management"),
("management-interface" => "http-interface")
]'
But I am still not sure what it means...
This happened to me too, starting with WildFly 11 and above, IIRC.
Are you trying to access the public or management IP while the server is booting? Basically you have to wait until the server has started before accessing those IPs.
My workaround was to use the marker files that the deployment scanner checks.
https://docs.jboss.org/author/display/WFLY/Application+deployment#Applicationdeployment-MarkerFiles
Before you start WildFly, put a .skipdeploy file next to each .war you want to skip. Then, once the server has started, you only have to delete that file to let WildFly start the deployment. You can achieve this with a shell script called from your standalone.sh, as in the sketch below.
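A minimal sketch of such a script (the .war names and the wait are illustrative, not from the original setup):
# Place skip markers before starting WildFly
cd "$JBOSS_HOME/standalone/deployments"
for war in app1.war app2.war app3.war; do touch "$war.skipdeploy"; done
# Start the server and give it time to finish booting
"$JBOSS_HOME/bin/standalone.sh" &
sleep 120    # crude; ideally poll the server log or jboss-cli for the started state
# Release the deployments
for war in app1.war app2.war app3.war; do rm -f "$war.skipdeploy"; done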
This error can also mean that your IP/port is already in use by another process.
Use the command below to check.
On Windows: netstat -aon | find "port number"
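On Linux, a similar check (hedged; 9990 and 8080 are just the usual management/HTTP ports):
ss -tlnp | grep -E '9990|8080'
# or, with older tooling:
netstat -tlnp | grep -E '9990|8080'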
You can configure the jboss.as.management.blocking.timeout system property to tune the timeout (in seconds) for waiting for service container stability, as below:
...
</extensions>
<system-properties>
<property name="jboss.as.management.blocking.timeout" value="900"/>
</system-properties>
<management>
...
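The same property can also be added with jboss-cli instead of editing the XML by hand (a hedged sketch):
$JBOSS_HOME/bin/jboss-cli.sh -c \
  --command='/system-property=jboss.as.management.blocking.timeout:add(value=900)'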
Or, if it still doesn't work, collect a series of thread dumps during the startup period so we can see what it might be getting stuck on.
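A hedged sketch for collecting those thread dumps during startup (the PID lookup and intervals are illustrative):
WF_PID=$(pgrep -f jboss-modules.jar | head -n1)
for i in $(seq 1 10); do
  jstack "$WF_PID" > "wildfly-threads-$i.txt"
  sleep 30
done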

Backup of domain controller on JBoss EAP 6

I have 4 remote JBoss EAP 6 instances with a deployed application. The first one acts as the Domain Controller. In host.xml I declared a backup of the DC (example):
<domain-controller>
    <remote security-realm="ManagementRealm">
        <discovery-options>
            <static-discovery name="primary" host="172.16.81.100" port="9999"/>
            <static-discovery name="backup" host="172.16.81.101" port="9999"/>
        </discovery-options>
    </remote>
</domain-controller>
For more info: link
What if 172.16.81.100 goes down? Is there any place on 172.16.81.101 to put an up-to-date copy of domain.xml with the configuration of the application profiles and the locally deployed HornetQ server?
Is it possible to change the connection factory configuration of any running profile while switching to the backup?
When you start "172.16.81.101" with --backup option domain.xml from DC will be cashed at remote HC under configuration directory with name 'domain.cached-remote.xml'. Once you reload HC it will act as DC for rest of the HC's. Can you please be specific what connection factory changes you want to done in cashed domain.xml file.

Cannot integrate TeleStax Restcomm with MetaSwitch Clearwater

I really want to study how Restcomm works in Clearwater as a Telephony Application Server.
I followed the guide at:
http://telestax.com/wp-content/uploads/2013/12/ClearWater-RestComm-Integration-2013.pdf
But it seems the version of Restcomm in that article is too old (TelScale-Restcomm-JBoss-AS7-7.1.2-GA), and I am using a newer version (Restcomm-JBoss-AS7-7.7.0.900).
I could not follow the article exactly because of some configuration differences between the two versions.
I set up Clearwater successfully and can make a SIP call through it.
When I set up Restcomm (version Restcomm-JBoss-AS7-7.7.0.900),
I changed the local-address of the media server in the file standalone/deployments/restcomm.war/WEB-INF/conf/restcomm.xml
as follows:
<media-server-manager>
...
<local-address>192.168.0.117</local-address>
...
</media-server-manager>
(192.168.0.117 is my local IP address)
I did not change the references to 127.0.0.1:8080 in the restcomm.xml file to point to 192.168.0.117:8180,
because there are no references to 127.0.0.1:8080.
I think that may be a difference between the two versions.
I also did not edit JAVA_OPTS in bin/standalone.conf, because I did not understand that step.
I edited the file mediaserver/deploy/server-beans.xml as follows:
<property name="bindAddress">192.168.0.117</property>
<property name="localBindAddress">127.0.0.1</property>
<property name="externalAddress"><null/></property>
<property name="localNetwork">192.168.0.0</property>
<property name="localSubnet">255.255.255.0</property>
After that, I start the media server:
$ cd ${JBOSS_HOME}/mediaserver/bin
$ ./run.sh
The media server starts successfully.
Then, I start the Restcomm JBoss server:
$ cd ${JBOSS_HOME}/bin
$ sudo ./standalone.sh -Djboss.socket.binding.port-offset=100 -b 192.168.0.117
I got errors, as shown in the screenshot (not reproduced here).
But the JBoss server still works when I go to http://192.168.0.117:8180.
However, I cannot access the Restcomm management interface.
I also tried to make some of the modifications from the article:
- Modify the default app: standalone/deployments/restcomm.war/demos/hello-play.xml
<Response>
<Play>http://192.168.0.117:8180/restcomm/audio/demo-prompt.wav</Play>
</Response>
- Add the IMS core configuration through the Ellis configuration file:
{
"Restcomm" :
"<InitialFilterCriteria><Priority>1</Priority><TriggerPoint> <ConditionTypeCNF></ConditionTypeCNF><SPT><ConditionNegated>0</ConditionNegated><Group>0</Group><Method>INVITE</Method><Extension></Extension></SPT></TriggerPoint><ApplicationServer><ServerName>sip:192.168.0.117:5180</ServerName><DefaultHandling>0</DefaultHandling></ApplicationServer></InitialFilterCriteria>"
}
- Bind the number to the default app:
curl -X POST http://ACae6e420f425248d6a26948c17a9e2acf:77f8c12cc7b8f8423e5c38b035249166@192.168.0.117:8180/restcomm/2012-04-24/Accounts/ACae6e420f425248d6a26948c17a9e2acf/IncomingPhoneNumbers.json -d "PhoneNumber=4321" -d "VoiceUrl=http://192.168.0.117:8180/restcomm/demos/hello-play.xml"
It produced an error (screenshot not reproduced here).
Those are my problems.
Thank you very much for supporting me.
Best Regards,
Indeed, those steps are way too old and probably won't work with the new version.
I would recommend starting Restcomm with Docker instead and configuring the JVM options and port offset (see http://docs.telestax.com/restcomm-docker-environment-variables/) in the docker run command, along the lines of the sketch below.
The rest of the description for configuring Clearwater should still be valid.
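A rough illustration of the shape of such a command (the image name, ports, and environment variable here are placeholders; take the real variable names from the page linked above):
docker run -d --name restcomm \
  -p 8080:8080 -p 5080:5080/udp \
  -e 'SOME_DOCUMENTED_VARIABLE=value' \
  restcomm/restcomm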

Wildfly 9 - mod_cluster on TCP

We are currently testing a move from WildFly 8.2.0 to WildFly 9.0.0.CR1 (or CR2 built from a snapshot). The system is a cluster using mod_cluster and runs on a VPS, which in practice prevents it from using multicast.
On 8.2.0 we have been using the following mod_cluster configuration, which works well:
<mod-cluster-config proxy-list="1.2.3.4:10001,1.2.3.5:10001" advertise="false" connector="ajp">
<dynamic-load-provider>
<load-metric type="cpu"/>
</dynamic-load-provider>
</mod-cluster-config>
Unfortunately, on 9.0.0 proxy-list was deprecated, and the server fails to start with an error. There is a terrible lack of documentation; however, after a couple of tries I discovered that proxy-list was replaced with proxies, a list of outbound-socket-bindings. Hence, the configuration looks like the following:
<mod-cluster-config proxies="mc-prox1 mc-prox2" advertise="false" connector="ajp">
<dynamic-load-provider>
<load-metric type="cpu"/>
</dynamic-load-provider>
</mod-cluster-config>
And the following should be added to the appropriate socket-binding-group (full-ha in my case):
<outbound-socket-binding name="mc-prox1">
<remote-destination host="1.2.3.4" port="10001"/>
</outbound-socket-binding>
<outbound-socket-binding name="mc-prox2">
<remote-destination host="1.2.3.5" port="10001"/>
</outbound-socket-binding>
So far so good. After this, the httpd cluster starts registering the nodes. However, I am getting errors from the load balancer. When I look at /mod_cluster-manager, I see a couple of Node REMOVED lines, and there are also many, many errors like:
ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to node1/1.2.3.4:10001, configuration will be reset: MEM: Can't read node
The mod_cluster log contains the equivalent warnings:
manager_handler STATUS error: MEM: Can't read node
As far as I understand, the problem is that although WildFly/mod_cluster is able to connect to httpd/mod_cluster, it does not work the other way around. Unfortunately, even after an extensive effort, I am stuck.
Could someone help with setting up mod_cluster for WildFly 9.0.0 without advertising? Thanks a lot.
I ran into the Node REMOVED issue too.
I managed to solve it by using the following as the instance-id:
<subsystem xmlns="urn:jboss:domain:undertow:2.0" instance-id="${jboss.server.name}">
I hope this will help someone else too ;)
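The same change can be made with jboss-cli instead of editing the XML (a hedged sketch; reload the server afterwards for it to take effect):
$JBOSS_HOME/bin/jboss-cli.sh -c \
  --command='/subsystem=undertow:write-attribute(name=instance-id,value=${jboss.server.name})'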
There is no need for any unnecessary effort or uneasiness about static proxy configuration. Each WildFly distribution comes with XSD sheets that describe the XML subsystem configuration. For instance, with WildFly 9.x, it's:
WILDFLY_DIRECTORY/docs/schema/jboss-as-mod-cluster_2_0.xsd
It says:
<xs:attribute name="proxies" use="optional">
<xs:annotation>
<xs:documentation>List of proxies for mod_cluster to register with defined by outbound-socket-binding in socket-binding-group.</xs:documentation>
</xs:annotation>
<xs:simpleType>
<xs:list itemType="xs:string"/>
</xs:simpleType>
</xs:attribute>
The following setup works out of the box:
Download wildfly-9.0.0.CR1.zip or build with ./build.sh from the sources.
Let's assume you have two boxes: an Apache HTTP Server with mod_cluster acting as a load-balancing proxy, and your WildFly server acting as a worker. Make sure both servers can reach each other on both the MCMP-enabled VirtualHost's address and port (Apache HTTP Server side) and the WildFly AJP and HTTP connector side. The common mistake is to bind WildFly to localhost; it then reports its address as localhost to the Apache HTTP Server residing on a different box, which makes it impossible for Apache to contact the WildFly server back. The communication is bidirectional.
This is my configuration diff from the default wildfly-9.0.0.CR1.zip.
328c328
< <mod-cluster-config advertise-socket="modcluster" connector="ajp" advertise="false" proxies="my-proxy-one">
---
> <mod-cluster-config advertise-socket="modcluster" connector="ajp">
384c384
< <subsystem xmlns="urn:jboss:domain:undertow:2.0" instance-id="worker-1">
---
> <subsystem xmlns="urn:jboss:domain:undertow:2.0">
435c435
< <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:102}">
---
> <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
452,454d451
< <outbound-socket-binding name="my-proxy-one">
< <remote-destination host="10.10.2.4" port="6666"/>
< </outbound-socket-binding>
456c453
< </server>
---
> </server>
Changes explanation
proxies="my-proxy-one", outbound socket binding name; could be more of them here.
instance-id="worker-1", the name of the worker, a.k.a. JVMRoute.
offset -- you could ignore, it's just for my test setup. Offset does not apply to outbound socket bindings.
<outbound-socket-binding name="my-proxy-one"> - IP and port of the VirtualHost in Apache HTTP Server containing EnableMCPMReceive directive.
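For completeness, a hedged jboss-cli sketch of the same changes as the diff above (resource addresses as in WildFly 9; adjust the socket-binding-group name to your profile):
$JBOSS_HOME/bin/jboss-cli.sh -c <<'EOF'
batch
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=my-proxy-one:add(host=10.10.2.4,port=6666)
/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=proxies,value=[my-proxy-one])
/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=advertise,value=false)
/subsystem=undertow:write-attribute(name=instance-id,value=worker-1)
run-batch
EOF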
Conclusion
Generally, these MEM read / node error messages are related to network problems, e.g. WildFly can contact Apache, but Apache cannot contact WildFly back. Last but not least, it could happen that the Apache HTTP Server configuration uses the PersistSlots directive and some substantial environment change took place, e.g. a switch from mpm_prefork to mpm_worker. In that case, the MEM read error messages are not related to WildFly but to the cached slotmem files in HTTPD/cache/mod_cluster, which need to be deleted.
I'm certain it's the network in your case, though.
After a couple of weeks I got back to the problem and found the solution. The problem was - of course - in the configuration and had nothing to do with the particular version of WildFly. More specifically:
There were three nodes in the domain and three servers in each node. All nodes were launched with the following property:
-Djboss.node.name=nodeX
...where nodeX is the name of the particular node. However, this meant that all three servers in a node got the same name, which is exactly what confused the load balancer.
As soon as I removed this property, everything started to work.
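If you do need explicit names, a hedged sketch of giving each server its own value instead of sharing one per host (host and server names here are only examples):
$JBOSS_HOME/bin/jboss-cli.sh -c \
  --command='/host=node1/server-config=server-one/system-property=jboss.node.name:add(value=node1-server-one)'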

python-memcache / memcached -- I installed them on a CentOS VirtualBox VM but get/set never seem to work

I'm using Python. I did a yum install memcached followed by an easy_install python-memcached.
I used the simple test program from help(memcache). When I wasn't getting the proper answers, I threw in some print statements:
[~/test]$ cat m2.py
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
x = mc.set("some_key", "Some value")
print 'Just set a key and value into the cache (suposedly)'
value = mc.get("some_key")
print 'Just retrieved that value from the cache using the key'
print 'X %s' % x
print 'Value %s' % value
[~/test]$ python m2.py
Just set a key and value into the cache (suposedly)
Just retrieved that value from the cache using the key
X 0
Value None
[~/test]$
The question now is: what have I failed to do in my installation? It appears to be working from an API perspective, but it fails to put anything into the memcached shared area.
I'm using a VirtualBox VM running CentOS:
[~]# cat /proc/version
Linux version 2.6.32-358.6.2.el6.i686 (mockbuild@c6b8.bsys.dev.centos.org) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) ) #1 SMP Thu May 16 18:12:13 UTC 2013
Is there a daemon that is supposed to be running? I don't see an obvious one when I do a ps.
I tried to get pylibmc installed on my VM but couldn't find a working installation, so for now I will see if I can get the above working first.
I discovered that if I run straight from the interactive Python console I get a bit more output if I set debug=1:
>>> mc = memcache.Client(['127.0.0.1:11211'], debug=1)
>>> mc.stats
{}
>>> mc.set('test','value')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
0
>>> mc.get('test')
MemCached: MemCache: inet:127.0.0.1:11211: connect: Connection refused. Marking dead.
When I try, as in the example, to telnet to the port, I get connection refused:
[root@~]# telnet 127.0.0.1 11211
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
[root@~]#
I tried the instructions I found on the net for configuring telnet so localhost wouldn't be disabled:
vi /etc/xinetd.d/telnet
service telnet
{
flags = REUSE
socket_type = stream
wait = no
user = root
server = /usr/sbin/in.telnetd
log_on_failure += USERID
disable = no
}
And then ran the commands to restart the service(s):
service iptables stop
service xinetd stop
service iptables start
service xinetd start
service iptables stop
I ran both cases (iptables started and stopped) but it had no effect, so I am out of ideas. What do I need to do so that the port will be allowed, if that is the problem?
Or is there a memcached service that needs to be running in order to open up the port?
Well, this is what it took to get it working (a series of manual steps):
1) su -
cd /var/run
mkdir memcached # this directory was missing
In the memcached file I added "-l 127.0.0.1" to the OPTIONS statement. It's apparently a listen option. Do this for both steps 2 and 3; I'm not certain which file is actually used at runtime.
2) cd /etc/sysconfig
cp memcached memcached.old
vi memcached
3) cd /etc/init.d
cp memcached memcached.old
vi memcached
4) Try some commands to see if the server starts now (a quick verification sketch follows after step 6):
/etc/init.d/memcached start
/etc/init.d/memcached status
/etc/init.d/memcached stop
/etc/init.d/memcached restart
I tried opening it in a browser, but it never seemed to actually display anything, so I don't really know how valid this approach is. I'm not running Apache or anything like that, so perhaps it's not relevant to my case. Perhaps I would have to supply a ?key=blah or something.
5) http://127.0.0.1:11211
6) Now it should be ready to go. If you run the test shown below, it should work; at least it did for me. Running help(memcache) will display a simple example program; just paste that in and it should work just fine.
[~]$ python
>>> import memcache
>>> help(memcache)
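And the quick verification sketch mentioned in step 4 (tool and service names assume the stock CentOS 6 packages):
# Is the daemon running and listening on the default port?
service memcached status
netstat -lnt | grep 11211        # or: ss -lnt | grep 11211
# Talk to it directly; 'stats' should print server statistics
printf 'stats\r\nquit\r\n' | nc 127.0.0.1 11211
# Make it start on boot
chkconfig memcached on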