Our CentOS 7 AWS Guacamole (version 1.3.0) server was fully working, with a PostgreSQL database for the users. One day it stopped working and I'm perplexed. I didn't set it up and have no access to the person who did.
When logging in to aws.....:8080/guacamole, it comes up with ERROR. If I rename /etc/guacamole/guacamole.properties to something else, it shows the login screen, so there's some problem on the PostgreSQL side.
[Screenshot of the error message shown when trying to access Guacamole from a web browser]
Here is guacamole.properties:
# PostgreSQL properties
postgresql-hostname: localhost
postgresql-port: 5432
postgresql-database: guacamole_db
postgresql-username: guacamole_user
postgresql-password: password
#postgresql-auto-create-accounts: true
#Guac Properties
#skip-if-unavailable: postrgresql
guacamole_user does exist, but it is not in the guacamole_entity table. I'm wondering if that user is a PostgreSQL user and not a Guacamole user, but why would that have changed?
I've tried uncommenting skip-if-unavailable: postgresql, but that didn't change anything.
I also upgraded the PostgreSQL JDBC driver from postgresql-42.2.23.jar to postgresql-42.3.1.jar, but that did nothing.
/var/log/messages contains nothing related to the error. I can't really find a way to debug it.
Totally at a loss, any ideas?
EDIT: It was an SELinux problem. Disabling SELinux solves it, so it's got nothing to do with Postgres. Thanks for your time.
The question lacks some information (for example: Guacamole version, any other extensions used, an error screenshot). If possible, try posting the full contents of /etc/guacamole/guacamole.properties.
The user specified in postgresql-username is a database user (not a Guacamole user), so it won't show up in guacamole_entity.
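If you want to double-check that, here is a quick sketch (assuming the stock Guacamole JDBC schema; run it in psql against guacamole_db):
-- Guacamole logins live here (entity type 'USER')
SELECT name, type FROM guacamole_entity;
-- PostgreSQL roles live here; this is where guacamole_user from guacamole.properties should appear
SELECT rolname FROM pg_roles;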
In order to debug the Guacamole web application, you will have to configure it to produce debug logging.
Create the file /etc/guacamole/logback.xml.
Insert the following content:
<configuration>
  <!-- Appender for debugging -->
  <appender name="GUAC-DEBUG" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
    </encoder>
  </appender>
  <!-- Log at DEBUG level -->
  <!-- You can change the log level here; available levels: error, warn, info, debug, trace -->
  <!-- By default the level is error -->
  <root level="debug">
    <appender-ref ref="GUAC-DEBUG"/>
  </root>
</configuration>
Restart the Guacamole (Tomcat) service.
After setting this up, you can start debugging with the Tomcat logs.
I have not installed it on CentOS before, but on Debian the default Tomcat log path is /var/lib/tomcat9/logs.
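A rough sketch for CentOS 7, assuming Tomcat and guacd run as systemd services (unit names and log locations can vary between installs):
journalctl -u tomcat -f             # follow the Tomcat (Guacamole web app) log
journalctl -u guacd -f              # follow the guacd proxy daemon log
ls /var/log/tomcat/                 # common packaged-Tomcat log directory on CentOS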
It was an SELinux problem. Disabling SELinux solves it, so it's got nothing to do with Postgres. Thanks for your time.
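For anyone else hitting this, a sketch of how to confirm the SELinux angle before disabling it outright (standard CentOS 7 tooling; which boolean, if any, applies depends on your policy):
getenforce                                   # Enforcing / Permissive / Disabled
setenforce 0                                 # temporarily switch to Permissive, then retest the login
ausearch -m avc -ts recent                   # look for denials involving tomcat/guacd/postgres
getsebool -a | grep -Ei 'tomcat|postgre|connect_db'   # booleans that may allow the DB connection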
Related
I have a Java EE web app running on JBoss AS 7.2, connecting to a PostgreSQL 9.4 database (hosted on RDS).
The app is quite large and does a mixture of web page serving, API calls and scheduled tasks.
More and more frequently I am having to reboot the application server because the whole app has ground to a halt. Checking DB stats, I can see the number of connections has gone through the roof, along with database CPU
(big spike as the app stops responding; as soon as I restart JBoss it drops back).
The database logs show that the connection to the client has been lost:
LOG: could not send data to client: Broken pipe
FATAL: connection to client lost
The JBoss logs start filling up as transactions time out...
Caused by: javax.transaction.RollbackException: ARJUNA016063: The transaction is not active!
The only way to fix it is to restart JBoss, and then the number of connections goes back to normal.
My DB datasource configuration looks like this:
<datasource jta="false" jndi-name="java:/appWebDatasource" pool-name="jdbc/appWebDatasource" enabled="true" use-java-context="true" use-ccm="false">
  <connection-url>jdbc:postgresql://${web.db.url}/MyApp</connection-url>
  <driver>postgresql</driver>
  <security>
    <user-name>jboss</user-name>
    <password>******</password>
  </security>
  <validation>
    <check-valid-connection-sql>select 1</check-valid-connection-sql>
    <validate-on-match>false</validate-on-match>
    <background-validation>true</background-validation>
  </validation>
  <statement>
    <share-prepared-statements>false</share-prepared-statements>
  </statement>
</datasource>
I have been checking the pg_stat_activity table as soon as the issue occurs, and there are no "idle in transaction" connections; they are all either idle or active.
So my question is: how can I configure JBoss or PostgreSQL to stop this increase in the number of connections that crashes the app?
You can cap the maximum number of connections by declaring the pool size you want to allow with the <max-pool-size> parameter.
You have to consider your application and choose an appropriate value to set in <max-pool-size>.
You also need to use the connection validation mechanism in the datasource configuration, along with the parameter already mentioned by DaveB, as described in the documentation.
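A sketch of the extra sections inside the <datasource> element (element names follow the JBoss AS 7 datasources subsystem schema; the sizes and timeouts are placeholders you need to tune for your workload):
<pool>
  <min-pool-size>5</min-pool-size>
  <max-pool-size>50</max-pool-size>
  <prefill>true</prefill>
</pool>
<timeout>
  <blocking-timeout-millis>30000</blocking-timeout-millis>
  <idle-timeout-minutes>5</idle-timeout-minutes>
</timeout>
<validation>
  <check-valid-connection-sql>select 1</check-valid-connection-sql>
  <background-validation>true</background-validation>
  <background-validation-millis>60000</background-validation-millis>
</validation>
With a cap in place, a burst of requests queues for a connection (up to blocking-timeout-millis) instead of piling ever more connections onto the database.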
We configured the latest version (7.2) of the SMSC-GW to work on our server with its environment (Cassandra and such). However, after setting everything up, some failures are appearing (which did not appear in previous versions).
Firstly, when connecting the simulators and the gateway using the default settings (JSS7 <-> SMSC-GW <-> SMPP):
JSS7 is connected and sending, but no response is received.
SMPP is connected to the SMSC-GW and the ESME is bound. SMPP tries to send to SS7 but receives a failure response PDU from the SMSC-GW.
I tried configuring DB routing rules, but that did not work.
Also, the log in the SMSC-GW server is frequently displaying the following message:
16:00:28,504 INFO [SchedulerResourceAdaptor] (pool-56-thread-1) Not all SBB are running now: ServicesDownList=[smscTxSmppServerServiceState, smscRxSmppServerServiceState, smscTxSipServerServiceState, smscRxSipServerServiceState, smscTxHttpServerServiceState, moServiceState, homeRoutingServiceState, mtServiceState, alertServiceState, chargingServiceState, ]
And the JSS7 management console GUI is displaying this (which looks wrong):
So are these the source of the SMSC-GW failures?
UPDATE: I found this error in the server.log
2017-02-02 10:57:42,005 WARN [org.mobicents.slee.container.deployment.jboss.SleeContainerDeployerImpl] (SLEE-InternalDeployer-thread-1) SLEE DUs not deployed, due to missing dependencies: file:/home/coreteam/kitchensink/restcomm-smsc-7.2.109/jboss-5.1.0.GA/server/simulator/deploy/smsc-services-du-7.2.109.jar/
Followed by:
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SS7_SEND_MT,vendor=org.mobicents,version=1.0]
ResourceAdaptorTypeID[name=PersistenceResourceAdaptorType,vendor=org.mobicents,version=1.0]
ResourceAdaptorTypeID[name=SchedulerResourceAdaptorType,vendor=org.mobicents,version=1.0]
SipRA
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SS7_SEND_RSDS,vendor=org.mobicents,version=1.0]
SchedulerResourceAdaptor
PersistenceResourceAdaptor
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SMPP_SM,vendor=org.mobicents,version=1.0]
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SS7_SM,vendor=org.mobicents,version=1.0]
EventTypeID[name=org.mobicents.smsc.slee.services.smpp.server.events.SIP_SM,vendor=org.mobicents,version=1.0]
2017-02-02 14:41:17,450 WARN [org.mobicents.slee.container.deployment.jboss.DeploymentManager] (main) Unable to INSTALL smsc-services-du-7.3.0-SNAPSHOT.jar right now. Waiting for dependencies to be resolved.
I solved it quite a while ago, but thought I would share. I simply installed the missing SipRA dependency by adding the following to the deploy-config.xml file:
<ra-entity
    resource-adaptor-id="ResourceAdaptorID[name=JainSipResourceAdaptor,vendor=net.java.slee.sip,version=1.2]"
    entity-name="SipRA">
  <properties>
    <property name="javax.sip.PORT" type="java.lang.Integer" value="5060" />
  </properties>
  <ra-link name="SipRA" />
</ra-entity>
In the $JBOSS_HOME/server/profile_name/deploy/restcomm-slee directory.
I set the port to some other value since that number was already taken by some other service.
The smsc-services-du-7.2.109.jar then installed automatically the next time I ran the SMSC-GW.
We are currently testing a move from WildFly 8.2.0 to WildFly 9.0.0.CR1 (or CR2 built from a snapshot). The system is a cluster using mod_cluster and is running on VPSes, which in fact prevents it from using multicast.
On 8.2.0 we have been using the following mod_cluster configuration, which works well:
<mod-cluster-config proxy-list="1.2.3.4:10001,1.2.3.5:10001" advertise="false" connector="ajp">
<dynamic-load-provider>
<load-metric type="cpu"/>
</dynamic-load-provider>
</mod-cluster-config>
Unfortunately, on 9.0.0 proxy-list was deprecated and the server fails to start with an error. There is a terrible lack of documentation; however, after a couple of tries I discovered that proxy-list was replaced with proxies, which is a list of outbound-socket-bindings. Hence, the configuration looks like the following:
<mod-cluster-config proxies="mc-prox1 mc-prox2" advertise="false" connector="ajp">
<dynamic-load-provider>
<load-metric type="cpu"/>
</dynamic-load-provider>
</mod-cluster-config>
And the following should be added into the appropriate socket-binding-group (full-ha in my case):
<outbound-socket-binding name="mc-prox1">
<remote-destination host="1.2.3.4" port="10001"/>
</outbound-socket-binding>
<outbound-socket-binding name="mc-prox2">
<remote-destination host="1.2.3.5" port="10001"/>
</outbound-socket-binding>
So far so good. After this, the httpd cluster starts registering the nodes. However, I am getting errors from the load balancer. When I look at /mod_cluster-manager, I see a couple of Node REMOVED lines and there are also many, many errors like:
ERROR [org.jboss.modcluster] (UndertowEventHandlerAdapter - 1) MODCLUSTER000042: Error MEM sending STATUS command to node1/1.2.3.4:10001, configuration will be reset: MEM: Can't read node
In the log of mod_cluster there are the equivalent warnings:
manager_handler STATUS error: MEM: Can't read node
As far as I understand, the problem is that although WildFly/mod_cluster is able to connect to httpd/mod_cluster, it does not work the other way around. Unfortunately, even after extensive effort, I am stuck.
Could someone help with setting up mod_cluster for WildFly 9.0.0 without advertising? Thanks a lot.
I ran into the Node REMOVED issue too.
I managed to solve it by using the following as the instance-id:
<subsystem xmlns="urn:jboss:domain:undertow:2.0" instance-id="${jboss.server.name}">
I hope this will help someone else too ;)
There is no need for any unnecessary effort or uneasiness about static proxy configuration. Each WildFly distribution comes with XSD schemas that describe the XML subsystem configuration. For instance, with WildFly 9.x, it's:
WILDFLY_DIRECTORY/docs/schema/jboss-as-mod-cluster_2_0.xsd
It says:
<xs:attribute name="proxies" use="optional">
  <xs:annotation>
    <xs:documentation>List of proxies for mod_cluster to register with defined by outbound-socket-binding in socket-binding-group.</xs:documentation>
  </xs:annotation>
  <xs:simpleType>
    <xs:list itemType="xs:string"/>
  </xs:simpleType>
</xs:attribute>
The following setup works out of the box:
Download wildfly-9.0.0.CR1.zip or build with ./build.sh from sources
Let's assume you have 2 boxes: an Apache HTTP Server with mod_cluster acting as a load-balancing proxy, and your WildFly server acting as a worker. Make sure both servers can access each other on both the MCMP-enabled VirtualHost's address and port (Apache HTTP Server side) and on the WildFly AJP and HTTP connector side. The common mistake is to bind WildFly to localhost; it then reports its address as localhost to the Apache HTTP Server residing on a different box, which makes it impossible for it to contact the WildFly server back. The communication is bidirectional.
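A quick way to sanity-check that from each side (plain curl; the addresses and ports are just examples matching the diff below, and <wildfly-host> is a placeholder):
# From the Apache HTTP Server box: can it reach the worker's HTTP connector? (add your port offset if any)
curl -v http://<wildfly-host>:8080/
# From the WildFly box: can it reach the MCMP-enabled VirtualHost?
curl -v http://10.10.2.4:6666/mod_cluster-manager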
This is my configuration diff from the default wildfly-9.0.0.CR1.zip.
328c328
< <mod-cluster-config advertise-socket="modcluster" connector="ajp" advertise="false" proxies="my-proxy-one">
---
> <mod-cluster-config advertise-socket="modcluster" connector="ajp">
384c384
< <subsystem xmlns="urn:jboss:domain:undertow:2.0" instance-id="worker-1">
---
> <subsystem xmlns="urn:jboss:domain:undertow:2.0">
435c435
< <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:102}">
---
> <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
452,454d451
< <outbound-socket-binding name="my-proxy-one">
< <remote-destination host="10.10.2.4" port="6666"/>
< </outbound-socket-binding>
456c453
< </server>
---
> </server>
Changes explanation
proxies="my-proxy-one": the name of an outbound socket binding; there could be more of them listed here.
instance-id="worker-1": the name of the worker, a.k.a. the JVMRoute.
offset: you can ignore this, it's just for my test setup. The offset does not apply to outbound socket bindings.
<outbound-socket-binding name="my-proxy-one">: the IP and port of the VirtualHost in the Apache HTTP Server containing the EnableMCPMReceive directive (see the sketch below).
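For reference, a sketch of that proxy-side VirtualHost (directive names from the mod_cluster httpd module; it assumes the mod_cluster modules such as mod_manager and mod_proxy_cluster are already loaded, and the IP, port and allowed network are examples matching the diff above):
<VirtualHost 10.10.2.4:6666>
  ServerAdvertise Off
  EnableMCPMReceive
  <Location />
    Require ip 10.10.2
  </Location>
  <Location /mod_cluster-manager>
    SetHandler mod_cluster-manager
    Require ip 10.10.2
  </Location>
</VirtualHost>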
Conclusion
Generally, these MEM read / node error messages are related to network problems, e.g. WildFly can contact Apache, but Apache cannot contact WildFly back. Last but not least, it could happen that the Apache HTTP Server's configuration uses the PersistSlots directive and some substantial environment configuration change took place, e.g. a switch from mpm_prefork to mpm_worker. In this case, the MEM read error messages are not related to WildFly but to the cached slotmem files in HTTPD/cache/mod_cluster that need to be deleted.
I'm certain it's network in your case though.
After a couple of weeks I got back to the problem and found the solution. The problem was, of course, in the configuration and had nothing to do with the particular version of WildFly. More specifically:
There were three nodes in the domain and three servers in each node. All nodes were launched with the following property:
-Djboss.node.name=nodeX
...where nodeX is the name of a particular node. However, it meant that all three servers in a node got the same name, which is exactly what confused the load balancer.
As soon as I removed this property, everything started to work.
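In other words, in domain mode each server already gets a unique name from host.xml, and that name becomes the worker's JVMRoute; a sketch of what one node's host.xml might contain (server names here are just examples):
<!-- host.xml on nodeX: do not force jboss.node.name; give each server its own name -->
<servers>
  <server name="nodeX-server-one" group="main-server-group"/>
  <server name="nodeX-server-two" group="main-server-group"/>
  <server name="nodeX-server-three" group="main-server-group"/>
</servers>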
I am setting up CruiseControl.NET and I get the following error message on the webdashboard:
No connection could be made because the target machine actively refused it 127.0.0.1:21234
The URL it is looking for is: tcp://localhost:21234/CruiseManager.rem
However, the ccnet website in IIS has its TCP port set to 82.
So I use the following URL to navigate to the web dashboard: http://127.0.0.1:82/ccnet/ViewFarmReport.aspx
I tried changing the TCP port in IIS to 21234 and I get the following error message on the web dashboard:
Tcp channel protocol violation: expecting preamble.
I have also tried opening the port with the following command:
netsh firewall add portopening TCP 21234 CCNET
When I try to start the CCNET service I get the following message:
The CruiseControl.NET Server service started then stopped. Some services stop automatically if they have no work to do....
Can anyone help me with this problem please?
EDIT - Adding config file
<cruisecontrol xmlns:cb="urn:ccnet.config.builder">
  <cb:define PublishDir="C:\Deploy\Portal2.0Build"/>
  <project name="Portal2.0">
    <workingDirectory>C:\PortalCruiseControl\Working</workingDirectory>
    <artifactDirectory>C:\PortalCruiseControl\Artifacts</artifactDirectory>
    <webURL>http://192.168.17.59:82/ccnet</webURL>
    <triggers>
      <intervalTrigger name="continuous" seconds="10" buildCondition="IfModificationExists"/>
    </triggers>
    <sourcecontrol type="svn">
      <trunkUrl>https://portal2003.local:8443/svn/portalv2.0/trunk</trunkUrl>
      <executable>C:\Program Files (x86)\VisualSVN Server\bin\svn.exe</executable>
      <username>ccnet</username>
      <password>***</password>
      <cleanCopy>true</cleanCopy>
    </sourcecontrol>
    <tasks>
      <msbuild>
        <executable>
          C:\WINDOWS\microsoft.net\Framework64\v3.5\MSBuild.exe
        </executable>
        <projectFile>Portal2.0.sln</projectFile>
        <buildArgs>
          /target:build;publish /p:Configuration=Release /p:MSBuildExtensionsPath=C:\Progra~2\MSBuild /p:MSBuildEmitSolution=1 /p:publishdir=C:\Deploy\Portal2.0Build /verbosity:diag
        </buildArgs>
        <logger>
          C:\Program Files (x86)\CruiseControl.NET\server\ThoughtWorks.CruiseControl.MSBuild.dll
        </logger>
      </msbuild>
    </tasks>
    <labeller type="assemblyVersionLabeller">
      <major>2</major>
      <minor>0</minor>
      <incrementOnFailure>false</incrementOnFailure>
    </labeller>
    <publishers>
      <statistics />
      <xmllogger />
      <package>
        <name>ZipFilePublish</name>
        <compression>9</compression>
        <always>false</always>
        <flatten>false</flatten>
        <baseDirectory>$(PublishDir)</baseDirectory>
        <dynamicValues>
          <replacementValue property="name">
            <format>C:\Deploy\Builds\PortalBuild{0}.zip</format>
            <parameters>
              <namedValue name="$CCNetLabel" value="Default" />
            </parameters>
          </replacementValue>
        </dynamicValues>
        <files>
          <file>*.*</file>
          <file>**\*</file>
        </files>
      </package>
      <email from="bla" mailhost="bla" port="25" userName="bla" password="bla" includeDetails="TRUE" useSSL="FALSE">
        <users>
          <user name="User1" group="Portal" address=""/>
        </users>
        <groups>
          <group name="Portal">
            <notifications>
              <notificationType>change</notificationType>
            </notifications>
          </group>
        </groups>
      </email>
    </publishers>
  </project>
</cruisecontrol>
The first error message is probably caused by the CCNET service not running, which is why the web dashboard can't connect to it. It should go away as soon as you fix ccnet.config so that the service starts running.
The second problem ("Illegal characters in path"; you seem to have already figured out the missing nodes part) is caused by the msbuild/executable element. It seems that CC.NET doesn't like whitespace, and especially newline characters, inside its value. Replacing:
<executable>
C:\WINDOWS\microsoft.net\Framework64\v3.5\MSBuild.exe
</executable>
with:
<executable>C:\WINDOWS\microsoft.net\Framework64\v3.5\MSBuild.exe</executable>
should fix the problem.
Another hint: when you're having problems with the validity of your ccnet.config file, try using CCValidator.exe (it's in your CruiseControl.NET\server folder). It usually points out the problematic part of the config file quite nicely (although that wasn't the case with "Illegal characters in path" problem - I had to comment out specific parts of the config to find the offending node).
The first message you receive (connection actively refused) makes me think of a firewall which is blocking the port you're using.
The second problem could be anything. It could, for instance, be an error in your XML configuration (ccnet.config) file. Can you find any pointers in the Windows Event Log?
Regarding the 2nd problem: did you try to run the CC.NET server from the command line?
If you've got an error in your XML configuration, this will give you a more meaningful error message.
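For example (a sketch; the path assumes the default install location mentioned elsewhere in this thread):
"C:\Program Files (x86)\CruiseControl.NET\server\ccnet.exe"
Running the console version like this prints configuration errors straight to the console instead of the service just stopping.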
Which account are you using to run the Windows service?
Have you checked your ccnet's dashboard.config file?
It has the following line in it:
<server name="local" url="tcp://localhost:21234/CruiseManager.rem" ... />
Try changing the port on that to 82 and then restarting the website (you should just be able to add a space to the web.config file and save it, and IIS will restart the website).
Sounds like you're confusing two different functions:
tcp://localhost:21234
This is the default remoting port for clients like CCTray. This is not used for the IIS web site (dashboard).
Configuration document is likely missing Xml nodes required for properly populating CruiseControl configuration. Missing Xml node (packageList) for required member (ThoughtWorks.CruiseControl.Core.Publishers.PackagePublisher.PackageList)
Your example config is missing the required packageList node.
A misleading error message. The port really is 21234, not 82. I got the same errors. The fix was to start ccnet.exe from the desktop shortcut to discover that the real problem was illegal code in my ccnet.config file.
After fixing the ccnet.config file, the problem moved on: when attempting to build, the system would not let the Subversion client modify the read-only marker files in the checked-out repo.
In my case I had mistyped a project configuration file name in ccnet.config: instead of timescheduler.config it was timesheduler. When I fixed the file name I was able to run the ccnet service.
<cruisecontrol xmlns:cb="urn:ccnet.config.builder">
<cb:include href="definitions.xml" xmlns:cb="urn:ccnet.config.builder"/>
<cb:include href="projects/timescheduler.config" xmlns:cb="urn:ccnet.config.builder"/>
</cruisecontrol>
I have a Windows service that uses log4net. We noticed that the service in question was running painfully slowly, so we attached a debugger to it and stepped through. It appears that each time it tries to write an entry to the log via log4net, it takes anywhere from 10 to 30 seconds before the next line of code can execute. Obviously this adds up...
The service is .NET 2.0.
We're using log4net 1.2.0.30714.
We've tested this on a machine running Vista and a machine running Windows Server 2003 and have seen the same or similar results.
Jeff mentioned a performance problem with Log4Net in Podcast 20. It's possible that you are seeing a similar issue.
It turned out that someone had added an SmtpAppender in a config file which was overriding the one in our app. As a result, the errant SMTP server address was unreachable. log4net was trying to log the error for a minute per request and then giving up and moving on to the next line of code. Correcting the SMTP address fixed the problem.
I use log4net with the AdoNetAppender and have not seen any performance degradation in my Windows service. What appender are you using?
Check your config file for Log4Net settings. Log4Net can be configured to log to a remote machine, and if the connection is slow, so will be your logging speed.
Well, I'm not remoting... this is writing to a log file on the machine it's running on. Here are my appender settings:
<appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender,log4net">
  <file value="D:\\ROPLogFiles\\FileProcessor.txt" />
  <appendToFile value="true" />
  <datePattern value="yyyyMMdd" />
  <rollingStyle value="Date" />
  <layout type="log4net.Layout.PatternLayout,log4net">
    <param name="ConversionPattern" value="%d [%t] %-5p %c [%x] - %m%n" />
  </layout>
  <threshold value="INFO" />
</appender>
The default maximum file size is 10 MB. If your files are about this size, and your file system is quite full and probably heavily fragmented, it may be that the problem lies there. How big are your log files? I encountered similar problems with log files of gigabyte size.
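If that turns out to be the cause, here is a sketch of a size-capped appender (assuming a log4net build that supports the Composite rolling style; the values are placeholders to tune):
<appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender,log4net">
  <file value="D:\\ROPLogFiles\\FileProcessor.txt" />
  <appendToFile value="true" />
  <!-- Composite = roll by date and by size -->
  <rollingStyle value="Composite" />
  <datePattern value="yyyyMMdd" />
  <maximumFileSize value="10MB" />
  <maxSizeRollBackups value="10" />
  <threshold value="INFO" />
  <layout type="log4net.Layout.PatternLayout,log4net">
    <param name="ConversionPattern" value="%d [%t] %-5p %c [%x] - %m%n" />
  </layout>
</appender>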