I have a failing case when making an IdP connection using PingFederate - certificate

The server.log says "signature_status": "UNVERIFIED". Is this a certificate issue?
Also, what are the best ways to read the PingFederate logs on a Windows machine?

That sounds like an issue with signature verification, which could be the cert itself but is more likely a configuration issue. More information is really needed to know which it is.
I assume the issue you are having with reading logs on Windows machines is because the files are large or are moving quickly. If the files are too big, you can modify the log4j2.xml config file at appdir/pingfed*/pingfed*/server/default/conf/log4j2.xml to reduce the log size to something easier to read in Notepad. Here is an example rolling file appender that should leave you with easily manageable files.
<RollingFile name="FILE" fileName="${sys:pf.log.dir}/server.log"
             filePattern="${sys:pf.log.dir}/server.log.%i" ignoreExceptions="false">
    <PatternLayout>
        <!-- Uncomment this if you want to use UTF-8 encoding instead
             of system's default encoding.
        <charset>UTF-8</charset> -->
        <pattern>%d %X{trackingid} %-5p [%c] %m%n</pattern>
    </PatternLayout>
    <Policies>
        <SizeBasedTriggeringPolicy size="20000 KB" />
    </Policies>
    <DefaultRolloverStrategy max="5" />
</RollingFile>
If your issue is that the files are moving too fast to read, then you might consider using something like BareTail, or Get-Content in PowerShell now that it has a -Tail switch.
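For example, something like this one-liner will follow the last lines of the live log as it is written; the path is just an example and should point at your own pf.log.dir:
Get-Content "C:\pingfederate\pingfederate\log\server.log" -Tail 50 -Wait   # path is an assumption; adjust to your install
The -Wait switch keeps the console attached and prints new lines as they arrive, which is the closest PowerShell equivalent of tail -f.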

Related

How do I limit the size of the log files generated by Druid while using Imply?

I'm using Imply to manage Druid's cluster, but my log files have grown to hundreds of gigabytes of storage. I'm talking about the log files in the imply/var/sv/ directory, where there are these 7 log files: broker.log, historical.log, middleManager.log, zk.log, coordinator.log, imply-ui.log, and overlord.log.
Among them, coordinator.log has grown to a massive size of about 560 GB in a matter of a few months. I have read all those logs, and they don't bother me much. What concerns me is the size of the file, which is eating my entire storage. I have tried to find ways to limit the size of these log files, but believe me, nothing has worked for me.
I have read in many places that Druid uses the log4j2 logger, so the size can be limited through its configuration in the log4j2.xml file. But here is another big source of confusion: there are four log4j2.xml files, so which one should I modify?
I tried modifying all of them, but it still didn't work. It seems I'm fumbling my way through this... So this is my request: could anybody point me in the right direction for limiting the size of these log files?
You can set up a simple cron job to truncate these files periodically using truncate -s 0 imply/var/sv/*.log
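For example, a crontab entry along these lines would empty the files every night at midnight (the absolute path to the imply directory is an assumption; adjust it to your installation):
0 0 * * * truncate -s 0 /opt/imply/var/sv/*.log
Add it with crontab -e under the user that owns the log files so the glob and permissions resolve correctly.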
The default log level in the Imply distribution is set to info, which generates a lot of log output. If the logs don't bother you much, you can set the log level to error so that entries are written only when an error occurs at runtime. To do that, modify the logger level inside the conf/druid/_common/log4j2.xml file.
<?xml version="1.0" encoding="UTF-8" ?>
<Configuration status="WARN">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{ISO8601} %p [%t] %c - %m%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="error">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>
Even after doing so, you should periodically truncate the log files as @mdeora suggested.

How to stop log4j2 creating a new log file every time my service starts

Following is my log4j2 configuration:
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="trace" name="MyApp" packages="com.swimap.base.launcher.log">
    <Appenders>
        <RollingFile name="RollingFile" fileName="logs/app-${date:MM-dd-yyyy-HH-mm-ss-SSS}.log"
                     filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
            <PatternLayout>
                <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
            </PatternLayout>
            <Policies>
                <SizeBasedTriggeringPolicy size="1 KB"/>
            </Policies>
            <DefaultRolloverStrategy max="3"/>
        </RollingFile>
    </Appenders>
    <Loggers>
        <Root level="trace">
            <AppenderRef ref="RollingFile"/>
        </Root>
    </Loggers>
</Configuration>
The issue is that each time I start up my service, a new log file is created even if the old one has not reached the specified size. If the program restarts frequently, I end up with many log files ending in '.log' that are never compressed.
The logs I get look like this:
/log4j2/logs
/log4j2/logs/2017-07
/log4j2/logs/2017-07/app-07-18-2017-1.log.gz
/log4j2/logs/2017-07/app-07-18-2017-2.log.gz
/log4j2/logs/2017-07/app-07-18-2017-3.log.gz
/log4j2/logs/app-07-18-2017-20-42-06-173.log
/log4j2/logs/app-07-18-2017-20-42-12-284.log
/log4j2/logs/app-07-18-2017-20-42-16-797.log
/log4j2/logs/app-07-18-2017-20-42-21-269.log
Can someone tell me how I can append to the existing log file when I start up my program? Many thanks for anything that gets me closer to the answer!
I suppose your problem is that you have fileName="logs/app-${date:MM-dd-yyyy-HH-mm-ss-SSS}.log" in your log4j2 configuration file.
This fileName template means that log4j2 will create a log file whose name contains the current date plus hours, minutes, seconds and milliseconds.
You should probably remove the HH-mm-ss-SSS section; this gives you a daily rolling file and avoids creating a new file on every app restart.
You can play with the template and choose the format you need.
If you want only one log file forever, use a constant fileName, like fileName="app.log".
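For instance, here is a minimal sketch based on the appender from the question, with only the fileName changed to a constant value (append is the default behaviour for RollingFile and is shown explicitly here):
<RollingFile name="RollingFile" fileName="logs/app.log" append="true"
             filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz">
    <PatternLayout>
        <Pattern>%d %p %c{1.} [%t] %m%n</Pattern>
    </PatternLayout>
    <Policies>
        <SizeBasedTriggeringPolicy size="1 KB"/>
    </Policies>
    <DefaultRolloverStrategy max="3"/>
</RollingFile>
With a constant fileName, log4j2 appends to the existing file after a restart and only rolls over when the triggering policy fires.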
It's not hard to implement this yourself. There is an interface, DirectFileRolloverStrategy; implement the method below:
public String getCurrentFileName(RollingFileManager manager)
Maybe someone has met the same problem and this can help them.

Infinispan write-behind: what am I doing wrong?

I have a small test in which I am trying to measure the time taken by Infinispan with a local cache, and then with a local cache plus a write-behind store.
Surprisingly, with the plain local cache, putting 8M entries takes around 27 seconds and a get takes 1 millisecond. That is good. However, as soon as I enable write-behind, the test does not even finish in 30 minutes. I am sure something is terribly wrong with the configuration.
I have tried 5.3.0.Final and 5.2.7.Final.
The configuration is pasted below:
<namedCache name="LocalWithWriteBehind">
    <loaders shared="false">
        <loader class="org.infinispan.loaders.file.FileCacheStore"
                fetchPersistentState="true" ignoreModifications="false"
                purgeOnStartup="false">
            <properties>
                <property name="location" value="${java.io.tmpdir}" />
            </properties>
            <!-- write-behind configuration starts here -->
            <async enabled="true" threadPoolSize="500" />
            <!-- write-behind configuration ends here -->
        </loader>
    </loaders>
</namedCache>
If you would like to see the Scala App, see the code here http://pastebin.com/PSiJFFiZ
The file cache store before Infinispan 6.0 was very slow. Please upgrade to Infinispan 6.0.0.Final, and enable the single file cache store as indicated here.
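For reference, a rough sketch of what the equivalent cache definition could look like under the Infinispan 6.0 schema's single file store; the element and attribute names should be double-checked against the 6.0 configuration reference, and the location and thread pool size are only illustrative:
<namedCache name="LocalWithWriteBehind">
    <persistence passivation="false">
        <singleFile location="${java.io.tmpdir}" fetchPersistentState="true" purgeOnStartup="false">
            <!-- write-behind: flush entries to the store asynchronously -->
            <async enabled="true" threadPoolSize="5"/>
        </singleFile>
    </persistence>
</namedCache>
Note the sketch uses a far smaller thread pool than the 500 in the original config; a pool that large is unlikely to help a local file store.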

WCF Web Service Metadata Publishing Disabled Error

I modified my web.config to add a custom binding in order to increase the buffer and message size, and it seems like in creating the service element explicitly, it has somehow broken the reference to my behavior element.
When I try to run my web service from VS using the WCF Test Client, or I go to the service page, I get the error:
Metadata publishing for this service is currently disabled.
I've compared my web.config to a few different sources on this and everything seems to match. I've no idea what the issue is.
Here is the system.serviceModel element:
<system.serviceModel>
    <services>
        <service name="BIMIntegrationWS.BIMIntegrationService" behaviorConfiguration="metadataBehavior">
            <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            <endpoint address="http://localhost:1895/BIMIntegrationService.svc"
                      binding="customBinding" bindingConfiguration="customBinding0"
                      contract="BIMIntegrationWS.IBIMIntegrationService"/>
        </service>
    </services>
    <behaviors>
        <serviceBehaviors>
            <behavior name="metadataBehavior">
                <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint above before deployment -->
                <serviceMetadata httpGetEnabled="true"/>
                <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information -->
                <serviceDebug includeExceptionDetailInFaults="true"/>
            </behavior>
        </serviceBehaviors>
    </behaviors>
    <bindings>
        <customBinding>
            <binding name="customBinding0">
                <binaryMessageEncoding />
                <httpTransport maxReceivedMessageSize="262064"
                               maxBufferSize="262064"
                               maxBufferPoolSize="262064" />
            </binding>
        </customBinding>
    </bindings>
    <serviceHostingEnvironment multipleSiteBindingsEnabled="true" aspNetCompatibilityEnabled="true"/>
</system.serviceModel>
It almost seems like the web service is not finding the behavior element at all. I don't get an error complaining that the target behaviorConfiguration doesn't exist, or anything like that; just that metadata publishing is not enabled.
Any help would be greatly appreciated.
Thanks.
I think I've "fixed" the issue. More coming later.
EDIT:
In order to "fix" my issue, I basically added a new WCF service to my application and had it implement my previous interface. I then copied all the code from my original service, and once I tweaked the .config file (to look pretty much like the one posted in the question), everything worked fine.
Of course, I know, like we all know, that there is no magic here and there must be some discrepancy. This is when I noticed/remembered that after I had created my original service, called "BIMIntegrationService.svc", I decided that this was too long a name, so I renamed/refactored my class to "BIMIntegrationWS". Doing this, however, does not change the name of the service file (and therefore the name of the file in the http:// address).
Long story short, I made 2 changes to my .config and everything worked:
1) I changed:
<service name="BIMIntegrationWS.BIMIntegrationService" behaviorConfiguration="metadataBehavior">
to
<service name="BIMIntegrationWS.BIMIntegrationWS" behaviorConfiguration="metadataBehavior">
After running the service like this, I got an error (a helpful one this time) complaining that if multipleSiteBindings was enabled, the endpoint address had to be relative. So:
2) I set that to false (because I don't remember why it was in there in the first place) and everything worked fine.
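Putting both changes together, the relevant parts of the working config looked roughly like this (reconstructed from the description above; the omitted elements are unchanged from the question):
<service name="BIMIntegrationWS.BIMIntegrationWS" behaviorConfiguration="metadataBehavior">
    ...
</service>
...
<serviceHostingEnvironment multipleSiteBindingsEnabled="false" aspNetCompatibilityEnabled="true"/>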
I guess I should have taken a hint when it seemed like my web service was not using my "service" element at all. :\
EDIT 2:
Actually, you can see in my second comment, in the original question, that the auto-generated tag was pointing to: , as opposed to "BIMIntegrationService".
That should have given it away.
I doubt many other people will have this same issue, but just in case.
I've had the same trouble; I've gotten everything configured correctly (even copied the code from a previous service I have running) and yet no change in allowing the service to expose metadata. Yet if I make a spelling error, parse error, etc. in the app.config, I'm given exceptions telling me to correct the problem (which tells me the service is reading the config, just disregarding it).
I was able to bypass it by generating the config settings in the host application:
// Enable metadata publishing in code (equivalent to <serviceMetadata httpGetEnabled="true"/>)
System.ServiceModel.Description.ServiceMetadataBehavior smb = new System.ServiceModel.Description.ServiceMetadataBehavior();
smb.HttpGetEnabled = true;

// Host the service and attach the behavior before opening
ServiceHost host = new ServiceHost(typeof(MyService), new Uri("http://localhost:1234/"));
host.Description.Behaviors.Add(smb);
host.Open();
...
host.Close();
Basically, this lets the code override and apply the config changes I wanted. I know it's not ideal, but I was exposing the service through a Windows service, which made this doable. If you do get your problem resolved, please pass along the solution, as I'd be curious to know why (at least in your case, if not both of ours) it's failing.

making server.log append=true

How do I make the log server\log\server.log append, i.e. whenever I restart JBoss it should not overwrite the content of the log but continue from the end of it?
Add <param name="Append" value="true"/> to the <Appender> in your conf/jboss-log4j.xml file. There may be multiple appenders defined, so make sure you get the one that handles server.log.
Try setting <param name="Append" value="true"/> in your log4j.xml. This may be in a FileAppender or RollingFileAppender section. Just look for the appender that writes to server.log.
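For illustration, in a default conf/jboss-log4j.xml the appender that writes server.log looks roughly like this (trimmed; the appender class and layout may differ in your version), with the Append param set to true:
<appender name="FILE" class="org.jboss.logging.appender.DailyRollingFileAppender">
    <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/>
    <param name="File" value="${jboss.server.log.dir}/server.log"/>
    <param name="Append" value="true"/>
    <param name="DatePattern" value="'.'yyyy-MM-dd"/>
    <layout class="org.apache.log4j.PatternLayout">
        <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
    </layout>
</appender>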
Short answer: change the log file name (e.g. myapp.log)
Longer answer: We've also seen a case where server.log got truncated in JBoss. Somewhere, someone is truncating the server.log file in some static initialization block we couldn't find. Changing the file name seems to work, and the contents were appended to.
We had the same issue on our remote Ubuntu 16.04 Linux machines running JBoss EAP 6.4.0, but not when we ran our JBoss server locally in Eclipse/Windows.
The append property was already set to true.
I finally made it work by declaring the append property before the fileName property in standalone-full.xml:
<properties>
<property name="append" value="true"/>
<property name="fileName" value="${jboss.server.log.dir}/server.log"/>