Is CMS Replication required for ApplicationPool also? - powershell

When I run the command Get-CsManagementStoreReplicationStatus I get UpToDate : True for my domain but it comes False for my ApplicationPool.
UpToDate : True
ReplicaFqdn : ****.*****
LastStatusReport : 07-08-2014 11:42:26
LastUpdateCreation : 07-08-2014 11:42:26
ProductVersion : 5.0.8308.0
UpToDate : False
ReplicaFqdn : MyApplicationPool.****.*****
LastStatusReport :
LastUpdateCreation : 08-08-2014 15:16:03
ProductVersion :
UpToDate : False
ReplicaFqdn : ****.*****
LastStatusReport :
LastUpdateCreation : 08-08-2014 15:10:59
Am I on the right track? Have I created my ApplicationPool incorrectly?

Yes, UCMA applications running on an app server generally require access to the CMS, so replication should be enabled.
On the app server, you'd need to:
Ensure the "Lync Server Replica Replicator Agent" service is running
Run Enable-CsReplica in the Management Shell
Run Enable-CsTopology
Then run Invoke-CsManagementStoreReplication to force a replication
I've noticed that it often takes a while for the CMS to be replicated to the app server, so you might need to run Get-CsManagementStoreReplicationStatus a few times before you see UpToDate change to True.
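Assuming a standard Lync/Skype for Business Management Shell on the application server, the steps above can be sketched as follows (the replica FQDN and the "REPLICA" service name are placeholders/assumptions; check the service name on your server):

```powershell
# Sketch of the steps above, run from the Lync Management Shell on the app server.
# Start the replicator agent (display name "Lync Server Replica Replicator Agent").
Start-Service -DisplayName "Lync Server Replica Replicator Agent"

# Enable the local machine as a CMS replica and publish the topology change.
Enable-CsReplica
Enable-CsTopology

# Force a replication cycle, then poll until the replica reports UpToDate.
Invoke-CsManagementStoreReplication
Get-CsManagementStoreReplicationStatus -ReplicaFqdn "MyApplicationPool.contoso.com"
```

Expect to run the last cmdlet several times; replication to a new replica is not instantaneous.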

Related

Setup Apereo Cas Management integrated with CAS server

I want to install Apereo CAS Management (version 6.0) and integrate it with CAS Server (version 6.0).
I installed it following these steps:
Step 1: I installed CAS Server
I checked it with the REST API, and it worked.
My server runs at http://203.162.141.7:8080
This is the configuration of my CAS server. I put this config at /etc/cas/config. Here is my cas.properties file:
cas.server.name=http://203.162.141.7:8080
cas.server.prefix=${cas.server.name}/cas
logging.config: file:/etc/cas/config/log4j2.xml
server.port=8080
server.ssl.enabled=false
cas.serviceRegistry.initFromJson=false
cas.serviceRegistry.json.location=file:/etc/cas/services-repo
cas.authn.oauth.grants.resourceOwner.requireServiceHeader=true
cas.authn.oauth.userProfileViewType=NESTED
cas.authn.policy.requiredHandlerAuthenticationPolicyEnabled=false
cas.authn.attributeRepository.stub.attributes.email=casuser@example.org
#REST API JSON
cas.rest.attributeName=email
cas.rest.attributeValue=.+example.*
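With the REST feature enabled as above, the server can be smoke-tested against the standard CAS REST protocol endpoint; this sketch assumes the default demo credentials casuser/Mellon are still configured (substitute your own):

```shell
# Hypothetical smoke test of the CAS REST protocol:
# request a ticket-granting ticket for the demo user.
curl -s -X POST "http://203.162.141.7:8080/cas/v1/tickets" \
     -d "username=casuser" -d "password=Mellon"
# A 201 response whose body contains a TGT-... id means authentication works.
```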
Step 2: I installed cas-management-overlay
I put my cas-management-overlay's config file at /etc/cas/config too. Here is my management.properties file:
cas.server.name=http://203.162.141.7:8080
cas.server.prefix=${cas.server.name}/cas
mgmt.serverName=http://203.162.141.7:8088
mgmt.adminRoles[0]=ROLE_ADMIN
mgmt.userPropertiesFile=file:/etc/cas/config/users.json
server.port=8088
server.ssl.enabled=false
logging.config=file:/etc/cas/config/log4j2-management.xml
And here is my users.json file:
{
  "@class" : "org.apereo.cas.mgmt.authz.json.UserAuthorizationDefinition",
  "casuser" : {
    "@class" : "org.apereo.cas.mgmt.authz.json.UserAuthorizationDefinition",
    "roles" : [ "ROLE_ADMIN" ]
  }
}
Then I run ./build.sh, and the build completes.
Finally, I access http://203.162.141.7:8088/cas-management to open cas-management, but it redirects to http://203.162.141.7:8080/cas/login?service=http%3A%2F%2F203.162.141.7%3A8088%2Fcas-management%2F and shows an error.
I don't know where I have gone wrong.
I think since you haven't told the management webapp about the location of the service registry, it can't add itself as a registered service.
Manually add a registered service for http://203.162.141.7:8088/cas-management and you should be able to log in to the management app at that point.
Here is my registered-service file for cas-management, /etc/cas/services-repo/casManagement-1.json:
{
  "@class" : "org.apereo.cas.services.RegexRegisteredService",
  "serviceId" : "^https://domain:8088/cas-management.+",
  "name" : "casManagement",
  "id" : 1,
  "evaluationOrder" : 1,
  "allowedAttributes" : [ "cn", "mail" ]
}

Deploy Graylog on GKE

I'm having a hard time deploying Graylog on Google Kubernetes Engine. I'm using this configuration https://github.com/aliasmee/kubernetes-graylog-cluster with some minor modifications. My Graylog server is up, but it shows this error in the interface:
Error message
Request has been terminated
Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
Original Request
GET http://ES_IP:12900/system/sessions
Status code
undefined
Full error message
Error: Request has been terminated
Possible causes: the network is offline, Origin is not allowed by Access-Control-Allow-Origin, the page is being unloaded, etc.
Graylog logs show nothing in particular other than this:
org.graylog.plugins.threatintel.tools.AdapterDisabledException: Spamhaus service is disabled, not starting (E)DROP adapter. To enable it please go to System / Configurations.
at org.graylog.plugins.threatintel.adapters.spamhaus.SpamhausEDROPDataAdapter.doStart(SpamhausEDROPDataAdapter.java:68) ~[?:?]
at org.graylog2.plugin.lookup.LookupDataAdapter.startUp(LookupDataAdapter.java:59) [graylog.jar:?]
at com.google.common.util.concurrent.AbstractIdleService$DelegateService$1.run(AbstractIdleService.java:62) [graylog.jar:?]
at com.google.common.util.concurrent.Callables$4.run(Callables.java:119) [graylog.jar:?]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
but at the end:
2019-01-16 13:35:00,255 INFO : org.graylog2.bootstrap.ServerBootstrap - Graylog server up and running.
The Elasticsearch health check is green, and there are no issues in the ES or Mongo logs.
I suspect a problem with the connection to Elasticsearch, though.
curl http://ip_address:9200/_cluster/health\?pretty
{
"cluster_name" : "elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 4,
"active_shards" : 4,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
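The manual check above can also be scripted. This sketch extracts the status field with sed; a sample payload is inlined so it runs standalone, but in practice you would capture the curl output as shown in the comment:

```shell
# Extract the "status" field from an Elasticsearch _cluster/health payload.
# In practice: health=$(curl -s http://ip_address:9200/_cluster/health)
health='{"cluster_name":"elasticsearch","status":"green","timed_out":false}'
status=$(printf '%s' "$health" | sed -n 's/.*"status"[[:space:]]*:[[:space:]]*"\([a-z]*\)".*/\1/p')
if [ "$status" = "green" ]; then
  echo "cluster healthy"
else
  echo "cluster status: $status" >&2
fi
```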
After reading the tutorial you shared, I was able to identify that kubelet needs to run with the argument --allow-privileged.
"Elasticsearch pods need for an init-container to run in privileged mode, so it can set some VM options. For that to happen, the kubelet should be running with args --allow-privileged, otherwise the init-container will fail to run."
It's not possible to customize or modify kubelet parameters/arguments on GKE; there is a feature request for this at https://issuetracker.google.com/118428580, so it may be implemented in the future.
Also, if you modify kubelet directly on the node(s), the master may reset the configuration, so the changes are not guaranteed to persist.
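To confirm whether the Elasticsearch init-container actually requests privileged mode on your cluster, you can inspect the pod spec; the label selector app=elasticsearch here is an assumption, so adjust it to match the manifests you deployed:

```shell
# List each Elasticsearch pod with its init-containers' securityContext.
kubectl get pods -l app=elasticsearch \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.initContainers[*].securityContext}{"\n"}{end}'
```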

Service Fabric Replica Stuck

I am upgrading an application on Service Fabric and one of the replicas is showing the following warning:
Unhealthy event: SourceId='System.RAP', Property='IStatefulServiceReplica.ChangeRole(S)Duration', HealthState='Warning', ConsiderWarningAsError=false.
The api IStatefulServiceReplica.ChangeRole(S) on node _gtmsf1_0 is stuck. Start Time (UTC): 2018-03-21 15:49:54.326.
After some debugging, I suspect I'm not properly honoring a cancellation token. In the meantime, how do I safely force a restart of this stuck replica to get the service working again?
Partial results of Get-ServiceFabricDeployedReplica:
...
ReplicaRole : ActiveSecondary
ReplicaStatus : Ready
ServiceTypeName : MarketServiceType
...
ServicePackageActivationId :
CodePackageName : Code
...
HostProcessId : 6180
ReconfigurationInformation : {
PreviousConfigurationRole : Primary
ReconfigurationPhase : Phase0
ReconfigurationType : SwapPrimary
ReconfigurationStartTimeUtc : 3/21/2018 3:49:54 PM
}
You might be able to pipe that directly to Restart-ServiceFabricReplica. If the replica remains stuck, you should be able to use Get-ServiceFabricDeployedCodePackage and Restart-ServiceFabricDeployedCodePackage to restart the surrounding process. Since Restart-ServiceFabricDeployedCodePackage has options for selecting random packages to simulate failure, be sure to target the specific code package you want to restart.
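A sketch of that pipeline follows; the application name fabric:/MyApp is a placeholder (it is not given in the question), and you should verify the pipeline parameter binding against your SDK version:

```powershell
# Restart the stuck replica by piping the deployed replica object.
Get-ServiceFabricDeployedReplica -NodeName "_gtmsf1_0" -ApplicationName "fabric:/MyApp" |
    Where-Object { $_.ServiceTypeName -eq "MarketServiceType" } |
    Restart-ServiceFabricReplica

# If that stays stuck, restart the hosting process instead, explicitly
# targeting the "Code" package rather than letting the cmdlet pick at random.
Get-ServiceFabricDeployedCodePackage -NodeName "_gtmsf1_0" -ApplicationName "fabric:/MyApp" |
    Where-Object { $_.CodePackageName -eq "Code" } |
    Restart-ServiceFabricDeployedCodePackage
```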

Spring Boot Restful service with incremental memory without load

I have deployed a Spring Boot RESTful service [spring-boot-starter-parent 1.3.0.RELEASE] along with the following Maven dependencies:
spring-boot-starter-cloud-connectors,
spring-boot-starter-data-rest,
spring-boot-starter-jdbc,
spring-boot-starter-test,
spring-boot-configuration-processor,
spring-context-support [4.1.2.RELEASE] and
spring-cloud-starter-parent [Brixton.M4]
It is also connected to the MySQL and AutoScaler marketplace services.
This RESTful service performs well with respect to SLA and memory utilisation. However, I have observed the following concerns:
1. The application was deployed with its initial memory, but after a couple of minutes (say 15-20 minutes) memory keeps growing by 2-5 MB every couple of minutes, even though there is no load (no service calls).
2. After a heavy workload, the application reaches a certain memory level (say 600 MB), but the memory does not come down after the load, even when idle for a while. Combined with point 1, memory keeps growing.
JMX shows that the GC runs both minor and major collections under load, and the following are the Spring metrics from spring-boot-starter-actuator:
{
  "_links" : { "self" : { "href" : "https://Promotion/metrics" } },
  "mem" : 376320,
  "mem.free" : 199048,
  "processors" : 4,
  "instance.uptime" : 6457682,
  "uptime" : 6464241,
  "systemload.average" : 0.05,
  "heap.committed" : 376320,
  "heap.init" : 382976,
  "heap.used" : 177271,
  "heap" : 376320,
  "threads.peak" : 22,
  "threads.daemon" : 20,
  "threads.totalStarted" : 27,
  "threads" : 22,
  "classes" : 9916,
  "classes.loaded" : 9916,
  "classes.unloaded" : 0,
  "gc.ps_scavenge.count" : 47,
  "gc.ps_scavenge.time" : 344,
  "gc.ps_marksweep.count" : 0,
  "gc.ps_marksweep.time" : 0,
  "httpsessions.max" : -1,
  "httpsessions.active" : 0,
  "datasource.primary.active" : 0,
  "datasource.primary.usage" : 0.0,
  "gauge.response.metrics" : 7.0,
  "gauge.response.BuyMoreSaveMore.getDiscDetails" : 14.0,
  "counter.status.200.metrics" : 7,
  "counter.status.200.BuyMoreSaveMore.getDiscDetails" : 850
}
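To separate heap growth from non-heap growth, one common first diagnostic is to sample GC utilisation from the OS side with the JDK's jstat tool (the pid 12345 is a placeholder; find the real one with jps):

```shell
# Find the pid of the Spring Boot JVM.
jps -l
# Sample eden/survivor/old occupancy and GC counts every 5 seconds.
jstat -gcutil 12345 5000
```

If old-gen occupancy stays flat while process RSS grows, the growth is likely off-heap (threads, metaspace, native buffers) rather than a heap leak.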

How to use jconsole to remotely connect to Resin 4

I want to use jconsole to connect remotely to Resin 4, but it doesn't work when I modify resin.properties:
#Jconsole config
-Dcom.sun.management.jmxremote.port : 8080
-Dcom.sun.management.jmxremote.ssl : false
-Dcom.sun.management.jmxremote.authenticate : false
-Djava.rmi.server.hostname : host_ip
I think the settings in resin.properties aren't taking effect, but I don't know how to configure this correctly.
From 4.0 it has to be configured in resin.xml; the documentation link is below. However, I am still unable to get the JMX port up and running.
http://caucho.com/resin-4.0/admin/resin-admin-console.xtp#JMXConsole
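For reference, the JMX settings are JVM arguments (-D flags), not bare properties, so if your Resin build reads a jvm_args key from resin.properties, a sketch would look like this (not verified against 4.0.x; note the port must not collide with the HTTP port 8080 used above):

```properties
# resin.properties -- pass JMX settings to the JVM as arguments (sketch)
jvm_args : -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false -Djava.rmi.server.hostname=host_ip
```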