I would like to get the backup and restore related functionality working inside the Service Fabric Explorer for my local dev cluster. Any action I take related to backup/restore in the cluster manager UI currently throws a "service not found" exception, I believe because the Backup and Restore service is not running on the cluster.
I can't find any documentation on configuring the local dev cluster; the standalone cluster steps don't seem to apply. I have attempted to use sfctl to get the cluster configuration with sfctl sa-cluster config, but the operation times out against my local dev cluster. I've tried the analogous Get-ServiceFabricClusterConfiguration from the PowerShell module and get a timeout there as well.
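For reference, roughly what I ran (assuming the default local dev cluster endpoints):
sfctl cluster select --endpoint http://localhost:19080
sfctl sa-cluster config
Connect-ServiceFabricCluster -ConnectionEndpoint localhost:19000
Get-ServiceFabricClusterConfiguration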
For the time being I have built a code-based backup and restore, but I really like the service and would like to see what I can do with it locally.
I tested this with cluster version 7.0.470.9590
Verify that the Backup and Restore service is available in your installation.
The folder C:\Program Files\Microsoft Service Fabric\bin\Fabric\Fabric.Code\__FabricSystem_App{random-number}\BRS.Code.Current should exist and contain the correct binaries.
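A quick way to check is a wildcard lookup from PowerShell (a minimal sketch; the wildcard stands in for the random number in the folder name):
Get-ChildItem "C:\Program Files\Microsoft Service Fabric\bin\Fabric\Fabric.Code\__FabricSystem_App*\BRS.Code.Current"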
Change your local cluster config.
Your cluster config is located under: C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup
So if your dev cluster is a single-node unsecured cluster, you can change: C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\NonSecure\OneNode\ClusterManifestTemplate.json
In the "addOnFeatures" tag you can add "BackupRestoreService" example:
"addOnFeatures": [
"DnsService",
"EventStoreService",
"BackupRestoreService"
]
Under "fabricSettings" you then add the configuration for the backup and restore service:
{
    "name": "BackupRestoreService",
    "parameters": [
        {
            "name": "SecretEncryptionCertThumbprint",
            "value": "......YOURTHUMBPRINT....."
        }
    ]
}
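If you don't already have a certificate to use for SecretEncryptionCertThumbprint, a minimal sketch of creating a self-signed one for local development (the subject name here is an arbitrary example; run from an elevated PowerShell prompt):
$cert = New-SelfSignedCertificate -DnsName "sf-dev-brs-encryption" -CertStoreLocation Cert:\LocalMachine\My
$cert.Thumbprint  # paste this value into the manifest template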
After these steps you can reset your dev cluster from the system tray (right-click the Service Fabric icon => Reset Local Cluster).
When your cluster is restarted you can verify that the service is running by opening the cluster dashboard and expanding the system services.
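Alternatively, a quick check from PowerShell (assuming the default unsecured local endpoint):
Connect-ServiceFabricCluster -ConnectionEndpoint localhost:19000
Get-ServiceFabricService -ApplicationName fabric:/System | Where-Object { $_.ServiceName -like "*BackupRestoreService*" }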
You can use this approach to configure other system services as well.
Note: updating your SDK may result in losing the changes made to your cluster config.
When installing MongoDb, I get the option to install it as a service. What does that mean? If I don't select that option, what difference would it make? Also, selecting "install as a service" will bring up additional options, such as "Run service as a network service user" or "run service as a local or domain user". What do these options do?
I'm speaking from the perspective of Windows development, but the concepts are similar in other operating systems, such as Linux.
What are services?
Services are applications that run in the system's background, such as task schedulers and event loggers. If you look at Task Manager > Processes, you can see a series of Service Host processes, which are containers hosting your Windows services.
What difference does setting MongoDB as a service make?
Running MongoDB as a service gives you some flexibility in how you run and deploy MongoDB. For example, you can have MongoDB start at boot and restart on failures. If you don't set MongoDB up as a service, you will have to start the MongoDB server manually every time.
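As a rough illustration (the install path, config file location and version are assumptions that depend on your setup), registering and starting MongoDB as a Windows service from an elevated prompt could look like this:
"C:\Program Files\MongoDB\Server\4.0\bin\mongod.exe" --config "C:\Program Files\MongoDB\Server\4.0\bin\mongod.cfg" --install --serviceName MongoDB
net start MongoDB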
So, what is the difference between a network service and a local service?
Running MongoDB as a network service means the service will access network resources using the same credentials as the computer account you are running on. Running it as a local service runs it under a low-privilege local account without the computer's network credentials. (Refer to the source here.)
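For illustration, the account an already-installed service runs under can also be changed afterwards with sc.exe (the service name MongoDB matches the hypothetical install example above; the user name and password are placeholders):
sc.exe config MongoDB obj= "NT AUTHORITY\NetworkService"
sc.exe config MongoDB obj= ".\SomeLocalUser" password= "<password>"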
I am going through the guide here:
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server
Section "Step 1B: Create a multi-machine cluster".
I have installed the cluster on one box and am trying to use the same JSON (as per the instructions) to install it on another box, so that I can have the cluster running on 2 VMs.
I am now getting this error when I run TestConfig.ps1:
Previous Fabric installation detected on machine XXX. Please clean the machine.
Previous Fabric installation detected on machine XXX. Please clean the machine.
Data Root node Dev Box1 exists on machine XXX in \XXX\C$\ProgramData\SF\Dev Box1. This is an artifact from a previous installation - please delete the directory corresponding to this node.
First, take a look at this link. These are the requirements that each cluster node needs to meet if you want to create the cluster.
The error is pretty obvious: you most likely already have SF installed on the machine, so either the SF runtime or some uncleaned cluster data is still there.
Your first try should be running the CleanFabric PowerShell script from the SF standalone package on each node. It should clean all SF data (cluster, runtime, registry, etc.). Try this and then run the TestConfiguration script once again. If this does not help, you will have to go to each node and manually delete any SF data that the TestConfiguration script complains about.
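A sketch of those two steps, assuming the standalone package is extracted to C:\SFStandalone and your cluster config file is named ClusterConfig.json (both are assumptions):
# on each node, from an elevated PowerShell prompt in the package folder
cd C:\SFStandalone
.\CleanFabric.ps1
# then re-run the configuration test from the machine you deploy from
.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.json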
I have installed the latest MongoDB MMS agent (6.5.0.456) on Ubuntu 16.04 and initialised the replica set, so I am running a single-node replica set with the monitoring agent enabled. The agent works fine, however it does not seem to actually find the replica set member:
[2018/05/26 18:30:30.222] [agent.info] [components/agent.go:Iterate:170] Received new configuration: Primary agent, Assigned 0 out of 0 plus 0 chunk monitor(s)
[2018/05/26 18:30:30.222] [agent.info] [components/agent.go:Iterate:182] Nothing to do. Either the server detected the possibility of another monitoring agent running, or no Hosts are configured on the Group.
[2018/05/26 18:30:30.222] [agent.info] [components/agent.go:Run:199] Done. Sleeping for 55s...
[2018/05/26 18:30:30.222] [discovery.monitor.info] [components/discovery.go:discover:746] Performing discovery with 0 hosts
[2018/05/26 18:30:30.222] [discovery.monitor.info] [components/discovery.go:discover:803] Received discovery responses from 0/0 requests after 891ns
I can see two processes for the monitoring agent:
/bin/sh -c /usr/bin/mongodb-mms-monitoring-agent -conf /etc/mongodb-mms/monitoring-agent.config >> /var/log/mongodb-mms/monitoring-agent.log 2>&1
/usr/bin/mongodb-mms-monitoring-agent -conf /etc/mongodb-mms/monitoring-agent.config
However if I terminate one, it also tears down the other, so I do not think that is the problem.
So, the question is: what is the Group that the agent is referring to? Where is that configured? And how do I find out which Group the agent refers to and check whether that group is configured correctly?
The rs.config() output looks fine, with one replica set member whose host field looks just fine; I can use that value to connect to the instance using the mongo command. No auth is configured.
EDIT
It looks like Cloud Manager now needs to be configured with the seed host; it then starts to discover all the other nodes in the replica set. This seems to be different from pre-Cloud-Manager days, where the agent was able to track the replica set on its own, if I remember correctly. There is probably still an easier way to get this done, so I am leaving this question open for now...
So, question is what is the Group that the agent is referring to. Where is that configured? Or how do I find out which Group the agent refers to and how do I check if the group is configured correctly.
Configuration values for the Cloud Manager agent (such as mmsGroupId and mmsApiKey) are set in the config file, which is /etc/mongodb-mms/monitoring-agent.config by default. The agent needs this information in order to communicate with the Cloud Manager servers.
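A minimal sketch of what that file typically contains (the group ID, API key and base URL below are placeholders, not real values):
# /etc/mongodb-mms/monitoring-agent.config
mmsGroupId=<your Cloud Manager project/group id>
mmsApiKey=<agent API key generated in the Cloud Manager UI>
mmsBaseUrl=https://cloud.mongodb.com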
For more details, see Install or Update the Monitoring Agent and Monitoring Agent Configuration in the Cloud Manager documentation.
It kind of looks that the cloud manager now needs to be configured with the seed host. Then it starts to discover all the other nodes in the replicaset.
Unless a MongoDB process is already managed by Cloud Manager automation, I believe it has always been the case that you need to add an existing MongoDB process to monitoring to start the initial topology discovery. Once a deployment is monitored, any changes in deployment membership should be discovered automatically by the Cloud Manager agent.
Production deployments should have authentication and access control enabled, so in addition to adding a seed hostname and port via the Cloud Manager UI you usually need to provide appropriate credentials.
I have set up an ESB cluster using JDBC connections to MS SQL databases for the local and remotely mounted config and governance registries: 1x management node and 2x worker nodes.
Our .car file contains some WS-Security policy artifacts which go to the config registry. When I deploy it to the management node it deploys OK. I have SVN deployment synchronization set up for the cluster, and when it picks up the .car it starts to deploy on the workers but fails when loading the policy files into the config registry. It is trying to duplicate the policy in the shared config registry and fails; of course that is expected, but how should I deploy these 'shared' artifacts when a .car file is distributed by SVN? I need to be able to control the deployment properly. The only way I can see is via Dev Studio, which is terrible for our change management practice.
Thanks for your help.
I can recommend multiple solutions; you can decide which of them to choose.
Since you have only 2 worker nodes, you can get rid of (disable) deployment synchronization and deploy the car files to all the nodes. I believe you have some automated process, so it won't be a problem to deploy to all nodes. While doing so, modify your project to bundle the policies into a separate car file and the services into another. When deploying, deploy the policies only to the management node and the services to all nodes.
The second option is to add the policies to the local registry, i.e. not the config registry and not the governance registry. Then, when you deploy the car to the management node, it will add the policies to the local registry of the management node. When the car file is dep-synced, the worker nodes will deploy it and add the policies to their own local registries. This avoids the worker nodes trying to add the policies to the same shared location.
Going through the question, I got the impression you have external databases for the local registry too, but that is not necessary. You can use the internal H2 database for the local registry. H2 databases sometimes get corrupted; if that happens, all you have to do is delete the H2 database and restart the server with the -Dsetup option. Having an external DB is fine, but it's overkill.
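A sketch of that recovery, assuming a default WSO2 Carbon layout where the local H2 registry database lives under repository/database (the file name is an assumption):
# from the carbon home directory, with the server stopped
rm repository/database/WSO2CARBON_DB.h2.db
sh bin/wso2server.sh -Dsetup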
I am trying to understand if rolling deployment of an application is possible in WebLogic. The WebLogic version is 12.1.2.0.0.
"By rolling deployment I mean, deploying the new version to a single node or a child cluster, by removing the node or child cluster from targets of existing deployment. This is to make sure that the current version of deployment on existing cluster is still functioning, probably with degraded performance, due to removing a node/a child cluster.
The operation team can verify if the intended change has worked." Once verified then the target for the deployment can be updated to add rest of the child cluster(s).
I am aware of the -redeploy option available in WebLogic, which means no outage, but it deploys to the same target as the original deployment:
java weblogic.Deployer -adminurl http://localhost:8802
    -username weblogic -password weblogic -name VersionedApp
    -targets adminServer -redeploy
    -source C:/tmp/VersionedApp2 -appversion version2
However, I am not sure how it will behave if there is an active DB in the backend.
Any insight on this is highly appreciated.
You should look at the -adminmode option for deploying. See the Oracle docs: http://docs.oracle.com/middleware/1213/wls/DEPGD/wldeployer.htm#DEPGD318
You first need to enable the admin port; an application deployed in admin mode can then be accessed only through the admin port (the context is visible on the admin port, not on the production one).
Once tests are OK, you can promote the application from the "admin" state to the "active" state by using the "-start" parameter of weblogic.Deployer.
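A sketch of what that could look like, reusing the example application from the question (the admin URL, credentials and source path are placeholders, not values specific to your environment):
# deploy the new version in admin mode (reachable only via the admin port)
java weblogic.Deployer -adminurl http://localhost:8802
    -username weblogic -password weblogic -name VersionedApp
    -targets adminServer -deploy -adminmode
    -source C:/tmp/VersionedApp2 -appversion version2
# after verification, promote it to the active state
java weblogic.Deployer -adminurl http://localhost:8802
    -username weblogic -password weblogic -name VersionedApp
    -start -appversion version2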