Temporary storage size when installing Service Fabric

I'd like to create a basic (cheap) environment for my Service Fabric application. However, Service Fabric seems to use the temporary storage drive of the VMs, which is limited in size. The only way I can see to increase the temporary storage drive is to pay for a more performant VM, which I don't really want to do yet.
Is there a way to either increase the temporary storage size, or to tell Service Fabric to use a different drive instead of the temporary storage drive?
I am running out of storage on the temp drive, so I need to look at other options.

The best option for you is to create a custom cluster definition from an ARM template, set up all requirements such as extra disks and disk types, and then deploy a new cluster based on that template.
You can get some templates in this repository and use this guide as a reference.

If you don't configure MaxDiskQuotaInMB, Service Fabric diagnostic logs can use up to 64 GB of temporary storage (D: on Windows or /mnt on Linux).
You can try using VMs with larger temporary storage (64 GB or more), or limit how much storage Service Fabric allocates for logs by:
running this type of PowerShell command, described here
Set-AzureRmServiceFabricSetting -ResourceGroupName clusterResourceGroup -Name clusterName -Section "Diagnostics" -Parameter "MaxDiskQuotaInMB" -Value "25600"
defining it as part of the ARM template, inside the "Diagnostics" section of fabricSettings, like in this example (a fuller sketch of where this sits follows the list):
"parameters": [
  {
    "name": "MaxDiskQuotaInMB",
    "value": "5120"
  }
]
changing it via resources.azure.com by following these instructions from Microsoft Docs
{
  "name": "Diagnostics",
  "parameters": [
    {
      "name": "MaxDiskQuotaInMB",
      "value": "65536"
    }
  ]
}
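For orientation, here is a minimal sketch of where the "Diagnostics" section sits inside the cluster resource of an ARM template. Placement follows the Microsoft.ServiceFabric/clusters schema; the apiVersion and quota value are illustrative, and other required cluster properties are omitted:

{
  "apiVersion": "2018-02-01",
  "type": "Microsoft.ServiceFabric/clusters",
  "name": "[parameters('clusterName')]",
  "properties": {
    "fabricSettings": [
      {
        "name": "Diagnostics",
        "parameters": [
          { "name": "MaxDiskQuotaInMB", "value": "25600" }
        ]
      }
    ]
  }
}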

Related

Service Fabric - How to enable BackupRestoreService on my local dev cluster

I would like to get the backup and restore functionality working inside Service Fabric Explorer for my local dev cluster. Any action I take related to backup/restore in the cluster manager UI currently throws a "service not found" exception, I believe because the backup and restore service is not running on the cluster.
I can't find any documentation pertaining to configuring the local dev cluster. The standalone cluster steps don't seem to apply. I have attempted to use sfctl to get the cluster configuration with sfctl sa-cluster config, but the operation times out against my local dev cluster. I've tried the analogous Get-ServiceFabricClusterConfiguration from the PowerShell module and get a timeout there as well.
For the time being I have built code-based backup and restore, but I really like the service and would like to see what I can do with it locally.
I tested this with cluster version 7.0.470.9590.
Verify that the BackupRestoreService is available in your installation.
The folder C:\Program Files\Microsoft Service Fabric\bin\Fabric\Fabric.Code\__FabricSystem_App{random-number}\BRS.Code.Current should exist and contain the correct binaries.
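As a quick check, a PowerShell sketch (the {random-number} suffix varies per installation, so a wildcard is used):

# Returns True if the BackupRestoreService binaries are present
Test-Path "C:\Program Files\Microsoft Service Fabric\bin\Fabric\Fabric.Code\__FabricSystem_App*\BRS.Code.Current"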
Change your local cluster config.
Your cluster config is located under C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup.
So if your dev cluster is a single-node unsecured one, you can change C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\NonSecure\OneNode\ClusterManifestTemplate.json.
In the "addOnFeatures" tag you can add "BackupRestoreService" example:
"addOnFeatures": [
"DnsService",
"EventStoreService",
"BackupRestoreService"
]
Under "fabricSettings" you then add the configuration for the backup and restore service:
{
  "name": "BackupRestoreService",
  "parameters": [
    {
      "name": "SecretEncryptionCertThumbprint",
      "value": "......YOURTHUMBPRINT....."
    }
  ]
}
After these steps you can reset your dev cluster from the system tray (right-click the Service Fabric icon => Reset Local Cluster).
When your cluster has restarted, you can verify that the service is running by opening the cluster dashboard and expanding the system services.
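You can also check from PowerShell; a minimal sketch, assuming the default unsecured dev cluster endpoint:

# Connect to the local dev cluster (default unsecured endpoint)
Connect-ServiceFabricCluster -ConnectionEndpoint "localhost:19000"
# fabric:/System/BackupRestoreService should appear in the output
Get-ServiceFabricService -ApplicationName fabric:/System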
You can use this approach to configure other system services as well.
Note: updating your SDK may result in losing the changes made to your cluster config.

Is there an ARM template which will allow us to setup a MongoDB replica set instance using Azure Managed Disk instead of regular data disks

Is there an ARM template which will allow us to setup a MongoDB replica set instance using Azure Managed Disk instead of regular data disks?
Note: The following reference provides a way to setup MongoDB replica set using regular data disks
https://github.com/Azure/azure-quickstart-templates/blob/master/mongodb-replica-set-centos/nested/primary-resources.json#L190-L230
You can edit the ARM template to use Managed Disks as shown below. Also, use apiVersion 2017-03-30 for the virtual machine resource:
"apiVersion": "2017-03-30",
"type": "Microsoft.Compute/virtualMachines",
{
"name": "datadisk1",
"diskSizeGB": "[parameters('sizeOfDataDiskInGB')]",
"lun": 0,
"createOption": "Empty",
"managedDisk": {
"storageAccountType": "Standard_LRS"
}
"caching": "ReadWrite"
}
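To show where that data-disk object goes, here is a sketch of the VM resource's storageProfile. Property names follow the standard Microsoft.Compute/virtualMachines schema; the VM name, osDisk, and other required properties are omitted for brevity:

{
  "apiVersion": "2017-03-30",
  "type": "Microsoft.Compute/virtualMachines",
  "properties": {
    "storageProfile": {
      "dataDisks": [
        {
          "name": "datadisk1",
          "diskSizeGB": "[parameters('sizeOfDataDiskInGB')]",
          "lun": 0,
          "createOption": "Empty",
          "managedDisk": { "storageAccountType": "Standard_LRS" },
          "caching": "ReadWrite"
        }
      ]
    }
  }
}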

Azure Service Fabric disk full

We have a Service Fabric cluster running on Azure. When we created the cluster, all data was stored on the D: drive, which is a temporary disk with little space. It looks like Service Fabric is also using the D: drive to write its logs, and those logs take half of the space.
At some point we ran out of space on some nodes in the cluster. We freed up space, but the problem will probably come back very soon.
Does anyone know how we could safely reconfigure Service Fabric to store data elsewhere? Could we do that on an existing cluster, or do we have to install a new one? Could we mount drives from Azure Storage and use them for SF logs or for storing our data?
To fix this issue for that instance, you can reset your cluster. This will clear cached data and free up space for service execution.
Edit 1:
I found an MSDN link for changing Service Fabric settings, which might help:
Customize Service Fabric cluster settings
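For example, the diagnostic log quota discussed in the first question above can be capped like this (the resource group and cluster names are placeholders):

# Cap Service Fabric diagnostic logs at 25 GB on the temporary drive
Set-AzureRmServiceFabricSetting -ResourceGroupName "myResourceGroup" -Name "myCluster" -Section "Diagnostics" -Parameter "MaxDiskQuotaInMB" -Value "25600"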
(Not really an answer, but I cannot comment since I am under 50 points :))

Utilize Managed Disk for Service Fabric temporary storage

Is it possible to configure and deploy a Service Fabric cluster that uses a Managed Disk as the temporary storage location for things like the replicator log and app type/versions?
For example, I can't use an A1_v2 VM instance size because the D: (Temporary Storage) drive is too small. If I could leverage a Managed Disk and configure SF to use it instead of the local SSD then this instance size would work for my dev/test scenarios.
Any idea if and how I can make this work?
Disclaimer: You can do this, but you shouldn't. Details below.
Consider changing the size of the shared log file instead if you really want to use such small VMs.
"fabricSettings": [{
"name": "KtlLogger",
"parameters": [{
"name": "SharedLogSizeInMB",
"value": "4096"
}]
}]
More info on configuration here.
Now to actually answer:
Here are the settings. You'd probably change Setup/FabricDataRoot to move the Service Fabric local installation and all of the local application working directories, and/or TransactionalReplicator/SharedLogPath to move the reliable collections shared log.
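As a sketch, those two settings would look something like this in the cluster's fabricSettings. The paths are illustrative; F: stands in for whichever attached drive you would actually use:

"fabricSettings": [
  {
    "name": "Setup",
    "parameters": [
      { "name": "FabricDataRoot", "value": "F:\\SF" }
    ]
  },
  {
    "name": "TransactionalReplicator",
    "parameters": [
      { "name": "SharedLogPath", "value": "F:\\SF\\ReplicatorShared.log" }
    ]
  }
]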
Some things to consider:
Service Fabric Services (and Service Fabric itself) are built to work on local disks and generally should not be hosted on XStore backed disks (premium or not):
Reliable Collections are definitely built to operate against local drives. There's no internal testing that I'm aware of that runs them in this configuration.
Waste of IO: Assuming LRS replicates changes 3 times and you set TargetReplicaSetSize to 3, this configuration will generate 9 copies of the state. Do you need 9 copies of your state?
Impact on Latency and Performance: What should be a local disk IO will turn into network + disk IO, which has a chance to hurt your performance.
Impact on Availability: At a minimum you're adding another dependency, which usually reduces overall availability. If storage ever has an issue, you're now more coupled to that other service. Today you're already fairly coupled, since the VMSS drives are backed by blobs and a storage outage would make VM provisioning fail, but that's different from the read/write/activation path of your services.

Node app staging fails at Installing App Management stage

I've been trying to push a new build of an app to Bluemix, but staging keeps failing at "Installing App Management" because it can't create regular files and directories due to the disk quota being exceeded.
I've already tried pushing with "-k 2G", but it still fails.
Is there any way to find out how or why the disk quota keeps being exceeded? There's no way I'm anywhere near using 2 GB of disk space.
Switching to npm v3 is a potential solution here, as it reduces the number of duplicated dependencies.
You can do that in your package.json, for example:
"engines": { "npm": "3.x" }
By design, CloudFoundry applications on IBM Bluemix are limited to a disk quota of 2 GB (the default is 1 GB): if a cloud application needs more than 1 GB (and even 1 GB is a lot for a cloud application...), it should usually be redesigned according to cloud patterns, breaking it down into microservices and using external storage services if it simply needs static storage (for example, the Object Storage service on Bluemix).
You also have to consider that a cloud application's filesystem is unreliable; the application itself could be redeployed automatically to a different virtual environment without any notice to the end users.
Even logs should be collected by external services (by routing the log stream) if you need to keep them safe; otherwise they will be reset as soon as the application is restarted on a different cluster node.