Is there a way to get the server ID of a registered server in an Azure Storage Sync service through PowerShell or an ARM template?

{
  "name": "string",
  "type": "Microsoft.StorageSync/storageSyncServices/syncGroups/serverEndpoints",
  "apiVersion": "2019-06-01",
  "properties": {
    "serverLocalPath": "string",
    "cloudTiering": "string",
    "volumeFreeSpacePercent": "integer",
    "tierFilesOlderThanDays": "integer",
    "friendlyName": "string",
    "serverResourceId": "string",
    "offlineDataTransfer": "string",
    "offlineDataTransferShareName": "string"
  }
}
What needs to be passed as the server resource ID in the above template format for Storage Sync, and how do we get the server ID?

Yes. Use the Get-AzStorageSyncServerEndpoint cmdlet from the Az.StorageSync module to list all server endpoints within a given sync group.
Get-AzStorageSyncServerEndpoint -ResourceGroupName "myResourceGroup" -StorageSyncServiceName "mysyncservice" -SyncGroupName "testsyncgroup"
Sample Response:
SyncGroupName : testsyncgroup
StorageSyncServiceName : mysyncservice
ServerLocalPath : C:\Users\onpremwindows\Desktop\myfolder
ServerResourceId : /subscriptions/***/resourceGroups/myResourceGroup/providers/microsoft.storagesync/storageSyncServices/mysyncservice/registeredServers/8ec8b78c-0739-195r-856e-2e7r5ta2c13w
ServerEndpointName : 94e9fd59-23d6-9854-99c1-3e9a8ab73760
ProvisioningState : Succeeded
LastWorkflowId : storageSyncServices/mysyncservice/workflows/bd3o8c77-7183-4a46-8830-9b156dd2f011
LastOperationName : ICreateServerEndpointWorkflow
FriendlyName : onpremwindows
SyncStatus : Microsoft.Azure.Commands.StorageSync.Models.PSServerEndpointHealth
CloudTiering : Off
VolumeFreeSpacePercent : 20
TierFilesOlderThanDays :
OfflineDataTransfer : Off
OfflineDataTransferShareName :
OfflineDataTransferStorageAccountResourceId :
OfflineDataTransferStorageAccountTenantId :
InitialDownloadPolicy : NamespaceThenModifiedFiles
LocalCacheMode : UpdateLocallyCachedFiles
ResourceId : /subscriptions/***/resourceGroups/myResourceGroup/providers/microsoft.storagesync/storageSyncServices/mysyncservice/syncGroups/testsyncgroup/serverEndpoints/94e9fd59-23d6-9854-99c1-3e9a8ab73760
ResourceGroupName : myResourceGroup
Type : microsoft.storagesync/storageSyncServices/syncGroups/serverEndpoints
8ec8b78c-0739-195r-856e-2e7r5ta2c13w is the ServerId; the full ServerResourceId shown above (the path ending in /registeredServers/&lt;ServerId&gt;) is what the template's serverResourceId property expects.
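Putting it together, a sketch of the server endpoint template with the sample values from the output above (the endpoint name "myserverendpoint" is a placeholder, and the *** subscription segment is left elided as in the output):

```json
{
  "name": "myserverendpoint",
  "type": "Microsoft.StorageSync/storageSyncServices/syncGroups/serverEndpoints",
  "apiVersion": "2019-06-01",
  "properties": {
    "serverLocalPath": "C:\\Users\\onpremwindows\\Desktop\\myfolder",
    "serverResourceId": "/subscriptions/***/resourceGroups/myResourceGroup/providers/microsoft.storagesync/storageSyncServices/mysyncservice/registeredServers/8ec8b78c-0739-195r-856e-2e7r5ta2c13w"
  }
}
```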
You can also obtain the ServerId with the Get-AzStorageSyncServer cmdlet (the registered-server output below comes from it; Get-AzStorageSyncService returns the sync service itself):
Get-AzStorageSyncServer -ResourceGroupName "myResourceGroup" -StorageSyncServiceName "mysyncservice"
Sample response:
StorageSyncServiceName : mysyncservice
ServerName : 8ec8b78c-0739-195r-856e-2e7r5ta2c13w
ServerCertificate :
AgentVersion : 8.0.0.0
ServerOSVersion : 10.0.14393.0
ServerManagementErrorCode : 0
LastHeartBeat : 12/10/2019 12:55:01
ProvisioningState : Succeeded
ServerRole : Standalone
ClusterId : 00000000-0000-0000-0000-000000000000
ClusterName :
ServerId : 8ec8b78c-0739-195r-856e-2e7r5ta2c13w
StorageSyncServiceUid : 76104503-6cd8-4812-7884-c4288e87a8cd
LastWorkflowId : storageSyncServices/finalsync/workflows/0e3bc737-7619-4b42-81a9-aec36e3fbf53
LastOperationName : ICreateRegisteredServerWorkflow
DiscoveryEndpointUri : https://tm-server.xxx.com:443
ResourceLocation : eastasia
ServiceLocation : eastasia
FriendlyName : onpremwindows
ManagementEndpointUri : https://tm-server.xxx.com:443
MonitoringEndpointUri : https://tm-server.xxx.com:443
ResourceId : /subscriptions/***/resourceGroups/myResourceGroup/providers/microsoft.storagesync/storageSyncServices/mysyncservice/registeredServers/8ec8b78c-0739-195r-856e-2e7r5ta2c13w
ResourceGroupName : myResourceGroup
Type : microsoft.storagesync/storageSyncServices/registeredServers

Service unavailable error while using MongoDB, ElasticSearch and transporter

I am trying to use the transporter plugin to create a pipeline that syncs a MongoDB database to Elasticsearch. I am using an Ubuntu Linux virtual machine for this.
I have created a MongoDB database my_application with the following data in its users collection:
db.users.find().pretty();
{
  "_id" : ObjectId("6008153cf979ac0f18681765"),
  "firstName" : "Sammy",
  "lastName" : "Shark"
}
{
  "_id" : ObjectId("60081544f979ac0f18681766"),
  "firstName" : "Gilly",
  "lastName" : "Glowfish"
}
I configured Elasticsearch and the transporter pipeline, and exported MONGODB_URI and ELASTICSEARCH_URI.
I then ran my transporter pipeline.js and obtained this:
INFO[0005] metrics source records: 2 path=source ts=1611154492641006368
INFO[0005] metrics source/sink records: 2 path="source/sink" ts=1611154492641013556
I then try to view my ElasticSearch but get this error:
curl $ELASTICSEARCH_URI/_search?pretty=true
{
  "error" : {
    "root_cause" : [
      {
        "type" : "cluster_block_exception",
        "reason" : "blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"
      }
    ],
    "type" : "cluster_block_exception",
    "reason" : "blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];"
  },
  "status" : 503
}
Here is my elasticsearch.yml:
# Use a descriptive name for the node:
node.name: node-1
path.data: /var/lib/elasticsearch
# Path to log files:
path.logs: /var/log/elasticsearch
# Set the bind address to a specific IP (IPv4 or IPv6):
network.host: 0.0.0.0
# Set a custom port for HTTP:
http.port: 9200
# Bootstrap the cluster using an initial set of master-eligible nodes:
cluster.initial_master_nodes: ["node-1", "node-2"]
Here is my elasticsearch node:
{
  "name" : "node-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "7.7.1",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "ad56dce891c901a492bb1ee393f12dfff473a423",
    "build_date" : "2020-05-28T16:30:01.040088Z",
    "build_snapshot" : false,
    "lucene_version" : "8.5.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
I have tried deleting indices and restarting the server, but the error repeats. I would like to know the solution to this. I am using Elasticsearch 7.10.
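One observation on the output above: `"cluster_uuid" : "_na_"` means the cluster state never formed at all. A common cause of this (an assumption here, since the post does not say whether a node-2 exists) is that `cluster.initial_master_nodes` names a second node that never joined, so the single node waits indefinitely for a quorum. For a one-node setup, a minimal sketch of the relevant elasticsearch.yml lines would be:

```yaml
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
# Replaces cluster.initial_master_nodes: the node elects itself immediately.
discovery.type: single-node
```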

Unable to inspect the dedicated host provisioning process

When I used slcli (the softlayer-python command) to create a dedicated host, the command returned the order ID, and I checked that the order's status was 'APPROVED'. But I cannot find the host in the result of 'SoftLayer_Account/getDedicatedHosts'.
So I checked the billing item, and it is correctly 'dedicated_virtual_hosts'. Does the SoftLayer API support another approach to check whether the dedicated host has provisioned? Or did I do something wrong?
Yes, the dedicated host should be listed when calling the SoftLayer_Account::getDedicatedHosts method, or when using the "slcli dedicatedhost list" command. I suggest checking your permissions and device access; verify that "View Virtual Dedicated Host Details" is checked.
Below are some slcli commands I executed to order and list dedicated hosts.
To order a dedicated host:
slcli dedicatedhost create -H slahostname -D example.com -d mex01 -f 56_CORES_X_242_RAM_X_1_4_TB
To list dedicated hosts:
slcli dedicatedhost list
:.......:...................:..........:..............:................:............:............:
: id : name : cpuCount : diskCapacity : memoryCapacity : datacenter : guestCount :
:.......:...................:..........:..............:................:............:............:
: 11111 : slahostname : 56 : 1200 : 242 : mex01 : - :
:.......:...................:..........:..............:................:............:............:
Below is an example of how to see the details:
slcli dedicatedhost detail 11111
:.................:...........................:
: name : value :
:.................:...........................:
: id : 11111 :
: name : slahostname :
: cpu count : 56 :
: memory capacity : 242 :
: disk capacity : 1200 :
: create date : 2018-02-01T09:53:46-04:00 :
: modify date : :
: router id : 333333 :
: router hostname : bcr01a.mex01 :
: owner : owner001 :
: guest count : 0 :
: datacenter : mex01 :
:.................:...........................:
Using REST, the response when calling SoftLayer_Account::getDedicatedHosts should be something like below:
GET:
https://[userName]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Account/getDedicatedHosts
RESPONSE:
{
  "cpuCount": 56,
  "createDate": "2018-02-01T09:53:46-04:00",
  "diskCapacity": 1200,
  "id": 11111,
  "memoryCapacity": 242,
  "modifyDate": null,
  "name": "slahostname"
}
Also, you can use the SoftLayer_Virtual_DedicatedHost::getObject method:
GET:
https://[userName]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Virtual_DedicatedHost/11111/getObject
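As a sketch of the same REST call from Python using only the standard library (an illustration, not the official softlayer-python client; the [userName]/[apiKey] placeholders are as above), the credentials go in an HTTP Basic auth header rather than inline in the URL:

```python
import base64
import json
import urllib.request

API_BASE = "https://api.softlayer.com/rest/v3"

def build_request(username: str, api_key: str, path: str) -> urllib.request.Request:
    """Build an authenticated GET request for a SoftLayer REST endpoint."""
    credentials = base64.b64encode(f"{username}:{api_key}".encode()).decode()
    return urllib.request.Request(
        f"{API_BASE}/{path}",
        headers={"Authorization": f"Basic {credentials}"},
    )

def get_dedicated_hosts(username: str, api_key: str) -> list:
    """Call SoftLayer_Account::getDedicatedHosts and return the parsed JSON."""
    req = build_request(username, api_key, "SoftLayer_Account/getDedicatedHosts")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (requires real credentials):
#   for host in get_dedicated_hosts("[userName]", "[apiKey]"):
#       print(host["id"], host["name"])
```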

One object per distinct field / value

I have this schema that represents a "File object":
{
  "_id" : ObjectId("59f7184aedd1712cdd6a148a"),
  "file" : {
    "part" : ".part000010",
    "creation_time" : "2017-10-26T14:01:42.309597",
    "archive_time" : "2017-10-30T12:17:14.770871",
    "inode" : 18644328,
    "size" : 733326,
    "tags" : null,
    "group" : "ssc_hpci",
    "uuid" : "408bf97c-bd6c-11e7-a854-de7a7d6f0f6c",
    "filename" : "build/hpca/base_library.zip",
    "owner" : "anl001",
    "gid" : 66026,
    "uid" : 66799,
    "checksum" : null,
    "symlink" : false
  },
  "archivename" : "build",
  ...
}
Many files will have the same archivename.
I need one object per distinct archivename: Obj1 would have the archive named "build", Obj2 the archive named "build1", etc.
Projection is not suitable for this, and neither is aggregate.
I'm using pymongo.
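For what it's worth, aggregation can do exactly this: a `$group` stage keyed on `$archivename` with `$first: "$$ROOT"` yields one whole document per distinct archivename. A sketch in pymongo terms (the collection name `files` is an assumption), with the same "first document per distinct key" logic shown in plain Python:

```python
# Aggregation pipeline: one document per distinct archivename (pymongo sketch).
# Assumed setup, not from the original post:
#   from pymongo import MongoClient
#   files = MongoClient().mydb.files
pipeline = [
    {"$sort": {"_id": 1}},                      # make "first" deterministic
    {"$group": {"_id": "$archivename",
                "doc": {"$first": "$$ROOT"}}},  # keep one whole doc per key
]
# results = [g["doc"] for g in files.aggregate(pipeline)]

# The equivalent "first per distinct key" logic in plain Python:
def first_per_archivename(docs):
    seen = {}
    for d in docs:
        seen.setdefault(d["archivename"], d)  # keep only the first doc per key
    return list(seen.values())

sample = [
    {"archivename": "build", "file": {"part": ".part000010"}},
    {"archivename": "build", "file": {"part": ".part000011"}},
    {"archivename": "build1", "file": {"part": ".part000001"}},
]
print([d["archivename"] for d in first_per_archivename(sample)])
# ['build', 'build1']
```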

How can I do a POST request to a REST service in SoapUI 5.1.2 with form-data?

Could you help me resolve a problem I have?
Using Postman (a REST client Chrome extension), I do a POST to a REST service and get the correct answer from the service.
The answer is "201 Created", and a new row is added to the DB.
URL = http://suring-t.suremptec.com.ar/gis/13/rest/1.0/organizations
form-data = metadata
{
  "meta" : {
    "version" : "1.0",
    "description" : "Organization"
  },
  "id" : null,
  "name" : "test org",
  "startDate" : "2014-06-05 16:20:31.334",
  "endDate" : null,
  "administrable" : true,
  "published" : true,
  "href" : "\/2\/rest\/1.0\/organizations\/1"
}
I can't find a way to make SoapUI (5.1.2) work with the same request.
URL = http://suring-t.suremptec.com.ar/gis/13/rest/1.0/organizations
form-data =
metadata
{
  "meta" : {
    "version" : "1.0",
    "description" : "Organization"
  },
  "id" : null,
  "name" : "test org",
  "startDate" : "2014-06-05 16:20:31.334",
  "endDate" : null,
  "administrable" : true,
  "published" : true,
  "href" : "/2/rest/1.0/organizations/1"
}
The response is "200 - Ok", but not the expected "201 Created".
Any ideas on how I should configure the SoapUI request?
This should get you started: Getting started with REST

Using 2 different result sets in MongoDB

I'm using Groovy with MongoDB. I have a result set but need a value from a different grouping of documents. How do I pull that value into the result set I need?
MAIN: Network data
"resource_metadata" : {
"name" : "tapd2e75adf-71",
"parameters" : { },
"fref" : null,
"instance_id" : "9f170531-79d0-48ee-b0f7-9bd2788b1cc5"}
I need the display_name for the network data result set, which is contained in the compute data.
CPU data
"resource_id" : "9f170531-79d0-48ee-b0f7-9bd2788b1cc5",
"resource_metadata" : {
"ramdisk_id" : "",
"display_name" : "testinstance0001"}
You can see that the resource_id and the instance_id are the same value. I know there is no relationship I can join on, but I'm asking in case anyone has come across this. I'm using the table model to retrieve data for reporting. A Hashtable has been suggested to me, but I'm not seeing how that would work. Somehow, in the hasNext loop, I need to include the display_name value in the networking data, so that a valid name from the compute data shows instead of only the GUID.
def docs = meter.find(query).sort(sort).limit(50)
while (docs.hasNext()) {
    def doc = docs.next()
    model.addRow([doc.get("counter_name"), doc.get("counter_volume"), doc.get("timestamp"),
                  doc.get("resource_metadata").getString("mac"),
                  doc.get("resource_metadata").getString("instance_id"),
                  doc.get("counter_unit")] as Object[])
}
Full document:
1st set, where I need the network data measure that has no name, only an ID (resource_metadata.instance_id):
{
  "_id" : ObjectId("528812f8be09a32281e137d0"),
  "counter_name" : "network.outgoing.packets",
  "user_id" : "4d4e43ec79c5497491b23b13644c2a3b",
  "timestamp" : ISODate("2013-11-17T00:51:00Z"),
  "resource_metadata" : {
    "name" : "tap6baab24e-8f",
    "parameters" : { },
    "fref" : null,
    "instance_id" : "a8727a1d-4661-4565-9c0a-511279024a97",
    "instance_type" : "50",
    "mac" : "fa:16:3e:a3:bf:fc"
  },
  "source" : "openstack",
  "counter_unit" : "packet",
  "counter_volume" : 4611911,
  "project_id" : "97dc4ca962b040608e7e707dd03f2574",
  "message_id" : "54039238-4f22-11e3-8e68-e4115b99a59d",
  "counter_type" : "cumulative"
}
2nd set, where I want to grab the name as I get the values (resource_id):
"_id" : ObjectId("5287bc3ebe09a32281dd2594"),
"counter_name" : "cpu",
"user_id" : "4d4e43ec79c5497491b23b13644c2a3b",
"message_signature" :
"timestamp" : ISODate("2013-11-16T18:40:58Z"),
"resource_id" : "a8727a1d-4661-4565-9c0a-511279024a97",
"resource_metadata" : {
"ramdisk_id" : "",
"display_name" : "vmsapng01",
"name" : "instance-000014d4",
"disk_gb" : "",
"availability_zone" : "",
"kernel_id" : "",
"ephemeral_gb" : "",
"host" : "3746d148a76f4e1a8203d7e2378ef48ccad8a714a47e7481ab37bcb6",
"memory_mb" : "",
"instance_type" : "50",
"vcpus" : "",
"root_gb" : "",
"image_ref" : "869be2c0-9480-4239-97ad-df383c6d09bf",
"architecture" : "",
"os_type" : "",
"reservation_id" : ""
},
"source" : "openstack",
"counter_unit" : "ns",
"counter_volume" : NumberLong("724574640000000"),
"project_id" : "97dc4ca962b040608e7e707dd03f2574",
"message_id" : "a240fa5a-4eee-11e3-8e68-e4115b99a59d",
"counter_type" : "cumulative"
}
This is another collection that contains the same value; I just thought it would be easier to grab it from the same collection:
"_id" : "a8727a1d-4661-4565-9c0a-511279024a97",
"metadata" : {
"ramdisk_id" : "",
"display_name" : "vmsapng01",
"name" : "instance-000014d4",
"disk_gb" : "",
"availability_zone" : "",
"kernel_id" : "",
"ephemeral_gb" : "",
"host" : "3746d148a76f4e1a8203d7e2378ef48ccad8a714a47e7481ab37bcb6",
"memory_mb" : "",
"instance_type" : "50",
"vcpus" : "",
"root_gb" : "",
"image_ref" : "869be2c0-9480-4239-97ad-df383c6d09bf",
"architecture" : "",
"os_type" : "",
"reservation_id" : "",
}
Mike
It looks like these data are in 2 different collections, is that correct?
Would you be able to query the CPU data for each "instance_id" ("resource_id")?
Or, if this would cause too many queries to the database (it looks like you limit to 50...), you could use $in with the list of all "instance_id"s:
http://docs.mongodb.org/manual/reference/operator/query/in/
Either way, you will need to query each collection separately.
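The $in-plus-lookup-table idea can be sketched like this (shown in Python for brevity rather than Groovy; the collection name `cpu` and the sample documents are taken from the question, the in-memory join is the point):

```python
# Enrich network docs with display_name from the CPU docs, matching
# resource_metadata.instance_id against resource_id.

network_docs = [
    {"counter_name": "network.outgoing.packets",
     "resource_metadata": {"instance_id": "a8727a1d-4661-4565-9c0a-511279024a97"}},
]
cpu_docs = [
    {"resource_id": "a8727a1d-4661-4565-9c0a-511279024a97",
     "resource_metadata": {"display_name": "vmsapng01"}},
]
# With a live database, cpu_docs would come from one $in query, e.g.:
#   ids = [d["resource_metadata"]["instance_id"] for d in network_docs]
#   cpu_docs = cpu.find({"resource_id": {"$in": ids}})

# Build a hashtable-style lookup, then join in memory while iterating.
name_by_id = {d["resource_id"]: d["resource_metadata"]["display_name"]
              for d in cpu_docs}
for doc in network_docs:
    doc["display_name"] = name_by_id.get(
        doc["resource_metadata"]["instance_id"], "(unknown)")

print(network_docs[0]["display_name"])  # vmsapng01
```

The same pattern maps directly onto the Groovy loop in the question: build the map first, then add `name_by_id` lookups to each `model.addRow` call.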