I am using the FabricClient QueryManager to iterate over all partitions, find all actors, and return them as a list. This works fine on my local cluster but throws an InvalidCastException when running in our Azure sandbox. Specifically: "Unable to cast object of type 'System.Fabric.SingletonPartitionInformation' to type 'System.Fabric.Int64RangePartitionInformation'."
public async Task<List<Store>> GetStores()
{
    var cancelToken = CancellationToken.None;
    var fabricClient = new FabricClient();
    var actorServiceUri = new ServiceUriBuilder("StoreActorService").ToUri();
    var partitions = await fabricClient.QueryManager.GetPartitionListAsync(actorServiceUri);

    var actorIds = new List<ActorId>();
    foreach (var partition in partitions)
    {
        // the following cast works locally but throws InvalidCastException in our Azure sandbox
        var partitionInfo = (Int64RangePartitionInformation)partition.PartitionInformation;
        var actorServiceProxy = ActorServiceProxy.Create(actorServiceUri, partitionInfo.LowKey);

        ContinuationToken continueToken = null;
        do
        {
            var page = await actorServiceProxy.GetActorsAsync(continueToken, cancelToken);
            actorIds.AddRange(page.Items.Select(actor => actor.ActorId));
            continueToken = page.ContinuationToken;
        } while (continueToken != null);
    }

    var stores = new List<Store>();
    foreach (var actorId in actorIds)
    {
        var proxy = ActorProxy.Create<IStoreActor>(actorId, actorServiceUri);
        var store = await proxy.RetrieveAsync(actorId.ToString());
        stores.Add(store);
    }
    return stores;
}
As shown in Service Fabric Explorer, the service is partitioned for Int64Range as required.
Any thoughts on why Azure thinks it's a SingletonPartition?
Thanks.
Chuck
It turns out that this actor service somehow did get created with a singleton partition in our sandbox environment, but not in either the local or production environments. I didn't think this was possible, but I guess it is.
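For anyone who hits the same cast: a guard on the partition kind fails fast with a clearer message than the InvalidCastException. A minimal sketch of the loop with that guard added (the guard and its message are my addition, not from the original post):

foreach (var partition in partitions)
{
    // Fail fast if the service was created with a different partitioning scheme than expected.
    if (partition.PartitionInformation.Kind != ServicePartitionKind.Int64Range)
    {
        throw new InvalidOperationException(
            $"Expected Int64Range partitioning but found {partition.PartitionInformation.Kind}; " +
            "check how the service was created in this environment.");
    }

    var partitionInfo = (Int64RangePartitionInformation)partition.PartitionInformation;
    // ... enumerate actors as before ...
}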
I'm trying to run the following code:
var awsCredentials = new BasicAWSCredentials("SomeSecret","SomeKey");
var client = new AmazonCostExplorerClient(awsCredentials, Amazon.RegionEndpoint.USEast1);
var requestObject = new GetCostAndUsageWithResourcesRequest();
requestObject.Granularity = "DAILY";
var range = new DateInterval();
range.Start = DateTime.Now.AddDays(-1).ToString("yyyy-MM-dd");
range.End = DateTime.Now.ToString("yyyy-MM-dd");
requestObject.TimePeriod = range;
GetCostAndUsageWithResourcesResponse costs = await client.GetCostAndUsageWithResourcesAsync(requestObject);
The routine seems to run, but then I get the following error:
System.Text.Json.JsonException: A possible object cycle was detected. This can either be due to a cycle or if the object depth is larger than the maximum allowed depth of 32. Consider using ReferenceHandler.Preserve on JsonSerializerOptions to support cycles.
I'm not sure what I'm missing here to get the proper result.
Any thoughts are appreciated!
EDIT:
I got this working, in case anyone needs it.
The issue was that I wasn't being smart with my code setup: the code above lives in an async method, and I was calling that method incorrectly from the consumer.
Example:
Incorrect usage in consumer was:
public async Task<IActionResult> GetAwsCost()
{
    var coster = new CostExplorerDash(_connectionInfo);
    var res = coster.GetMonthlyCost();
    return Ok(res);
}
Proper usage is:
public async Task<IActionResult> GetAwsCost()
{
    var coster = new CostExplorerDash(_connectionInfo);
    var res = coster.GetMonthlyCost(); // async method call
    return Ok(res.Result);
}
The issue (I think) was just that I was returning an unfinished async Task, which caused a mess when .NET tried to do serialization of the objects involved.
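Worth noting: since the consumer is already an async method, awaiting the task is the more idiomatic fix and avoids blocking on .Result. A minimal sketch, assuming GetMonthlyCost returns a Task<T>:

public async Task<IActionResult> GetAwsCost()
{
    var coster = new CostExplorerDash(_connectionInfo);
    var res = await coster.GetMonthlyCost(); // await the task instead of blocking on .Result
    return Ok(res);
}

Either way, the serializer now receives the completed result rather than the Task wrapper, which is what triggered the object-cycle error.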
I am able to programmatically log in to the Power BI client, gather my workspaces, and get a specific report from a specific workspace. I need to programmatically render that report to a .pdf or .xlsx file. Allegedly this is possible with the ExportToFileInGroup/ExportToFileInGroupAsync methods. I even created a very simple report without any parameters, and I can embed it using the sample app from here, so at least I know I have the backend set up correctly. But it fails when I try to run the ExportToFileInGroupAsync method (errors shown below the code).
My Code is:
var accessToken = await tokenAcquisition.GetAccessTokenForUserAsync(new string[] {
    PowerBiScopes.ReadReport,
    PowerBiScopes.ReadDataset,
});

var userInfo = await graphServiceClient.Me.Request().GetAsync();
var userName = userInfo.Mail;

AuthDetails authDetails = new AuthDetails {
    UserName = userName,
    AccessToken = accessToken,
};

var credentials = new TokenCredentials($"{accessToken}", "Bearer");
PowerBIClient powerBIClient = new PowerBIClient(credentials);

var groups = await powerBIClient.Groups.GetGroupsAsync();
var theGroup = groups.Value
    .Where(x => x.Name == "SWIFT Application Development")
    .FirstOrDefault();

var groupReports = await powerBIClient.Reports.GetReportsAsync(theGroup.Id);
var theReport = groupReports.Value
    .Where(x => x.Name == "No Param Test")
    .FirstOrDefault();

var exportRequest = new ExportReportRequest {
    Format = FileFormat.PDF,
};

string result = "";
try {
    var response = await powerBIClient.Reports.ExportToFileInGroupAsync(theGroup.Id, theReport.Id, exportRequest);
    result = response.ReportId.ToString();
} catch (Exception e) {
    result = e.Message;
}
return result;
It gets to the line in the try block and then throws the following errors:
An error occurred while sending the request.
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host..
UPDATE
Relating to @AndreyNikolov's question, here is our Embedded capacity (screenshot omitted).
After this was implemented, there was no change. Same exact error.
Turns out the issue was on our side, more specifically, security/firewall settings. Here is the exact quote from our networking guru.
"After some more investigation we determined that our firewall was causing this issue when it was terminating the SSL connection. We were able to add a bypass for the URL and it is now working as expected."
How can I get the list of test results for a given test case using the Azure DevOps API?
var testResultsQuery = new TestResultsQuery
{
    ResultsFilter = new ResultsFilter
    {
        TestCaseId = validTestCaseId
    }
};

var testCaseResults = await _testClient.GetTestResultsByQueryAsync(testResultsQuery, projectName).ConfigureAwait(false);
This code results in an internal error (500) from the API.
You need to specify the GroupBy and AutomatedTestName properties on the ResultsFilter as well. See the example below:
var testResultsQuery = new TestResultsQuery
{
    ResultsFilter = new ResultsFilter
    {
        AutomatedTestName = "AutoTestName",
        GroupBy = "branch",
        TestCaseId = 149
    }
};
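With those properties set, the same call from the question should succeed. A short usage sketch (assuming, as in the Microsoft.TeamFoundation.TestManagement.WebApi client, that the returned TestResultsQuery carries the matched results in its Results collection; the property names printed are illustrative):

var testCaseResults = await _testClient.GetTestResultsByQueryAsync(testResultsQuery, projectName).ConfigureAwait(false);
foreach (var result in testCaseResults.Results)
{
    Console.WriteLine($"{result.Id}: {result.Outcome}");
}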
In C#, I want to get a list of Service Fabric node information for the nodes where my stateless service runs. This will be useful in tests. I know how to do this for a stateful service using the FabricClient and ActorServiceProxy classes, but for a stateless service I couldn't find a way. Do you have an idea?
Thanks,
You can still use the FabricClient to get this information. Have a play with the QueryManager to check for the info you need.
Here's some quick code I use to query the latest version of our TenantApp service, then check that everything is running in a healthy state or has been upgraded properly.
var currentAppTypes = await fabricClient.QueryManager.GetApplicationTypeListAsync();
var latestTenantAppType = currentAppTypes
    .Where(x => x.ApplicationTypeName.Equals("TenantAppsType"))
    .OrderByDescending(x =>
    {
        // Turn a "major.minor.patch" version string into a sortable number.
        var versions = x.ApplicationTypeVersion.Split('.');
        if (versions.Length == 3)
        {
            return (int.Parse(versions[0]) * 1000000) +
                   (int.Parse(versions[1]) * 1000) +
                   int.Parse(versions[2]);
        }
        return 0;
    })
    .FirstOrDefault();

if (latestTenantAppType != null)
{
    var currentSvcTypes = await fabricClient.QueryManager.GetServiceTypeListAsync(
        latestTenantAppType.ApplicationTypeName,
        latestTenantAppType.ApplicationTypeVersion);
    // etc
}
Or, if you just want to get all the running applications:
var currentApps = await fabricClient.QueryManager.GetApplicationListAsync();
Once you have the service information you can check the nodes it's on, or you can query the nodes directly:
var currentNodes = await fabricClient.QueryManager.GetNodeListAsync();
var nodeInfo = await fabricClient.QueryManager.GetNodeLoadInformationAsync("nodeName");
Hope this helps
For anyone still trying to do this: I had a timer requirement that meant working out, on the fly, how many nodes were running my app. This is roughly the code I used:
string currentNodeName = ServiceContext.NodeContext.NodeName;
var fabricClient = new FabricClient();
var nodeList = (await fabricClient.QueryManager.GetNodeListAsync()).ToList();

// The first path segment of fabric:/AppName/ServiceName is the application name.
var serviceName = ServiceContext.ServiceName.LocalPath.Split('/')[1];

var nodesRunningApplication = new List<Node>();
foreach (var node in nodeList)
{
    // Ask each node which applications are deployed on it and match by name.
    var nodeApplicationList = await fabricClient.QueryManager.GetDeployedApplicationListAsync(node.NodeName);
    var nodeApplication = nodeApplicationList.FirstOrDefault(p =>
        p.ApplicationName.LocalPath.Split('/')[1] == serviceName);
    if (nodeApplication != null)
    {
        nodesRunningApplication.Add(node);
    }
}
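If you need to match at the service level rather than the application level (say, two services in the same application placed on different nodes), the same loop works one step down: ask each node for its deployed replicas, which include stateless instances, and compare service names. A rough sketch under the same assumptions (the applicationName URI here is hypothetical):

var applicationName = new Uri("fabric:/MyApp"); // hypothetical application URI
var serviceUri = ServiceContext.ServiceName;    // e.g. fabric:/MyApp/MyStatelessService

var nodesRunningService = new List<Node>();
foreach (var node in nodeList)
{
    // The deployed replica list covers stateless instances as well as stateful replicas.
    var replicas = await fabricClient.QueryManager.GetDeployedReplicaListAsync(node.NodeName, applicationName);
    if (replicas.Any(r => r.ServiceName == serviceUri))
    {
        nodesRunningService.Add(node);
    }
}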
I want to send images stored in MongoDB using GridFS via an MVC4 web app to the browser in my LAN environment, but it takes ~500 ms until the image reaches the browser.
Google Chrome's network inspector says most of the time is spent "Waiting", while the actual "Receiving" takes ~1 ms.
The MongoDB server is on the local network, so what can take so long to send a 10 KB image? I use Windows 8 with Visual Studio 2012 and the official mongo-csharp-driver via NuGet.
Here is the code of my "Files" controller action, which takes an object id and sends the data for that id:
public FileContentResult Files(string id)
{
    var database = new MongoClient(MyConnection).GetServer().GetDatabase("MyDB");
    var gridFs = new MongoGridFS(database);
    var bsonId = new BsonObjectId(id);
    var gridInfo = gridFs.FindOneById(bsonId);
    var bytes = GridInfoToArray(gridInfo);
    return new FileContentResult(bytes, "image/jpeg") { FileDownloadName = gridInfo.Name };
}

private byte[] GridInfoToArray(MongoGridFSFileInfo file)
{
    using (var stream = file.OpenRead())
    using (var memory = new MemoryStream((int)stream.Length))
    {
        // Stream.Read may return fewer bytes than requested, so copy the whole stream instead.
        stream.CopyTo(memory);
        return memory.ToArray();
    }
}
Code to display the image in a View:
<img src="@Url.Action("Files", new { id = objectIdOfMyImage })" />
How different are the results if you cache your Database and MongoGridFS instances?
// create static fields for _database & _gridFs
var database = _database ??
    (_database = new MongoClient(MyConnection).GetServer().GetDatabase("MyDB"));
var gridFs = _gridFs ??
    (_gridFs = new MongoGridFS(database));
I'm not sure how much overhead it incurs when you instantiate these, but it wouldn't hurt to move it outside of the method you're trying to optimize.
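Put together, a cached version of the controller might look like this (a sketch; the static field names are mine, and MongoClient in particular is designed to be created once and reused):

public class FilesController : Controller
{
    // Reused across requests instead of being rebuilt on every call.
    private static MongoDatabase _database;
    private static MongoGridFS _gridFs;

    public FileContentResult Files(string id)
    {
        var database = _database ??
            (_database = new MongoClient(MyConnection).GetServer().GetDatabase("MyDB"));
        var gridFs = _gridFs ??
            (_gridFs = new MongoGridFS(database));

        var gridInfo = gridFs.FindOneById(new BsonObjectId(id));
        var bytes = GridInfoToArray(gridInfo);
        return new FileContentResult(bytes, "image/jpeg") { FileDownloadName = gridInfo.Name };
    }
}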