How to automate bots to monitor for successful queues on Orchestrator? - queue

I have a project that deals with queue items being loaded successfully and unsuccessfully. At the moment I check this manually, which is tedious and also produces false positives: Orchestrator can state that new queue items have been added, but when I access the actual job (process), nothing has been added.
I would like to know: is there a way to monitor queue success and failure rates on Orchestrator instead of monitoring them manually?

You can access pretty much any information via the Orchestrator API.
You can find the "Orchestrator HTTP Request" activity, which will allow you to access any relevant endpoint.
Note that the provisioned Robot in Orchestrator needs to have the right access permissions, so please have a look at which roles are associated with the Robot user.
The API reference can be found here:
https://docs.uipath.com/orchestrator/reference
You will see it mentions Swagger, which in turn will give you all the information you need to access the relevant APIs.
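If you want to monitor the rates from outside a workflow (a scheduled script, a dashboard, etc.), here is a minimal sketch of what those calls could look like. It assumes an on-premises Orchestrator; the host, tenant, credentials and queue definition id are placeholders, and the exact endpoints should be checked against the API reference above for your version. From inside a workflow, the "Orchestrator HTTP Request" activity handles authentication for you, so you would only need the relative /odata/QueueItems part.

// Minimal sketch (C#): count successful vs. failed queue items via the Orchestrator OData API.
// Host, tenant, credentials and the queue definition id below are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class QueueMonitor
{
    static async Task Main()
    {
        using var client = new HttpClient { BaseAddress = new Uri("https://orchestrator.example.com") };

        // 1) Authenticate (on-premises pattern) and attach the bearer token.
        var authBody = new StringContent(
            "{\"tenancyName\":\"Default\",\"usernameOrEmailAddress\":\"monitor-user\",\"password\":\"***\"}",
            Encoding.UTF8, "application/json");
        var authResponse = await client.PostAsync("/api/Account/Authenticate", authBody);
        var token = JObject.Parse(await authResponse.Content.ReadAsStringAsync())["result"]?.ToString();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);

        // 2) Ask for a count of queue items per status for one queue definition.
        foreach (var status in new[] { "Successful", "Failed" })
        {
            var url = "/odata/QueueItems?$top=0&$count=true" +
                      $"&$filter=QueueDefinitionId eq 12345 and Status eq '{status}'";
            var json = JObject.Parse(await client.GetStringAsync(url));
            Console.WriteLine($"{status}: {json["@odata.count"]}");
        }
    }
}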

Related

Transfer client configuration between environments

For securing a frontend application, I created a new Keycloak client with a custom configuration:
mapper which includes "client roles"
scope configuration
client-specific roles (composite and non-composite roles)
This setup works fine in the local development setup. Now we need to transfer this configuration to the other environments like develop/preproduction/production stage.
As far as I understand, Keycloak offers the following exports:
Complete realm
Specific client
It looks as if both approaches have some major drawbacks. Either I would need to overwrite the complete realm (which I definitely don't want to do in production), or I can import the basic client configuration, which is missing all the roles.
And as soon as we, for example, add more roles later on, then we would need to re-configure all stages manually.
Is there some "good practice" for how to deal with that? Does Keycloak offer some kind of "sync" between stages?
I think this is a hard question to answer.
It comes down to comparing API calls vs. UI configuration.
Disadvantages of API calls: I prefer API calls, but it takes time to figure out the API functions, the call order matters, some properties missing on the parent have to be set in detail on the child, the API URL paths have a complicated structure (for example id/property/id/property), and it requires deeper knowledge of Keycloak.
Advantages of API calls: finer tuning, fast, easy to organize from top to bottom (for example, configure the client, auth resources, auth scopes, policies and permissions into the other environment), and you can transfer 100% of the configuration.
Disadvantages of UI configuration: not flexible; if IDs don't match, they cause errors; you can't update/add partial data (for example, an exported client is missing its resources' scopes, which then have to be set by a separate API call); you can't move 100% of the configuration from the source to the target environment; and it is prone to human error.
Advantages of UI configuration: easy and quick, even though manual.
My preference is API calls: using Postman (single API calls, or running a collection for a sequence of API calls, at the local and develop stages, where you can do simple unit tests and check HTTP statuses) and curl calls in a Bash shell for the higher stages. If you check the state of the target first, you can handle scenario-based transfers (for example, if something is already configured, skip that configuration).
One more tip: if you open the developer tools with F12 in Chrome or Firefox, you can see the admin console's API calls in the Network tab. That saves time when figuring out the API call methods and the payload/response JSON data.
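To make the API route more concrete, here is a minimal sketch of copying a client's roles from one environment to another with the Admin REST API. The base URLs, realm, client id and credentials are placeholders, older Keycloak versions prefix the paths with /auth, and mappers, scopes and composite roles would need additional calls following the same pattern.

// Minimal sketch (C#): copy a client's roles from a source to a target Keycloak environment
// via the Admin REST API. All URLs, names and credentials are placeholders.
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class KeycloakRoleSync
{
    static async Task<HttpClient> AdminClient(string baseUrl, string user, string password)
    {
        var client = new HttpClient { BaseAddress = new Uri(baseUrl) };
        // Obtain an admin token with the built-in admin-cli client (password grant).
        var form = new FormUrlEncodedContent(new Dictionary<string, string>
        {
            ["grant_type"] = "password", ["client_id"] = "admin-cli",
            ["username"] = user, ["password"] = password
        });
        var response = await client.PostAsync("/realms/master/protocol/openid-connect/token", form);
        var token = JObject.Parse(await response.Content.ReadAsStringAsync())["access_token"]?.ToString();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);
        return client;
    }

    static async Task<string> ClientUuid(HttpClient client, string realm, string clientId)
    {
        // The admin API addresses clients by their internal UUID, not by the clientId.
        var clients = JArray.Parse(await client.GetStringAsync($"/admin/realms/{realm}/clients?clientId={clientId}"));
        return clients[0]["id"].ToString();
    }

    static async Task Main()
    {
        var source = await AdminClient("https://keycloak.dev.example.com", "admin", "***");
        var target = await AdminClient("https://keycloak.prod.example.com", "admin", "***");

        var sourceUuid = await ClientUuid(source, "myrealm", "frontend-app");
        var targetUuid = await ClientUuid(target, "myrealm", "frontend-app");

        // Read all client roles from the source and create any that are missing on the target.
        var roles = JArray.Parse(await source.GetStringAsync($"/admin/realms/myrealm/clients/{sourceUuid}/roles"));
        foreach (var role in roles)
        {
            var body = new StringContent(
                new JObject { ["name"] = role["name"], ["description"] = role["description"] }.ToString(),
                Encoding.UTF8, "application/json");
            // Expect 201 Created for new roles and 409 Conflict for roles that already exist.
            var result = await target.PostAsync($"/admin/realms/myrealm/clients/{targetUuid}/roles", body);
            Console.WriteLine($"{role["name"]}: {(int)result.StatusCode}");
        }
    }
}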

Way to pull Exchange permissions

Maybe an easy question for someone who knows PowerShell and O365 well. Is there a way to configure things so that when a command is run, for example to pull all access to a shared mailbox, either a service account or the user running the script is granted permission each time to pull that information? I looked at connecting a service account to the script, but to give it the specific permissions it would end up with too much access to O365. So the account would not be permissioned for the access by default, but every time the script/command is run it would be permissioned for that inquiry, show the result, and then have no access again until the next time it's called.
I'm looking to add this type of function to a script where we only want the helpdesk people to see the information when they run the script and the specific command in it.
Hopefully explained clear enough :)
Thanks all.
I don't think there is a way to do that natively. You could fiddle something together with Azure PIM, but that's more for one-off operations than small actions that are done often.
You could however circumvent that by making some sort of web interface that triggers commands on another server using a privileged SA and returns the output through the web interface. You can just make it so that the interface can only request one specific command to be run, and the only thing you have to worry about is sanitizing your parameters well to avoid unwanted injection.
Alternatively, what are you trying to protect against by restricting access so much? Isn't it something that could be done more easily using a read-only account and some clearly defined policy? If your helpdesk people overstep their allowed scope, that's a management/HR problem as much as a technical one.
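If you do go the wrapper route, the key points are that the backend runs under the privileged service account and only ever exposes one fixed command with a validated parameter. A rough sketch of that idea follows; the cmdlet used (Get-MailboxPermission), the validation rule and the hosting/connection setup are assumptions to adapt to your environment.

// Rough sketch (C#): the backend of the web interface runs under the privileged service
// account and only executes a single, whitelisted command with a validated parameter.
using System;
using System.Diagnostics;
using System.Text.RegularExpressions;

class MailboxPermissionLookup
{
    // Only accept something that looks like a mailbox address, so helpdesk users
    // cannot inject arbitrary PowerShell through the parameter.
    static readonly Regex MailboxPattern = new Regex(@"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+$");

    public static string GetPermissions(string mailbox)
    {
        if (!MailboxPattern.IsMatch(mailbox))
            throw new ArgumentException("Not a valid mailbox address.");

        // Fixed command: the caller can only influence the single validated parameter.
        // The session is assumed to already be connected to Exchange Online.
        var psi = new ProcessStartInfo
        {
            FileName = "powershell.exe",
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        psi.ArgumentList.Add("-NoProfile");
        psi.ArgumentList.Add("-Command");
        psi.ArgumentList.Add($"Get-MailboxPermission -Identity '{mailbox}' | Format-Table -AutoSize");

        using var process = Process.Start(psi);
        var output = process.StandardOutput.ReadToEnd();
        process.WaitForExit();
        return output;
    }

    static void Main() => Console.WriteLine(GetPermissions("shared-mailbox@contoso.com"));
}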

Testing service session management via REST

I need to write a test for a JAX-RS web service that asserts that a certain value is cached in the session from disk on the first request in that session.
The testing process does not have access to the tested process. The use case involves using REST API to invoke services.
I can think of several options to proceed with:
Create a REST endpoint just for testing, and query there the needed session value.
Write and then read a log message.
I am aware that I am trying to test an implementation detail via an external API which does not provide contract for this detail, but currently I'm a bit constrained about which processes may be run by the testing infrastructure.
Are there any additional seams to exploit for testing, and what general good practice exists for this scenario?
I just came up with the idea of changing the cached resource on disk and using the change in behavior: if the value is really cached in the session, requests in the same session should keep returning the old value, while a new session should pick up the change.
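A sketch of how that can be asserted purely through the public REST API (shown in C# here, but any HTTP client works). The endpoint, the on-disk resource path and the assumption that the test can modify that file (e.g. via a shared volume) are all placeholders:

// Sketch: prove the value is cached per session by changing the underlying resource
// mid-session and checking that only a new session sees the change.
using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class SessionCachingTest
{
    static async Task Main()
    {
        var resourcePath = "/srv/app/config/value.txt";             // assumed file the service caches
        var endpoint = new Uri("http://localhost:8080/api/value");  // assumed endpoint exposing the value

        // One handler with one cookie container = one session (JSESSIONID kept across calls).
        using var session = new HttpClient(new HttpClientHandler { CookieContainer = new CookieContainer() });

        var first = await session.GetStringAsync(endpoint);         // primes the session cache
        File.WriteAllText(resourcePath, "changed-value");           // mutate the resource on disk

        var second = await session.GetStringAsync(endpoint);        // same session: should still be cached
        if (second != first) throw new Exception("Value was not cached in the session.");

        using var freshSession = new HttpClient(new HttpClientHandler { CookieContainer = new CookieContainer() });
        var third = await freshSession.GetStringAsync(endpoint);    // new session: should see the new value
        if (third == first) throw new Exception("New session did not pick up the changed resource.");

        Console.WriteLine("Session caching behaves as expected.");
    }
}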

Sharepoint Online remote event receiver without App/Add-in

The company I work for uses SharePoint Online. We have a requirement that on most site collections, whenever a user creates a new document library, the library is configured with the "Document" content type removed and replaced with some of our own corporate content types.
Previously I've managed this by using a coded sandbox solution installed on relevant site collections which had an event handler that fired on "list added". It's obviously now time to move away from that solution.
I'm really struggling to get to grips with the alternative, conceptually. I'm aiming to replace the old solution with a Remote Event Receiver solution.
The way I think I'd like to achieve this:
1) Create a single remote event receiver hosted in Azure which receives details of a new list being added in a site which it then configures appropriately.
2) Use CSOM to provision the site and as part of that provisioning, hook up the event receiver.
I've spent a lot of time on this, getting nowhere. I initially thought the answer lay in using an App which I could install in the App Catalog and then push out to particular site collections, but that doesn't seem to be right.
Is the solution above possible? All examples on the web I've come across of setting up remote event receivers seem to use a SharePoint app which I don't really want to do.
Thanks.
For info, I found the answer. You can indeed create a remote event receiver without a SharePoint app/add-in.
The answer was written up here
I had thought I needed a SharePoint Provider-Hosted App for part 1.
But you should bear in mind that, as per "Remote event receivers on host web clientContext", you will not have the client context passed through, so
TokenHelper.CreateRemoteEventReceiverClientContext(properties)
...will come through as empty. If you want to interact with SharePoint, you'll need to find another way than this approach, or use a different set of credentials.
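For reference, the CSOM side of part 2 (hooking up the receiver while provisioning) can be quite small. A rough sketch; the receiver name and URL are placeholders, and how you build the ClientContext (user credentials or app-only auth) depends on your tenant:

// Rough CSOM sketch: attach a "list added" remote event receiver to the root web during
// provisioning, pointing at a WCF service hosted in Azure (URL is a placeholder).
using Microsoft.SharePoint.Client;

class ProvisionRemoteEventReceiver
{
    public static void AttachListAddedReceiver(ClientContext context)
    {
        var web = context.Web;
        context.Load(web, w => w.EventReceivers);
        context.ExecuteQuery();

        // Avoid adding the receiver twice if the site is re-provisioned.
        foreach (var existing in web.EventReceivers)
            if (existing.ReceiverName == "CorporateListAddedReceiver")
                return;

        web.EventReceivers.Add(new EventReceiverDefinitionCreationInformation
        {
            ReceiverName = "CorporateListAddedReceiver",
            EventType = EventReceiverType.ListAdded,
            ReceiverUrl = "https://my-receivers.azurewebsites.net/ListAddedReceiver.svc",
            SequenceNumber = 10000
        });
        context.ExecuteQuery();
    }
}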

mqsvc.exe pegs cpu at full usage when deploying nservicebus to production

When I deployed my site that uses NServiceBus to a new production box, it was unusably slow...
After some debugging I discovered that mqsvc.exe was taking up 50% of the CPU usage and the other 50% was being taken up by w3wp.exe
I found this post here:
http://geekswithblogs.net/michaelstephenson/archive/2010/05/07/139717.aspx
which recommended the following:
Make sure you set the Windows service for the NServiceBus Generic Host to run under the right credentials
Make sure you have the queue set with the right permissions
Make sure you turn on the right logging configuration in NServiceBus
So I figured the issue was something related to permissions, but even after trying to set the permissions correctly (I thought), I still wasn't able to resolve the issue.
If you allow NServiceBus to create its own queues, then it will create them with the correct permissions it needs.
The problem comes in when you set up a web application, and then the queues are created, and then the identity the application runs under changes. Then you get exactly this problem. NServiceBus tries to check the queue for a message, it does not have access to do so, so it immediately retries over and over, and you spike the processor.
The fix: Delete the queue. Restart the web application. NServiceBus takes over.
Edit: As noted in the comments, NServiceBus 3.x doesn't invoke the installers by default, which means queues are not automatically created in production unless you ask it to. See the documentation page on Installers for more detail.
For a web application (or any other situation where you're not using NServiceBus.Host) you can invoke the installers as part of the fluent config. There is a full example in the NServiceBus download, but here is a link to the relevant file on GitHub.
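As a rough outline of what that looks like in an NServiceBus 3.x web application (for example in Global.asax); the exact fluent-config chain depends on your version and transport, so treat the linked sample as authoritative rather than this sketch:

// Sketch: invoke the NServiceBus installers when the web application starts, so the
// queues get created with the permissions the hosting identity needs.
using NServiceBus;

public class Global : System.Web.HttpApplication
{
    public static IBus Bus { get; private set; }

    protected void Application_Start()
    {
        Bus = Configure.With()
            .DefaultBuilder()
            .XmlSerializer()
            .MsmqTransport()
            .UnicastBus()
            .CreateBus()
            // Running the installers here creates/repairs the queues for the current identity.
            .Start(() => Configure.Instance
                .ForInstallationOn<NServiceBus.Installation.Environments.Windows>()
                .Install());
    }
}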
The issue did end up being that the website needed to be granted explicit permissions to the queues.
I found a number of resources online telling me this, but I still had to spend a good amount of time monkeying around with exactly WHICH account needed access... it turned out that since my application pools were set to run as ApplicationPoolIdentity, I needed to grant permissions by adding the following account to the NServiceBus queue:
IIS AppPool\{APP POOL NAME}
I granted full access rights, though I'm sure you could refine that a bit if you needed to.
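If you'd rather script the grant than set it through the MMC snap-in, here is a small sketch using System.Messaging; the queue path and app pool name are placeholders, and FullControl could be narrowed down as mentioned above.

// Sketch: grant the IIS app pool identity access to an existing private MSMQ queue.
using System.Messaging;

class GrantQueuePermissions
{
    static void Main()
    {
        using (var queue = new MessageQueue(@".\private$\myapp.endpoint"))
        {
            queue.SetPermissions(@"IIS AppPool\MyAppPool",
                                 MessageQueueAccessRights.FullControl,
                                 AccessControlEntryType.Allow);
        }
    }
}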
Hopefully, this will help anyone who runs into the same issues.
(This is my first attempt at the "Answer your own question" mechanism so please let me know if I am doing something wrong..)