How can I specify which team receives which alerts from Opsgenie? - kubernetes

To be more clear: we have some infrastructure- and application-related alerts configured in Prometheus, which is running on cluster A. We also have two teams, a DevOps team and an app team. I would like to make sure that the DevOps team only receives the alerts related to infrastructure and the app team only receives the alerts related to the application.
Is there a way to achieve this?
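Yes. The usual pattern is to label each alert rule with its owning team and split Alertmanager's routing tree on that label. Below is a minimal sketch, assuming your Prometheus rules attach a team label of devops or app and that each team has its own Opsgenie integration with its own API key (the labels, receiver names and keys are illustrative placeholders; alternatively you can keep a single integration and do the routing inside Opsgenie itself with team routing rules):

route:
  receiver: opsgenie-devops            # fallback if no route matches
  routes:
    - match:
        team: devops                   # infra alert rules carry labels: { team: devops }
      receiver: opsgenie-devops
    - match:
        team: app                      # application alert rules carry labels: { team: app }
      receiver: opsgenie-app

receivers:
  - name: opsgenie-devops
    opsgenie_configs:
      - api_key: <devops-team-integration-key>
  - name: opsgenie-app
    opsgenie_configs:
      - api_key: <app-team-integration-key>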

Related

How to make a deployment with different parallel approvals?

I want to make a pipeline (or Azure workflow of any kind) that deploys a snapshot of my software somewhere for User Acceptance Tests, then has people approve this User Acceptance Test, and after successful approval deploys the very same snapshot to the final production environment.
Having "Approvals & Checks" configured for an environment and making a YAML-Pipeline with a "Production"-Stage which is assigned to that environment solves the problem.
But only if I had one group of approvers.
In my case there need to be 3 groups of approvers with different rules each that should be able to have their approval process at the same time (to make it as fast as possible):
Change Advisory Board: when more than 50% of the assigned Change Advisory Board members give approval
Tech Review: when any one of the assigned people gives approval
Design Review: when any one of the assigned people gives approval
Any idea how to do that?
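For reference, a minimal sketch of the single-approver setup described above (stage, environment and script names are illustrative); the open problem is attaching three parallel, differently-scoped approval rules to the production environment, which the built-in Approvals check does not express directly:

stages:
  - stage: UAT
    jobs:
      - deployment: DeploySnapshot
        environment: uat               # no checks; deploys the snapshot for user acceptance testing
        strategy:
          runOnce:
            deploy:
              steps:
                - script: ./deploy.sh uat

  - stage: Production
    dependsOn: UAT
    jobs:
      - deployment: DeploySameSnapshot
        environment: production        # "Approvals & Checks" configured on this environment gate the stage
        strategy:
          runOnce:
            deploy:
              steps:
                - script: ./deploy.sh production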

Prevent users from creating new work items in Azure DevOps

I've been looking at organisation and project settings, but I can't see a setting that would prevent users from creating work items in an Azure DevOps project.
I have a number of users who refuse to follow the guidelines we set out for our projects, so I'd like to inconvenience them and the wider project team so that they find it better to follow the guidelines than not. At the moment we've got one-word user stories and/or tasks with estimates of 60-70 hours, which isn't reflective of the way we should be planning.
I'd still want them to be able to edit the stories or tasks and move statuses, but that initial creation should be off-limits for them (for a time at least). Is there a way to do this?
The Azure DevOps Aggregator project allows you to write simple scripts that get triggered when a work item is created or updated. It uses a service hook to trigger when such an event occurs and abstracts most of the API-specific details away, providing you with an instance of the work item to interact with directly.
You can't block the creation or update from such a policy; Azure DevOps informs the aggregator too late in the creation process to do so. But you can revert changes, close the work item, etc. There are also a few utility functions to send email.
You need to install the aggregator somewhere; it can be hosted in Azure Functions, and we provide a Docker container you can spin up anywhere you want. Then link it to Azure DevOps using a PAT with sufficient permissions and write your first policy.
A few sample rules can be found in the aggregator docs.
store.DeleteWorkItem(self);
should put the work item in the Recycle Bin in Azure DevOps. You can create a code snippet around it that checks the creator of the work item (self.CreatedBy.Id) against a list of known bad identities.
Be mindful that when Azure DevOps creates a new work item, the Created and Updated events may fire in rapid succession (this is caused by the mechanism that sets the backlog order on work items), so you may need to work out which metadata tells you a work item should be deleted. I generally check for a low Revision number (say, < 5) and that the last few revisions didn't change any field other than Backlog Priority.
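Putting those pieces together, a rule could look roughly like this (a sketch, not the aggregator's documented sample: the identity list and revision threshold are placeholders, and it assumes rule scripts expose the members referred to above - self.Revision, self.CreatedBy.Id and store.DeleteWorkItem - plus the usual System.Linq using):

// Placeholder list of identities whose new work items should be removed.
var blockedCreators = new[] {
    "11111111-1111-1111-1111-111111111111",
    "22222222-2222-2222-2222-222222222222",
};

// Created and Updated fire in rapid succession for a new item, so a low
// revision number is used as a heuristic for "freshly created".
if (self.Revision < 5 && blockedCreators.Contains(self.CreatedBy.Id.ToString()))
{
    // Sends the work item to the Recycle Bin rather than destroying it.
    store.DeleteWorkItem(self);
}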
"I'd still want them to be able to edit the stories or tasks and move statuses, but that initial creation should be off-limits for them (for a time at least). Is there a way to do this?"
I'm afraid there is no such out-of-the-box setting for this.
That's because the current permission settings for work items are not yet granular enough to cover this scenario.
There is one related setting:
Project Settings -> Team configuration -> Area -> Security
Set the relevant permission there to Deny and it will prevent users from creating new work items, but it will also prevent them from modifying existing ones.
For your request, you could post this feature suggestion on our UserVoice site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21), which is our main forum for product suggestions.

Region issue for IBM Cloud Activity Tracker with LogDNA

We have two resources in different resource groups:
resource A in Tokyo
resource B in Tokyo as well
We need to create one IBM Cloud Activity Tracker with LogDNA instance for resource A and another one for resource B.
Can I put the IBM Cloud Activity Tracker for resource A in Tokyo and the IBM Cloud Activity Tracker for resource B in Dallas?
I ask because your guide https://cloud.ibm.com/docs/Activity-Tracker-with-LogDNA?topic=Activity-Tracker-with-LogDNA-launch contains the important notice "There is 1 instance per region."
Could you help confirm this case?
Events are logged in the region where they are generated, so an Activity Tracker instance in Dallas would not receive events from resources in Tokyo. Also note that all global events are sent to LogDNA in Frankfurt (Europe).
When working with the logged events, you can distinguish between the services based on the records and their data fields. You could create custom views to separate those events.
You need to name your environments effectively and work with Event fields and Event types.
A basic convention could use a pattern of "department_instance_level".
Instead of "Error Logs" and "Info Logs", views would be named "dev_app1_info", "qa_app2_error", and so on. Not only does this clearly define the contents of each view, but since LogDNA sorts views alphabetically, each view is naturally grouped with similar views. This makes it easy to scan for specific views and search for those containing specific keywords; for example, searching for "prod" narrows the list to views containing production environment logs.
Check the How to use LogDNA Views to Manage Logs Effectively guide to learn the best way to manage your events.
Additionally, since you are planning to set up multiple environments, check the guide on How to Set Up Multiple Environments in LogDNA.
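As a purely hypothetical illustration of such views (the actual searchable fields depend on what your Activity Tracker events carry; LogDNA searches use field:value terms), you might save one view per resource and level from searches like:

host:resource-a level:info      saved as the view "tokyo_resourcea_info"
host:resource-b level:error     saved as the view "tokyo_resourceb_error"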

To create Log Analytics alerts using Azure PowerShell or Azure CLI

I'm trying to create alerts in Log Analytics in the Azure portal. I need 6 alerts for each of 5 databases, so I would have to create 30 alerts manually, which is time-consuming.
Hence I would like an automated approach.
I tried Creating Alerts Using Azure PowerShell, but this creates the alerts under Alerts (classic) in Monitor, which is not what is required; they need to be created in Log Analytics.
The next approach was Create a metric alert with a Resource Manager template, but that creates a metric alert, not a Log Analytics alert.
At last I tried Create and manage alert rules in Log Analytics with REST API, but this is a tedious process: you need to get the search ID, schedule ID, threshold ID and action ID. Even when trying to create the threshold ID or action ID, the error I'm facing is "404 - File or directory not found."
Could someone please suggest how to proceed, or is there any other way to create these alerts apart from creating them manually?
If you use Add activity log alert to add a rule, you will find it under Alerts in Log Analytics in the portal.
Please refer to the Log Analytics Documentation,
Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals.
Update:
I think you should check the specific resource group and related settings.
Even so, an activity log alert belongs to alerts (classic), while "alerts" is the new metric alert type. You can check the new metric alert type link in this article; it points to the new alerts, which are not supported by PowerShell and the CLI currently.
Please refer to:
1. Use PowerShell to create alerts for Azure services
2. Use the cross-platform Azure CLI to create classic metric alerts in Azure Monitor for Azure services
As mentioned in both articles (identically, apart from naming PowerShell vs. the Azure CLI):
This article describes how to create older classic metric alerts. Azure Monitor now supports newer, better metric alerts. These alerts can monitor multiple metrics and allow for alerting on dimensional metrics. PowerShell and Azure CLI support for newer metric alerts is coming soon.
@Shashikiran: You can use the script published on GitHub: https://github.com/microsoft/manageability-toolkits/tree/master/Alert%20Toolkit
It can create a few sample alerts. For now we have included some sample core machine monitoring alerts, such as CPU, hardware failures, SQL, etc., and these are only log alerts. You can use this as sample code and build your own on top of it.
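Alternatively - a sketch rather than a confirmed recipe - the newer Microsoft.Insights/scheduledQueryRules resource type creates a Log Analytics (log) alert in a single ARM deployment, which you can then loop over your 5 databases and 6 alert definitions; the rule name, query, schedule and threshold below are illustrative placeholders:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "workspaceId": { "type": "string", "metadata": { "description": "Resource ID of the Log Analytics workspace" } },
    "dbName": { "type": "string", "metadata": { "description": "Database name; deploy once per database" } }
  },
  "resources": [
    {
      "type": "Microsoft.Insights/scheduledQueryRules",
      "apiVersion": "2018-04-16",
      "name": "[concat('errors-', parameters('dbName'))]",
      "location": "eastus",
      "properties": {
        "enabled": "true",
        "source": {
          "query": "[concat('Event | where Source == \"', parameters('dbName'), '\" and EventLevelName == \"Error\"')]",
          "dataSourceId": "[parameters('workspaceId')]",
          "queryType": "ResultCount"
        },
        "schedule": { "frequencyInMinutes": 15, "timeWindowInMinutes": 15 },
        "action": {
          "odata.type": "Microsoft.WindowsAzure.Management.Monitoring.Alerts.Models.Microsoft.AppInsights.Nexus.DataContracts.Resources.ScheduledQueryRules.AlertingAction",
          "severity": "3",
          "trigger": { "thresholdOperator": "GreaterThan", "threshold": 5 }
        }
      }
    }
  ]
}

You could then deploy it once per database from a small loop, e.g. az group deployment create --resource-group <rg> --template-file logalert.json --parameters dbName=db1 workspaceId=<workspace-resource-id>.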

My Node.js app seems to also get a delivery pipeline app?

I'm running a Node.js app on Bluemix Dedicated. What is odd is that it seems to create a new app called "Delivery Pipeline". They both appear as Node.js apps in my dashboard, and they both appear to share the same actual delivery pipeline.
What is also odd is that the "Delivery Pipeline" app appears to be the one that is actually running and owns the route.
Just seems really odd to me...
I guess you have to check the Delivery Pipeline of your toolchain.
Within the Delivery Pipeline of your toolchain you can define the target application. In your case you might have two Delivery Pipelines, or you just defined the wrong application name.