How to get number of pods in AKS that were active in a given timeframe - kubernetes

So, I'm having an unexpectedly hard time figuring this out. I have a Kubernetes cluster deployed in AKS. In Azure (or the Kubernetes dashboard), how do I view how many active pods there were in a given time frame?

Updated 01/06:
You can use the query below to count the number of active pods:
KubePodInventory
| where TimeGenerated > ago(2d) // set the time frame to 2 days
| where PodStatus == "Running"
| distinct PodUid, PodStatus // count each pod once, not once per inventory record
| summarize count() by PodStatus
Original answer:
If you have configured monitoring, then you can use a Kusto query to fetch it.
The steps are as follows:
1. Go to the Azure portal -> your AKS cluster.
2. In the left panel -> Monitoring -> click Logs.
3. The table named KubePodInventory has a field PodStatus, which you can use as a filter in your query. You can write your own Kusto query and specify the time range via the portal (by clicking the Time range button) or in the query (by using the ago() function). You should also use the count() function to count the number of pods.
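If you prefer the command line, the same query can be run outside the portal with the Azure CLI. A minimal sketch, assuming your Log Analytics workspace GUID is in WORKSPACE_ID (a placeholder name, not from the question):
az monitor log-analytics query \
  --workspace "$WORKSPACE_ID" \
  --analytics-query 'KubePodInventory | where PodStatus == "Running" | distinct PodUid, PodStatus | summarize count() by PodStatus' \
  --timespan P2D
Here --timespan takes an ISO 8601 duration, so P2D matches the ago(2d) window of the query above.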

Related

How to use two Grafana notification policies with multiple labels

I have a Grafana setup and want to set a policy to get all the alerts generated for the namespace "myapp". However, I want to exclude alerts which are generated from the cluster "test-cluster".
Below is my label setup.
Case 1:
namespace=~myapp
cluster=test-cluster
This will not help; it still sends alerts from the cluster "test-cluster".
Case 2:
namespace=~myapp
|
| nested policy with the condition below
--> cluster=test-cluster
This also won't help; it gives the same result as case 1.
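For what it's worth, Grafana's notification-policy matchers also support the negative operators != and !~, so an exclusion is normally expressed as a negative matcher on the same policy rather than as a nested positive one. A sketch of the idea, reusing the labels from the question:
namespace=~myapp
cluster!=test-cluster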

Latest container logs not displayed in k8s dashboard

I have a running k8s cluster, and we integrated the k8s dashboard to view the logs; I am able to log in and view the app logs.
One thing to note here is that our application logs have the current date stamp appended to them, for example: application-20221101.log
I tried to sort the logs in the log location using the command below, and it displays the latest logs inside the pod:
tail -f `/bin/ls -1td /log-location/application*.log | /usr/bin/head -n1`
But once I add this to the container startup script, it just displays the current day's logs, and after the date changes (i.e. it becomes 20221102), it still displays only the previous day's application-20221101.log.
I need it to display the latest logs of the current date even after the date changes.
The easiest approach would be to just remove the timestamp from the log file names, but that is not possible for our application.
Is there any simple way to configure this, or would some workaround be required?
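The root cause is that the backtick command substitution is evaluated exactly once, when the startup script runs, so tail keeps following whichever file was newest at that moment. One workaround is a small wrapper loop that periodically re-resolves the newest file and restarts tail when it changes. A sketch, assuming the /log-location path from the question and a POSIX shell in the container:
#!/bin/sh
# Follow the newest application-*.log, re-checking once a minute.
current=""
pid=""
while true; do
  latest=$(ls -1t /log-location/application*.log 2>/dev/null | head -n1)
  if [ -n "$latest" ] && [ "$latest" != "$current" ]; then
    [ -n "$pid" ] && kill "$pid" 2>/dev/null  # stop tailing the old file
    current="$latest"
    tail -F "$current" &                      # -F also survives truncation/rotation
    pid=$!
  fi
  sleep 60
done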

Retrieve timestamp of k8s upgrades

kubectl version does not show when the cluster was upgraded to the current version. Is there a way to get the cluster upgrade timestamp, or better, timestamps of all previous upgrades?
On GKE, we can see a list of all running and completed operations in the cluster using the following command:
gcloud beta container operations list
Each operation is assigned an operation ID, operation type, start and end times, target cluster, and status.
To get more information about a specific operation, specify the operation ID in the following command:
gcloud beta container operations describe OPERATION_ID -z ZONE
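To pull out just the upgrade events, the list can be filtered by operation type. A sketch, assuming GKE's UPGRADE_MASTER and UPGRADE_NODES operation type names:
gcloud beta container operations list \
  --filter="operationType=UPGRADE_MASTER OR operationType=UPGRADE_NODES" \
  --format="table(operationType, startTime, endTime, status)"
The startTime and endTime columns then give the timestamps of each past upgrade.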

Get Redshift cluster status in outputs of cloudformation

I am creating a Redshift cluster using CloudFormation, and then I need to output the cluster status (basically whether it is available or not). There are ways to output the endpoint and port, but I could not find any way of outputting the status.
How can I get that, or is it not possible?
You are correct. According to AWS::Redshift::Cluster - AWS CloudFormation, the only available outputs are Endpoint.Address and Endpoint.Port.
Status is not something that you'd normally want to output from CloudFormation because the value changes.
If you really want to wait until the cluster is available, you could create a WaitCondition and then have something monitor the status and signal the Wait Condition to continue. This would probably need to be an Amazon EC2 instance with some User Data. Linux instances are charged per second, so this would be quite feasible.
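A minimal sketch of what that instance's User Data could run, assuming the AWS CLI is installed and that CLUSTER_ID and the wait condition handle's pre-signed URL (WAITHANDLE_URL) are substituted in by the template; both variable names are placeholders:
#!/bin/sh
# Poll until the Redshift cluster reports "available", then signal the WaitCondition.
until aws redshift describe-clusters --cluster-identifier "$CLUSTER_ID" \
      --query 'Clusters[0].ClusterStatus' --output text | grep -q '^available$'; do
  sleep 30
done
curl -X PUT -H 'Content-Type:' \
  --data-binary '{"Status":"SUCCESS","Reason":"Cluster available","UniqueId":"redshift","Data":"available"}' \
  "$WAITHANDLE_URL"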

Switch job on online Node jenkins

I have Jenkins and a job with a "Node" (server) selection by users and a "Label Expression" (e.g. server1||server2). But if my server1 goes offline, I want to start my job on server2 automatically. Can anyone help me?
Thanks.
Jenkins' node labels are supposed to be used the other way round. See Manage Jenkins → Manage Nodes → select a node → Configure → click the help icon to the right of the Labels field:
Labels (AKA tags) are used for grouping multiple slaves into one logical group.
So each of your servers (Server1, Server2) should have the same label assigned to it, let's say build. Define this label in the Label Expression of your project.
Select Ignore offline nodes under This build is parameterized → Node → Node eligibility.
That way, if a user selects an offline node, another one from the group should be taken (if it is online).
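Put together, the configuration would look roughly like this (a sketch; the label name build is just an example):
Server1 → Labels: build
Server2 → Labels: build
Job → Label Expression: build
Job → This build is parameterized → Node → Node eligibility: Ignore offline nodes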