How to create a Confluence user macro that divides the size of 2 Jira filters - confluence

What I tried was:
## Macro title: totalBugs
## Macro has a body: Y
## Body processing: Rendered
## Output: Rendered
##
## Developed by: Margus Martsepp
## Date created: 06/02/2023
## Installed by: Margus Martsepp
## #param Search1:title=Search1|type=string|required=true|desc=Choose search 1
## #param Search2:title=Search2|type=string|required=true|desc=Choose search 2
#set($jiraUrl = "https://.../")
#set($proxyHost = "http://...")
#set($proxyPort = "8080")
#set($action = $helper.getActionContext())
#set($jira = $action.getApplicationContext().getComponent("jira"))
#if(!$jira)
#error("Jira variable is not initialized, check if the Jira plugin is properly installed and configured")
#end
#set($result1 = $jira.search($paramSearch1, $jiraUrl, $proxyHost, $proxyPort))
#set($result2 = $jira.search($paramSearch2, $jiraUrl, $proxyHost, $proxyPort))
#if(!$result1)
#error("Search 1 has returned no results, check the query and connection details")
#end
#if(!$result2)
#error("Search 2 has returned no results, check the query and connection details")
#end
#if($result1.total == 0)
#error("Search 1 has returned no results, division by zero is not possible")
#end
#if($result2.total == 0)
#error("Search 2 has returned no results, division by zero is not possible")
#end
#set($count1 = $result1.total)
#set($count2 = $result2.total)
#set($result = $count1 / $count2 * 100)
#set($output = "Result of dividing search 1 count ($count1) by search 2 count ($count2) is: $result")
$output
Similarly, I tried getIssuesFromJqlSearch, e.g. $jira.getIssuesFromJqlSearch($paramSearch1, 1000), but in both cases this yielded:
Result of dividing search 1 count ($count1) by search 2 count ($count2) is: $result
Is there something that I forgot to configure for the JIRA service or is the API used in a different way?

Related

Create an alert for when a cosmosdb partition key exceeds 80% using terraform and alerts should be sent to actions groups

Alerts for some environments should be sent using existing action groups (level 0 / 1); lower environments (level 2+) should use their own.
I created this code but am not sure which approach to use. Can somebody guide me to the right one?
# Example: Alerting Action with result count trigger
resource "azurerm_monitor_scheduled_query_rules_alert" "example" {
  name                = format("%s-queryrule", var.prefix)
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  action {
    action_group           = []
    email_subject          = "Email Header"
    custom_webhook_payload = "{}"
  }

  data_source_id = azurerm_application_insights.example.id
  description    = "Alert when total results cross threshold"
  enabled        = true

  # Count all requests with server error result code grouped into 5-minute bins
  query = <<-QUERY
    requests
      | where tolong(resultCode) >= 500
      | summarize count() by bin(timestamp, 5m)
  QUERY

  severity    = 1
  frequency   = 5
  time_window = 30

  trigger {
    operator  = "GreaterThan"
    threshold = 3
  }
}
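One way to route alerts by environment is to pick the action group id with a conditional. This is only a sketch under assumptions: `var.environment_level`, the action group names, and the `lower_env` resource are all illustrative, and the existing level 0/1 group is assumed to already exist in the resource group:

```hcl
# Assumed: an existing action group used for level 0/1 environments
data "azurerm_monitor_action_group" "existing" {
  name                = "critical-alerts" # illustrative name
  resource_group_name = azurerm_resource_group.example.name
}

# Dedicated action group for lower environments (level 2+)
resource "azurerm_monitor_action_group" "lower_env" {
  name                = format("%s-lower-env-ag", var.prefix)
  resource_group_name = azurerm_resource_group.example.name
  short_name          = "lowerenv"
}

locals {
  # Choose the action group based on the environment level
  alert_action_groups = var.environment_level <= 1 ? [data.azurerm_monitor_action_group.existing.id] : [azurerm_monitor_action_group.lower_env.id]
}
```

The alert's `action { action_group = local.alert_action_groups }` can then reference the local, so one alert definition works across environments.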

cur.execute() psycopg2.ProgrammingError: can't call .execute() on named cursors more than once

I'm trying to get this code to run but it fails with the error above. Can someone please help? I've tried reading about this in other posts but I don't really know how to apply them here. I'm trying to iterate over the rows of this database and select 1400 random ones.
def paragraph_generator(test=True, itersize=5000, year=None, state=None):
    con, cur = database_connection.connect(cursor_type="server")
    cur.itersize = itersize
    while True:
        sql = f"""
        SELECT
            text_id,
            lccn_sn,
            date,
            ed,
            seq,
            chroniclingamerica_meta.statefp,
            chroniclingamerica_meta.countyfp,
            text_ocr
        FROM
            chroniclingamerica NATURAL JOIN chroniclingamerica_meta
        WHERE date_part('year', date) BETWEEN 1860 AND 1920
        ORDER BY RANDOM()
        LIMIT 1400
        """
        if test:
            sql = (
                sql + " limit 10000"
            )  # limit 10000 means it only goes through 10000 lines of the database
        else:
            pass
        print(sql)
        cur.execute(sql)
        for p in cur.fetchall():
            tokens = stem_text(p[-1])  # Stem
            # print(tokens)
            tokens = (
                p[-1]
                .translate(str.maketrans("", "", punct))
                .replace("\n", " ")
                .lower()
                .split(" ")
            )
            tokens_3 = [
                a for a in tokens if len(a) == 3 if a in wn_lemmas
            ]  # For 3-letter words, only keep WordNet-recognized tokens
            tokens = gensim.parsing.preprocessing.remove_short_tokens(
                tokens, minsize=4
            )  # Remove 1-, 2-, and 3-letter words
            tokens = tokens + tokens_3  # Add back in 3-letter WordNet-recognized tokens
            tokens = gensim.parsing.preprocessing.remove_stopword_tokens(
                tokens, stopwords=stop_words
            )  # Remove stopwords in stopword list above
            print("THIS IS THE LENGTH OF TOKENS")
            a = len(tokens)
            print(a)
            if len(tokens) != 0:
                ocr_2 = 1 - (
                    len([a for a in tokens if a in wn_lemmas]) / len(tokens)
                )  # Generate a measure for proportion of OCR errors in a page
            else:
                ocr_2 = float("nan")
            print("THIS IS OCR")
            print(ocr_2)
            ocr = ocr_2
            if ocr < 0.75 and ~np.isnan(
                ocr
            ):  # If the proportion of OCR errors in a page is less than 75%, keep the page and all tokens
                tokens = tokens
            else:
                tokens = []  # Otherwise, give it an empty list (i.e. drop the page)
            yield tokens
    con.close()
Error:
cur.execute(sql)
psycopg2.ProgrammingError: can't call .execute() on named cursors more than once
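A named (server-side) cursor in psycopg2 can only run a single `execute()`, so calling `cur.execute(sql)` inside the `while True` loop fails on the second pass. One way around it, sketched here with the concrete connection details abstracted away (`connect` stands in for whatever `database_connection.connect` does and is assumed to return a psycopg2-style connection), is to open a fresh named cursor on every pass:

```python
def paragraph_batches(connect, sql, itersize=5000):
    """Yield one batch of rows per pass, opening a new server-side
    cursor each time, since psycopg2 named cursors are single-use."""
    con = connect()
    try:
        while True:
            cur = con.cursor(name="para_cursor")  # fresh named cursor per query
            cur.itersize = itersize
            cur.execute(sql)
            rows = cur.fetchall()
            cur.close()
            if not rows:
                break
            yield rows
    finally:
        con.close()
```

The per-row token processing can then run over each yielded batch unchanged.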

What default id should I use such that all the ids in my collection are greater than this id?

I intend to fetch 100 ids at once in a sorted manner.
I find the ids greater than skip where skip can be set to a default value at the beginning. I need to sort the ids generated in the find() and the limit set is 100.
So, my query is:
db['Organization'].find({"_id":{"$gt":ObjectId(skip)}},{"_id":1}).sort([("_id",1)]).limit(100)
As of now, I have set skip to str(0). I intend to update it with the last id fetched in the iteration.
The complete endpoint is:
@hug.get('/organization/collect_pricing')
def get_organizations():
    start_time = datetime.strptime('2016-11-01', '%Y-%m-%d')
    org_ids = []
    org_pricing_plans = []
    counter = 0
    skip = str(0)
    result_check = True
    pricing_response = []
    ob_toolbox = Toolbox()
    while result_check is True:
        print(counter)
        try:
            organizations = db['Organization'].find({"_id": {"$gt": ObjectId(skip)}}, {"_id": 1}).sort([("_id", 1)]).limit(100)
        except Exception as e:
            print(e)
        if organizations.count(True) == 0:
            result_check = False
            continue
        counter += 100
        for org in organizations:
            org_ids.append("Organization$" + str(org["_id"]))
        try:
            pricing_plans = ob_toolbox.bulk_fetch(collection="PricingPlan", identifier="_p_organization", ids=org_ids)
        except Exception as e:
            print(e)
        for i in range(0, organizations.count(True)):
            currDict = {}
            currDict["id"] = org_ids[i]
            currDict["expiresAt"] = pricing_plans[i]["expiresAt"]
            currDict["resources"] = pricing_plans[i]["resources"]
            currDict["_created_at"] = pricing_plans[i]["_created_at"]
            org_pricing_plans.append(currDict)
            print(currDict["id"])
            skip = currDict["id"]
        if organizations.count(True) < 100:
            result_check = False
    return (org_pricing_plans)
If you want a default "minimal" value, the null ObjectId is better. It's the same type (ObjectId) and will sort lowest:
ObjectId('000000000000000000000000')
Alternatively, you could branch when doing a query. Is it first query? If yes, don't include the skip part. If no, use last id from previous results.
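The loop then reduces to standard keyset pagination. A sketch of the pattern, using plain 24-character hex strings in place of ObjectIds (equal-length hex strings compare the same way ObjectIds sort):

```python
NULL_ID = "000000000000000000000000"  # ObjectId('000000000000000000000000')

def keyset_pages(ids, page_size=100, start=NULL_ID):
    """Yield sorted pages of ids strictly greater than the last id seen,
    mirroring find({"_id": {"$gt": last}}).sort("_id").limit(page_size)."""
    ordered = sorted(ids)
    last = start
    while True:
        page = [i for i in ordered if i > last][:page_size]
        if not page:
            break
        yield page
        last = page[-1]  # resume after the last id fetched
```

With the real collection, `last` would be the `_id` of the final document of the previous page, not the `"Organization$"`-prefixed string built for `bulk_fetch`.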

QlikSense - Set Analysis - Handling complexities - Arithmetic, Fields, Variables, Variables within variables, Greater than etc

I am somewhat new to QlikSense, but am getting the hang of it. Set Analysis is probably my weak spot, and no matter how much I read, I tend to forget everything within hours. Plus, the guides don't do a great job explaining how to handle more complex/'tricky' situations (aka Level II or III complexity) than what they deem complex (aka Level I complexity).
I went through this, this and this, still no dice. The only thing left for me to do is to bang my head to the wall and see if something shakes up.
The actual file is pretty big and proprietary, so can't post it here... so I would appreciate if you can give me an idea and point me in the right direction.
GOAL:
I have an expression that works, but I need it in the form of set analysis. Simple, right?
BACKGROUND:
//IN LOAD SCRIPT - set some default values
SET dMinSOS = 20000;
SET dMaxSUSPD = 225;
SET dSUR = 1;
SET dSOR = 0.3;
//IN LOAD SCRIPT - generate some custom inputs so user can select a value
FOR i = 1 to 20
LET counter = i*5000;
LOAD * INLINE [
Min. SOS
$(counter)
];
NEXT i
FOR i = 0 to 9
LET counter = i/10;
LOAD * INLINE [
SOR
$(counter)
];
NEXT i
FOR i = 1 to 30
LET counter = i/10;
LOAD * INLINE [
SUR
$(counter)
];
NEXT i
FOR i = 1 to 15
LET counter = i*25;
LOAD * INLINE [
Max. SUSPD
$(counter)
];
NEXT i
//IN LOAD SCRIPT - if user selects a value from above, then get the max because they can select multiple; otherwise use default values
SET vMinSOS = "IF(ISNULL([Min. SOS]), $(dMinSOS), MAX([Min. SOS]))";
SET vMaxSUSPD = "IF(ISNULL([Max. SUSPD]), $(dMaxSUSPD), MAX([Max. SUSPD]))";
SET vSUR = "IF(ISNULL([SUR]), $(dSUR), MAX([SUR]))";
SET vSOR = "IF(ISNULL([SOR]), $(dSOR), MAX([SOR]))";
//EXPRESSION - works! - [Size], [Heads], [SPD] are direct fields in a table, the return value of 1 or 0 is strictly for reference
=IF(
[Size] >= $(vMinSOS) AND
[Size] - ((([Heads] * IF([SPD] >= $(vMaxSUSPD), $(vMaxSUSPD), [SPD])) / $(vSUR)) + ([Size] * $(vSOR))) >= 0,
1, 0)
//SET ANALYSIS - this needs fixing - i.e. replicate 2nd condition in expression above - Show just the results where both the conditions above are true
=SUM({<
[Size]={">=$(=$(vMinSOS))"},
[Size]={">= #### What goes here? #### "},
>}[Size])
Open to recommendations on better ways of solving this.
=SUM({<
[Size]={"=[Size] >= $(vMinSOS) AND [Size] - ((([Heads] * IF([SPD] >= $(vMaxSUSPD), $(vMaxSUSPD), [SPD])) / $(vSUR)) + ([Size] * $(vSOR))) >= 0"}
>} [Size])

stress centrality in social network

I get this error from the code below:
path[index][4] += 1
IndexError: list index out of range
Why does this happen, and how can I fix it?
Code:
def stress_centrality(g):
    stress = defaultdict(int)
    for a in nx.nodes_iter(g):
        for b in nx.nodes_iter(g):
            if a == b:
                continue
            pred = nx.predecessor(g, b)  # for unweighted graphs
            # pred, distance = nx.dijkstra_predecessor_and_distance(g, b)  # for weighted graphs
            if a not in pred:
                return []
            path = [[a, 0]]
            path_length = 1
            index = 0
            while index >= 0:
                n, i = path[index]
                if n == b:
                    for vertex in list(map(lambda x: x[0], path[:index + 1]))[1:-1]:
                        stress[vertex] += 1
                if len(pred[n]) > i:
                    index += 1
                    if index == path_length:
                        path.append([pred[n][i], 0])
                        path_length += 1
                    else:
                        path[index] = [pred[n][i], 0]
                else:
                    index -= 1
                    if index >= 0:
                        path[index][4] += 1
    return stress
Without the data it's hard to give you anything more than an indicative answer.
This line
path[index][4] += 1
assumes there are 5 elements in path[index] but there are fewer than that. It seems to me that your code only assigns or appends to path lists of length 2. As in
path = [[a,0]]
path.append([pred[n][i],0])
path[index] = [pred[n][i],0]
So it's hard to see how accessing the 5th element of one of those lists could ever be correct.
This is a complete guess, but I think you might have meant
path[index][1] += 1
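For comparison, the same quantity can be computed more directly for an undirected graph with `nx.all_shortest_paths`. A sketch (note it visits each unordered pair once, so counts are half those of the ordered-pair loops above):

```python
import networkx as nx
from collections import defaultdict
from itertools import combinations

def stress_centrality_simple(g):
    """Count, for each node, the shortest paths between other node
    pairs on which it appears as an interior vertex."""
    stress = defaultdict(int)
    for a, b in combinations(g.nodes(), 2):
        if not nx.has_path(g, a, b):
            continue
        for path in nx.all_shortest_paths(g, a, b):
            for v in path[1:-1]:  # endpoints excluded
                stress[v] += 1
    return dict(stress)
```

On a path graph 0-1-2-3, for example, nodes 1 and 2 each lie inside two shortest paths, and the endpoints inside none.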