Why did Prometheus return historical data? - kubernetes

I have grabbed some metrics via Prometheus, but it seems like I got some historical data.
I ran the command curl -X GET $APISERVER/metrics --header "Authorization: Bearer $TOKEN" --insecure | grep apiserver_flowcontrol_dispatched_requests_total three times in a row; the results are shown in the picture.
The result of the second command shows no priority_level="global-default" data at all, while it is present (red underline) in the results of the other two commands. The value underlined in yellow is a counter data type, yet the second command returns a lower value than the first.
I guess my Prometheus got historical data.
How can I resolve this problem?

I'm not sure this is a problem.
Yes. Prometheus is a time series database, meaning that every time you query a metric (to get a value) you get the value of that specific metric at the particular time the query happened.
More details: https://prometheus.io/docs/introduction/overview/
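For example, Prometheus's HTTP API lets you evaluate the same series at an explicit instant; a minimal sketch, with a placeholder Prometheus host and timestamp:
curl -G 'http://prometheus:9090/api/v1/query' \
  --data-urlencode 'query=apiserver_flowcontrol_dispatched_requests_total{priority_level="global-default"}' \
  --data-urlencode 'time=2023-01-01T10:00:00Z'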

Metrics returned in random order when querying CrUX Report API

I query the CrUX Report API, as the dev docs show.
Instead of origin I use url to get data for specific URLs, so my query looks like:
curl https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=API_KEY \
--header 'Content-Type: application/json' --data '{"url":"https://www.spiegel.de/schlagzeilen/"}'
I do this one by one for different URLs.
My problem: the responses come in a different order each time: for the first query CLS comes as the first metric, for the second query FID, and so on.
This issue doesn't depend on how I run the queries: cURL in the terminal, Postman, or Google Apps Script in Google Sheets.
I tried to set an explicit metrics order in the request, like
curl https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=API_KEY \
--header 'Content-Type: application/json' --data '{"url":"https://www.spiegel.de/schlagzeilen/","metrics":["cumulative_layout_shift","first_contentful_paint","first_input_delay","largest_contentful_paint"]}'
but the responses still come in random order.
Q: is there a way to force the metrics order I want in the response?
While the metrics input parameter does allow you to list the metrics that get output in the results, it doesn't control the ordering of the metrics. There is no other input mechanism to enforce a particular metric ordering.
That said, the metrics response is a JSON object, which is an inherently unordered data structure. The ordering of the object keys may still affect how the object is iterated; for example, Object.entries(response.record.metrics) will return the metrics in the order their keys appear.
If the order is critical to your application, I would recommend deterministically looping through a constant array of metric IDs rather than iterating over the object keys. For example:
// Fixed, deterministic metric order for presentation.
const METRICS = ['first_contentful_paint', 'largest_contentful_paint', 'first_input_delay', 'cumulative_layout_shift'];
// Look each metric up on the response's metrics object rather than iterating its keys.
const cruxData = METRICS.map(metric => response.record.metrics[metric]);
I see you're using cURL to issue the requests, so you can adapt this strategy to whichever programming language you use to parse the results.
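For instance, piping the same cURL call through jq (an assumption, not part of the API) projects the metrics into a fixed-order array no matter how the JSON object is serialized:
curl -s "https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{"url":"https://www.spiegel.de/schlagzeilen/"}' |
  jq '[.record.metrics["first_contentful_paint","largest_contentful_paint","first_input_delay","cumulative_layout_shift"]]'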

How can I set up my local Waves network successfully?

I have 2 questions.
I set up a local Waves network.
I want to run 2 miner nodes.
The first node I booted works well and mines blocks.
The second node I booted works well but only syncs blocks.
The second node doesn't mine blocks.
It also has miner.enable=yes set and holds 1000 WAVES.
Is there anything else that needs to be set for this node to be a miner? Or does this node need time to participate in the mining schedule?
I want to get miner info using the REST API.
My local node's config is set as follows.
api-key-hash = "H6nsiifwYKYEx6YzYD7woP1XCn72RVvx6tC1zjjLXqsu"
And I called the API like this.
curl -X GET http://127.0.0.1:6869/debug/minerInfo -H "Content-Type:application/json" -H "api_key: H6nsiifwYKYEx6YzYD7woP1XCn72RVvx6tC1zjjLXqsu"
But I got an error message like this.
{"error":2,"message":"Provided API key is not correct"}
I also called the same API at "https://nodes-testnet.wavesnodes.com/api-docs/index.html#/debug/minerInfo_1"
But I got the same error message.
How can I call this API successfully?
That should be enough, but if your first node holds 99.9999 million WAVES and the second one only 1000, the first one will generate 99.9999% of the blocks, so it may simply not yet be the second node's turn to generate a block.
You should send the X-Api-Key header with the actual API key, not with its hash. For example, if your key was "myawesomekey" and you stored its hash (H6nsiifwYKYEx6YzYD7woP1XCn72RVvx6tC1zjjLXqsu) in the config, then you send the header X-Api-Key: myawesomekey
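For example, reusing the call from the question (a sketch; "myawesomekey" stands in for whatever plaintext key your api-key-hash was generated from):
curl -X GET http://127.0.0.1:6869/debug/minerInfo \
  -H "Content-Type: application/json" \
  -H "X-Api-Key: myawesomekey"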

Tracking events with prometheus and grafana

There's an article, "Tracking Every Release", which describes displaying a vertical line on graphs for every code deployment. They are using Graphite. I would like to do something similar with Prometheus 2.2 and Grafana 5.1. More specifically, I want to get an "application start" event displayed on a graph.
Grafana annotations seem to be the appropriate mechanism for this but I can't figure out what type of prometheus metric to use and how to query it.
The simplest way to do this is via the same basic approach as in the article, by having your deployment tool tell Grafana when it performs a deployment.
Grafana has a built-in system for storing annotations, which are displayed on graphs as vertical lines and can have text associated with them. It can be as simple as creating an API key in your Grafana instance and adding a curl call to your deploy script:
curl -H "Authorization: Bearer <apikey>" http://grafana:3000/api/annotations -H "Content-Type: application/json" -d '{"text":"version 1.2.3 deployed","tags":["deploy","production"]}'
For more info on the available options check the documentation:
http://docs.grafana.org/http_api/annotations/
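If you need the annotation stamped with the actual rollout time rather than the moment curl runs, the same endpoint accepts an optional time field in epoch milliseconds; a sketch with a placeholder timestamp:
curl -H "Authorization: Bearer <apikey>" http://grafana:3000/api/annotations \
  -H "Content-Type: application/json" \
  -d '{"text":"version 1.2.3 deployed","tags":["deploy","production"],"time":1526551200000}'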
Once you have your deployments being added as annotations, you can display them on your dashboard by going to the annotations tab in the dashboard settings and adding a new annotation source. The annotations will then be shown on the panels in your dashboard.
You can get the same result purely from Prometheus metrics, no need to push anything into Grafana:
If you wanted to track all restarts your search expression could be something like:
changes(start_time_seconds{job="foo",env="prod"}[5m]) > 0
Or something like this if you only wanted to track version changes (and you had some sort of info metric that provided the version):
alertmanager_build_info unless max_over_time(alertmanager_build_info[1d] offset 5m)
The latter expression should only produce an output for 5 minutes whenever a new alertmanager_build_info metric appears (i.e. one with different labels such as version). You can further tweak it to only produce an output when version changes, e.g. by aggregating away all other labels.
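As a hedged sketch of that tweak, aggregating everything down to the version label so the expression only fires when version itself changes:
max by (version) (alertmanager_build_info) unless max by (version) (max_over_time(alertmanager_build_info[1d] offset 5m))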
A note here as technology has evolved. We get deployment job state information in Prometheus metrics format scraped directly from the community edition of Hashicorp's Nomad and we view this information in Grafana.
In your case, you would just add an additional query to an existing panel to overlay job start events, which is equivalent to a new deployment for us. There are a lot of related metrics "out of the box," such as for a change in job version that can be considered as well. The main point is no additional work is required besides adding a query in Grafana.

Finding all the users in Jira using the REST API

I'm trying to list all the users in Jira using the REST API. I'm currently using the user search feature via GET: https://docs.atlassian.com/jira/REST/server/#api/2/user-findUsers
The thing is, the docs say that the result will display the first 50 matches by default, and that this can be expanded up to 1000. Compared to other features available in the REST API, pagination here is not specified.
An example is the group member feature : https://docs.atlassian.com/jira/REST/server/#api/2/group-getUsersFromGroup
Thus I ran a test: with my test Jira containing 2 members, I tried to get only one result and see whether there was some sort of indication referring to the rest of the results.
The response only contains the results themselves, with no way to know whether there were more than 1000 (or 1, in my example). That may be logical, but for an organization with more than 1000 members, listing all the users via http://jira/rest/api/2/user/search?username=.&maxResults=1000&includeInactive=true will return at most 1000 results.
I'm getting all the users no matter what their names are by using . as the matching character.
Thanks for your help!
What you can do is calculate the number of users manually.
Let's say you have 98 users in your system.
The first search will give you 50 users. Now you have an array, and you can get the length of that array, which is 50.
Since you do not know whether there are exactly 50 users or more, you execute another search with the parameter &startAt=50.
This time the array length is 48 instead of 50, so you know that you've reached all the users in the system.
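A minimal sketch of that loop in shell, assuming jq is installed; the credentials, host, and the 50-per-page size are placeholders:
# Page through /user/search until a short page signals the end.
start=0
page=50
while true; do
  batch=$(curl -s -u username:password \
    "http://jira/rest/api/2/user/search?username=.&startAt=$start&maxResults=$page&includeInactive=true")
  echo "$batch" | jq -r '.[].name'      # print each user's name
  count=$(echo "$batch" | jq 'length')  # how many users this page returned
  [ "$count" -lt "$page" ] && break     # a short page means we've seen everyone
  start=$((start + page))
done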
From speaking to Atlassian support, it seems like the user/search endpoint has a bug where it will only ever return the first 1,000 results at most.
One possible alternative for getting all of the users in your JIRA instance is to use the Crowd API's /rest/usermanagement/1/search endpoint:
curl -X GET \
'https://jira.url/rest/usermanagement/1/search?entity-type=user&start-index=0&max-results=1000&expand=user' \
-H 'Accept: application/json' -u username:password
You'll need to create a new JIRA User Server entry to create Crowd credentials (the username:password parameter above) for your application to use in its REST API calls:
Go to User Management.
Select JIRA User Server.
Add an application.
Enter the application name and password that the application will use when accessing your JIRA server application.
Enter the IP address, addresses, or IP CIDR block of the application, and click Save.

Neo4J Import Tool Inconsistencies

I've tried searching for documentation on using the "new" neo4j-admin import tool but have not found anything regarding its usage compared to the soon-to-be-deprecated neo4j-import tool.
I receive no fatal errors when using neo4j-import and am able to query the database. The command I use for import is as follows:
./neo4j-import \
--into /Users/rlinchan/Applications/neo4j-community-3.0.8/data/databases/graph.db/ \
--stacktrace 'true' \
--delimiter "|" \
--array-delimiter ";" \
--quote "\"" \
--bad-tolerance 10000 \
--nodes "/Users/rlinchan/Desktop/v4/nodes/disease_ontology.do.nodes.csv" \
--nodes "/Users/rlinchan/Desktop/v4/nodes/ebi.chebi.nodes.csv" \
--relationships "/Users/rlinchan/Desktop/v4/edges/disease_ontology.do.edges.csv" \
--relationships "/Users/rlinchan/Desktop/v4/edges/ebi.chebi.edges.csv"
There are many more node and relationship files being loaded (~6 GB of data), but I've excluded them here for brevity. The issue I'm having, upon a cursory inspection of the data using the Neo4j browser, is an inability to see the relationship types in the "Database Information" section.
I am able to query the database and receive results in the browser, showing that the relationships do exist. However, I am not able to modify the color, size, or label of nodes and edges in the Neo4j browser visualizations, which I need for publication figures. All nodes and edges are gray, and selections of color, size, and label are not applied to the query results.
Is this an issue with my data import? I've used this command for the import tool on various other Neo4j versions without fault.
Here are examples of the node files and edge files, if that helps at all:
Node Header
source_id:ID|name:string|synonyms:string[]|definition:string|:LABEL
Edge Header
:START_ID|:TYPE|source:string|:END_ID
The labels assigned to node types will cause problems if a label contains special characters, specifically a period.
Previous graph databases I had created worked just fine regardless of the Neo4j version, so I realized it had to be the dataset itself.
Periods in a label presumably clash with naming conventions in JavaScript or Java itself (where the period is used for accessing classes, properties, and methods).
I simply renamed the labels in my dataset by replacing periods with underscores, and coloring, naming, and size modifications in the Neo4j browser are no longer an issue. (See image below.)
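For example, an awk one-liner along these lines could do the renaming before import (an assumption: the :LABEL value is the final |-delimited column, as in the header above, and fields contain no quoted | characters):
awk -F'|' 'BEGIN { OFS = "|" } { gsub(/\./, "_", $NF); print }' disease_ontology.do.nodes.csv > disease_ontology.do.nodes.fixed.csv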
Neo4j Browser Modifications (since I can't post images)
It could just be that some of the metadata in the browser is temporarily out of sync with the server. Try disconnecting from the server using the browser command :server disconnect, and then logging back in.