In the Stage View plugin, we can see on each of the stages a timestamp that displays the number of seconds for the stage and the number of seconds spent waiting within that stage. This is interesting data, but I haven't figured out where we might be able to access it outside of the display on the single pipeline. If we want to use these times in our own metrics program, say to measure trends over multiple pipelines and/or projects, is it accessible externally somehow?
You can use the existing JSON API: http://[JENKINS_HOST]/job/test/[BUILD_NUMBER]/wfapi/describe
There is timing information in the response.
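For instance, here is a minimal sketch of pulling those timings into your own metrics tooling, assuming the stage entries in the response expose duration and pause fields such as durationMillis and pauseDurationMillis (the host, job name, and build number below are placeholders):

    import json
    import urllib.request

    # Placeholder URL - substitute your Jenkins host, job name, and build number
    # (authentication is omitted here and may be required on your instance).
    url = "http://JENKINS_HOST/job/test/42/wfapi/describe"

    with urllib.request.urlopen(url) as response:
        data = json.load(response)

    # Each stage entry is expected to carry its total and waiting time in milliseconds.
    for stage in data.get("stages", []):
        print(stage.get("name"),
              stage.get("durationMillis"),
              stage.get("pauseDurationMillis"))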
I have a report in Power BI that has many pages/tabs, and each one also has a lot of data being displayed. As I didn't design this, I'm going through the report to eliminate as much as I can and possibly split it, as a lot of the data only requires refreshing once a week.
This is where my query comes in: I have information on one page that requires a refresh every two hours over a 12-hour period, one field of data that requires a daily refresh, and two more fields that only need refreshing on demand.
Is it possible to segment scheduled refreshes across parts of a single report, or does scheduled refresh only allow the entire report to be refreshed? (e.g. sales status is hourly, outbound status is daily, and sales summary is weekly)
I'd rather avoid having to split the report, as it is very handy to have everything in one place rather than having to open two reports and view them on multiple monitors.
I am just starting out with Power BI reports, having been shown enough to get what I need done, but I plan to delve further in; this is my first port of call to find out whether it is possible.
Thanks for any responses in advance.
Brian.
No, it's not possible.
Power BI internally works like a Tabular Model.
In Import mode you cannot do incremental refresh either.
So the other option is to create a reporting layer and define denormalized reporting tables with calculated columns (Sales, Summary),
and use DirectQuery or scheduled refresh and run the ETL for those tables.
That way you can schedule the ETL for specific tables, i.e. Sales or Summary.
I'm planning to introduce linting into a rather massive code base. Fixing all existing issues beforehand is not possible, so seeing thousands of linter errors at the start is inevitable.
I'd like to record the number of detected errors each time the build runs for master and treat this number as a success / failure threshold. If a new pull request does not exceed the current baseline, its pipeline passes and so the proposed change is good to go. However, if the number of errors increases, I'd like the pipeline to fail, thus preventing the merge.
The functionality I've described boils down to writing variables to the Azure DevOps server as a side effect of a build, and reading those values back from the previous build. This looks very similar to comparing code coverage; however, I can't seem to find any docs on how to implement the read/write logic manually.
What pipeline task could I use? What else could I leverage to track a custom metric over a number of builds and compare the value with the previous one? To summarise, my ultimate goal is to gradually lower an arbitrary value from a large number to zero over the course of several months.
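To illustrate what I'm after: the comparison itself would be trivial once the baseline is readable from somewhere (where and how to store it is exactly the open question, so the file names below are purely hypothetical), something roughly like this:

    import sys

    # Hypothetical file names: 'baseline_lint_errors.txt' would come from the last master
    # build (however it is persisted), 'current_lint_errors.txt' from this build's lint run.
    def read_count(path):
        with open(path) as f:
            return int(f.read().strip())

    baseline = read_count("baseline_lint_errors.txt")
    current = read_count("current_lint_errors.txt")

    print(f"baseline={baseline}, current={current}")

    if current > baseline:
        # A non-zero exit code fails the pipeline step, which blocks the merge.
        sys.exit(f"Lint errors increased: {current} > {baseline}")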
I have a JMeter project with multiple GET and POST requests and assertions for these. I use the Aggregate Results and View Results Tree listeners, but neither of these can store results on an hourly basis. I tried the JMeterPlugins-Standard and JMeterPlugins-Extras packages and the jp@gc - Graphs Generator listener, but all of them use aggregated data instead of hourly data. So I would like to get the number of successful and failed requests/assertions per hour; a bar chart would probably be most suitable for this purpose.
I'm going to suggest a non-conventional, design-level solution: name your samplers dynamically with the hour (or date and hour), so that each hour the name changes and the samplers therefore appear in a different category, i.e.:
The code for such a name is:
${__time(dd:hh,)} the rest of sampler name
Such a sampler will appear in the following way in the Aggregate Report (here I simulated it with minutes/seconds, but the same will happen with days/hours, just on a larger scale):
Pros and cons of this approach:
Simple: you can aggregate anything by hour, minute, or any other time slice while the test is running, rather than by analysis after execution.
Not listener-dependent: it can be used with pretty much any listener or visualizer.
If you also want overall stats, you will have to sum up every sub-category. So it alters the data, but in a way that can still be added back to the original relatively easily.
Calculating __time before every sampler will not go completely unnoticed from a performance perspective, but I don't think it will add visible overhead to the script.
You could get the same data by properly aggregating the JTL or CSV (whichever you use) after execution, so it doesn't provide anything that is impossible to achieve using standard methods.
The script needs altering to make this happen. If you have hundreds of samplers, it's going to take a while. And if you want to change back...
You might want to use Filter Results Tool which has --start-offset and --end-offset parameters, you can "cut" your results file into "interesting" pieces and plot them according to your requirements.
You can install Filter Results Tool using JMeter Plugins Manager
Also be aware that, according to JMeter Best Practices, you should:
Use as few Listeners as possible; if using the -l flag as above they can all be deleted or disabled.
Don't use "View Results Tree" or "View Results in Table" listeners during the load test, use them only during scripting phase to debug your scripts.
You can get whatever information you need from the .jtl results file; you can specify the test results location via the -l command-line argument.
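As an illustration rather than an official JMeter feature, here is a small post-processing sketch that counts passed and failed samples per hour from a CSV-format .jtl, assuming the default CSV output with a header row and the timeStamp / success columns:

    import csv
    from collections import Counter
    from datetime import datetime

    passed = Counter()
    failed = Counter()

    # Assumes CSV-format results with a header row and the default
    # 'timeStamp' (epoch milliseconds) and 'success' ("true"/"false") columns.
    with open("results.jtl", newline="") as f:
        for row in csv.DictReader(f):
            hour = datetime.fromtimestamp(int(row["timeStamp"]) / 1000).strftime("%Y-%m-%d %H:00")
            if row["success"].lower() == "true":
                passed[hour] += 1
            else:
                failed[hour] += 1

    for hour in sorted(set(passed) | set(failed)):
        print(hour, "passed:", passed[hour], "failed:", failed[hour])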
To get summarized results per hour, add a Generate Summary Results listener to your test plan:
Generates a summary of the test run so far to the log file and/or standard output
Update the interval in jmeter.properties to your needs, e.g. 1 hour = 3600 seconds:
summariser.interval=3600
You will get a summary of your requests per hour.
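For reference, the related summariser settings usually look like the snippet below in jmeter.properties or user.properties; apart from summariser.interval these are the typical defaults, so double-check them against your JMeter version:

    # Prefix of the summary lines in the log/console
    summariser.name=summary
    # Interval between summaries, in seconds (1 hour)
    summariser.interval=3600
    # Write the summary to the jmeter log file
    summariser.log=true
    # Also write the summary to standard output
    summariser.out=true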
You can try the JMeter Backend Listener. It has integration with Graphite and InfluxDB. After storing the results in one of these time-series databases, you can display them in a Grafana dashboard. Grafana has its own filtering for showing the results on an hourly, daily, monthly basis and so on.
I have a real-time workflow for creating unique numbers. This workflow gets a numeric field from my custom entity, increases it by 1, and updates it for the next use.
I want to run this workflow on multiple records.
Running it in on-demand mode works fine and I get true, unique numbers, but in "Record is Created" mode it does not work correctly and I get repeated numbers.
What do I have to do?
This approach won't work. When the workflow runs automatically it runs multi-threaded, e.g. two users create two records and two instances of the workflow start. As there is no locking mechanism, you end up with duplicated numbers.
I'm guessing this isn't happening when running on demand because you are running as a single user.
You will need to implement a custom auto number approach, such as Auto Number for DynamicsCRM.
Disclaimer: I work for Gap Consulting who produce the tool linked above.
Google Compute Engine allows for a daily export of a project's itemized bill to a storage bucket (.csv or .json). In the daily file I can see X-number of seconds of N1-Highmem-8 VM usage. Is there a mechanism for further identifying costs, such as per tag or instance group, when a project has many of the same resource type deployed for different functional operations?
As an example, 10 N1-Highmem-8 VMs are deployed to a region in a project. In the daily bill they just display as X seconds of N1-Highmem-8.
Functionally:
2 VMs might run a database 24x7
3 VMs might run a batch analytics operation averaging 2-5 hrs each night
5 VMs might perform a batch operation which runs in sporadic 10-minute intervals through the day
The final operation writes data to a specific GCS bucket; the other operations read/write to different buckets.
How might costs be broken out across these four operations each day?
The usage logs do not provide per-tag granularity at this time, and it can be a little tricky to work with them, but here is what I recommend.
To further break down the usage logs and get better information out of them, I'd recommend working like this:
Your usage logs provide the following fields:
Report Date
MeasurementId
Quantity
Unit
Resource URI
ResourceId
Location
If you look at the MeasurementId, you can choose to filter by the type of image you want to verify. For example, VmimageN1Standard_1 is used to represent an n1-standard-1 machine type.
You can then use the MeasurementId in combination with the Resource URI to find out what your usage is on a more granular (per-instance) scale. For example, the Resource URI for my test machine would be:
https://www.googleapis.com/compute/v1/projects/MY_PROJECT/zones/ZONE/instances/boyan-test-instance
*Note: I've replaced "MY_PROJECT" and "ZONE" here; those would be specific to your output, along with the name of the instance.
If you look at the end of the URI, you can clearly see which instance that is for. You could then use this to look for a specific instance you're checking.
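If you prefer to script it, here is a rough sketch of that same filtering over a CSV export of the usage logs (the file name is a placeholder, and the column headers are assumed to match the field names listed above):

    import csv
    from collections import defaultdict

    # From the example above; substitute the MeasurementId for your machine type.
    MACHINE_MEASUREMENT = "VmimageN1Standard_1"

    usage_per_instance = defaultdict(float)

    with open("usage_gce_export.csv", newline="") as f:
        for row in csv.DictReader(f):
            if MACHINE_MEASUREMENT not in row["MeasurementId"]:
                continue
            # The instance name is the last path segment of the Resource URI.
            instance = row["Resource URI"].rsplit("/", 1)[-1]
            usage_per_instance[instance] += float(row["Quantity"])

    for instance, quantity in sorted(usage_per_instance.items()):
        print(instance, quantity)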
If you are better skilled with Excel or other spreadsheet/analysis software, you may be able to do even better, as this is just an idea of how you could use the logs. At that point it becomes somewhat a question of creativity. I am sure you could find good ways to work with the data you gain from an export.
9/2017 update.
It is now possible to add user-defined labels, then track usage and billing by these labels for Compute and GCS.
Additionally, by enabling the billing export to BigQuery, it is then possible to create custom views, or to hit BigQuery from a tool more friendly to finance people such as Google Docs, Data Studio, or anything which can connect to BigQuery. Here is a great example of labels across multiple projects to split costs into something friendlier to organizations, in this case a Data Studio report.
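As a rough illustration of such a custom view (the table name and the exact export schema are assumptions here, so check your own billing export for the real column and label names), a per-label cost roll-up might look something like this:

    from google.cloud import bigquery  # requires the google-cloud-bigquery package

    client = bigquery.Client()

    # Assumed table name and schema: the exported billing rows are taken to carry a
    # repeated 'labels' field of (key, value) pairs and a 'cost' column.
    query = """
        SELECT label.value AS team, SUM(cost) AS total_cost
        FROM `my_project.billing_dataset.gcp_billing_export`,
             UNNEST(labels) AS label
        WHERE label.key = 'team'
        GROUP BY team
        ORDER BY total_cost DESC
    """

    for row in client.query(query).result():
        print(row.team, row.total_cost)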