My objective is to count the number of repositories that use PyTorch. I came up with the following request, using the Thunder Client extension in VS Code:
https://api.github.com/search/repositories?q=language:python + readme:PyTorch
However, this gives me just 7 search results. I am confident the result should be in the range of thousands. Could someone suggest where I am going wrong?
The GitHub search API for repositories checks the name, description, and README of every repository.
Therefore, all that needed to be done was:
https://api.github.com/search/repositories?q=PyTorch
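If you want to script the count, here is a minimal sketch using Python's requests library; the total_count field of the search response holds the full number of matches, not just the first page of items:

import requests

# Search repositories whose name, description, or README mention PyTorch.
resp = requests.get(
    "https://api.github.com/search/repositories",
    params={"q": "PyTorch"},
    headers={"Accept": "application/vnd.github+json"},
)
resp.raise_for_status()

# total_count reports the total number of matches across all result pages.
print(resp.json()["total_count"])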
I have installed Nominatim 4.1.0 (tokenizer = ICU) by following the instructions in the Nominatim documentation, added Wikipedia data during the installation, and imported an up-to-date PBF file from geofabrik.de.
Everything works, but for some kinds of requests (e.g. "Cagliari via Roma") the answers I get from the Nominatim website (https://nominatim.openstreetmap.org/) and from my local installation are very different. The correct results are the ones on the Nominatim website, of course.
The problem seems to be with the search-candidate algorithm or with the attribution/calculation of the AddressImportance parameter.
The very strange thing is that I get these wrong results only for some requests.
Is there any particular parameter to set, or anything else to verify?
I hope this is clear; even a small piece of advice or a comment would be very helpful to me.
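For reference, this is roughly how I compare the two installations (a minimal sketch; the local URL and port are specific to my setup, and "Cagliari via Roma" is one of the failing queries):

import requests

QUERY = "Cagliari via Roma"

for base in ("https://nominatim.openstreetmap.org", "http://localhost:8080"):
    resp = requests.get(
        base + "/search",
        params={"q": QUERY, "format": "jsonv2", "limit": 5},
        # A custom User-Agent is required by the public instance's usage policy.
        headers={"User-Agent": "local-vs-remote-comparison"},
    )
    resp.raise_for_status()
    print(base)
    for place in resp.json():
        # importance drives the ranking, which is where my results diverge.
        print("  %.4f  %s" % (place["importance"], place["display_name"]))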
Thanks
Michele
After a discussion with the maintainers (https://github.com/osm-search/Nominatim/discussions/2839), I found an acceptable solution by editing the following line in Geocode.php:
$this->iLimit = $iLimit + max($iLimit, 150);
The result is not exactly the same as that of the online version, but it works fine for me.
Consider this module App::TimeTracker. If you click on the tracker link in the SYNOPSIS section, you end up here, whereas you should have ended up here. The Pod source code responsible for this behavior is given here; it shows that the following Pod formatting code was used:
L<tracker>
I can fix the problem by providing an absolute link instead:
L<tracker|https://metacpan.org/pod/release/DOMM/App-TimeTracker-3.000/bin/tracker>
but this pins the link to version 3.000, which may change in the future.
So how should this be done in general?
Use the full path without the version number: https://metacpan.org/pod/distribution/App-TimeTracker/bin/tracker.
The problem is that tracker_bash_autocomplete is not being indexed correctly as documentation by MetaCPAN. The NAME section has a very specific format, based on man pages, which must be adhered to for MetaCPAN to know how to link to your documentation. Putting "tracker bash autocomplete" before the hyphen makes MetaCPAN index it as tracker. The NAME section should instead read:
=head1 NAME
tracker_bash_autocomplete - whatever
My apologies if this question is duplicated. We are facing an issue with the analysis of a profile using the Watson Personality Insights API in Spanish. We implemented a demo using PI API version 2 and then tested the results (the exact same text) against the demo published on Developer Cloud (in Spanish), and we found important differences in how the Big Five were calculated, even though the facet values were not that different. Is it possible that these differences are caused by the API version? The issue is that with our demo the Big Five values produced a kind of negative summary profile, while the Developer Cloud summary is kinder.
We can send both result JSONs if needed. For example, here is how the Big Five were rated:
Big Five           DeveloperCloud   Demo V2
Openness           0.773834349      0.847273232
Conscientiousness  0.916616088      0.914907481
Extraversion       0.796331544      0.612606551
Agreeableness      0.17445636       0.096118648
Emotional range    0.036287447      0.01623536
Thanks in advance!
So the API version would not make a difference, as that just governs the format of the API; the back-end models are the same for both v2 and v3 of the API.
So the gist of your question is that when you run the same text through your app and through the demo, you get different Big Five results, while the facet values are about the same.
This might be most easily solved by opening a support ticket so we can debug the issue together; if you'd rather not do that, can you provide a sample text? Typically it boils down to a difference in the way the text is parsed.
Another question: did you try making the request using curl? That would cut out any custom logic in your app and help narrow down the problem.
Thanks, Neil, for your answer!
We tested the text using curl and noticed that the results changed not with the service version but with how the text was sent. If we called the service passing a plain-text input (UTF-8, with line breaks), it returned the same results for version 2 and version 3, and they matched the ones from our demo. If we called the service passing a JSON input WITHOUT line breaks, it returned the same values as well. But if we called the service passing a JSON input WITH line breaks, the results changed and almost matched those shown by the IBM demo (the two request shapes are sketched below).

My question here is: which are the correct results? The ones produced when the text is sent as plain-text input (with line breaks), or when it is sent as JSON input (with line breaks)? Is there any technical guideline, besides the one shown on Developer Cloud, on how the text should be parsed to use this service?
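To illustrate, the two request shapes we compared look roughly like this (a minimal sketch; the endpoint, credentials, and version date are placeholders, and the contentItems field names follow our reading of the v3 API reference):

import json
import requests

# Placeholder endpoint and credentials; adjust for your service instance.
URL = "https://gateway.watsonplatform.net/personality-insights/api/v3/profile"
AUTH = ("username", "password")
TEXT = "first paragraph of the profile text\nsecond paragraph of the profile text"

# Variant 1: plain-text input, line breaks preserved in the body.
r1 = requests.post(
    URL, params={"version": "2016-10-20"}, auth=AUTH,
    headers={"Content-Type": "text/plain; charset=utf-8"},
    data=TEXT.encode("utf-8"),
)

# Variant 2: JSON content-items input, line breaks inside the JSON string.
body = {"contentItems": [{"content": TEXT, "contenttype": "text/plain"}]}
r2 = requests.post(
    URL, params={"version": "2016-10-20"}, auth=AUTH,
    headers={"Content-Type": "application/json"},
    data=json.dumps(body),
)

# Compare the two profiles side by side.
print(r1.status_code, r1.text[:120])
print(r2.status_code, r2.text[:120])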
Thanks again!
I have a project running in Klocwork, and after the build completes, the Klocwork results are generated. Every time, I need to go to the Klocwork portal to get the results and look for the new issues or the total issues. Instead, I need an API or script that fetches the total number of issues from the Klocwork results automatically when the build is successful.
Is there any way to achieve this? One way is to view the portal page source as HTML and scrape the result I need. However, I think there might be a better solution.
Can someone help me in achieving this?
Thanks in advance.
I answered a similar question over here. Below is an updated answer with links to the documentation for the most recent release, Klocwork 11.
Klocwork has a WebAPI which you can use to query this type of information from your favorite scripting language, or for example with curl. API documentation is also provided on your Klocwork server at http://klocwork_server_host:port/review/api, for example http://localhost:8080/review/api.
The query:
curl --data "action=search&user=my_account&project=my_project&query=build:build_1 status:Analyze state:New,Existing&ltoken=xxxx" http://localhost:8080/review/api
will return a list of all open (state New and Existing), non-cited (status Analyze) issues found in a build named build_1 of project my_project.
For a list of the keywords you can use in the query string with the search action, see Searching in Klocwork Review.
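If you only need the number from a script, a minimal Python sketch along these lines should work; the WebAPI returns one JSON record per line, so counting non-empty lines gives the issue count (host, account, project, and token are placeholders):

import requests

data = {
    "action": "search",
    "user": "my_account",
    "project": "my_project",
    "query": "build:build_1 status:Analyze state:New,Existing",
    "ltoken": "xxxx",  # placeholder login token
}
resp = requests.post("http://localhost:8080/review/api", data=data)
resp.raise_for_status()

# The search action streams one JSON record per line; count the non-empty lines.
count = sum(1 for line in resp.text.splitlines() if line.strip())
print("%d open issues in build_1" % count)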
If you want just a summary of the number of defects instead of getting the whole list, you can use the report action:
curl --data "action=report&user=my_account&project=my_project&build=build_1&x=Category&y=Component&filterQuery=status:Analyze state:New,Existing&ltoken=xxxx" http://localhost:8080/review/api
which returns a summary of the number of defects by checker category (taxonomy) and component. Sample output is below:
{"rows":[{"id":1,"name":"C and C++"},{"id":3,"name":"MISRA C"},{"id":4,"name":"MISRA C++"}],"columns":[{"id":5,"name":"System Model"}],"data":[[122],[354],[890]],"warnings":[]}
You can modify the x and y axis parameters to produce different breakdowns of the issues, for example by Severity and State instead:
curl --data "action=report&user=my_account&project=my_project&build=build_3&x=Severity&y=State&filterQuery=state:New,Existing,Fixed&ltoken=xxxx" http://localhost:8080/review/api
output:
{"rows":[{"id":1,"name":"Critical"},{"id":2,"name":"Error"},{"id":3,"name":"Warning"},{"id":4,"name":"Review"}],"columns":[{"id":-1,"name":"Existing"},{"id":-1,"name":"Fixed"},{"id":-1,"name":"New"}],"data":[[10,5,2],[20,6,1],[45,11,3],[1112,78,23]],"warnings":[]}
The WebAPI cookbook documentation has an example of using Python with the report action to process and format the response.
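Along those lines, here is a minimal Python sketch of the report action that sums the returned matrix into a single total (host, account, project, build, and token are placeholders; parameter names follow the curl examples above):

import json
import requests

data = {
    "action": "report",
    "user": "my_account",
    "project": "my_project",
    "build": "build_1",
    "x": "Category",
    "y": "Component",
    "filterQuery": "status:Analyze state:New,Existing",
    "ltoken": "xxxx",  # placeholder login token
}
resp = requests.post("http://localhost:8080/review/api", data=data)
resp.raise_for_status()

report = json.loads(resp.text)
# "data" is a matrix of counts: one row per x value, one column per y value.
total = sum(sum(row) for row in report["data"])
print("Total open issues: %d" % total)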