How to GET the list of Dependabot alerts via the GitHub API?

How can I GET the list of dependabot alerts available at https://github.com/{user}/{repo}/security/dependabot?page=1&q=is%3Aopen via the GitHub API?
I searched through the documentation but couldn't find anything there.
Thanks!

There is a RepositoryVulnerabilityAlert object available in the GraphQL API.
For example, for a specific repository, you can get all the alerts with the following query (check it out in the explorer):
{
  repository(name: "repo-name", owner: "repo-owner") {
    vulnerabilityAlerts(first: 100) {
      nodes {
        createdAt
        dismissedAt
        securityVulnerability {
          package {
            name
          }
          advisory {
            description
          }
        }
      }
    }
  }
}
It also returns alerts that were dismissed, which can be spotted using the dismissedAt field. But there doesn't seem to be a way to filter only "active" alerts.
Sample output:
{
  "data": {
    "repository": {
      "vulnerabilityAlerts": {
        "nodes": [
          {
            "createdAt": "2018-03-05T19:13:26Z",
            "dismissedAt": null,
            "securityVulnerability": {
              "package": {
                "name": "moment"
              },
              "advisory": {
                "description": "Affected versions of `moment` are vulnerable to a low severity regular expression denial of service when parsing dates as strings.\n\n\n## Recommendation\n\nUpdate to version 2.19.3 or later."
              }
            }
          },
          ....
        ]
      }
    }
  }
}
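Outside the explorer, the same query can be sent as a plain HTTP POST to the GraphQL endpoint, authenticated with a personal access token. A minimal sketch (the token is a placeholder, and the query is abbreviated to two fields):
POST https://api.github.com/graphql
Authorization: bearer <your-personal-access-token>
Content-Type: application/json

{
  "query": "{ repository(name: \"repo-name\", owner: \"repo-owner\") { vulnerabilityAlerts(first: 100) { nodes { createdAt dismissedAt } } } }"
}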

Related

How to filter issues created by a GitHub App using the GraphQL API?

I'm trying to migrate from GitHub's REST API to their GraphQL API in my GitHub bot. I want to filter open issues created by my bot on this repository.
I've tried the following queries:
query ListOpenIssues {
  repository(name: "pacstall-programs", owner: "pacstall") {
    issues(last: 100, filterBy: {states: OPEN, createdBy: "app/pacstall-pacbot"}) {
      nodes {
        number
        title
        url
      }
    }
  }
}

query ListOpenIssues {
  repository(name: "pacstall-programs", owner: "pacstall") {
    issues(last: 100, filterBy: {states: OPEN, createdBy: "pacstall-pacbot"}) {
      nodes {
        number
        title
        url
      }
    }
  }
}
But both of them return
{
  "data": {
    "repository": {
      "issues": {
        "nodes": []
      }
    }
  }
}
How do I properly filter the issues created by my bot?
PS: I've seen this similar question, but it was created 3 years ago; since then GitHub's GraphQL API has changed and now supports the createdBy field for filtering issues.

MongoDB Trigger - Match Expression

I wrote a trigger to listen for the update operation type on collection1. This is a huge transactional collection and the number of updates per second is high, which often caused the trigger to go into the 'Suspended' status.
I thought of using the 'Match Expression' option under the ADVANCED section on the trigger creation page, where you can write a match block so that the trigger fires only on events matching the filter. The problem is that I am not able to use any MongoDB clause/operator there.
Working Code:
{
  "updateDescription.updatedFields": {
    "status": "blocked"
  }
}
Not Working Code:
{
  "updateDescription.updatedFields": {
    "status": {"$in": ["blocked", "non_blocked"]}
  }
}
This is how your match expression should work:
{
  "$or": [
    {
      "updateDescription.updatedFields": {
        "$eq": {
          "status": "blocked"
        }
      }
    },
    {
      "updateDescription.updatedFields": {
        "$eq": {
          "status": "non_blocked"
        }
      }
    }
  ]
}
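Since the match expression is evaluated like a $match stage against the change event, dot notation into the updated fields may also work. An untested, more compact alternative, assuming status is updated as a top-level field:
{
  "updateDescription.updatedFields.status": {
    "$in": ["blocked", "non_blocked"]
  }
}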

Posting Two Consecutive Commands to Solr - Data Missing from 1st Call

I am sending the following commands to
my $url = "http://xxxxx/solr/inventory/update?commitWithin=1000";
I am using Perl to send them to a Solr setup on another server.
Please excuse the formatting. I really did try.
Thanks
Mike
RESULTING DATA - The data from the first command is not here. All subsequent calls are.
{
  "responseHeader": {
    "status": 0,
    "QTime": 0,
    "params": {
      "q": "*:*",
      "fq": "id:3-159682",
      "_": "1529984183431"
    }
  },
  "response": {
    "numFound": 1,
    "start": 0,
    "docs": [
      {
        "checklist_id": 249746,
        "brand_s": "Pinnacle",
        "featured": "",
        "sf_set_sort": "Baseball1992Pinnacle",
        "sf_set_sort_s": "Baseball1992Pinnacle",
        "sport_s": "Baseball",
        "cardnumber": "308",
        "issue_s": "",
        "id": "3-159682",
        "year_s": "1992",
        "team": "Los Angeles Dodgers",
        "set_name_s": "",
        "has_image": 1,
        "amazon_sku": "159682",
        "amazon_sync": 1,
        "sf_id": 378827,
        "sf_ending_time": 2222222222,
        "sf_sort_id": 199230875,
        "sf_listing_type": "buy",
        "shopify_id": "1302493397094",
        "_version_": 1604345060355211264
      }
    ]
  }
}
COMMANDS AND RESPONSES
[
  {
    "inv_location": "",
    "ean": "",
    "site_id": "3",
    "category_id": [
      "1",
      "55",
      "2162220",
      "2715086",
      "306",
      "2352370",
      "2413461"
    ],
    "cp_id": "159682",
    "isbn": "",
    "id": "3-159682",
    "consigner": "",
    "upc_code": "0",
    "quantity": "1",
    "created_date": "2018-06-26T10:17:55Z",
    "mpn": "",
    "description": "",
    "inv_num": "",
    "cp_listing_type": "1",
    "price": "0.69",
    "title": "1992 Pinnacle #308 Darryl Strawberry NM-MT ",
    "live_status": ""
  }
]
Success: {
  "responseHeader": {
    "status": 0,
    "QTime": 1
  }
}
[
  {
    "checklist_id": "249746",
    "brand_s": "Pinnacle",
    "featured": "",
    "sf_set_sort": "Baseball1992Pinnacle",
    "sport_s": "Baseball",
    "cardnumber": "308",
    "issue_s": "",
    "id": "3-159682",
    "year_s": "1992",
    "team": "Los Angeles Dodgers",
    "set_name_s": ""
  }
]
Success: {
  "responseHeader": {
    "status": 0,
    "QTime": 1
  }
}
[
  {
    "has_image": {
      "set": "1"
    },
    "id": "3-159682"
  }
]
Success: {
  "responseHeader": {
    "status": 0,
    "QTime": 1
  }
}
[
  {
    "amazon_sku": {
      "set": "159682"
    },
    "amazon_sync": {
      "set": "1"
    },
    "id": "3-159682"
  }
]
Success: {
  "responseHeader": {
    "status": 0,
    "QTime": 1
  }
}
[
  {
    "sf_id": {
      "set": "378827"
    },
    "sf_ending_time": {
      "set": "2222222222"
    },
    "sf_sort_id": {
      "set": "199230875"
    },
    "id": "3-159682",
    "sf_listing_type": {
      "set": "buy"
    }
  }
]
Success: {
  "responseHeader": {
    "status": 0,
    "QTime": 1
  }
}
[
  {
    "id": "3-159682",
    "shopify_id": {
      "set": "1302493397094"
    }
  }
]
Success: {
  "responseHeader": {
    "status": 0,
    "QTime": 1
  }
}
The two documents you submit have the same id. The second document overwrites the first one, since the id has to uniquely identify a document. If the id doesn't do that, change which field is defined as the uniqueKey, or use a UUID generator to get a new id each time the document is submitted. The latter will cause issues if you're trying to do updates without having the new UUID readily available, though.
Another solution would be to prefix the id with the document type, or (depending on your use case) merge everything into a single document before indexing.
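For example, hypothetical type-prefixed ids (the inventory-/checklist- prefixes here are made up for illustration) would keep the two documents distinct:
[
  { "id": "inventory-3-159682", "price": "0.69" },
  { "id": "checklist-3-159682", "brand_s": "Pinnacle" }
]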
The answer to my problem was that the 1st and 2nd commands were full updates, and the rest were partial updates (using "set").
I changed the second command to a format like this:
[
  {
    "checklist_id": {"set": "249725"},
    "brand_s": {"set": "Pinnacle"},
    "featured": {"set": ""},
    "sf_set_sort": {"set": "Baseball1992Pinnacle"},
    "sport_s": {"set": "Baseball"},
    "cardnumber": {"set": "287"},
    "issue_s": {"set": ""},
    "id": "3-159694",
    "year_s": {"set": "1992"},
    "team": {"set": "Milwaukee Brewers"},
    "set_name_s": {"set": ""}
  }
]
And all was right with the code, no longer overwriting the first document.
Maybe this will help someone else!
Thanks
Mike

How do I use the Echonest API Start Parameter?

I am following the examples located on the following page:
http://developer.echonest.com/docs/v4/genre.html#artists
I'd like to offset the results from a search for artists by genre. The example they provide on the page lists "results" and "start"; I assume "start" is the offset. The example query they provide is:
http://developer.echonest.com/api/v4/genre/artists?api_key=JEXNQ223JXCCQEINO&format=json&results=5&start=0&bucket=hotttnesss&name=jazz
But I get an error stating that "start" is an invalid parameter. Has anyone been able to use the "start" parameter with success?
This looks like a bug in their example. If you read the documentation, "start" and "results" are not valid parameters for the genre/artists endpoint. Changing the example to remove these two parameters works.
Calling:
http://developer.echonest.com/api/v4/genre/artists?api_key=*********&format=json&bucket=hotttnesss&name=jazz
(replace the *** with your Key)
Yields:
{
  "response": {
    "status": {
      "version": "4.2",
      "code": 0,
      "message": "Success"
    },
    "artists": [
      { "name": "John Coltrane", "hotttnesss": 0.588225, "id": "ARIOZCU1187FB3A3DC" },
      { "name": "Thelonious Monk", "hotttnesss": 0.649332, "id": "AR9PLH11187FB58A87" },
      { "name": "Miles Davis", "hotttnesss": 0.697302, "id": "AR7RTGF1187FB38793" },
      { "name": "Miles Davis Quintet", "hotttnesss": 0.489603, "id": "AR5DF1C1187FB4E94C" },
      { "name": "Cannonball Adderley", "hotttnesss": 0.560071, "id": "ARQ5TM41187FB3E97D" },
      { "name": "Wayne Shorter", "hotttnesss": 0.548165, "id": "ARO3CKW1187B9905A8" },
      { "name": "Wynton Marsalis", "hotttnesss": 0.566708, "id": "ARV3VEI1187B9AD5C9" },
      { "name": "Sonny Rollins", "hotttnesss": 0.577764, "id": "AR6Q4T91187B995616" },
      { "name": "The Dave Brubeck Quartet", "hotttnesss": 0.570099, "id": "ARLKR161187FB50694" },
      { "name": "Kenny Burrell", "hotttnesss": 0.543388, "id": "ARQYH461187FB3E975" },
      { "name": "Stan Getz", "hotttnesss": 0.559735, "id": "ARMGQLA1187B9AEBF8" },
      { "name": "Dizzy Gillespie", "hotttnesss": 0.561122, "id": "ARXA17J1187FB3B507" },
      { "name": "Yusef Lateef", "hotttnesss": 0.513261, "id": "ART95BW1187FB3AF79" },
      { "name": "Bill Evans", "hotttnesss": 0.581819, "id": "ARTLL9E1187FB4436F" },
      { "name": "Freddie Hubbard", "hotttnesss": 0.524227, "id": "ARU1K2U1187FB48529" }
    ]
  }
}
As far as I can tell, there isn't a way to page through the artists associated with a genre...

Elasticsearch: Find substring match

I want to perform both exact word match and partial word/substring match. For example, if I search for "men's shaver", I should be able to find "men's shaver" in the result. But if I search for "en's shaver", I should also be able to find "men's shaver" in the result.
I am using the following settings and mappings:
Index settings:
PUT /my_index
{
  "settings": {
    "number_of_shards": 1,
    "analysis": {
      "filter": {
        "autocomplete_filter": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 20
        }
      },
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "autocomplete_filter"
          ]
        }
      }
    }
  }
}
Mappings:
PUT /my_index/my_type/_mapping
{
  "my_type": {
    "properties": {
      "name": {
        "type": "string",
        "index_analyzer": "autocomplete",
        "search_analyzer": "standard"
      }
    }
  }
}
Insert records:
POST /my_index/my_type/_bulk
{ "index": { "_id": 1 }}
{ "name": "men's shaver" }
{ "index": { "_id": 2 }}
{ "name": "women's shaver" }
Query:
1. To search by exact phrase match --> "men's"
POST /my_index/my_type/_search
{
  "query": {
    "match": {
      "name": "men's"
    }
  }
}
The above query returns "men's shaver" in the results.
2. To search by Partial word match --> "en's"
POST /my_index/my_type/_search
{
  "query": {
    "match": {
      "name": "en's"
    }
  }
}
The above query DOES NOT return anything.
I have also tried the following query:
POST /my_index/my_type/_search
{
  "query": {
    "wildcard": {
      "name": {
        "value": "%en's%"
      }
    }
  }
}
Still not getting anything.
I figured it is because of the "edge_ngram" filter on the index, which cannot match a partial word/substring.
I tried an "n-gram" filter as well, but it slows down the search a lot.
Please suggest how to achieve both exact phrase match and partial phrase match using the same index settings.
To search for partial field matches and exact matches, it will work better if you define the fields as "not analyzed" or as keywords (rather than text), then use a wildcard query.
See also this.
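On recent Elasticsearch versions, that mapping might look like the sketch below (on the 1.x-era setup implied by index_analyzer above, the equivalent would be a string field with "index": "not_analyzed"):
PUT /my_index/_mapping
{
  "properties": {
    "name": { "type": "keyword" }
  }
}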
To use a wildcard query, append * on both ends of the string you are searching for:
POST /my_index/my_type/_search
{
  "query": {
    "wildcard": {
      "name": {
        "value": "*en's*"
      }
    }
  }
}
To use with case insensitivity, use a custom analyzer with a lowercase filter and keyword tokenizer.
Custom Analyzer:
"custom_analyzer": {
"tokenizer": "keyword",
"filter": ["lowercase"]
}
Make the search string lowercase as well: if you get the search string AsD, change it to *asd*.
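Wired into the index settings and applied to the name field from the question, the analyzer might look like this (a sketch assuming a recent, single-type Elasticsearch):
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "custom_analyzer": {
          "tokenizer": "keyword",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": { "type": "text", "analyzer": "custom_analyzer" }
    }
  }
}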
The answer given by @BlackPOP will work, but it uses the wildcard approach, which is not preferred: it has performance issues and, if abused, can create a huge domino effect (performance degradation) in the Elastic cluster.
I have written a detailed blog on partial search/autocomplete covering the latest options available in Elasticsearch as of today (Dec 2020) with performance in mind. For more trade-off information, please refer to this answer.
IMHO a better approach is to use an n-gram tokenizer customized to your use case: the tokens needed for the search terms are generated at index time, so searching is faster. The trade-off is a bigger index, but storage is not that costly, and you get more control over exactly how substring search works.
Index size can also be controlled by being conservative when defining the min and max gram in the tokenizer settings.
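A sketch of such a setup; the substring_analyzer/substring_tokenizer names and the gram sizes are illustrative, not from the original post:
PUT /my_index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "substring_tokenizer": {
          "type": "ngram",
          "min_gram": 3,
          "max_gram": 4,
          "token_chars": ["letter", "digit"]
        }
      },
      "analyzer": {
        "substring_analyzer": {
          "tokenizer": "substring_tokenizer",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "analyzer": "substring_analyzer",
        "search_analyzer": "standard"
      }
    }
  }
}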
By searching with any string or substring Use:
query: {
or: [{
match_phrase_prefix: {
name: str
}
}, {
match_phrase_prefix: {
surname: str
}
}]
}
Happy coding with Elastic Search....