The GitHub page for web-interactive issue filtering describes commenter and involved filters.
However, the GitHub issues API page lists the following parameters, which don't seem to include commenter:
+-----------+-------------------+
| Name      | Type              |
+-----------+-------------------+
| milestone | integer or string |
| state     | string            |
| assignee  | string            |
| creator   | string            |
| mentioned | string            |
| labels    | string            |
| sort      | string            |
| direction | string            |
| since     | string            |
+-----------+-------------------+
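For reference, a minimal example of calling the list endpoint with the documented parameters (the owner, repository, and user names are placeholders):
# OWNER, REPO, and someuser are placeholders
curl -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/OWNER/REPO/issues?state=open&creator=someuser&labels=bug"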
I'm trying to merge multiple Hive tables using Spark, where some of the columns with the same name have different data types, especially string and bigint.
My final table (hiveDF) should have a schema like the one below:
+--------------------------+------------+----------+--+
| col_name | data_type | comment |
+--------------------------+------------+----------+--+
| announcementtype | bigint | |
| approvalstatus | string | |
| capitalrate | double | |
| cash | double | |
| cashinlieuprice | double | |
| costfactor | double | |
| createdby | string | |
| createddate | string | |
| currencycode | string | |
| declarationdate | string | |
| declarationtype | bigint | |
| divfeerate | double | |
| divonlyrate | double | |
| dividendtype | string | |
| dividendtypeid | bigint | |
| editedby | string | |
| editeddate | string | |
| exdate | string | |
| filerecordid | string | |
| frequency | string | |
| grossdivrate | double | |
| id | bigint | |
| indicatedannualdividend | string | |
| longtermrate | double | |
| netdivrate | double | |
| newname | string | |
| newsymbol | string | |
| note | string | |
| oldname | string | |
| oldsymbol | string | |
| paydate | string | |
| productid | bigint | |
| qualifiedratedollar | double | |
| qualifiedratepercent | double | |
| recorddate | string | |
| sharefactor | double | |
| shorttermrate | double | |
| specialdivrate | double | |
| splitfactor | double | |
| taxstatuscodeid | bigint | |
| lastmodifieddate | timestamp | |
| active_status | boolean | |
+--------------------------+------------+----------+--+
This final table (hiveDF) schema can be made with the JSON below:
{
"id": -2147483647,
"productId": 150816,
"dividendTypeId": 2,
"dividendType": "Dividend/Capital Gain",
"payDate": null,
"exDate": "2009-03-25",
"oldSymbol": "ILAAX",
"newSymbol": "ILAAX",
"oldName": "",
"newName": "",
"grossDivRate": 0.115,
"shortTermRate": 0,
"longTermRate": 0,
"splitFactor": 0,
"shareFactor": 0,
"costFactor": 0,
"cashInLieuPrice": 0,
"cash": 0,
"note": "0",
"createdBy": "Yahoo",
"createdDate": "2009-08-03T06:44:19.677-05:00",
"editedBy": "Yahoo",
"editedDate": "2009-08-03T06:44:19.677-05:00",
"netDivRate": null,
"divFeeRate": null,
"specialDivRate": null,
"approvalStatus": null,
"capitalRate": null,
"qualifiedRateDollar": null,
"qualifiedRatePercent": null,
"declarationDate": null,
"declarationType": null,
"currencyCode": null,
"taxStatusCodeId": null,
"announcementType": null,
"frequency": null,
"recordDate": null,
"divOnlyRate": 0.115,
"fileRecordID": null,
"indicatedAnnualDividend": null
}
I am doing something like this:
var hiveDF = spark.sqlContext.sql("select * from final_destination_tableName")
var newDataDF = spark.sqlContext.sql("select * from incremental_table_1 where id > 866000")
My incremental table (newDataDF) has some columns with different data types. I have around 10 incremental tables, and a column that is bigint in one table may be string in another, so I can't be sure which type to cast to. Typecasting may be easy, but I am not sure which type to use since there are multiple tables. I am looking for an approach that works without typecasting.
For example, one incremental table looks like this:
+--------------------------+------------+----------+--+
| col_name | data_type | comment |
+--------------------------+------------+----------+--+
| announcementtype | string | |
| approvalstatus | string | |
| capitalrate | string | |
| cash | double | |
| cashinlieuprice | double | |
| costfactor | double | |
| createdby | string | |
| createddate | string | |
| currencycode | string | |
| declarationdate | string | |
| declarationtype | string | |
| divfeerate | string | |
| divonlyrate | double | |
| dividendtype | string | |
| dividendtypeid | bigint | |
| editedby | string | |
| editeddate | string | |
| exdate | string | |
| filerecordid | string | |
| frequency | string | |
| grossdivrate | double | |
| id | bigint | |
| indicatedannualdividend | string | |
| longtermrate | double | |
| netdivrate | string | |
| newname | string | |
| newsymbol | string | |
| note | string | |
| oldname | string | |
| oldsymbol | string | |
| paydate | string | |
| productid | bigint | |
| qualifiedratedollar | string | |
| qualifiedratepercent | string | |
| recorddate | string | |
| sharefactor | double | |
| shorttermrate | double | |
| specialdivrate | string | |
| splitfactor | double | |
| taxstatuscodeid | string | |
| lastmodifieddate | timestamp | |
| active_status | boolean | |
+--------------------------+------------+----------+--+
I'm doing the union of the tables something like this:
var combinedDF = hiveDF.unionAll(newDataDF)
but no luck. I also tried to apply the final schema as below, but again no luck:
val rows = newDataDF.rdd
val newDataDF2 = spark.sqlContext.createDataFrame(rows, hiveDF.schema)
var combinedDF = hiveDF.unionAll(newDataDF2)
combinedDF.coalesce(1).write.mode(SaveMode.Overwrite).option("orc.compress", "snappy").orc("/apps/hive/warehouse/" + database + "/" + tableLower + "_temp")
As per this, I tried the below:
var combinedDF = sparkSession.read.json(hiveDF.toJSON.union(newDataDF.toJSON).rdd)
Finally, I am trying to write into the table as above, but with no luck. Please help.
I also faced this situation while merging an incremental table with the existing table. There are generally two cases to handle:
1. Incremental data with extra columns:
This can be solved by the normal merging process you are trying here.
2. Incremental data with the same column names but a different schema:
This is the tricky one. One easy solution is to convert both DataFrames with toJSON and do a union:
hiveDF.toJSON.union(newDataDF.toJSON). This, however, triggers JSON schema merging and will change the existing schema. For example, if the column is a: Long in the table and a: String in the incremental table, the final schema after merging will be a: String. There is no way to avoid this if you want to do the JSON union.
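For example, a minimal sketch of that widening, assuming a SparkSession named spark as in the question; the tiny frames and column names are illustrative:
import spark.implicits._

val df1 = Seq((1L, "x")).toDF("a", "b")   // a is a Long here (illustrative data)
val df2 = Seq(("2", "y")).toDF("a", "b")  // a is a String here (illustrative data)
val widened = spark.read.json(df1.toJSON.union(df2.toJSON).rdd)
widened.printSchema()                     // a comes back as string after the JSON round-trip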
The alternative is a strict schema check for the incremental data: test whether the incremental table has the same schema as the hive table, and if the schemas differ, don't merge.
This is, however, a little too stringent, since it is pretty hard to enforce a schema on real-time data.
So the way I solved this is to have a separate enrichment process before merging. The process checks the schema, and if an incoming column can be upgraded/downgraded to the current hive table schema, it does that.
Essentially it iterates over the incoming delta and converts each row to the correct schema. This is a little expensive but provides a very good guarantee of data correctness. If the process fails to convert a row, I sideline that row and raise an alarm so that the data can be validated manually for bugs in the upstream system that generates it.
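A minimal sketch of that kind of column-wise alignment, assuming the DataFrames from the question; the helper name and the fill-missing-columns-with-null choice are illustrative, not the original enrichment code:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, lit}
import org.apache.spark.sql.types.StructType

// Illustrative helper: cast every column of the incoming delta to the type the hive
// table expects; columns missing from the delta are filled with nulls so the union lines up.
def alignToSchema(df: DataFrame, target: StructType): DataFrame = {
  val projected = target.fields.map { f =>
    if (df.columns.contains(f.name)) col(f.name).cast(f.dataType).alias(f.name)
    else lit(null).cast(f.dataType).alias(f.name)
  }
  df.select(projected: _*)
}

val alignedDF  = alignToSchema(newDataDF, hiveDF.schema)
val combinedDF = hiveDF.union(alignedDF)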
This is the kind of check I use to validate whether the two schemas are mergeable or not.
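A minimal sketch of such a check, assuming Spark StructType schemas; note that Cast.canCast lives in Spark's internal Catalyst package, so treat this as illustrative rather than a stable public API:
import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.catalyst.expressions.Cast

// Illustrative check: two schemas are treated as mergeable when every incoming column
// exists in the target schema and its type either matches or can be cast to the target type.
def isMergeable(incoming: StructType, target: StructType): Boolean =
  incoming.fields.forall { f =>
    target.fields.find(_.name.equalsIgnoreCase(f.name)) match {
      case Some(t) => f.dataType == t.dataType || Cast.canCast(f.dataType, t.dataType)
      case None    => false // column not present in the hive table
    }
  }

if (isMergeable(newDataDF.schema, hiveDF.schema)) {
  // safe to align the delta and union it into the hive table
}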
I'm currently coding a fitness app that lets a user record all of their personal records.
I'm really new to Cloud Firestore from Firebase, so I don't really know how to structure the database.
In my mind, I have two options:
OPTION 1
Users
|
+--UserID
| |
| +--Name
| +--Phone
| +--etc..
|
|
Users-records
|
+--UserID
| |
| +--RecordName
| | |
| | +--recordValue
| | +--recordType
| |
| +--RecordName
| | +--recordValue
| | +--recordType
OPTION 2
Users
|
+--UserID
| |
| +--Name
| +--Phone
| +--etc..
| +--Records
| | |
| | +--RecordName
| | | |
| | | +--recordValue
| | | +--recordType
| | +--RecordName
| | | |
| | | +--recordValue
| | | +--recordType
The questions are: Do I have to split the records into a separate collection from the user?
Do you think this architecture is well designed for the purpose (i.e. recording users' personal records)?
Thank you very much
Your database structure really depends on how you are going to use it. Keep in mind that whenever you observe a node, you are also observing all of its child nodes.
So I'd probably go with something closer to Option 2, maybe like this:
Users
|
+--UserID
| |
| +--UserInfo
| | |
| | +--Name
| | +--Phone
| | +--etc..
| |
| +--Records
| | |
| | +--RecordName
| | | |
| | | +--recordValue
| | | +--recordType
| | +--RecordName
| | | |
| | | +--recordValue
| | | +--recordType
I'd choose this because I'd imagine you'd want to get all of the UserInfo at once, so we can observe that "UserInfo" node and get all of its children: name, phone, etc.
Then I'd think you'd also want to get all of the records at once, so we can observe that "Records" node and get all of that data.
Additionally, if you wanted, you could get everything at once by observing the UserID!
However, if you were going to fetch a list of all the users, then you definitely don't want all this data in one spot, and this design wouldn't work, because that is a lot of data to observe just to list the users.
In summary: Choose an option which makes it easiest for you to get what you need, without getting extra data you don't want!
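For concreteness, a minimal sketch of reading just one user's records under this layout, assuming the Firestore Java client (com.google.cloud:google-cloud-firestore) called from Scala and that Records is a subcollection; the collection, document, and field names are illustrative:
import com.google.cloud.firestore.{Firestore, FirestoreOptions}
import scala.jdk.CollectionConverters._

val db: Firestore = FirestoreOptions.getDefaultInstance.getService

// Fetch only the Records subcollection of one user, without pulling the whole profile.
// "Users", "someUserId", "Records", and the field names are illustrative.
val records = db.collection("Users")
  .document("someUserId")
  .collection("Records")
  .get()  // ApiFuture[QuerySnapshot]
  .get()  // block for the result
  .getDocuments
  .asScala

records.foreach { r =>
  println(s"${r.getId}: ${r.getString("recordValue")} (${r.getString("recordType")})")
}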
I have gone through numerous posts on triggering a Jenkins build when a PR is raised in GitHub.
I have checked the GitHub Pull Request Builder option in the Jenkins job and also provided ${sha1} as the branch.
Apart from the above, I have added a webhook and the Jenkins GitHub plugin as a service in my repo.
Is anything else being missed here? I don't see the build getting triggered when a PR is raised.
You can use the Generic Webhook Trigger Plugin to do that.
Set up a webhook in GitHub.
Configure the Generic Webhook Trigger Plugin with a variable named action with the expression $.action.
Configure the filter text as $action and the filter regexp as: ^(opened|reopened|synchronize)$
Now this job will run any time a PR is opened, re-opened or new commits are pushed.
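If the job is a pipeline, the same configuration can be expressed in the Jenkinsfile; a minimal sketch (the token value is a placeholder you would also put in the webhook URL):
pipeline {
  agent any
  triggers {
    GenericTrigger(
      genericVariables: [
        [key: 'action', value: '$.action']
      ],
      token: 'my-pr-token',   // placeholder; match it in the GitHub webhook URL
      causeString: 'Pull request $action',
      regexpFilterText: '$action',
      regexpFilterExpression: '^(opened|reopened|synchronize)$'
    )
  }
  stages {
    stage('Build') {
      steps {
        echo "Triggered by PR action: ${env.action}"
      }
    }
  }
}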
You can also pick other values from the webhook like:
| variable | expression | expressionType | defaultValue | regexpFilter |
| action | $.action | JSONPath | | |
| pr_id | $.pull_request.id | JSONPath | | |
| pr_state | $.pull_request.state | JSONPath | | |
| pr_title | $.pull_request.title | JSONPath | | |
| pr_from_ref | $.pull_request.head.ref | JSONPath | | |
| pr_from_sha | $.pull_request.head.sha | JSONPath | | |
| pr_from_git_url | $.pull_request.head.repo.git_url | JSONPath | | |
| pr_to_ref | $.pull_request.base.ref | JSONPath | | |
| pr_to_sha | $.pull_request.base.sha | JSONPath | | |
| pr_to_git_url | $.pull_request.base.repo.git_url | JSONPath | | |
| repo_git_url | $.repository.git_url | JSONPath | | |
There is a test case showing this feature here: https://github.com/jenkinsci/generic-webhook-trigger-plugin/blob/master/src/test/resources/org/jenkinsci/plugins/gwt/bdd/github/github-pull-request.feature
I can see that it is possible to add metadata to a Rackspace virtual machine instance.
I want to get a list of running instances, filtered by a particular metatag value.
I can't see how to do so in the documentation, however.
Is it possible?
You should be able to do so using the openstack client... but it depends on which metatag you're interested in.
You can get a list of all servers:
openstack server list
This will output something like:
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
| ID | Name | Status | Networks |
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
| 97606ae9-7f18-4a3c-903a-1583d446119b | trysmallwin | ERROR | |
| cb78b8d5-2f03-4a3f-ab26-f389acbd0b76 | Win-try again | ERROR | public=2607:f298:5:101d:f816:3eff:fe9e:5cd4, 208.113.133.90, 2607:f298:5:101d:f816:3eff:fe36:da45, |
| | | | 208.113.133.93, 2607:f298:5:101d:f816:3eff:fe40:57d5, 208.113.133.95 |
| 040751d1-c4c5-47aa-8dec-1d69a468be1c | hnxhdkwskrvwvdwr | ACTIVE | public=2607:f298:5:101d:f816:3eff:fe60:324, 208.113.130.52 |
+--------------------------------------+------------------+--------+-----------------------------------------------------------------------------------------------------------+
Note the ID of the server and investigate further:
openstack server show 040751d1-c4c5-47aa-8dec-1d69a468be1c
+--------------------------------------+------------------------------------------------------------+
| Field | Value |
+--------------------------------------+------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | iad-2 |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2016-07-26T17:32:01.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | public=2607:f298:5:101d:f816:3eff:fe60:324, 208.113.130.52 |
| config_drive | True |
| created | 2016-07-26T17:31:51Z |
| flavor | gp1.semisonic (50) |
| hostId | e1efd75d1e8f6a7f5bb228a35db13647281996087d39c65af8ce83d9 |
| id | 040751d1-c4c5-47aa-8dec-1d69a468be1c |
| image | Ubuntu-14.04 (03f89ff2-d66e-49f5-ae61-656a006bbbe9) |
| key_name | stef |
| name | hnxhdkwskrvwvdwr |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| project_id | d2fb6996496044158cf977c2129c8660 |
| properties | |
| security_groups | [{u'name': u'default'}] |
| status | ACTIVE |
| updated | 2016-07-26T17:32:01Z |
| user_id | 5b2ca246f39a425f9a833460bf322603 |
+--------------------------------------+------------------------------------------------------------+
openstack -f json will output the same information, but in JSON format that you can more easily manipulate programmatically.
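For example, one way to filter on a metadata value from the shell (jq is assumed to be installed, and "mytag" is a placeholder for the metadata value you care about):
for id in $(openstack server list -f value -c ID); do
  # print only the IDs of servers whose properties mention the tag; "mytag" is a placeholder
  openstack server show "$id" -f json | jq -r 'select(.properties | tostring | contains("mytag")) | .id'
done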
HTH
When setting the location of an event in Calendar on a Mac, it offers suggestions which, when clicked, embed a map into the event. Is it possible to embed a map into a .ics file so that the map shows once imported? It seems that simply setting LOCATION when creating the calendar file isn't sufficient.
I've scanned RFC 2445 but can't find anything to help.
My assumption is that to embed a map into the event the user needs to specifically select a location from the suggestions offered when typing and that this can't be done automatically on import. Is my assumption correct?
Short answer: no, you cannot embed a map in the .ics file, but your calendar renderer could do it by parsing the .ics file.
Long answer:
RFC 2445 was superseded by RFC 5545.
RFC 5545 specifies, in section 8.3.4, the following data types, none of which allow you to embed a map in an .ics file:
+-----------------+---------+--------------------------+
| Value Data Type | Status | Reference |
+-----------------+---------+--------------------------+
| BINARY | Current | RFC 5545, Section 3.3.1 |
| | | |
| BOOLEAN | Current | RFC 5545, Section 3.3.2 |
| | | |
| CAL-ADDRESS | Current | RFC 5545, Section 3.3.3 |
| | | |
| DATE | Current | RFC 5545, Section 3.3.4 |
| | | |
| DATE-TIME | Current | RFC 5545, Section 3.3.5 |
| | | |
| DURATION | Current | RFC 5545, Section 3.3.6 |
| | | |
| FLOAT | Current | RFC 5545, Section 3.3.7 |
| | | |
| INTEGER | Current | RFC 5545, Section 3.3.8 |
| | | |
| PERIOD | Current | RFC 5545, Section 3.3.9 |
| | | |
| RECUR | Current | RFC 5545, Section 3.3.10 |
| | | |
| TEXT | Current | RFC 5545, Section 3.3.11 |
| | | |
| TIME | Current | RFC 5545, Section 3.3.12 |
| | | |
| URI | Current | RFC 5545, Section 3.3.13 |
| | | |
| UTC-OFFSET | Current | RFC 5545, Section 3.3.14 |
+-----------------+---------+--------------------------+
To achieve what you want, your calendar renderer needs to parse either the calendar property LOCATION (see 3.8.1.7), which is a string, or, better if you have it, the GEO property (see 3.8.1.6), which is two floats giving latitude and longitude.
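For example, a minimal event carrying both properties that a map-aware client could render (the address, coordinates, and identifiers are placeholders):
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//example//map-demo//EN
BEGIN:VEVENT
UID:20240102T100000Z-demo@example.com
DTSTAMP:20240101T120000Z
DTSTART:20240102T100000Z
DTEND:20240102T110000Z
SUMMARY:Team meeting
LOCATION:1 Infinite Loop\, Cupertino\, CA
GEO:37.331741;-122.030333
END:VEVENT
END:VCALENDAR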