Differentiate dropdown multi-select columns (without options defined) from regular text columns - smartsheet-api

Is there any way to differentiate columns of type dropdown multi-select from regular text columns?
This is supposed to be a multi-select dropdown list without any options defined:
"id": 5414087443146628,
"version": 2,
"index": 2,
"title": "Column3",
"type": "TEXT_NUMBER",
"validation": false,
"width": 150
The same question goes for a multi-contact list without any contact options defined.

If you think of multi-contact or multi-dropdown as new versions of the various GET requests, then it's easier to return the correct values. For multi-dropdown, you use a combination of the query parameters "level=3" and "include=objectValue"; then you'll see the column type change to MULTI_PICKLIST instead of TEXT_NUMBER. (The TEXT_NUMBER value is returned by default to maintain backwards compatibility.)
So, essentially, your request would look something like GET /sheets/{sheetId}?level=3&include=objectValue.

To test the scenario you've described, I created the following sheet structure in Smartsheet, where the column names indicate the type of each column:
Then I used Postman to issue a Get Sheet request for that sheet:
GET https://api.smartsheet.com/2.0/sheets/5831916227192708
The columns portion of the API response looks like this:
{
  "id": 5831916227192708,
  ...
  "columns": [
    {
      "id": 1256050323154820,
      "version": 0,
      "index": 0,
      "title": "Description",
      "type": "TEXT_NUMBER",
      "primary": true,
      "validation": false,
      "width": 124
    },
    {
      "id": 5759649950525316,
      "version": 0,
      "index": 1,
      "title": "Type=Text/Number",
      "type": "TEXT_NUMBER",
      "validation": false,
      "width": 128
    },
    {
      "id": 1323283741206404,
      "version": 0,
      "index": 2,
      "title": "Type=Dropdown (single select)",
      "type": "PICKLIST",
      "validation": false,
      "width": 111
    },
    {
      "id": 7741495861110660,
      "version": 2,
      "index": 3,
      "title": "Type=Dropdown (multiple select)",
      "type": "TEXT_NUMBER",
      "validation": false,
      "width": 113
    },
    {
      "id": 3048711514285956,
      "version": 0,
      "index": 4,
      "title": "Type=Contact List (single select)",
      "type": "CONTACT_LIST",
      "validation": false,
      "width": 122
    },
    {
      "id": 3992195570132868,
      "version": 1,
      "index": 5,
      "title": "Type=Contact List (multiple select)",
      "type": "TEXT_NUMBER",
      "validation": false,
      "width": 125
    }
  ],
  ...
}
In this response, we see the following:
If column type is specified as Text/Number, the type attribute value is TEXT_NUMBER
If column type is specified as Dropdown (single select), the type attribute value is PICKLIST
If column type is specified as Dropdown (multiple select), the type attribute value is TEXT_NUMBER
If column type is specified as Contact List (single select), the type attribute value is CONTACT_LIST
If column type is specified as Contact List (multiple select), the type attribute value is TEXT_NUMBER
Therefore, it doesn't seem possible to programmatically differentiate a Dropdown (multiple select) column from a Text/Number column, or a Contact List (multiple select) column from a Text/Number column, based on column metadata alone. IMO, it seems like a bug for the Dropdown (multiple select) and Contact List (multiple select) column types to return type: TEXT_NUMBER. Perhaps someone from Smartsheet can comment here to provide more insight into this behavior.

I did a few tests and level 3 isn't available for https://api.smartsheet.com/2.0/sheets/{sheetId}?level=3:
{
  "errorCode": 1018,
  "message": "The value '3' was not valid for the parameter 'level'.",
  "refId": "1godowa5cigf1"
}
However, I tried with level 2 and got the info:
https://api.smartsheet.com/2.0/sheets/{sheetId}?level=2&include=objectValue
Result for a multi-select dropdown list:
{
  "id": 5414087443146628,
  "version": 2,
  "index": 2,
  "title": "Column3",
  "type": "MULTI_PICKLIST",
  "options": [
    "a",
    "b"
  ],
  "validation": false,
  "width": 150
}
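For reference, here is a minimal sketch of how that detection could be automated once the query parameters are in place. The token and sheet ID are placeholders, and treating MULTI_CONTACT_LIST as the multi-contact analogue of MULTI_PICKLIST is an assumption on my part:
import requests

TOKEN = "<access-token>"       # placeholder
SHEET_ID = "5414087443146628"  # placeholder sheet ID

resp = requests.get(
    f"https://api.smartsheet.com/2.0/sheets/{SHEET_ID}",
    params={"level": "2", "include": "objectValue"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# With level=2&include=objectValue, multi-select columns report their real
# types instead of the backwards-compatible TEXT_NUMBER.
multi_select_columns = [
    col["title"]
    for col in resp.json()["columns"]
    if col["type"] in ("MULTI_PICKLIST", "MULTI_CONTACT_LIST")  # MULTI_CONTACT_LIST assumed
]
print(multi_select_columns)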

Related

JDBC sink topic with multiple structs to Postgres

I am trying to sink a few topics to a Postgres database. However, the topic schema defines an array at the top level, with multiple structs inside it. Auto-mapping does not work, and I cannot find any reference for how to handle this. I need all the structs because they are dependent types: the second struct references the first struct as a field.
Currently it breaks when hitting the second struct, stating that StatusChangeEvent (struct) has no mapping to a SQL column type. This is because auto.create makes a table (probably called ProcessStatus) from the first entry; then, when it hits the second entry, there is of course no matching column.
[
  {
    "type": "record",
    "name": "processStatus",
    "namespace": "company.some.process",
    "fields": [
      {
        "name": "code",
        "doc": "The code of the processStatus",
        "type": "string"
      },
      {
        "name": "name",
        "doc": "The name of the processStatus",
        "type": "string"
      },
      {
        "name": "description",
        "type": "string"
      },
      {
        "name": "isCompleted",
        "type": "boolean"
      },
      {
        "name": "isSuccessfullyCompleted",
        "type": "boolean"
      }
    ]
  },
  {
    "type": "record",
    "name": "StatusChangeEvent",
    "namespace": "company.some.process",
    "fields": [
      {
        "name": "contNumber",
        "type": "string"
      },
      {
        "name": "processId",
        "type": "string"
      },
      {
        "name": "processVersion",
        "type": "int"
      },
      {
        "name": "extProcessId",
        "type": [
          "null",
          "string"
        ],
        "default": null
      },
      {
        "name": "fromStatus",
        "type": "company.some.process.processStatus"
      },
      {
        "name": "toStatus",
        "doc": "The new status of the process",
        "type": "company.some.process.processStatus"
      },
      {
        "name": "changeDateTime",
        "type": {
          "type": "long",
          "logicalType": "timestamp-millis"
        }
      },
      {
        "name": "isPublic",
        "type": "boolean"
      }
    ]
  }
]
I am not using ksql at the moment. Which connector settings are suited for this task? If there is a ksql alternative it would be nice to know, but the current requirement is to use the JDBC connector.
I tried using Flatten, but it does not support struct fields that have a schema, which seems kind of weird. Aren't schemas the whole selling point of Connect with Kafka? Or is it more of a constraint you have to work around?
Aren't schemas the whole selling point of Connect with Kafka?
Yes, but Postgres (or the JDBC sink in general) doesn't really support nested objects within columns. For that, you're better off with a document database, for example via the MongoDB Sink Connector.
Which connector settings are suited for this task?
None, really, other than transforms. You could write your own transform if Flatten doesn't work.
You could try pre-defining your table to use JSONB for the two status columns; however, that's more of a workaround. A sketch of that idea follows.
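For what it's worth, here is a minimal sketch of that last suggestion, assuming the sink writes to a table named after the StatusChangeEvent record and auto.create is turned off. The connection settings are hypothetical, and whether the nested structs actually land in the JSONB columns still depends on your converter/transform chain, so treat this as illustrative only:
import psycopg2

# Hypothetical connection settings; adjust to your environment.
conn = psycopg2.connect("dbname=sinkdb user=postgres password=postgres host=localhost")
with conn, conn.cursor() as cur:
    # Pre-create the table the sink would otherwise auto-create, mapping the
    # two nested processStatus structs to JSONB columns.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS "StatusChangeEvent" (
            "contNumber"     TEXT    NOT NULL,
            "processId"      TEXT    NOT NULL,
            "processVersion" INTEGER NOT NULL,
            "extProcessId"   TEXT,
            "fromStatus"     JSONB,
            "toStatus"       JSONB,
            "changeDateTime" BIGINT  NOT NULL,
            "isPublic"       BOOLEAN NOT NULL
        )
    """)
conn.close()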

Coalesce value bound to object's key into parent's value

I have a PostgreSQL 12.x database. There is a column data in a table typename that contains JSON. The actual JSON data is not fixed to a particular structure; these are some examples:
{"emt": {"key": " ", "source": "INPUT"}, "id": 1, "fields": {}}
{"emt": {"key": "Stack Overflow", "source": "INPUT"}, "id": 2, "fields": {}}
{"emt": {"key": "https://www.domain.tld/index.html", "source": "INPUT"}, "description": {"key": "JSONB datatype", "source": "INPUT"}, "overlay": {"id": 5, "source": "bOv"}, "fields": {"id": 1, "description": "Themed", "recs ": "1"}}
Basically, what I'm trying to come up with is a (database migration) script that will find any object with the keys key and source, take the actual value of key, and assign it to the key the object was originally bound to. For instance:
{"emt": " ", "id": 1, "fields": {}}
{"emt": "Stack Overflow", "id": 2, "fields": {}}
{"emt": "https://www.domain.tld/index.html", "description": "JSONB datatype", "overlay": {"id": 5, "source": "bOv"}, "fields": {"id": 1, "description": "Themed", "recs ": "1"}}
I started by finding the rows that contain "source": "INPUT" using:
select * from typename
where jsonb_path_exists(data, '$.** ? (@.type() == "string" && @ like_regex "INPUT")');
...but then I'm not sure how to update the returned subset or how to loop through it :/
It took me a while but here is the update statement:
update typename
set data = jsonb_set(data, '{emt}', jsonb_extract_path(data, 'emt', 'key')::jsonb, false)
where jsonb_typeof(data -> 'emt') = 'object'
  and jsonb_path_exists(data, '$.emt.key ? (@.type() == "string")')
  and jsonb_path_exists(data, '$.emt.source ? (@.type() == "string" && @ like_regex "INPUT")');
There are probably better ways to implement that where clause, but that one works ;)
One downside is that I had to figure out how many keys are involved and write one update statement per key; e.g., in the original example there were two keys, emt and description, so it should have been two update statements (see the sketch below).
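To avoid hand-maintaining one statement per key, the same update can be generated from a list of keys. A minimal sketch with psycopg2 (the connection string is hypothetical, and the key list still has to be curated by hand):
import psycopg2

KEYS = ["emt", "description"]  # one update per affected key

conn = psycopg2.connect("dbname=mydb user=postgres")  # hypothetical DSN
with conn, conn.cursor() as cur:
    for key in KEYS:
        # Keys come from a trusted, hard-coded list, so direct interpolation
        # into the statement is acceptable here.
        cur.execute(f"""
            update typename
            set data = jsonb_set(data, '{{{key}}}', jsonb_extract_path(data, '{key}', 'key')::jsonb, false)
            where jsonb_typeof(data -> '{key}') = 'object'
              and jsonb_path_exists(data, '$.{key}.key ? (@.type() == "string")')
              and jsonb_path_exists(data, '$.{key}.source ? (@.type() == "string" && @ like_regex "INPUT")')
        """)
conn.close()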

SonarQube REST API: What is the structure of "metrics" in the "GET api/measures/component" WS?

In the response example from the Web API documentation, I can see that the metric "ncloc" should look like this in the JSON response:
"measures": [
{
"metric": "ncloc",
"value": "114",
"periods": [
{
"index": 1,
"value": "3"
}
]
},
But it's not; there is no "periods" entry for this metric in my response:
"measures":[
{
"metric": "ncloc",
"value": "2943"
},
There are "periods" for some other metrics though, and in this case, there is no metric value, only a value for each period (and there are never multiple periods, only one corresponding to the "new code" period).
So here are my questions about this:
How can I know which structure to expect for a metric? What would be a metric where "periods" contains a list of several periods, rather than just the one corresponding to "new code"?
I don't think there are multiple periods. The API documentation for my version (8.9) only mentions period, which presumably stands for the Leak Period, as the new code period used to be called. I assume the plain value is the overall value and the period value is for new code. Some measures may not make sense for, or be counted toward, the overall or new code scope, so I would not make assumptions about whether a given metric will carry a plain value, a period value, or both; reading both defensively is safer (see the sketch below).
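In practice, that means treating both fields as optional when parsing. A minimal sketch (the server URL, token, and project key are placeholders):
import requests

resp = requests.get(
    "https://sonarqube.example.com/api/measures/component",  # placeholder server
    params={
        "component": "my_project",
        "metricKeys": "ncloc,new_violations",
        "additionalFields": "period,metrics",
    },
    auth=("<token>", ""),  # a user token passed as the basic-auth username
)
resp.raise_for_status()

for measure in resp.json()["component"]["measures"]:
    # Either field may be absent depending on the metric, so read both defensively.
    overall = measure.get("value")
    new_code = measure.get("period", {}).get("value")
    print(measure["metric"], "overall:", overall, "new code:", new_code)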
Edit:
The following is the 8.9 documentation:
GET api/measures/component
SINCE 5.4
Return component with specified measures.
Requires the following permission: 'Browse' on the project of specified component.
Parameters
additionalFields (optional): Comma-separated list of additional fields that can be returned in the response. Possible values: metrics, period, periods. Example value: period,metrics
branch (optional, since 6.6): Branch key. Not available in the community edition. Example value: feature/my_branch
component (required): Component key. Example value: my_project
metricKeys (required): Comma-separated list of metric keys. Example value: ncloc,complexity,violations
pullRequest (optional, since 7.1): Pull request id. Not available in the community edition. Example value: 5461
Response Example
{
  "component": {
    "key": "MY_PROJECT:ElementImpl.java",
    "name": "ElementImpl.java",
    "qualifier": "FIL",
    "language": "java",
    "path": "src/main/java/com/sonarsource/markdown/impl/ElementImpl.java",
    "measures": [
      {
        "metric": "complexity",
        "value": "12",
        "period": {
          "value": "2",
          "bestValue": false
        }
      },
      {
        "metric": "new_violations",
        "period": {
          "value": "25",
          "bestValue": false
        }
      },
      {
        "metric": "ncloc",
        "value": "114",
        "period": {
          "value": "3",
          "bestValue": false
        }
      }
    ]
  },
  "metrics": [
    {
      "key": "complexity",
      "name": "Complexity",
      "description": "Cyclomatic complexity",
      "domain": "Complexity",
      "type": "INT",
      "higherValuesAreBetter": false,
      "qualitative": false,
      "hidden": false,
      "custom": false
    },
    {
      "key": "ncloc",
      "name": "Lines of code",
      "description": "Non Commenting Lines of Code",
      "domain": "Size",
      "type": "INT",
      "higherValuesAreBetter": false,
      "qualitative": false,
      "hidden": false,
      "custom": false
    },
    {
      "key": "new_violations",
      "name": "New issues",
      "description": "New Issues",
      "domain": "Issues",
      "type": "INT",
      "higherValuesAreBetter": false,
      "qualitative": true,
      "hidden": false,
      "custom": false
    }
  ],
  "period": {
    "mode": "previous_version",
    "date": "2016-01-11T10:49:50+0100",
    "parameter": "1.0-SNAPSHOT"
  }
}
Changelog
8.8: deprecated response field 'id' has been removed.
8.8: deprecated response field 'refId' has been removed.
8.1: the response field 'periods' under the 'measures' field is deprecated. Use 'period' instead.
8.1: the response field 'periods' is deprecated. Use 'period' instead.
7.6: the use of module keys in parameter 'component' is deprecated.
6.6: the response field 'id' is deprecated. Use 'key' instead.
6.6: the response field 'refId' is deprecated. Use 'refKey' instead.

How do I configure kafka-connect-spooldir to consume a JSON array?

I've configured kafka-connect-spooldir to consume files containing JSON objects according to the instructions at https://github.com/jcustenborder/kafka-connect-spooldir. This consumes files containing one or more JSON objects. Now how can I configure it to consume a file containing a JSON array instead?
Here are my current key and value schemas:
key.schema={"name": "com.example.users.UserKey", "type": "STRUCT", "isOptional": false, "fieldSchemas": {"id": {"type": "INT64", "isOptional": false }}}
value.schema={"name": "com.example.users.User", "type": "STRUCT", "isOptional": false, "fieldSchemas": {"id": {"type": "INT64", "isOptional": false}, "test": {"type": "STRING", "isOptional": true}}}
Here is a sample of my data:
{
  "id": 10,
  "test": "Carla Howe"
}
{
  "id": 1,
  "test": "Gayle Becker"
}
Here is what I would like the data to look like:
[
  {
    "id": 10,
    "test": "Carla Howe"
  },
  {
    "id": 1,
    "test": "Gayle Becker"
  }
]
I've tried simply changing the first type from STRUCT to ARRAY, but this throws an NPE: "valueSchema cannot be null".
Can someone please point me in the right direction, or provide an example?
According to the documentation, there is a SchemaGenerator tool that can be run to generate the schema from sample data.

Loopback - GET model using custom String ID from MongoDB

I'm developing an API with LoopBack; everything worked fine until I decided to change the IDs of my documents in the database. Now I don't want them to be auto-generated.
Now that I'm setting the ID myself, I get an "Unknown id" 404 whenever I hit this endpoint: GET properties/{id}
How can I use custom IDs with LoopBack and MongoDB?
Whenever I hit this endpoint: http://localhost:5000/api/properties/20020705171616489678000000
I get this error:
{
  "error": {
    "name": "Error",
    "status": 404,
    "message": "Unknown \"Property\" id \"20020705171616489678000000\".",
    "statusCode": 404,
    "code": "MODEL_NOT_FOUND"
  }
}
This is my model.json, just in case...
{
  "name": "Property",
  "plural": "properties",
  "base": "PersistedModel",
  "idInjection": false,
  "options": {
    "validateUpsert": true
  },
  "properties": {
    "id": {"id": true, "type": "string", "generated": false},
    "photos": {
      "type": [
        "string"
      ]
    },
    "propertyType": {
      "type": "string",
      "required": true
    },
    "internalId": {
      "type": "string",
      "required": true
    },
    "flexCode": {
      "type": "string",
      "required": true
    }
  },
  "validations": [],
  "relations": {},
  "acls": [],
  "methods": []
}
Your model setup (with idInjection: true or false) did work when I tried it with a PostgreSQL DB set up with a text id field, for smaller numbers.
Running a LoopBack application with DEBUG=loopback:connector:* node . outputs the database queries being run in the terminal. I tried it with the ID value you are using, and the parameter value was [2.002070517161649e+25], so the size of the number is the issue.
You could try raising it as a bug in LoopBack, but JavaScript is bad at dealing with large numbers, so you may be better off not using such large numbers as identifiers anyway.
It does work if the ID is an alphanumeric string over 16 characters, so there might be a workaround for you (use ObjectId?), depending on what you are trying to achieve. A quick sanity check is sketched below.
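As a rough sanity check of that idea, an ID containing at least one letter avoids the numeric coercion entirely. A hypothetical sketch against the endpoint from the question (the record's field values are made up):
import requests

BASE = "http://localhost:5000/api"

# Hypothetical record: the letter prefix keeps the ID from being parsed as a number.
prop = {
    "id": "prop-20020705171616489678",
    "propertyType": "house",
    "internalId": "A-1",
    "flexCode": "FLEX-001",
    "photos": [],
}
requests.post(f"{BASE}/properties", json=prop).raise_for_status()

# This lookup should now resolve instead of returning MODEL_NOT_FOUND.
resp = requests.get(f"{BASE}/properties/prop-20020705171616489678")
print(resp.status_code, resp.json())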