Google Sheets REST API v4: get a row with a specific column value

I want to get only the row that contains a specific value in one column.
How do I build the query so that something like IF "column1" = '3' returns that row?
If I use sheets.spreadsheets.values.batchGetByDataFilter,
I don't know what to write in the DataFilter (currently I pass only "A:D", which returns all rows and columns):
POST https://sheets.googleapis.com/v4/spreadsheets/<Sheet_ID>/values:batchGetByDataFilter?key={YOUR_API_KEY}
{
  "dataFilters": [
    {
      "a1Range": "A:D"
    }
  ]
}


Empty cells when appending data one by one into a Google Sheet via API v4

When I append a list of data in a column, the appended data leaves empty rows equal to the maximum number of rows used in the previous column, and only then starts appending from the last cell. The picture would explain the problem quite well. I am using Flutter and the gsheets dependency to connect to the sheet. date is the column key and data is the value being appended.
if (column != null) {
  if (!column.contains(data)) {
    await sheet.values.map.appendRow({date: data});
  }
  return true;
} else {
  await sheet.values.insertColumnByKey(date, [data]);
  return true;
}
Current output
Required output
You should find the last cell that contains a value in the target column and insert values starting in the cell just below that cell, instead of appending after the last row of the occupied data range in the sheet.
One way to do that is to use my appendRows_() utility function.
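The core of the idea can be sketched in plain JavaScript (this is NOT the linked appendRows_() utility itself; the function name here is made up for illustration): scan the target column for its last non-empty cell and start writing one row below it.

```javascript
// Sketch of the approach: given the cell values of a single column,
// return the 1-based row number just below the last non-empty cell --
// the row where appending should start, regardless of how long the
// neighbouring columns are.
function nextFreeRowInColumn(columnValues) {
  let lastFilled = -1; // 0-based index of the last non-empty cell
  columnValues.forEach((value, i) => {
    if (value !== null && value !== undefined && value !== '') {
      lastFilled = i;
    }
  });
  // +1 to move below the last filled cell, +1 to convert to 1-based rows.
  return lastFilled + 2;
}
```

With gsheets you would read the target column, compute the start row this way, and then write values at that explicit row instead of calling appendRow, so gaps left by longer neighbouring columns are ignored.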

How Do I Generate RowId For Intermediate Group Rows?

I am working on implementing grouping with the Server-Side Row Model. I need to generate an appropriate ID for the intermediate group rows. For example, if I group by Status then I would have intermediate rows representing each Status (NEW, IN PROGRESS, COMPLETE, etc.). I need to come up with a unique ID for these rows (preferably something deterministic, in case they need to be accessed or updated later).
The getRowId function is passed an object that contains things like the row's data, the previous parent group values, a reference to the api, etc.
What I would ideally like to know is the current list of group fields... I have all of the values readily accessible, but I don't know what field the current row is being grouped by - else I could just go grab that field from the row's data to use as part of the row id...
Is there any good way to acquire this information?
The columnApi exposes the 'getRowGroupColumns' function from which the field property can be deduced:
getRowId: ({ columnApi, data, level, parentKeys = [] }) => {
  const groupColumns = columnApi.getRowGroupColumns();
  if (groupColumns.length > level) {
    // Group row: use the grouped field's value at this level.
    const field = groupColumns[level].getColDef().field;
    return [...parentKeys, data[field]].join('-');
  }
  // Leaf row: getRowId must return a string, so join here as well.
  return [...parentKeys, data.athlete, data.year].join('-');
},

DynamoDB - How to upsert nested objects with updateItem

Hi, I am a newbie to DynamoDB. Below is the schema of the Dynamo table:
{
  "user_id": 1,          // partition key
  "dob": "1991-09-12",   // sort key
  "movies_watched": {
    "1": {
      "movie_name": "twilight",
      "movie_released_year": "1990",
      "movie_genre": "action"
    },
    "2": {
      "movie_name": "harry potter",
      "movie_released_year": "1996",
      "movie_genre": "action"
    },
    "3": {
      "movie_name": "lalaland",
      "movie_released_year": "1998",
      "movie_genre": "action"
    },
    "4": {
      "movie_name": "serendipity",
      "movie_released_year": "1999",
      "movie_genre": "action"
    }
  }
  // ..... 6 more attributes
}
I want to insert a new item if the item (that user_id with dob) does not exist; otherwise, add the movies to the existing movies_watched map, checking that each movie is not already present in the movies_watched map.
Currently, I am using the update(params) method.
Below is my approach:
function getInsertQuery (item) {
  const exp = {
    UpdateExpression: 'set',
    ExpressionAttributeNames: {},
    ExpressionAttributeValues: {}
  }
  // Top-level attributes (keys and the nested map are handled separately).
  Object.entries(item).forEach(([key, value]) => {
    if (key !== 'user_id' && key !== 'dob' && key !== 'movies_watched') {
      exp.UpdateExpression += ` #${key} = :${key},`
      exp.ExpressionAttributeNames[`#${key}`] = key
      exp.ExpressionAttributeValues[`:${key}`] = value
    }
  })
  // Nested movies_watched entries, addressed via document paths.
  let i = 0
  Object.entries(item.movies_watched).forEach(([key, value]) => {
    exp.UpdateExpression += ` movies_watched.#uniqueID${i} = :uniqueID${i},`
    exp.ExpressionAttributeNames[`#uniqueID${i}`] = key
    exp.ExpressionAttributeValues[`:uniqueID${i}`] = value
    i++
  })
  exp.UpdateExpression = exp.UpdateExpression.slice(0, -1) // drop trailing comma
  return exp
}
The above method builds an update expression, with expression names and values, for all top-level attributes as well as the nested attributes (via document paths).
It works well when the item already exists, updating the movies_watched map, but it throws an exception when the item does not exist and has to be inserted. Below is the exception:
The document path provided in the update expression is invalid for update
However, I am still not sure how to check for duplicate movies in the movies_watched map.
Could someone guide me in the right direction? Any help is highly appreciated!
Thanks in advance.
There is no way to do this, given your model, without reading the item from DynamoDB before the update (at which point the process is trivial). If you don't want to impose that additional read capacity on your table for updates, you need to redesign your data model:
You can change movies_watched to be a Set holding references to movies. The caveat is that a Set can contain only numbers or strings, so you would store the movie id or name, or keep the data as JSON strings in the Set and parse them back on read. With a Set you can perform the ADD operation on the movies_watched attribute. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.UpdateExpressions.html#Expressions.UpdateExpressions.ADD
You can go with a single-table design and store each watched movie as a separate item (PK: user_id, SK: movie_id). To get a user, you run a Query specifying only PK = user_id; you get back a collection where one item is the user record and the others are the watched movies. If you are new to DynamoDB and still learning the ropes, I would suggest this approach. https://www.alexdebrie.com/posts/dynamodb-single-table/
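For the Set option, a minimal sketch of building the ADD update parameters (the table name "users" is an assumption, and with the real AWS SDK v2 DocumentClient the :m value would be wrapped in docClient.createSet() rather than passed as a plain array):

```javascript
// Sketch of the Set approach: ADD on a set attribute is idempotent, so
// duplicate movie ids are silently ignored -- no read-before-write needed,
// and the item is created if it does not exist yet.
// "users" and the plain-array set value are assumptions for illustration.
function buildAddMoviesParams(userId, dob, movieIds) {
  return {
    TableName: 'users',
    Key: { user_id: userId, dob: dob },
    UpdateExpression: 'ADD movies_watched :m',
    ExpressionAttributeValues: { ':m': movieIds },
  };
}
```

These params would then go to docClient.update(params) exactly like the update expression in the question.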

Creating multi drop down columns using the smartsheet api

Very recently a new type of column was added to Smartsheet: multi drop-down.
Is there any way to create such a column using the API?
Is a new version of the API planned?
As of Oct 1, you can actually create a column that supports the new multi-dropdown feature. The documentation is a little behind.
If you don't yet have a column, you'll have to Add a column first.
Once you have a columnId, you can send an Update Column request and specify "type" as "MULTI_PICKLIST".
To retrieve the correct type when you do a GET /sheets/{sheetId} or GET /{columnId}, you have to use a query parameter of ?level=3&include=objectValue.
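A sketch of that Update Column request, in the same form as the examples below (the option values here are placeholders):
PUT: {{environment}}/sheets/{{sheetId}}/columns/{{columnId}}
{
  "type": "MULTI_PICKLIST",
  "options": ["opt1", "opt2", "opt3"]
}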
It is possible to create a Dropdown (multi select) column via the API!
To briefly address the TEXT_NUMBER issue, this type is used for backwards-compatibility. If you are unaware of the ?level=2&include=objectValue suffix, the response will return a TEXT_NUMBER column type to avoid breaking any existing clients that aren't set up to handle the new column type.
In the following examples the double brace variables represent your targets:
{{environment}} is something like https://api.smartsheet.com/2.0/
{{sheetId}} is a sheet Id in the form 6264126827992742
{{columnId}} is the Id of your primary column in the form 2641268279927426
{{columnId2}} is the Id of your multi picklist column in the form 6412682799274262
To add a column with a MULTI_PICKLIST type to an existing sheet:
POST: {{environment}}/sheets/{{sheetId}}/columns/
{
  "title": "I'm a new multi picklist column",
  "type": "MULTI_PICKLIST",
  "index": 1,
  "options": ["opt1", "opt2", "opt3"]
}
To create a column on a brand new sheet, this example will create a sheet with a primary column, a MULTI_PICKLIST column, and then add a row with some data.
Then it will get the sheet using level 2 to avoid the backwards compatibility TEXT_NUMBER type.
To create a sheet with a MULTI_PICKLIST column:
POST: {{environment}}/sheets
{
  "name": "API PL Sheet",
  "columns": [
    {
      "title": "My primary Column",
      "primary": true,
      "type": "TEXT_NUMBER"
    },
    {
      "title": "My multi select column",
      "type": "MULTI_PICKLIST",
      "options": ["options", "in", "this", "form"]
    }
  ]
}
To add a row on this sheet:
POST: {{environment}}/sheets/{{sheetId}}/rows?include=objectValue
[
  {
    "toTop": true,
    "cells": [
      {
        "columnId": {{columnId}},
        "value": "1"
      },
      {
        "columnId": {{columnId2}},
        "objectValue": {
          "objectType": "MULTI_PICKLIST",
          "values": ["in", "form"]
        }
      }
    ]
  }
]
To view the sheet with the MULTI_PICKLIST objectValue:
GET: {{environment}}/sheets/{{sheetId}}?level=2&include=objectValue
If you do not include the ?level=2&include=objectValue suffix then the JSON response will have columns that appear as though they are TEXT_NUMBER types.
For one final note, different endpoint groups require different levels. They are as follows:
GET Cell History is level 2
GET Sheets is level 2
GET Row is level 2
GET Column is level 2
POST Sort is level 2
GET Sights (dashboards) is level 3
GET reports is level 3

Composite views in couchbase

I'm new to Couchbase and am struggling to get a composite index to do what I want it to. The use-case is this:
I have a set of "Enumerations" being stored as documents
Each has a "last_updated" field which -- as you may have guessed -- stores the last time that the field was updated
I want to be able to show only those enumerations which have been updated since some given date but still sort the list by the name of the enumeration
I've created a Couchbase View like this:
function (doc, meta) {
  var time_array;
  if (doc.doc_type === "enum") {
    if (doc.last_updated) {
      time_array = doc.last_updated.split(/[- :]/);
    } else {
      time_array = [0, 0, 0, 0, 0, 0];
    }
    for (var i = 0; i < time_array.length; i++) {
      time_array[i] = parseInt(time_array[i], 10);
    }
    time_array.unshift(meta.id);
    emit(time_array, null);
  }
}
I have one record that doesn't have the last_updated field set and therefore has its time fields all set to zero. As a first test, I thought I could filter out that result, so I put in the following:
startkey = ["a",2012,0,0,0,0,0]
endkey = ["Z",2014,0,0,0,0,0]
While the list is sorted by the id, it isn't filtering anything! Can anyone tell me what I'm doing wrong? Is there a better composite view to achieve these results?
In Couchbase, when you query a view with startkey/endkey, you can't filter results by two or more properties independently. Couchbase has only one index, so it filters your results only by the first element of the key. So your query is identical to a query with:
startkey = ["a"]
endkey = ["Z"]
Here is a link to the complete answer by Filipe Manana on why it can't be filtered by those dates.
Here is a quote from it:
For composite keys (arrays), elements are compared from left to right and comparison finishes as soon as a element is different from the corresponding element in the other key (same as what happens when comparing strings à la memcmp() or strcmp()).
So if you want a view that filters by date, the date components should go first in the composite key.
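A sketch of a map function along those lines, for the same doc shape as in the question (the extra emitFn parameter is only so the function can be exercised outside Couchbase, where emit() is provided as a global by the view engine):

```javascript
// Sketch: the date components come FIRST in the composite key, so
// startkey/endkey ranges filter by date; the id is kept last as a
// tie-breaker. Sorting by name must then happen client-side.
// emitFn stands in for Couchbase's global emit() for testability.
function enumByDateMap(doc, meta, emitFn) {
  if (doc.doc_type !== 'enum') return;
  var parts = doc.last_updated
    ? doc.last_updated.split(/[- :]/).map(Number)
    : [0, 0, 0, 0, 0, 0];
  // Key shape: [year, month, day, hour, minute, second, id]
  emitFn(parts.concat(meta.id), null);
}
```

A range such as startkey = [2012] and endkey = [2014, 12, 31, 23, 59, 59, {}] would then restrict results to that date window, because composite keys are compared element by element from the left.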