When I have the following query:
str(db(db.items.id==int(row)).select(db.items.imageName)) + "\n"
The output includes the field name:
items.imageName
homegear\homegear.jpg
How do I remove it so that the field name is not included and I get just the selected image name?
I tried referencing it like a list: [1] gives me an out-of-range error, and with [0] I end up with:
<Row {'imageName': 'homegear\\homegear.jpg'}>
The above is not a list. What object is that, and how can I reference into it?
Thanks!
John
db(db.items.id==int(row)).select(db.items.imageName) returns a Rows object, and its __str__ method converts it to CSV output, which is what you are seeing.
A Rows object contains Row objects, and a Row object contains field values. To access an individual field value, you must first index the Rows object to extract the Row, and then get the individual field value as an attribute of the Row. So, in this case, it would be:
db(db.items.id==int(row)).select(db.items.imageName)[0].imageName
or:
db(db.items.id==int(row)).select(db.items.imageName).first().imageName
The advantage of rows.first() over rows[0] is that the former returns None in case there are no rows, whereas the latter will generate an exception (this doesn't help in the above case, because the subsequent attempt to access the .imageName attribute would raise an exception in either case if there were no rows).
Note that even when the select returns just a single row with a single field, you still have to explicitly extract the row and the field value as above.
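For example, a minimal sketch of that pattern, reusing the query from the question:

rows = db(db.items.id == int(row)).select(db.items.imageName)
record = rows.first()                        # a Row object, or None if nothing matched
image_name = record.imageName if record else None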
First of all, I have an array parameter of column names called $array_merge_keys:
$array_merge_keys = ['Column1', 'Column2', 'NoColumnInSomeCases']
I then want to hash these columns. If the third one, NoColumnInSomeCases, does not exist, I would like to treat it as null (or some placeholder string); otherwise, use its value.
However, when I use them with byNames(), it returns NULL because the last column does not exist, even though the first and second still have values. I would expect byNames($array_merge_keys) to always return values so I can hash them.
Since that problem cannot be solved, I fell back to filtering for only the columns that exist:
filter(columnNames('', true()), contains(['Column1', 'Column2', 'NoColumnInSomeCases'], #item_1 == #item)) => ['Column1', 'Column2']
But this runs into another problem: byNames() cannot be computed on the fly. It fails with 'byNames' does not accept column or argument parameters:
array(byNames(filter(columnNames('', true()), contains(['Column1', 'Column2', 'NoColumnInSomeCases'], #item_1 == #item))))
Spark job failed:
{"runId":"649f28bf-35af-4472-a170-1b6ece50c551","sessionId":"a26089f4-b0f4-4d24-8b79-d2a91a9c52af","status":"Failed","payload":{"statusCode":400,"shortMessage":"DF-EXPR-030 at Derive 'CreateTypeFromFile'(Line 35/Col 36): Column name function 'byNames' does not accept column or argument parameters","detailedMessage":"Failure 2022-04-13 05:26:31.317 failed DebugManager.processJob, run=649f28bf-35af-4472-a170-1b6ece50c551, errorMessage=DF-EXPR-030 at Derive 'CreateTypeFromFile'(Line 35/Col 36): Column name function 'byNames' does not accept column or argument parameters"}}
RunId: 649f28bf-35af-4472-a170-1b6ece50c551
I have tried many approaches, even creating a new derived column (upstream of that stream) to store ['Column1', 'Column2']. But it said that a column cannot be referenced within the byNames() function.
Is there an elegant solution?
It is true that byNames() cannot evaluate with late binding. You need to either use a Select transformation first to fix the set of columns in the stream you wish to hash, or send in the column names via a parameter. Since that is "early column binding", byNames() will work.
You can use a Get Metadata activity in the pipeline to inspect which columns are present in your source before calling the data flow, allowing you to send a pipeline parameter with just those columns you wish to hash.
Alternatively, you can create a new branch, use a select matching rule, then hash the row based on those columns (see example below).
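For example, a rough sketch of that branch-and-select approach (the column names are the ones from the question; the exact transformation setup will vary):

1. Add a new branch from your source stream.
2. In the branch, add a Select transformation with a rule-based mapping that keeps only the columns to hash, e.g. with the matching condition name == 'Column1' || name == 'Column2'.
3. In a Derived Column, hash whatever columns remain in the stream:

sha2(256, columns())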
I have a table with an id column and a data column. The data column contains JSON objects with the following keys:
age
name
gender
addresses (array)
ratings
specialties (array)
I want to find the 3 keys within each JSON object that are most often left empty or null.
I know how I'd approach this in Python: I'd iterate through each row, then through each value within that row's data JSON object, and store the results in a dictionary. When a null/empty value is detected, I'd first check whether that key already exists in the results dictionary and, if so, add 1 to its count; if the key doesn't exist yet, I'd add it with a starting value of 1. From there I'd just sort the resulting dictionary and take the 3 keys with the 3 highest values.
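For reference, that Python approach might look roughly like this (a sketch, assuming the rows have already been fetched as dicts):

from collections import Counter

null_counts = Counter()
for row in rows:                      # rows: iterable of {'id': ..., 'data': {...}} dicts
    for key, value in row['data'].items():
        if value is None or value == '':
            null_counts[key] += 1     # count this key as empty/null

top_three = null_counts.most_common(3)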
For the sake of clarity, here's an example scenario:
Row 1: values of age and addresses keys in the data JSON object are empty/null
Row 2: values of age and specialties keys in the data JSON object are empty/null
Row 3: values of ratings and addresses keys in the data JSON object are empty/null
Row 4: value of name key in the data JSON object is empty/null
Row 5: value of age key in the data JSON object is empty/null
Row 6: values of age and addresses keys in the data JSON object are empty/null
Row 7: value of specialties key in the data JSON object is empty/null
In this example, the 3 keys that are most often left empty or null would be:
age (empty/null in 4 rows)
addresses (empty/null in 3 rows)
specialties (empty/null in 2 rows)
How would I accomplish this in Postgres? I figure I'll have to write a custom looping function, but I've never done anything like that in Postgres before, so I'd really appreciate some guidance. Any suggestions for the best way to tackle this?
No need to use loops or a custom iteration. With an SQL mindset, counting things from a table and sorting by the counts is even simpler than in Python.
The secret sauce here is composed of the jsonb_each function and a LATERAL subquery:
SELECT key, count(*)
FROM example t,
LATERAL jsonb_each(t.data)
WHERE value = 'null'
GROUP BY key
ORDER BY 2 DESC
LIMIT 3;
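To illustrate what jsonb_each produces for one object (values here are illustrative):

SELECT * FROM jsonb_each('{"age": null, "name": "John"}'::jsonb);
-- key  | value
-- -----+--------
-- age  | null
-- name | "John"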
However, notice that by iterating through data (just like in Python) you won't notice when a JSON object lacks a property entirely - a key is only iterated if it exists. A row where data = '{}' wouldn't be counted at all. If you wanted to treat those as "empty" too, you would need to probe the object with every known key. This can be done by joining against the known keys:
SELECT key, count(*)
FROM example t,
UNNEST(ARRAY['age', 'name', 'gender', 'addresses', 'ratings', 'specialties']) AS keys(key)
WHERE data->key IS NULL OR data->key = 'null'
GROUP BY key
ORDER BY 2 DESC
LIMIT 3;
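For completeness, a minimal setup reproducing the example scenario above (the table and column names match the queries; the data itself is illustrative):

CREATE TABLE example (id serial PRIMARY KEY, data jsonb);

-- One row per scenario line; only the mentioned keys are null.
INSERT INTO example (data) VALUES
  ('{"age": null, "name": "A", "gender": "f", "addresses": null,  "ratings": 4,    "specialties": ["x"]}'),
  ('{"age": null, "name": "B", "gender": "m", "addresses": ["y"], "ratings": 3,    "specialties": null}'),
  ('{"age": 30,   "name": "C", "gender": "f", "addresses": null,  "ratings": null, "specialties": ["x"]}'),
  ('{"age": 40,   "name": null, "gender": "m", "addresses": ["y"], "ratings": 5,   "specialties": ["x"]}'),
  ('{"age": null, "name": "E", "gender": "f", "addresses": ["y"], "ratings": 2,    "specialties": ["x"]}'),
  ('{"age": null, "name": "F", "gender": "m", "addresses": null,  "ratings": 1,    "specialties": ["x"]}'),
  ('{"age": 50,   "name": "G", "gender": "f", "addresses": ["y"], "ratings": 4,    "specialties": null}');

Either query then returns age (4), addresses (3), and specialties (2), matching the example.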
I am using the expression builder in a Derived Column action of Azure Data Factory. I have an iif statement that adds objects to a single array based on whether 5 columns are null: if a column is not null, its object is added to the array, and I did not specify an action for when the column is null. So if 3 of the columns have values, there should be 3 objects in the array, but the 2 empty columns show up as 2 null values within the array. I don't want that; I just want the 3 objects in the array. How can I convert the null values to whitespace, or is there a better way to get this done?
I've made a test that converts null values to whitespace successfully.
My source data is a CSV file with 6 columns, where some columns may contain null values.
In the data flow, I'm using a Derived Column transformation to convert the null values.
In the data preview, we can see the null values were replaced with whitespace/blanks.
Summary:
So we can use the expression iif(isNull(<Column_Name>), '\n', <Column_Name>) to replace a NULL value with whitespace.
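If the goal is to keep the nulls out of the array entirely rather than replacing them, a possible alternative (a sketch, with illustrative column names) is to build the array first and then filter out the null entries:

filter(array(Column1, Column2, Column3, Column4, Column5), !isNull(#item))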
$result = mysqli_query($link, "INSERT INTO mytable...
`friends`,
`friend1`,
`friend2`,
`friend3`
VALUES (NULL, '$friends',
'$friends[0]',
'$friends[1]',
'$friends[2]'
Using cloneya.js to duplicate fields, I get an array value for a set of 3 names. Posting to MySQL, I get three names in the first field (friends) but only the first, second, and third letters of the first name in the subsequent fields (friend1-3). How can I insert each name into a separate field?
$friends is not an array in your case; it's just a string. That means $friends[0] is the first character of that string, $friends[1] the second, and so on. You must send the friends under a different variable name, so you have $friends as a string and, say, $otherName as an array you can use in your SQL query.
Keep in mind to use prepared statements whenever your SQL queries depend on variables. Also consider normalizing your tables to 3NF.
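A sketch of that advice applied to the insert above (reusing $otherName from the suggestion; the table layout is assumed from the question):

// $friends is the joined string; $otherName is the array of names posted by the form
$stmt = mysqli_prepare($link,
    "INSERT INTO mytable (`friends`, `friend1`, `friend2`, `friend3`) VALUES (?, ?, ?, ?)");
mysqli_stmt_bind_param($stmt, 'ssss', $friends, $otherName[0], $otherName[1], $otherName[2]);
mysqli_stmt_execute($stmt);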
I have a FileMaker table with multiple entries in fieldA. How can I set fieldB to count how many entries share the same value in fieldA?
For example, if fieldA is a;b;b;c I want fieldB to read 1;2;2;1.
The simplest approach is to make a self-join relationship from the table to another occurrence of the same table, matched on fieldA. Then fieldB can be a calculation like Count( sameFieldA::fieldA ).
You'll want a recursive custom function to which you pass the fieldA contents.
It takes as parameters:
the text being parsed
the current position being parsed (starting at 1)
the output text being built
Grab the fieldA value (e.g. "a") at the supplied position, then count the number of occurrences of "a" in the text being parsed. Append this count to the output text; then, if there are more values to process, call the recursive function again with an incremented position and return the result. Otherwise, return the output text.
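A hedged sketch of such a custom function, assuming semicolon-delimited values as in the example (the function name CountOccurrences is illustrative):

// CountOccurrences ( text ; pos ; output )
Let ( [
    list = Substitute ( text ; ";" ; ¶ ) ;                // turn "a;b;b;c" into a value list
    item = GetValue ( list ; pos ) ;                      // value at the current position
    n = ValueCount ( FilterValues ( list ; item ) ) ;     // occurrences of that value
    newOutput = If ( pos = 1 ; n ; output & ";" & n )     // append to the result
] ;
    If ( pos < ValueCount ( list ) ;
        CountOccurrences ( text ; pos + 1 ; newOutput ) ; // recurse for the next position
        newOutput                                         // done: e.g. "1;2;2;1"
    )
)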