TYPO3 extension: issue with function fullQuoteArray()

I am reading an extension file and see the following code:
$GLOBALS['TYPO3_DB']->exec_UPDATEquery(
    'tx_jcjob_job',
    'uid = ' . $this->piVars['job'],
    array('hit_counter' => 'hit_counter + 1'),
    array('hit_counter')
);
Then, in the file class.t3lib_db.php, I checked two functions. The first is exec_UPDATEquery():
/**
 * @param string Database tablename
 * @param string WHERE clause, eg. "uid=1". NOTICE: You must escape values in this argument with $this->fullQuoteStr() yourself!
 * @param array Field values as key=>value pairs. Values will be escaped internally. Typically you would fill an array like "$updateFields" with 'fieldname'=>'value' and pass it to this function as argument.
 * @param string/array See fullQuoteArray()
 * @return pointer MySQL result pointer / DBAL object
 */
function exec_UPDATEquery($table, $where, $fields_values, $no_quote_fields = FALSE)
and the second, fullQuoteArray():
/**
 * Will fullquote all values in the one-dimensional array so they are ready to "implode" for an sql query.
 *
 * @param array Array with values (either associative or non-associative array)
 * @param string Table name for which to quote
 * @param string/array List/array of keys NOT to quote (eg. SQL functions) - ONLY for associative arrays
 * @return array The input array with the values quoted
 * @see cleanIntArray()
 */
function fullQuoteArray($arr, $table, $noQuote = FALSE)
But I still have a question:
How does array('hit_counter') work? In other words, how does fullQuoteArray() work, and what does "fullquote all values in the one-dimensional array" mean?

On each array value, fullQuoteArray() applies real_escape_string() (TYPO3 6.x and later) or mysql_real_escape_string() (before 6.x) and wraps the result in quotes, so every value should be SQL-injection safe. Keys listed in the $noQuote argument (here array('hit_counter')) are skipped, which is why 'hit_counter + 1' reaches the database as an SQL expression rather than as a quoted string.
There is no magic inside :)
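As an illustration, here is a rough sketch of the idea (simplified, not the real t3lib_db code; the helper name and the use of addslashes() are for illustration only):
function fullQuoteArraySketch(array $arr, array $noQuote = array()) {
    foreach ($arr as $key => $value) {
        // keys listed in $noQuote are left untouched (e.g. SQL expressions)
        if (!in_array($key, $noQuote, true)) {
            // the real implementation escapes via real_escape_string() / mysql_real_escape_string()
            $arr[$key] = "'" . addslashes($value) . "'";
        }
    }
    return $arr;
}
// With the call from the question,
//   fullQuoteArraySketch(array('hit_counter' => 'hit_counter + 1'), array('hit_counter'))
// leaves the value untouched, so the generated statement is roughly:
//   UPDATE tx_jcjob_job SET hit_counter = hit_counter + 1 WHERE uid = ...
// With quoting it would become SET hit_counter = 'hit_counter + 1', a string literal instead of an expression.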

Related

Indexed array in query

I am struggling to write a parameter annotation for a field that should result in an indexed array of integers:
voucher_products[0]: 23
voucher_products[1]: 102
voucher_products[2]: 233
I tried the following:
* @OA\Parameter(
*     name="voucher_products",
*     in="query",
*     description="",
*     required=false,
*     @OA\Schema(
*         type="array",
*         @OA\Items(
*             type="integer",
*         )
*     )
* ),
I complete the form this way:
[screenshot of the form]
The result I get in the query string parameters is
voucher_products: 23
voucher_products: 102
voucher_products: 233
If I check this field in $_POST, its final value is voucher_products=233, since it does not end up being an array.
What am I doing wrong?
OpenAPI Specification currently doesn't have a way to represent query strings containing an indexed array such as
?arr[0]=val1&arr[1]=val2&arr[2]=val3&...
Here are the related issues in the OpenAPI Specification repository:
Are indexes in the query parameter array representable?
Support deep objects for query parameters with deepObject style
However, if your API can accept the array with just square brackets but without the indexes
?voucher_products[]=23&voucher_products[]=102&voucher_products[]=233
then you can define it as a parameter with name="voucher_products[]".
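For example, reusing the annotation from the question, the bracket-only variant could look like this (a sketch; only the name changes, and whether the values arrive as an array still depends on the server side, since PHP parses repeated voucher_products[] keys into an array):
* @OA\Parameter(
*     name="voucher_products[]",
*     in="query",
*     required=false,
*     @OA\Schema(
*         type="array",
*         @OA\Items(type="integer")
*     )
* ),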

Returning result from two functions within a function - how?

I have two functions, qdgc_getlonlat and qdgc_getrecursivestring, which each return a string. I am now creating a new function whose goal is to concatenate the results of those two functions. This is where I am now:
return query
    select * from qdgc_getlonlat(lon_value, lat_value)
    union distinct
    select * from qdgc_getrecursivestring(lon_value, lat_value, depthlevel, '');
Unfortunately it returns an array which looks like this:
Not too bad, but I would like the functions to be returned as a concatenated text string like this:
E007S05BDCA
How can I do this?
Why not simply concatenate them?
SELECT
qdgc_getlonlat(lon_value,lat_value) || qdgc_getrecursivestring(lon_value,lat_value,depthlevel,'')
FROM
mytable
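If you need this inside a function, as in the question, the body can simply return that concatenation. Below is a minimal sketch; the function name qdgc_getcode and the parameter/return types are assumptions, since they are not shown in the question:
CREATE OR REPLACE FUNCTION qdgc_getcode(lon_value double precision,
                                        lat_value double precision,
                                        depthlevel integer)
RETURNS text AS $$
BEGIN
    -- concatenate the two partial strings into a single value such as 'E007S05BDCA'
    RETURN qdgc_getlonlat(lon_value, lat_value)
        || qdgc_getrecursivestring(lon_value, lat_value, depthlevel, '');
END;
$$ LANGUAGE plpgsql;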

Spark - Column expression

I came across the following expression and I know what it means: department("name"). I am curious to know what it resolves to. Please share your inputs.
department("name") is used to refer to the column with the name "name". I hope I am correct? But what does it resolve to? It seems like an auxiliary constructor.
From https://spark.apache.org/docs/2.4.5/api/java/index.html?org/apache/spark/sql/DataFrameWriter.html,
// To create Dataset[Row] using SparkSession
val people = spark.read.parquet("...")
val department = spark.read.parquet("...")
people.filter("age > 30")
.join(department, people("deptId") === department("id"))
.groupBy(department("name"), people("gender"))
.agg(avg(people("salary")), max(people("age")))
department("name") is just syntactic sugar for calling apply function:
department.apply("name") which returns Column
from Spark API, Dataset object:
/**
 * Selects column based on the column name and returns it as a [[Column]].
 *
 * @note The column name can also reference to a nested column like `a.b`.
 *
 * @group untypedrel
 * @since 2.0.0
 */
def apply(colName: String): Column = col(colName)
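So, for the snippet above, the following three ways of selecting the column are equivalent (a small illustration using the department DataFrame from the question):
// all three resolve to the same Column for the "name" column of department
val byApplySugar = department("name")        // syntactic sugar
val byApply      = department.apply("name")  // explicit call to apply
val byCol        = department.col("name")    // what apply delegates to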

Hash function in Spark

I'm trying to add a column to a dataframe which will contain a hash of another column.
I've found this piece of documentation:
https://spark.apache.org/docs/2.3.0/api/sql/index.html#hash
And tried this:
import org.apache.spark.sql.functions._
val df = spark.read.parquet(...)
val withHashedColumn = df.withColumn("hashed", hash($"my_column"))
But what is the hash function used by that hash()? Is that murmur, sha, md5, something else?
The value I get in this column is an integer, so the range of values here is probably [-2^31 ... 2^31 - 1].
Can I get a long value here? Can I get a string hash instead?
How can I specify a concrete hashing algorithm for that?
Can I use a custom hash function?
It is Murmur3, based on the source code:
/**
 * Calculates the hash code of given columns, and returns the result as an int column.
 *
 * @group misc_funcs
 * @since 2.0.0
 */
@scala.annotation.varargs
def hash(cols: Column*): Column = withExpr {
  new Murmur3Hash(cols.map(_.expr))
}
If you want a Long hash, Spark 3 has the xxhash64 function: https://spark.apache.org/docs/3.0.0-preview/api/sql/index.html#xxhash64.
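For example (a small sketch reusing the DataFrame from the question; xxhash64 requires Spark 3.0 or later, and the column name "hashed64" is just for illustration):
import org.apache.spark.sql.functions.xxhash64

// xxhash64 produces a 64-bit hash as a LongType column
val withLongHash = df.withColumn("hashed64", xxhash64($"my_column"))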
If you would rather avoid negative numbers, you can cast the hash to a long and add Int.MaxValue:
import org.apache.spark.sql.types.LongType

// shift the 32-bit int hash upward by Int.MaxValue
df.withColumn("hashID", hash($"value").cast(LongType) + Int.MaxValue).show()

Fulltext Postgres

I created an index for full-text search in PostgreSQL.
CREATE INDEX pesquisa_idx
    ON chamado
    USING gin (to_tsvector('portuguese', coalesce(titulo, '') || coalesce(descricao, '')));
When I run this query:
SELECT * FROM chamado WHERE to_tsvector('portuguese', titulo) @@ 'ura'
It returned some rows.
But when my argument is in all uppercase, no rows are returned. For example:
SELECT * FROM chamado WHERE to_tsvector('portuguese', titulo) @@ 'URA'
When the argument is 'ura' I get a few rows; when the argument is 'URA' I do not get any.
Why does this happen?
You get no matches in the second case because to_tsvector() lowercases all lexemes. Use to_tsquery() to build the query; it will take care of the case issue as well:
SELECT * FROM chamado WHERE to_tsvector('portuguese', titulo) @@ to_tsquery('portuguese', 'URA')
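You can see the normalization directly with a quick check like this:
-- to_tsquery applies the same normalization (lowercasing, stemming) as to_tsvector,
-- so the uppercase input still matches the lowercased lexeme
SELECT to_tsquery('portuguese', 'URA');                                      -- lexeme 'ura'
SELECT to_tsvector('portuguese', 'URA') @@ to_tsquery('portuguese', 'URA');  -- true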