Zend_Db_Adapter_Mysqli::fetchAssoc() I don't want primary keys as array indexes! - zend-framework

According to the ZF documentation, when using fetchAssoc() the first column in the result set must contain unique values, or else rows with duplicate values in the first column will overwrite previous data.
I don't want this, I want my array to be indexed 0,1,2,3... I don't need rows to be unique because I won't modify them and won't save them back to the DB.

According to the ZF documentation, fetchAll() (when using the default fetch mode, which is in fact FETCH_ASSOC) is equivalent to fetchAssoc(). BUT IT'S NOT.
I used print_r() to reveal the truth.
print_r($db->fetchAll('select col1, col2 from table'));
prints
Array
(
    [0] => Array
        (
            [col1] => 1
            [col2] => 2
        )
)
So:
fetchAll() is what I wanted.
There's a bug in the ZF documentation.

From http://framework.zend.com/manual/1.11/en/zend.db.adapter.html
The fetchAssoc() method returns data in an array of associative arrays, regardless of what value you have set for the fetch mode, **using the first column as the array index**.
So if you run
$result = $db->fetchAssoc(
    'SELECT some_column, other_column FROM table'
);
you'll end up with an array keyed by the values of some_column, i.e.
$result[$someColumnValue]['other_column']
rather than $result[0]['other_column'].

Related

PostgreSQL: update an element of a composite-type array using a WHERE condition

I have a composite type:
CREATE TYPE mydata_t AS
(
user_id integer,
value character(4)
);
Also, I have a table that uses this composite type as an array of mydata_t.
CREATE TABLE tbl
(
id serial NOT NULL,
data_list mydata_t[],
PRIMARY KEY (id)
);
Here I want to update the mydata_t element in data_list whose mydata_t.user_id is 100000.
But I don't know which array element has user_id equal to 100000.
So I first have to search for the element whose user_id equals 100000... that's my problem: I don't know how to write the query. In fact, I want to update the value of the array element whose user_id equals 100000 (and where the id of tbl is, for example, 1). What would my query be?
Something like this (I know it's wrong !!!)
UPDATE "tbl" SET "data_list"[i]."value"='YYYY'
WHERE "id"=1 AND EXISTS (SELECT ROW_NUMBER() OVER() AS i
FROM unnest("data_list") "d" WHERE "d"."user_id"=10000 LIMIT 1)
For example, this is my tbl data:
Row1 => id = 1, data = ARRAY[ROW(5,'YYYY'),ROW(6,'YYYY')]
Row2 => id = 2, data = ARRAY[ROW(10,'YYYY'),ROW(11,'YYYY')]
Now I want to update tbl where id is 2 and set the value of one of the tbl.data elements to 'XXXX' where that element's user_id equals 11.
In fact, the final result of Row2 will be this:
Row2 => id = 2, data = ARRAY[ROW(10,'YYYY'),ROW(11,'XXXX')]
If you know the current value of value, you can use the array_replace() function to make the change:
UPDATE tbl
SET data_list = array_replace(data_list, (11, 'YYYY')::mydata_t, (11, 'XXXX')::mydata_t)
WHERE id = 2
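To see the replacement in isolation, here is a minimal check using Row2's sample data from the question:
SELECT array_replace(
    ARRAY[(10,'YYYY'),(11,'YYYY')]::mydata_t[],
    (11,'YYYY')::mydata_t,  -- the element to find (full composite value)
    (11,'XXXX')::mydata_t   -- its replacement
);
-- Result: {"(10,YYYY)","(11,XXXX)"}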
If you do not know the current value of value, then the situation becomes more complex:
UPDATE tbl SET data_list = data_arr
FROM (
    -- UPDATE doesn't allow aggregate functions, so aggregate here
    SELECT array_agg(new_data) AS data_arr
    FROM (
        -- For the id value, get the data_list values that are NOT modified
        SELECT (user_id, value)::mydata_t AS new_data
        FROM tbl, unnest(data_list)
        WHERE id = 2 AND user_id != 11
        UNION
        -- Add the values to update
        VALUES ((11, 'XXXX')::mydata_t)
    ) x
) y
WHERE id = 2
You should keep in mind, though, that there is an awful lot of work going on in the background that cannot be optimised. The array of mydata_t values has to be examined from start to finish, and you cannot use an index on this.

Furthermore, updates actually insert a new row in the underlying file on disk, and if your array has more than a few entries this will involve substantial work. This gets even more problematic when your arrays are larger than the page size of your PostgreSQL server, typically 8kB. It all happens behind the scenes, so it will work, but at a performance penalty.

Even though array_replace() sounds like changes are made in place (and in memory they indeed are), the UPDATE command will write a completely new tuple to disk. So if you have 4,000 array elements, at least 40kB of data will have to be read (8 bytes for the mydata_t type on a typical system x 4,000 = 32kB in a TOAST file, plus the main page of the table, 8kB) and then written to disk after the update. A real performance killer.
As @klin pointed out, this design may be more trouble than it is worth. Should you make data_list a table (as I would do), the update query becomes:
UPDATE data_list SET value = 'XXXX'
WHERE id = 2 AND user_id = 11
This will have MUCH better performance, especially if you add the appropriate indexes. You could then still create a view to publish the data in an aggregated form with a custom type if your business logic so requires.
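A minimal sketch of what that normalized layout could look like; the table and index definitions below are illustrative, not from the question:
CREATE TABLE data_list
(
    id      integer NOT NULL REFERENCES tbl (id),
    user_id integer NOT NULL,
    value   character(4),
    PRIMARY KEY (id, user_id)
);
-- The primary key already indexes (id, user_id), which is exactly what the
-- UPDATE's WHERE clause needs; an extra index helps lookups by user_id alone:
CREATE INDEX data_list_user_id_idx ON data_list (user_id);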

Get unique values from PostgreSQL array

This seems like it would be straightforward to do, but I just cannot figure it out. I have a query that returns an ARRAY of strings in one of the columns. I want that array to only contain unique strings. Here is my query:
SELECT
    f."_id",
    ARRAY[public.getdomain(f."linkUrl"), public.getdomain(f."sourceUrl")] AS file_domains,
    public.getuniqdomains(s."originUrls", s."testUrls") AS source_domains
FROM files f
LEFT JOIN sources s ON s."_id" = f."sourceId"
Here's an example of a row from my return table:
_id      | file_domains                               | source_domains
2574873  | {cityofmontclair.org,cityofmontclair.org}  | {cityofmontclair.org}
I need file_domains to only contain unique values, i.e. a 'set' instead of a 'list'. Like this:
_id      | file_domains          | source_domains
2574873  | {cityofmontclair.org} | {cityofmontclair.org}
Use a CASE expression:
CASE WHEN public.getdomain(f."linkUrl") = public.getdomain(f."sourceUrl")
THEN ARRAY[public.getdomain(f."linkUrl")]
ELSE ARRAY[public.getdomain(f."linkUrl"), public.getdomain(f."sourceUrl")]
END
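For context, here is how that expression slots into the original query (all names exactly as in the question):
SELECT
    f."_id",
    CASE WHEN public.getdomain(f."linkUrl") = public.getdomain(f."sourceUrl")
         THEN ARRAY[public.getdomain(f."linkUrl")]
         ELSE ARRAY[public.getdomain(f."linkUrl"), public.getdomain(f."sourceUrl")]
    END AS file_domains,
    public.getuniqdomains(s."originUrls", s."testUrls") AS source_domains
FROM files f
LEFT JOIN sources s ON s."_id" = f."sourceId"
Note that this only handles the two-element case; for deduplicating arbitrary arrays, a pattern like ARRAY(SELECT DISTINCT unnest(arr)) is the more general (though order-losing) approach.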

How to split array in json using json_query?

I've got a column in a table that's a json. It contains only values without keys, e.g. [20, 200].
Now I'm trying to split the data from the json and create a new table, using every element of each array as a new entry.
I've already tried
SELECT JSON_QUERY(abc) as 'Type', Id as 'ValueId' from Table FOR JSON AUTO
Is there any way to handle splitting given that some arrays might be empty and look like
[]
?
A fairly simple approach would be to use OUTER APPLY with OPENJSON.
First, create and populate a sample table (please save us this step in your future questions):
DECLARE @T AS TABLE
(
    Id int,
    Value nvarchar(20)
)

INSERT INTO @T VALUES
(1, '[10]'),
(2, '[20, 200]'),
(3, '[]'),
(4, '')
The query:
SELECT Id, JsonValues.Value
FROM @T As t
OUTER APPLY
    OPENJSON(t.Value) As JsonValues
WHERE ISJSON(t.Value) = 1
Results:
Id Value
1 10
2 20
2 200
3 NULL
Note that the ISJSON condition in the WHERE clause will prevent exceptions in case the Value column contains anything other than valid json (an empty array [] is still considered valid for this purpose).
If you don't want to return a row where the json array is empty, use CROSS APPLY instead of OUTER APPLY.
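A quick way to see which of the sample values pass that filter:
SELECT ISJSON('[10]') AS simple_array,  -- 1: valid
       ISJSON('[]')   AS empty_array,   -- 1: an empty array is still valid JSON
       ISJSON('')     AS empty_string   -- 0: filtered out by the WHERE clause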
Your own code calling FOR JSON AUTO tries to create JSON out of tabular data. But what you really need seems to be the opposite direction: you want to transform JSON into a result set, a derived table. This is done by OPENJSON.
Your JSON seems to be a very minimalistic array.
You can try something along these lines:
DECLARE @json NVARCHAR(MAX) = N'[1,2,3]';
SELECT * FROM OPENJSON(@json);
The result returns the zero-based ordinal position in key, the actual value in value, and a (very limited) type enum.
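For the declaration above, that result is:
key  value  type
0    1      2
1    2      2
2    3      2
(type 2 is OPENJSON's code for a JSON number.)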
Hint: If you want to use this against a table's column you must use APPLY, something along these lines:
SELECT *
FROM YourTable t
OUTER APPLY OPENJSON(t.TheJsonColumn);

How to update a jsonb column in postgresql which is just an array of values and no keys

I need to update a jsonb column called "verticals"; the array of values it holds looks like HOM, BFB, etc. There are no keys in the array.
Table: Product(verticals jsonb, code int)
sample value stored in "verticals" column is
[HOM,rst,NLF,WELSAK,HTL,TRV,EVCU,GRT]
I need to update the value 'HOM' to 'XXX' in the column "verticals" where code =1
My expected output is
[XXX,rst,NLF,WELSAK,HTL,TRV,EVCU,GRT]
Because you chose to store your data in a de-normalized way, updating it is more complicated than it has to be.
You need to first unnest the array (essentially normalizing the data), replace the values, then aggregate them back and update the column:
update product p
set verticals = t.verticals
from (
    -- expose code so the outer where clause can join on it
    select p2.code,
           jsonb_agg(case when x.v = 'HOM' then 'XXX' else x.v end order by idx) as verticals
    from product p2,
         jsonb_array_elements_text(p2.verticals) with ordinality as x(v, idx)
    where p2.code = 1
    group by p2.code
) t
where p.code = t.code;
This assumes that product.code is a primary (or unique) key!
Online example: http://rextester.com/KZQ65481
If the order of the array elements is not important, this gets easier:
update product
set verticals = (verticals - 'HOM')||'["XXX"]'
where code = 1;
This removes the element 'HOM' from the array (regardless of its position) and then appends 'XXX' to the end of the array.
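A quick illustration of what those two operators do, on a shortened version of the sample array:
SELECT ('["HOM","rst","NLF"]'::jsonb - 'HOM') || '["XXX"]'::jsonb;
-- Result: ["rst", "NLF", "XXX"]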
You should use jsonb_set(target jsonb, path text[], new_value jsonb [, create_missing boolean]) together with array_position(), or array_replace(anyarray, anyelement, anyelement):
https://www.postgresql.org/docs/9.5/static/functions-json.html
https://www.postgresql.org/docs/10/static/functions-array.html
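Since that answer gives no example, here is a hedged sketch of the jsonb_set() route against the same product table; the index of the matching element is looked up first (jsonb paths are zero-based while WITH ORDINALITY counts from one, hence the - 1):
UPDATE product p
SET verticals = jsonb_set(p.verticals, ARRAY[(t.idx - 1)::text], '"XXX"')
FROM (
    -- locate the position of 'HOM' within the jsonb array
    SELECT code, idx
    FROM product, jsonb_array_elements_text(verticals) WITH ORDINALITY AS e(v, idx)
    WHERE code = 1 AND v = 'HOM'
) t
WHERE p.code = t.code;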

Get table alias from Zend_Db_Table_Select

I'm working on an Active Record pattern (similar to RoR/Cake) for my Zend Framework library. My question is this: How do I figure out whether a select object is using an alias for a table or not?
$select->from(array("c" => "categories"));
vs.
$select->from("categories");
and I pass this to a "fetch" function which adds additional joins and whatnot to get the row relationships automatically... I want to add some custom SQL, either "c.id" or "categories.id", based on how the user used the "from" method.
I know I can use
$parts = $select->getPart(Zend_Db_Select::FROM);
to get the from data as an array, and the table name or alias seems to be in "slot" 0 of said array. Will the table name or alias always be in slot zero? i.e. can I reliably use:
$tableNameOrAlias = $parts[0];
Sorry if this is convoluted, but I hope you can help! :)
Logically, I would think that's how it should work. To be on the safe side, build a few dummy queries using a Select() and dump the parts array using print_r() or such.
I just performed this test; the alias is the array key, it is not a zero-based numeric array:
$select = $this->db->select()->from(array("c" => "categories","d" => "dummies"));
$parts = $select->getPart(Zend_Db_Select::FROM);
echo '<pre>';
print_r($parts);
echo '</pre>';
Output:
Array
(
    [c] => Array
        (
            [joinType] => inner join
            [schema] =>
            [tableName] => categories
            [joinCondition] =>
        )
)
So you would need to reference it as $parts["c"]; the table name or alias is the array key, which you can retrieve with array_keys($parts).