PostgreSQL gained enum support some time ago.
CREATE TYPE myenum AS ENUM (
    'value1',
    'value2'
);
How do I get all values specified in the enum with a query?
If you want an array:
SELECT enum_range(NULL::myenum)
If you want a separate record for each item in the enum:
SELECT unnest(enum_range(NULL::myenum))
Additional Information
This solution works as expected even if your enum is not in the default schema. For example, replace myenum with myschema.myenum.
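For instance, the schema-qualified form looks like this (myschema.myenum here stands in for your own schema and type names):
SELECT unnest(enum_range(NULL::myschema.myenum));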
The data type of the returned records in the above query will be myenum. Depending on what you are doing, you may need to cast them to text, e.g.
SELECT unnest(enum_range(NULL::myenum))::text
If you want to specify the column name, you can append AS my_col_name.
Credit to Justin Ohms for pointing out some additional tips, which I incorporated into my answer.
Try:
SELECT e.enumlabel
FROM pg_enum e
JOIN pg_type t ON e.enumtypid = t.oid
WHERE t.typname = 'myenum'
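If you need the labels in their declared order, pg_enum also has an enumsortorder column you can sort by; a minimal variation of the catalog query above:
SELECT e.enumlabel
FROM pg_enum e
JOIN pg_type t ON e.enumtypid = t.oid
WHERE t.typname = 'myenum'
ORDER BY e.enumsortorder;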
SELECT unnest(enum_range(NULL::your_enum))::text AS your_column
This will return a single column result set of the contents of the enum "your_enum" with a column named "your_column" of type text.
You can get all the values of an enum with the following query. It also lets you pick which namespace (schema) the enum lives in, which is required if the enum is defined in more than one namespace; otherwise you can omit that part of the query.
SELECT enumlabel
FROM pg_enum
WHERE enumtypid = (SELECT typelem
                   FROM pg_type
                   WHERE typname = '_myenum'
                     AND typnamespace = (SELECT oid
                                         FROM pg_namespace
                                         WHERE nspname = 'myschema'))
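An equivalent way to scope the lookup to a schema without going through the array type (_myenum) is to join pg_type and pg_namespace directly; a sketch of that variant:
SELECT e.enumlabel
FROM pg_enum e
JOIN pg_type t ON t.oid = e.enumtypid
JOIN pg_namespace n ON n.oid = t.typnamespace
WHERE t.typname = 'myenum'
  AND n.nspname = 'myschema'
ORDER BY e.enumsortorder;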
I needed to store an array of numbers as JSONB in PostgreSQL.
Now I'm trying to calculate statistical moments from this JSON, and I'm running into some issues.
Sample of my data (posted as an image):
I was already able to convert the JSON into a float array, using the following function to convert jsonb to a float array:
CREATE OR REPLACE FUNCTION jsonb_array_castdouble(jsonb) RETURNS float[] AS $f$
SELECT array_agg(x)::float[] || ARRAY[]::float[] FROM jsonb_array_elements_text($1) t(x);
$f$ LANGUAGE sql IMMUTABLE;
Using this SQL:
with data as (
    select
        s.id as id,
        jsonb_array_castdouble(s.snx_normalized) as serie
    FROM spectra s
)
select * from data;
I found a function that can do these calculations and I need to pass an array for that: https://github.com/ellisonch/PostgreSQL-Stats-Aggregate/
But this function requires its input in another form: unnested, one value per row.
I already tried to use unnest, but it gives me only one value, not the entire array :(.
My goal is:
Be able to apply stats moments (kurtosis, skewness) for each row,
like:
index | skewness
------+---------
1     | 21.2131
2     | 1.123
Bonus: Is there a way to avoid the 'with data' CTE and do the transformation directly in the select statement?
snx_wavelengths is JSON, right? Also, you provided it as a picture and not text :( The data looks like (id, snx_wavelengths) - and I believe you meant id when you said index (not a good idea to use a keyword as a name anyway; it would require double-quoted identifiers):
1,[1,2,3,4]
2,[373,232,435,84]
If that is right:
select id, (stats_agg(v::float)).skewness
from myMeasures,
lateral json_array_elements_text(snx_wavelengths) v
group by id;
DBFiddle demo
BTW, you don't need "with data" in the original sample if you don't want to use it; you could replace it with a subquery, i.e.:
select (stats_agg(n)).* from (select unnest(array[16,22,33,24,15])) data(n)
union all
select (stats_agg(n)).* from (select unnest(array[416,622,833,224,215])) data(n);
EDIT: And if you needed other stats too:
select id, "count","min","max","mean","variance","skewness","kurtosis"
from myMeasures,
lateral (select (stats_agg(v::float)).* from json_array_elements_text(snx_wavelengths) v) foo
group by id,"count","min","max","mean","variance","skewness","kurtosis";
DBFiddle demo
I need to populate a new table in a second schema from an existing one, but I am having problems casting the "schema2.b.disclosure_level" enum column to the "schema1.a.disclosure_level" enum. A cast via ::text or ::varchar did not help. Casting to ::schema1.a.disclosure_level raises a cross-database reference error.
INSERT INTO schema1.a (id, disclosure_level)
SELECT schema2.b.id, schema2.b.disclosure_level
FROM schema2.b;
Any ideas?
@Bergi showed me the solution.
INSERT INTO schema1.a (id, disclosure_level)
SELECT schema2.b.id, schema2.b.disclosure_level::text::schema1.disclosure_level_enum
FROM schema2.b;
My mistake was using the column name instead of the enum type name in the cast: schema1.disclosure_level_enum (the type) instead of schema1.a.disclosure_level (the column)!
From here:
https://www.postgresql.org/docs/current/datatype-enum.html
8.7.3. Type Safety
Each enumerated data type is separate and cannot be compared with other enumerated types.
Example:
CREATE TYPE animal AS ENUM ('dog', 'cat', 'rabbit');
CREATE TYPE animal_2 AS ENUM ('dog', 'cat', 'rabbit');
create table enum_test(id integer, a animal, a2 animal_2);
insert into enum_test values (1, 'dog', 'cat');
select a::animal from enum_test ;
a
-----
dog
select a::animal_2 from enum_test ;
ERROR: cannot cast type animal to animal_2
LINE 1: select a::animal_2 from enum_test ;
So the answer is no, you can't cast one enum directly to another.
It is possible if you cast to VARCHAR first and then to the other enum:
"valueType" = NEW."valueType"::VARCHAR::"enum_second"
select typname
from pg_type
will return a list of data types. Some of them can be used when creating a table (e.g. numeric), while others can't (e.g. cardinal_number). How do I get a list of valid column data types?
SELECT typname
FROM pg_type
WHERE typtype NOT IN ('p', 'd');
From documentation:
typtype is b for a base type, c for a composite type (e.g., a table's row type), d for a domain, e for an enum type, or p for a pseudo-type. See also typrelid and typbasetype
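For example, this is why numeric from the question is usable while cardinal_number is not; a quick check (cardinal_number is a domain defined by information_schema, so its typtype is 'd'):
SELECT typname, typtype
FROM pg_type
WHERE typname IN ('numeric', 'cardinal_number');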
I have a details table with an adeet column defined as jsonb[].
A sample value stored in the adeet column is shown in the image below.
Sample data stored in the DB:
I want to return the rows which satisfy id=26088, i.e. rows 1 and 3.
I have tried array operations and JSON operations but it doesn't work as required. Any pointers?
Obviously the column adeet is not of type JSON/JSONB but probably VARCHAR, so we should fix the format in order to convert it into JSONB. I used the replace() and rtrim()/ltrim() functions for this conversion, and preferred to derive an array so that the jsonb_array_elements() function can be used:
WITH t(jobid, adeet) AS
(
    SELECT jobid, replace(replace(replace(adeet, '\', ''), '"{', '{'), '}"', '}')
    FROM tab
), t2 AS
(
    SELECT jobid, ('[' || rtrim(ltrim(adeet, '{'), '}') || ']')::jsonb AS adeet
    FROM t
)
SELECT t.*
FROM t2 t
CROSS JOIN jsonb_array_elements(adeet) j
WHERE (j.value ->> 'id')::int = 26088
Demo
You want to combine JSONB's <@ (is contained by) operator with the generic array ANY construct.
select * from foobar where '{"id":26088}' <@ ANY (adeet);
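A minimal reproduction of that, assuming a jsonb[] column shaped like the question describes (the table name foobar comes from the answer above; the sample values are made up for illustration):
CREATE TABLE foobar (jobid int, adeet jsonb[]);
INSERT INTO foobar VALUES
  (1, ARRAY['{"id": 26088, "state": "added"}'::jsonb, '{"id": 99}'::jsonb]),
  (2, ARRAY['{"id": 12345}'::jsonb]);
-- jsonb <@ jsonb tests containment; ANY applies it to each element of the array
SELECT * FROM foobar WHERE '{"id": 26088}' <@ ANY (adeet);  -- returns only jobid 1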
I need to count the number of objects in a SQL Server 2000 database when restoring it, to make sure the restore includes the latest updates. I also want to get the latest date an object was created or modified.
Specifically, I want counts of the number of tables, views, UDFs, and sprocs, and the date each was created or modified.
select
count(xtype) as [MyCounts],
crdate as [CreateDate],
refdate as [ModifiedDate]
from sysobjects
where xtype like 'U%'
--does not appear to be working correctly.
A very basic solution would simply use grouping and aggregating, like this:
SELECT
xtype,
total_count = COUNT(*),
last_crdate = MAX(crdate),
last_refdate = MAX(refdate)
FROM sysobjects
GROUP BY xtype
This, however, returns information on all types of objects in the current database, including those you didn't mention in your question, like constraints, keys etc.
So you might want to narrow the resulting list by applying a filter on xtype, like this:
SELECT
xtype,
total_count = COUNT(*),
last_crdate = MAX(crdate),
last_refdate = MAX(refdate)
FROM sysobjects
WHERE xtype IN ('U', 'V', 'FN', 'TF', 'IF', 'P')
GROUP BY xtype
Note that there are three types of UDFs in SQL Server. They are designated in sysobjects as follows:
FN – scalar function
TF – multi-statement table-valued function
IF – inline table-valued function
Accordingly the information about functions will be scattered in three rows if you use the above script. If you'd like to group those results in one row, your query would have to be slightly more sophisticated. For example, like this:
SELECT
    type,
    total_count = COUNT(*),
    last_crdate = MAX(crdate),
    last_refdate = MAX(refdate)
FROM (
    SELECT
        type = CASE
            WHEN xtype = 'U' THEN 'table'
            WHEN xtype = 'V' THEN 'view'
            WHEN xtype = 'P' THEN 'proc'
            WHEN xtype IN ('FN', 'TF', 'IF') THEN 'udf'
        END,
        crdate,
        refdate
    FROM sysobjects
    WHERE xtype IN ('FN', 'TF', 'IF', 'P', 'U', 'V')
) s
GROUP BY type
Here the original types are first replaced by custom types based on the xtype value. All rows pertaining to functions are marked simply as udf, regardless of the actual function type, so in the end you can simply group by the custom type column and get the necessary totals, the information on functions now being gathered in one row.
Reference:
sys.sysobjects (Transact-SQL)
You should post the errors that you're getting so it's easier to diagnose. However, looking at your query, it's likely because you're missing a GROUP BY to accompany the COUNT aggregation you're attempting to do.
Now, the real question is how do you display a COUNT aggregation along with line-specific information like the created date?
If there are 5 views and 4 procs, what does each line look like? What are the column headers? Is the COUNT shown on each row along with the detail for that item, like this?
select
c.cnt as [MyCounts],
s.name as [Name],
s.xtype as [Type],
s.crdate as [CreateDate],
s.refdate as [ModifiedDate]
from
sysobjects s
inner join (select COUNT(1) cnt, xtype from sysobjects group by xtype) c
on s.xtype = c.xtype
where
s.xtype like 'U%'