I know that PostgreSQL 9+ has a module to generate UUIDs, but after a lot of searching I couldn't find any solution for generating UUIDs in PostgreSQL 8.2. I need to keep supporting this version because a lot of my customers are still on 8.2. I am thinking of using the current timestamp as a UUID.
Running the query select extract(epoch from now()); gives a timestamp like 1384396600.53923. I want to store it as character data and use it as a UUID. Is there any possibility of generating duplicate timestamps? And is it feasible to use a timestamp as a UUID at all?
I am not sure whether it is feasible, but I suggest you not use a timestamp as a UUID.
I don't know how much concurrency your system has to handle, but under heavy concurrency (or just by unlucky chance) two sessions can obtain the same timestamp, and then something will go wrong, right?
So use some other way to make a "UUID".
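For example, here is a minimal sketch of one such way, assuming only built-ins that I believe exist in 8.2 (md5(), random(), clock_timestamp()). It produces a random, UUID-shaped string; it does not set the version/variant bits of a real version-4 UUID, but the collision risk is far lower than with a bare timestamp:
-- Sketch: a random UUID-like string in PostgreSQL 8.2, no extensions.
-- Hash randomness plus the wall clock, then format as 8-4-4-4-12.
SELECT substring(h from 1  for 8)  || '-' ||
       substring(h from 9  for 4)  || '-' ||
       substring(h from 13 for 4)  || '-' ||
       substring(h from 17 for 4)  || '-' ||
       substring(h from 21 for 12) AS uuid_like
FROM (SELECT md5(random()::text || clock_timestamp()::text) AS h) AS t;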
I am trying to find an alternative to the ORA_HASH Oracle function in Postgres (EDB). I know there are hashtext() and md5(). hashtext() would be ideal, but it's not documented, so I can't use it for certain reasons. I'd like to know if there is any way of using md5() and getting the same value that ORA_HASH would return for the same input.
For example, this is how I use it and what I get in Oracle:
SELECT ORA_HASH('huyu') FROM dual;
----------------------------------
3284595515
I'd like to do something similar in postgres, so that if I pass the 'huyu' value, I'd get the exact same '3284595515' value.
Now, I know that ORA_HASH returns a number and md5() returns a hexadecimal string. I think I'd need a function that converts the 32-character hexadecimal value into a number, but I can't get the same value and I'm not sure it is possible.
Thank you in advance for any suggestions
If you rely on the exact implementation of ORA_HASH, which is a proprietary hash function in closed-source software, you are locked to this vendor, sorry.
I don't see how using PostgreSQL's hashtext() is worse than ORA_HASH: it is documented (in the source), its implementation is public, and it is not going to be removed, because it is required for hash partitioning.
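If all you actually need is some stable, non-negative 32-bit number computed inside Postgres (not a value that matches ORA_HASH), a hedged sketch is to take the first 8 hex digits of md5() and cast them through a bit string:
-- Sketch: md5() reduced to a non-negative 32-bit number in Postgres.
-- NOTE: this does NOT reproduce ORA_HASH; the two disagree on every input.
SELECT ('x' || lpad(substring(md5('huyu') from 1 for 8), 16, '0'))::bit(64)::bigint
       AS md5_as_number;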
I have a table with a column named "data" that contains a JSON object of telemetry data.
The telemetry data is received from devices, some of which have received a firmware upgrade. After the upgrade, some of the properties have gained a namespace prefix (e.g. "propertyname" is now "dsns:propertyname"), and a few of the properties are suddenly camel-cased (e.g. "propertyname" is now "propertyName").
Neither the meaning of the properties nor the number of properties has changed.
When querying the table, I do not want to get the whole "data", as the json is quite large. I only want the properties that are needed.
For now I have simply used json_build_object, such as:
select json_build_object(
           'requestedproperty', "data" -> 'requestedproperty',
           'anotherrequestedproperty', "data" -> 'anotherrequestedproperty'
       ) as "data"
from device_data
where id = 'f4ddf01fcb6f322f5f118100ea9a81432'
  and timestamp >= '2020-01-01 08:36:59.698'
  and timestamp <= '2022-02-16 08:36:59.698'
order by timestamp desc
But it does not work for fetching data from after the firmware upgrade for the devices that have received it.
It still works for querying data from before the upgrade, and I would like to have only one query for this if possible.
Is there some easy way to "ignore" namespaces and be case-insensitive through json_build_object?
I'm obviously no PostgreSQL expert, and would love all the help I can get.
This kind of insanity is exactly why device developers should not ever be allowed anywhere near a keyboard :-)
This is not an easy fix, but so long as you know the variations on the key names, you can use coalesce() to accomplish your goal and update your query with each new release:
json_build_object(
    'requestedproperty', coalesce(
        data -> 'requestedproperty',
        data -> 'dsns:requestedproperty',
        data -> 'requestedProperty'),
    'anotherrequestedproperty', coalesce(
        data -> 'anotherrequestedproperty',
        data -> 'dsns:anotherrequestedproperty',
        data -> 'anotherRequestedProperty')
) as "data"
Edit to add: You can also use jsonb_each_text(), treat the key-value pairs as rows, and then use lower() and a regex split to normalize key names more generally, but I bet that the kind of scatterbrains behind the inconsistencies you already see will eventually produce a misspelled key name someday.
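For completeness, a hedged sketch of that more general approach. It assumes the "data" column is (or can be cast to) jsonb, PostgreSQL 9.4+ for jsonb_each_text() and json_object_agg(), and that (id, timestamp) identifies a reading; the regexp strips any namespace prefix up to a colon before lower-casing:
-- Sketch: normalize keys (drop "ns:" prefixes, lower-case), then
-- aggregate just the wanted properties back into one JSON object.
select json_object_agg(
           lower(regexp_replace(kv.key, '^[^:]*:', '')), kv.value
       ) as "data"
from device_data d
cross join lateral jsonb_each_text(d.data::jsonb) as kv(key, value)
where d.id = 'f4ddf01fcb6f322f5f118100ea9a81432'
  and d.timestamp >= '2020-01-01 08:36:59.698'
  and d.timestamp <= '2022-02-16 08:36:59.698'
  and lower(regexp_replace(kv.key, '^[^:]*:', ''))
      in ('requestedproperty', 'anotherrequestedproperty')
group by d.id, d.timestamp
order by d.timestamp desc;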
I have a DB2 data source and an Oracle 12c target.
The Oracle side has a DB link to the DB2 defined, which is working in general.
Now I have a huge table in DB2 with a timestamp column (let's call it ROW_CHANGED) recording row changes. I want to retrieve rows which have changed after a particular time.
Running
SELECT * FROM lib.tbl WHERE ROW_CHANGED > '2016-08-01 10:00:00'
on the DB2 side returns exactly 1 row after ca. 90 seconds, which is fine.
Now I try the same query from Oracle via the DB link:
SELECT * FROM lib.tbl@dblink_name WHERE ROW_CHANGED > TO_TIMESTAMP('2016-08-01 10:00:00')
This runs for hours and ends up in a timeout.
I read some Oracle docs and found distributed query optimization tips, but most of them refer to joining a local table to a remote one, which is not my case.
In my desperation, I have tried the DRIVING_SITE hint, without effect.
Now I wonder when the WHERE part of the query is evaluated. Since I have to use Oracle syntax rather than DB2 syntax for the query, is it possible that Oracle first copies the full table and applies the WHERE clause afterwards? I did some research but did not find anything that would help me in this direction.
The ROW_CHANGED column is a hidden column in DB2, if that matters.
Thanks for any hint in advance.
Update
Thanks @all for the help. I'll share what did the trick for me.
First of all, I used TO_TIMESTAMP since the DB2 column is also a timestamp (not a date), and I had expected to circumvent implicit conversions this way.
Without the explicit conversion I ran into ORA-28534: Heterogeneous Services preprocessing error, and I have no hope of touching the DB configuration within a reasonable time.
The explain plan, by the way, did not reveal much. It showed a FULL hint and no conversion on the predicates. Oddly, it showed the ROW_CHANGED column as DATE; I wonder why.
I tried Justin's suggestion to use a bind variable, but I got ORA-28534 again. The next thing I did was wrap the query in a PL/SQL block (it will run in a stored procedure later anyway).
declare
    v_tmstmp TIMESTAMP := '01.08.16 10:00:00';
begin
    INSERT INTO ORAUSER.TMP_TBL (SRC_PK, ROW_CHANGED)
    SELECT SRC_PK, ROW_CHANGED
    FROM lib.tbl@dblink_name
    WHERE ROW_CHANGED > v_tmstmp;
end;
This executed in the same time as on DB2 itself. The date format is DD.MM.YY here since it is, unfortunately, the default, so the string literal above is converted implicitly.
When changing the variable assignment to
v_tmstmp TIMESTAMP := TO_TIMESTAMP('01.08.16 10:00:00','DD.MM.YY HH24:MI:SS');
I got the same problem as before.
Meanwhile the DB2 operators have created an index on the ROW_CHANGED column, which I had requested earlier that day. This seems to have solved the problem in general; even my original query finishes in no time now.
If you are actually using an Oracle-specific conversion function like to_timestamp, that forces the predicate to be evaluated on the Oracle side. Oracle isn't going to know how to convert a built-in function like to_timestamp into an exactly equivalent function call in DB2.
If you used a bind variable, that would be more likely to get evaluated on the DB2 side. But that may be complicated by the data type mapping between different databases-- there may not be a perfect mapping between one engine's date and another engine's timestamp data type. If this was a numeric column, a bind variable would be almost certain to get pushed. In this case, it probably involves playing around a bit to figure out exactly what data type to use for your variable that works for your framework, Oracle, and DB2.
If using a bind variable doesn't work, you can force the predicate to be evaluated on the remote server using the dbms_hs_passthrough package. That lets you send a query verbatim to the remote server which allows you to do things like use functions defined in your DB2 database. That's a bit of overkill in this situation, hopefully, but it's nice to have the hammer as your backup if the simpler solution doesn't work quickly enough.
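For reference, a hedged sketch of that passthrough route, reusing the link and table names from the question (the SRC_PK type is an assumption, and the literal uses the DB2-side syntax the question showed working):
declare
    l_cursor  binary_integer;
    l_rows    binary_integer;
    l_src_pk  varchar2(100);  -- assumed type; adjust to the real column
begin
    l_cursor := dbms_hs_passthrough.open_cursor@dblink_name;
    dbms_hs_passthrough.parse@dblink_name(
        l_cursor,
        'SELECT SRC_PK FROM lib.tbl WHERE ROW_CHANGED > ''2016-08-01 10:00:00''');
    loop
        l_rows := dbms_hs_passthrough.fetch_row@dblink_name(l_cursor);
        exit when l_rows = 0;  -- no more rows from DB2
        dbms_hs_passthrough.get_value@dblink_name(l_cursor, 1, l_src_pk);
        -- process l_src_pk here, e.g. insert it into a local table
    end loop;
    dbms_hs_passthrough.close_cursor@dblink_name(l_cursor);
end;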
Is it possible to sort the data inside an Oracle table? Like ascending/descending via a certain column, alphabetically. Oracle 10g Express.
You could try
Select *
from some_table
order by some_column asc
This will sort the results by some_column and place them in ascending order. Use desc instead of asc if you want descending order. Or did you mean to have the ordering in the physical storage itself?
I believe it's possible to specify the ordering/sorting of an indexed column in storage. It's probably closest to what you want. I don't usually use this index sort feature, but for more info see: http://www.stanford.edu/dept/itss/docs/oracle/10g/server.101/b10759/statements_5010.htm#i2062718
Perhaps you could use an index-organized table (IOT) to ensure that the data is stored ordered by the index.
Have a look at the physical properties clause of the CREATE TABLE statement:
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7002.htm#i2128663
What is the problem that you are trying to solve though? An IOT may or may not be what you should be using.
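For illustration, a minimal IOT sketch with made-up names; rows are physically stored in primary-key order:
-- Sketch: an index-organized table keeps rows stored in key order.
CREATE TABLE sorted_docs (
    id    NUMBER PRIMARY KEY,
    title VARCHAR2(200)
) ORGANIZATION INDEX;
Note that even with an IOT you should still write ORDER BY when you need ordered output; the physical organization is a storage detail, not a guarantee about result order.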
As defined by the relational model, the rows and columns in a table are unordered. That's the theory at least.
In practice, if you want the data to come out of a query in a particular order then you should always use the ORDER BY clause. The order of the output is not guaranteed unless you provide one.
It is possible to use an ORDER BY when inserting into a table, but that doesn't guarantee the order in which data will be output. A query might come out in the same order every time... but that doesn't mean it will come out in the same order next time.
There were issues when Oracle 10g came out where aggregate queries (with GROUP BY) were not coming out sorted, because users had come to rely on the data being sorted as a side effect of the grouping. With the introduction of HASH GROUP BY in addition to SORT GROUP BY, people were caught out. This was a useful reminder that ORDER BY should always be used.
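In other words (table and column names here are illustrative), the only safe form is to spell the ordering out:
-- Sketch: never rely on GROUP BY for ordering; always add ORDER BY.
SELECT deptno, COUNT(*)
FROM emp
GROUP BY deptno
ORDER BY deptno;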
What do you really mean?
Are you just asking for the ORDER BY clause?
http://www.1keydata.com/sql/sqlorderby.html
Oracle 10g Express supports ANSI SQL like most RDBMSs, so you can sort in the standard manner:
SELECT * FROM Persons ORDER BY LastName
A good tutorial on SQL can be found here: w3schools SQL
Oracle Express does have some limitations compared to the Enterprise Edition but not in the basic SQL dialect it supports.
I'm developing an app in Rails on OS X using PostgreSQL 8.4. I need to setup the database for the app so that standard text queries are case-insensitive. For example:
SELECT * FROM documents WHERE title = 'incredible document'
should return the same result as:
SELECT * FROM documents WHERE title = 'Incredible Document'
Just to be clear, I don't want to use:
(1) LIKE in the where clause or any other type of special comparison operators
(2) citext for the column datatype or any other special column index
(3) any type of full-text software like Sphinx
What I do want is to set the database locale to support case-insensitive text comparison. I'm on Mac OS X (10.5 Leopard) and have already tried setting the Encoding to "LATIN1", with the Collation and Ctype both set to "en_US.ISO8859-1". No success so far.
Any help or suggestions are greatly appreciated.
Thanks!
Update
I have marked one of the answers given as the correct answer out of respect for the folks who responded. However, I've chosen to solve this issue differently than suggested. After further review of the application, there are only a few instances where I need case-insensitive comparison against a database field, so I'll be creating shadow database fields for the ones I need to compare case-insensitively, for example name and name_lower. I believe I came across this solution on the web somewhere. Hopefully PostgreSQL will allow collation options similar to what SQL Server provides in the future (i.e. DOCI).
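A minimal sketch of that shadow-column idea (column names are illustrative, and the application has to keep the shadow column in sync on every write):
-- Sketch: store a lower-cased copy and compare against it.
ALTER TABLE documents ADD COLUMN title_lower text;
UPDATE documents SET title_lower = lower(title);
SELECT * FROM documents WHERE title_lower = lower('Incredible Document');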
Special thanks to all who responded.
You will likely need to use a column function to convert your text, e.g. convert it to uppercase. An example:
SELECT * FROM documents WHERE upper(title) = upper('incredible document')
Note that this may hurt the performance of queries that previously used index scans, but if it becomes a problem you can define a functional index on the target column, e.g.
CREATE INDEX I1 on documents (upper(title))
With all the limitations you have set, possibly the only way to make it work is to define your own = operator for text. It is very likely that it will create other problems, such as creating broken indexes. Other than that, your best bet seems to be to use the citext datatype; that would still let the ORM stuff you're using generate the SQL.
(I am not mentioning the possibility of creating your own locale definition because I haven't ever heard of anyone doing it.)
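To illustrate why that route is hairy, here is a hedged sketch of such an operator. It is named == because text = text already exists and cannot simply be replaced, which also means an ORM will still emit plain = and never use it, so it does not really satisfy the original constraints either:
-- Sketch: a case-insensitive equality operator for text.
CREATE FUNCTION texteq_ci(text, text) RETURNS boolean AS
    'SELECT lower($1) = lower($2)'
    LANGUAGE sql IMMUTABLE;

CREATE OPERATOR == (LEFTARG = text, RIGHTARG = text, PROCEDURE = texteq_ci);

-- Usage: SELECT * FROM documents WHERE title == 'incredible document';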
Your problem and your exclusions are like saying "I want to swim, but I don't want to have to move my arms."
You will drown trying.
I don't think that is what locale or encoding is used for. Encoding is more about picking a character set than determining how to compare characters. If there were such a setting it would be in the config, but I haven't seen one.
If you do not want to use ILIKE for fear of not being able to port to another database, then I would suggest you look into what options might be available in your ORM, e.g. ActiveRecord if you are using that.
Here is something from one of the top Postgres guys: http://archives.postgresql.org/pgsql-php/2003-05/msg00045.php
edit: fixed specific references to locale.
SELECT * FROM documents WHERE title ~* 'incredible document'
Note that ~* is a case-insensitive regular-expression match, so without anchors (^...$) this also matches titles that merely contain the phrase.