How do you handle casting problems using OPENROWSET?

I am trying to use a txt file that contains only keys in a sql query.
SELECT *
FROM MyTable
JOIN OPENROWSET (
        BULK '<File-Of-Ids>.txt',
        FORMATFILE = '<Format-File>.txt'
     ) B ON MyTable.Id = B.Id
WHERE <a field> = <a value>
  AND <another field> = <another value>
But I'm getting the following error: "Error converting data type nvarchar to bigint."
Seems obvious to me that some casting needs to be done on the input from the file. Can anybody point me in the right direction? Thanks in advance!

One of your ID columns has something other than the digits 0-9 in it. This probably isn't by design, but you can cast both sides to VARCHAR to get around it. You probably want to inspect and cleanse the data, though.
CONVERT(VARCHAR(20), MyTable.Id) = CONVERT(VARCHAR(20), B.Id) -- give CONVERT an explicit length; a bigint needs at most 20 characters
You could also bring your .txt file into a temp table that is explicitly declared and typed. That way, if your source file doesn't match the declared columns, an error is thrown at load time, as sketched below.
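A minimal sketch of that temp-table route, assuming the file holds one bigint ID per line (the file name is the same placeholder as above):

CREATE TABLE #Ids (Id BIGINT NOT NULL);

-- The load fails here if any row is not a valid bigint
BULK INSERT #Ids
FROM '<File-Of-Ids>.txt'
WITH (ROWTERMINATOR = '\n');

SELECT *
FROM MyTable
JOIN #Ids B ON MyTable.Id = B.Id;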

Related

Snowflake : Unsupported subquery type cannot be evaluated

I am using Snowflake as a data warehouse. I have a CSV file at AWS S3, and I am writing a merge SQL to merge the data received in the CSV into a table in Snowflake. I have a column in the time dimension table with data type NUMBER(38,0) in SF. This table holds all times of day; one example row has
time_id= 232 and time=12:00
In the CSV I am getting a column labeled time with values such as 12:00.
In the merge SQL I am fetching this value and trying to look up the time_id for it.
update table_name
set start_time_dim_id = (select time_id
                         from time_dim t
                         where t.time_name = csv_data.start_time_dim_id)
On this statement I am getting the error "SQL compilation error: Unsupported subquery type cannot be evaluated".
While struggling to solve it, I googled and found one reference:
https://github.com/snowflakedb/snowflake-connector-python/issues/251
Has anyone else encountered this issue? If so, I would appreciate any pointers.
It seems like a conversion issue. I suggest you check the data in the CSV file; maybe there is a wrong or missing value. Please check your data and make sure it contains numeric values. For example:
create table simpleone ( id number );
insert into simpleone values ( True );
The last statement fails with:
SQL compilation error: Expression type does not match column data type, expecting NUMBER(38,0) but got BOOLEAN for column ID
If you provide sample data, and SQL to produce this error, maybe we can provide a solution.
Unfortunately, correlated and nested subqueries in Snowflake are a bit limited at this stage.
I would try running something like this:
update table_name
set start_time_dim_id = t.time_id
from time_dim t, csv_data
where t.time_name = csv_data.start_time_dim_id
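Since the original statement is part of a merge, another option is to resolve time_id with a join in the USING clause instead of a correlated subquery. A sketch, assuming csv_data is the staged CSV and a hypothetical key column id matches rows between the two tables:

merge into table_name tgt
using (
    -- resolve the time string to its surrogate key up front
    select c.id, t.time_id
    from csv_data c
    join time_dim t on t.time_name = c.start_time_dim_id
) src
on tgt.id = src.id
when matched then
    update set start_time_dim_id = src.time_id;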

What is the reasoning behind this 70-761 exam dump question?

I would like to understand a question about conversion.
The exam dump I'm working with has this question at least three times, with three different solutions that you either approve or reject. Note that RegistrationNumber is defined as varchar(5).
You run the following query:
SELECT UserId FROM tblVehicleRegistration
WHERE RegistrationNumber = 20012
AND RegistrationDate > '2016-01-01'
The query output window displays the following error message: “Conversion failed when converting the varchar value ‘AB012’ to data type int.”
You need to resolve the error.
Solution: You modify the Transact-SQL statement as follows:
SELECT UserId FROM tblVehicleRegistration
WHERE RegistrationNumber = '20012'
AND RegistrationDate > '2016-01-01'
The answer says this does not work.
I would think the test is incorrect. Here is a simplified example:
declare @tblVehicleRegistration table (RegistrationNumber varchar(5))
insert into @tblVehicleRegistration (RegistrationNumber) values ('AB012')
SELECT * FROM @tblVehicleRegistration WHERE RegistrationNumber = 20012   --Fails as expected
SELECT * FROM @tblVehicleRegistration WHERE RegistrationNumber = '20012' --Works as expected
SQL Server has to do a conversion in order to compare 'AB012' and 20012. If you check this link, Data type precedence, you will see that varchar, which has lower precedence, is converted to int, which has higher precedence, in order to make the comparison.
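The precedence rule can be seen in isolation, with no table at all; a sketch:

SELECT CASE WHEN '20012' = 20012 THEN 'equal' ELSE 'not equal' END
-- succeeds: the varchar literal is converted to int 20012

SELECT CASE WHEN 'AB012' = 20012 THEN 'equal' ELSE 'not equal' END
-- fails at run time: "Conversion failed when converting the varchar value 'AB012' to data type int."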
I created a table and tried it hands-on. It worked properly after casting, or after changing the integer value to a string with quotation marks.

Postgres HStore - Updating existing columns with values from Hstore

I am trying to move data from an hstore into regular columns using the following query:
update mytable
set "height" = tags->"height",
    "building:levels" = tags->"building:levels",
    "type" = tags->"type",
    "building:part" = tags->"building:part"
where ( tags->"building:levels" <> '' or "tags"->"height" <> ''
        or "tags"->"type" <> '' or "tags"->"building:part" <> '' );
The idea was to try to speed the query up by testing for non-null values in the hstore. (This is a very large database.)
After two days of the query running, I am sure there must be a better way. This is my first attempt at moving data from an hstore into columns.
I can see populate_record in the documentation, but cannot figure out how to use it to just get the hstore tags I need to the correct columns.
Is my original syntax correct? Is there any way to do it faster using populate_record, and if so, what should the query look like?
Many Thanks
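For reference, hstore values are normally read with single-quoted text keys; in Postgres, double quotes denote identifiers, so tags->"height" looks up whatever value the height column currently holds rather than the literal key 'height'. A corrected sketch of the update, with a key-existence test that can use a GIN index on tags:

update mytable
set "height"          = tags -> 'height',
    "building:levels" = tags -> 'building:levels',
    "type"            = tags -> 'type',
    "building:part"   = tags -> 'building:part'
where tags ?| array['height', 'building:levels', 'type', 'building:part'];
-- ?| is true when the hstore contains any of the listed keys,
-- so rows with none of these tags are skipped entirely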

Check if any field has empty value in a table

I recently, approx 2 months ago, needed to check for any field in a table that has a NULL value.
I am now on another task, but this time I need to check whether any field in a table has an empty-string value.
Starting Query:
;With xmlnamespaces('http://www.w3.org/2001/XMLSchema-instance' as ns)
SELECT *
FROM [Schema].[Table] act
where (
      select act.*
      for xml path('row'), elements xsinil, type
      ).exist('//*/@ns:nil') = 1
I know I need to change @ns:nil, but as I am uneducated in T-SQL's XQuery implementation, I need someone to help me with this initial query, as well as pointers on where to go outside of MSDN to read up on usage and functionality.
Update #1:
;With xmlnamespaces('http://www.w3.org/2001/XMLSchema-instance' as ns)
Select *
from Schema.Table act
where (
      select act.*
      for xml path('row'), elements xsinil, type
      ).value('(//row/*)[1]', 'varchar(max)') = ''
I tried this, but evidently one of the fields contains the character 0x001C and so requires a conversion to binary, varbinary, or image, plus the BINARY BASE64 directive.
Build the XML and check for node values that are empty. This is simpler than checking for null, and as stated in the comment, only (n)varchar columns produce an empty string as a node value.
select *
from T
where (
      select T.*
      for xml path(''), type
      ).exist('*/text()[. = ""]') = 1
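To experiment on a scratch table (names and data made up), a variant of the same idea compares each element's string value instead of requiring a text() node, since the xml type does not store empty text nodes:

create table #T (col1 varchar(10), col2 varchar(10));
insert into #T values ('x', ''), ('y', null), ('z', 'ok');

select *
from #T T
where (
      select T.*
      for xml path(''), type
      ).exist('*[. = ""]') = 1;
-- without xsinil, NULL columns are omitted from the XML,
-- so only the row containing the empty string should match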

syb_describe in DBD::Sybase

I am looking to extract the Sybase datatype for all the columns in a table. When I try to achieve this using $sth->{TYPE}, I get a numeric version of the datatype (i.e. instead of the Sybase datatype varchar, I get 0).
From the DBD::Sybase documentation, I noticed that the SYBTYPE attribute of the syb_describe function might produce what I am looking for, but it seems my understanding was wrong: SYBTYPE also reports the datatype in numeric form only.
Is there any way to fetch the textual representation of actual Sybase datatype (instead of the number)?
It sounds like you wish to reverse engineer the create table definition. Here is an SQL script you can use for Sybase or SQL Server tables.
select c.name,
       "type(size)" = case
                          when t.name in ("char", "varchar") then
                              t.name + "(" + rtrim(convert(char(3), c.length)) + ")"
                          else t.name
                      end,
       "null" = case
                    when convert(bit, (c.status & 8)) = 0 then "NOT NULL"
                    else "NULL"
                end
from syscolumns c, systypes t
where c.id = object_id("my_table_name")
  and c.usertype *= t.usertype
order by c.colid
go
Note: This could still be edited with a nawk script to create a real SQL schema file.
The nawk script would strip the header, add "create table my_table_name", add commas, strip the footer and add a "go".
Good SQL, good night!
I found a workaround (Note: This does not answer the question though):
What I did was simply join the sysobjects, systypes, and syscolumns system tables, as sketched below.
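A sketch of that join, in the same style as the script above (my_table_name is a placeholder):

select "table"  = o.name,
       "column" = c.name,
       "type"   = t.name
from sysobjects o, syscolumns c, systypes t
where c.id = o.id
  and c.usertype = t.usertype
  and o.name = "my_table_name"
order by c.colid
go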