T-SQL - Insert Statement Ignoring a Column

I have a table with a dozen or so columns. I only know the name of the first column and the last 4 columns (in theory, I could know the name of only one column, and not its position).
How can I write a statement that ignores this column? At the moment I do various column counts in ASP and construct the statement that way, but I was wondering if there is an easier way.
UPDATE
INSERT INTO tblName VALUES ('Value for col2', 'Value for col3')
but the table has a col4 and potentially more, which I'd be ignoring.
I basically have a CSV file. This CSV file has no headers, and it has 'X' fewer columns than the table I'm inserting into. I would like to insert the data from the CSV into the table.
There are many tables of different structures and many CSV files. I have created an ASP page to take any CSV and upload it to the corresponding table (based on a parameter within the CSV file).
It works fine; I was just wondering whether, when building the INSERT statement, I could ignore certain columns and cut down on my code.
So let's say the CSV has data as follows
123 | 456 | 789
234 | 567 | 873
The table has a structure of
ID | Col1 | Col2 | Col3 | Col4 | Col5
I currently construct an insert statement that says
INSERT into tblName VALUES ('123', '456', '789', '', '')
However I was wondering if there was a way I could omit the empty values by somehow "ignoring" the columns. As mentioned, the column names are not known apart from the ones I have no data for.

There is no SQL shortcut for
SELECT * (except column col1) FROM ...
You have to construct your SQL from database metadata, as you are already doing if I understood you correctly.
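As a sketch of that metadata-driven construction, here is a minimal Python example using SQLite's PRAGMA table_info as a stand-in for SQL Server's INFORMATION_SCHEMA.COLUMNS; the table, columns, and values are hypothetical:

```python
import sqlite3

# Hypothetical table standing in for tblName; with SQL Server you would
# query INFORMATION_SCHEMA.COLUMNS instead of SQLite's PRAGMA table_info.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tblName (ID INTEGER, Col1 TEXT, Col2 TEXT, Col3 TEXT)")

skip = {"ID", "Col3"}  # columns we have no data for

# Read the column names from the database metadata, drop the skipped ones,
# and build the INSERT from what remains.
cols = [row[1] for row in con.execute("PRAGMA table_info(tblName)")]
use = [c for c in cols if c not in skip]
sql = f"INSERT INTO tblName ({', '.join(use)}) VALUES ({', '.join('?' for _ in use)})"

con.execute(sql, ("123", "456"))
print(sql)  # INSERT INTO tblName (Col1, Col2) VALUES (?, ?)
```

The same pattern works from ASP: select COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS for the target table, then assemble the statement.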

You can specify the columns that you want to insert.
So instead of...
INSERT INTO tblName VALUES ('Value for col2', 'Value for col3')
You could specify column names...
INSERT INTO tblName (ColumnName1, ColumnName2) VALUES ('Value for col2', 'Value for col3')
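For intuition, a small SQLite sketch (standing in for T-SQL; the table and defaults are made up) showing that columns left out of the column list are simply ignored and receive NULL or their default:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# col1 has no default and is left NULL; col4 falls back to its default.
con.execute("CREATE TABLE tblName (col1 TEXT, col2 TEXT, col3 TEXT, "
            "col4 TEXT DEFAULT 'n/a')")

# Only the columns we actually have values for are named.
con.execute("INSERT INTO tblName (col2, col3) VALUES (?, ?)",
            ("Value for col2", "Value for col3"))

print(con.execute("SELECT * FROM tblName").fetchone())
# (None, 'Value for col2', 'Value for col3', 'n/a')
```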

Related

How to avoid commas for a specific column in csv file

I have a similar question about an input CSV file. I'm currently loading data from a CSV file into a DB and getting the wrong data in the target table; I'm not sure how to handle a comma inside a field.
I have this input:
Col1,col2,col3
1,2,3,4
The output should be populated with "3,4" kept together in col3:
Col1 | col2 | col3
1 | 2 | 3,4
Instead I'm getting the data split across four columns, like below. Can someone please help me? I'm not sure how to do it in Talend.
Col1 | col2 | col3 | col4
1 | 2 | 3 | 4
The "3,4" was not kept in one column; I don't know how to stop the comma between 3 and 4 being treated as a delimiter.
Use a text enclosure character in the source file:
Col1,col2,col3
"1","2","3,4"
You may also use a different escape character, or even a different delimiter; see "CSV Options" in the tFileInputDelimited component.
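The same behaviour can be seen with Python's csv module, which applies a text enclosure only where needed; the data here is the "3,4" example from the question:

```python
import csv
import io

# Writing with QUOTE_MINIMAL encloses only fields that contain the
# delimiter, so "3,4" stays one field.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_MINIMAL).writerow(["1", "2", "3,4"])
print(buf.getvalue().strip())  # 1,2,"3,4"

# Reading it back yields three fields, not four.
row = next(csv.reader(io.StringIO(buf.getvalue())))
print(row)  # ['1', '2', '3,4']
```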

Inserting a substring of column in Redshift

Hello, I am using Redshift, where I have a staging table and a base table. One of the columns (city) in my base table has data type varchar(100). When I insert the column value from the staging table into the base table, I want the value truncated to the first (leftmost) 100 characters. Is this possible in Redshift?
INSERT into base_table(org_city) select substring(city,0,100) from staging_table;
I tried the above query but it failed. Any solutions, please?
Try this! SUBSTRING in Redshift is 1-based, so a start position of 0 spends one character of the requested length on a nonexistent position: substring(city, 0, 100) returns only the first 99 characters. Start at position 1 to get the full leftmost 100:
INSERT into base_table(org_city) select substring(city, 1, 100) from staging_table;
Note also that a Redshift varchar(100) holds 100 bytes, so multi-byte characters can still overflow after truncating to 100 characters.
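SQLite's substr shares the same 1-based convention, so the off-by-one can be sketched locally (character semantics assumed here, whereas Redshift varchar lengths are bytes):

```python
import sqlite3

con = sqlite3.connect(":memory:")
city = "A" * 150  # longer than the varchar(100) target column

# Starting at 1 returns the full leftmost 100 characters.
(first_100,) = con.execute("SELECT substr(?, 1, 100)", (city,)).fetchone()
print(len(first_100))  # 100

# Starting at 0 spends one character of the length on a nonexistent
# position, returning only 99 characters.
(off_by_one,) = con.execute("SELECT substr(?, 0, 100)", (city,)).fetchone()
print(len(off_by_one))  # 99
```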

RE: Greatest Value with column name

Greatest value of multiple columns with column name?
I was reading the question above (link above) and the accepted answer (which seems correct), and I have several questions concerning this answer.
(Sorry, I had to create a new post; I don't have a high enough reputation to comment on the old post, as it is very old.)
Questions
My first question is: what is the significance of "@var_max_val := "? I reran the query without it and everything ran fine.
My second question is: can someone explain how this achieves its desired result?
CASE @var_max_val WHEN col1 THEN 'col1'
WHEN col2 THEN 'col2'
...
END AS max_value_column_name
My third question is as follows:
It seems that in this CASE statement one has to manually write a line of code ("WHEN x THEN y") for every column in the table. This is fine if you have 1-5 columns, but what if you had 10,000? How would you go about it?
PS: I might be violating some forum rules in this post, do let me know if I am.
Thank you for reading, and thank you for your time!
The linked question is about MySQL, so its answer does not apply to PostgreSQL (e.g. the @var_max_val syntax is MySQL-specific). To accomplish the same thing in PostgreSQL you can use a LATERAL subquery. For example, suppose you have the following table and sample data:
CREATE TABLE t(col1 int, col2 int, col3 int);
INSERT INTO t VALUES (1,2,3), (5,8,6);
Then you can identify the maximum column for each row with the following query:
SELECT *
FROM t, LATERAL (
  VALUES ('col1', col1), ('col2', col2), ('col3', col3)
  ORDER BY 2 DESC
  LIMIT 1
) l(maxcolname, maxcolval);
which produces the following output:
 col1 | col2 | col3 | maxcolname | maxcolval
------+------+------+------------+-----------
    1 |    2 |    3 | col3       |         3
    5 |    8 |    6 | col2       |         8
I think this solution is much more elegant than the MySQL one presented in the linked question.
As for having to manually write the code: unfortunately, I do not think you can avoid that.
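For intuition, here is the per-row logic of that LATERAL subquery sketched in Python: pair each column name with its value and keep the pair with the largest value, just as VALUES ... ORDER BY 2 DESC LIMIT 1 does.

```python
# Sample data mirroring the INSERT above.
rows = [{"col1": 1, "col2": 2, "col3": 3},
        {"col1": 5, "col2": 8, "col3": 6}]

for row in rows:
    # max over (name, value) pairs, compared by value.
    maxcolname, maxcolval = max(row.items(), key=lambda kv: kv[1])
    print(maxcolname, maxcolval)
# col3 3
# col2 8
```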
In Postgres 9.5 you can use jsonb functions to get the column names, so you do not have to write out all the column names manually. The solution needs a primary key (or a unique column) for proper grouping:
create table a_table(id serial primary key, col1 int, col2 int, col3 int);
insert into a_table (col1, col2, col3) values (1,2,3), (5,8,6);
select distinct on(id) id, key, value
from a_table t, jsonb_each(to_jsonb(t))
where key <> 'id'
order by id, value desc;
 id | key  | value
----+------+-------
  1 | col3 |     3
  2 | col2 |     8
(2 rows)

Cassandra CQL3 select row keys from table with compound primary key

I'm using Cassandra 1.2.7 with the official Java driver that uses CQL3.
Suppose a table created by
CREATE TABLE foo (
  row int,
  column int,
  txt text,
  PRIMARY KEY (row, column)
);
Then I'd like to perform the equivalent of SELECT DISTINCT row FROM foo.
As far as I understand, it should be possible to execute this query efficiently inside Cassandra's data model (given the way compound primary keys are implemented), as it would just query the 'raw' table.
I searched the CQL documentation but I didn't find any options to do that.
My backup plan is to create a separate table - something like
CREATE TABLE foo_rows (
  row int,
  PRIMARY KEY (row)
);
But this requires the hassle of keeping the two in sync: writing to foo_rows for every write to foo (and a performance penalty as well).
So is there any way to query for distinct row(partition) keys?
I'll give you the bad way to do this first. If you insert these rows:
insert into foo (row,column,txt) values (1,1,'First Insert');
insert into foo (row,column,txt) values (1,2,'Second Insert');
insert into foo (row,column,txt) values (2,1,'First Insert');
insert into foo (row,column,txt) values (2,2,'Second Insert');
Doing a
select row from foo;
will give you the following:
row
-----
1
1
2
2
Not distinct since it shows all possible combinations of row and column. To query to get one row value, you can add a column value:
select row from foo where column = 1;
But then you will get this warning:
Bad Request: Cannot execute this query as it might involve data filtering and thus may have unpredictable performance. If you want to execute this query despite the performance unpredictability, use ALLOW FILTERING
Ok. Then with this:
select row from foo where column = 1 ALLOW FILTERING;
row
-----
1
2
Great. What I wanted. Let's not ignore that warning, though. If you only have a small number of rows, say 10,000, this will work without a huge performance hit. Now what if I have 1 billion? Depending on the number of nodes and the replication factor, your performance is going to take a serious hit. First, the query has to scan every possible row in the table (read: full table scan) and then filter the unique values for the result set. In some cases this query will simply time out. Given that, it's probably not what you were looking for.
You mentioned that you were worried about a performance hit from inserting into multiple tables. Multiple-table inserts are a perfectly valid data modeling technique, and Cassandra can do an enormous amount of writes. As for it being a pain to sync, I don't know your exact application, but I can give general tips.
If you need a distinct scan, you need to think about partition columns. This is what we call an index or query table. The important thing to consider in any Cassandra data model is the application's queries. If I were using IP addresses as the row, I might create something like this to scan all the IP addresses I have in order:
CREATE TABLE ip_addresses (
  first_quad int,
  last_quads ascii,
  PRIMARY KEY (first_quad, last_quads)
);
Now, to insert some rows in my 192.x.x.x address space:
insert into ip_addresses (first_quad,last_quads) VALUES (192,'000000001');
insert into ip_addresses (first_quad,last_quads) VALUES (192,'000000002');
insert into ip_addresses (first_quad,last_quads) VALUES (192,'000001001');
insert into ip_addresses (first_quad,last_quads) VALUES (192,'000001255');
To get the distinct rows in the 192 space, I do this:
SELECT * FROM ip_addresses WHERE first_quad = 192;
first_quad | last_quads
------------+------------
192 | 000000001
192 | 000000002
192 | 000001001
192 | 000001255
To get every single address, you would just need to iterate over every possible row key from 0-255. In my example, I would expect the application to be asking for specific ranges to keep things performant. Your application may have different needs but hopefully you can see the pattern here.
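A hypothetical client-side sketch of that iteration (quad_queries is an invented helper): instead of filtering the whole table, issue one targeted query per known partition key.

```python
def quad_queries(quads):
    """Yield one targeted CQL query per partition key value."""
    for q in quads:
        yield f"SELECT * FROM ip_addresses WHERE first_quad = {q};"

# The application asks for a specific range to keep things performant.
queries = list(quad_queries(range(192, 194)))
print(queries[0])  # SELECT * FROM ip_addresses WHERE first_quad = 192;
```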
According to the documentation, as of CQL 3.1.1 Cassandra understands the DISTINCT modifier, so you can now write
SELECT DISTINCT row FROM foo
@edofic
Partition keys are used as a unique index to distinguish different rows in the storage engine, so by nature row keys are always distinct. You don't need to put DISTINCT in the SELECT clause.
Example
INSERT INTO foo(row,column,txt) VALUES (1,1,'1-1');
INSERT INTO foo(row,column,txt) VALUES (2,1,'2-1');
INSERT INTO foo(row,column,txt) VALUES (1,2,'1-2');
Then
SELECT row FROM foo
will return 2 values: 1 and 2
Below is how things are persisted in Cassandra
+---------+-----------------+-----------------+
| row key | column1 / value | column2 / value |
+---------+-----------------+-----------------+
| 1       | 1 / '1-1'       | 2 / '1-2'       |
| 2       | 1 / '2-1'       |                 |
+---------+-----------------+-----------------+

Copy selected query fields name in Mysql Workbench

I am using MySQL Workbench (SQL Editor). I need to copy the list of columns for each query, as was possible in MySQL Query Browser.
For example, for
Select * From tb
I want to get the list of fields, like:
id,title,keyno,......
You mean you want to get one or more column names for a specified table?
1st way
Do SHOW COLUMNS FROM your_table_name, and from there, depending on what you want, add some basic filtering by specifying, for example, that you only want columns whose data type is int or whose default value is null, e.g. SHOW COLUMNS FROM your_table_name WHERE type='mediumint(8)' AND `Null`='YES'
2nd way
This way is a bit more flexible and powerful, as you can combine many tables and other properties kept in MySQL's internal INFORMATION_SCHEMA database, which has records of all db columns, tables, etc. Use the query below as-is, setting TABLE_NAME to the table you want to find the columns for:
SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME='your_table_name';
To limit the matched columns to a specific database, add AND TABLE_SCHEMA='your_db_name' at the end of the query.
Also, to have the column names appear as a single comma-separated list rather than in multiple rows, use GROUP_CONCAT(COLUMN_NAME) instead of only COLUMN_NAME.
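The same idea sketched against SQLite for illustration, with PRAGMA table_info standing in for INFORMATION_SCHEMA.COLUMNS and a made-up table tb; joining the names reproduces what GROUP_CONCAT(COLUMN_NAME) returns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tb (id INTEGER, title TEXT, keyno TEXT)")

# Column names come from the metadata, not from parsing the query.
cols = [row[1] for row in con.execute("PRAGMA table_info(tb)")]
print(",".join(cols))  # id,title,keyno
```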
To select all the columns in a select statement, go to the SCHEMAS panel, right-click the table whose column names you want, then choose "Copy to Clipboard > Select All Statement".
The accepted solution is fine, but it is limited to field names in tables. To handle arbitrary queries, standardize your SELECT clause so that a regex can strip out just the column aliases. I format my SELECT clause as one element per row, so
Select 1 + 1 as Col1, 1 + 2 Col2 From Table
becomes
Select 1 + 1 as Col1
, 1 + 2 Col2
From Table
Then I use a simple regex on the one-element-per-row version to replace "^.* " (excluding quotes) with nothing. The regex matches everything up to the final space on the line, so it assumes your column aliases don't contain spaces (replace spaces with underscores). Or, if you don't like one element per row, always use the "as" keyword to give the regex a handle it can grasp.
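A minimal sketch of that regex pass in Python, using the reformatted clause above (assumes one element per line and no spaces inside aliases):

```python
import re

select_clause = """Select 1 + 1 as Col1
, 1 + 2 Col2"""

# Per line, strip everything up to and including the final space,
# leaving only the alias.
aliases = [re.sub(r"^.* ", "", line) for line in select_clause.splitlines()]
print(aliases)  # ['Col1', 'Col2']
```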