Dynamic Column name changes every year in where clause - tsql

I'm trying to automate a T-SQL select statement on a website. Each year the column names change, with the number at the end of each name increasing by 1, so instead of manually updating the site I'm trying to figure out how to include the dynamic column name in the WHERE clause.
The data looks something like this:
+------+------+------+------+------+
| FY18 | FY19 | FY20 | FY21 | FY22 |
+------+------+------+------+------+
|    1 |    0 |    1 |    0 |    1 |
|    1 |    1 |    0 |    1 |    0 |
|    1 |    0 |    0 |    1 |    0 |
+------+------+------+------+------+
Here is what I've come up with so far. The select statement looks something like this:
Select distinct
POS
from TBL_Staff
where [' 'FY'+right(year(dateadd(month,3,getdate()))-1,2) '] = 1
What I'm trying to figure out is whether there is a way to dynamically generate the date and get SQL to recognize name+date as a column name.
Note: This is fake data so please let me know if something isn't clear
Any help on this is most appreciated.

You could use dynamic SQL
DECLARE @sql NVARCHAR(MAX) = 'Select distinct
POS
from TBL_Staff
where FY' + CAST(right(year(dateadd(month,3,getdate()))-1,2) AS VARCHAR) + ' = 1'
exec sp_executesql @sql
@sql is a variable that is used to construct a SQL statement dynamically; the sp_executesql procedure then executes it. Beware of using dynamic SQL when other alternatives exist. It's harder to read, can be difficult to maintain, and can be a security issue if you take input from users and aren't careful to sanitize the input parameters.
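As a hedged aside (not part of the original answer), you can also build the column name separately and wrap it in QUOTENAME so the dynamic identifier is safely quoted. A minimal sketch; the table and column names are the question's, the rest is illustrative:
-- Build the column name, e.g. 'FY24', then quote it as [FY24]
DECLARE @col SYSNAME =
    'FY' + RIGHT(CAST(YEAR(DATEADD(MONTH, 3, GETDATE())) - 1 AS VARCHAR(4)), 2);

DECLARE @sql NVARCHAR(MAX) =
    N'SELECT DISTINCT POS FROM TBL_Staff WHERE ' + QUOTENAME(@col) + N' = 1';

EXEC sp_executesql @sql;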

Related

ERROR: user "xxx" cannot be dropped because permission dependency is found

I'm trying to drop a user from a Redshift cluster but receive the following error:
drop user "xxx";
ERROR: user "xxx" cannot be dropped because permission dependency is found
I've installed the admin views and revoked all privileges from all tables and schemas. I cannot find any reference to this specific error, and it is also not covered in this guide: https://aws.amazon.com/premiumsupport/knowledge-center/redshift-user-cannot-be-dropped/
select ddl from admin.v_generate_user_grant_revoke_ddl where ddltype='revoke' and grantee='xxx' order by objseq, grantseq desc;
ddl
-----
(0 rows)
select ddl, grantor, grantee from admin.v_generate_user_grant_revoke_ddl where grantee='xxx' and ddltype='grant' and objtype <>'default acl' order by objseq,grantseq;
ddl | grantor | grantee
-----+---------+---------
(0 rows)
select * from pg_user where usename = 'xxx';
usename | usesysid | usecreatedb | usesuper | usecatupd | passwd | valuntil | useconfig
------------+----------+-------------+----------+-----------+----------+----------+-----------
xxx | 110 | f | f | f | xxx | |
(1 row)
select * from pg_default_acl where defacluser=110;
defacluser | defaclnamespace | defaclobjtype | defaclacl
------------+-----------------+---------------+-----------
(0 rows)
The user is not in any groups either. Any guidance is appreciated.
The user had not run any queries for at least the last two weeks; he was not very active, and his only access was through a Redshift/Excel ODBC setup. I did not check initially, but a day later there were no active sessions for him. I re-ran the DROP USER command and got the expected result, so there must have been some lingering 'something'. For reference, I ran this command to see who had an active session: select * from stv_sessions; (https://docs.aws.amazon.com/redshift/latest/dg/r_STV_SESSIONS.html). My problem was resolved by simply trying again the next day.
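For anyone hitting the same thing, a minimal sketch of that check-then-retry sequence (the user_name filter is an addition for illustration; the original check was just the plain select):
-- Check for lingering sessions held by the user
select process, user_name, starttime
from stv_sessions
where user_name = 'xxx';

-- Once no sessions remain, retry the drop
drop user "xxx";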

Copy multiple csv files to PostgreSQL

I have multiple CSV files and need to copy them into one PostgreSQL database. In addition, I want to add one more column to my table that shows which file each row came from. Can anybody help with this? Thanks.
You can create a temporary table matching the structure of your CSV data, and a final table with the same structure plus a column for the CSV file name, then use a PL/pgSQL block to load the files into the table. For example:
Files:
file1.csv
hello,world21,2021
hello,world20,2020
hello,world12,2019
file2.csv
hello,world18,2018
hello,world17,2017
hello,world16,2016
Code to load the CSV files:
create table csvtable (file text, hi text, world text, yr integer);

do $$
declare
    files text[] := array['file1.csv','file2.csv'];
    copy_command text;
    x text;
begin
    create table csvtable_tmp (hi text, world text, yr integer);
    foreach x in array files
    loop
        -- builds e.g.: copy csvtable_tmp from 'file1.csv' csv delimiter ','
        copy_command := 'copy csvtable_tmp from ''' || x || ''' csv delimiter '',''';
        execute copy_command;
        insert into csvtable select x, * from csvtable_tmp;
        truncate csvtable_tmp;
    end loop;
    drop table csvtable_tmp;
end;
$$;
db=# select * from csvtable;
file | hi | world | yr
-----------+-------+---------+------
file1.csv | hello | world21 | 2021
file1.csv | hello | world20 | 2020
file1.csv | hello | world12 | 2019
file2.csv | hello | world18 | 2018
file2.csv | hello | world17 | 2017
file2.csv | hello | world16 | 2016
(6 rows)
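One caveat worth adding as a note (not part of the original answer): the COPY inside the DO block runs on the database server, so the paths in the files array must be readable by the PostgreSQL server process. If the CSV files live on the client machine, psql's \copy meta-command is the client-side equivalent, e.g.:
\copy csvtable_tmp from 'file1.csv' csv delimiter ','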

Bad or inaccessible location specified in external data source

I'm trying to save a file from Azure File Storage into an Azure SQL Database table's varbinary(max) column (storing the whole content, as advised in this answer). I've tried a few times to adjust my SQL query, but without success. Here's the code, which results in the error 'Bad or inaccessible location specified in external data source "my_Azure_Files".' when it invokes OPENROWSET:
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'mypassword123'
GO
CREATE DATABASE SCOPED CREDENTIAL [https://mystorageaccount.file.core.windows.net/]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sas_token_generated_on_azure_portal';
CREATE EXTERNAL DATA SOURCE my_Azure_Files
WITH (
LOCATION = 'https://mystorageaccount.file.core.windows.net/test',
CREDENTIAL = [https://mystorageaccount.file.core.windows.net/],
TYPE = BLOB_STORAGE
);
Insert into dbo.myTable(targetColumn)
Select BulkColumn FROM OPENROWSET(
BULK 'test.csv',
DATA_SOURCE = 'my_Azure_Files',
SINGLE_BLOB) AS testFile;
CLOSE MASTER KEY;
GO
I'm able to download the test.csv file in a web browser using the same SAS token and URL path. I'm also able to verify that the credential and the external data source were successfully created in the database:
+----------------+----------------+-----------------------------------------------------+--------------+---------------+----------+
| data_source_id | name           | location                                            | type_desc    | credential_id | pushdown |
+----------------+----------------+-----------------------------------------------------+--------------+---------------+----------+
| 65540          | my_Azure_Files | https://mystorageaccount.file.core.windows.net/test | BLOB_STORAGE | 65539         | ON       |
+----------------+----------------+-----------------------------------------------------+--------------+---------------+----------+

+--------------------------------------------------+--------------+---------------+-------------------------+------------------+------------------+
| name                                             | principal_id | credential_id | credential_identity     | create_date      | modify_date      |
+--------------------------------------------------+--------------+---------------+-------------------------+------------------+------------------+
| https://mystorageaccount.file.core.windows.net/  | 1            | 65539         | SHARED ACCESS SIGNATURE | 15/07/2020 13:14 | 15/07/2020 13:14 |
+--------------------------------------------------+--------------+---------------+-------------------------+------------------+------------------+
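(The catalog queries behind this output aren't shown in the question; presumably something along these lines, listed here as an assumption:)
SELECT data_source_id, name, location, type_desc, credential_id
FROM sys.external_data_sources;

SELECT name, principal_id, credential_id, credential_identity, create_date, modify_date
FROM sys.database_scoped_credentials;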
When creating the SAS in the Azure portal I checked all allowed resource types and all allowed permissions except 'Delete'. I also removed the leading '?' from the SAS before using it in the SECRET field.
I've tried variations of TYPE = BLOB_STORAGE and TYPE = HADOOP as well as SINGLE_BLOB, SINGLE_CLOB and SINGLE_NCLOB parameters.
Please help me solve my problem.
By following the steps below, I was able to successfully insert into the target table.
While generating the SAS, select the Allowed Resource Types 'Container' and 'Object'.
Copy the SAS and use the command below:
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'password#123'
Use the SAS token (generated without the '?' at the start) to create the scoped credential:
CREATE DATABASE SCOPED CREDENTIAL MyAzureBlobStorageCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = 'sv=2019-10-10XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX';
Create the external data source referencing your blob path:
CREATE EXTERNAL DATA SOURCE MyAzureBlobStorage
WITH ( TYPE = BLOB_STORAGE,
LOCATION = 'https://mystorageaccount.file.core.windows.net'
, CREDENTIAL= MyAzureBlobStorageCredential
);
Run the insert using OPENROWSET:
Insert into dbo.test(name1)
Select BulkColumn FROM OPENROWSET(
BULK 'test/test.csv',
DATA_SOURCE = 'MyAzureBlobStorage',
SINGLE_BLOB) AS testFile;
You can also use BULK INSERT:
BULK INSERT dbo.test
FROM 'test/test.csv'
WITH (DATA_SOURCE = 'MyAzureBlobStorage',
FORMAT = 'CSV');
This assumes table dbo.test has already been created.

How to split the denominator value from one column and store it in another column using perl?

Example output from my code:
+------------+-------------+-----------+------------+-----------+
| time       | name        | status    | s_used     | s_max     |
+------------+-------------+-----------+------------+-----------+
| 1482222363 | asf         | Closed    | 0/16       | 0         |
| 1482222363 | as0         | Available | 4/16       | 4         |
+------------+-------------+-----------+------------+-----------+
I have attached part of my output, which is generated using a Perl CGI script and a MySQL database.
My question is how to take the denominator value from the s_used column and store only the denominator values in the s_max column using Perl.
I have attached the part of the code that I tried:
if ($i == 4) {
    if (/s_used/) {
        print;
    }
    else {
        chomp();
        my ($num, $s_max) = split /\//, $table_data{2}{'ENTRY'};
        print $s_max;
    }
}
Code explanation:
$i == 4 is the column where I should store the value.
I get the time column from the SQL database as $time, name from $table_data{0}{'ENTRY'}, status from $table_data{1}{'ENTRY'}, and s_used from $table_data{2}{'ENTRY'}.
Expected output:
+------------+-------------+-----------+------------+-----------+
| time       | name        | status    | s_used     | s_max     |
+------------+-------------+-----------+------------+-----------+
| 1482222363 | asf         | Closed    | 0/16       | 16        |
| 1482222363 | as0         | Available | 4/16       | 16        |
+------------+-------------+-----------+------------+-----------+
It seems your code my($num,$s_max)=split /\//,$table_data{2}{'ENTRY'}; is right.
Somehow the value of $s_max is incorrect at the time it is written to the DB. Since you did not post the portion of code that writes $s_max back to the DB, you need to check what value is in $s_max (e.g. by printing it) right before writing it back. From there, try to trace back why an incorrect value is assigned to $s_max, and the problem should be solved.
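To illustrate the suggestion, a minimal standalone sketch (the '4/16' value is made up to stand in for $table_data{2}{'ENTRY'}):
use strict;
use warnings;

my $s_used = '4/16';                       # hypothetical s_used value
my ($num, $s_max) = split /\//, $s_used;   # split on '/' -> numerator, denominator
print "s_max = $s_max\n";                  # prints "s_max = 16"; check this right before the DB write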

Powershell: Deduping an array

I have a pipe-delimited flat file from which I need to deduplicate the entries based on an object. To be specific, a part of the file is:
"001A"|"1"|"*"||"A"|"504367667"|"1"|"2005-06-10-16.57.23.000000"|
"001A"|"1"|"*"||"A"|"504367667"|"1"|"2005-10-24-16.52.29.000000"|
"001A"|"1"|"*"||"A"|"504367667"|"1"|"2007-12-13-15.48.47.000000"|
"001A"|"1"|"*"||"A"|"504367667"|"1"|"2008-12-09-17.10.39.000000"|
"001B"|"1"|"*"||"B"|"800026800"|"1"|"2005-08-08-10.48.16.000000"|
"001C"|"1"|"*"||"C"|"490349139"|"1"|"2006-01-19-12.03.08.000000"|
"001C"|"1"|"*"||"C"|"490349139"|"1"|"2009-03-12-15.08.11.000000"|
The first field is the ID and the last field is a timestamp. I want to deduplicate the entries such that only the entry with the latest timestamp is kept for each ID. The output that I need should be:
"001A"|"1"|"*"||"A"|"504367667"|"1"|"2008-12-09-17.10.39.000000"|
"001B"|"1"|"*"||"B"|"800026800"|"1"|"2005-08-08-10.48.16.000000"|
"001C"|"1"|"*"||"C"|"490349139"|"1"|"2009-03-12-15.08.11.000000"|
I read the file and stored the entries in an array of objects with distinct property names, then I tried:
$inputdeduped = $inputfilearray | Sort-Object Date
$inputdeduped = $inputdeduped | Select-Object ID -Unique
hoping that, once the dates were sorted, the -Unique selection would consistently keep either the first or the last of the duplicated entries in the sorted array, so that I could sort the dates ascending or descending accordingly. However, it doesn't; it seems to pick one entry at random.
Please help me out, or help me understand how the -Unique selection works here.
You can try this:
$newInputdeduped = $inputfilearray | sort id, date -Descending | group -Property id |
    select @{n="GroupedList"; e={ $_.Group | select -First 1 }} |
    select -ExpandProperty GroupedList
This is what I did with your example data after saving it as a txt file:
> $a = Import-Csv -Header "id","n1","n2","v1","n3","n4","n5","date" -Path .\c.txt -delimiter '|'
> $a | ft -AutoSize
id n1 n2 v1 n3 n4 n5 date
-- -- -- -- -- -- -- ----
001A 1 * A 504367667 1 2005-06-10-16.57.23.000000
001A 1 * A 504367667 1 2005-10-24-16.52.29.000000
001A 1 * A 504367667 1 2007-12-13-15.48.47.000000
001A 1 * A 504367667 1 2008-12-09-17.10.39.000000
001B 1 * B 800026800 1 2005-08-08-10.48.16.000000
001C 1 * C 490349139 1 2006-01-19-12.03.08.000000
001C 1 * C 490349139 1 2009-03-12-15.08.11.000000
> $b = $a | sort id, date -Descending | group -Property id | select @{n="list";e={ $_.Group | select -First 1 }} | select -expa list
> $b | ft -AutoSize
id n1 n2 v1 n3 n4 n5 date
-- -- -- -- -- -- -- ----
001C 1 * C 490349139 1 2009-03-12-15.08.11.000000
001B 1 * B 800026800 1 2005-08-08-10.48.16.000000
001A 1 * A 504367667 1 2008-12-09-17.10.39.000000
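A variation on the same idea, sketched under the assumption that $a is the array produced by Import-Csv above: sort inside each group, so the order of the outer sort no longer matters.
# Group by id, then keep the newest entry within each group
$b = $a | Group-Object id | ForEach-Object {
    $_.Group | Sort-Object date -Descending | Select-Object -First 1
}
$b | Format-Table -AutoSize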