I'm mostly just looking to be pointed in the right direction so I can piece it together myself. I have a decent amount of batch file scripting experience. I'm a PS noob but I think PS would be better for the project below.
We have software which requires the client ID to be part of the install string (along with switches, usr/pass, other switches, logging paths, etc).
I've created batch files (hundreds, actually) which I execute with PSEXEC on remote machines. It does work, but it's unwieldy to maintain; the only change in each file is the client ID.
What I'm attempting to do is have a CSV with 2 columns as input (so I just have to maintain the CSV): machine name (as presented by %hostname%) & client ID. I want to create a script which matches %hostname% to a corresponding row in column 1, reads the data in column 2 of the same row, and then calls that value as a variable in the install string.
E.g., if my CSV has bobs-pc in column 1, row 6, then insert the data from column 2, row 6 (let's call it 0006) into the following install string:
install.exe /client_ID=0006
No looping: I don't want it to install on all machines simultaneously due to the multiple time zones we operate in.
Something like this would be really useful for many projects I have so I'm more interested in learning than having anyone write it for me.
I understand I should be using Import-Csv. I've created a sample csv and can get certain fields to print out in PS. What I need is for a script to be able to insert those fields as variables in the install string.
Sounds like you want something along the lines of this (assumes your CSV has a header row of col1 and col2):
$hostname = 'server1'
$value = Import-CSV myfile.csv | where { $_.col1 -eq $hostname } | select -expandproperty col2
Install.exe /client_id=$value
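On the actual target machine you'd presumably swap the hard-coded name for the hostname environment variable and quote the switch so the expanded value stays attached to it. A minimal sketch, assuming the CSV is named myfile.csv as above, the installer takes the /client_ID switch exactly as in the question, and install.exe sits in the current directory (that path is just a placeholder):
$hostname = $env:COMPUTERNAME
$value = Import-Csv myfile.csv | Where-Object { $_.col1 -eq $hostname } | Select-Object -ExpandProperty col2
& .\install.exe "/client_ID=$value"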
I am doing 2 separate SQL queries on separate databases / connections in a PowerShell script. The goal is to export the results of both requests into a single CSV file.
What I am doing now is:
# Create a data table for Clients
$ClientsTable = new-object "System.Data.DataTable"
# Create text commands
$ClientsCommand1 = $connection1.CreateCommand()
$ClientsCommand1.CommandText = $ClientsQuery1
$ClientsCommand2 = $connection2.CreateCommand()
$ClientsCommand2.CommandText = $ClientsQuery2
# Get Clients results
$ClientsResults1 = $ClientsCommand1.ExecuteReader()
$ClientsResults2 = $ClientsCommand2.ExecuteReader()
# Load Clients in data table
$ClientsTable.Load($ClientsResults1)
$ClientsTable.Load($ClientsResults2)
# Export Clients data table to CSV
$ClientsTable | export-csv -Encoding UTF8 -NoTypeInformation -delimiter ";" "C:\test\clients.csv"
where $connection1 and $connection2 are open System.Data.SqlClient.SqlConnection objects.
Both requests work fine and both output data with exactly the same column names. If I export the 2 result sets to 2 separate CSV files, all is fine.
But loading the results in the data table as above fails with the following message:
Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.
If instead I switch the order in which I load the data into the data table, like
$ClientsTable.Load($ClientsResults2)
$ClientsTable.Load($ClientsResults1)
(loading the second result set before the first one), then the error goes away and my CSV is generated without any problem, containing the data from both requests. I cannot think of why appending the data one way would trigger this error while the other way works fine.
Any idea?
I'm skeptical that reversing the order works. More likely, it's doing something like appending to the CSV file that was already created by the first attempt.
It is possible, though, that different primary key definitions from the original data could produce the results you're seeing. DataTable.Load() can do unexpected things when pulling data from an additional source: it will try to merge the data rather than simply append it, using different matching strategies depending on the overload and arguments. If the primary key used for one of the tables causes nothing to match and no records to merge, while the primary key for the other table matches everything, that might explain it.
If you want to just append the results, what you want to do instead is Load() the first result into the datatable, export to CSV, clear the table, load the second result into the table, and then export again in append mode.
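A minimal sketch of that flow, reusing the connections and commands from the question (Export-Csv -Append requires PowerShell 3.0 or later, and both result sets are assumed to have identical columns, as stated):
# Load the first result set and write the CSV
$ClientsTable = New-Object System.Data.DataTable
$ClientsTable.Load($ClientsCommand1.ExecuteReader())
$ClientsTable | Export-Csv -Encoding UTF8 -NoTypeInformation -Delimiter ";" "C:\test\clients.csv"
# Empty the table (the inferred schema stays), load the second result set, and append it
$ClientsTable.Clear()
$ClientsTable.Load($ClientsCommand2.ExecuteReader())
$ClientsTable | Export-Csv -Encoding UTF8 -NoTypeInformation -Delimiter ";" "C:\test\clients.csv" -Append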
Hello Stack Overflowers!
I'm currently exporting a Postgres table as a .csv using a C# application I developed. I'm able to export it with no problem using the following command...
set PGPASSWORD=password
psql -U USERNAME Database_Name
\copy (SELECT * FROM table1) TO C:\xyz\exportfile.csv CSV DELIMITER ',' HEADER;
The problem I am running into is that the .csv is meant to be used with Tableau; however, when importing into Excel I run into the same issue. Text fields get turned into integers in both Tableau and Excel, which causes issues specifically when joining on serial numbers on the Tableau side.
I know I can change these fields in Tableau/Excel manually, but I am trying to find a way to make sure the end user wouldn't need to do this. I'd like for them to just drag and drop the updated PostgreSQL .csv data extracts and be able to start Tableau with no problem; they don't seem very tech-savvy. I know you can connect Tableau directly to Postgres, but in this particular case I am not allowed to due to limitations beyond my control.
I'm using PostgreSQL 12 and Tableau v2019.4.0
EDIT: As requested, providing example data! Both of the fields are TEXT inside of PostgreSQL but the export doesn't specify that.
Excel Formatting
ASSETNUM,ITEMNUM
1834,8.11234E+12
1835,8.11234E+12
Notepad Formatting
ASSETNUM,ITEMNUM
1834,8112345673294
1835,8112345673295
Note: If you select the specific cell in Excel it shows the full number.
CSV files don't have any type information, so programs like Excel/Tableau are free to interpret the data how they like.
However, @JorgeCampos's link provides useful information. For example,
"=""123""","=""123"""
gets interpreted differently than
123,123
when you load it into Excel.
If you want to wrap your data in those quotes, the easiest way is to use PostgreSQL's string functions, e.g.
SELECT '="' || my_column || '"' FROM my_table
The CSV export will then escape the embedded quotes itself, so the field ends up in the file as "=""123""" like the example above.
When I'm exporting data in DBeaver I'm using ${table}, and a number is now being appended to the file name.
I have changed every setting I'm aware of and tried different extensions.
I'm expecting TABLE.CSV and FILE.CSV, but I'm getting TABLE_1.CSV and FILE_2.CSV. What do I do to get rid of the incremental number so my Python scripts can pick up specific table names?
I am trying to read in a large CSV with millions of rows for testing. I know that I can treat the CSV as a database using the provider Microsoft.ACE.OLEDB.12.0.
Using a small data set I am able to read the row contents positionally using .GetValue(int). I am having a tough time finding a better way to read the data (assuming there even is one). If I know the column names beforehand this is easy. However, if I didn't know them, I would have to read in the first line of the file to get that data, which seems silly.
#"
id,first_name,last_name,email,ip_address
1,Edward,Richards,erichards0#businessweek.com,201.133.112.30
2,Jimmy,Scott,jscott1#clickbank.net,103.231.149.144
3,Marilyn,Williams,mwilliams2#chicagotribune.com,52.180.157.43
4,Frank,Morales,fmorales3#google.ru,218.175.165.205
5,Chris,Watson,cwatson4#ed.gov,75.251.1.149
6,Albert,Ross,aross5#abc.net.au,89.56.133.54
7,Diane,Daniels,ddaniels6#washingtonpost.com,197.156.129.45
8,Nancy,Carter,ncarter7#surveymonkey.com,75.162.65.142
9,John,Kennedy,jkennedy8#tumblr.com,85.35.177.235
10,Bonnie,Bradley,bbradley9#dagondesign.com,255.67.106.193
"# | Set-Content .\test.csv
$conn = New-Object System.Data.OleDb.OleDbConnection("Provider=Microsoft.ACE.OLEDB.12.0;Data Source='C:\Users\Matt';Extended Properties='Text;HDR=Yes;FMT=Delimited';")
$cmd=$conn.CreateCommand()
$cmd.CommandText="Select * from test.csv where first_name like '%n%'"
$conn.open()
$data = $cmd.ExecuteReader()
$data | ForEach-Object{
    [pscustomobject]@{
        id         = $_.GetValue(0)
        first_name = $_.GetValue(1)
        last_name  = $_.GetValue(2)
        ip_address = $_.GetValue(4)
    }
}
$cmd.Dispose()
$conn.Dispose()
Is there a better way to deal with the output from $cmd.ExecuteReader()? I'm finding it hard to get information on importing from CSV; most of the web deals with exporting to CSV from a SQL database using this provider. The logic here would be applied to a large CSV so that I don't need to read the whole thing in just to ignore most of the data.
I should have looked closer on TechNet for the OleDbDataReader Class. There are a few methods and properties that help understand the data returned from the SQL statement.
FieldCount: Gets the number of columns in the current row.
So if nothing else you know how many columns your rows have.
Item[Int32]: Gets the value of the specified column in its native format given the column ordinal.
Which I can use to pull back the data from each row. This appears to work the same as GetValue().
GetName(Int32): Gets the name of the specified column.
So if you don't know what the column is named this is what you can use to get it from a given index.
There are many other methods and some properties, but those are enough to shed light on what data is contained within a CSV if you are not sure (assuming you don't want to verify it manually beforehand). So, knowing that, a more dynamic way to get the same information would be...
$data | ForEach-Object{
    # Save the current row as its own object so that it can be used in other scopes
    $dataRow = $_
    # Blank hashtable that will be built into a "row" object
    $properties = @{}
    # For every field that exists we will add its name and value to the hashtable
    0..($dataRow.FieldCount - 1) | ForEach-Object{
        $properties.($dataRow.GetName($_)) = $dataRow.Item($_)
    }
    # Send the newly created object down the pipeline.
    [pscustomobject]$properties
}
$cmd.Dispose()
$conn.Dispose()
The only downside of this is that the columns will likely not come out in the same order as the originating CSV. That can be addressed by saving the column names in a separate variable and using a Select-Object at the end of the pipe, as in the sketch below. This answer was mostly about making sense of the column names and values returned.
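A rough sketch of that ordering fix, replacing the loop above (variable names here are just illustrative): grab the column names once from the first row, build the objects as before, then re-select in the original order after the pipeline finishes.
$columnNames = @()
$rows = $data | ForEach-Object{
    $dataRow = $_
    # Remember the original column order the first time through
    if ($columnNames.Count -eq 0) {
        $columnNames = 0..($dataRow.FieldCount - 1) | ForEach-Object { $dataRow.GetName($_) }
    }
    $properties = @{}
    0..($dataRow.FieldCount - 1) | ForEach-Object{
        $properties.($dataRow.GetName($_)) = $dataRow.Item($_)
    }
    [pscustomobject]$properties
}
# Select-Object puts the properties back into the CSV's original column order
$rows | Select-Object -Property $columnNames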
I have a PostgreSQL database. I had to extend an existing, big table with a few more columns.
Now I need to fill those columns. I thought I could create a .csv file (out of Excel/Calc) which contains the IDs / primary keys of the existing rows, plus the data for the new, empty fields. Is it possible to do so? If so, how?
I remember doing exactly this pretty easily using Microsoft SQL Server Management Studio, but for PostgreSQL I am using pgAdmin (though I am of course willing to switch tools if it would be helpful). I tried pgAdmin's import function, which uses PostgreSQL's COPY, but it seems COPY isn't suitable as it can only create whole new rows.
Edit: I guess I could write a script which loads the csv and iterates over the rows, using UPDATE. But I don't want to reinvent the wheel.
Edit2: I've found this question here on SO which provides an answer by using a temp table. I guess I will use it - although it's more of a workaround than an actual solution.
PostgreSQL can import data directly from CSV files with COPY statements; as you stated, however, this will only work for new rows.
Instead of creating a CSV file you could just generate the necessary SQL UPDATE statements.
Suppose this is the CSV file:
PK;ExtraCol1;ExtraCol2
1;"foo";42
4;"bar";21
Then just produce the following
UPDATE my_table SET ExtraCol1 = 'foo', ExtraCol2 = 42 WHERE PK = 1;
UPDATE my_table SET ExtraCol1 = 'bar', ExtraCol2 = 21 WHERE PK = 4;
You seem to be working under Windows, so I don't really know how to accomplish this there (probably with PowerShell; a rough sketch follows below), but under Unix you could easily generate the SQL from a CSV with tools like awk or sed. An editor with regular expression support would probably suffice too.
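Since PowerShell was mentioned, here is a rough sketch of that generation step; the input and output file names are made up, the delimiter and column names come from the example above, and ExtraCol1 is assumed to be text while ExtraCol2 is numeric:
# Read the CSV and emit one UPDATE statement per row
Import-Csv .\new_values.csv -Delimiter ';' | ForEach-Object {
    # Double any single quotes so the generated SQL stays valid
    $text = $_.ExtraCol1 -replace "'", "''"
    "UPDATE my_table SET ExtraCol1 = '$text', ExtraCol2 = $($_.ExtraCol2) WHERE PK = $($_.PK);"
} | Set-Content .\updates.sql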