Loading CSV into PostgreSQL using the COPY FROM command

I'm trying to load data into PostgreSQL from CSV. During this process, I came across a certain issue.
Here is my code:
stmt1 = conPost.createStatement();
ResultSet rs = stmt1.executeQuery(sqlQuery);
ResultSetMetaData rsmd = rs.getMetaData();
int columnCount = rsmd.getColumnCount();
for (int i = 1; i <= columnCount; i++) {
    // collect the character-type columns so FORCE NULL can be applied to them
    String typeName = rsmd.getColumnTypeName(i);
    if (typeName.equals("varchar") || typeName.equals("text") || typeName.equals("char")) {
        header.add(rsmd.getColumnName(i));
    }
}
if (header.isEmpty()) {
    copyQuery = "COPY " + this.tableName + "(" + columns + ") FROM STDIN WITH DELIMITER '" + this.delimit + "' CSV HEADER";
} else {
    strListString = header.toString();
    strListString = strListString.replaceAll("\\[|\\]", "");
    copyQuery = "COPY " + this.tableName + "(" + columns + ") FROM STDIN WITH DELIMITER '" + this.delimit + "' CSV HEADER FORCE NULL " + strListString;
}
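For a hypothetical table mytable with character-type columns name and city (assuming columns holds id,name,city, so header ends up as [name, city]), this builds a statement along the lines of:
COPY mytable(id,name,city) FROM STDIN WITH DELIMITER ',' CSV HEADER FORCE NULL name, city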
Previously, I came across an issue where NULL values in the CSV appeared as "" for varchar and text columns. So I used the FORCE NULL option on the varchar and text columns so that "" is inserted as NULL. Now it is working fine.
Now the new issue is: if the text data in my CSV looks like
"Hi, "vignesh". How do you do"
then after loading into Postgres it becomes
Hi, vignesh. How do you do
In other words, if the text data in the CSV contains double quotes, they are not inserted as double quotes in Postgres.
Am I missing something? How can we overcome this? Thanks in advance.

I would state that the problem is in your source string.
"Hi, "vignesh". How do you do"
Contains " also in the middle of the string, making it parsable incorrectly.
I would try to alter the string to be something similar to
"Hi, ""vignesh"". How do you do"
I tested the case with a file containing the line
1,"Hi, ""vignesh"". How do you do"
And a table made of
create table testtable (id int, text varchar);
When executing
copy testtable from 'data.csv' delimiter ',' CSV;
I end up with
id | text
----+------------------------------
1 | Hi, "vignesh". How do you do
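If you control how that CSV is produced, the fix is mechanical: double every embedded quote and wrap the field in quotes, which is the standard CSV (RFC 4180) escaping that COPY ... CSV expects. A minimal sketch in Java, using a hypothetical helper method:
// Hypothetical helper: escapes a single field for a CSV file that will be
// loaded with COPY ... CSV. Embedded quotes are doubled, then the whole
// field is wrapped in quotes.
static String quoteCsvField(String value) {
    return "\"" + value.replace("\"", "\"\"") + "\"";
}
// quoteCsvField("Hi, \"vignesh\". How do you do")
// produces: "Hi, ""vignesh"". How do you do"
Written that way, COPY ... CSV stores the inner quotes literally, as the test above shows.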

Related

Salesforce trigger - not able to understand

Below is code written by my colleague, who doesn't work at the firm anymore. I am inserting records into an object with Data Loader and I can see a success message, but I do not see any records in my object. I am not able to understand what the trigger below is doing. Please help me understand, as I am new to Salesforce.
trigger DataLoggingTrigger on QMBDataLogging__c (after insert) {
    Map<String, Schema.RecordTypeInfo> recordTypeInfo = Schema.SObjectType.QMB_Initial_Letter__c.getRecordTypeInfosByName();
    List<QMBDataLogging__c> logList = (List<QMBDataLogging__c>) Trigger.new;
    List<SObject> sobjList = (List<SObject>) Type.forName('List<' + 'QMB_Initial_Letter__c' + '>').newInstance();
    Map<String, QMBLetteTypeToVfPage__c> QMBLetteTypeToVfPage = QMBLetteTypeToVfPage__c.getAll();
    Map<String, QMBLetteTypeToVfPage__c> mapofLetterTypeRec = new Map<String, QMBLetteTypeToVfPage__c>();
    Set<Id> processdIds = new Set<Id>();
    for (String key : QMBLetteTypeToVfPage.keyset()) {
        if (!mapofLetterTypeRec.containsKey(key)) mapofLetterTypeRec.put(QMBLetteTypeToVfPage.get(key).Letter_Type__c, QMBLetteTypeToVfPage.get(key));
    }
    for (QMBDataLogging__c log : logList) {
        SObject logRecord = (SObject) log;
        SObject QMBLetterRecord = new QMB_Initial_Letter__c();
        if (mapofLetterTypeRec.containsKey(log.Field1__c)) {
            String recordTypeId = recordTypeInfo.get(mapofLetterTypeRec.get(log.Field1__c).RecordType__c).isAvailable() ? recordTypeInfo.get(mapofLetterTypeRec.get(log.Field1__c).RecordType__c).getRecordTypeId() : recordTypeInfo.get('Master').getRecordTypeId();
            String fieldApiNames = mapofLetterTypeRec.containsKey(log.Field1__c) ? mapofLetterTypeRec.get(log.Field1__c).FieldAPINames__c : '';
            //QMBLetterRecord.put('Letter_Type__c',log.Name);
            QMBLetterRecord.put('RecordTypeId', recordTypeId);
            processdIds.add(log.Id);
            if (String.isNotBlank(fieldApiNames) && fieldApiNames.contains(',')) {
                Integer i = 1;
                for (String fieldApiName : fieldApiNames.split(',')) {
                    String logFieldApiName = 'Field' + i + '__c';
                    fieldApiName = fieldApiName.trim();
                    System.debug('fieldApiName==' + fieldApiName);
                    Schema.DisplayType fielddataType = getFieldType('QMB_Initial_Letter__c', fieldApiName);
                    if (fielddataType == Schema.DisplayType.Date) {
                        Date dateValue = Date.parse(String.valueOf(logRecord.get(logFieldApiName)));
                        QMBLetterRecord.put(fieldApiName, dateValue);
                    } else if (fielddataType == Schema.DisplayType.DOUBLE) {
                        String value = (String) logRecord.get(logFieldApiName);
                        Double dec = Double.valueOf(value.replace(',', ''));
                        QMBLetterRecord.put(fieldApiName, dec);
                    } else if (fielddataType == Schema.DisplayType.CURRENCY) {
                        Decimal decimalValue = Decimal.valueOf((String) logRecord.get(logFieldApiName));
                        QMBLetterRecord.put(fieldApiName, decimalValue);
                    } else if (fielddataType == Schema.DisplayType.INTEGER) {
                        String value = (String) logRecord.get(logFieldApiName);
                        Integer integerValue = Integer.valueOf(value.replace(',', ''));
                        QMBLetterRecord.put(fieldApiName, integerValue);
                    } else if (fielddataType == Schema.DisplayType.DATETIME) {
                        DateTime dateTimeValue = DateTime.valueOf(logRecord.get(logFieldApiName));
                        QMBLetterRecord.put(fieldApiName, dateTimeValue);
                    } else {
                        QMBLetterRecord.put(fieldApiName, logRecord.get(logFieldApiName));
                    }
                    i++;
                }
            }
        }
        sobjList.add(QMBLetterRecord);
    }
    if (!sobjList.isEmpty()) {
        insert sobjList;
        if (!processdIds.isEmpty()) DeleteDoAsLoggingRecords.deleteTheProcessRecords(processdIds);
    }

    public static Schema.DisplayType getFieldType(String objectName, String fieldName) {
        SObjectType r = ((SObject) (Type.forName('Schema.' + objectName).newInstance())).getSObjectType();
        DescribeSObjectResult d = r.getDescribe();
        return (d.fields.getMap().get(fieldName).getDescribe().getType());
    }
}
You might be looking in the wrong place. Check if there's a unit test written for this thing (there should be one, especially if it's deployed to production); it should help you understand how it's supposed to be used.
You're inserting records of QMBDataLogging__c, but then it seems they're immediately deleted in DeleteDoAsLoggingRecords.deleteTheProcessRecords(processdIds), whether or not whatever this thing was supposed to do succeeds.
This seems to be some poor man's CSV parser or generic "upload anything"... that takes data stored in QMBDataLogging__c and creates QMB_Initial_Letter__c out of it.
QMBLetteTypeToVfPage__c.getAll() suggests you could go to Setup -> Custom Settings, try to find this thing and examine it. Maybe it has some values in production but is empty in your sandbox, and that's why essentially nothing works? Or maybe some values that are there are outdated?
There's a check whether what you upload into Field1__c can be matched to what's in that custom setting. I guess you load some kind of subtype of your QMB_Initial_Letter__c in there. The Record Type name and the list of fields to read from your log record are also fetched from the custom setting based on that match.
Then this thing takes what you pasted, looks at the list of fields from the custom setting, and parses it.
Let's say the custom setting contains something like
Name = XYZ, FieldAPINames__c = 'Name,SomePicklist__c,SomeDate__c,IsActive__c'
This thing will look at the first record you inserted; let's say you have a CSV like this:
Field1__c,Field2__c,Field3__c,Field4__c
XYZ,Closed,2022-09-15,true
This thing will try to parse and map it, so eventually you create a record that "normal" Apex code would express as
new QMB_Initial_Letter__c(
    Name = 'XYZ',
    SomePicklist__c = 'Closed',
    SomeDate__c = Date.parse('2022-09-15'),
    IsActive__c = true
);
It's pretty fragile, as you probably already know. And because parsing CSV is an art, I expect it to absolutely crash and burn when text with commas in it shows up (some text,"text, with commas in it, should be quoted",more text).
In theory an admin can change the mapping in Setup, but then they'd need to add the new field to the loaded file anyway. Overcomplicated. I guess somebody did it to solve an issue with Record Type Ids, but there are better ways to achieve that and still have a normal CSV file with normal columns and strong type matching, not just chucking everything in as strings.
In theory this lets you have "jagged" CSV files (row 1 having 5 fields, row 2 having a different record type and 17 fields? No problem).
Your call whether it's salvageable or you'd rather ditch it and try normal loading of QMB_Initial_Letter__c records (get back to your business people and ask for requirements?). If you do have a variable number of columns at the source, you'd need to standardise it or group the data so only one "type" of record (well, whatever's in that Field1__c) goes into each file.

psql \copy command using Scala ProcessBuilder - syntax error

I need to insert 1M rows of data into a table in Postgres, so I'm using Postgres's COPY from CSV command. Since COPY needs a superuser account to work, I'm using \copy instead.
Here's my Scala code:
val execCommand = Seq("psql", s"postgresql://$user:$pwd#$host:5432/$db", "-c", s"""\"\\copy $fullTableName (${columnsList}) from '${file.getAbsolutePath}' with delimiter ',' csv HEADER;\" """)
val result = execCommand.!!
println(result)
The command looks like this and works when run from my terminal:
psql postgresql://user:password#host:5432/db -c "\copy tableName (column1, column2, column3) from 'file_to_load.csv' with delimiter ',' csv HEADER;"
but when my code is run, it throws this error:
syntax error at or near ""\copy tableName (column1, column2, column3) with delimiter ',' csv HEADER;""
If I replace the command with a SELECT query, it works fine. Can someone help me identify the error in the \copy command? The syntax looks correct to me; maybe I'm missing something. I'm also new to Scala's process builder, so I don't know whether I need to fix the command, and if I do, how I should change it. Thanks.
There is probably no need to run a psql command. You're usually better off using the corresponding JDBC API:
https://jdbc.postgresql.org/documentation/publicapi/org/postgresql/copy/CopyManager.html
Posting how I fixed it using CopyManager. For documentation on this, see Matthias Berndt's comment.
object PgDbHandler {

  def getConnection(db: ConnectionName, userName: String = user, pwd: String = password): Connection = {
    Class.forName("org.postgresql.Driver")
    DriverManager.getConnection(s"jdbc:postgresql://${db.sqlDns}/${db.databaseName}?user=$userName&password=$pwd&sslmode=require")
  }

  def copyFileToPg(file: File, fullTableName: String, columnsList: List[String]): Long = {
    logger.info(s"Writing $file to postgres")
    val conn = getConnection(db, user, pwd)
    try {
      val rowsInserted = new CopyManager(conn.asInstanceOf[BaseConnection])
        .copyIn(s"COPY $fullTableName (${columnsList.mkString(",")}) FROM STDIN (DELIMITER ',', FORMAT csv, HEADER true)",
          new FileInputStream(file.getAbsolutePath))
      logger.info(s"$rowsInserted row(s) inserted for file $file")
      rowsInserted
    }
    finally
      conn.close
  }
}

How to insert a placeholder in a JSON key:value? (Flutter)

I'm porting my Swift app to Flutter, and to localise it I'm following https://github.com/billylev/flutter_localizations, but I can't see how to insert a placeholder for a value in the translated strings.
Basically the guide uses
String text(String key) {
  return _localisedValues[key] ?? "$key not found";
}
to get the corresponding key:value pair from a .json file as
{
"Shop": "Negozio",
}
I just pass it to the Text widget as:
Text(AppLocalizations.instance.text('Shop')).
How do I modify text to insert one or more placeholders, and how would the .json be constructed?
Say, for the value "User": "User", I'd like to insert a value after the translation. I can simply use string concatenation and add the value, as in Text(AppLocalizations.instance.text('User') + ' ${widget.user.name}'), but if I need to insert a value in the middle of the translated sentence, e.g. a message, I don't see how to accomplish it.
I need it to make localised versions of incoming push notifications, and they have args.
In Swift I have it like this:
"ORDER_RECEIVED_PUSH_TITLE" = "Order number: %#";
"ORDER_RECEIVED_PUSH_SUBTITLE" = "Shop: %#";
"ORDER_RECEIVED_PUSH_BODY" = "Thank you %#! We received your order and we'll let you know when we start preparing it and when it's ready. Bye";
Any suggestions on how to accomplish that in Flutter?
Many thanks
I was pointed to this package, https://pub.dev/packages/sprintf#-installing-tab-, and it works just as I needed. sprintf just lets you specify one or more placeholders in a String and pass an array of args.
See https://developermemos.com/posts/using-sprintf-flutter-dart for more info, although that is pretty much it. So, for example,
"ORDER_RECEIVED_PUSH_TITLE" = "Order number: %#";
in the .json file would be :
{
  "ORDER_RECEIVED_PUSH_TITLE": "Order number: %s"
}
and using it would be
String orderNumber = 'some uuid';
Text(sprintf(AppLocalizations.instance.text('ORDER_RECEIVED_PUSH_TITLE'), [orderNumber]));
Hope this helps others.

How to use a blank delimiter when exporting CSV

I would like to export a Crystal Report to CSV but without a delimiter character around fields.
Here is a snippet of my code:
ExportOptions exportOpts = new ExportOptions();
exportOpts.ExportDestinationType = ExportDestinationType.DiskFile;
exportOpts.ExportFormatType = ExportFormatType.CharacterSeparatedValues;
DiskFileDestinationOptions DiskFileDestinationOpt = ExportOptions.CreateDiskFileDestinationOptions();
DiskFileDestinationOpt.DiskFileName = filename;
exportOpts.ExportDestinationOptions = DiskFileDestinationOpt;
CharacterSeparatedValuesFormatOptions csvOptions = new CharacterSeparatedValuesFormatOptions();
csvOptions.Delimiter = "";
csvOptions.SeparatorText = "";
exportOpts.ExportFormatOptions = csvOptions;
crystalReportDoc.Export(exportOpts);
The problem is that whenever I use an empty string for the Delimiter property, Crystal Reports will use the default double-quote character in the resulting CSV file.
Can someone please advise on how to export a CSV document with a blank delimiter?
If you use a blank delimiter, it's not a CSV (comma-separated values) file. You're probably looking for more of a "Text" file type.

Change date column to integer

I have a large csv file as below:
DATE status code value value2
2014-12-13 Shipped 105732491-20091002165230 0.000803398 0.702892835
2014-12-14 Shipped 105732491-20091002165231 0.012925206 1.93748834
2014-12-15 Shipped 105732491-20091002165232 0.000191278 0.004772389
2014-12-16 Shipped 105732491-20091002165233 0.007493046 0.44883348
2014-12-17 Shipped 105732491-20091002165234 0.022015049 3.081006137
2014-12-18 Shipped 105732491-20091002165235 0.001894693 0.227268466
2014-12-19 Shipped 105732491-20091002165236 0.000312871 0.003113062
2014-12-20 Shipped 105732491-20091002165237 0.001754068 0.105016053
2014-12-21 Shipped 105732491-20091002165238 0.009773315 0.585910214
:
:
What I need to do is remove the header and change the date format to an integer yyyymmdd (e.g. 20141217).
I am using opencsv to read and write the file.
Is there a way I can change all the dates at once without parsing them one by one?
Below is my code to remove the header and create a new file:
void formatCsvFile(String fileToChange) throws Exception {
    CSVReader reader = new CSVReader(new FileReader(new File(fileToChange)), CSVParser.DEFAULT_SEPARATOR, CSVParser.NULL_CHARACTER, CSVParser.NULL_CHARACTER, 1)
    info "Read all rows at once"
    List<String[]> allRows = reader.readAll();
    CSVWriter writer = new CSVWriter(new FileWriter(fileToChange), CSVWriter.DEFAULT_SEPARATOR, CSVWriter.NO_QUOTE_CHARACTER)
    info "Write all rows at once"
    writer.writeAll(allRows)
    writer.close()
}
Please can someone help?
Thanks
You don't need to parse the dates, but you do need to process each line in the file and convert the data on each line you want to convert. Java/Groovy doesn't have anything like awk where you can work with file data as columns, for example, the first 10 "columns" (characters usually) in every line in a file. Java/Groovy only deals with "rows" of data in a file, not "columns".
You could try something like this: (in Groovy)
reader.eachLine { String theLine ->
    int idx = theLine.indexOf(' ')
    String oldDate = theLine.substring(0, idx)
    String newDate = oldDate.replaceAll('-', '')
    String newLine = newDate + theLine.substring(idx)
    writer.writeLine(newLine)
}
Edit:
If your CSVReader class is not derived from File, then you can't use Groovy's eachLine method on it. And if the CSVReader class's readAll() method really returns a List of String arrays, then the above code could change to this:
allRows.each { String[] theLine ->
    theLine[0] = theLine[0].replaceAll('-', '')  // the date
    writer.writeNext(theLine)                    // the whole row, date already fixed
}
Ignore the first line (the header):
List<String[]> allRows = reader.readAll()[1..-1];
and replace the '-' in the dates by editing the first element of each row:
allRows = allRows.collect { String[] row ->
    ([row[0].replace('-', '')] + row[1..-1]) as String[]  // the date, then the rest
}
I don't know what you mean by "all dates at once"; as far as I can see, they can only be iterated.
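Putting the two suggestions together, here is a minimal sketch in plain Java with opencsv that skips the header and rewrites the date column in one pass (it assumes a comma-separated file with the date in the first column; the class, method and file names are only placeholders):
import com.opencsv.CSVReader;
import com.opencsv.CSVWriter;
import java.io.FileReader;
import java.io.FileWriter;
import java.util.List;

public class CsvDateFixer {

    // Reads the source file, drops the header row, rewrites the first column
    // from yyyy-MM-dd to yyyymmdd, and writes the result to a new file.
    static void formatCsvFile(String inputFile, String outputFile) throws Exception {
        try (CSVReader reader = new CSVReader(new FileReader(inputFile));
             CSVWriter writer = new CSVWriter(new FileWriter(outputFile),
                     CSVWriter.DEFAULT_SEPARATOR,
                     CSVWriter.NO_QUOTE_CHARACTER,
                     CSVWriter.DEFAULT_ESCAPE_CHARACTER,
                     CSVWriter.DEFAULT_LINE_END)) {
            List<String[]> rows = reader.readAll();
            // start at index 1 to skip the header row
            for (int i = 1; i < rows.size(); i++) {
                String[] row = rows.get(i);
                row[0] = row[0].replace("-", "");  // 2014-12-17 -> 20141217
                writer.writeNext(row);
            }
        }
    }
}
Each date still gets touched once, but there is no date parsing involved, only a string replace.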