I am just taking a CSV file and importing it directly into Cloud SQL through the GCP console import, and I am getting an unexpected error. Can you please advise what the issue could be? GCP doesn't show me what that unexpected error is.
sku name item
43900 Duracell - AAA Batteries (4-Pack) HardGood
48530 Duracell - AA 1.5V CopperTop Batteries (4-Pack) HardGood
127687 Duracell - AA Batteries (8-Pack) HardGood
150115 Energizer - MAX Batteries AA (4-Pack) HardGood
185230 Duracell - C Batteries (4-Pack) HardGood
I have written PySpark code to hit a REST API, extract the contents in XML format, and write them to Parquet in a data lake container.
I am trying to add logging functionality where I not only write out errors but also updates on the actions/processes we execute.
I am comparatively new to Spark and have been relying on online articles and samples. All of them explain error handling and logging through "1/0" examples and save logs in the default folder structure (not in an ADLS account/container/folder), which does not help at all. Most of the code is written in pure Python and doesn't run as-is.
Could I get some assistance with setting up the following:
Push errors to a log file under a designated folder sitting under a data lake storage account/container/folder hierarchy.
Catching REST-specific exceptions.
This is a sample of what I have written:
LogFilepath = "abfss://raw@.dfs.core.windows.net/Data/logging/data.log"
#LogFilepath2 = "adl://.azuredatalakestore.net/raw/Data/logging/data.log"
print(LogFilepath)
try:
    1/0
except Exception as e:
    print('My Error...' + str(e))
    with open(LogFilepath, "a") as f:
        f.write("An error occurred: {}\n".format(e))
I have tried both the ABFSS and ADL file paths with no luck. The log file is already available in the storage account/container/folder.
I have reproduced the above using the abfss path in the open() function, but it gave me the below error:
FileNotFoundError: [Errno 2] No such file or directory: 'abfss://synapsedata@rakeshgen2.dfs.core.windows.net/datalogs.logs'
As per this documentation, we can use open() on an ADLS file with a path like /synfs/{jobId}/mountpoint/{filename}.
For that, first we need to mount the ADLS.
Here I have mounted it using an ADLS linked service. You can mount either with a storage account access key or SAS, as per your requirement.
mssparkutils.fs.mount(
    "abfss://<container_name>@<storage_account_name>.dfs.core.windows.net",
    "/mountpoint",
    {"linkedService": "<ADLS linked service name>"}
)
Now use the below code to achieve your requirement.
from datetime import datetime

currentDateAndTime = datetime.now()
jobid = mssparkutils.env.getJobId()
LogFilepath = '/synfs/' + jobid + '/synapsedata/datalogs.log'
print(LogFilepath)
try:
    1/0
except Exception as e:
    print('My Error...' + str(e))
    with open(LogFilepath, "a") as f:
        f.write("Time : {} - Error : {}\n".format(currentDateAndTime, e))
Here I am writing the date and time along with the error, and there is no need to create the log file first; the above code will create the file and append the error.
If you want to generate logs daily, you can use date-stamped log file names, as in the sketch below.
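For example, a minimal sketch (assuming the mount above and the requests library; the API URL is a placeholder) that combines date-stamped log files with catching REST-specific exceptions:

from datetime import datetime
import requests

jobid = mssparkutils.env.getJobId()
# One log file per day, e.g. datalogs_2023-01-31.log
LogFilepath = '/synfs/' + jobid + '/synapsedata/datalogs_' + datetime.now().strftime('%Y-%m-%d') + '.log'

def log(message):
    # Appending creates the file on first use.
    with open(LogFilepath, "a") as f:
        f.write("Time : {} - {}\n".format(datetime.now(), message))

try:
    response = requests.get("https://example.com/api", timeout=30)  # placeholder URL
    response.raise_for_status()  # turn HTTP 4xx/5xx responses into exceptions
except requests.exceptions.Timeout as e:
    log("Request timed out: {}".format(e))
except requests.exceptions.HTTPError as e:
    log("HTTP error: {}".format(e))
except requests.exceptions.RequestException as e:
    log("Request failed: {}".format(e))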
My Execution:
Here I have executed it twice.
I "borrowed" the LPINFOX REXX program from this url: [http://www.longpelaexpertise.com/toolsLPinfoX.php]
When I run it "directly" (EX 'hlq.EXEC(LPINFOX)') it runs fine:
------------------------------------------------------
LPInfo: Information for z/OS ssssssss as of 18 Mar 2021
------------------------------------------------------
z/OS version: 02.04
Sysplex name: LOCAL
JES: JES2 z/OS 2.4 (Node nnnn)
Security Software: RACF
CEC: 3907-Z02 (IBM Z z14 ZR1)
CEC Serial: ssssss
CEC Capacity mmmm MSU
LPAR name: llll
LPAR Capacity mmm MSU
Not running under a z/VM image
But, if I insert the call into another exec, I get an RC -2 from the address LINKPGM call:
------------------------------------------------------
LPInfo: Information for z/OS ssssssss as of 18 Mar 2021
------------------------------------------------------
z/OS version: 02.04
Sysplex name: LOCAL
JES: JES2 z/OS 2.4 (Node N1)
Security Software: RACF
79 - Address Linkpgm 'IWMQVS QVS_Out'
+++ RC(-2) +++
CEC: -
CEC Serial:
LPAR name:
Not running under a z/VM image
I'm sure this has to do with the second level of REXX program running, but what can I do about the error (besides queueing up the execution of the second REXX)? I'm also stumped on where this RC is documented; my Google search for "REXX ADDRESS RC -2" comes up short.
Thanks,
Scott
PS(1), per the answer from @phunsoft:
Interesting. I didn't copy the code to my other REXX. I invoked LPINFOX from within another REXX: I have a hlq.LOGIN.EXEC that has an "EX 'hlq.LPINFOX.EXEC'" statement within it. When I reduce the first exec to "TEST1" (follows), it fails the same way:
/* REXX */
"EXECUTIL TS"
"EX 'FAGEN.LPINFOX.EXEC'"
exit 0
When I run TEST1, this is the output from the EXECUTIL from around the IWMQVS call:
When I run LPINFOX.EXEC directly from the command line, the output is the same, except the address LINKPGM IWMQVS works fine:
I can only surmise that there is some environmental difference when I run the exec "standalone" vs. when I run the exec from another exec.
PS(2), per question about replacing IWMQVS with IEFBR14 from phunsoft:
Changing the program to IEFBR14 doesn't change the result, RC=-2.
LINKPGM is a TSO/E REXX host command environment, so you need to search in the TSO/E REXX Reference. From that book:
Additionally, for the LINKMVS, ATTCHMVS, LINKPGM, and ATTCHPGM environments, the return code set in RC may be -2, which indicates that processing of the variables was not successful. Variable processing may have been unsuccessful because the host command environment could not:
o Perform variable substitution before linking to or attaching the program
o Update the variables after the program completed
Difficult to say what the problem is without seeing the code.
You may want to use REXX's trace feature to debug. Do you run this REXX from the TSO/E foreground? If so, you might run TSO EXECUTIL TS just before you start that REXX. It will then run as if trace ?i was specified as the first line of the code.
I've had a look at the LPINFOX EXEC and saw that the variable QVS_Out is set as follows just before linking to IWMQVS:
QVS_Outlen = 500 /* Output area length */
QVS_Outlenx = Right(x2c(d2x(QVS_Outlen)),4,d2c(0))
/* Get length as fullword */
QVS_Out = QVS_Outlenx || Copies('00'X,QVS_Outlen-4)
Did you do this also when you copied the call to your other REXX?
I have the following problem:
I am using the following command:
EXPORT TO "D:\ExportFiles\ACTIVATE_DICT.csv" OF DEL MODIFIED BY TIMESTAMPFORMAT="YYYY/MM/DD HH:MM:SS" STRIPLZEROS MESSAGES "D:\ExportFiles\FMessage.txt" SELECT * FROM DB2INST4.ACTIVATE_DICT;
In the Command Editor of the Control Center, this command successfully exported data from the ACTIVATE_DICT table to the CSV file ACTIVATE_DICT.csv.
But for a number of reasons, I need to execute this command in IBM Data Studio or DataGrip, and there it cannot be executed in this form.
Therefore, I read the manual and, based on it, wrote the following command:
CALL SYSPROC.ADMIN_CMD('EXPORT to /lotus/ExportFiles/ACTIVATE_DICT.csv OF DEL MODIFIED BY TIMESTAMPFORMAT="YYYY/MM/DD HH:MM:SS" STRIPLZEROS MESSAGES /lotus/ExportFiles/FMessage.txt SELECT * FROM DB2INST4.ACTIVATE_DICT');
Here is the message on the result of the command:
[2018-10-11 15:15:23] [ ][3107] There is at least one warning message in the message file.. SQLCODE=3107, SQLSTATE= , DRIVER=4.23.42
[2018-10-11 15:15:23] 1 row retrieved starting from 1 in 75 ms (execution: 29 ms, fetching: 46 ms)
And there is no ACTIVATE_DICT.csv file and no FMessage.txt file in the /lotus/ExportFiles/ directory.
Question: how do I execute this command correctly? Maybe I'm doing something wrong?
SQLCODE 3107 is a warning message:
SQL3107W At least one warning message was encountered during LOAD processing.
Explanation
You can load data into a database from a file, tape, or named pipe using the LOAD command. You can specify that any warnings or errors from the LOAD processing be printed to a message file. If no message file is specified, the warnings or errors are printed to standard out (unless the database manager instance is configured as a partitioned-database environment.)
It tells you to read the message log in the message file you specified, in your case /lotus/ExportFiles/FMessage.txt.
Please read the file to see what is logged, and if you need help understanding it, please post its content.
This message is returned when at least one warning was received during processing. If a message file is being used, the warnings and errors will be printed there.
This warning does not affect processing.
User response
Review the message file warning.
EXPORT command using the ADMIN_CMD procedure
See the use of the 'MESSAGES ON SERVER' clause, and how to get these messages using the result set returned by this routine; a sketch follows below.
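As a hedged illustration of that pattern (assuming the ibm_db Python driver, placeholder connection details, and the documented result-set columns ROWS_EXPORTED, MSG_RETRIEVAL, and MSG_REMOVAL):

import ibm_db

# Placeholder connection string; substitute real host, database, and credentials.
conn = ibm_db.connect(
    "DATABASE=testdb;HOSTNAME=dbhost;PORT=50000;PROTOCOL=TCPIP;UID=db2inst4;PWD=secret",
    "", "")

# MESSAGES ON SERVER keeps the message file on the server and tells you how to read it.
export_cmd = (
    "EXPORT TO /lotus/ExportFiles/ACTIVATE_DICT.csv OF DEL "
    "MODIFIED BY TIMESTAMPFORMAT=\"YYYY/MM/DD HH:MM:SS\" STRIPLZEROS "
    "MESSAGES ON SERVER "
    "SELECT * FROM DB2INST4.ACTIVATE_DICT")

stmt = ibm_db.exec_immediate(
    conn, "CALL SYSPROC.ADMIN_CMD('{}')".format(export_cmd.replace("'", "''")))

# The routine returns one row with the export result plus ready-made SQL
# statements for retrieving and removing the server-side messages.
row = ibm_db.fetch_assoc(stmt)
print(row["ROWS_EXPORTED"])
print(row["MSG_RETRIEVAL"])
print(row["MSG_REMOVAL"])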
I need to upload data with dates in the 'MMDDYYYY' format.
This is the current code I am using to send it via psql:
SET BaseFolder=C:\
psql -h hostname -d database -c "\copy test_table(id_test,
colum_test,columndate DATEFORMAT 'MMDDYYYY')
from '%BaseFolder%\test_table.csv' with delimiter ',' CSV HEADER;"
Here test_table is the table in the Postgres DB:
Id_test: float8
Column_test: float8
columndate: timestamp
id_test colum_test colum_date
94 0.3306 12312017
16 0.3039 12312017
25 0.5377 12312017
88 0.6461 12312017
I am getting the following error when I run the above query in CMD on Windows 10:
ERROR: date/time field value out of range: "12312017"
HINT: Perhaps you need a different "datestyle" setting.
CONTEXT: COPY test_table, line 1, column columndate : "12312017"
The DATEFORMAT applies to the whole COPY command, not a single field.
I got it to work as follows...
Your COPY command suggests that the data is comma-separated, so I used this input data and stored it in an Amazon S3 bucket:
id_test,colum_test,colum_date
94,0.3306,12312017
16,0.3039,12312017
25,0.5377,12312017
88,0.6461,12312017
I created a table:
CREATE TABLE foo (
    foo_id BIGINT,
    foo_value DECIMAL(4,4),
    foo_date DATE
)
Then loaded the data:
COPY foo (foo_id, foo_value, foo_date)
FROM 's3://my-bucket/foo.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/Redshift-Role'
CSV
IGNOREHEADER 1
DATEFORMAT 'MMDDYYYY'
Please note that the recommended way to load data into Amazon Redshift is from files stored in Amazon S3. (I haven't tried using the native psql copy command with Redshift, and would recommend against it — particularly for large data files. You certainly can't mix commands from the Redshift COPY command into the psql Copy command.)
Then, I ran SELECT * FROM foo and it returned:
16 0.3039 2017-12-31
88 0.6461 2017-12-31
94 0.3306 2017-12-31
25 0.5377 2017-12-31
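Regarding the note above about staging files in Amazon S3 first, a minimal boto3 sketch (the bucket name is hypothetical; credentials are assumed to come from the environment):

import boto3

s3 = boto3.client("s3")
# Upload the local CSV so that COPY can read it from s3://my-bucket/foo.csv
s3.upload_file("foo.csv", "my-bucket", "foo.csv")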
That is a horrible format for dates. Don't break your date type; convert your data to a saner format (a preprocessing sketch follows the example below):
=> select to_date('12312017', 'MMDDYYYY');
to_date
------------
2017-12-31
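If you are loading into plain Postgres with \copy rather than Redshift, one option is to rewrite the date column before loading; a minimal sketch, with hypothetical file names:

import csv
from datetime import datetime

with open("test_table.csv", newline="") as src, \
     open("test_table_iso.csv", "w", newline="") as dst:
    reader = csv.reader(src)
    writer = csv.writer(dst)
    writer.writerow(next(reader))  # copy the header row unchanged
    for id_test, colum_test, colum_date in reader:
        iso = datetime.strptime(colum_date, "%m%d%Y").date().isoformat()
        writer.writerow([id_test, colum_test, iso])  # 12312017 -> 2017-12-31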
As the title suggests: is it possible to, say, install a list of packages that are stored in a Postgres database table?
Example:
testing=# select * from package;
name
-------
rsync
lftp
curl
(3 rows)
Then use these values in a state to install the appropriate packages, something like:
install_network_packages:
  pkg.installed:
    - pkgs:
      - 'select * from package'
I currently have this state that returns the values of the table, but I do not know where to go from there:
testing:
  module.run:
    - name: postgres.psql_query
    - query: 'select * from public.package'
    - maintenance_db: testing
Output:
minion1:
ID: testing
Function: module.run
Name: postgres.psql_query
Result: True
Comment: Module function postgres.psql_query executed
Started: 15:25:23.161788
Duration: 79.784 ms
Changes:
----------
ret:
|_
----------
name:
rsync
|_
----------
name:
lftp
|_
----------
name:
curl
Summary for minion1
Succeeded: 1 (changed=1)
Failed: 0
Total states run: 1
Total run time: 79.784 ms
Good question! The salt component that typically stores this sort of data (what packages should be installed, what role a server has, etc.) is called the "pillar".
Typically, pillar data is stored internally in the salt master, but salt also has a (rather poorly documented) feature called "external pillars" that allows you to query and use other data sources (Postgres, an arbitrary command-line call, etc.) the same way you use built-in pillars. It seems that in the development version of salt (Nitrogen) there's a built-in Postgres external pillar. I imagine you could use it on the release version by just copy-pasting the code into your external modules directory (I've done this successfully in the past with unreleased salt features). A minimal custom external pillar sketch follows below.
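For illustration only (a sketch, not the built-in module: psycopg2, the connection settings, and the network_packages key are all assumptions):

import psycopg2

def ext_pillar(minion_id, pillar, *args, **kwargs):
    """Expose the package table as pillar data for every minion."""
    # Hypothetical connection settings; adjust to your master's environment.
    conn = psycopg2.connect(host="localhost", dbname="testing", user="salt")
    try:
        with conn.cursor() as cur:
            cur.execute("SELECT name FROM public.package")
            packages = [row[0] for row in cur.fetchall()]
    finally:
        conn.close()
    return {"network_packages": packages}

A state could then install them via pkg.installed with pkgs set from salt['pillar.get']('network_packages', []).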
Best of luck!