In Talend, I am trying to load a date from a CSV file, where it is stored as a string in mm/dd/yyyy format (e.g. 1/14/2012), into a MySQL database. Whenever I run the job I get an error that a default value is needed. I tried Talend's parsing and staging options, but I still get this error. Please help me out.
I am using a DataFrame to write into the database through a JDBC connection, but the data is not getting populated and I am not getting any error either.
Any direction on where I should start debugging?
Code line being used:
df.repartition(8).write.mode('append').jdbc(url, table, properties=properties)
I have tried the Python lib psycopg, but it is not working with this JDBC connection.
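For reference, here is a minimal sketch of what such a JDBC write usually looks like in PySpark, assuming a PostgreSQL target given the psycopg mention. The URL, table name, and credentials are placeholders, and the explicit driver entry is an assumption about what might be missing, not something taken from the job above:

from pyspark.sql import SparkSession

# Build a session; the PostgreSQL JDBC driver jar must be on the classpath,
# e.g. via spark.jars or --jars when submitting the job.
spark = SparkSession.builder.appName("jdbc-write-example").getOrCreate()

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])

url = "jdbc:postgresql://dbhost:5432/mydb"   # placeholder connection URL
properties = {
    "user": "myuser",                        # placeholder credentials
    "password": "mypassword",
    "driver": "org.postgresql.Driver",       # explicit driver class; often needed
}

# Pass properties by keyword so it is not mistaken for the positional mode argument.
df.repartition(8).write.mode("append").jdbc(url, table="target_table", properties=properties)

If a call along these lines still writes nothing without raising an error, checking the executor logs and the database's own logs for rejected connections or transactions is a reasonable next step.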
I have created a visual job in AWS Glue where I extract data from Snowflake, and my target is a PostgreSQL database in AWS.
I have been able to connect to both Snowflake and Postgres, and I can preview data from both.
I have also been able to get data from Snowflake, write it to S3 as CSV, and then take that CSV and upload it to Postgres.
However, when I try to get data from Snowflake and push it directly to Postgres, I get the below error:
o110.pyWriteDynamicFrame. null
So this means you can get the data from Snowflake into a DataFrame, but the write from that DataFrame to Postgres is failing.
You need to check the AWS Glue logs to better understand why the write to Postgres is failing.
Please also check that you have the right version of the JDBC driver jars (needed by Postgres) compatible with the Scala/Spark version on the AWS Glue side.
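For comparison, a minimal sketch of what the write step can look like in a Glue (PySpark) script; the connection name, table, and frame here are placeholders, not taken from the visual job above:

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from pyspark.context import SparkContext

# Minimal Glue setup; in a visual job this boilerplate is generated for you.
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Stand-in for the DynamicFrame that the job reads from Snowflake.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])
snowflake_frame = DynamicFrame.fromDF(df, glue_context, "snowflake_frame")

# 'my_postgres_connection' is a placeholder Glue catalog connection for the target database.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=snowflake_frame,
    catalog_connection="my_postgres_connection",
    connection_options={"dbtable": "public.target_table", "database": "mydb"},
    transformation_ctx="write_postgres",
)

When a call like this fails, the CloudWatch driver and executor logs usually contain the underlying PostgreSQL or driver error hidden behind the generic "pyWriteDynamicFrame. null" message.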
I'm new to Postgres, so this problem is probably a relatively easy one for someone else. However, I have spent many frustrating hours trying to figure out the solution. I have an Access database of metadata that must be kept updated for sending records to other groups. I also have a PostgreSQL database (managed through pgAdmin) which has these same metadata tables. Currently the tables in the Postgres database get updated manually by exporting the Access tables as Excel files and then importing them into the SQL tables. It's not the most efficient process, and it could lead to errors in the SQL database if someone forgets to check that they are using the most recent data from Access before running any queries. So I would like to integrate some of the tables from my Access database with my Postgres database.
Originally I tried just installing drivers to export the Access tables directly to Postgres, which worked, but not in the way I wanted, since it just brings in a static table that I would still need to update manually. From my understanding, I can create a server connection in Postgres to Access, and that would then bring in updated data through a foreign data wrapper.
I tried to use ogr_fdw.
CREATE EXTENSION ogr_fdw;
When I try:
CREATE SERVER metadata
FOREIGN DATA WRAPPER ogr_fdw
OPTIONS (
datasource 'H:\Databases\20170712.accdb',
format 'ODBC' );
I receive: ERROR: unable to connect to data source "H:\Databases\20170712.accdb"
SQL state: HV00D
When I try:
CREATE SERVER metadata
FOREIGN DATA WRAPPER ogr_fdw
OPTIONS (
datasource 'H:\Databases\20170712.accdb',
format 'ACCDB' );
I receive: ERROR: unable to find format "ADDCB"
HINT: See the formats list at http://www.gdal.org/ogr_formats.html.
I also tried MDB and received the same error. However, MDB is the format code given on that page, but the page says it needs a JDK/JRE to compile, and I'm not really sure whether that is another type of driver I would need or what it is.
When I try:
CREATE SERVER metadata
FOREIGN DATA WRAPPER ogr_fdw
OPTIONS (
datasource 'H:\Databases\20170712.mdb',
format 'ODBC' );
I receive: ERROR: unable to connect to data source "H:\Databases\20170712.mdb"
SQL state: HV00D
Hint: Unable to initialize ODBC connection to DSN for DRIVER=Microsoft Access Driver (*.mdb);DBQ=H:\Databases\20170712.mdb,
[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
However, after looking at the GitHub help page for ogr_fdw, I thought it didn't need ODBC or any special drivers: https://github.com/pramsey/pgsql-ogr-fdw/blob/master/FAQ.md.
A lot of this is probably due to my limited knowledge of the terminology when I'm reading through this material. Also, my Access database is an .accdb file, but since that wasn't working I experimented with mdb and ODBC as the "format" too. If anyone has any suggestions I would greatly appreciate it.
Thanks!
I have been trying to migrate my Parse data to a localhost MongoDB, but to no avail. There are a total of 12 steps, as mentioned at https://parse.com/migration#database
I am currently still at step 1 and have encountered some difficulties. I managed to set up MongoDB on my computer (localhost). Then I went to my "app settings" in Parse to start the data migration. Parse wanted me to paste the MongoDB connection URL, which I entered as "mongodb://localhost/". However, there was an error: "no reachable servers". On my localhost, I am running MongoDB from my terminal.
Any kind advice on this? This is my first time doing a data migration and trying out MongoDB. Any help will be greatly appreciated!
Cheers
In your Parse dashboard, go to App settings -> General. On this page you will find the "Export app data" button. Click it and Parse will send you an email with the csv database data; use it to import your data into your local database (use rockmongo, for example, or mongoimport).
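If you prefer a script over rockmongo or mongoimport, here is a rough sketch of loading one exported file into a local MongoDB with pymongo. The file, database, and collection names are placeholders, and it assumes the export is a CSV file with a header row, as described above:

import csv
from pymongo import MongoClient

# Connect to the local MongoDB instance (default port 27017).
client = MongoClient("mongodb://localhost:27017/")
collection = client["parse_migration"]["my_class"]   # placeholder database/collection names

# 'export.csv' is a placeholder for one of the exported data files.
with open("export.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))   # one dict per row, keyed by the header columns

if rows:
    collection.insert_many(rows)
print("Imported", len(rows), "documents")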
I have 50 txt files on Windows and I would like to insert their data into a single table on Redshift.
I created the basic table structure, and now I'm having issues with inserting the data. I tried using the COPY command from SQL Workbench/J, but it didn't work out.
Here's the command:
copy feed
from 'F:\Data\feed\feed1.txt'
credentials 'aws_access_key_id=<access>;aws_secret_access_key=<key>'
Here's the error:
-----------------------------------------------
error: CREDENTIALS argument is not supported when loading from file system
code: 8001
context:
query: 0
location: xen_load_unload.cpp:333
process: padbmaster [pid=1970]
-----------------------------------------------;
Upon removing the Credentials argument, here's the error I get:
[Amazon](500310) Invalid operation: LOAD source is not supported. (Hint: only S3 or DynamoDB or EMR based load is allowed);
I'm not a UNIX user so I don't really know how this should be done. Any help in this regard would be appreciated.
#patthebug is correct in that Redshift cannot see your local Windows drive; you must push the data into an S3 bucket. There are some additional sources you can use per http://docs.aws.amazon.com/redshift/latest/dg/t_Loading_tables_with_the_COPY_command.html, but they seem outside the context you're working with. I suggest you get a copy of Cloudberry Explorer (http://www.cloudberrylab.com/free-amazon-s3-explorer-cloudfront-IAM.aspx), which you can use to copy those files up to S3.
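If you would rather script the upload than use a GUI, here is a rough sketch using boto3. The bucket name, key prefix, and local file pattern are placeholders; once the files are in S3, the COPY at the end can be run from SQL Workbench/J (or any SQL client) and will load every file under that prefix:

import glob
import os
import boto3   # assumes AWS credentials are already configured (env vars, ~/.aws/credentials, etc.)

BUCKET = "my-feed-bucket"   # placeholder bucket name
PREFIX = "feed/"            # placeholder key prefix

s3 = boto3.client("s3")

# Upload every feed file from the local folder to s3://my-feed-bucket/feed/
for path in glob.glob(r"F:\Data\feed\feed*.txt"):
    key = PREFIX + os.path.basename(path)
    s3.upload_file(path, BUCKET, key)
    print("Uploaded", path, "->", "s3://" + BUCKET + "/" + key)

# With the files in S3, a COPY along these lines loads all of them in one pass,
# because the key prefix acts as a wildcard:
copy_sql = """
copy feed
from 's3://my-feed-bucket/feed/'
credentials 'aws_access_key_id=<access>;aws_secret_access_key=<key>'
"""
print(copy_sql)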