BIRT unable to find available fields for a MongoDB datasource dataset

I'm trying to connect a MongoDB database to BIRT and create a dataset. The connection succeeds (ping succeeds), but when I try to set up the dataset by specifying a collection name, the following error comes up:
Unable to find available fields. Invalid collection name clinic.

This seems to be an issue with the bundled MongoDB Java driver.
Go to the Eclipse installation's plugins directory, delete the org.eclipse.orbit.mongodb_2.10.1.v20130422-1135.jar file, and add mongo-java-driver-2.14.3.jar there.
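To rule BIRT out, it can help to verify the replacement driver outside of Eclipse first. Below is a minimal sketch against the legacy 2.x driver API (which mongo-java-driver-2.14.3.jar still provides); the host, port, and database name are placeholders to adjust:
import com.mongodb.DB;
import com.mongodb.MongoClient;

public class MongoDriverCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port and database name; adjust to your environment.
        MongoClient client = new MongoClient("localhost", 27017);
        DB db = client.getDB("mydatabase"); // legacy 2.x driver API
        // If the driver works, the collection (e.g. "clinic") is listed here.
        for (String name : db.getCollectionNames()) {
            System.out.println(name);
        }
        client.close();
    }
}
If the collection names print here but BIRT still cannot list fields, the problem is in the plugin wiring rather than the driver itself.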

Export a PostGIS table to a GeoPackage using the DB Manager in QGIS

I am working in QGIS, running some spatial queries through the DB Manager. After I create new tables with the queries, I need to export those tables as new GeoPackages.
I've tried using "Export to Vector file" inside the DB Manager, but I get the following error message:
Error 2 Creation of data source failed (OGR error:
sqlite3_open(/Users/xxx/Documents/xxx/xxx/xxx/xxxx/xx/new_geopackage_layer.gpkg)failed:
unable to open database file)
I've read a couple of posts that said I needed to create an empty GeoPackage first and then export the table and save it inside that GeoPackage, but that did not work either. When I try to save inside an existing GeoPackage,
I get an error saying:
"geopackage.gpkg already exists. Do you want to replace it? A file or
folder with the same name already exists in the folder xxx Replacing
it will overwrite its current contents."
If I choose to overwrite then I get a second error message saying:
" Error 1 Unable to create the datasource.
/Users/xxx/Documents/xxx/xxx/xxx/xxx/xxxx/new_geopackage.gpkg exists
and overwrite flag is false."
All I want is to run spatial queries inside QGIS and export the tables created by those queries as GeoPackages.
It seems that, as of now, I won't be able to do this from inside QGIS and will instead need to use the ogr2ogr command to export to any file type.
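For reference, since ogr2ogr came up: a minimal sketch of that fallback, here shelling out to ogr2ogr from Java, with placeholder connection details, table name, and output path (assumes GDAL's ogr2ogr is on the PATH):
import java.io.IOException;

public class ExportToGpkg {
    public static void main(String[] args) throws IOException, InterruptedException {
        // All connection details, the table name, and the output path below
        // are placeholders; "-f GPKG" selects the GeoPackage driver.
        ProcessBuilder pb = new ProcessBuilder(
            "ogr2ogr",
            "-f", "GPKG",
            "/tmp/new_geopackage_layer.gpkg",
            "PG:host=localhost dbname=mydb user=me password=secret",
            "my_query_table");
        pb.inheritIO(); // show ogr2ogr's own output and errors in this console
        int exit = pb.start().waitFor();
        System.out.println("ogr2ogr exited with code " + exit);
    }
}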
Any help would be really appreciated.
Thank you

JDBC Producer in StreamSets cannot write data into MySQL

I had configured the JDBC connection in the pipeline, and when the application executes I get the following error in the logs:
"java.sql.SQLSyntaxErrorException: Table 'databaseName.aim_table' doesn't exist"
The databaseName is not what I have set.
I have tried many times, and it shows the same message that the table could not be found, each time in a different database. None of the databases that occur in sdc.log are ones I ever configured, and the correct database is never used. I want to know how it could pick the wrong database. I checked the pipeline before starting it, and validation was successful.
Do you have anything set in the Schema Name configuration for JDBC
Producer? This should be blank for MySQL, since you're setting the
database/schema name in the connect URL.
Check that your MySQL driver matches the server. In particular, using
the current version 8.0.x JDBC driver with a 5.x.x server seems to
result in this problem. Download the older 5.1.x driver (currently
5.1.46) and it should work.
This problem was indeed caused by the wrong version of the driver package. Once I found the correct driver package, the data was successfully written to the target table. To add another point: I set Schema Name to blank and defined the database name in the connect URL for MySQL.
My English is not good. Please forgive me.
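For anyone hitting the same thing: a quick way to verify which database a given driver/URL pair actually resolves to is a bare JDBC connection outside StreamSets. A minimal sketch with placeholder host, database, and credentials, using the same Connector/J 5.1.x jar and connect URL as the pipeline:
import java.sql.Connection;
import java.sql.DriverManager;

public class JdbcUrlCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; the database name rides in the URL,
        // mirroring the advice above to leave Schema Name blank for MySQL.
        String url = "jdbc:mysql://localhost:3306/databaseName";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            // Connector/J reports the resolved database as the catalog.
            System.out.println("Resolved database: " + conn.getCatalog());
        }
    }
}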

Unable to connect to a database in MongoDB from Power BI

Getting the following error:
DataSource.Error: ODBC: ERROR [HY000] [MySQL][ODBC 1.0(a) Driver][mysqld-5.7.12 mongosqld v2.6.0-beta3]Unknown database ''
Details:
DataSourceKind=Odbc
DataSourcePath=dsn=DigitalTeam
OdbcErrors=Table
As shown in the image, the error arises as I am trying to get data from the Reports collection in the test database. How can I solve this problem?
Per the comment by CodeCaster, the most likely reason you get a successful connection but no actual data is that you did not select a database from the dropdown when setting up the ODBC driver.
After you set up your connection and port, go to the last option, Database:, and select a database from the dropdown list of databases available in your environment.
Then, when setting up the Excel sheet to use the ODBC data via the Data tab, choose your collection, and documents from that collection should load.
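If the dropdown stays empty or the error persists, it can help to check what mongosqld actually exposes, independently of Power BI. mongosqld speaks the MySQL wire protocol (the error string even shows mysqld-5.7.12 mongosqld v2.6.0-beta3), so any MySQL client can query it. A minimal sketch with the MySQL JDBC driver; host, port (3307 is mongosqld's default), database name, and credentials are assumptions to adjust:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class MongosqldCheck {
    public static void main(String[] args) throws Exception {
        // Assumed host/port/credentials; note the database name in the URL,
        // which is exactly what the "Unknown database ''" error says is missing.
        String url = "jdbc:mysql://localhost:3307/test";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SHOW TABLES")) {
            while (rs.next()) {
                // Collections (e.g. Reports) should appear here as tables.
                System.out.println(rs.getString(1));
            }
        }
    }
}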

Linking Access tables into a PostgreSQL database using a foreign data wrapper

I'm new to Postgres, so this problem is probably a relatively easy one for someone else; however, I have spent many frustrating hours trying to figure out the solution. I have an Access database of metadata that must be kept updated for sending records to other groups. I also have a database using PostgreSQL and pgAdmin which has these same metadata tables. Currently, the tables in the Postgres database get updated manually by exporting the Access tables as Excel files and then importing them into the SQL tables. It's not the most efficient process, and it could lead to errors in the SQL database if someone forgets to check that they are using the most recent data from Access before running any queries. So I would like to integrate some of the tables from my Access database with my Postgres database.
Originally I tried just installing drivers to export the Access tables directly to Postgres, which worked, but not in the way I wanted, since it just brings in a static table that I would still need to update manually. From my understanding, I can instead create a server connection in Postgres to Access, which would then bring in updated data using a foreign data wrapper.
I tried to use ogr_fdw.
CREATE EXTENSION ogr_fdw;
When I try:
CREATE SERVER metadata
FOREIGN DATA WRAPPER ogr_fdw
OPTIONS (
datasource 'H:\Databases\20170712.accdb',
format 'ODBC' );
I receive: ERROR: unable to connect to data source "H:\Databases\20170712.accdb"
SQL state: HV00D
When I try:
CREATE SERVER metadata
FOREIGN DATA WRAPPER ogr_fdw
OPTIONS (
datasource 'H:\Databases\20170712.accdb',
format 'ACCDB' );
I receive: ERROR: unable to find format "ACCDB"
HINT: See the formats list at http://www.gdal.org/ogr_formats.html.
I also tried MDB and received the same error. MDB is the format name given by that website, but it says the MDB driver needs a JDK/JRE to compile, and I'm not really sure whether that's another type of driver I would need or what it is.
When I try:
CREATE SERVER metadata
FOREIGN DATA WRAPPER ogr_fdw
OPTIONS (
datasource 'H:\Databases\20170712.mdb',
format 'ODBC' );
I receive: ERROR: unable to connect to data source "H:\Databases\20170712.mdb"
SQL state: HV00D
Hint: Unable to initialize ODBC connection to DSN for DRIVER=Microsoft Access Driver (*.mdb);DBQ=H:\Databases\20170712.mdb,
[Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified
However, after looking at the GitHub help page for ogr_fdw, I thought it didn't need ODBC or special drivers: https://github.com/pramsey/pgsql-ogr-fdw/blob/master/FAQ.md.
A lot of this is probably due to my limited knowledge of the terminology as I read through this material. Also, my Access database is an .accdb file, but since that wasn't working, I also tried .mdb and ODBC as the "format". If anyone has any suggestions, I would greatly appreciate it.
Thanks!
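One way to narrow this down: ogr_fdw delegates all format handling to GDAL/OGR, so it is worth first checking whether OGR alone can open the source, outside of Postgres. The hint in the last error shows GDAL falling back to a DSN-less "Microsoft Access Driver (*.mdb)" connection string; that driver only handles .mdb files, and opening an .accdb generally needs the newer "Microsoft Access Driver (*.mdb, *.accdb)" from the Access Database Engine redistributable, matching the bitness of the calling process, usually via a system DSN pointing at the file. OGR's ODBC datasource string then takes the form 'ODBC:DSN_NAME' (or 'ODBC:user/password@DSN_NAME'). Below is a minimal sketch using the GDAL Java bindings; the DSN name is a placeholder, not something from the question:
import org.gdal.ogr.DataSource;
import org.gdal.ogr.ogr;

public class OgrOdbcCheck {
    public static void main(String[] args) {
        ogr.RegisterAll();
        // "metadata_dsn" is a placeholder system DSN created with the
        // "Microsoft Access Driver (*.mdb, *.accdb)" and pointed at the .accdb.
        DataSource ds = ogr.Open("ODBC:metadata_dsn");
        if (ds == null) {
            System.out.println("OGR cannot open the ODBC datasource either;");
            System.out.println("fix the DSN/driver before involving ogr_fdw.");
            return;
        }
        // Each Access table should show up as an OGR layer.
        System.out.println("Layer count: " + ds.GetLayerCount());
        ds.delete();
    }
}
Once a check like this opens the source and reports layers, the same 'ODBC:...' string should work as the datasource option in CREATE SERVER, keeping format 'ODBC'.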

Pentaho PDI - Reading data from MongoDB

I have installed Pentaho Data Integration (ce-5.0.1.A-stable) on my machine and am trying to retrieve information from MongoDB using PDI. I have created a transformation with a MongoDB Input step. Now, when I try to configure my MongoDB connection details, I can't find any explicit connection type for MongoDB. Could someone please advise on how to configure a MongoDB datasource in Pentaho?
I have referred to most of the Pentaho-MongoDB docs, but none of the solutions work.
I have also tried performing the steps below, as described on the Pentaho official site, but I still can't find any connection type for MongoDB:
1- Move the following folder out of the data-integration folder structure:
data-integration/plugins/pentaho-big-data-plugin
2- Move the following files out of the data-integration folder structure if they exist:
data-integration/libext/JDBC/pentaho-hadoop-hive-jdbc-shim-1.3.0.jar
data-integration/libext/JDBC/pentaho-hadoop-hive-jdbc-shim-1.3.1.jar
data-integration/libext/JDBC/pentaho-hadoop-hive-jdbc-shim-1.3.2.jar
3- Unzip the file pentaho-big-data-plugin-shimtastic-1.3.3.1.zip from the data-integration/plugins folder.
4- Optionally, remove irrelevant folders under data-integration/plugins/pentaho-big-data-plugin/hadoop-configurations.
5- Copy the file pentaho-hadoop-hive-jdbc-shim-1.3.3.jar into the folder
data-integration/libext/JDBC
6- Unzip the file pentaho-instaview-templates-shimtastic-1.3.3.zip to the following directory to
data-integration/plugins/spoon/agile-bi/platform/pentaho-solutions/system/instaview/templates/Big Data
Any help is really appreciated!
Pentaho does not have a specific database connection type for MongoDB, so you will not find it in the Database Connection viewer. The way to connect to MongoDB is to use the MongoDB Input step in PDI. There you will find the connection details section (configure your credentials there). You can then connect a JSON Input step to parse the results coming out of the MongoDB Input step. Check the screenshot below:
You can also read about it on the Pentaho Wiki here. The documentation seems slightly old, but it describes the exact process.
As a side note, you don't need the Big Data shims to connect to MongoDB. It seems you have configured the Hadoop/Hive shims; they are not required here.
Hope it helps :)