I am using Pentaho CE biserver-ce-4.8.0 (stable version). I want to create a dashboard that fetches data from MongoDB, so I created a .ktr file in Data Integration that connects to MongoDB and fetches the data. I then used that .ktr file as a datasource in my CDE dashboard. Below is part of the .ktr file:
<hostname>localhost</hostname>
<port>27017</port>
<use_all_replica_members>N</use_all_replica_members>
<db_name>${db_name}</db_name>
<fields_name/>
<collection>test</collection>
<json_field_name>json</json_field_name>
<json_query/>
<auth_user/>
<auth_password>Encrypted </auth_password>
<auth_kerberos>N</auth_kerberos>
<connect_timeout/>
<socket_timeout/>
<read_preference>primary</read_preference>
<output_json>Y</output_json>
<query_is_pipeline>N</query_is_pipeline>
<execute_for_each_row>N</execute_for_each_row>
${db_name} is my parameter, and I want to pass it through the URL. When I pass db_name in the URL and read the URL parameter, I do receive its value, but my .ktr file does not resolve the parameter, so a database literally named ${db_name} gets created in MongoDB. How can I pass a parameter to a .ktr file in Pentaho CDE?
After going through the Pentaho Data Integration 4 Cookbook I found the solution to my question. I solved the problem as follows.
1> First, create a transformation using PDI and add a MongoDB Input step to it.
2> Click Edit -> Settings, select the Parameters tab, and add a parameter named db_name (see the .ktr snippet after these steps).
3> In the MongoDB Input step, set the database name to ${db_name}, set the collection name, and save the transformation file.
4> Now log in to the Pentaho BI server and create a new CDE dashboard.
5> Go to the datasource panel, select Kettle Query, and point it at the .ktr file created above as the Kettle transformation file. In Variables, set Arg to db_name and leave Value blank.
6> In the same datasource, under Parameters, set the name to db_name and the value to the database name you want to pass to the .ktr file (in my case, demo).
7> Assign the datasource to a component in the component panel, and it works fine.
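For reference, once step 2 is done the saved .ktr XML typically contains a parameters block like the sketch below (the default value and description can stay empty):
<parameters>
  <parameter>
    <name>db_name</name>
    <default_value/>
    <description/>
  </parameter>
</parameters>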
For more reference, click here.
I am working in QGIS through the DB Manager, running some spatial queries. After I create new tables with the queries, I need to export the tables as new GeoPackages.
I've tried using Export to Vector File inside the DB Manager, but I get the following error message:
Error 2 Creation of data source failed (OGR error: sqlite3_open(/Users/xxx/Documents/xxx/xxx/xxx/xxxx/xx/new_geopackage_layer.gpkg) failed: unable to open database file)
I've read a couple of posts that said I needed to create an empty GeoPackage first, then export the table and save it inside that GeoPackage, but that did not work either. When I try to save inside an existing GeoPackage,
I get an error saying:
"geopackage.gpkg already exists. Do you want to replace it? A file or
folder with the same name already exists in the folder xxx Replacing
it will overwrite its current contents."
If I choose to overwrite then I get a second error message saying:
" Error 1 Unable to create the datasource.
/Users/xxx/Documents/xxx/xxx/xxx/xxx/xxxx/new_geopackage.gpkg exists
and overwrite flag is false."
All I want is to be able to run spatial queries inside QGIS and be able to export the tables created with the queries as geopackages.
It seems that, as of now, I won't be able to do this from inside QGIS and will instead need to use the ogr2ogr command to export to any file type.
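For example, a rough ogr2ogr sketch of the kind of export I mean, assuming the DB Manager connection is a PostGIS database (the connection string, query, output path, and layer name below are placeholders):
ogr2ogr -f GPKG /path/to/new_geopackage_layer.gpkg \
  PG:"host=localhost dbname=mydb user=me password=secret" \
  -sql "SELECT * FROM my_query_table" -nln my_layer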
Any help would be really appreciated.
Thank you
I have Data Catalog tables generated by crawlers: one data source is MongoDB, and the second data source is PostgreSQL (RDS). The crawlers run successfully and the connection tests work.
I am trying to define an ETL job from MongoDB to PostgreSQL (a simple transform).
In the job I defined the source as AWS Glue Data Catalog (MongoDB) and the target as Data Catalog PostgreSQL.
When I run the job I get this error:
IllegalArgumentException: Missing collection name. Set via the 'spark.mongodb.input.uri' or 'spark.mongodb.input.collection' property
It looks like this is related to the MongoDB part. I tried setting the 'database' and 'collection' parameters in the Data Catalog tables, and it didn't help.
The script generated for the source is:
AWSGlueDataCatalog_node1653400663056 = glueContext.create_dynamic_frame.from_catalog(
    database="data-catalog-db",
    table_name="data-catalog-table",
    transformation_ctx="AWSGlueDataCatalog_node1653400663056",
)
What could be missing?
I had the same problem; just add the parameter below.
AWSGlueDataCatalog_node1653400663056 = glueContext.create_dynamic_frame.from_catalog(
    database="data-catalog-db",
    table_name="data-catalog-table",
    transformation_ctx="AWSGlueDataCatalog_node1653400663056",
    # additional_options passes the MongoDB database and collection names
    # through to the connector, which resolves the "Missing collection name" error
    additional_options={"database": "data-catalog-db",
                        "collection": "data-catalog-table"},
)
Additional parameters can be found on the AWS page
https://docs.aws.amazon.com/glue/latest/dg/connection-mongodb.html
I'm working on a project in which I have to store CSV file data in MongoDB. If the database does not exist, I have to create it using Spring Boot; if it does exist, I have to store the data in it directly.
Previously I stored all the data in the "admin" database in MongoDB.
Below is the configuration for the same. In my properties file I specified this:
spring.data.mongodb.authentication-database=admin
spring.data.mongodb.uri=mongodb://localhost:27017/admin
spring.data.mongodb.database=admin
spring.data.mongodb.repositories.enabled=true
You don't need to create the database explicitly; just replace admin with the name of the database you want, and MongoDB will create it automatically on the first write.
Like this:
spring.data.mongodb.uri=mongodb://localhost:27017/newDatabaseName
spring.data.mongodb.database=newDatabaseName
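With that in place, a minimal sketch of how rows could be saved through Spring Data (CsvRecord and CsvRecordRepository are hypothetical names; the database and collection are created by MongoDB on the first save):
// CsvRecord.java - hypothetical document mapped to a "records" collection
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;

@Document(collection = "records")
public class CsvRecord {
    @Id
    private String id;
    private String name;
    private String value;
    // getters and setters omitted for brevity
}

// CsvRecordRepository.java - saving through this repository creates the
// database and collection automatically if they do not exist yet
import org.springframework.data.mongodb.repository.MongoRepository;

public interface CsvRecordRepository extends MongoRepository<CsvRecord, String> {
}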
I have a table in a DB from which I need to export data, based on a "< current date" condition, to a CSV file using ODI 12c. Can I get the steps please?
Please follow the steps below to export data from the DB to a file using a condition.
1) Create a New Package
2) Under Toolbox, navigate to Files --> click on OdiSqlUnload, then click on the actual package area.
3) Enter the input parameters in the Properties section:
Target File: Give the path where the file needs to be created
JDBC Driver: oracle.jdbc.OracleDriver
JDBC URL: Give the JDBC URL you use to connect to the DB
User: User name for the DB you are trying to connect to.
Pwd:
File Format: Delimited
Field Separator: , (Comma)
SQL Query: Example: select * from emp where rownum < 10 (see the date-condition example after these steps)
4) Leave all the remaining parameters as is.
5) Save the package and execute the step with the desired context. If you are creating the file locally, do not use an agent during execution.
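For the "< current date" condition from the question, the SQL Query parameter could be written like the sketch below (my_table and my_date_column are placeholders; TRUNC(SYSDATE) drops the time portion, plain SYSDATE keeps it):
select * from my_table where my_date_column < TRUNC(SYSDATE)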
I'm using OrientDB 2.0.2. I'm writing a SQL script that will create a new DB, construct the schema, and then populate the DB with static initial data. I know that the database will be automatically created if I simply start loading data, and that classes will automatically be created if I start inserting records, but I want to make sure that my classes have the right inheritance and Properties and Indexes before I begin my data load.
The SQL syntax is easy - I just need to know how to run the SQL script from the command line in order to instantiate the DB, as if I were deploying a new instance of my DB at a new location, or for a new customer.
Thanks.
Assuming you installed OrientDB in the base directory ORIENTDB_HOME, go to the $ORIENTDB_HOME/bin directory. From there, execute the OrientDB SQL script with this simple command:
*nux:
$ ./console.sh myscript.osql
Windows:
> console.bat myscript.osql
myscript.osql is a simple text file containing all the SQL commands (.osql is a typical file extension for OrientDB SQL scripts).
See the documentation for details on how to create a new database. Example:
CREATE DATABASE plocal:/usr/local/orient/databases/demo/demo
or
CREATE DATABASE remote:localhost/trick root verySecretPassword
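To tie it together, a sketch of what myscript.osql might contain for the use case in the question - creating the database, then the schema (inheritance, a property, an index), then the initial data; class and property names are placeholders:
CREATE DATABASE plocal:/usr/local/orient/databases/demo/demo
CREATE CLASS Person
CREATE PROPERTY Person.name STRING
CREATE INDEX Person.name UNIQUE
CREATE CLASS Employee EXTENDS Person
CREATE PROPERTY Employee.role STRING
INSERT INTO Employee SET name = 'Alice', role = 'Engineer'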